Ensuring Responsible Internet Use with Artificial Intelligence

A new generation of web monitoring and filtering software uses machine learning to improve student safety, cut down on lost instructional time, and help teach responsible Internet use, while cutting the time administrators spend reviewing flagged activity by more than half.

Putting a digital device in every student’s hands opens a world of possibilities for learning, but educators have to ensure that students are using the Internet safely and appropriately. This means monitoring students’ web use, blocking access to inappropriate material, and following up with students when they break the rules or engage in dangerous online activity.

Traditional approaches to Internet monitoring and filtering have been problematic. For instance, the ever-changing nature of the Internet makes filtering websites by their domain name or URL largely ineffective. And keyword flagging can’t account for the context in which a word or phrase appears, which results in a large number of false positives.
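
To see why context matters, consider a naive keyword filter. The short Python sketch below is a hypothetical illustration of the traditional approach, not any vendor’s actual algorithm; it flags a legitimate health lesson simply because the page contains a blocked term:

    # Naive keyword filter: flags any page containing a blocked term,
    # with no regard for context. Hypothetical illustration only.
    BLOCKED_TERMS = {"breast", "shoot", "drugs"}

    def keyword_flag(page_text: str) -> bool:
        words = set(page_text.lower().split())
        return bool(words & BLOCKED_TERMS)

    # A legitimate health lesson triggers a false positive:
    lesson = "Early detection is key in treating breast cancer."
    print(keyword_flag(lesson))  # True, even though the page is educational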

However, a new generation of software that uses artificial intelligence (AI) could help solve these problems. Instead of looking for certain keywords or blocking a website based on its URL, the software analyzes the entire content of a page in real time to determine what it’s about and whether it’s appropriate for the student.

Here’s how it works: Developers show the software thousands of examples of web pages that are appropriate for students at different age levels, as well as pages that are not, and the software “learns” to distinguish between them by recognizing patterns in the words and other content on each page.
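
In machine-learning terms, this is supervised text classification. The sketch below shows the general technique using scikit-learn; the tiny training set, labels, and model choice are assumptions made for illustration, not GoGuardian’s actual implementation, which would be trained on thousands of real pages:

    # Minimal supervised text classifier: learns to separate "appropriate"
    # from "inappropriate" pages from labeled examples. Illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny stand-in training set (hypothetical labels):
    pages = [
        "photosynthesis lesson plan for fifth grade science",
        "interactive fractions practice for elementary students",
        "explicit adult content free streaming",
        "buy prescription drugs online no questions asked",
    ]
    labels = ["appropriate", "appropriate", "inappropriate", "inappropriate"]

    # TF-IDF turns page text into word-frequency features; the classifier
    # learns which word patterns predict each label.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(pages, labels)

    print(model.predict(["grade 4 reading comprehension worksheets"]))
    # likely ['appropriate'], given the training examples above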

What’s more, the software continually improves as it receives feedback from users, becoming even smarter over time.
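
That feedback loop can be modeled as online (incremental) learning: each time a reviewer confirms or corrects a flag, the classifier is updated. The sketch below, again a simplified assumption rather than a documented product API, uses scikit-learn’s partial_fit for this purpose:

    # Online-learning sketch: fold human reviews back into the model so
    # its accuracy improves over time. Hypothetical simplification.
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    vectorizer = HashingVectorizer(n_features=2**16)  # stateless, needs no refitting
    classifier = SGDClassifier(loss="log_loss")
    CLASSES = ["appropriate", "inappropriate"]

    def record_review(page_text: str, reviewer_label: str) -> None:
        """Update the model with one confirmed or corrected flag."""
        X = vectorizer.transform([page_text])
        classifier.partial_fit(X, [reviewer_label], classes=CLASSES)

    # An administrator marks a flagged page as a false positive:
    record_review("breast cancer awareness resources for health class",
                  "appropriate")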

Not only can AI improve web filtering, but the automatic notifications that K-12 leaders receive when students try to access explicit web content—or post inappropriate content to a social media website, or otherwise misuse the Internet—also become more accurate. This helps administrators enforce their school or district Acceptable Use Policy (AUP).
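
How might those notifications work under the hood? A common pattern, sketched below with assumed names and an assumed threshold rather than any vendor’s documented behavior, is to alert an administrator only when the classifier’s confidence crosses a threshold, which is how fewer false positives translate into fewer, more actionable alerts:

    # Confidence-threshold alerting sketch. Names and threshold are
    # assumptions for illustration, not a real product API.
    ALERT_THRESHOLD = 0.95

    def send_alert(message: str) -> None:
        print("ALERT:", message)  # stand-in for an email or dashboard notification

    def maybe_alert(student_id: str, page_url: str, p_inappropriate: float) -> None:
        """Notify an administrator only for high-confidence flags."""
        if p_inappropriate >= ALERT_THRESHOLD:
            send_alert(f"Student {student_id} tried to access {page_url} "
                       f"(confidence {p_inappropriate:.0%})")

    maybe_alert("S1042", "http://example.com/blocked-page", 0.98)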

Sorting through a list of flagged activity to determine which instances are harmless and which need further action can be onerous, especially when there are many false positives to account for. Busy administrators don’t have time to engage in this process.

But if AI can reduce the number of false positives and improve the accuracy of these alerts, “this results in a huge increase in productivity,” says Tyler Shaddix, head of products and innovation for GoGuardian. “It actually makes it possible for administrators to manage their notifications. And every time they check these alerts, they have actionable activity they can use to make real changes in their district.”

Feedback from administrators suggests the use of AI technology can reduce the number of false positives by a factor of 100 when compared with traditional keyword flagging algorithms.

“Having an AI-based solution that can analyze the contents of a web page before our students even see it has been one of the best things we’ve done to ensure that our kids are getting only age-appropriate material,” says Brian Seymour, director of instructional technology for Pickerington Local School District in Ohio.

Seymour’s district of 10,000 students has gone fully 1:1 with digital devices. Students in pre-kindergarten through second grade use iPads that are stored in carts after school. Students in grades 3 and 4 use Chromebooks that stay at school, and students in grades 5-12 receive Chromebooks they can take home at night.

“We needed a way to assure parents that if students were taking the devices home, their Internet access would be safe and educational even from home,” he says. With GoGuardian’s intelligent Internet monitoring and filtering software, “we can ensure that students are using their Chromebooks for learning—which has relieved parents’ fears.”

Before using an AI-based solution, “I would probably get 100 emails a day (reporting inappropriate online behavior), and maybe about five were something I needed to follow up on,” Seymour notes. “Now, I very rarely get any false positives—so I know that if I do get a notification, I need to look at it and take care of it immediately. It has made a very big difference in my ability to take action and shape students’ online behavior.”

Additional Resources on AI

How Artificial Intelligence Supports Safer Learning

A new method has emerged that uses artificial intelligence to analyze the actual content on web pages in real time, allowing students to see only material that educators deem safe and productive. Content-based filtering marks a different approach to keeping students safe online—one that’s proving to be both more accurate and easier for educators to manage.

Click here to download the full white paper.