AI Deepfake Explicit Images of Students: What Can School Leaders Do?


Consider this scenario:

A teacher notices a student in homeroom seems off: they are keeping to themselves and appear distressed. The teacher approaches to say hello and check in. The student quietly tells the teacher that a small group of friends dared them last night to create an AI-generated nude image of themselves. They did it, and the group had a laugh. This morning, they have been getting texts from classmates outside that original group saying things such as, “You look hot,” “I had no idea you were like that ;)” and “Send me more.” The student is distraught. This was meant to be a joke among friends, and now it is outside their control.

While educators are already navigating both the benefits and challenges of Generative AI (GenAI) in the classroom, it is equally critical to recognize that these tools can also be used to create explicit images of students. An estimated 88% of the explicit images of minors that are shared and misused online are youth-created. We do not yet know how many of those youth-created images are deepfakes or how many were made using GenAI, but given the exponential growth of GenAI over the past three years, the share is likely larger than many assume.

One common misconception is that deepfake explicit images are somehow less harmful than real explicit photos, but that is not true. According to the American Academy of Pediatrics: “If a child is a victim of AI-generated, image-based sexual abuse, they may experience humiliation, shame, anger, violation, and self-blame.” The Academy adds: “If deepfakes are passed around a school community or peer groups, the victim may be bullied, teased, and harassed; trauma is amplified each time the content is shared.”

These incidents can have a lasting, traumatic impact on a student’s life that goes well beyond their school years.

How does GenAI learn to create explicit images?

Large language models (LLMs) are the foundation on which GenAI operates and are trained on huge amounts of data using “self-supervised learning” techniques, according to IBM. Essentially, GenAI learns, and continues to learn, from data, including text and images, input by millions of users. Therefore, if GenAI is able to create an explicit image of a child, it has learned from images of real victims of child sexual abuse material (CSAM).

Put another way, as the National Center for Missing and Exploited Children says, “GAI [GenAI] CSAM is CSAM.”

Educators are likely already familiar with sexting (the exchange of explicit images via digital messaging) and sextortion (threatening to share such images to coerce a victim), given the impact both have had on their schools and students. Now, with the rise of GenAI, the ability to fabricate explicit images is affecting students as well.

Schools should feel empowered to put protocols in place for potential deepfake explicit imagery, both to protect students and to support them if they become victims. This is true even if the images are produced off school grounds, outside school hours, or on devices not owned by the school. Students are better able to learn and participate when school is a safe environment and they feel supported.

Educators, both administrators and teachers, care about the well-being of their adolescent students. Research-based, effective prevention programming, paired with clear protocols for handling incidents involving deepfake explicit images, can help them act on that care.

How schools can deliver effective prevention programming

Traditionally, prevention programming takes the form of an assembly at which school-age children are warned against creating social media accounts or told to keep all accounts completely private.

Following this model, schools might assume they should restrict or ban the use of GenAI. In reality, such fear-based messages can cause a child who later becomes a victim of image-based abuse to feel shame, and may even prevent them from reporting the harm so they can get help.

Research shows that a proactive and victim-centered approach is more effective. This means the goal of programming is to ensure students’ voices are heard, they are engaged throughout the program, and they know how to seek support.

The Online Child Exploitation Prevention Initiative (OCEPI) recently published a research-based guide for effective prevention programming that focuses on behavioral goals, such as using social media for good and knowing how to get help when encountering something frightening or harmful online.

Schools can use the 10 best practices in this guide when partnering with parent groups, school resource officers, and community organizations to collaboratively create effective and engaging prevention programming for the children and teens they serve. Prevention programming can reduce future instances of deepfake explicit images, but schools should also be prepared with a protocol for when an incident does happen.

What school protocols can help manage deepfakes when they happen?

Who is OCEPI?

The Online Child Exploitation Prevention Initiative (OCEPI), established in 2023, aims to “establish a technology safety partnership with an ongoing goal of working collaboratively to achieve a consistent and effective global prevention message.” As the most comprehensive and collaborative online safety group in the U.S., OCEPI brings together experts – including federal, state, and local law enforcement, ICAC Task Force members, researchers, educators, prevention specialists, training partners, and child protection organizations – all committed to one shared goal: keeping children safe online.

While GenAI-created explicit deepfakes can feel like an overwhelming problem, especially when school leaders and teachers are already managing the sweeping changes to teaching and learning that GenAI has brought, there are simple, actionable steps available.

In June 2025, OCEPI published Guidance for School-Based Professionals and School Leaders. One approach that has already worked well for some school leaders is to send the guide to the school district’s legal counsel along with copies of the student and teacher handbooks, asking for updated language that informs the community and shapes school protocols.

From there, leaders should share these updates with school administrators, school counselors, student support specialists, teachers, and parents so that all the adults who care for these students are aware of the protocols and able to support the effort. Leaders should also encourage their teams and the broader school community to read the OCEPI guide so they, too, understand the issue and the organizations they can connect with.

As education leaders, we are in a pivotal moment to act. By addressing the risks of GenAI now, while the technology is still emerging, we can build proactive systems that protect students and prevent harm in an increasingly digital world, rather than facing greater consequences down the line.


Stephanie Jones

Stephanie Jones is Global Prevention and Education Specialist for A21.org.

Kerry Gallagher is Assistant Principal for Teaching and Learning at St. John’s Prep in Danvers, MA, and Education Director at ConnectSafely.