Thursday, April 23, 2026

The deepfake nudes crisis in schools is much worse than you thought


Nevertheless, there are clear patterns. In almost all cases, teenage boys are allegedly responsible for creating the photos and videos. They are often shared with classmates on social media apps or via instant messaging. And they are extremely harmful to victims. “I worry that every time they see me, they see these photos,” said one of the victims in Iowa earlier this year. “She was crying and not eating,” said another family.

In many cases, victims do not want to go to school or face the people who created explicit photos or videos of them. “She feels hopeless because she knows these photos will probably end up on the Internet and reach pedophiles,” say attorney Shane Vogt and three Yale Law School students, Catharine Forceful, Tony Sjodin, and Suzanne Castillo, who are representing an anonymous New Jersey teenager in legal action against nudification services. “She is deeply distressed knowing that these photos are out there somewhere and that she will need to monitor the Internet for the rest of her life to prevent them from spreading.”

In South Korea and Australia, schools have given students the option to keep their photos out of yearbooks, or have stopped posting student photos on their official social media accounts, citing concerns about potential deepfake abuse. “There have been cases around the world where school images have been taken from public social media pages, altered using artificial intelligence and turned into harmful fake content,” one school in Australia said. “Photos will instead show side profiles, silhouettes, backs of heads, distant group shots, creative filters, or approved stock photos.”

Sexual deepfakes created using artificial intelligence have existed since late 2017. However, as artificial intelligence systems have proliferated and become more powerful, they have given rise to a murky ecosystem of “nudification” or “undressing” technologies. Dozens of apps, bots, and websites allow anyone to create sexual images and videos with just a few clicks, often without technical knowledge.

“What AI changes is scale, speed, and accessibility,” says Siddharth Pillai, co-founder and director of the RATI Foundation, a Mumbai-based organization that aims to prevent violence against women and children. “The technical barrier has dropped significantly, which means more people, including teenagers, can create more convincing results with minimal effort. As with much of the harm caused by artificial intelligence, the result is an overabundance of content.”

Amanda Goharian, director of research and insights at the child safety group Thorn, says her research shows that teens create deepfake nudes for a variety of reasons, ranging from sexual motivation and curiosity to revenge, and even teens daring each other to create such images. Studies of adults who have created deepfake sexual abuse material similarly point to many different motivations behind the images. “The goal is not always sexual satisfaction,” says Pillai. “It is increasingly about humiliation, denigration, and social control.”

“It’s not just about technology,” says Tanya Horeck, a professor of feminist media studies at Anglia Ruskin University and a researcher of gender-based violence who has studied, among other things, sexualized deepfakes in British schools. “It’s about long-standing gender dynamics that facilitate these crimes.”
