On October 7, a TikTok account called @fujitiva48 posed a provocative question alongside its latest video. “What do you think about this new toy for toddlers?” it asked the more than 2,000 viewers who came across what appeared to be a parody of a TV ad. The answer was clear. “Hey so this isn’t funny,” one person wrote. “Whoever did this should be investigated.”
It’s easy to see why the video sparked such a strong reaction. The fake ad opens on a photorealistic young girl holding a toy – pink, shiny, with a bumblebee decorating the handle. It’s a pen, an adult male narrator explains, as the girl and two others scribble on paper. But the floral design, the fact that it buzzes, and the product name – Vibro Rose – all look and sound unmistakably like a sex toy. The “add yours” button – a TikTok feature that encourages people to share the video on their own channels – bearing the words “I’m using my rose toy” removes even the slightest doubt. (WIRED reached out to the @fujitiva48 account for comment but received no response.)
The unsavory clip was created using Sora 2, OpenAI’s latest video generator, which was initially released invite-only in the US on September 30. Within a week, videos like the Vibro Rose ad had migrated from Sora onto TikTok’s For You page. Other fake ads were even more explicit: WIRED found several accounts posting similar Sora 2-generated videos showing water toys shaped like roses or mushrooms, and cake decorators squirting “sticky milk,” “white foam,” or “goo” onto realistic images of children.
In many countries, the above would constitute grounds for investigation if these were real children rather than digital creations. But regulations around AI-generated fetish content involving minors remain unclear. New 2025 data from the UK’s Internet Watch Foundation shows that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in a year, from 199 between January and October 2024 to 426 in the same period in 2025. Fifty-six percent of that content falls into Category A – the most severe category in the UK, covering penetrative sexual activity, sexual activity with animals, or sadism. Ninety-four percent of the illegal AI images tracked by the IWF were of girls. (Sora does not appear to be generating Category A content.)
“We often see likenesses of real children being manipulated to create nude or sexual images, and overwhelmingly we see artificial intelligence being used to create images of girls. This is yet another way girls are targeted online,” Kerry Smith, IWF’s CEO, tells WIRED.
The influx of harmful AI-generated material has prompted the UK to introduce a new amendment to the Crime and Police Act that would allow “authorized testers” to check whether AI tools can generate CSAM. As reported by the BBC, the amendment would give testers legal protections when probing models for certain imagery, including extreme pornography and non-consensual intimate images. In the US, 45 states have passed laws criminalizing AI-generated CSAM, most of them in the past two years, as AI generators continue to advance.
