Open the website of one explicit deepfake generator and a menu of horrors appears. With just a few clicks, you can turn a single photo into an eight-second video clip depicting women in realistic, graphic sexual situations. "Transform any photo into a nude version with our advanced artificial intelligence technology," the site's text reads.
The potential for abuse is vast. Among the 65 "template" videos available on the site are a number of "strip" videos in which the women featured remove their clothes, but there are also crude video scenes titled "Deepthroat Fuck Machine" and various "semen" videos. Each video costs a small fee to generate; adding AI-generated sound costs extra.
The site, which WIRED is not naming in order to limit further exposure, includes warnings saying that people should only upload photos of those who have consented to having their images processed with AI. It is unclear whether any controls are in place to enforce this.
Grok, the chatbot created by Elon Musk's company xAI, was used to create thousands of nonconsensual "strip" or bikini photos, further industrializing and normalizing digital sexual harassment. But Grok is only the most visible example, and far from the worst. For years, a deepfake ecosystem comprising dozens of websites, bots, and apps has been growing, making it easier than ever to automate image-based sexual abuse, including the creation of child sexual abuse material (CSAM). This "nudification" ecosystem, and the harm it does to women and girls, is likely more sophisticated than many people realize.
"It's no longer a very primitive synthetic strip," says Henry Ajder, a deepfakes expert who has been tracking the technology for more than half a decade. "We're talking about a much greater degree of realism in what's actually being generated, but also a much broader range of functionality." Collectively, these services likely make millions of dollars a year. "It's a social plague and one of the worst, darkest parts of the AI revolution and synthetic media revolution that we're seeing," he says.
Over the past year, WIRED has tracked how explicit deepfake services have introduced new features and rapidly expanded their ability to create harmful videos. Image-to-video models now typically need only one photo to generate a short clip. A WIRED review of more than 50 deepfake websites, which likely receive millions of views per month, shows that almost all of them now offer explicit, high-quality video generation and often list dozens of sexual scenarios in which women can be depicted.
Meanwhile, on Telegram, dozens of "nudify" channels and bots regularly roll out new features and software updates, such as additional sexual positions and scenarios. Last June, for example, one such service promoted a "sex mode," advertising it alongside the message: "Try different clothes, your favorite poses, ages and other settings." Another posted that "more styles" of images and videos would be coming soon, and that users would be able to "create exactly what you imagine with your own descriptions," using custom prompts to the underlying AI systems.
"It's not just like, 'You want to undress someone.' It's like, 'Here are all these different versions of fantasy.' These are different poses. It's about different sexual positions," says independent analyst Santiago Lakatos, who has worked with the publication Indicator to examine how "nudify" services often exploit the infrastructure of large tech companies and have likely made substantial money in the process. "There are versions where you can make someone [appear] pregnant," says Lakatos.
A WIRED review found that more than 1.4 million accounts were registered with 39 deepfake bots and channels on Telegram. After WIRED asked Telegram about the services, the company removed at least 32 of the deepfake tools. "Non-consensual pornography, including deepfakes and the tools used to create them, is strictly prohibited by Telegram's terms of service," says a Telegram spokesperson, adding that the company removes such content when detected and last year removed 44 million pieces of content that violated its rules.
