Elon Musk isn’t stopping Grok, a chatbot developed by his artificial intelligence company xAI, from generating sexualized images of women. Last week, reports emerged that X’s image-generation tool had been used to create sexualized images of children, and that Grok has produced potentially thousands of nonconsensual images of women in “undressed” and “bikini” photos.
According to a WIRED review of publicly posted chatbot results, Grok creates photos of women in bikinis or underwear every few seconds in response to user prompts on X. An analysis of the posts shows that on Tuesday alone, Grok published at least 90 photos showing women in swimsuits or at various levels of undress.
The images do not contain nudity, but they show Musk’s chatbot “stripping” clothes from photos posted on X by other users. Often, in an attempt to bypass Grok’s guardrails, users ask, not always successfully, for Grok to edit women’s photos so that they are shown wearing “thong bikinis” or “see-through bikinis.”
While malicious AI image-generation technology has been used to digitally harass and exploit women for years – the results are often called deepfakes and are created with “nudify” software – the continued use of Grok to create huge numbers of nonconsensual images appears to be the most widespread case of such abuse to date. Unlike dedicated “nudify” or stripping tools, Grok doesn’t charge users to generate images, produces results in seconds, and is available to millions of people on X – all of which could help normalize the creation of intimate images of women without their consent.
“When a company offers generative AI tools on its platform, it has a responsibility to minimize the risk of image-based abuse,” says Sloan Thompson, director of training and education at EndTAB, an organization dedicated to combating technology-enabled abuse. “What is disturbing is that X has done the opposite. It has implemented AI-powered image exploitation directly on a mainstream platform, making sexual violence easier and more scalable.”
Grok’s creation of sexualized images began gaining popularity on X late last year, although the system’s ability to create such images had been known for months. In recent days, photos of social media influencers, celebrities, and politicians have become targets of X users, who can reply to a post from another account and ask Grok to alter the shared photo.
Women who posted photos of themselves have received replies from accounts asking Grok to turn the photo into a “bikini” image. In one example, many X users asked Grok to alter an image of the Swedish deputy prime minister to show her in a bikini. Two British government ministers were also “stripped” to their bikinis, according to reports.
An analyst who has been tracking explicit deepfakes for years, and who asked to remain anonymous for privacy reasons, says Grok has likely become one of the largest platforms hosting malicious deepfake images. “It’s completely mainstream,” the researcher says. “This is not an obscure group [creating images], it’s literally everyone, from all walks of life. People post from their own accounts. Zero worries.”
