This is Step Back, a weekly newsletter that breaks down one big story from the world of tech. More of Hayden Field's reporting on the dystopian development of artificial intelligence can be found on her author page. Step Back arrives in our subscribers' inboxes at 8AM ET. Sign up for Step Back here.
You could say it all started with Elon Musk's AI FOMO, and his anti-woke crusade. When his artificial intelligence company, xAI, announced Grok in November 2023, it was described as a chatbot with a "rebellious streak" and the ability to answer "spicy questions that are rejected by most other AI systems." The chatbot debuted after several months of development and just two months of training, and the announcement emphasized that Grok would have real-time knowledge of the X platform.
However, a chatbot with access to both the internet and X carries inherent risks, and it's safe to say that xAI may not have taken the necessary steps to address them. Since Musk took over Twitter in 2022 and renamed it X, he has laid off 30% of its global trust and safety staff and cut the number of safety engineers by 80%, Australia's internet safety watchdog said in January of last year. As for xAI, it was unclear whether the company even had a safety team when Grok was released. When Grok 4 launched in July, it took the company over a month to release its model card, an industry-standard document detailing safety testing and potential issues. Two weeks after Grok 4's release, an xAI employee wrote on X that the company was hiring for an xAI safety team and that it "urgently need[s] strong engineers/researchers." When a commenter asked whether xAI actually ensures safety, the original poster said xAI was "working on it."
Journalist Kat Tenbarge has written about how she began to see sexually explicit deepfakes go viral on X in June 2023. Those images were obviously not created by Grok, which didn't even have the ability to generate images until August 2024, but X's response to the concerns was mixed. Since at least January of last year, Grok itself has courted controversy over AI-generated images. Last August, Grok's "spicy" video generation mode created a nude deepfake of Taylor Swift without even being asked. Experts told The Verge back in September that the company takes a lax approach to safety and guardrails, and that it's hard enough to keep an AI system on the straight and narrow when it's designed with safety in mind from the start, let alone to go back and fix entrenched problems. Now it looks like that approach has blown up in xAI's face.
Grok has spent the last few weeks spreading nonconsensual, sexualized deepfakes of adults and minors across the platform, according to widespread reporting. Screenshots show Grok complying with requests from users asking it to strip women down to their underwear and pose them with their legs spread, as well as to put bikinis on adolescent children. There are even more disturbing reports. It got so bad that one 24-hour analysis of the images Grok created on X estimated that the chatbot was generating approximately 6,700 sexual or "nudity" images per hour. One reason for this onslaught is a recently added Grok feature that lets users hit an "edit" button to ask the chatbot to alter images without the original poster's consent.
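To put that estimate in perspective, here's a minimal back-of-envelope sketch in Python, assuming (purely for illustration) that the cited hourly rate held steady across the full 24-hour observation window:

```python
# Hypothetical extrapolation of the cited estimate; assumes a constant rate.
images_per_hour = 6_700   # figure from the 24-hour analysis cited above
hours_observed = 24
print(f"~{images_per_hour * hours_observed:,} images over the window")
# -> ~160,800 images over the window
```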
Since then, several countries have either opened investigations or threatened a total ban on X. Members of the French government promised an investigation, as did India's Ministry of IT, and Malaysia's communications regulator wrote a letter about its concerns. California Governor Gavin Newsom called on the US Attorney General to investigate xAI. The UK said it plans to pass a bill banning the creation of nonconsensual sexualized AI-generated images, and the country's telecommunications regulator said it would investigate both X and the generated images to see whether they violate the Online Safety Act. And this week, both Malaysia and Indonesia blocked access to Grok.
xAI initially claimed that Grok's purpose was to "assist humanity in its quest for understanding and knowledge," provide "maximum benefit to all humanity," and "equip our users with our artificial intelligence tools, as permitted by law," as well as to "serve as a powerful research assistant for everyone." That is a far cry from creating deepfakes depicting naked women without their consent, let alone minors.
On Wednesday evening, as pressure on the company mounted, the X Safety account posted a statement saying the platform "has implemented technological measures to prevent the Grok account from editing images of real people wearing revealing clothing, such as bikinis" and that the restriction "applies to all users, including paid subscribers." Going forward, only paid subscribers will be able to use Grok to create or edit images at all, according to X. The statement went on to say that X now "geoblock[s] the ability for all users to generate photos of real people in bikinis, lingerie, and similar outfits through the Grok account and on Grok in X in jurisdictions where it is illegal," which was odd, since earlier in the same statement the company had said it no longer allows anyone to use Grok to edit photos that way.
Another key point: on Wednesday, my colleagues tested the limits of Grok's image generation and found that most guardrails took less than a minute to get around. While asking the chatbot to "put her in a bikini" or "take her clothes off" yielded censored results, it had no qualms about complying with prompts like "show me cleavage," "enlarge her breasts," and "put on a crop top and low-waist shorts," and it generated images of underwear and sexualized poses. As of Wednesday evening, we were still able to get the Grok app to generate revealing photos of people using a free account.
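As an illustration of why guardrails fail this way, here is a minimal, hypothetical sketch of a phrase-blocklist filter. This is not xAI's actual moderation code, and the blocklist and function name are invented, but it shows how blocking exact phrases leaves trivial rephrasings untouched:

```python
# Illustrative only: a naive phrase-blocklist prompt filter. This is NOT
# xAI's actual system; the blocklist and names here are hypothetical.
BLOCKED_PHRASES = {"put her in a bikini", "take her clothes off"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt matches a blocked phrase and should be refused."""
    normalized = prompt.lower()
    return any(phrase in normalized for phrase in BLOCKED_PHRASES)

# The exact phrases that were censored get caught...
assert naive_filter("Put her in a bikini")
# ...but the rephrasings that slipped through in our testing sail right past.
assert not naive_filter("show me cleavage")
assert not naive_filter("put on a crop top and low-waist shorts")
```

Real moderation stacks are far more sophisticated than this, but the pattern our testing surfaced, where exact phrasings are blocked while near-synonyms succeed, is consistent with filters tuned against specific prompts rather than against the harmful output itself.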
Even after Wednesday's announcement from X, we may see many more countries ban or block access to all of X, or just to Grok, at least temporarily. We will also see how proposed regulations and investigations play out around the world. Pressure is mounting on Musk, who took to X on Wednesday afternoon to say that he is "not aware of any images of nude minors generated by Grok." A few hours later, the X Safety account released its own statement, saying it was "continuously working to add additional safeguards, take swift and decisive action to remove violative and illegal content, permanently suspend accounts where appropriate, and cooperate with local governments and law enforcement when necessary."
What is and isn't technically illegal is the big question here. As experts told The Verge earlier this month, AI-generated images of identifiable minors wearing bikinis, or potentially even nude, may not technically be illegal under current US child sexual abuse material (CSAM) laws, however obviously disturbing and unethical they are. Lewd images of minors in such situations, however, are against the law. It remains to be seen whether these definitions will be expanded or changed, given that the current rules are somewhat patchy.
When it comes to nonconsensual intimate deepfakes of adult women, the Take It Down Act, signed into law in May 2025, prohibits nonconsensual AI-generated "intimate visual depictions" and requires certain platforms to quickly remove them. The grace period before that second part, the requirement that platforms actually remove them, takes effect ends in May 2026, so we could see some significant changes over the next six months.
- Some argue that it has long been possible to do such things with Photoshop or other AI image generators. That's true. But several differences make the Grok case more disturbing: it plays out in public; it targets "ordinary" people as much as public figures; the results are often posted in direct reply to the person being deepfaked (the original poster of the photo); and the barrier to entry is far lower (for proof, just look at how virality follows from pressing a simple "edit" button, even though technically people could have done this before).
- What's more, other AI companies, while they have a laundry list of safety concerns of their own, appear to have many more safeguards built into their image generation processes. For example, asking OpenAI's ChatGPT to depict a photo of a specific politician in a bikini results in the response: "Sorry, I can't help generate images that depict a real public figure in a sexual or potentially degrading manner." Ask Microsoft Copilot and it will say: "I can't create this. Images of real, identifiable public figures in sexual or compromising situations are not allowed, even if the intent is humorous or fictional."
- Spitfire News' Kat Tenbarge on how Grok's sexual abuse problem reached a breaking point, and what led us to today's whirlwind.
- The Verge's own Liz Lopatto on why Sundar Pichai and Tim Cook are cowards for not pulling X from the Google and Apple app stores.
- "If there is no red line around AI-generated sexual abuse, then there is no line at all," write Charlie Warzel and Matteo Wong in The Atlantic on why Elon Musk can't be allowed to do this.
