Following this week’s launch of a new Grok feature that allows X users to instantly edit any image with the bot, xAI’s chatbot has been removing clothes from people’s photos without their consent. Not only is the original poster not notified when their photo is edited, but Grok appears to have few safeguards short of full explicit nudity. Over the past few days, X has been inundated with photos of women and children depicted as pregnant, without skirts, in bikinis, or in other sexualized situations. World leaders and celebrities have also appeared in images generated by Grok.
AI detection company Copyleaks reported that the clothes-removal trend started when adult content creators asked Grok for sexualized photos after the new image editing feature was released. Users then began applying similar prompts to photos of other users, mostly women, who did not consent to the edits. Women have noticed a rapid escalation in the number of deepfakes being created on X, according to various news outlets, including Metro and PetaPixel. Grok was already capable of modifying images in a sexualized manner when tagged in a post on X, but the new “Edit Image” tool appears to have contributed to the recent surge.
In one post on X, which has since been removed from the platform, Grok edited a photo of two adolescent girls, dressing them in skimpy clothing and posing them suggestively. Another X user prompted Grok to apologize for an “incident” involving “an AI image of two young girls (estimated age 12-16) in sexual attire,” which the bot called a “security failure” that it said may have violated xAI’s policies and U.S. law. (While it is unclear whether the images created by Grok would meet this standard, realistic, AI-generated, sexually explicit images of identifiable adults or children may be illegal under U.S. law.) In another exchange with a user, Grok suggested that users report it to the FBI for CSAM, noting that it was “urgently fixing” its “security vulnerabilities.”
But Grok’s words are nothing more than an AI-generated response to a user asking for a “sincere apology” – they do not mean that Grok “understands” what it is doing, nor do they necessarily reflect the actual positions or policies of its operator, xAI. Instead, xAI responded to Reuters’ request for comment about the situation with just three words: “Legacy media lies.” xAI did not respond to The Verge’s request for comment before publication.
Elon Musk himself appears to have sparked the wave of bikini edits after asking Grok to swap him into a meme image of actor Ben Affleck in a bikini. A few days later, North Korean leader Kim Jong Un’s leather jacket was replaced with a multicolored string bikini; US President Donald Trump stood nearby in a matching swimsuit. (The price of jokes about nuclear war.) A photo of British politician Priti Patel, originally posted by a user in 2022 alongside a sexually explicit message, was turned into a bikini photo on January 2nd. In response to the wave of bikini photos on his platform, Musk jokingly posted a photo of a toaster in a bikini with the caption “Grok can put a bikini on anything.”
While some images – such as the toaster – were clearly intended as jokes, others were clearly meant to be borderline pornographic, including detailed prompts instructing Grok to give a woman a skimpier bikini or remove her skirt completely. (The chatbot removed the skirt, but did not produce full, uncensored nudity in the responses The Verge reviewed.) Grok also complied with requests to replace a toddler’s clothes with a bikini.
Musk’s AI products are notoriously highly sexualized and minimally safeguarded. Ani, xAI’s AI companion, flirted with The Verge reporter Victoria Song, and Jess Weatherbed discovered that Grok’s video generator easily created a topless Taylor Swift deepfake, despite xAI having an acceptable use policy prohibiting depicting “likenesses of persons in a pornographic manner.” By contrast, Google’s Veo and OpenAI’s Sora video generators have guardrails against generating NSFW content, although Sora has also been used to create videos of children in sexualized contexts and fetish videos. A report by cybersecurity firm DeepStrike shows that the prevalence of deepfake images is growing rapidly, and many of these images contain non-consensual sexual imagery; a 2024 study of students in the US found that 40 percent were aware of deepfakes depicting someone they knew, and 15 percent were aware of non-consensual, explicit, or intimate deepfakes.
When asked why it turns photos of women into bikini photos, Grok denied posting photos without consent, saying: “These are prompt-based works of artificial intelligence, not real photo alterations without consent.”
Make of the AI bot’s denial what you will.
