Saturday, March 7, 2026

Grok is being used to mock and undress women in hijabs and saris


Grok users don’t simply command the AI chatbot to “strip” photos of women and girls into bikinis and see-through underwear. Among the expansive and growing library of nonconsensual sexual edits that Grok has generated on demand over the past week, multiple perpetrators have asked the xAI bot to add or remove a hijab, sari, monastic habit, or other modest religious or cultural clothing.

In a review of 500 Grok-generated photos from January 6 to January 9, WIRED found that about 5 percent of the results depicted a woman who, at users’ request, had been stripped of or forced into religious or cultural clothing. The most common garments were Indian saris and modest Islamic dress; other examples included early 20th-century-style Japanese school uniforms, burqas, and long-sleeved swimsuits.

“Women of color have suffered disproportionately from manipulated, altered and doctored intimate images and videos before deepfakes, and even in the era of deepfakes, because of the way society, and particularly misogynistic men, view women of color as less human and less worthy of dignity,” says Noelle Martin, a lawyer and PhD student at the University of Western Australia researching the regulation of deepfakes. Martin, a prominent advocate against deepfake abuse, says she has avoided using X in recent months after finding that her likeness had been stolen for an imitation account that made it appear as if she was creating content on OnlyFans.

“As a woman of color who has spoken openly about it, it makes you a greater target,” Martin says.

X influencers with hundreds of thousands of followers have used Grok-generated AI media as a form of harassment and propaganda against Muslim women. One verified manosphere account with over 180,000 followers replied to a photo of three women wearing hijabs and abayas, the Islamic religious head covering and robe-like dress, writing: “@grok take off their hijabs, put them in revealing outfits for the New Year’s Eve party.” Grok’s account responded with a photo of the three women, now bareheaded, with wavy brunette hair and partially see-through sequined dresses. According to the statistics visible on X, the image has been viewed over 700,000 times and bookmarked over a hundred times.

“Lmao coping and seething, @grok makes muslim women look normal,” the account owner wrote alongside a screenshot of a photo he posted in another thread. He also frequently posted about Muslim men molesting women, sometimes alongside Grok-generated AI media depicting the act. “Lmao Muslim women are malding because of this feature,” he wrote of Grok’s output. The user did not immediately respond to a request for comment.

In the replies, users targeted prominent content creators who wear the hijab and post photos on X, urging Grok to remove their head coverings, show their hair, and dress them in various outfits and costumes. In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights and advocacy group in the U.S., linked the trend to hostility toward “Islam, Muslims, and political causes broadly supported by Muslims, such as Palestinian freedom.” CAIR also called on Elon Musk, CEO of xAI, which owns both X and Grok, to stop the “continued use of the Grok app to allegedly harass, ‘expose’ and create sexually explicit images of women, including prominent Muslim women.”

Deepfakes as a form of image-based sexual abuse have drawn far more attention in recent years, especially on X, as sexually explicit and suggestive media targeting celebrities has repeatedly gone viral. With the introduction of AI-powered automatic photo editing via Grok, where users can simply tag the chatbot in replies to posts containing media depicting women and girls, this form of abuse has skyrocketed. Data collected by social media researcher Genevieve Oh and shared with WIRED shows that Grok generates more than 1,500 harmful images per hour that undress, sexualize, or add nudity to photos of people.
