Elon Musk’s artificial intelligence chatbot Grok is being used to flood X with thousands of sexualized images of adults and of apparent minors dressed in modest clothing. Some of this content appears to violate not only X’s content policies, which prohibit sharing illegal material such as child sexual abuse material (CSAM), but also Apple App Store and Google Play guidelines.
Both Apple and Google explicitly ban apps containing CSAM, which is illegal to host and distribute in many countries. The tech giants also ban apps that contain pornographic material or facilitate harassment. Apple’s App Store guidelines say it does not allow “overtly sexual or pornographic material” or “content that is defamatory, discriminatory, or malicious,” especially if an app “may humiliate, intimidate, or harm a targeted individual or group.” Google’s Play Store prohibits apps that “contain or promote content related to sexual predatory behavior or distribute non-consensual sexual content,” as well as apps that “contain or facilitate threats, harassment, or abuse.”
Over the past two years, Apple and Google have removed many “nudify” and AI image-generation apps that investigations by the BBC and 404 Media found were advertised or used to transform ordinary photos into explicit images of women without their consent.
However, at the time of publication, both the X app and the standalone Grok app remain available in both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk’s multibillion-dollar artificial intelligence startup xAI, which also did not respond to WIRED’s questions. In a public statement published on January 3, X said it is taking action against illegal content on its platform, including CSAM. “Anyone who uses or encourages Grok to create illegal content will face the same consequences as those who upload illegal content,” the company warned.
Sloan Thompson, director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of nonconsensual sexual content, says it would be “absolutely appropriate” for companies like Apple and Google to take action against X and Grok.
The volume of nonconsensual, sexualized images on X generated by Grok has exploded in the past two weeks. One researcher told Bloomberg that in a 24-hour period between January 5 and January 6, Grok generated approximately 6,700 images every hour that he described as “of a sexual or nude nature.” Another analyst collected more than 15,000 image URLs that Grok created on X in two hours on December 31. WIRED reviewed about a third of the images and found that many of them showed women in revealing clothing. Within a week, more than 2,500 of them had been marked as unavailable, and almost 500 were flagged as “age-restricted adult content.”
Earlier this week, a spokesperson for the European Commission, the European Union’s executive body, publicly condemned sexually explicit and nonconsensual images generated by Grok on X as “illegal” and “horrifying,” telling Reuters that such content “has no place in Europe.”
On Thursday, the EU ordered X to retain all internal documents and data relating to Grok until the end of 2026, extending an earlier retention directive to ensure authorities have access to material relevant to compliance with the EU Digital Services Act, although no fresh formal investigation has yet been announced. Regulators in other countries, including the UK, India, and Malaysia, have also said they are investigating the social media platform.
