Saturday, March 7, 2026

Why no one stops Grok


Today’s episode of Decoder is about X, Grok, and Elon Musk. For several weeks now, one of the worst, most annoying, and most stupidly irresponsible controversies in the brief history of generative AI has been unfolding. Grok, the chatbot created by Elon Musk’s xAI, is capable of creating all kinds of AI-generated images, including nonconsensual intimate images of women and minors.

Since Grok is connected to X, the platform formerly known as Twitter, users can simply ask Grok to edit any image on the platform, and Grok will mostly do it, and then distribute the resulting image across the platform. Over the past few weeks, X and Elon have claimed over and over again that various guardrails have been put in place, but so far, getting around them has been trivial. It has now become clear that Elon wants Grok to be able to do this, and he gets very annoyed with anyone who wants him to stop, especially the various governments around the world that are threatening legal action against X.

This is one of those situations where if you simply describe the problem to someone, they will intuitively feel that someone should be able to do something about it. And it’s true: someone should be able to do something about a one-click harassment machine that generates nonconsensual images of women and children. But who actually has that power, and what they can do with it, is an incredibly complicated question, one that is entangled in the thorny history of content moderation and the legal precedents that underlie it. So I invited Riana Pfefferkorn on the show to tell me all about it.

Riana has joined me before to explain complicated internet moderation issues. She is currently a policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence and has a deep understanding of what regulators and lawmakers in the U.S. and around the world can do about a problem like Grok’s, if they choose to.

Verge subscribers, don’t forget that you have exclusive access to ad-free Decoder wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.

So Riana really helped me walk through the legal framework in place here, the different entities that have leverage and can exert pressure on the situation, and where all of this may go as xAI does damage control while still largely shipping a product that continues to do real harm.

Here’s one thing I’ve been thinking about a lot as this whole situation has unfolded. Over the past 20 years, the concept of content moderation has waxed and waned as different kinds of social media platforms have risen and fallen in popularity. The history of a platform like Reddit is itself a microcosm of the entire history of content moderation.

Around 2021, we hit a real high-water mark for the idea of moderation and trust and safety on these platforms as a whole. This was when Covid disinformation, election lies, QAnon conspiracies, and inciting the Capitol mob could actually get you banned from most major platforms… even if you were the President of the United States.

It’s safe to say that that era of content moderation is over, and we now live in a much messier, more laissez-faire place. It’s possible that Elon and his pornographic image generator will push the pendulum back, but even if that happens, the consequences could be more complicated than anyone would like.

If you want to read more about what we talked about in this episode, check out these links:

  • The massive problem with Grok AI deepfakes | The Verge
  • Grok is undressing children – can the law stop it? | The Verge
  • Tim Cook and Sundar Pichai are cowards | The Verge
  • Senate passes bill that would let victims of nonconsensual deepfakes sue | The Verge
  • The EU intends to ban nudify apps following Grok outrage | Politico
  • In a matter of days, Grok flooded X with millions of sexualized images | The New York Times
  • The Supreme Court just upended internet law, and I have questions | The Verge
  • Mother of Elon Musk’s son sues xAI over fake sexual images | AP

Have questions or comments about this episode? Write to us at decoder@theverge.com. We really read every email!

Decoder with Nilay Patel

A podcast from The Verge about big ideas and other problems.


