Tuesday, March 10, 2026

Former employee disputes OpenAI’s erotica claims


When the story of AI is written, Steven Adler might just become its Paul Revere – or at least one of them – when it comes to safety.

Adler, who spent four years in various safety roles at OpenAI, wrote a piece last month for The New York Times under a rather disturbing headline: “I was in charge of product safety at OpenAI. Don’t trust its claims about ‘erotica.'” In it, he described the challenges OpenAI faced in allowing users to have erotic conversations with chatbots while protecting them from any impact those interactions might have on their mental health. “No one wanted to be the morality police, but we lacked ways to accurately measure and police erotic use,” he wrote. “We decided that AI-powered erotica would have to wait.”

Adler wrote the piece after OpenAI CEO Sam Altman announced that the company would soon enable “erotica for verified adults.” In response, Adler wrote that he had “major questions” about whether OpenAI had done enough to, in Altman’s words, “mitigate” the mental health concerns related to users interacting with the company’s chatbots.

After reading Adler’s article, I wanted to talk to him. He graciously accepted the offer to come to WIRED’s San Francisco office and join this episode of The Big Interview, where he talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenge he has posed to companies providing chatbots around the world.

This interview has been edited for length and clarity.

KATIE DRUMMOND: Before we start, I want to make two things clear. First of all, you are not, unfortunately, the same Steven Adler who played drums in Guns N’ Roses. Correct?

STEVEN ADLER: Absolutely correct.

Okay, it’s not you. Secondly, you’ve had a long career in the technology industry, and more specifically in the field of artificial intelligence. So before we get into everything, tell us a little bit about your career, your background, and what you’ve been working on.

I have worked across the AI industry, particularly focusing on the safety angles. Most recently, I worked at OpenAI for four years. I’ve covered basically every dimension of safety you can imagine: how do we improve products for customers and mitigate the risks that exist today? And, looking a little further into the future, how will we know whether AI systems are truly becoming extremely dangerous?
