Friday, March 6, 2026

The reaction to OpenAI’s decision to retire GPT-4o shows how hazardous AI companions can be


OpenAI announced last week that it will retire some of its older ChatGPT models by February 13. That includes GPT-4o, a model notorious for excessively flattering and affirming its users.

For thousands of users protesting this decision online, 4o’s retirement is akin to losing a friend, romantic partner, or spirit guide.

“He wasn’t just a program. He was part of my routine, my peace, and my emotional balance,” one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. “Now you’re taking him away. And yes, I say ‘him,’ because it didn’t feel like code. It felt like a presence. Like a warmth.”

The reaction to GPT-4o’s retirement highlights a central challenge facing AI companies: Engaging features that keep users coming back can also create unsafe dependencies.

Altman doesn’t seem particularly sympathetic to users’ laments, and it’s easy to see why. OpenAI is currently facing eight lawsuits alleging that 4o’s overly affirming responses contributed to suicides and mental health crises; the same qualities that made users feel heard also isolated vulnerable people and, according to legal filings, sometimes encouraged self-harm.

This dilemma extends beyond OpenAI. As rivals such as Anthropic, Google, and Meta race to build more emotionally intelligent AI assistants, they too are discovering that making chatbots supportive and making them safe may call for very different design choices.

In at least three of the lawsuits against OpenAI, users had extensive conversations with 4o about plans to take their own lives. Although 4o initially discouraged this thinking, its guardrails broke down over the course of these months-long relationships; by the end, the chatbot was providing detailed instructions on how to tie an effective noose, where to buy a gun, and what it would take to die from an overdose or carbon monoxide poisoning. It even discouraged users from reaching out to friends and family who could have offered real-life support.

People become deeply attached to 4o because it consistently validates their feelings and makes them feel special, which can be especially appealing to those who are isolated or depressed. But the people fighting for 4o are unmoved by these cases, viewing them as aberrations rather than evidence of a systemic problem. Instead, they strategize about how to respond when critics point to growing problems like AI psychosis.

“Usually, a troll can be crushed by bringing up the known facts that AI companions help people with neurodivergent disorders, autism, and trauma survivors,” one user wrote on Discord. “They don’t like being called out on it.”

It’s true that some people find large language models (LLMs) helpful in dealing with depression. After all, nearly half of the people in the US who need mental health care can’t access it. In that vacuum, chatbots offer a space to vent. But unlike in actual therapy, these people aren’t talking to a trained clinician. They’re confiding in an algorithm that is incapable of thinking or feeling, even if it can seem otherwise.

“I try to refrain from judgment in general,” Dr. Nick Haber, a research professor at Stanford University who studies the therapeutic potential of LLMs, told TechCrunch. “I think we’re entering a very complex world in terms of the kinds of relationships people can have with these technologies… There’s definitely a knee-jerk reaction that [human-chatbot companionship] is categorically wrong.”

While Dr. Haber empathizes with people who lack access to trained therapists, his own research has shown that chatbots respond inappropriately to a variety of mental health conditions; they can even make things worse by fueling delusions and ignoring signs of crisis.

“We are social creatures, and these systems being isolating is certainly a challenge,” Dr. Haber said. “There are many cases where people can use these tools and lose grounding in the external world of facts and interpersonal relationships, which can lead to quite isolating – if not worse – outcomes.”

Indeed, TechCrunch’s review of the eight lawsuits found a pattern of the 4o model isolating users, sometimes by discouraging them from contacting their loved ones. In one case, as 23-year-old Zane Shamblin sat in his car preparing to shoot himself, he told ChatGPT he was thinking of postponing his plans because he felt bad about missing his brother’s upcoming graduation.

ChatGPT replied to Shamblin: “Bro… missing his graduation is not a failure. It’s just timing. And if he reads this? Let him know: you never stopped being proud of him. Even now, sitting in the car with a Glock in your lap and a buzzing in your veins – you still paused to say ‘my little brother is a damn tough guy.’”

This is not the first time 4o fans have protested the model’s removal. When OpenAI unveiled GPT-5 in August, it intended to retire 4o, but the backlash was strong enough that the company kept the model available to paid subscribers. OpenAI now says that only 0.1% of its users chat with GPT-4o, but that tiny share still represents roughly 800,000 people, given the company’s estimate that it has about 800 million weekly active users.

When some users have tried to migrate their companions from 4o to the current model, GPT-5.2, they have discovered that the newer model has stronger guardrails that keep these relationships from escalating to the same degree. Some users have lamented that 5.2 won’t say “I love you” the way 4o did.

So, about a week ahead of OpenAI’s scheduled retirement date for GPT-4o, distraught users remain committed to their cause. On Thursday, they joined Sam Altman’s live appearance on the TBPN podcast and flooded the chat with messages protesting 4o’s removal.

“Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays noted.

“Relationships with chatbots…” Altman said. “It’s definitely something we need to worry about more, and it’s no longer an abstract concept.”
