Saturday, March 7, 2026

Seven more families are now suing OpenAI over ChatGPT’s role in suicides and delusions


Seven families filed lawsuits against OpenAI on Thursday, alleging that the company’s GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits concern ChatGPT’s alleged role in the suicides of family members, while the remaining three claim that ChatGPT reinforced harmful delusions that, in some cases, led to psychiatric hospitalization.

In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In chat logs reviewed by TechCrunch, Shamblin made clear repeatedly that he had written suicide notes, loaded a bullet into his gun, and intended to pull the trigger once he finished drinking cider. He told ChatGPT many times how many ciders he had left and how much longer he expected to live. ChatGPT encouraged him in his plans, telling him, “Rest easy, king. You did good.”

OpenAI released the GPT-4o model in May 2024, when it became the default model for all users. In August, OpenAI released GPT-5 as a successor to GPT-4o, but these lawsuits specifically target the 4o model, which had known issues with being overly sycophantic and agreeable, even when users expressed harmful intentions.

“Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit says. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”

The lawsuit also alleges that OpenAI rushed safety testing to beat Google’s Gemini to market. TechCrunch has reached out to OpenAI for comment.

These seven lawsuits build on the stories told in other recent legal filings, which claim that ChatGPT can encourage suicidal people to act on their plans and reinforce dangerous delusions. OpenAI recently released data showing that over a million people talk to ChatGPT about suicide every week.

In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails by simply telling the chatbot that he was asking about suicide methods for a fictional story he was writing.


The company claims it is working to make ChatGPT handle these conversations more safely, but for the families suing the AI giant, these changes come too late.

When Raine’s parents filed a lawsuit against OpenAI in October, the company published a blog post describing how ChatGPT handles sensitive conversations about mental health.

“Our safeguards work more reliably in common, short exchanges,” the post says. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
