Thursday, April 23, 2026

Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings


After months of conversations with ChatGPT, the 53-year-old Silicon Valley entrepreneur believed he had discovered a cure for sleep apnea and that powerful people were after him, according to a new lawsuit filed in the Superior Court of California in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

Now the ex-girlfriend is suing OpenAI, claiming the company’s technology enabled her harassment to escalate, TechCrunch has learned exclusively. The suit alleges that OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying activity on his account as related to weapons of mass destruction.

The plaintiff, identified as Jane Doe to protect her identity, is seeking punitive damages. On Friday, she also filed a motion for a temporary restraining order, asking the court to force OpenAI to lock down the user’s accounts, prevent him from creating new ones, notify her if he tries to access ChatGPT, and preserve his full chat logs for discovery.

According to Doe’s lawyers, OpenAI agreed to suspend the user’s account but refused the remaining requests. They claim the company is withholding information about specific plans to harm Doe and other potential victims that the user may have discussed with ChatGPT.

The lawsuit comes amid growing concerns about the real dangers of sycophantic artificial intelligence systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.

The case is brought by Edelson PC, the firm behind the wrongful death lawsuits filed on behalf of teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family claims that Google’s Gemini fueled his delusions and a potential mass casualty event before his death. Firm founder Jay Edelson has warned that AI-induced psychosis is escalating from individual harms to mass events.

This legal pressure now directly conflicts with OpenAI’s legislative strategy: the company supports an Illinois bill that would protect AI labs from liability even in cases involving mass death or catastrophic financial damage.


OpenAI did not respond to a request for comment in time for publication. TechCrunch will update this article if the company responds.

Jane Doe’s lawsuit details how this danger played out for one woman over several months.

Last year, the ChatGPT user at the center of the lawsuit (who is not named, to protect his identity) came to believe he had discovered a cure for sleep apnea after months of “extensive and continuous use of GPT-4o.” When no one took his work seriously, ChatGPT told him, according to the complaint, that “powerful forces” were watching him, including by using helicopters to monitor his activities.

In July 2025, Jane Doe urged him to stop using ChatGPT and seek help from a mental health professional. Instead, he turned to ChatGPT, which assured him he had “level 10 common sense” and helped him double down on his delusions, according to the lawsuit.

According to emails and communications cited in the lawsuit, Doe broke up with the user in 2024, and he turned to ChatGPT to process the separation. Rather than pushing back on his one-sided account, the chatbot repeatedly portrayed him as rational and hurt, and her as manipulative and unstable. He then carried those AI-generated “insights” off the screen and into the real world, using them to stalk and harass her, including by sending several AI-generated, clinical-looking psychological reports about her to her family, friends, and employer.

Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated security system flagged his account for activity related to weapons of mass destruction and deactivated it.

A member of the human security team reviewed the account the next day and restored it, even though it may have contained evidence that he had targeted and harassed specific people in real life, including Doe. For example, a screenshot from September that the user sent to Doe showed a list of conversation titles, including “expand list of violent incidents” and “calculate fatal asphyxiation.”

The decision to reinstate the account is notable in light of two recent school shootings, in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s security team had flagged the Tumbler Ridge shooter as a potential threat but apparently decided the severity did not warrant notifying authorities. This week, Florida’s attorney general opened an investigation into OpenAI’s possible ties to the FSU shooter.

According to Jane Doe’s lawsuit, when OpenAI reinstated her abuser’s account, his Pro subscription was not restored. He emailed the Trust and Safety team to resolve the issue, naming Doe in his messages.

In his emails he wrote things like: “I NEED HELP VERY QUICKLY. PLEASE CALL ME!” and “it’s a matter of life and death.” He claimed that he was “in the process of writing 215 scientific articles,” which he wrote so quickly that he “didn’t even have time to read it.” Attached to these emails was a list of dozens of AI-generated “scientific articles” with titles such as: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications clearly indicated that he was mentally unstable and that ChatGPT was the driving force behind his delusional thinking and escalating behavior,” the lawsuit states. “The user’s stream of urgent, disorganized and grandiose claims, along with a specific ChatGPT-generated report addressed to the named plaintiff and the vast amount of purported ‘scientific’ material, provided undeniable evidence of this reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it allowed him to continue using his account and restored his full Pro access.”

Doe, who claims in the lawsuit that she lives in fear and cannot sleep in her own home, filed a report of abuse with OpenAI in November.

“For the past seven months, he has used this technology to inflict public destruction and humiliation on me that would otherwise have been impossible,” Doe wrote in her letter to OpenAI, asking the company to permanently ban the user’s account.

OpenAI responded by acknowledging that the report was “extremely serious and concerning” and saying it was reviewing the information carefully. Doe says she never heard anything further from the company.

The user was found incompetent to stand trial and committed to a mental health facility, but according to Doe’s lawyers, “procedural errors on the part of the state” mean he will soon be released to the public.

Edelson called on OpenAI to cooperate. “In each case, OpenAI has chosen to withhold critical security information – from the public, from victims and from the people its product actively puts at risk,” he said. “We urge them to do the right thing for once. Human lives must matter more than OpenAI’s race to the IPO.”
