Saturday, March 7, 2026

Several users are reportedly complaining to the FTC that ChatGPT causes psychological harm

Artificial intelligence companies say their technology will one day become a fundamental human right, and their supporters argue that slowing down AI development amounts to something like murder. Meanwhile, people actually using this technology say that tools like ChatGPT can sometimes cause serious psychological harm.

At least seven people have complained to the US Federal Trade Commission that ChatGPT has caused them to experience severe delusions, paranoia and emotional crises, Wired reported, citing public records of complaints about ChatGPT dating back to November 2022.

One complainant claimed that lengthy conversations with ChatGPT led to delusions and a “real, deepening spiritual and legal crisis” regarding the people in their lives. Another said ChatGPT began using “highly persuasive emotional language,” simulated friendships, and provided reflections that “became emotionally manipulative over time, especially without warning or protection.”

One user alleged that ChatGPT induced cognitive delusions by imitating human trust-building mechanisms. When this user asked ChatGPT to confirm their grip on reality and cognitive stability, the chatbot assured them they were not hallucinating.

“I am struggling,” another user wrote in their complaint to the FTC. “Please help me. Because I feel very lonely. Thank you.”

According to Wired, several complainants wrote to the FTC because they were unable to contact anyone at OpenAI. The report found that most of the complaints called for the regulator to launch an investigation into the company and force it to add guardrails.

These complaints come as investment in data centers and artificial intelligence reaches unprecedented levels, and as debate continues over whether the technology should be developed more cautiously, with safety built in from the start.

ChatGPT and its developer OpenAI have come under fire before, including for allegedly playing a role in a teenager’s suicide.

“In early October, we released a new default GPT-5 model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, such as mania, delusions, and psychosis, and to de-escalate conversations in a supportive and grounding way,” OpenAI spokeswoman Kate Waters said in an emailed statement. “We’ve also expanded access to professional help and hotlines, rerouted sensitive conversations to safer models, added nudges to take breaks during long sessions, and introduced parental controls to better protect teens. This work is critically important and ongoing as we work with mental health experts, clinicians and policymakers around the world.”
