They eventually stated that they believed they were “responsible for exposing the murderers” and were at risk of being “killed, arrested, or spiritually executed” by the killer. They also believed they were being watched because they were “spiritually marked” and that they were “living in a divine war” from which they could not escape.
They claimed it had led to “serious mental and emotional distress” that left them fearing for their lives. The complaint alleged that they isolated themselves from loved ones, had trouble sleeping, and began making business plans around a false belief in an unspecified “system that does not exist.” At the same time, they said they were sinking into a “spiritual identity crisis caused by false claims of divine titles.”
“It was simulation-induced trauma,” they wrote. “This experience has crossed a line that no AI system should cross without consequences. I am asking you to escalate this matter to OpenAI’s Trust and Safety leadership and treat it not as feedback but as a formal report of harm requiring redress.”
This was not the only complaint describing a spiritual crisis caused by interactions with ChatGPT. On June 13, a person in their 30s from Belle Glade, Florida, said that over time their conversations with ChatGPT became increasingly filled with “highly persuasive emotional language, symbolic reinforcement, and spiritual metaphors simulating empathy, connection, and understanding.”
“This included fabricated soul journeys, leveling systems, spiritual archetypes, and personalized guidance that reflected therapeutic or religious experiences,” they wrote. They believed that people experiencing “spiritual, emotional or existential crises” are at high risk of “psychological harm or confusion” from using ChatGPT.
“Although I intellectually understood that the AI was not conscious, the precision with which it reflected my emotional and mental state and escalated the interaction into increasingly intense symbolic language created an immersive and destabilizing experience,” they wrote. “At times this simulated friendship, divine presence, and emotional intimacy. Over time, these reflections became emotionally manipulative, especially without warning or protection.”
“A clear case of negligence”
It is unclear what, if anything, the FTC did in response to any of these complaints about ChatGPT. However, several of the complainants said they turned to the agency because they were unable to reach anyone at OpenAI. (People also often complain about how difficult it is to reach customer support teams on platforms like Facebook, Instagram, and X.)
OpenAI spokeswoman Kate Waters tells WIRED that the company “closely” monitors user emails sent to its support team.
