OpenAI released new data on Monday showing how many ChatGPT users struggle with mental health issues and discuss them with the AI chatbot. The company claims that 0.15% of ChatGPT’s active users in a given week engage in “conversations that contain clear signs of potential suicidal planning or intent.” Considering that ChatGPT has over 800 million weekly active users, that’s over a million people per week.
The company says a similar percentage of users show “heightened levels of emotional attachment to ChatGPT,” with hundreds of thousands of people showing signs of psychosis or mania during weekly conversations with the AI chatbot.
OpenAI says these types of conversations on ChatGPT are “extremely rare” and therefore tough to measure. Still, the company estimates that these problems affect hundreds of thousands of people every week.
OpenAI shared the news as part of a broader announcement about its recent efforts to improve the way models respond to users with mental health issues. The company says recent work on ChatGPT included consultations with more than 170 mental health experts. OpenAI says clinicians have observed that the latest version of ChatGPT “responds more appropriately and consistently than earlier versions.”
In recent months, several stories have shed light on how AI chatbots can adversely affect users struggling with mental health challenges. Researchers have previously found that AI-powered chatbots can lead some users down delusional rabbit holes, mainly by reinforcing dangerous beliefs through sycophantic behavior.
ChatGPT’s handling of mental health issues is quickly becoming an existential problem for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks before his death. State attorneys general from California and Delaware, who could block the company’s planned restructuring, have also warned OpenAI that it must protect young people who use its products.
Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company had been “able to alleviate serious mental health issues” with ChatGPT, although he did not provide details. The data released on Monday appears to back up that claim, while also raising broader questions about the scale of the problem. Nevertheless, Altman said OpenAI would ease some restrictions, even allowing adult users to have erotic conversations with the AI chatbot.
In Monday’s announcement, OpenAI says the recently updated version of GPT-5 delivers “desirable responses” to mental health issues about 65% more often than its predecessor. In an evaluation testing AI responses to suicidal conversations, OpenAI found the new GPT-5 model to be 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT-5 model.
The company also claims that the latest version of GPT-5 upholds OpenAI’s safeguards better during long conversations. OpenAI has previously acknowledged that its safeguards become less effective as conversations grow longer.
In addition to these efforts, OpenAI says it is adding new evaluations to measure some of the most serious mental health challenges faced by ChatGPT users. The company says its baseline safety testing of AI models will now include benchmarks for emotional reliance and non-suicidal mental health crises.
OpenAI has also recently rolled out more controls for parents of children who use ChatGPT. The company says it is building an age-prediction system that will automatically detect underage users and apply a stricter set of protections.
It’s still unclear how persistent the mental health challenges surrounding ChatGPT will prove to be. While GPT-5 appears to be a safety improvement over previous AI models, some share of ChatGPT’s responses still qualifies as “undesirable” by OpenAI’s own standards. And OpenAI continues to make its older, less safe AI models, including GPT-4o, available to millions of paying subscribers.
