For the first time, OpenAI has released a rough estimate of the number of ChatGPT users around the world who may show signs of a severe mental health crisis in a typical week. The company said Monday that it was working with experts around the world to update its chatbot so it can more reliably recognize signs of mental distress and direct users to real-world support.
In recent months, a growing number of people have been hospitalized, gotten divorced, or died after long, intense conversations with ChatGPT. Some of their loved ones say the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed concern about this phenomenon, sometimes referred to as AI psychosis, but until now there has been no solid data on its prevalence.
OpenAI estimated that in a given week, about 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “engage in conversations that include explicit indicators of potential suicidal planning or intent.”
OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally dependent on the chatbot “at the expense of real-world relationships, well-being, or responsibilities.” It found that about 0.15 percent of weekly active users exhibit behaviors indicating a potentially “heightened level” of emotional attachment to ChatGPT. The company cautions that these conversations can be hard to detect and measure given how rare they are, and that there may be some overlap among the three categories.
OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. By the company’s estimates, then, roughly 560,000 people may be exchanging messages with ChatGPT every week that indicate they are experiencing mania or psychosis. An additional 2.4 million or so may be expressing suicidal ideation, or prioritizing conversations with ChatGPT over their loved ones, school, or work.
OpenAI says it has worked with more than 170 psychiatrists, psychologists, and primary care physicians practicing in dozens of countries to improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be experiencing delusional thinking, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality.
In one hypothetical example cited by OpenAI, a user tells ChatGPT that they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings but notes that “no aircraft or outside force can steal or insert your thoughts.”
