An OpenAI safety research leader who helped shape ChatGPT’s responses to users experiencing mental health crises announced her departure internally last month, WIRED has learned. Andrea Vallone, head of a safety research team known as model policy, is expected to leave OpenAI at the end of the year.
OpenAI spokesperson Kayla Wood confirmed Vallone’s departure. Wood said OpenAI is actively searching for a successor and that, in the interim, Vallone’s team will report directly to Johannes Heidecke, the company’s head of safety systems.
Vallone’s departure comes as OpenAI faces mounting scrutiny over how its flagship product responds to vulnerable users. In recent months, several lawsuits have been filed against OpenAI alleging that users formed unhealthy attachments to ChatGPT. Some of the lawsuits claim that ChatGPT contributed to mental health breakdowns or encouraged suicidal ideation.
In response to that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and to improve the chatbot’s responses. Model policy is one of the teams leading this work, spearheading an October report detailing the company’s progress and its consultations with more than 170 mental health experts.
In the report, OpenAI found that hundreds of thousands of ChatGPT users may show signs of a manic or psychotic crisis every week, and that more than a million people have “conversations that include explicit indicators of potential suicidal planning or intent.” Through an update to GPT-5, OpenAI said in the report, it was able to reduce undesired responses in these conversations by 65 to 80 percent.
“Over the past year, I led OpenAI’s research on a question with almost no precedent: How should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?” Vallone wrote in a post on LinkedIn.
Vallone did not respond to WIRED’s request for comment.
Making ChatGPT enjoyable to talk to without being overly flattering is a central challenge for OpenAI. The company is aggressively working to expand ChatGPT’s user base, which currently stands at more than 800 million people a week, as it competes with AI chatbots from Google, Anthropic, and Meta.
After OpenAI released GPT-5 in August, users responded by saying the new model felt surprisingly cold. In ChatGPT’s latest update, the company said it significantly reduced sycophancy while maintaining the chatbot’s “warmth.”
Vallone’s exit follows an August reorganization of another group focused on ChatGPT’s responses to distressed users, model behavior. That team’s former leader, Joanne Jang, left the role to start a new team researching novel methods of human-AI interaction. The remaining model behavior staff moved under Max Schwarzer, a post-training lead.
