OpenAI on Thursday announced a new feature called Trusted Contact, designed to alert a trusted third party if a conversation suggests a risk of self-harm. The feature lets an adult ChatGPT user designate another person, such as a friend or family member, as a trusted contact on their account. If a conversation appears to be escalating toward self-harm, ChatGPT will now prompt the user to reach out to that person, and will also send an automatic alert encouraging the contact to check in on the user.
OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking to the chatbot. In many cases, the families say ChatGPT encouraged a loved one to take their own life – or even helped them plan it.
Currently, OpenAI uses a combination of automated detection and human review to handle potentially harmful incidents. Certain conversational signals flag possible suicidal ideation to the company’s systems, which then route the conversation to a human safety team. The company says every report of this kind is investigated by a person. “We aim to review these safety alerts in less than an hour,” the company says.
If OpenAI’s internal team deems a situation a serious safety risk, ChatGPT sends a notification to the trusted contact – via email, text message, or in-app notification. The notification is brief and encourages the contact to reach out to the user. The company says it does not share details of what was discussed, in order to protect user privacy.
The Trusted Contact feature builds on the parental controls the company introduced in September of last year, which gave parents some oversight of their teens’ accounts, including safety notifications designed to alert parents if OpenAI’s systems determine their child is facing a “serious safety risk.” For some time now, ChatGPT has also automatically encouraged users to seek professional help when a conversation turns toward self-harm.
Notably, Trusted Contact is opt-in, and even if it is enabled on one account, a user can simply open another ChatGPT account without it. OpenAI’s parental controls are also optional and carry a similar limitation.
“Trusted Contact is part of OpenAI’s broader efforts to build AI systems that support people in difficult moments,” the company wrote in its announcement post. “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be in distress.”