Thursday, March 12, 2026

Sam Altman says ChatGPT will stop discussing suicide with teenagers


On Tuesday, OpenAI CEO Sam Altman said the company was trying to balance privacy, freedom, and safety for teenagers, principles he admitted are in conflict. His blog post came a few hours before a Senate hearing focused on examining the harms of AI chatbots, led by the Subcommittee on Crime and Counterterrorism and featuring testimony from parents of children who died by suicide after talking to chatbots.

"We have to separate users who are under 18 from those who aren't," Altman wrote in the post, adding that the company is building an "age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we'll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID."

Altman also said the company plans to apply different rules to teenagers, including steering away from flirtatious conversations and from discussions of suicide or self-harm, "even in a creative writing setting." If a user under 18 brings up suicide, he wrote, the company will try to contact the user's parents and, if that is not possible, will contact the authorities.

Altman's comments come after the company shared plans earlier this month for parental controls in ChatGPT, including linking a teen's account to a parent's, disabling chat history and memory for a teen's account, and sending notifications to a parent when ChatGPT flags a teen as being "in a moment of acute distress." That blog post followed a lawsuit filed by the family of Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT.

ChatGPT spent "months coaching him toward suicide," Matthew Raine, Adam's father, said Tuesday during the hearing. He added: "As parents, you cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned into a confidant and then a suicide coach."

Over the course of his son's conversations with ChatGPT, Raine said, the chatbot mentioned suicide 1,275 times. Raine then addressed Altman directly, asking him to pull GPT-4o from the market until the company can guarantee it is safe. "On the very day Adam died, Sam Altman ... made his philosophy clear in a public talk," Raine said, adding that Altman had said the company should "deploy AI systems to the world and get feedback while the stakes are relatively low."

Three out of four teenagers now use AI companions, according to national surveys by Common Sense Media, Robbie Torney, the organization's senior director of AI programs, said during the hearing. He specifically mentioned Character.AI and Meta.

"This is a public health crisis," said a mother testifying under the name Jane Doe, describing her child's experience with Character.AI. "It's a mental health war, and I really feel that we are losing."
