OpenAI’s response to a lawsuit from the family of Adam Raine, a 16-year-old who took his own life after months of discussions with ChatGPT, said the injuries sustained in this “tragic event” were the result of Raine’s “misuse, unauthorized use, unintended, unforeseeable and/or inappropriate use of ChatGPT.” NBC News reports that the filing cited terms of service that prohibit teens from accessing ChatGPT without the consent of a parent or guardian, circumventing protective measures, or using ChatGPT for suicide or self-harm, and argued that the family’s claims are barred under Section 230 of the Communications Decency Act.
In a blog post published on Tuesday, OpenAI said: “We will present our case respectfully, in a manner that is mindful of the complexities and nuances of situations involving real people and real lives… Because we are a defendant in this case, we are committed to responding to the specific and serious allegations contained in the lawsuit.” It said the family’s original complaint contained excerpts from conversations that “require more context,” which the family turned over to the court under seal.
NBC News and Bloomberg report that OpenAI’s filing says the chatbot’s responses prompted Raine to seek aid from sources such as suicide hotlines more than 100 times, claiming that “a full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT.” The family’s lawsuit, filed in August in California Superior Court, said the tragedy was the result of “conscious design choices” made by OpenAI in bringing GPT-4o to market, which also helped boost its valuation from $86 billion to $300 billion. In testimony to a Senate committee in September, Raine’s father said that what began as a homework helper “gradually evolved into a confidant and then a suicide coach.”
According to the lawsuit, ChatGPT provided Raine with “technical specifications” for various methods, insisted that he keep his ideas secret from his family, offered to write the first draft of his suicide note, and walked him through the setup process on the day of his death. The day after the lawsuit was filed, OpenAI said it would introduce parental controls, and it has since implemented additional safeguards to “help people, especially teenagers, when conversations become sensitive.”
