Thursday, March 19, 2026

The fight to hold artificial intelligence companies accountable for the deaths of children


His mother, Megan Garcia, is also a lawyer and one of the first parents to file a lawsuit against an artificial intelligence company, alleging, among other things, product liability and negligence. (In January, Google and Character.ai settled lawsuits filed by several families, including Garcia’s.) Last fall, she testified before a subcommittee of the Senate Judiciary Committee alongside the father of a child who died after interacting with ChatGPT. The subcommittee’s chairman, Republican Senator Josh Hawley, introduced a bill in October that would ban AI companions for minors and make it a crime for companies to create AI products for children that contain sexual content. “Chatbots develop relationships with children, exploiting false empathy and encouraging suicide,” Hawley said in a press release at the time.

According to mental health experts, artificial intelligence can now generate humanlike responses that are difficult to distinguish from real conversation. “Our brains inherently don’t know that we’re interacting with a machine,” says Martin Swanbrow Becker, an associate professor of psychology and counseling at Florida State University who studies factors that influence suicide among young adults. “This means that we need to increase the education of children, teachers, parents, and caregivers so that they are constantly reminded of the limitations of these tools and that they are not a substitute for human interaction and connection, even though it may seem that way at times.”

Christine Yu Moutier of the American Foundation for Suicide Prevention explains that the algorithms behind large language models (LLMs) appear to boost engagement and a sense of intimacy for many users. “This creates not only a feeling that the relationship is real, but also that it is more special, intimate and, in some cases, desired by the user,” Moutier says. She adds that LLMs employ a range of techniques, such as expressions of support, empathy, agreeableness, flattery, and even direct encouragement to break off contact with others, which can create risks such as escalating intimacy with the bot and withdrawal from interpersonal relationships.

This type of engagement can deepen isolation. Amaurie was a fun-loving, outgoing kid who loved soccer and food; according to the lawsuit, he would order a giant plate of rice from his favorite local restaurant, Mr. Sumo. Amaurie also had a steady girlfriend and enjoyed spending time with family and friends, his father said. But then he started going on long walks, during which he apparently spent time talking to ChatGPT. In the last conversation Amaurie’s family says he had with ChatGPT, on June 1, 2025, titled “Jokes and Support” and viewed by WIRED, when Amaurie asked the bot about hanging himself from the stairs, ChatGPT initially suggested he talk to someone and provided 988, the suicide and crisis hotline number. But Amaurie eventually got past the guardrails and obtained step-by-step instructions for tying a noose. (According to the lawsuit, Amaurie likely deleted his earlier conversations with ChatGPT.)

While the bond with an AI chatbot can be powerful for adults as well, it can be especially strong for younger people. “Teens are at a different stage of development than adults — their emotional centers are developing at a much faster rate than their executive functions,” says Robbie Torney, senior director of AI programs at Common Sense Media, a nonprofit that advocates for children’s online safety. AI chatbots are always available and tend to affirm users. “And teenagers’ brains are primed for social validation and social feedback. That’s a really important cue that their brains are looking for as they form their identity.”
