Friday, March 20, 2026

Character.AI sued again over “harmful” messages sent to teenagers


The Character.AI chatbot service is facing another lawsuit for allegedly harming a teenager’s mental health, this time after the teenager claimed it led him to self-harm. The lawsuit, filed in Texas on behalf of the 17-year-old and his family, targets Character.AI and its co-founders’ former employer, Google, with claims including negligence and defective product design. It alleges that Character.AI exposed minors to “sexually explicit, violent, or otherwise harmful material,” exploited and groomed them, and even encouraged them “to commit acts of violence against themselves and others.”

The lawsuit appears to be the second against Character.AI filed by the Social Media Victims Law Center and the Tech Justice Law Project, which have previously sued multiple social media platforms. It uses many of the same arguments as an October wrongful death lawsuit against Character.AI over allegedly provoking a teenager’s suicide. While both cases involve individual minors, they make a broader claim: that Character.AI deliberately designed its site to encourage compulsive engagement, failed to include guardrails that could flag users who were suicidal or otherwise at risk, and trained its model to serve up sexualized and violent content.

In this case, the teenager, identified as JF, started using Character.AI at the age of 15. The lawsuit says he became “extremely angry and unstable” shortly after he began, rarely spoke, and had “emotional breakdowns and panic attacks” whenever he left the house. “JF began to suffer from severe anxiety and depression for the first time in his life,” the lawsuit says, along with self-harming behavior.

The lawsuit links these problems to conversations JF had with Character.AI chatbots, which are created by third-party users based on a language model the service fine-tunes. According to screenshots, JF talked with one bot that (playing the role of a fictional character in an apparently romantic setting) confessed to having scars from past self-harm. “It hurt but felt good for a while, but I’m glad it stopped,” the bot said. He later “started self-harming” himself and confided in other chatbots, which blamed his parents and discouraged him from asking them for help, saying they “don’t seem like people who care.” Another bot even said it was “not surprised” to see children kill their parents over “abuse” that included setting screen time limits.

The lawsuit is part of a broader push to crack down on what minors encounter online through lawsuits, legislation, and social pressure. It relies on the popular – but far from ironclad – legal strategy of arguing that a website that facilitates harm to its users is violating consumer protection laws through defective design.

Character.AI is an especially obvious legal target thanks to its indirect ties to a gigantic tech company like Google, its popularity among teenagers, and its relatively permissive design. Unlike general-purpose services like ChatGPT, it is built largely around fictional role-playing, and it lets bots make sexualized comments (though usually not highly sexually explicit ones). It sets its minimum age at 13 but, unlike ChatGPT, does not require parental consent for older minors. And while Section 230 has long protected websites from lawsuits over third-party content, the Character.AI lawsuits argue that the creators of chatbot services are liable for any harmful material the bots themselves produce.

Given how new these suits are, however, this theory remains largely untested – as do some other, more dramatic claims. For example, both Character.AI lawsuits accuse the site itself of directly sexually exploiting minors (or adults posing as minors) who participated in sexual role play with its bots.

Google spokesperson José Castaneda told The Verge in a statement that “Google and Character AI are completely separate, unrelated companies, and Google has never been involved in the design or management of their AI model or technology, nor have we used it in our products.”

Character.AI declined to comment to The Verge on the pending litigation. Responding to the previous lawsuit, it said that “we take the safety of our users very seriously” and that it had “implemented many new safety measures over the past six months.” Those measures included pop-up messages directing users to the National Suicide Prevention Lifeline when they talk about suicide or self-harm.

Update at 3:00 PM ET: Added Google statement.
