Monday, March 16, 2026

Are character chatbots protected speech? One court is not sure


A lawsuit against Google and companion chatbot service Character.AI, which is accused of contributing to the death of a teenager, can move forward, a judge in Florida has ruled. In a decision issued today, Judge Anne Conway said that an attempted First Amendment defense wasn't enough to get the suit thrown out. Conway determined that, despite some similarities to video games and other expressive media, she is "not prepared to hold that Character AI's output is speech."

The decision is a relatively early indicator of how courts may treat AI language models. It stems from a lawsuit filed by the family of Sewell Setzer III, a 14-year-old who died by suicide after allegedly becoming obsessed with a chatbot that encouraged his suicidal ideation. Character.AI and Google (which is closely linked to the chatbot service) argued that the case should be dismissed on First Amendment grounds, but Conway was skeptical.

While the companies "rest their conclusion primarily on analogy" to these examples, they "do not meaningfully advance their analogies," the judge said. The court's decision "does not turn on whether Character AI is similar to other media that have received First Amendment protections; rather, the decision turns on how Character AI is similar to the other media" — in other words, whether Character.AI is similar to things like video games because it, too, communicates ideas that would count as speech. Those similarities will be debated as the case proceeds.

Though Google doesn't own Character.AI, it will remain a defendant in the suit thanks to its links to the company and its product. The company's founders, Noam Shazeer and Daniel De Freitas, who are separately named in the suit, worked on the platform as Google employees before leaving to launch it, and were later rehired there. Character.AI is also facing a separate lawsuit alleging it harmed the mental health of another young user, and a handful of state lawmakers have pushed regulations for "companion chatbots" that simulate relationships with users — including one bill, the LEAD Act, that would prohibit their use by children in California. If passed, the rules would likely be fought in court, at least in part based on companion chatbots' First Amendment status.

The outcome of this case will depend largely on whether Character.AI is legally a "product" that is harmfully defective. The ruling notes that "courts generally do not categorize ideas, images, information, words, expressions, or concepts as products," including many conventional video games — it cites, for instance, a ruling that found Mortal Kombat's producers couldn't be held liable for "addicting" players and inspiring them to kill. (The suit against Character.AI likewise accuses the platform of addictive design.) But systems like Character.AI aren't authored as directly as most video game dialogue; instead, they produce automated text that is determined heavily by reacting to and mirroring user input.

"These are genuinely hard problems and new ones that courts are going to have to deal with."

Conway also noted that the plaintiffs faulted Character.AI for failing to confirm users' ages and for not letting users meaningfully "exclude indecent content," among other allegedly defective features that go beyond direct interactions with the chatbots themselves.

Beyond the question of the platform's First Amendment protections, the judge allowed Setzer's family to proceed with claims of deceptive trade practices — including that the company "misled users to believe Character AI characters were real people, some of whom were licensed mental health professionals," and that Setzer was "injured by [Character AI's] anthropomorphic design decisions." (Character.AI bots often describe themselves as real people in text, despite warnings to the contrary in the interface, and therapy-themed bots are common on the platform.)

She also allowed a claim that Character.AI negligently violated a rule meant to prevent adults from communicating sexually with minors online, saying the complaint "highlights several sexual interactions between Sewell and Character AI characters." Character.AI has said it implemented additional safeguards after Setzer's death, including a more heavily guardrailed model for teens.

Becca Branum, deputy director of the Center for Democracy and Technology's Free Expression Project, called the judge's First Amendment analysis "pretty thin" — though, since it's a very preliminary decision, there's plenty of room for future debate. "If we're thinking about the whole realm of things that could be output by AI, those types of chatbot outputs are themselves quite expressive, [and] also reflect the editorial discretion and protected expression of the model designer," Branum told The Verge. But "in everyone's defense, this stuff is really novel," she added. "These are genuinely hard problems and new ones that courts are going to have to deal with."
