The Federal Trade Commission (FTC) is ordering seven AI chatbot companies to provide information on how they assess the effects of virtual companions on children and teenagers.
OpenAI, Meta, its subsidiary Instagram, Snap, xAI, Google parent company Alphabet, and Character.AI received orders to share information about how their AI companions make money, how they plan to maintain their user bases, and how they try to mitigate potential harms to users. The inquiry is part of a study, not an enforcement action, intended to learn more about how technology companies evaluate the safety of their AI chatbots. Within the broader conversation about children's safety online, the risks of AI chatbots have emerged as a particular concern among many parents and policymakers because of the human-like way the bots can communicate with users.
"Despite all their incredible ability to simulate human cognition, these chatbots are products like any other, and those who offer them are obliged to comply with consumer protection laws," FTC Commissioner Mark Meador said in a statement. Chairman Andrew Ferguson emphasized in his own statement the need to "consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry." The Commission's three Republican members voted in favor of issuing the orders, which give the companies 45 days to respond.
The inquiry comes after harrowing reports of teenagers who died by suicide after engaging with these technologies. A 16-year-old in California discussed his suicide plans with ChatGPT, The New York Times reported last month, and the chatbot gave responses that seemed to support him in his death. Last year, the Times also reported on the death of a 14-year-old in Florida who died after engaging with a virtual companion on Character.AI.
Beyond the FTC, legislators are also pursuing new rules to protect children and teenagers from the potentially harmful effects of AI. The California State Assembly recently passed a bill that would impose safety standards on AI chatbots and hold the companies that build them liable.
Although the orders to the seven companies are not tied to an enforcement action, the FTC can open such a probe if it finds cause. "If the facts, as developed through subsequent and appropriately targeted investigations, if warranted, indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us," Meador said.
