OpenAI announced updated teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that determines whether a user is under 18 and routes them to an "age-appropriate" system that blocks graphic sexual content. If the system detects that a user is considering suicide or self-harm, it will contact the user's parents. In cases of imminent danger, if the user's parents are unreachable, the system may contact the authorities.
In a blog post, CEO Sam Altman wrote that the company is trying to balance freedom, privacy, and safety for teenagers.
"We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict," Altman wrote. "These are difficult decisions, but after talking with experts, this is what we think is best, and we want to be transparent about our intentions."
While OpenAI tends to prioritize privacy and freedom for adult users, for teenagers the company says it puts safety first. By the end of September, the company will roll out parental controls so parents can link their child's account to their own, allowing them to manage conversations and disable features. Parents can also receive notifications when "the system detects their teen is in a moment of acute distress," according to the company's blog post, and set limits on the times of day their children can use ChatGPT.
The moves come as disturbing headlines continue to surface about people dying by suicide or committing violence against family members after prolonged conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI companies to provide information on how their technologies affect children, according to Bloomberg.
At the same time, OpenAI is still subject to a court order requiring it to preserve consumer chats indefinitely, a fact the company is deeply unhappy about, according to sources I've spoken to. Today's news is both a meaningful step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should be breached only in the most extreme circumstances.
According to the sources I've spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but one that can quickly veer into being catastrophically sycophantic. It's encouraging that companies like OpenAI are taking steps to protect minors. At the same time, absent federal regulation, nothing forces these companies to do so.
In a recent interview, Tucker Carlson pressed Altman to answer exactly who makes these decisions that affect the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. "I think the person who should be held accountable for those calls is me," Altman added. "I'm, like, a public face. Eventually, I'm the one who can overrule one of those decisions, or our board."
