Saturday, March 7, 2026

OpenAI rolls out a safety routing system and parental controls in ChatGPT


OpenAI began testing a new safety routing system in ChatGPT over the weekend, and on Monday introduced parental controls for the chatbot, drawing mixed reactions from users.

The safety features are a response to a number of incidents in which certain ChatGPT models validated users' delusional thinking instead of redirecting harmful conversations. OpenAI is facing a wrongful death lawsuit related to one such incident, after a teenage boy died by suicide following months of interaction with ChatGPT.

The routing system is designed to detect emotionally sensitive conversations and automatically switch mid-conversation to GPT-5 thinking, which the company considers its best-equipped model for this kind of work. Notably, the GPT-5 models were trained with a new safety feature OpenAI calls "safe completions," which allows them to answer sensitive questions in a safe way rather than simply refusing to engage.

This is a contrast to the company's previous chat models, which were designed to be agreeable and answer questions quickly. GPT-4o came under particular scrutiny for its overly sycophantic nature, which both fueled incidents of AI-induced delusion and attracted a massive base of devoted users. When OpenAI launched GPT-5 as the default in August, many users pushed back and demanded access to GPT-4o.

While many experts and users welcomed the safety features, others criticized what they see as an overly cautious implementation, with some users accusing OpenAI of treating adults like children in a way that degrades the quality of the service. OpenAI has suggested that getting the transition right will take time, and has given itself a 120-day period to iterate and improve.

Nick Turley, VP and head of the ChatGPT app, acknowledged some of the "strong reactions to 4o responses" prompted by the router's rollout, offering an explanation.

"Routing happens on a per-message basis; switching from the default model is temporary," Turley posted on X. "ChatGPT will tell you which model is active when asked. This is part of a broader effort to strengthen safeguards and learn from real-world use ahead of a wider rollout."


The rollout of parental controls in ChatGPT received similar levels of praise and contempt, with some praising it for giving parents a way to oversee their children's use of artificial intelligence, while others fear it opens the door to OpenAI treating adults like children.

The controls allow parents to customize their teen's experience by setting quiet hours, turning off voice mode and memory, removing image generation, and opting out of model training. Teen accounts will also receive additional content protections, such as reduced graphic content and fewer extreme beauty ideals, along with a detection system that recognizes potential signs that a teen may be considering self-harm.

"If our systems detect potential harm, a small team of specially trained people reviews the situation," according to OpenAI's blog. "If there are signs of acute distress, we will contact parents by email, text message, and push alert on their phone, unless they have opted out."

OpenAI acknowledged that the system will not be perfect and may sometimes raise alarms when there is no real danger, "but we think it is better to act and alert a parent so they can step in than to stay silent." The company said it is also working on ways to reach law enforcement or emergency services if it detects an imminent threat to life and cannot reach a parent.
