Thursday, April 23, 2026

OpenAI releases open source tools to help developers build with teen safety in mind


OpenAI said Tuesday that it is releasing a set of guidelines that developers can use to make their apps safer for teenagers. The AI lab said the teen safety policies can be used with its open-weight safety model, gpt-oss-safeguard.

Instead of figuring out from scratch how to make AI safer for teenagers, developers can use these policies to improve what they build. They cover issues such as graphic violence and sexual content, harmful body ideals and behaviors, dangerous activities and challenges, romantic or violent role play, and age-restricted goods and services.

These safety policies are written as prompts, making them compatible with models beyond gpt-oss-safeguard, although they are likely most effective within OpenAI’s own ecosystem.
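Because the policies are plain prompt text, a developer could apply one by prepending it as a system message to an ordinary chat request. The sketch below illustrates the general pattern; the policy wording and the model name are placeholders I am assuming for illustration, not OpenAI's actual published policies.

```python
# Minimal sketch: wiring a policy-as-prompt into a chat-style request.
# The policy text below is an illustrative placeholder, NOT one of
# OpenAI's published teen-safety policies.

TEEN_SAFETY_POLICY = (
    "You are a content-safety classifier. Label the user message ALLOW "
    "or BLOCK under these rules: block graphic violence, harmful body "
    "ideals, dangerous challenges, and age-restricted goods or services."
)

def build_messages(policy: str, user_content: str) -> list[dict]:
    """Prepend the safety policy as a system message so any
    chat-completion-style model can be asked to apply it."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": user_content},
    ]

messages = build_messages(TEEN_SAFETY_POLICY, "Describe a viral choking challenge.")
# `messages` can now be sent to a chat-completions endpoint, e.g.
# client.chat.completions.create(model="gpt-oss-safeguard-20b", messages=messages)
# (model name assumed for illustration)
```

The same payload shape works with any model that accepts system/user message pairs, which is what makes the prompt format portable beyond gpt-oss-safeguard.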

To write these prompts, OpenAI said it worked with the AI safety watchdogs Common Sense Media and everyone.ai.

“These principles-based prompt policies help establish a meaningful minimum level of safety across the ecosystem, and because they are open source, they can be adapted and improved over time,” Robbie Torney, head of AI & Digital Assessments at Common Sense Media, said in a statement.

OpenAI noted in its blog post that developers, including experienced teams, often struggle to translate safety goals into precise operational policies.

“This can lead to gaps in protection, inconsistent enforcement, or overly broad filtering,” the company wrote. “Clear rules with appropriate scope are a key foundation for effective safety systems.”


OpenAI acknowledges that these policies are not a solution to every AI safety challenge. However, they build on its previous efforts, including product-level protections such as parental controls and age prediction. Last year, OpenAI updated the guidelines for its large language models, known as the Model Spec, to address how AI models should behave for users under 18.

OpenAI’s own record, however, is not spotless. The company is facing several lawsuits filed by the families of people who died by suicide after extensive use of ChatGPT. These dangerous interactions often arise when a user circumvents a chatbot’s safeguards, and no model’s guardrails are completely impenetrable. Still, these policies are at least a step forward, especially since they can help third-party developers.
