Thursday, April 23, 2026

OpenAI releases new safety plan to address rise in child sexual abuse


In response to growing concerns about children’s safety on the Internet, OpenAI has unveiled a plan to boost efforts to protect U.S. children amid the artificial intelligence boom. The Child Safety Plan, which was published on Tuesday, aims to help detect, better report, and more effectively investigate cases of child abuse involving artificial intelligence.

The overall goal of the Child Safety Plan is to address the alarming rise in child sexual abuse cases driven by advances in artificial intelligence. The Internet Watch Foundation (IWF) detected over 8,000 reports of AI-generated child sexual abuse content in the first half of 2025, a rise of 14% compared to the previous year. This includes criminals using artificial intelligence tools to generate fake images of children for sexual exploitation and to produce convincing grooming messages.

OpenAI’s plan also comes amid increased scrutiny from policymakers, educators, and child safety advocates, especially in light of disturbing incidents in which teenagers died by suicide after allegedly interacting with AI chatbots.

Last November, the Social Media Victims Law Center and the Tech Justice Law Project filed seven lawsuits in California state courts alleging that OpenAI released GPT-4o before it was ready. The lawsuits claim that the psychologically manipulative nature of the product contributed to wrongful deaths by suicide and assisted suicide. They cite four people who died by suicide and three others who experienced severe, life-threatening delusions after prolonged interactions with the chatbot.

This plan was developed in partnership with the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, and with input from North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown.

The company says the plan focuses on three aspects: updating regulations to address AI-generated abuse material, improving reporting mechanisms to law enforcement, and integrating preventive safeguards directly into AI systems. In this way, OpenAI aims not only to detect potential threats earlier, but also to ensure that useful information reaches investigators quickly.

OpenAI’s new child safety plan builds on previous initiatives, including updated guidelines for interacting with users under 18, which prohibit generating inappropriate content or encouraging self-harm, and avoid advice that would help teenagers hide dangerous behavior from their caregivers. The company recently released a safety plan for teenagers in India.
