Florida Attorney General James Uthmeier announced on Thursday that his office will investigate OpenAI for alleged harm to minors, a potential threat to national security, and a possible link to a shooting that took place last year at Florida State University.
“ChatGPT may have likely been used to assist a murderer in the recent mass school shooting at Florida State University that left two people dead,” Attorney General Uthmeier said in a video posted on social media.
On the day of the FSU shooting last April, the suspect allegedly asked ChatGPT how the country would react to the shooting and what time of day the FSU student union would be busiest. The messages could potentially be used as evidence against the suspect in the shooting trial scheduled for October.
The attorney general raised further concerns about ChatGPT encouraging suicide in some cases, as documented in multiple lawsuits brought by families against OpenAI. He also voiced concern that the Chinese Communist Party could turn OpenAI's technology against the United States.
“As big tech deploys these technologies, they should not – cannot – compromise our safety and security,” he said. “We support innovation. But that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies or threaten our national security.”
He also called on the Florida Legislature to “work quickly” to protect children from the negative effects of artificial intelligence.
“Every week, more than 900 million people use ChatGPT to improve their everyday lives through applications such as learning new skills or navigating complex healthcare systems,” an OpenAI spokesperson said in a statement to TechCrunch. “Our ongoing work on safety continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery.”
OpenAI added that it is building and continually refining ChatGPT to understand user intent and respond in an appropriate, secure manner. The company said it would cooperate with the Florida attorney general's investigation.
On Wednesday, OpenAI unveiled its Child Safety Plan, which includes policy recommendations to improve children’s safety when it comes to artificial intelligence.
The move comes as chatbot developers face pressure to confront their potential role in the creation of child sexual abuse material (CSAM). According to the latest report from the Internet Watch Foundation, over 8,000 AI-generated CSAM reports were filed in the first half of 2025, a 14% year-over-year increase.
The OpenAI plan recommends updating regulations to protect against AI-generated abusive material, improving the reporting process to law enforcement, and introducing better safeguards to prevent misuse of AI tools.
