Saturday, April 25, 2026

OpenAI backs legislation that would limit AI labs' liability for mass deaths or financial disasters caused by artificial intelligence


OpenAI is throwing its support behind an Illinois bill that would shield artificial intelligence labs from liability in cases where AI models are used to cause catastrophic harm, such as the death or serious injury of at least 100 people or property damage of at least $1 billion.

The effort appears to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has mostly played defense, opposing bills that could hold AI labs liable for harm caused by their technology. Several AI policy experts told WIRED that SB 3444, which could set a new standard for the industry, is a more aggressive measure than the bills OpenAI has supported in the past.

The bill would shield frontier AI developers from liability for "critical harm" caused by their frontier models, so long as they did not intentionally or recklessly cause the incident and have published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained at a computational cost of more than $100 million, a definition that would likely cover major U.S. AI labs such as OpenAI, Google, xAI, Anthropic, and Meta.

“We support such approaches because they focus on what matters most: reducing the risk of serious harm from the most advanced artificial intelligence systems, while still enabling this technology to get into the hands of people and businesses – small and large – in Illinois,” OpenAI spokesman Jamie Radice said in an emailed statement. “They also help avoid a patchwork of state-by-state regulations and move toward clearer, more consistent national standards.”

Under its definition of critical harm, the bill lists several of the AI industry's common areas of concern, such as a bad actor using AI to create chemical, biological, radiological, or nuclear weapons. An AI model that itself takes actions which, if committed by a human, would constitute a crime and lead to such extreme consequences would also count as a critical harm. If an AI model commits any of these acts, then under SB 3444 the lab behind it could not be held liable so long as it did not act intentionally or recklessly and has published the required reports.

Federal and state lawmakers in the U.S. have not yet passed any laws spelling out whether AI developers such as OpenAI can be held liable for this kind of harm caused by their technology. But as AI labs continue to release increasingly powerful models that raise new safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly pressing.

In her testimony in support of SB 3444, Caitlin Niedermeyer, a member of OpenAI's global affairs team, also advocated for a federal framework for regulating artificial intelligence. Niedermeyer echoed the Trump administration's attacks on state AI safety regulations, saying it was essential to avoid "a patchwork of inconsistent state requirements that could create friction without significant improvements in security." That stance is also consistent with Silicon Valley's broader view in recent years that AI legislation must not hinder America's position in the global AI race. While SB 3444 is itself a state safety law, Niedermeyer argued that such laws can be effective if they "strengthen the path to harmonization with federal systems."

“At OpenAI, we believe that the lodestar for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves the United States’ leadership in innovation,” Niedermeyer said.

Scott Wisor, policy director of the Secure AI Project, tells WIRED that he thinks the bill’s chances of passing are slim given Illinois’ reputation for aggressively regulating technology. “We surveyed Illinoisans asking whether they thought AI companies should be exempt from liability, and 90 percent of people opposed it. There is no reason why existing AI companies should face reduced liability,” Wisor says.
