
Sign or Veto: What’s Next for California’s SB 1047 AI Disaster Bill?


California’s controversial bill to prevent AI disasters, SB 1047, has passed its final votes in the state Senate and now heads to Gov. Gavin Newsom’s desk. He must weigh the most extreme theoretical risks of AI systems, including their potential role in human deaths, against the risk of stifling California’s AI boom. He has until Sept. 30 to sign SB 1047 into law or veto it outright.

The bill, introduced by state Senator Scott Wiener, aims to prevent very large AI models from causing catastrophic events, such as loss of life or cyberattacks resulting in more than $500 million in damages.

To be clear, very few AI models today are large enough to be covered by the bill, and AI has never been used in a cyberattack of this scale. But SB 1047 is about the future of AI models, not the problems that exist today.

SB 1047 would hold AI model developers accountable for the harm their models cause, much as gun manufacturers could be held liable for mass shootings, and it would give California’s attorney general the authority to sue AI companies for hefty fines if their technology is used in a disaster. A court could order a company to cease operations if it acts recklessly; covered models would also need a “kill switch” that lets them be shut down if they’re deemed unsafe.

The bill could reshape the U.S. AI industry and is on the verge of becoming law. Here’s what the future of SB 1047 could look like.

Why Newsom might sign it

Wiener argues that Silicon Valley needs more accountability, previously telling TechCrunch that America needs to learn from its past failures to regulate technology. Newsom may be motivated to act decisively on AI regulation and hold Big Tech accountable.

Several AI executives, including Elon Musk, have expressed cautious optimism about SB 1047.

Another cautious optimist about SB 1047 is former Microsoft AI chief Sophia Velastegui. She told TechCrunch that “SB 1047 is a good compromise,” while acknowledging that the bill isn’t perfect. “I think we need an office responsible for AI in America or in any country that’s working on it. It shouldn’t just be Microsoft,” Velastegui said.

Anthropic is another cautious supporter of SB 1047, though the company has not taken an official position on the bill. Several of the startup’s suggested changes have been added to SB 1047, and CEO Dario Amodei now says the bill’s “benefits likely outweigh its costs” in a letter to California’s governor. With Anthropic’s amendments, AI companies can only be sued after their AI models cause catastrophic harm, rather than before, as an earlier version of SB 1047 provided.

Why Newsom might veto it

Given the industry’s vocal opposition to the bill, it wouldn’t be surprising if Newsom vetoed it. By signing it, he would be staking his reputation on SB 1047; by vetoing it, he could kick the issue down the road for another year or leave it to Congress.

“This [SB 1047] changes the precedent that we’ve had in software policy for 30 years,” Andreessen Horowitz general partner Martin Casado told TechCrunch. “It moves the responsibility out of the application and applies it to the infrastructure, which we’ve never done.”

The tech industry has responded with a vocal outcry against SB 1047. In addition to a16z, House Speaker Nancy Pelosi, OpenAI, Big Tech trade groups and prominent AI researchers are urging Newsom not to sign the bill, fearing that this paradigm shift in accountability will have a chilling effect on AI innovation in California.

The last thing anyone wants is a chilling effect on the startup economy. The AI boom has been a huge boost to the U.S. economy, and Newsom faces pressure not to squander it. Even the U.S. Chamber of Commerce has asked Newsom to veto the bill, stating in a letter to him that “Artificial Intelligence is the foundation of American economic growth.”

If SB 1047 becomes law

As a source involved in the drafting of SB 1047 tells TechCrunch, if Newsom signs the bill into law, nothing will happen on day one.

By January 1, 2025, tech companies would be required to produce safety reports for their AI models. At that point, California’s attorney general could seek a court order to stop a company from training or operating its AI models if a court finds them unsafe.

In 2026, the bulk of the law goes into effect. At that point, a Board of Frontier Models would be formed to begin collecting safety reports from tech companies. The nine-member board, chosen by California’s governor and legislature, would make recommendations to the state attorney general about which companies are compliant and which are not.

That same year, SB 1047 would also require AI model developers to hire auditors to assess their safety practices, effectively creating a new industry for AI safety compliance. And California’s attorney general could begin suing AI model developers if their tools are used in catastrophic incidents.

By 2027, the Board of Frontier Models could begin issuing guidelines for AI model developers on how to safely train and operate AI models.

If SB 1047 is vetoed

If Newsom vetoes SB 1047, OpenAI will get its wish, and federal regulators would likely take the lead on regulating AI models… eventually.

OpenAI and Anthropic on Thursday laid the groundwork for what federal AI regulation could look like when they agreed to give the AI Safety Institute, a federal body, early access to their advanced AI models, according to a press release. At the same time, OpenAI backed a bill that would let the AI Safety Institute set standards for AI models.

“For many reasons, we believe it is important for this to happen at a national level,” OpenAI CEO Sam Altman said in a tweet on Thursday.

Reading between the lines, federal agencies tend to create less burdensome technology regulations than California, and they take much longer to do so. But more importantly, Silicon Valley has historically been an important strategic and business partner for the U.S. government.

“There’s a long history of state-of-the-art computing working with the federal government,” Casado said. “When I was at the national labs, every time a new supercomputer came out, the first version went to the government. We did it to give the government a capability, and I think that’s a better reason than security testing.”
