Senate Bill 53, a landmark AI transparency bill that divided the AI industry and made headlines, is now officially law in California.
On Monday, California Governor Gavin Newsom signed the “Transparency in Frontier Artificial Intelligence Act,” authored by Senator Scott Wiener (D-CA). This is Wiener’s second attempt at such a bill: Newsom vetoed the first version, SB 1047, last year over concerns that it was too strict and might stifle innovation in the state. That bill would have required all AI developers, particularly the makers of models with training costs of $100 million or more, to test for specific risks. After the veto, Newsom directed a group of AI researchers to develop an alternative, which was published as a 52-page report and became the basis for SB 53.
SB 53 adopts some of the researchers’ recommendations, such as requiring large AI developers to disclose their safety and security processes, establishing whistleblower protections for employees at AI companies, and sharing information directly with the public for transparency. But some aspects of the report, such as third-party evaluations, were not included.
Under the law, large AI developers will have to publicly publish a framework on “[their] website describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework,” according to the release. Any large AI developer that updates its safety and security protocol will also have to publish the update, along with its reasoning, within 30 days. It is worth noting, though, that this part is not necessarily a win for whistleblowers and regulation advocates: many of the AI companies lobbying against regulation favor voluntary frameworks and best practices, which can be treated as guidelines rather than rules, with few, if any, penalties attached.
The bill creates a new way for both AI companies and members of the public to “report potential critical safety incidents to California’s Office of Emergency Services,” according to the release, and “protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.” The release also stated that the California Department of Technology will recommend updates to the law each year “based on multistakeholder input, technological developments, and international standards.”
AI companies were divided over SB 53, though most initially came out publicly or privately against the bill, arguing it would drive companies out of California. They knew the stakes: with California’s nearly 40 million residents and its handful of AI hubs, the state wields outsized influence over the AI industry and how it is regulated.
Anthropic publicly endorsed SB 53 after weeks of negotiations over the bill’s wording, but Meta launched a state-level super PAC in August to help shape AI policy in California. And OpenAI lobbied against such regulations that same month, with its chief global affairs officer, Chris Lehane, writing to Newsom that “California’s leadership in technology regulation is most effective when it complements effective global and federal safety ecosystems.”
Lehane suggested that AI companies should be able to satisfy California’s requirements by signing onto federal or global agreements instead, writing: “To make California a leader in global, national, and state AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the [EU Code of Practice] or enter into a safety-oriented agreement with the relevant US federal government entity.”
