Last September, all eyes were on Senate Bill 1047 as it reached the desk of California Governor Gavin Newsom, and died there when he vetoed the buzzy piece of legislation.
SB 1047 would have required makers of all large AI models, particularly those costing $100 million or more to train, to test them for specific dangers. AI safety advocates weren't happy with the veto, but most big tech companies were. The story didn't end there, though. Newsom, who felt the legislation was too stringent and one-size-fits-all, tasked a group of leading AI researchers with helping to propose an alternative plan, one that would support the development and governance of generative AI in California while putting guardrails around its risks.
On Tuesday, that report was published.
The authors of the 52-page "California Report on Frontier AI Policy" said that AI capabilities, including models' "chain-of-thought" reasoning abilities, have "rapidly improved" since Newsom's decision to veto SB 1047. Drawing on historical case studies, empirical research, modeling, and simulations, they proposed a new framework that would require more transparency and independent scrutiny of AI models. The report arrives amid debate over a proposed 10-year moratorium on state-level AI regulation, backed by Republicans in Congress and companies such as OpenAI.
The report, co-led by Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society, concluded that frontier AI breakthroughs in California could heavily impact agriculture, biotechnology, clean tech, education, finance, medicine, and transportation. Its authors agreed that it's important not to stifle innovation and to ensure "regulatory burdens are such that organizations have the resources to comply."
"Without proper safeguards … powerful AI could induce severe and, in some cases, potentially irreversible harms"
But reducing risks is still paramount, they wrote: "Without proper safeguards … powerful AI could induce severe and, in some cases, potentially irreversible harms."
The group published a draft version of their report in March for public comment. Since then, they wrote in the final version, evidence that these models contribute to "chemical, biological, radiological, and nuclear (CBRN) weapons risks … has grown." Leading companies, they added, have self-reported concerning jumps in their models' capabilities in those areas.
The authors made several changes to the draft. They now note that California's new AI policy will need to navigate quickly changing "geopolitical realities." They added more context about the risks of large AI models, and they took a harder line on how companies should be categorized for regulation, saying that focusing solely on how much compute their training required is not the best approach.
AI's training needs are changing all the time, the authors wrote, and a compute-based definition ignores how these models are adopted in real-world use cases. Compute can serve as "an initial filter to cheaply screen for entities that may warrant greater scrutiny," but factors like initial risk evaluations and downstream impact assessments are key.
That's especially important because the AI industry remains the Wild West when it comes to transparency, with little agreement on best practices and "systemic opacity in key areas" like how data is acquired, safety and security processes, pre-release testing, and potential downstream impact, the authors wrote.
The report calls for whistleblower protections, third-party evaluations with "safe harbor" for the researchers conducting them, and information sharing directly with the public, to enable transparency that goes beyond what today's leading AI companies choose to disclose.
Scott Singer, one of the report's lead writers, told The Verge that AI policy conversations have "completely shifted on the federal level" since the draft report. He argued that California could nevertheless help lead a "harmonization effort" among states around "commonsense policies that many people across the country support." That's a contrast to the patchwork of conflicting state laws that supporters of the moratorium claim state regulation would create.
In an op-ed earlier this month, Anthropic CEO Dario Amodei called for a federal transparency standard requiring leading AI companies to "publicly disclose on their company websites … how they plan to test for and mitigate national security and other catastrophic risks."
"Developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms"
But even steps like those aren't enough, the authors of Tuesday's report wrote, because "for a nascent and complex technology being developed and adopted at a remarkably rapid pace, developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms."
That's why one of the report's key tenets is the need for third-party risk assessment.
The authors concluded that risk assessments would push companies like OpenAI, Anthropic, Google, Microsoft, and others to strengthen model safety, while helping to paint a clearer picture of their models' risks. Currently, leading AI companies typically do their own evaluations or hire second-party contractors to do so. But third-party evaluation is necessary, the authors say.
Not only are "thousands of individuals … willing to engage in risk evaluation work, dwarfing the scale of internal or contracted teams," but groups of third-party evaluators also bring "unparalleled diversity, especially when developers primarily reflect certain demographics and geographies that are often very different from those most adversely impacted by AI."
But if you're letting third-party evaluators probe the risks and blind spots of your powerful AI models, you have to give them access, and for meaningful assessments, a lot of access. That's something companies are hesitant to do.
Even second-party evaluators struggle to get that level of access. METR, a company OpenAI partners with for safety testing of its own models, wrote in a blog post that it wasn't given as much time to test OpenAI's o3 model as it had been with past models, and that OpenAI didn't give it sufficient access to data or to the models' internal reasoning. Those limitations, METR wrote, "prevent us from making robust capability assessments." OpenAI later said it was exploring ways to share more data with companies like METR.
Even API access or disclosure of a model's weights may not let third-party evaluators effectively test for risks, the report noted, and companies could use "suppressive" terms of service to ban or threaten legal action against independent researchers who uncover safety flaws.
Last March, more than 350 AI industry researchers and others signed an open letter calling for a "safe harbor" for independent AI safety testing, similar to the protections that already exist for third-party cybersecurity testers in other fields. Tuesday's report cites that letter and calls for major changes along those lines, as well as reporting options for people harmed by AI systems.
"Even perfectly designed safety policies cannot prevent 100% of substantial, adverse outcomes," the authors wrote. "As foundation models are widely adopted, understanding harms that arise in practice is increasingly important."
