Over the past few days, I have been asking AI companies to convince me that AI's safety prospects have not diminished. Just a few years ago, there seemed to be widespread agreement among companies, lawmakers, and the general public that earnest regulation and oversight of artificial intelligence was not only necessary but inevitable. People speculated about international bodies that would set rules to ensure artificial intelligence was taken more seriously than other emerging technologies, and that could at least throw up obstacles to its most hazardous applications. Corporations vowed to prioritize safety over competition and profits. While doomers continued to dream up dystopian scenarios, a global consensus was forming around mitigating the risks of artificial intelligence while reaping its benefits.
Last week's events dealt a blow to those hopes, starting with the bitter dispute between the Pentagon and Anthropic. All parties agree that the existing agreement between them specified – at Anthropic's insistence – that the Department of Defense (now called the Department of War) would not use Anthropic's Claude artificial intelligence models to produce autonomous weapons or to conduct mass surveillance of Americans. Now the Pentagon wants to blur those red lines, and Anthropic's refusal not only ended the contract but also prompted Secretary of Defense Pete Hegseth to declare the company a supply chain risk, a designation that bars government agencies from doing business with Anthropic. Without going into the details of the contract terms or the personal relationship between Hegseth and Anthropic CEO Dario Amodei, the bottom line is that the military is determined to oppose any restrictions on its use of artificial intelligence, at least within the boundaries of legality – by its own definition.
The bigger question seems to be how we got to the point where unleashing killer robotic drones and bombs that identify and eliminate human targets entered the conversation as something even the US military would consider. Have I missed the international debate about the wisdom of creating swarms of deadly autonomous drones to scan war zones, patrol borders, or watch for drug smugglers? Hegseth and his supporters complain about the absurdity of private companies limiting the military's capabilities. I think what's crazier is that a lone company would risk existential sanctions to stop potentially uncontrollable technology. In any case, the lack of international agreements means that any advanced military must use artificial intelligence in all its forms simply to keep up with its adversaries. At this point, an AI arms race seems inevitable.
The threats go far beyond the military. Overshadowed by the Pentagon drama was a disturbing announcement Anthropic published on February 24. The company said it is making changes to its system for mitigating catastrophic risks from artificial intelligence, known as its Responsible Scaling Policy. This was a key founding policy of Anthropic, in which the company promised to tie the release schedule of its AI models to safety procedures. The policy stated that models should not be put on the market without guardrails to prevent worst-case use cases. This provided an internal incentive to ensure that safety was not neglected in the rush to ship advanced technology. More importantly, Anthropic hoped that adopting the policy would inspire – or shame – other companies into doing the same, a process it called a "race to the top." The expectation was that such policies would help shape industry-wide regulations setting limits on the chaos that AI can cause.
Initially, this approach seemed promising. DeepMind and OpenAI adopted aspects of the Anthropic framework. More recently, as investment has poured in, competition among AI labs has intensified, and the prospect of federal regulation has begun to seem more distant, Anthropic has admitted that its responsible scaling policies have fallen short. The thresholds did not produce the consensus on AI risk that the company had hoped for. As it noted in a blog post: "The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety discussions have yet to gain significant attention at the federal level."
Meanwhile, competition among AI companies has become fiercer. Instead of a race to the top, the AI competition looks more like a bare-knuckled game of King of the Hill. When the Pentagon blacklisted Anthropic, OpenAI rushed to fill the void with its own contract with the Department of Defense. OpenAI CEO Sam Altman framed the hasty Pentagon deal as a way to ease the pressure on Anthropic, but Amodei was having none of it. "Sam is trying to undermine our position, even though he gives the impression that he supports it," Amodei said in an internal note. "He is trying to make it easier for the administration to punish us by undercutting our public support." (Amodei later apologized for the tone of the message.)
