Saturday, April 25, 2026

Anthropic denies that it would sabotage AI tools in war


Anthropic cannot tamper with its Claude generative artificial intelligence model once it has been deployed by the US military, one of the company’s executives wrote in a court filing on Friday. The statement was made in response to accusations by the Trump administration that the company could manipulate its artificial intelligence tools during wartime.

“Anthropic has never been in a position to cause Claude to stop working, alter its functionality, cut off access or otherwise impact or compromise military operations,” wrote Thiyagu Ramasamy, head of public sector at Anthropic. “Anthropic does not have the access required to disable technology or change model behavior before or during ongoing operations.”

The Pentagon has been sparring for months with the leading artificial intelligence lab over how its technology can be used for national security and what limits should be placed on that use. This month, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, a move that will bar the Department of Defense from using the company’s software, including through contractors, in the coming months. Other federal agencies are also dropping Claude.

Anthropic has filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to block it. In the meantime, however, customers have already started canceling their contracts. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. Shortly thereafter, the judge may decide whether to temporarily halt the ban.

In a document filed earlier this week, government lawyers wrote that the Department of Defense “has no obligation to tolerate the risk that critical military systems will be compromised at moments key to national defense and active military operations.”

According to WIRED, the Pentagon uses Claude to analyze data, write memos and help develop battle plans. The government argues that Anthropic could disrupt active military operations by cutting off access to Claude or pushing malicious updates if it objected to certain uses of the technology.

Ramasamy rejected this possibility. “Anthropic does not maintain any backdoors or remote kill switches,” he wrote. “Anthropic personnel cannot, for example, log into DoW systems to modify or disable models during operations; the technology simply does not work that way.”

He went on to say that Anthropic could only push updates with the consent of both the government and the cloud service provider, in this case Amazon Web Services, though he did not name it directly. Ramasamy added that Anthropic does not have access to the prompts or other data that military users enter into Claude.

Anthropic executives maintain in court filings that the company does not want veto power over tactical military decisions. Sarah Heck, Anthropic’s head of policy, wrote in a Friday court filing that the company was willing to guarantee as much in the deal it proposed on March 4. “For the avoidance of doubt, [Anthropic] understands that this license does not confer or imply any right to control or veto the lawful decision-making processes of the War Department,” the proposal said, according to the filing, referring to the Pentagon by its alternative name.

Heck said the company was also willing to accept language addressing its concerns about Claude being used to carry out lethal attacks without human oversight. Ultimately, however, the negotiations broke down.

For now, the Department of Defense said in the lawsuit that it is “taking additional measures to mitigate supply chain risks” posed by the company by “working with third-party cloud service providers to ensure that Anthropic management cannot make unilateral changes” to Claude systems currently in use.
