Saturday, April 25, 2026

Anthropic supply chain risk label should remain in place, appeals court says


Anthropic did not meet the “stringent requirements” to temporarily lift its supply chain risk designation imposed by the Pentagon, a U.S. appeals court in Washington ruled Wednesday. The decision conflicts with a ruling issued last month by a lower court judge in San Francisco, and it was not immediately clear how the conflicting preliminary rulings would be resolved.

The government sanctioned Anthropic under two different supply chain laws with similar effects, and courts in San Francisco and Washington issued rulings on only one of them. Anthropic said it was the first U.S. company to be designated under two laws that are typically used to punish foreign companies that pose a threat to national security.

“Granting a suspension would force the United States military to extend its relationship with an unwanted provider of critical artificial intelligence services in the midst of a significant, ongoing armed conflict,” the three-judge appeals panel wrote Wednesday in a case it described as unprecedented. The panel found that while Anthropic may suffer financial losses from the ongoing designation, it did not want to risk “substantial judicial imposition on military operations” or “lightly overrule” military judgments relating to national security.

A San Francisco judge found that the Defense Department likely acted in bad faith against Anthropic, motivated by frustration with the AI company’s proposed restrictions on the use of its technology and its public criticism of those restrictions. Last week, that judge ordered the removal of the supply chain risk label, and the Trump administration complied, restoring access to Anthropic AI tools at the Pentagon and the rest of the federal government.

Anthropic spokeswoman Danielle Cohen said the company is grateful that the Washington court “recognized that these issues need to be resolved quickly” and is confident that “the courts will ultimately agree that these supply chain labels were unlawful.”

The Department of Defense did not immediately respond to a request for comment.

These cases test how much power the executive branch holds over the conduct of technology companies. The battle between Anthropic and the Trump administration is also unfolding as the Pentagon deploys artificial intelligence in its war against Iran. The company argued that it was wrongfully penalized for maintaining that its Claude AI tool lacked the accuracy needed for certain sensitive operations, such as carrying out deadly drone strikes without human supervision.

Several experts in government contracting and corporate rights told WIRED that Anthropic has a strong case against the government, but that courts are sometimes reluctant to overturn White House decisions on national security matters. Some artificial intelligence researchers say the Pentagon’s actions against Anthropic are “chilling professional debate” about the performance of artificial intelligence systems.

Anthropic argued in court that it lost business because of the designation, which government lawyers say bars the Pentagon and its contractors from using Claude on military projects. As long as Trump remains in office, Anthropic may be unable to regain its standing with the federal government.

Final decisions in the company’s two lawsuits could come within a few months. Oral argument in the Washington case is scheduled for May 19.

Court filings have so far revealed minimal details about how exactly the Defense Department has used Claude, or what progress it has made in transitioning personnel to other AI tools from Google DeepMind, OpenAI and others. The military, which is called the War Department under President Donald Trump, said it has taken steps to ensure Anthropic cannot sabotage artificial intelligence tools during the transition.
