Friday, March 6, 2026

Anthropic responds after US military labels it a ‘supply chain risk’


US Defense Secretary Pete Hegseth on Friday ordered the Pentagon to designate Anthropic a “supply chain risk,” sending shockwaves through Silicon Valley and leaving many companies struggling to understand whether they can continue to use one of the industry’s most popular artificial intelligence models.

“Effective immediately, no contractor, supplier or partner doing business with the United States Military may conduct any commercial activity with Anthropic,” Hegseth wrote in a social media post.

The designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the U.S. military could use artificial intelligence models developed by the startup. In a blog post this week, Anthropic argued that its contracts with the Pentagon should not allow its technology to be used for mass surveillance of Americans at home or for fully autonomous weapons. The Pentagon asked Anthropic to consent to the U.S. military using artificial intelligence for “all lawful uses,” with no specific exceptions.

The supply chain risk designation allows the Pentagon to limit or exclude certain suppliers from defense contracts if they are deemed to pose security vulnerabilities, such as risks related to foreign ownership, control or influence. It is intended to protect sensitive military systems and data from potential breaches.

Anthropic responded in another blog post on Friday evening, saying that it would “challenge any supply chain risk designation in court” and that such a designation “would set a dangerous precedent for any U.S. company negotiating with the government.”

Anthropic added that it had not received any direct communication from the Department of Defense or the White House regarding negotiations over the use of its artificial intelligence models.

“Secretary Hegseth suggested that this designation would prevent anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to support this statement,” the company wrote.

The Pentagon declined to comment on the matter.

“This is the most shocking, damaging and far-reaching thing I have ever seen the United States government do,” says Dean Ball, a senior fellow at the Foundation for American Innovation and former White House senior adviser on artificial intelligence. “We basically just sanctioned an American company. If you’re an American, you should consider whether you should be living here in 10 years.”

Silicon Valley figures expressed similar shock and dismay on social media. “The people running this administration are impulsive and vindictive. I think that’s enough to explain their behavior,” said Paul Graham, founder of startup accelerator Y Combinator.

Boaz Barak, an OpenAI researcher, said in a post that “the knee-jerk reaction to one of our leading artificial intelligence companies is just about the worst own goal we can do. I really hope cooler heads prevail and this announcement is reversed.”

Meanwhile, OpenAI CEO Sam Altman announced Friday evening that the company had reached an agreement with the Department of Defense to deploy its artificial intelligence models in classified environments, seemingly with carve-outs. “Two of our most important security principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including in the case of autonomous weapons systems,” Altman said. “DoW agrees with these principles, reflects them in law and policy, and we include them in our contract.”

Confused customers

In its Friday blog post, Anthropic said the supply chain risk designation under 10 USC 3252 applies only to DoD contracts directly with suppliers and does not cover how contractors use Claude AI software to serve other customers.

Three federal contracting experts say it is impossible at this stage to determine which, if any, Anthropic customers must now cut ties with the company. Hegseth’s statement “is not grounded in any law that we can currently identify,” says Alex Major, a partner at the law firm McCarter & English, which works with technology companies.
