Tuesday, March 10, 2026

OpenAI and Google Employees File Amicus Brief in Support of Anthropic Against US Government

Over 30 OpenAI and Google employees, including Google DeepMind Chief Scientist Jeff Dean, submitted an amicus brief on Monday supporting Anthropic in its legal battle against the U.S. government.

“If allowed to proceed, the attempt to punish one of America’s leading artificial intelligence companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in artificial intelligence and beyond,” the staff wrote.

The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, took effect after Anthropic’s negotiations with the Pentagon broke down. The AI startup is seeking a temporary restraining order so it can continue working with its military partners as the lawsuit progresses, and the amicus brief specifically supports that motion.

Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal documents submitted by parties who are not directly involved in a case but have relevant expertise in the matter. The brief states that the employees signed in their personal capacity and do not represent the views of their companies.

OpenAI and Google did not immediately respond to WIRED’s request for comment.

The amicus brief says the Pentagon’s decision to blacklist Anthropic “introduces unpredictability into [the] industry that undermines American innovation and competitiveness” and “stifles professional debate about the benefits and risks of pioneering artificial intelligence systems.” It notes that the Pentagon could have simply terminated its contract with Anthropic if it no longer wanted to be bound by its terms.

The brief also argues that the red lines Anthropic demanded, including a ban on using artificial intelligence for mass domestic surveillance and for developing autonomous lethal weapons, reflect legitimate concerns and warrant adequate safeguards. “In the absence of public law, the contractual and technological requirements that AI developers place on the use of their systems provide important safeguards against their catastrophic misuse,” the document says.

Several other AI leaders have also publicly questioned the Pentagon’s decision to designate Anthropic a supply chain risk. OpenAI CEO Sam Altman said in a post on social media that enforcing the SCR [supply-chain risk] designation against Anthropic “would be very bad for our industry and our country.” He added that “this is a very bad decision by DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon deteriorated, OpenAI quickly signed its own contract with the U.S. military, a move some criticized as opportunistic.
