Saturday, March 7, 2026

AI safety meets the war machine


When Anthropic last year became the first major artificial intelligence company cleared by the U.S. government for classified work, including military applications, the news didn't make much of a splash. But this week the other cannonball dropped: The Pentagon is rethinking its relationship with the company, including a $200 million contract, reportedly because the safety-conscious AI firm objects to participating in certain lethal operations. The so-called War Department could even designate Anthropic a “supply chain risk,” a scarlet letter usually reserved for companies doing business with adversaries such as China, meaning the Pentagon would not work with contractors that use Anthropic’s AI in their defense efforts. In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic is in the crosshairs. “Our nation demands that our partners be ready to help our soldiers win in every fight. Ultimately, this is about our soldiers and the safety of the American people,” he said. That’s also a message to the other companies (OpenAI, xAI, and Google) that currently hold unclassified contracts with the Department of Defense and are jumping through hoops to win their own high-level clearances.

There’s a lot to unpack here. First is the question of whether Anthropic is being punished for complaining that its Claude AI model was used in the raid to remove Venezuelan President Nicolás Maduro (that is what has been reported; the company denies it). There is also the fact that Anthropic publicly supports AI regulation, an outlier stance in the industry and one contrary to the administration’s policy. But a bigger, more troubling question is at play: Will government demands for military applications make AI itself less safe?

Scientists and executives believe that artificial intelligence is the most powerful technology ever invented. Virtually all of today’s AI companies were founded on the premise that it is possible to reach AGI, or superintelligence, in a way that prevents widespread harm. Elon Musk, the founder of xAI, was once among the loudest voices warning of AI’s dangers; he co-founded OpenAI because he was concerned the technology was too unsafe to be left in the hands of for-profit companies.

Anthropic has staked out the most safety-conscious position of all. The company’s mission is to build guardrails so deeply into its models that bad actors can’t exploit AI’s darkest potential. Isaac Asimov said it first and best in his laws of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Even if AI becomes smarter than any human on Earth, an eventuality that AI leaders fervently believe in, those guardrails must hold.

So it seems contradictory that the leading AI labs are eager to put their products to work in cutting-edge military and intelligence operations. As the first major lab with a classified contract, Anthropic provides the government with “a custom set of Claude Gov models built exclusively for U.S. national security customers.” Even so, Anthropic has said it did this without violating its own safety standards, which include a prohibition on using Claude to produce or design weapons. Anthropic CEO Dario Amodei has said specifically that he doesn’t want Claude involved in autonomous weapons or government AI surveillance. That may not fly with the current administration. The Pentagon’s technology chief, Emil Michael (formerly Uber’s chief business officer), told reporters this week that the government won’t tolerate an AI company restricting how the military uses AI in its weapons. “If a swarm of drones is coming at a military base, what options do you have to destroy it? If human reaction times aren’t fast enough… how will you do it?” he asked rhetorically. So much for the first law of robotics.

A good argument can be made that effective national security requires the best technology from the most groundbreaking companies. While a few years ago some tech companies were hesitant to work with the Pentagon, in 2026 they are mostly flag-waving aspiring military contractors. I haven’t heard any AI executive talk about tying their models to lethal force, but Palantir CEO Alex Karp isn’t ashamed to say it, with noticeable pride: “Our product is sometimes used to kill people.”
