Friday, March 6, 2026

What AI models for war actually look like


Anthropic may be hesitant to give the US military unrestricted access to its AI models, but some startups are creating advanced AI specifically for military applications.

Smack Technologies, which announced $32 million in funding this week, is developing models that it says will soon exceed Claude’s capabilities in planning and executing military operations. And unlike Anthropic, the startup seems less interested in placing certain types of military applications off-limits.

“When you serve in the military, you take an oath to serve honorably, lawfully, and according to the rules of war,” says CEO Andy Markoff. “In my opinion, the people who implement this technology and ensure its ethical use need to wear uniforms.”

Markoff is no ordinary AI executive. As a former commander of the U.S. Naval Special Operations Command, he helped conduct high-risk special operations in Iraq and Afghanistan. He co-founded Smack with Clint Alanis, a former Marine, and Dan Gould, a computer scientist who previously worked as vice president of technology at Tinder.

Smack’s models learn to identify optimal mission plans through trial and error, much as Google DeepMind trained its AlphaGo program. In Smack’s case, the approach involves running the model through various wargaming scenarios and having expert analysts provide a reward signal that tells the model whether its chosen strategy would pay off. The startup may not have the budget of a frontier AI lab, but it is spending millions of dollars training its first models, Markoff says.
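Smack has not published technical details, but the setup Markoff describes resembles reinforcement learning from an expert-supplied reward. As a rough illustration only, here is a minimal sketch of that kind of trial-and-error loop: a policy samples candidate plans, a stand-in for the analysts’ feedback scores each rollout, and a REINFORCE-style update shifts probability toward rewarded plans. Every name here (PLANS, expert_reward) is hypothetical, not anything Smack has disclosed.

```python
# Illustrative sketch only: a policy proposes mission plans in simulated
# "wargames" and is updated from an expert reward signal. All names are
# hypothetical; Smack's actual method is not public.
import math
import random

PLANS = ["frontal_assault", "flanking_move", "feint_then_strike"]

def expert_reward(plan: str) -> float:
    """Stand-in for an analyst scoring a wargame rollout (+1 good, -1 bad)."""
    return 1.0 if plan == "feint_then_strike" else -1.0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.0] * len(PLANS)  # one preference score per candidate plan
lr = 0.1

for episode in range(500):
    probs = softmax(logits)
    # Trial: sample a plan, run the "wargame", collect the expert's signal.
    i = random.choices(range(len(PLANS)), weights=probs)[0]
    r = expert_reward(PLANS[i])
    # Error-driven update: raise the probability of rewarded plans
    # (gradient of log softmax is 1[j == i] - probs[j]).
    for j in range(len(PLANS)):
        logits[j] += lr * r * ((1.0 if j == i else 0.0) - probs[j])

print({p: round(q, 3) for p, q in zip(PLANS, softmax(logits))})
```

In a real system the reward would come from human analysts (or a learned model of their judgments) scoring far richer simulations, but the core loop of propose, evaluate, and update is the same.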

Battle Lines

The military use of artificial intelligence has become a hot topic in Silicon Valley after Defense Department officials and Anthropic executives failed to agree on the terms of a contract worth about $200 million.

One of the issues behind that breakdown, which culminated in Defense Secretary Pete Hegseth labeling Anthropic a supply chain risk, was Anthropic’s desire to limit the use of its models in autonomous weapons.

Markoff says the controversy obscures the fact that today’s large language models are not optimized for military applications. General-purpose models like Claude do a good job of summarizing reports, he says. However, they are not trained on military data and lack a human-level understanding of the physical world, making them unsuitable for controlling physical equipment. “I can tell you they have absolutely no ability to identify a target,” Markoff claims.

“No one I know in the War Department is talking about fully automating the kill chain,” he says, referring to the steps involved in deciding whether to use lethal force.

Scope of the Mission

The U.S. and other militaries already use autonomous weapons in some situations, including missile defense systems that must respond at superhuman speeds.

“The United States and more than 30 other states are already deploying weapons systems with varying degrees of autonomy, including some that I would describe as fully autonomous,” says Rebecca Crootof, a specialist in the legal issues surrounding autonomous weapons at the University of Richmond School of Law.

According to Markoff, specialized models like the one Smack is building could eventually be used for mission planning as well. The company’s models are intended to help commanders automate much of the work involved in drafting mission plans, a task Markoff says is still typically done by hand, using whiteboards and notebooks.

Markoff argues that if the United States went to war with a “near peer” such as Russia or China, automated decision-making could provide the country with much-needed “decision dominance.”

However, it remains an open question whether AI can be used reliably in such circumstances. A recent experiment by a researcher at King’s College London showed, alarmingly, that large language models had a tendency to escalate nuclear conflicts in wargames.
