Saturday, March 7, 2026

This defense company has created AI agents that blow things up


Like many Silicon Valley companies today, Scout AI trains large AI models and agents to automate tasks. The crucial difference is that instead of writing code, responding to emails, or shopping online, Scout AI's agents are tasked with finding and destroying targets in the physical world using exploding drones.

In a recent demonstration held at an undisclosed military base in central California, Scout AI's technology was used to direct an autonomous off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hidden in the area and then blow it to pieces with an explosive.

“We need to bring the next generation of AI to the military,” Colby Adcock, CEO of Scout AI, told me in a recent interview. (Adcock’s brother, Brett Adcock, is CEO of Figure AI, a startup working on humanoid robots.) “We take a basic hyperscaler model and train it to go from a generalized chatbot or agent assistant to a warrior.”

Adcock’s company is among a fresh generation of startups racing to adapt technology from the big AI labs to the battlefield. Many policymakers believe that the use of artificial intelligence will be the key to future military dominance. AI’s combat potential is one reason the U.S. government is seeking to restrict sales of advanced AI chips and chip-making equipment to China, although the Trump administration recently moved to loosen those controls.

“It’s good to see defense technology startups expanding their horizons by integrating artificial intelligence,” says Michael Horowitz, a professor at the University of Pennsylvania who previously worked at the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities. “This is exactly what they should be doing if the United States is to lead in the deployment of artificial intelligence in the military.”

However, Horowitz also notes that using the latest advances in artificial intelligence may prove particularly tough in practice.

Large language models are inherently unpredictable, and AI agents – such as those that power the popular OpenClaw AI assistant – can misbehave when given even relatively benign tasks, such as ordering goods online. Horowitz says it may be particularly difficult to demonstrate that such systems are robust from a cybersecurity perspective, which would be required for widespread military use.

The latest Scout AI demo included several stages in which the AI had a free hand over the combat systems.

At the beginning of the mission, the following command was entered into Scout’s AI system, known as Fury Orchestrator:

Fury Orchestrator, send 1 ground vehicle to ALPHA checkpoint. Complete a kinetic strike mission with 2 drones. Destroy the blue truck 500 m east of the airport and send confirmation.

The initial command is interpreted by a relatively large AI model, with over 100 billion parameters, that can run on a secure cloud platform or on an air-gapped computer. Scout AI uses an undisclosed open-source model with its restrictions removed. This model then acts as an agent, issuing commands to smaller, roughly 10-billion-parameter models running on the ground vehicle and drones participating in the exercise. Those smaller models in turn act as agents themselves, issuing their own commands to the lower-level AI systems that control the vehicles' movements.
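The hierarchy described above – a large orchestrator model decomposing a natural-language mission order into sub-tasks for smaller per-vehicle agents, which then issue their own low-level commands – can be sketched in a few lines. This is purely illustrative: Scout AI has not published the Fury Orchestrator's API, and every class, method, and task string below is a hypothetical stand-in (the hard-coded plan mimics the demo's command rather than calling any real model).

```python
# Toy sketch of a two-tier agent hierarchy: an orchestrator delegates
# sub-tasks to vehicle agents, which emit low-level control commands.
# All names are hypothetical; no real Scout AI interfaces are used.
from dataclasses import dataclass, field


@dataclass
class VehicleAgent:
    """Stand-in for a smaller on-board model (~10B parameters in the article)."""
    name: str
    log: list = field(default_factory=list)

    def execute(self, subtask: str) -> str:
        # A real agent would drive vehicle controls; we just record the chain.
        command = f"{self.name}: low-level control for '{subtask}'"
        self.log.append(command)
        return command


class Orchestrator:
    """Stand-in for the large (>100B-parameter) mission-planning model."""

    def __init__(self, agents: dict):
        self.agents = agents

    def run_mission(self, order: str) -> list:
        # A real orchestrator would parse `order` with an LLM; this toy
        # version hard-codes a decomposition echoing the demo's command.
        plan = {
            "ground-1": "drive to checkpoint ALPHA",
            "drone-1": "search east of airfield for blue truck",
            "drone-2": "strike confirmed target",
        }
        return [self.agents[name].execute(task) for name, task in plan.items()]


agents = {n: VehicleAgent(n) for n in ("ground-1", "drone-1", "drone-2")}
results = Orchestrator(agents).run_mission(
    "Send 1 ground vehicle to ALPHA; strike blue truck with 2 drones."
)
print(len(results))  # three sub-commands, one per vehicle
```

The design point the sketch captures is the delegation pattern itself: each tier only translates intent into the next tier's vocabulary, so the big model never touches vehicle controls directly.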

Seconds after receiving the order to move, the ground vehicle set off down a dirt road that wound through brush and trees. A few minutes later, it stopped and launched a pair of drones toward the location where the target was said to be waiting. After spotting the truck, an AI agent running on one of the drones gave the order to fly toward it and detonate its explosive just before impact.
