The US Army is developing artificial intelligence models trained on data from real missions to deploy a chatbot built specifically for soldiers.
“We’ve learned all of these lessons from missions like the Russia-Ukraine war and Operation Epic Fury,” Alex Miller, the Army’s chief technology officer, tells WIRED. “There is a tremendous amount of knowledge available.”
Miller showed WIRED a prototype of a system called Victor, which combines a Reddit-like forum with a chatbot called VictorBot to help soldiers get useful information, such as the best way to configure electromagnetic combat systems for a specific mission. When a soldier asks how to configure the equipment, VictorBot generates an answer and points to relevant posts and comments from other users of the site. “Electromagnetic warfare is a very difficult topic,” Miller says. Victor, he adds, “can get an answer and cite all the lessons learned [from] different units.”
Over the past two years, the Pentagon has stepped up efforts to incorporate artificial intelligence into military systems, but Victor is a rare example of the military building artificial intelligence for itself. The project shows how keen the US Army is to master the basics of artificial intelligence, and how the technology could change the everyday lives of many soldiers.
Miller says the Army is working with a third-party vendor that will operate and refine the artificial intelligence models that power Victor. He declined to name the company because the deal has not yet been announced. He says that more than 500 data repositories have been fed into the system, and notes that Victor will try to reduce the risk of errors the same way commercial chatbots do: by citing factual sources.
Efforts to integrate artificial intelligence into military systems have gained momentum following the introduction of ChatGPT in 2022. Recently, Anthropic’s technology was reported to have played a significant role in planning operations in Iran via a system operated by Palantir.
However, as these systems have become more capable, disagreements have arisen over how artificial intelligence should be used. Earlier this year, Anthropic clashed with the Pentagon, arguing that its technology should not be used to power autonomous weapons or surveil American citizens.
The same mistakes
Victor is being developed under the Combined Arms Command (CAC). Lt. Col. Jon Nielsen, who oversees CAC’s work on Victor, says it’s not uncommon for different brigades to make the same mistakes on different missions. The goal, he adds, is to eventually make Victor multimodal, so that soldiers can upload photos or videos and get detailed information. “Victor will be one of the few sources with access to reliable information about the military,” Nielsen says.
Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technologies and a former Pentagon policy adviser, says Project Victor highlights the potential of artificial intelligence to automate many of the unsexy back-office tasks at the Defense Department. Late last year, the department introduced GenAI.mil, an initiative aimed at encouraging greater use of artificial intelligence among Defense Department employees.
However, if Victor proves successful, Kahn believes the Army could eventually hire a major artificial intelligence company to enhance the system’s capabilities. “The large labs will obviously have a comparative advantage” in creating and deploying cutting-edge AI, she says.
Intelligence failures
Artificial intelligence could introduce new kinds of problems for the armed forces, says Paul Scharre, executive vice president of the Center for a New American Security and a former US Army Ranger. Scharre says the tendency of AI models to flatter users can be particularly problematic. “I can imagine situations where this would be particularly concerning in the context of intelligence analysis,” he explains.
Scharre adds that deploying artificial intelligence may become more complicated as systems evolve from chatbots into agents that can use software and computer networks on their own. “Agentic AI creates a whole new set of security challenges,” he notes.
