Friday, March 13, 2026

Palantir’s demonstrations show how the military could use AI chatbots to generate war plans


When a user asks, “What enemy military unit is in the region?” the AIP Assistant guesses that it is “probably an armored assault battalion, based on the equipment pattern.” This prompts the analyst to request an MQ-9 Reaper drone to investigate the scene. They then ask the AIP Assistant to “generate 3 courses of action to target enemy equipment,” and moments later the assistant suggests attacking the unit with “air power,” “long-range artillery,” or a “tactical team.” The user tells the assistant to send these options to the fictional commander, who ultimately selects the tactical team.

The final steps move quickly: the analyst asks the AIP Assistant to “analyze the battlefield,” then “generate a route” for the troops to reach the enemy, and finally “assign jammers” to sabotage their communications equipment. Within seconds, the analyst makes a final assessment of the battle plan and orders troops to mobilize.

In this scenario, Claude would be the “voice” of the AIP Assistant and the “reasoning” it uses to generate responses. Other AIP demonstrations show users interacting with large language models in similar ways. In a blog post published last week, for example, Palantir detailed how clients of NATO’s Maven Smart System effort could use the AIP Assistant within that tool.

In one graphic, Palantir shows how an independent defense contractor can choose from several of Palantir’s built-in AI models, including various versions of OpenAI’s ChatGPT and Meta’s Llama. The user selects OpenAI’s GPT-4.1, but presumably at this point they would also have the option to select Claude.

The analyst then looks at a digital map showing the locations of soldiers and weapons. In a panel labeled “COA” (courses of action), they click a button that causes the GPT-4.1-based tool to generate five possible military strategies, including one called “Support by fire followed by penetration, shock and destruction.”

Another example shows how the system can help interpret satellite imagery: an analyst selects three tanker detections on the map, uploads them to the AIP Agent chat interface, and asks it to “interpret” the imagery and suggest next steps.

Claude can also be used by the military to create intelligence assessments that can help plan an attack at a later time. In June 2025, WIRED saw a demonstration by Kunaal Sharma, public sector manager at Anthropic, showing how the enterprise version of Claude could be used to generate “advanced” reports on a real Ukrainian drone attack, called “Operation Spider’s Web.” Sharma explained that Claude relied solely on publicly available information during the demonstration. But he said that by working with Palantir, the federal government could also tap into internal data sets.

“It’s usually something where I can sit for five hours with a cup of coffee, search Google, go to think tanks, start writing reports and citations, and so on,” Sharma said. “But I don’t have that much time.”

In the demo, Sharma asked Claude to create an “interactive dashboard” containing information about Operation Spider’s Web, then translate it into “object types” that could be analyzed in Foundry, one of Palantir’s flagship software platforms. He also asked Claude to write a detailed analysis of recent events in Russia’s border provinces, as well as a 200-word summary of the “military and political consequences” of the operation.

“Honestly, I’ve been reading this type of stuff for twenty years. I used to write it, I was an academic myself,” Sharma said. “This is actually quite good.”
