An ongoing and heated dispute between the Pentagon and Anthropic is raising new questions about how the startup’s technology is actually used inside the US military. In late February, Anthropic refused to grant the government unconditional access to its Claude AI models, insisting the systems should not be used for mass surveillance of Americans or fully autonomous weapons. The Pentagon responded by labeling Anthropic's products a “supply-chain risk,” prompting the startup to file two lawsuits this week alleging illegal retaliation by the Trump administration and seeking to overturn the designation.
The clash, along with the rapidly escalating war in Iran, has drawn attention to Anthropic’s partnership with the military contractor Palantir, which announced in November 2024 that it would integrate Claude into the software it sells to US intelligence and defense agencies. Palantir says the Claude integration can help analysts uncover “data-driven insights,” identify patterns, and support making “informed decisions in time-sensitive situations.”
However, Palantir and Anthropic have shared few details about how Claude functions within the military or which Pentagon systems rely on it, even as the AI tool reportedly continues to be used in some US defense operations overseas, including the war in Iran. In January, Claude also reportedly played an instrumental role in the US military operation that led to the capture of Venezuelan president Nicolás Maduro.
WIRED reviewed Palantir software demos, public documentation, and Pentagon records that together paint the clearest picture to date of how American military officials may be using AI chatbots, including what kinds of queries are being fed to them, the data they use to generate responses, and the kinds of recommendations they give analysts.
The Department of Defense did not respond to a request for comment. Palantir and Anthropic declined to comment.
Palantir’s Pentagon Ties
Military officials can use Claude to sift through large volumes of intelligence, according to a source familiar with the matter. Palantir sells multiple software tools to the Pentagon where such analysis might take place, but the company has never publicly specified which of those systems do or don’t incorporate Claude.
Since 2017, Palantir has been the primary contractor behind “Project Maven,” also known as the Algorithmic Warfare Cross-Functional Team, a Defense Department initiative for deploying AI in war settings. For the project, Palantir developed a product known as “Maven Smart System,” sometimes simply called “Maven.”
Maven is managed by the National Geospatial-Intelligence Agency (NGA), the government body in charge of collecting and analyzing satellite data. Agencies across the military—including the Army, Air Force, Space Force, Navy, Marine Corps, and US Central Command, which is overseeing military operations in Iran—can access Maven. Cameron Stanley, the Pentagon’s chief digital and artificial intelligence officer, said at a recent Palantir conference that Maven is being deployed “across the entire department.”
According to public assessments of Maven published by the military, the tool can apply “computer vision algorithms” to images taken by a “space-based asset” like a satellite, as well as automatically detect objects likely to be “enemy systems.” A Maven demo shown during Stanley’s conference presentation shows the tool distinguishing people from cars.