Despite a DoD ban on Anthropic over the company's demand that its tech not be used for fully autonomous military targeting, Anthropic's AI model, Claude, is enjoying prime-time use in the U.S. war on Iran.
Indeed, the U.S. military leveraged its AI targeting tools — which still employ Claude — to strike over 1,000 targets in Iran during the first 24 hours of the now rapidly expanding war.
The U.S. is using Claude through Anthropic's partnership with the controversial software company Palantir. As sources told Bloomberg, Claude is central to Palantir's Maven Smart System, which provides real-time targeting for military operations against Iran. Because of its centrality to that targeting, Claude won't be phased out until the DoD has found a replacement, according to sources who spoke with the Washington Post.
The use of AI in military targeting has been controversial dating back at least to the Gaza war. There, the IDF largely ignored the roughly 10% false-positive rate of its "Lavender" AI targeting system when using it to target and attack alleged militants in Gaza, killing an untold number of civilians in the process.
Now there is concern about its use in Iran. In the lead-up to the initial attack, the Washington Post reported that Maven, powered by Claude, proposed "hundreds" of targets for the U.S. military to strike, prioritized them in order of importance, and provided location coordinates for them, helping the U.S. carry out attacks quickly and blunting Iran's ability to respond in kind.
But what about oversight? It is not comforting that Defense Secretary Pete Hegseth declared there were "no stupid rules of engagement" in the war at a press conference earlier this week.
The Pentagon's Law of War Manual says the U.S. military must take "feasible precautions to verify that the targets [it plans to attack] are military objectives" such as enemy combatants. Under these rules, civilians, military medical and religious personnel, and locations like schools, hospitals, and places of worship are not to be attacked.
Given the rapid deployment of AI in wartime, whether the U.S. military is truly taking "feasible precautions" to ensure it is targeting true military objectives, rather than civilians, deserves scrutiny.
"You can rapidly produce long lists of targets much faster than humans can do it by automating that process," Peter Asaro, associate professor of media studies at The New School in New York, and the vice chair of the Stop Killer Robots campaign, told Japan Times. "The ethical and legal question is: To what degree are those humans actually reviewing the specific targets that have been listed, verifying their legality and their value militarily before authorizing?”
As Brianna Rosen, a senior fellow at Just Security and the University of Oxford, previously told RS: “Even with a human fully in the loop, there's significant civilian harm because the human reviews of machine decisions are essentially perfunctory.”
Although the government is set to slowly phase Anthropic out of its systems following the DoD's spat with Anthropic CEO Dario Amodei, the company is in talks with Emil Michael, undersecretary of defense for research and engineering, to see whether a new deal with the DoD can be reached.
Anthropic received a $200 million DoD contract in July of last year. Claude was the first AI model approved and deployed for use in classified settings, which allowed it to work with partners like Palantir.
The U.S. military previously employed Claude, through Anthropic's partnership with Palantir, to prepare for an operation that removed Venezuelan leader Nicolas Maduro from power in early January.