Israel using secret AI tech to target Palestinians

A leading voice on military AI breaks down how big data risks making war less humane

Reporting | Middle East

The Israeli military has employed an artificial intelligence-driven “kill list” to select over 30,000 targets in Gaza with minimal human input, fueling civilian casualties in the war-torn strip, according to an explosive new investigation from +972 Magazine.

Especially in the early days of the Gaza war, Israel Defense Forces (IDF) personnel ignored the AI’s 10% false positive rate and intentionally targeted alleged militants in their homes with unguided “dumb bombs” despite an increased likelihood of civilian harm, according to IDF sources who spoke with +972 Magazine.

The investigation sheds light on the myriad ways in which cutting-edge AI tech, combined with lax rules of engagement from IDF commanders on the ground, has fueled staggering rates of civilian harm in Gaza. At least 33,000 Palestinians have died due to Israel’s campaign, which followed a Hamas attack that killed 1,200 Israelis last October.

The AI targeting software, known as “Lavender,” reportedly relies on sprawling surveillance networks and assigns a 1-100 score to every Gazan that estimates the likelihood that they are a Hamas militant. Soldiers then input this information into software known as “Where’s Daddy,” which uses AI to warn when an alleged militant has returned to their home.

Previous reporting from +972 Magazine revealed the existence of a similar AI system for targeting houses used by militants, called “The Gospel.” In both cases, the IDF said +972 Magazine exaggerated the role and impact of these high-tech tools.

“The doomsday scenario of killer algorithms is already unfolding in Gaza,” argues Brianna Rosen, a senior fellow at Just Security and the University of Oxford who previously worked at the National Security Council during the Obama administration.

RS spoke with Rosen to get her take on the latest revelations about Israel’s use of AI in Gaza, how AI is changing war, and what U.S. policymakers should do to regulate military tech. The following conversation has been edited for length and clarity.

RS: What does this new reporting from +972 Magazine tell us about how Israel has used AI in Gaza?

Rosen: The first thing that I want to stress is that it's not just +972 Magazine. The IDF itself has actually commented on these systems as well. Many people have argued that the report overstates what these AI systems do, but Israel itself has made a number of comments that corroborate some of these facts. The report substantiates a trend that we've seen since December in Israel's use of AI in Gaza: AI is increasing the pace of targeting in war and expanding the scope of war.

As the IDF itself has acknowledged, it's using AI to accelerate targeting, and the facts are bearing this out. In the first two months of the conflict, Israel attacked roughly 25,000 targets, more than four times as many as in previous wars in Gaza. And they're actioning more targets than they ever have in the past. At the same time that the pace of targeting is accelerating, AI is also expanding the scope of war, meaning the pool of potential targets actioned for elimination. They're targeting more junior operatives than they ever have before. In previous campaigns, Israel would run out of known combatants or legitimate military objectives. But this latest reporting [shows] that's seemingly no longer a barrier to killing. AI is acting, in Israel's own words, as a force multiplier: it's removing the resource constraints that in the past would have prevented the IDF from identifying enough targets. Now they're able to go after significantly lower-level targets with tenuous or no connection at all to Hamas, even though, normally, they wouldn't pursue those targets because of the minimal impact of their deaths on military objectives.

In short, AI is increasing the tempo of operations and expanding the pool of targets, which makes target verification and other precautionary obligations required under international law much harder to fulfill. All of this increases the risk that civilians will be misidentified and mistakenly targeted, contributing to the enormous civilian harm that we've seen thus far.

RS: How does this relate to the idea of having a human "in the loop" for AI-driven decisions?

Rosen: This is what is so concerning. The debate on military AI has been for so long focused on the wrong question. It's been focused on banning lethal autonomous weapons systems, or "killer robots," without recognizing that AI has already become a pervasive feature of war. Israel and other states, including the United States, are already integrating AI into military operations. They're saying that they're doing it in a responsible way with humans fully "in the loop." But the fear that I have, and which I think we're seeing play out here in Gaza, is that even with a human fully in the loop, there's significant civilian harm because the human reviews of machine decisions are essentially perfunctory.

With the report that was released today, there's a claim that humans did verify the outputs the AI systems were generating, but that this verification took only about 20 seconds per target, just long enough to see whether the target was male or female before authorizing the bombing.

Regardless of whether that particular claim is actually borne out, there have been numerous academic studies about the risk of automation bias with AI, which I think is clearly at play here. Because the machine is so smart and has all of these data streams and intelligence streams being fed into it, there's a risk that humans don't sufficiently question its output. This risk of automation bias means that even if humans are approving the targets, they could be simply rubber-stamping the decision to use force rather than thoroughly looking at the data that the machine has produced and going back and vetting the targets very carefully. That's just not being done, and given the problems with explainability and traceability, it might not even be possible for humans to really understand how AI systems are generating these outputs.

This is one of the questions that I asked, by the way, in my article in Just Security in December. Policymakers and the public need to press Israel on this question: What does the human review process really look like for these operations? Is this just rubber stamping the decision to use force, or is there serious review?

RS: In this case, it seems like the impact of AI was amplified by the IDF's use of loose rules of engagement. Can you tell me a little bit more about the relationship between emerging tech and practical policy decisions about how to use it?

Rosen: That's the other problem here. First of all, you have the problem of Israel's interpretation of international law, which is, in some ways, much more permissive than how other states interpret basic principles like proportionality. On top of that, there are inevitably going to be errors made with AI systems, which contribute to civilian harm. This latest report claims that the Lavender system, for example, was wrong 10% of the time. That margin of error could, in fact, be much greater depending on how Israel is classifying individuals as Hamas militants.

The AI systems are trained on data. Israel has identified certain characteristics of people who they claim are Hamas or Palestinian Islamic Jihad operatives, and then they feed that data into the machine. But what if the features they are identifying are overly broad, such as carrying a weapon, being in a WhatsApp group with someone linked to Hamas, or even just moving house frequently, which everyone, of course, is doing now because it's a whole territory of refugees? If these characteristics are fed into AI systems to identify militants, then that's a big concern, because the system is going to take that data and misidentify civilians much of the time.
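Rosen's point about broad selection criteria is, at bottom, base-rate arithmetic: when true militants are a small fraction of the surveilled population, even a modest false-positive rate means most flagged people are civilians. The sketch below illustrates this with hypothetical numbers; only the 10% figure comes from the report (read here in the standard statistical sense of the share of non-militants the system flags), while the population size, base rate, and detection rate are purely illustrative assumptions.

```python
# Illustrative base-rate arithmetic, not data from the report.
population = 100_000          # people under surveillance (assumption)
base_rate = 0.02              # assume 2% are actually militants
false_positive_rate = 0.10    # figure the report attributes to Lavender
true_positive_rate = 0.90     # assume the system catches 90% of real militants

militants = population * base_rate                    # 2,000 actual militants
civilians = population - militants                    # 98,000 civilians
flagged_militants = militants * true_positive_rate    # 1,800 correctly flagged
flagged_civilians = civilians * false_positive_rate   # 9,800 wrongly flagged

civilian_share = flagged_civilians / (flagged_civilians + flagged_militants)
print(f"{civilian_share:.0%} of flagged people would be civilians")
# prints "84% of flagged people would be civilians"
```

Under these assumed numbers, civilians outnumber militants roughly five to one among the flagged, which is why the interview stresses that the real margin of error "could, in fact, be much greater" than 10% depending on how broadly militancy is defined.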

Israel can say that it's following international law and that there's human review of all of these decisions, and all of that can be true. But again, it's Israel's interpretation of international law. And it's how they're defining who counts as the combatant in this war and how that data is fed into the AI systems. All of that compounds in a way that can create really serious harm.

I also want to point out that all the well-documented problems with AI in the domestic context, from underlying biases in the algorithms to the problem of hallucination, are certainly going to persist in war, and they're going to be compounded by the pace of decision making. None of this is going to be reviewed in a very careful way. For example, we know that Israel has a massive surveillance system in the Gaza Strip and that all of this data is being fed into the AI systems to contribute to these targeting outputs. Any underlying biases in those systems will feed into and compound errors in the final targeting output. If human review is perfunctory, then the result will be significant civilian harm, which is what we have seen.

RS: The U.S. is interested in AI for lots of military applications, including automated swarms of lethal drones. What does Israel's experience tell us about how American policymakers should approach this tech?

Rosen: It tells us that U.S. policymakers have to be extremely circumspect about the use of AI in both intelligence and military operations. The White House and the Department of Defense and other agencies have put forth a number of statements about responsible AI, particularly in a military context. But these have all been very much at the level of principles.

Everything depends on how these broad principles for the responsible use of military AI are operationalized in practice, and, of course, we haven't really had a case yet where we've seen the U.S. publicly relying on these tools in its conflicts. But that's definitely coming, and the U.S. should use this time now to not only learn all the lessons of what's happening in Gaza, but to be very proactive in operationalizing those broad principles for responsible use of military AI, socializing them among other states, and really leading the world in signing on to these principles for military AI. They have to a certain extent, but the progress has been very, very slow. That's what's desperately needed right now.

Palestinians look for survivors after an Israeli airstrike in Rafah refugee camp, southern Gaza Strip, on October 12, 2023. (Anas Mohammad/ Shutterstock)
