
Israel using secret AI tech to target Palestinians

A leading voice on military AI breaks down how big data risks making war less humane

The Israeli military has employed an artificial intelligence-driven “kill list” to select over 30,000 targets in Gaza with minimal human input, fueling civilian casualties in the war-torn strip, according to an explosive new investigation from +972 Magazine.

Especially in the early days of the Gaza war, Israel Defense Forces (IDF) personnel ignored the AI’s 10% false positive rate and intentionally targeted alleged militants in their homes with unguided “dumb bombs” despite an increased likelihood of civilian harm, according to IDF sources who spoke with +972 Magazine.

The investigation sheds light on the myriad ways in which cutting-edge AI tech, combined with lax rules of engagement from IDF commanders on the ground, has fueled staggering rates of civilian harm in Gaza. At least 33,000 Palestinians have died due to Israel’s campaign, which followed a Hamas attack that killed 1,200 Israelis last October.

The AI targeting software, known as “Lavender,” reportedly relies on sprawling surveillance networks and assigns every Gazan a score from 1 to 100 estimating the likelihood that they are a Hamas militant. Soldiers then feed this information into software known as “Where’s Daddy,” which uses AI to warn when an alleged militant has returned to their home.

Previous reporting from +972 Magazine revealed the existence of a similar AI system for targeting houses used by militants, called “The Gospel.” In both cases, the IDF said +972 Magazine exaggerated the role and impact of these high-tech tools.

“The doomsday scenario of killer algorithms is already unfolding in Gaza,” argues Brianna Rosen, a senior fellow at Just Security and the University of Oxford who previously worked at the National Security Council during the Obama administration.

RS spoke with Rosen to get her take on the latest revelations about Israel’s use of AI in Gaza, how AI is changing war, and what U.S. policymakers should do to regulate military tech. The following conversation has been edited for length and clarity.

RS: What does this new reporting from +972 Magazine tell us about how Israel has used AI in Gaza?

Rosen: The first thing that I want to stress is that it's not just +972 Magazine. The IDF itself has actually commented on these systems as well. A lot of people have argued that the report overstates the role of these AI systems, but Israel itself has made a number of comments that support some of these facts. The report substantiates a trend that we've seen since December with Israel's use of AI in Gaza, which is that AI is increasing the pace of targeting in war and expanding the scope of war.

As the IDF itself has acknowledged, it's using AI to accelerate targeting, and the facts are bearing this out. In the first two months of the conflict, Israel attacked roughly 25,000 targets, more than four times as many as in previous wars in Gaza, and it's actioning more targets than it ever has in the past. At the same time that the pace of targeting is accelerating, AI is also expanding the scope of war, meaning the pool of potential targets that are actioned for elimination. They're targeting more junior operatives than they ever have before. In previous campaigns, Israel would run out of known combatants or legitimate military objectives. But this latest reporting [shows] that's seemingly no longer a barrier to killing. AI is acting, in Israel's own words, as a force multiplier, meaning that it's removing the resource constraints that in the past prevented the IDF from identifying enough targets. Now they're able to go after significantly lower-level targets with tenuous or no connection to Hamas at all, even though, normally, they wouldn't pursue those targets because killing them would have minimal impact on military objectives.

In short, AI is increasing the tempo of operations and expanding the pool of targets, which makes target verification and other precautionary obligations required under international law much harder to fulfill. All of this increases the risk that civilians will be misidentified and mistakenly targeted, contributing to the enormous civilian harm that we've seen thus far.

RS: How does this relate to the idea of having a human "in the loop" for AI-driven decisions?

Rosen: This is what is so concerning. The debate on military AI has been for so long focused on the wrong question. It's been focused on banning lethal autonomous weapons systems, or "killer robots," without recognizing that AI has already become a pervasive feature of war. Israel and other states, including the United States, are already integrating AI into military operations. They're saying that they're doing it in a responsible way with humans fully "in the loop." But the fear that I have, and which I think we're seeing play out here in Gaza, is that even with a human fully in the loop, there's significant civilian harm because the human reviews of machine decisions are essentially perfunctory.

The report released today claims that there is human verification of the outputs that the AI systems generate, but that this verification took only 20 seconds, just long enough to see whether the target was male or female before authorizing the bombings.

Regardless of whether that particular claim is actually borne out, there have been numerous academic studies about the risk of automation bias with AI, which I think is clearly at play here. Because the machine is so smart and has all of these data streams and intelligence streams being fed into it, there's a risk that humans don't sufficiently question its output. This risk of automation bias means that even if humans are approving the targets, they could be simply rubber stamping the decision to use force rather than thoroughly looking at the data that the machine has produced and going back and vetting the targets very carefully. That's just not being done, and it might not even be possible given the problems with explainability and traceability for humans to really understand how AI systems are generating these outputs.

This is one of the questions that I asked, by the way, in my article in Just Security in December. Policymakers and the public need to press Israel on this question: What does the human review process really look like for these operations? Is this just rubber stamping the decision to use force, or is there serious review?

RS: In this case, it seems like the impact of AI was amplified by the IDF's use of loose rules of engagement. Can you tell me a little bit more about the relationship between emerging tech and practical policy decisions about how to use it?

Rosen: That's the other problem here. First of all, you have the problem of Israel's interpretation of international law, which is, in some ways, much more permissive than how other states interpret basic principles like proportionality. On top of that, there are inevitably going to be errors made with AI systems, which contributes to civilian harm. This latest report claims that the Lavender system, for example, was wrong 10% of the time. That margin of error could, in fact, be much greater depending on how Israel is classifying individuals as Hamas militants.

The AI systems are trained on data, and Israel has identified certain characteristics of people who they claim are Hamas or Palestinian Islamic Jihad operatives, and then they feed that data into the machine. But what if the features that they are identifying are overly broad — such as carrying a weapon, being in a WhatsApp group with someone linked to Hamas, or even just moving house a lot, which everyone, of course, is doing now because it's a whole country of refugees. If these characteristics are fed into AI systems to identify militants, then that's a big concern because the system is going to take that data and misidentify civilians a great part of the time.
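To make the stakes of that margin of error concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption chosen for illustration (the population figure, the count of actual militants, and the detection rate do not come from the reporting), and it treats the cited 10% error as a per-person false positive rate, which may not be exactly what the report meant:

```python
# Hypothetical illustration of the base-rate problem with broad classifiers.
# None of these numbers come from the +972 report; they are assumptions
# chosen only to show the structure of the arithmetic.

population = 2_300_000      # rough population of Gaza (assumption)
actual_militants = 30_000   # assumed number of true combatants
true_positive_rate = 0.90   # assume the system catches 90% of real militants
false_positive_rate = 0.10  # the ~10% error rate, read as a per-person rate

civilians = population - actual_militants

flagged_militants = actual_militants * true_positive_rate  # correct flags
flagged_civilians = civilians * false_positive_rate        # civilians misflagged

total_flagged = flagged_militants + flagged_civilians
share_civilian = flagged_civilians / total_flagged

print(f"Correctly flagged militants: {flagged_militants:,.0f}")
print(f"Misidentified civilians:     {flagged_civilians:,.0f}")
print(f"Share of flagged people who are civilians: {share_civilian:.0%}")
```

Under these assumed numbers, roughly nine in ten people the system flags would be civilians. The specific figures are invented; the point is structural: when actual combatants are a small fraction of the surveilled population, even a modest per-person error rate swamps the correct identifications, and overly broad features make that error rate worse.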

Israel can say that it's following international law and that there's human review of all of these decisions, and all of that can be true. But again, it's Israel's interpretation of international law. And it's how they're defining who counts as the combatant in this war and how that data is fed into the AI systems. All of that compounds in a way that can create really serious harm.

I also want to point out that all the well-documented problems with AI in the domestic context — from underlying biases in the algorithms to the problem of hallucination — are certainly going to persist in war, and they are going to be compounded by the pace of decision-making. None of this is going to be reviewed in a very careful way. For example, we know that Israel has a massive surveillance system in the Gaza Strip and that all of this data is being fed into the AI systems to contribute to these targeting outputs. Any underlying biases in those systems will feed into and compound errors in the final targeting output. If human review is perfunctory, then the result will be significant civilian harm, which is what we have seen.

RS: The U.S. is interested in AI for lots of military applications, including automated swarms of lethal drones. What does Israel's experience tell us about how American policymakers should approach this tech?

Rosen: It tells us that U.S. policymakers have to be extremely circumspect about the use of AI in both intelligence and military operations. The White House and the Department of Defense and other agencies have put forth a number of statements about responsible AI, particularly in a military context. But these have all been very much at the level of principles.

Everything depends on how these broad principles for the responsible use of military AI are operationalized in practice, and, of course, we haven't really had a case yet where we've seen the U.S. relying on these tools in its conflicts in a public way. But that's definitely coming, and the U.S. should use this time now to not only learn all the lessons of what's happening in Gaza, but to be very proactive in operationalizing those broad principles for responsible use of military AI, socializing them among other states, and really leading the world in signing on to these principles for military AI. They have done so to a certain extent, but the progress has been very, very slow. That's what's desperately needed right now.


Palestinians look for survivors after an Israeli airstrike in Rafah refugee camp, southern Gaza Strip, on October 12, 2023. (Anas Mohammad/ Shutterstock)
