Israel using secret AI tech to target Palestinians

A leading voice on military AI breaks down how big data risks making war less humane

Reporting | Middle East

The Israeli military has employed an artificial intelligence-driven “kill list” to select over 30,000 targets in Gaza with minimal human input, fueling civilian casualties in the war-torn strip, according to an explosive new investigation from +972 Magazine.

Especially in the early days of the Gaza war, Israel Defense Forces (IDF) personnel ignored the AI’s 10% false positive rate and intentionally targeted alleged militants in their homes with unguided “dumb bombs” despite an increased likelihood of civilian harm, according to IDF sources who spoke with +972 Magazine.

The investigation sheds light on the myriad ways in which cutting-edge AI tech, combined with lax rules of engagement from IDF commanders on the ground, has fueled staggering rates of civilian harm in Gaza. At least 33,000 Palestinians have died due to Israel’s campaign, which followed a Hamas attack that killed 1,200 Israelis last October.

The AI targeting software, known as “Lavender,” reportedly relies on sprawling surveillance networks and assigns every Gazan a score from 1 to 100 estimating the likelihood that they are a Hamas militant. Soldiers then feed this information into software known as “Where’s Daddy,” which uses AI to warn when an alleged militant has returned home.

Previous reporting from +972 Magazine revealed the existence of a similar AI system for targeting houses used by militants, called “The Gospel.” In both cases, the IDF said +972 Magazine exaggerated the role and impact of these high-tech tools.

“The doomsday scenario of killer algorithms is already unfolding in Gaza,” argues Brianna Rosen, a senior fellow at Just Security and the University of Oxford who previously worked at the National Security Council during the Obama administration.

RS spoke with Rosen to get her take on the latest revelations about Israel’s use of AI in Gaza, how AI is changing war, and what U.S. policymakers should do to regulate military tech. The following conversation has been edited for length and clarity.

RS: What does this new reporting from +972 Magazine tell us about how Israel has used AI in Gaza?

Rosen: The first thing that I want to stress is that it's not just +972 Magazine. The IDF itself has actually commented on these systems as well. A lot of people claimed that the report overstates the role of these AI systems, but Israel itself has made a number of comments that support some of these facts. The report substantiates a trend that we've seen since December with Israel's use of AI in Gaza, which is that AI is increasing the pace of targeting in war and expanding the scope of war.

As the IDF itself has acknowledged, it's using AI to accelerate targeting, and the facts are bearing this out. In the first two months of the conflict, Israel attacked roughly 25,000 targets — more than four times as many as in previous wars in Gaza. At the same time that the pace of targeting is accelerating, AI is also expanding the scope of war, or the pool of potential targets that are actioned for elimination. They're targeting more junior operatives than they ever have before. In previous campaigns, Israel would run out of known combatants or legitimate military objectives. But this latest reporting [shows] that's not seemingly a barrier to killing anymore. AI is acting, in Israel's own words, as a force multiplier, meaning that it's removing the resource constraints that in the past would prevent the IDF from identifying enough targets. Now they're able to go after significantly lower-level targets with tenuous or no connections at all to Hamas even though, normally, they wouldn't pursue those targets because of the minimal impact of their deaths on military objectives.

In short, AI is increasing the tempo of operations and expanding the pool of targets, which makes target verification and other precautionary obligations required under international law much harder to fulfill. All of this increases the risk that civilians will be misidentified and mistakenly targeted, contributing to the enormous civilian harm that we've seen thus far.

RS: How does this relate to the idea of having a human "in the loop" for AI-driven decisions?

Rosen: This is what is so concerning. The debate on military AI has been for so long focused on the wrong question. It's been focused on banning lethal autonomous weapons systems, or "killer robots," without recognizing that AI has already become a pervasive feature of war. Israel and other states, including the United States, are already integrating AI into military operations. They're saying that they're doing it in a responsible way with humans fully "in the loop." But the fear that I have, and which I think we're seeing play out here in Gaza, is that even with a human fully in the loop, there's significant civilian harm because the human reviews of machine decisions are essentially perfunctory.

With this report that was released today, there's a claim that there is human verification of the outputs that the AI systems are generating but that the human verification was done in only 20 seconds, just long enough to see whether the target was male or female before authorizing the bombings.

Regardless of whether that particular claim is actually borne out, there have been numerous academic studies about the risk of automation bias with AI, which I think is clearly at play here. Because the machine is so smart and has all of these data streams and intelligence streams being fed into it, there's a risk that humans don't sufficiently question its output. This risk of automation bias means that even if humans are approving the targets, they could be simply rubber stamping the decision to use force rather than thoroughly looking at the data that the machine has produced and going back and vetting the targets very carefully. That's just not being done, and it might not even be possible given the problems with explainability and traceability for humans to really understand how AI systems are generating these outputs.

This is one of the questions that I asked, by the way, in my article in Just Security in December. Policymakers and the public need to press Israel on this question: What does the human review process really look like for these operations? Is this just rubber stamping the decision to use force, or is there serious review?

RS: In this case, it seems like the impact of AI was amplified by the IDF's use of loose rules of engagement. Can you tell me a little bit more about the relationship between emerging tech and practical policy decisions about how to use it?

Rosen: That's the other problem here. First of all, you have the problem of Israel's interpretation of international law, which is, in some ways, much more permissive than how other states interpret basic principles like proportionality. On top of that, there are inevitably going to be errors made with AI systems, which contributes to civilian harm. This latest report claims that the Lavender system, for example, was wrong 10% of the time. That margin of error could, in fact, be much greater depending on how Israel is classifying individuals as Hamas militants.

The AI systems are trained on data, and Israel has identified certain characteristics of people who they claim are Hamas or Palestinian Islamic Jihad operatives, and then they feed that data into the machine. But what if the features they are identifying are overly broad — such as carrying a weapon, being in a WhatsApp group with someone linked to Hamas, or even just moving house a lot, which everyone, of course, is doing now because it's a whole country of refugees? If these characteristics are fed into AI systems to identify militants, then that's a big concern, because the system is going to take that data and misidentify civilians much of the time.
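The base-rate problem Rosen describes can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: apart from the 10% error rate cited in the +972 report, every number (population size, share of actual militants, detection rate) is an assumption, not reported data. The point it demonstrates is general: when actual militants are a small fraction of the population, even a seemingly modest false-positive rate means that most of the people a classifier flags are civilians.

```python
# Illustrative base-rate calculation -- all inputs except the 10%
# false-positive rate are hypothetical assumptions for this sketch.

def flagged_breakdown(population, prevalence, tpr, fpr):
    """Return (true positives, false positives) when a classifier
    with the given true/false positive rates is applied to an
    entire population."""
    actual_positives = population * prevalence
    actual_negatives = population - actual_positives
    true_positives = actual_positives * tpr    # correctly flagged
    false_positives = actual_negatives * fpr   # civilians misflagged
    return true_positives, false_positives

# Assumed values: 2.3M people, 1.5% actual militants,
# 90% detection rate, 10% false-positive rate.
tp, fp = flagged_breakdown(2_300_000, 0.015, 0.90, 0.10)
print(f"correctly flagged: {tp:,.0f}")
print(f"civilians misflagged: {fp:,.0f}")
print(f"share of flagged people who are civilians: {fp / (tp + fp):.0%}")
```

Under these assumed inputs, roughly 88% of everyone the system flags would be a misidentified civilian — an outcome driven almost entirely by the low prevalence of real targets, not by the headline accuracy figure.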

Israel can say that it's following international law and that there's human review of all of these decisions, and all of that can be true. But again, it's Israel's interpretation of international law. And it's how they're defining who counts as the combatant in this war and how that data is fed into the AI systems. All of that compounds in a way that can create really serious harm.

I also want to point out that all the well-documented problems with AI in the domestic context — from underlying biases in the algorithms to the problem of hallucination — are certainly going to persist in war, and they're going to be compounded by the pace of decision making. None of this is going to be reviewed in a very careful way. For example, we know that Israel has a massive surveillance system in the Gaza Strip and that all of this data is being fed into the AI systems to contribute to these targeting outputs. Any underlying biases in those systems will feed into and compound errors in the final targeting output. If human review is perfunctory, then the result will be significant civilian harm, which is what we have seen.

RS: The U.S. is interested in AI for lots of military applications, including automated swarms of lethal drones. What does Israel's experience tell us about how American policymakers should approach this tech?

Rosen: It tells us that U.S. policymakers have to be extremely circumspect about the use of AI in both intelligence and military operations. The White House and the Department of Defense and other agencies have put forth a number of statements about responsible AI, particularly in a military context. But these have all been very much at the level of principles.

Everything depends on how these broad principles for the responsible use of military AI are operationalized in practice, and, of course, we haven't really had a case yet where we've seen the U.S. in a public way relying on these tools in its conflicts. But that's definitely coming, and the U.S. should use this time now to not only learn all the lessons of what's happening in Gaza, but to be very proactive in operationalizing those broad principles for responsible use of military AI, socializing them among other states, and really leading the world in signing on to these principles. It has done so to a certain extent, but the progress has been very, very slow. That's what's desperately needed right now.


Palestinians look for survivors after an Israeli airstrike in Rafah refugee camp, southern Gaza Strip, on October 12, 2023. (Anas Mohammad/ Shutterstock)
