Israel using secret AI tech to target Palestinians

A leading voice on military AI breaks down how big data risks making war less humane

Reporting | Middle East

The Israeli military has employed an artificial intelligence-driven “kill list” to select over 30,000 targets in Gaza with minimal human input, fueling civilian casualties in the war-torn strip, according to an explosive new investigation from +972 Magazine.

Especially in the early days of the Gaza war, Israel Defense Forces (IDF) personnel ignored the AI’s 10% false positive rate and intentionally targeted alleged militants in their homes with unguided “dumb bombs” despite an increased likelihood of civilian harm, according to IDF sources who spoke with +972 Magazine.

The investigation sheds light on the myriad ways in which cutting-edge AI tech, combined with lax rules of engagement from IDF commanders on the ground, has fueled staggering rates of civilian harm in Gaza. At least 33,000 Palestinians have died due to Israel’s campaign, which followed a Hamas attack that killed 1,200 Israelis last October.

The AI targeting software, known as “Lavender,” reportedly relies on sprawling surveillance networks and assigns a 1-100 score to every Gazan that estimates the likelihood that they are a Hamas militant. Soldiers then input this information into software known as “Where’s Daddy,” which uses AI to warn when an alleged militant has returned to their home.

Previous reporting from +972 Magazine revealed the existence of a similar AI system for targeting houses used by militants, called “The Gospel.” In both cases, the IDF said +972 Magazine exaggerated the role and impact of these high-tech tools.

“The doomsday scenario of killer algorithms is already unfolding in Gaza,” argues Brianna Rosen, a senior fellow at Just Security and the University of Oxford who previously worked at the National Security Council during the Obama administration.

RS spoke with Rosen to get her take on the latest revelations about Israel’s use of AI in Gaza, how AI is changing war, and what U.S. policymakers should do to regulate military tech. The following conversation has been edited for length and clarity.

RS: What does this new reporting from +972 Magazine tell us about how Israel has used AI in Gaza?

Rosen: The first thing that I want to stress is that it's not just +972 Magazine. The IDF itself has actually commented on these systems as well. A lot of people argued that the report overstates its claims about AI systems, but Israel itself has made a number of comments that support some of these facts. The report substantiates a trend that we've seen since December with Israel's use of AI in Gaza, which is that AI is increasing the pace of targeting in war and expanding the scope of war.

As the IDF itself has acknowledged, it's using AI to accelerate targeting, and the facts are bearing this out. In the first two months of the conflict, Israel attacked roughly 25,000 targets, more than four times as many as in previous wars in Gaza. And they're actioning more targets than they ever have in the past. At the same time that the pace of targeting is accelerating, AI is also expanding the scope of war, or the pool of potential targets that are actioned for elimination. They're targeting more junior operatives than they ever have before. In previous campaigns, Israel would run out of known combatants or legitimate military objectives. But this latest reporting [shows] that's not seemingly a barrier to killing anymore. AI is acting, in Israel's own words, as a force multiplier, meaning that it's removing the resource constraints that in the past would prevent the IDF from identifying enough targets. Now they're able to go after significantly lower-level targets with tenuous or no connections at all to Hamas even though, normally, they wouldn't pursue those targets because of the minimal impact of their death on military objectives.

In short, AI is increasing the tempo of operations and expanding the pool of targets, which makes target verification and other precautionary obligations required under international law much harder to fulfill. All of this increases the risk that civilians will be misidentified and mistakenly targeted, contributing to the enormous civilian harm that we've seen thus far.

RS: How does this relate to the idea of having a human "in the loop" for AI-driven decisions?

Rosen: This is what is so concerning. The debate on military AI has been for so long focused on the wrong question. It's been focused on banning lethal autonomous weapons systems, or "killer robots," without recognizing that AI has already become a pervasive feature of war. Israel and other states, including the United States, are already integrating AI into military operations. They're saying that they're doing it in a responsible way with humans fully "in the loop." But the fear that I have, and which I think we're seeing play out here in Gaza, is that even with a human fully in the loop, there's significant civilian harm because the human reviews of machine decisions are essentially perfunctory.

With this report that was released today, there's a claim that there is human verification of the outputs that the AI systems are generating but that the human verification was done in only 20 seconds, just long enough to see whether the target was male or female before authorizing the bombings.

Regardless of whether that particular claim is actually borne out, there have been numerous academic studies about the risk of automation bias with AI, which I think is clearly at play here. Because the machine is so smart and has all of these data streams and intelligence streams being fed into it, there's a risk that humans don't sufficiently question its output. This risk of automation bias means that even if humans are approving the targets, they could be simply rubber stamping the decision to use force rather than thoroughly looking at the data that the machine has produced and going back and vetting the targets very carefully. That's just not being done, and it might not even be possible given the problems with explainability and traceability for humans to really understand how AI systems are generating these outputs.

This is one of the questions that I asked, by the way, in my article in Just Security in December. Policymakers and the public need to press Israel on this question: What does the human review process really look like for these operations? Is this just rubber stamping the decision to use force, or is there serious review?

RS: In this case, it seems like the impact of AI was amplified by the IDF's use of loose rules of engagement. Can you tell me a little bit more about the relationship between emerging tech and practical policy decisions about how to use it?

Rosen: That's the other problem here. First of all, you have the problem of Israel's interpretation of international law, which is, in some ways, much more permissive than how other states interpret basic principles like proportionality. On top of that, there are inevitably going to be errors made with AI systems, which contributes to civilian harm. This latest report claims that the Lavender system, for example, was wrong 10% of the time. That margin of error could, in fact, be much greater depending on how Israel is classifying individuals as Hamas militants.

The AI systems are trained on data, and Israel has identified certain characteristics of people who they claim are Hamas or Palestinian Islamic Jihad operatives, and then they feed that data into the machine. But what if the features they are identifying are overly broad, such as carrying a weapon, being in a WhatsApp group with someone linked to Hamas, or even just moving house a lot, which everyone, of course, is doing now because it's a whole country of refugees? If these characteristics are fed into AI systems to identify militants, then that's a big concern because the system is going to take that data and misidentify civilians a great part of the time.

Israel can say that it's following international law and that there's human review of all of these decisions, and all of that can be true. But again, it's Israel's interpretation of international law. And it's how they're defining who counts as the combatant in this war and how that data is fed into the AI systems. All of that compounds in a way that can create really serious harm.

I also want to point out that all the well-documented problems with AI in the domestic context — from underlying biases in the algorithms to the problem of hallucination — are certainly going to persist in war, and it's going to be compounded because of the pace of decision making. None of this is going to be reviewed in a very careful way. For example, we know that Israel has a massive surveillance system in the Gaza Strip and that all of this data is being fed into the AI systems to contribute to these targeting outputs. Any underlying biases in those systems will feed into and compound into errors in the final targeting output. If human review is perfunctory, then the result will be significant civilian harm, which is what we have seen.

RS: The U.S. is interested in AI for lots of military applications, including automated swarms of lethal drones. What does Israel's experience tell us about how American policymakers should approach this tech?

Rosen: It tells us that U.S. policymakers have to be extremely circumspect about the use of AI in both intelligence and military operations. The White House and the Department of Defense and other agencies have put forth a number of statements about responsible AI, particularly in a military context. But these have all been very much at the level of principles.

Everything depends on how these broad principles for the responsible use of military AI are operationalized in practice, and, of course, we haven't really had a case yet where we've seen the U.S. in a public way relying on these tools in its conflicts. But that's definitely coming, and the U.S. should use this time now to not only learn all the lessons of what's happening in Gaza, but to be very proactive in operationalizing those broad principles for responsible use of military AI, socializing them among other states, and really leading the world in signing on to these principles for military AI. They have done so to a certain extent, but the progress has been very, very slow. That's what's desperately needed right now.

Palestinians look for survivors after an Israeli airstrike in Rafah refugee camp, southern Gaza Strip, on October 12, 2023. (Anas Mohammad/ Shutterstock)
