Rhetoric from the Pentagon and the arms industry suggests that integrating artificial intelligence, or AI, into U.S. weapons, communications, and surveillance systems will improve efficiency, innovation, and national security.
The Pentagon is beginning to back its rhetoric on emerging technology with resources. The department’s Office of Strategic Capital now has the authority to extend loans and loan guarantees to firms researching and developing 14 “critical technologies,” including hypersonics, quantum computing, microelectronics, autonomous systems, and artificial intelligence.
Meanwhile, the Senate version of the National Defense Authorization Act authorizes the Advanced Defense Capabilities Pilot, which contains a mandate to establish public-private partnerships with the goal of “leverag[ing] private equity capital to accelerate domestic defense scaling, production, and manufacturing.”
Proponents argue that the rapid development and deployment of autonomous systems, pilotless vehicles, and hypersonic weapons will shorten the time between recognizing a potential threat and destroying it — a process analysts and military leaders often refer to as shortening the “kill chain.” This shift is portrayed as a positive development, when in fact it could easily enable deadly escalations by accident or design.
A case in point is Israel’s use of targeting systems incorporating AI to generate targets for military strikes in its brutal siege on Gaza. A recent investigation revealed the use of “Lavender,” an AI-based program developed by the Israeli army designed to identify all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad as potential bombing targets. Rather than using this capability to focus on discrete targets and spare civilians, the Israel Defense Forces are using Lavender to multiply the number of targets to attack in a given time frame, increasing the pace of attack and the number of casualties, which now stand at over 33,000 deaths and tens of thousands injured.
The investigation revealed that the Israeli army preferred to use only unguided munitions, commonly known as “dumb” bombs (in contrast to “smart” precision bombs), to target alleged junior militants marked by Lavender. These bombs can indiscriminately destroy entire buildings and cause significant casualties.
“You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs],” said C., one of the intelligence officers speaking to +972 Magazine, which broke the story on Lavender.
Lavender is not the first AI system the Israeli military has fielded. “The Gospel,” another system largely built on AI, is said to generate targets at a fast pace. As noted by +972 Magazine, “A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list.” Far from enabling more precise strikes that reduce civilian harm, the AI-targeted attacks increased impunity in the bombing of Gaza. As a member of the Israeli military posted to Gaza put it, “I don’t know how many people I killed as collateral damage … the focus was on creating as many targets as quickly as possible.”
The U.S. Congress has demonstrated its commitment to spurring on “collaborative defense projects between the United States and Israel in emerging technologies” through bills such as the United States-Israel Future of Warfare Act, which is just one avenue through which the United States continues to fund and support Israeli military operations. In February, the Senate approved an additional $14.1 billion for Israeli military operations via a supplemental funding package, but the fate of that aid package awaits action by the House.
But some members of Congress have pushed back against the risks of emerging technologies by introducing legislation to establish governance and regulation of AI. The Federal AI Governance and Transparency Act, for example, aims to ensure that “the design, development, acquisition, use, management, and oversight of artificial intelligence in the Federal Government… [is] consistent with the Constitution and any other applicable law and policy, including those addressing freedom of speech, privacy, civil rights, civil liberties, and an open and transparent Government.”
Accidents in the use of AI systems have their own potentially dire consequences, as pointed out by Michael Klare in a report for the Arms Control Association: “many analysts have cautioned against proceeding with such haste until more is known about the inadvertent and hazardous consequences of doing so. Analysts worry, for example, that AI-enabled systems may fail in unpredictable ways, causing unintended human slaughter or uncontrolled escalation.”
The Pentagon has paid lip service to the potential dangers posed by widespread weaponization of AI, but its calls for responsible use of these systems ring hollow in the face of its public commitments to deploy advanced technology as quickly as possible. Last August, Deputy Secretary of Defense Kathleen Hicks unveiled her department’s “Replicator Initiative” in front of an audience of arms-producing companies, pledging to deploy large numbers of new systems by late 2025, possibly including “swarms of drones” designed to overwhelm Chinese defenses in a potential U.S.-China conflict.
Meanwhile, venture capital firms like Andreessen Horowitz and the Founders Fund are pouring billions of dollars into emerging military tech startups, hoping to cash in when some of them become major Pentagon contractors. In addition, these firms have been rushing to increase their lobbying clout by hiring dozens of ex-military officers as advisers and advocates for higher Pentagon spending on AI-driven systems.
The promoters of these new battlefield technologies are marketing them with evangelical fervor, suggesting that not only are they central to being able to “beat” China in a conflict, but that they are the key to restoring U.S. global military dominance. At a time when cooperation between Washington and Beijing is essential for addressing urgent threats like climate change, pandemics, and global poverty, cheerleading for a new high-tech arms race with China is both dangerous and counterproductive.
So what is to be done? First, there needs to be greater transparency about new weapons systems in development, how they might be used, and whether the technology is being shared with other nations. Also, the revolving door between the Pentagon and the emerging tech sector needs to be carefully regulated, including prohibitions on direct lobbying of former colleagues still in government.
In addition, Washington should consider the calls of scientists and advocates for a ban on robotic weapons and, in the meantime, increase transparency, regulation, and oversight of these technologies. And all this needs to be coupled with a rethinking of U.S. global strategy that reduces reliance on military intervention and prioritizes diplomacy in U.S. interactions with governments, organizations, and individuals.
Developing a new generation of military technology will not solve our world’s most pressing problems, and there is a strong chance that it will make them worse. The time to push back against the illusions promoted by the people who will profit from taking AI to war is now.