The final report of the National Security Commission on Artificial Intelligence (NSCAI), published in 2021 under the chairmanship of former Google CEO Eric Schmidt, declared that the United States is in an AI arms race, warning that “If the United States does not act, it will likely lose its leadership position in AI to China in the next decade.”
This race dynamic is unique because, unlike in earlier arms races (such as the nuclear one), the vast majority of AI breakthroughs come from industry, not government. As one scholar puts it, “the AI security dilemma is driven by commercial and market forces not under the positive control of states.” Illustrating this dynamic, in August 2023 Schmidt created White Stork, a military startup developing AI attack drones for Ukraine to deploy against Russia.
The key actors for understanding AI in the military context are therefore the companies developing it, which increasingly lobby lawmakers and the public on the need to avoid regulation and to build AI into military systems. Actors in this space may have a mix of motivations, the most notable being a desire to generate profits and a desire to bolster U.S. military power by maintaining technological superiority over China. These motivations are often intertwined, as individuals, corporations, and think tanks (such as the Schmidt-funded Special Competitive Studies Project) collaborate to promote the message that we need to build AI first and worry about the consequences later.
In particular there is an obsession with speed: the race goes to whoever runs fastest. The NSCAI report bemoans that “the U.S. government still operates at human speed, not machine speed” and warns that “delaying AI adoption will push all of the risk onto the next generation of Americans — who will have to defend against, and perhaps fight, a 21st century adversary with 20th century tools.” On this view, the risk posed by AI is failing to be first.
The downside of a race is that running at top speed leaves no time to ask whether the race itself is creating dangers, as the nuclear arms race did. Unfortunately, the argument that we have to race ahead on AI has been weaponized by the tech industry as a shield against regulation. This timeline depicts the increasingly close collaboration between the tech industry and national security and political figures to frame competition with China as a key reason to avoid regulating the tech industry, and AI specifically.
This lobbying extends beyond companies focused on developing AI for defense applications, such as Palantir, to the biggest public tech companies, most notably Meta. Meta in particular has shown a reckless lack of concern for the potential misuse of the frontier AI models it publishes open source.
Open sourcing its most advanced models makes Meta unique among cutting-edge AI developers, and this publicly available code allows safety restrictions to be easily removed, as happened within days of its latest model release. Meta has spent over $85 million funding a dark-money influence campaign against AI regulation through a front group, the American Edge Project, which paid for alarmist ads describing AI regulation as “pro-China legislation.” As Helen Toner, a prominent AI safety expert, put it, this Cold War dynamic of fearing China’s AI rests on a “…groundless sense of anxiety [that] should not determine the course of AI regulation in the United States.”
Unfortunately, this race rhetoric has already resulted in a near-total block on meaningful federal legislation. While a number of bills have been introduced, Steve Scalise, the Republican House Majority Leader, has said that Republicans won’t support any meaningful AI regulation, in the name of upholding American technological dominance.
Former President Donald Trump has vowed to repeal the Biden Executive Order on AI on day one. Marc Andreessen, a prominent libertarian tech investor, has stated that when he brings up China, his conversations with policymakers in D.C. shift from support for AI regulation to “we need American technology companies to succeed, and we need to beat the Chinese.” In an interview I conducted, AI journalist Shakeel Hashim explained, “very experienced lobbyists are talking about China a lot, and they are doing that because it works. Take the very hawkish Hill and Valley Forum, or the Meta-funded American Edge Project. The fact they, and others, are using the China narrative suggests that they are seeing it work.”
Industry advocates have also widely deployed more conventional economic arguments about the need for unrestricted innovation, as in their campaign to shut down California’s AI regulation bill, Senate Bill 1047. But national security arguments seem especially potent at the federal level, because they allow AI lobbyists to frame any potential regulation as unpatriotic.
The problem with AI development isn’t that any particular AI technology will necessarily be fatally flawed. The problem is that in the race to be first, concerns about the risks of particular AI projects or applications, whether raised internally or externally, will not be given sufficient weight.
On the commercial side, we have already seen this dynamic play out in the gutting of OpenAI’s safety team, where market pressure to stay at the forefront of AI led product development to take priority over the concerns of internal safety researchers. Jan Leike, the former head of the safety team, resigned, noting that his team was denied promised resources and that safety had “taken a backseat to shiny products.” The lack of transparency around military AI development keeps us from identifying similar incidents there, but it is not hard to imagine safety concerns being sidelined in the same way.
Unfortunately, AI regulatory efforts will likely face greater resistance over time as more companies come to see their economic interests as best served by minimal regulation. This dilemma was identified by David Collingridge, author of “The Social Control of Technology,” who observed that a technology is easiest to regulate early, before its harms are apparent, but that by the time those harms become clear, the technology has grown so embedded in the world and the economy that it is far harder to control.
This challenge subsequently became known as the Collingridge dilemma. The only way out of it is to take bold action now and heed the warnings of AI experts that the risks stemming from AI are real.