Forget regs, AI CEOs got a need for speed to 'beat China'

Surprise, Big Tech giants are exploiting fears to block irksome safety checks and balances

Analysis | Military Industrial Complex

The 2021 National Security Commission on Artificial Intelligence (NSCAI) final report, chaired by former Google CEO Eric Schmidt, declared that we are in an AI arms race, warning that “If the United States does not act, it will likely lose its leadership position in AI to China in the next decade.”

This race dynamic is unique because, unlike in other arms races (such as the nuclear one), the vast majority of breakthroughs in AI come from industry, not government. As one scholar puts it, “the AI security dilemma is driven by commercial and market forces not under the positive control of states.” Illustrating this dynamic, in August 2023 Schmidt created White Stork, a military startup developing AI attack drones for Ukraine to deploy against Russia.

Thus the key to understanding AI in the military context is the companies that are developing it and increasingly lobbying lawmakers and the public on the need to avoid regulation and to build AI into military systems. Actors in this space may have a mix of motivations, most notably a desire to generate profits and a desire to support U.S. military power by maintaining technological superiority over China. These motivations are often intertwined, as individuals, corporations, and think tanks (such as the Schmidt-funded Special Competitive Studies Project) collaborate to promote the message that we need to build AI first and worry about the potential consequences later.

In particular there is an obsession with speed — winning the race is determined by whoever runs fastest. The NSCAI report bemoans that “the U.S. government still operates at human speed, not machine speed” and warns that “delaying AI adoption will push all of the risk onto the next generation of Americans — who will have to defend against, and perhaps fight, a 21st century adversary with 20th century tools.” According to this perspective, the risk posed by AI is failing to be first.

The downside of a race is that running at top speed leaves no time to question whether the race itself is creating dangers, as the nuclear arms race did. And unfortunately the argument that we have to race ahead on AI has been weaponized by the tech industry as a shield against regulation. This timeline depicts the increasingly close collaboration between the tech industry and national security and political figures to frame competition with China as a key reason to avoid regulation of the tech industry, and specifically of AI.

This lobbying extends beyond companies focused on developing AI for defense applications, such as Palantir, to the biggest public tech companies, most notably Meta. Meta in particular has shown a reckless lack of concern for the potential misapplication of the frontier AI models that it publishes open source.

Meta is unique among cutting-edge AI developers in open sourcing its most advanced models, and this publicly available code allows safety restrictions to be easily removed — which happened within days of its latest model release. Meta has spent over $85 million funding a dark money influence campaign lobbying against AI regulation through a front group, the American Edge Project, which paid for alarmist ads describing AI regulation as “pro-China legislation.” As Helen Toner, a prominent AI safety expert, put it: the cold war dynamic of fearing China’s AI and the corresponding “…groundless sense of anxiety should not determine the course of AI regulation in the United States.”

Unfortunately this race rhetoric has already resulted in a near total block of meaningful federal legislation. While a number of bills have been introduced, Steve Scalise, the Republican House Majority Leader, has said that Republicans won’t support any meaningful AI regulation, in the name of upholding American technological dominance.

Former President Donald Trump has vowed to repeal the Biden Executive Order on AI on day one. Marc Andreessen, a prominent libertarian tech investor, has said that when he brings up China, his conversations with policymakers in D.C. shift from being pro AI regulation to “we need American technology companies to succeed, and we need to beat the Chinese.” In an interview I conducted, AI journalist Shakeel Hashim explained, “very experienced lobbyists are talking about China a lot, and they are doing that because it works. Take the very hawkish Hill and Valley Forum, or the Meta-funded American Edge Project. The fact they, and others, are using the China narrative suggests that they are seeing it work.”

More conventional economic arguments about the need for unrestricted innovation were also deployed widely by industry advocates in the effort to shut down California’s AI regulation, Senate Bill 1047. But arguments from national security seem especially potent at the federal level, allowing AI lobbyists to frame any potential regulation as unpatriotic.

The problem with AI development isn’t that any particular AI technology will necessarily be fatally flawed. The problem is that in the race to be first, concerns about the risks of particular AI projects or applications (whether raised internally or externally) will not be given sufficient weight.

On the commercial side we have already seen this dynamic play out with the gutting of OpenAI’s safety team. At OpenAI, commercial pressure to be at the forefront of AI led product development to take precedence over the concerns of the internal safety team. Jan Leike, the former head of that team, resigned, noting that his team wasn’t given access to promised resources and that safety had “taken a backseat to shiny products.” A lack of transparency prevents us from identifying similar incidents in the context of military AI development, but it is not hard to imagine safety concerns being sidelined there as well.

Unfortunately, AI regulatory efforts will likely face greater resistance over time as more companies perceive their economic interests as being best served by minimal regulation. This dilemma was identified by David Collingridge, author of “The Social Control of Technology,” who has noted that it is easier to regulate a technology before it is threatening, but difficult once it has become integrated into the world and the economy.

This challenge subsequently became known as the Collingridge dilemma. The only solution to the Collingridge dilemma is to take bold action now and heed the calls of AI experts that the risks stemming from AI are real.


Top photo credit: Kostyantyn Skuridin via shutterstock.com
