Forget regs, AI CEOs got a need for speed to 'beat China'

Surprise, Big Tech giants are exploiting fears to block irksome safety checks and balances

Analysis | Military Industrial Complex

The 2021 final report of the National Security Commission on Artificial Intelligence (NSCAI), a body chaired by former Google CEO Eric Schmidt, declared that the United States is already in an AI arms race, warning that “If the United States does not act, it will likely lose its leadership position in AI to China in the next decade.”

This race dynamic is unique because, unlike in other arms races (such as the nuclear one), the vast majority of breakthroughs in AI come from industry, not government. As one scholar puts it, “the AI security dilemma is driven by commercial and market forces not under the positive control of states.” Illustrating this dynamic, in August 2023 Schmidt created White Stork, a military startup that is developing AI attack drones for Ukraine to deploy against Russia.

The key to understanding AI in the military context is therefore the companies that are developing it and increasingly lobbying lawmakers and the public on the need to avoid regulation and to build AI into military systems. Actors in this space may have a mix of motivations, the most notable being a desire to generate profits and a desire to bolster U.S. military power by maintaining technological superiority over China. These motivations are often intertwined, as individuals, corporations, and think tanks (such as the Schmidt-funded Special Competitive Studies Project) collaborate to promote the message that we need to build AI first and worry about the potential consequences later.

In particular, there is an obsession with speed: the race goes to whoever runs fastest. The NSCAI report bemoans that “the U.S. government still operates at human speed, not machine speed” and warns that “delaying AI adoption will push all of the risk onto the next generation of Americans — who will have to defend against, and perhaps fight, a 21st century adversary with 20th century tools.” From this perspective, the real risk posed by AI is failing to be first.

The downside of a race is that running at top speed leaves no time to ask whether the race itself is creating dangers, as the nuclear arms race did. And unfortunately, the argument that we have to race ahead on AI has been weaponized by the tech industry as a shield against regulation. This timeline depicts the increasingly close collaboration between the tech industry and national security and political figures to frame competition with China as a key reason to avoid regulating the tech industry, and AI specifically.

This lobbying extends beyond the companies focused on developing AI for defense applications, such as Palantir, to the biggest public companies, most notably Meta. Meta in particular has shown a reckless lack of concern for the potential misapplication of the frontier AI models it publishes as open source.

Meta is unique among cutting-edge AI developers in open sourcing its most advanced models, and this publicly available code allows safety restrictions to be easily removed, as happened within days of its latest model release. Meta has also spent over $85 million funding a dark-money influence campaign against AI regulation through a front group, the American Edge Project, which paid for alarmist ads describing AI regulation as “pro-China legislation.” As Helen Toner, a prominent AI safety expert, put it, the cold war dynamic of fearing China’s AI and a corresponding “…groundless sense of anxiety should not determine the course of AI regulation in the United States.”

Unfortunately, this race rhetoric has already produced a near-total block on meaningful federal legislation. While a number of bills have been introduced, Republican House Majority Leader Steve Scalise has said that Republicans won’t support any meaningful AI regulation, in the name of upholding American technological dominance.

Former President Donald Trump has vowed to repeal the Biden executive order on AI on day one. Marc Andreessen, a prominent libertarian tech investor, has said that his conversations with policymakers in Washington shift from support for AI regulation to “we need American technology companies to succeed, and we need to beat the Chinese” the moment he brings up China. In an interview I conducted, AI journalist Shakeel Hashim explained, “very experienced lobbyists are talking about China a lot, and they are doing that because it works. Take the very hawkish Hill and Valley Forum, or the Meta-funded American Edge Project. The fact they, and others, are using the China narrative suggests that they are seeing it work.”

While industry advocates have also deployed more conventional economic arguments about the need for unrestricted innovation, for example in trying to kill California’s AI regulation bill, Senate Bill 1047, national security arguments appear especially potent at the federal level, allowing AI lobbyists to frame any potential regulation as unpatriotic.

The problem with AI development isn’t that any particular AI technology will necessarily be fatally flawed. The problem is that in the race to be first, concerns about the risks of particular AI projects or applications, whether raised internally or externally, will not be given sufficient weight.

On the commercial side, we have already seen this dynamic play out with the gutting of OpenAI’s safety team. At OpenAI, commercial pressure to stay at the forefront of AI led product development to take priority over the concerns of the internal safety team. Jan Leike, the former head of that team, resigned and said his team had not been given access to promised resources and that safety had “taken a backseat to shiny products.” A lack of transparency prevents us from identifying similar incidents in military AI development, but it is not hard to imagine safety concerns being sidelined there as well.

Unfortunately, AI regulatory efforts will likely face greater resistance over time as more companies come to see their economic interests as best served by minimal regulation. This dilemma was identified by David Collingridge, author of “The Social Control of Technology,” who noted that it is easier to regulate a technology before it becomes threatening, but far harder once it has become integrated into the world and the economy.

This challenge became known as the Collingridge dilemma. The only way out of it is to take bold action now and heed the warnings of AI experts that the risks stemming from AI are real.


Photo credit: Kostyantyn Skuridin via shutterstock.com
