Forget regs, AI CEOs got a need for speed to 'beat China'

Surprise, Big Tech giants are exploiting fears to block irksome safety checks and balances

Analysis | Military Industrial Complex

The 2021 final report of the National Security Commission on Artificial Intelligence (NSCAI), chaired by former Google CEO Eric Schmidt, declared that the United States is already in an AI arms race, warning that “If the United States does not act, it will likely lose its leadership position in AI to China in the next decade.”

This race dynamic is unique because, unlike earlier arms races such as the nuclear one, the vast majority of AI breakthroughs come from industry, not government. As one scholar puts it, “the AI security dilemma is driven by commercial and market forces not under the positive control of states.” Illustrating this dynamic, in August 2023 Schmidt created White Stork, a military startup that is developing AI attack drones for Ukraine to deploy against Russia.

The key actors for understanding AI in the military context are therefore the companies that are developing AI and increasingly lobbying lawmakers and the public on the need to avoid regulation and to build AI into military systems. Actors in this space may have a mix of motivations, most notably a desire to generate profits and a desire to bolster U.S. military power by maintaining technological superiority over China. These motivations are often intertwined, as individuals, corporations, and think tanks (such as the Schmidt-funded Special Competitive Studies Project) collaborate to promote the message that we need to build AI first and worry about the potential consequences later.

In particular, there is an obsession with speed: whoever runs fastest wins the race. The NSCAI report bemoans that “the U.S. government still operates at human speed, not machine speed” and warns that “delaying AI adoption will push all of the risk onto the next generation of Americans — who will have to defend against, and perhaps fight, a 21st century adversary with 20th century tools.” According to this perspective, the real risk posed by AI is failing to be first.

The downside of a race is that running at top speed leaves no time to ask whether the race itself is creating dangers, as the nuclear arms race did. Unfortunately, the argument that we have to race ahead on AI has been weaponized by the tech industry as a shield against regulation. A timeline of recent events depicts increasingly close collaboration between the tech industry and national security and political figures to frame competition with China as a key reason to avoid regulating the tech industry, and specifically AI.

This lobbying extends beyond companies focused on developing AI for defense applications, such as Palantir, to the biggest public companies, most notably Meta. Meta in particular has shown a reckless lack of concern for potential misapplication of the frontier AI models it publishes open source.

Open sourcing its most advanced models makes Meta unique among cutting-edge AI developers, and this publicly available code allows safety restrictions to be easily removed, which happened within days of its latest model release. Meta has also spent over $85 million funding a dark money influence campaign lobbying against AI regulation through a front group, the American Edge Project, which paid for alarmist ads describing AI regulation as “pro-China legislation.” As Helen Toner, a prominent AI safety expert, put it, this cold war dynamic of fearing China’s AI produces a “…groundless sense of anxiety [that] should not determine the course of AI regulation in the United States.”

Unfortunately, this race rhetoric has already resulted in a near-total block of meaningful federal legislation. While a number of bills have been introduced, Republican House Majority Leader Steve Scalise has said that Republicans won’t support any meaningful AI regulation, in the name of upholding American technological dominance.

Former President Donald Trump has vowed to repeal the Biden Executive Order on AI on day one. Marc Andreessen, a prominent libertarian tech investor, has said that when he brings up China, his conversations with policymakers in D.C. shift from supporting AI regulation to insisting that “we need American technology companies to succeed, and we need to beat the Chinese.” In an interview I conducted, AI journalist Shakeel Hashim explained, “very experienced lobbyists are talking about China a lot, and they are doing that because it works. Take the very hawkish Hill and Valley Forum, or the Meta-funded American Edge Project. The fact they, and others, are using the China narrative suggests that they are seeing it work.”

More conventional economic arguments about the need for unrestricted innovation were also deployed widely by industry advocates trying to shut down California’s AI regulation, Senate Bill 1047. But national security arguments appear especially potent at the federal level, allowing AI lobbyists to frame any potential regulation as unpatriotic.

The problem with AI development isn’t that any particular AI technology will necessarily be fatally flawed. The problem is that in the race to be first, concerns about the risks of particular AI projects or applications, whether raised internally or externally, will not be given sufficient weight.

On the commercial side, we have already seen this dynamic play out in the gutting of OpenAI’s safety team. At OpenAI, the commercial pressure to stay at the forefront of AI led product development to take precedence over the concerns of the internal safety team. Jan Leike, the former head of that team, resigned and noted that his team wasn’t given access to promised resources and that safety had “taken a backseat to shiny products.” A lack of transparency prevents us from identifying similar incidents in the context of military AI development, but it is not hard to imagine safety concerns being sidelined there as well.

Unfortunately, AI regulatory efforts will likely face greater resistance over time as more companies perceive their economic interests as best served by minimal regulation. This dilemma was identified by David Collingridge, author of “The Social Control of Technology,” who noted that it is easy to regulate a technology before it becomes threatening, but difficult once it has been integrated into the world and the economy.

This challenge subsequently became known as the Collingridge dilemma. The only way out of it is to take bold action now and heed the warnings of AI experts that the risks stemming from AI are real.

Top image credit: Kostyantyn Skuridin via shutterstock.com
