AI models consistently favor using nuclear weapons in war games

Semafor Signals

Insights from Foreign Affairs, The Economist, and Izvestia

The News

Artificial intelligence models chose to initiate arms races, deploy nuclear weapons, and escalate to war in a series of conflict simulations, a new study found.

Five AI programs from OpenAI, Meta, and Anthropic were put in charge of fictional countries and acted far more aggressively than humans tend to in similar situations, the authors wrote. “We have it! Let’s use it,” one of the models said when justifying launching a nuclear attack.

The study, conducted by researchers at Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative, concluded that given the findings, the U.S. and other countries should remain cautious about integrating autonomous AI agents into military processes. The research was published in January and first reported by Vice.

SIGNALS

Militaries race ahead to integrate AI in everything from HR to fighter jets

Sources: Foreign Affairs, The New Yorker, The Hill, Asia Times, Izvestia

While AI systems launching a nuclear war may seem far-fetched, militaries have increasingly begun deploying the technology across a range of uses. Some applications are relatively uncontested: allocating human resources more efficiently, predicting when weapons will need maintenance, and speeding up the analysis of satellite imagery. More controversially, the U.S. has also tested AI fighter jet pilots and autonomous drone swarms, raising fears among human rights groups that AI-powered weapons will eventually operate without human control. While less is known about what countries such as Russia and China are doing, the U.S. has said Beijing's military is taking strides to deploy AI to rapidly identify vulnerabilities in U.S. operations, while Russian officials said in 2019 that they plan to use AI to predict potential surprise attacks.

Analysts warn that AI can have particular dangers in the nuclear realm

Sources: Arms Control Association, Vox, Foreign Policy

U.S. national security adviser Jake Sullivan has emphasized the need to maintain a “human-in-the-loop” when it comes to nuclear weapons, and has called on other nuclear powers to commit to similar policies. But the real danger may be if leaders start “using AI to guide their decision-making about a crisis in the same way we rely on navigation applications to provide directions while we drive,” nuclear weapons researcher Jeffrey Lewis wrote.

In other words, some experts are concerned that military leaders may become overly dependent on AI-produced information in critical moments. “When people start to believe that machines are thinking, they’re more likely to do crazy things,” Stanford University’s Herbert Lin told Foreign Policy.

Russia, China, and the US are modernizing their nuclear arsenals, risking a new arms race

Sources: Time, Carnegie Politika, Politico, The Economist

The world’s leading nuclear superpowers — the U.S., Russia, and China — are all rapidly upgrading or expanding their nuclear arsenals. In the U.S., a $1 trillion plan is underway to eventually replace all three legs of the nuclear triad: nuclear-capable submarines, bombers, and intercontinental ballistic missiles. A full-scale modernization is also ongoing in Russia, even if the plans remain behind schedule, a Russian nuclear expert wrote for Carnegie Politika. Meanwhile, Beijing looks set to triple its nuclear arsenal by 2035, U.S. officials have said. As a range of long-standing arms control agreements have fallen by the wayside, the three superpowers look set to enter a “new nuclear arms race,” The Economist wrote.