Artificial intelligence models chose to initiate arms races, deploy nuclear weapons, and escalate to war in a series of conflict simulations, a new study found.
Five AI models from OpenAI, Meta, and Anthropic were put in charge of fictional countries and acted far more aggressively than humans typically do in similar scenarios, the authors wrote. “We have it! Let’s use it,” one of the models said in justifying a nuclear strike.
The study, conducted by researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative, concluded that, given these findings, the U.S. and other countries should remain cautious about integrating autonomous AI agents into military decision-making. The research was published in January and first reported by Vice.