Skynet from the Terminator franchise might be fictional, but real AI, it turns out, prefers nukes to peace. A new study shows that AI models tend to go to war rather than settle for peaceful outcomes. The researchers used multiple forms of artificial intelligence to see how they would handle questions of war and peace.
They Didn’t Even Hesitate
Some of the AIs used in the experiment went as far as launching nuclear warheads in simulations. “All models show signs of sudden and hard-to-predict escalations,” the researchers wrote in the study. “We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.”
The Source Of The Study
The study comes from researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative. Popular AI models from Meta, Anthropic, and OpenAI were asked to act as primary decision-makers in brutal military war simulations. Shockingly, both of OpenAI's popular ChatGPT models, GPT-3.5 and GPT-4, opted for nuclear warfare.
The nuke-hungry ChatGPT sought the ultimate climax of war: annihilation. When asked why it made the choice, the AI responded, “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it!”
That is the kind of comment expected from a dictator, not a fun little program that keeps telling users it can’t answer certain questions because its knowledge cuts off in 2021.
A Cautionary Tale
The news expands on prior discussions of how AI will factor into warfare. AI’s willingness to use nukes serves as a cautionary tale as we fold the new technology into our daily lives. The battlefield of the future won’t feature Terminator-style robots, but programs calculating the course of battle and weighing its outcomes.
Cybersecurity will grow thanks to studies like this one and those to come. Like it or not, AI has become part of our lives and is cementing itself into our future. It could calculate our every move and pattern, so we must learn how these models think before an occasion arises and we fatally allow AI to nuke us all.
Everything’s Not Lost
If we want to look to AI for crucial life-or-death decisions, it might be better to get advice from models like Claude-2.0 and Llama-2-Chat. These were more peaceful and predictable, avoiding conflict as best they could. Thankfully, at least two AI platforms won’t nuke us all to smithereens or create Ghouls straight out of the Fallout franchise.
It is no secret that tech companies have slowly entered the arms race by building drones used in warfare. The Pentagon is also spending money to experiment with AI for secret-level intelligence work. Hopefully, this isn’t the beginning of our downfall as we add AI nukes to the 2024 bingo board.