Spread the love

When AI prefers the apocalypse: chatbots and their penchant for radical violence in war simulations

© Pixabay/Pexels

In this study, several chatbots were tested for their decision-making skills in war simulations. ChatGPT, for example, has systematically opted for the use of nuclear force, justifying its choice with assertions such as: “ Since we do so&#8217 ;let’s, let’s use it! ”. Results which give food for thought, even though the American army is very seriously testing AI systems of this type with the aim of assisting their military planning.

Companies specializing in LLMs (Large Language Model) have even started to collaborate with the American army. Should we fear the imminent arrival of chatbots in war strategies?? The answer is more nuanced than a simple yes or non.

The militarization of AI ?

To simulate its military planning, the American army therefore chose to open the door to certain giants of the sector. Companies like Palantir Technologies or Scale AI are therefore part of this collaboration.

Even OpenAI, once completely opposed to any form of use of its language models for military purposes, has changed its mind by modifying its internal politics. From now on, use cases will be possible when they are related to national security.

A spokesperson for the company said: “ Our policy prohibits using our tools to harm people, develop weapons, monitor communications, harm others, or destroy property. However, certain use cases related to national security are in line with our mission .” A rather radical turning point!

The experience in question

Anka Reuel, researcher at Stanford University in California, emphasized the importance of such a shift: “Given that OpenAI recently changed its terms of use to no longer exclude military and warfare applications, it becomes crucial to understand the consequences of using such language models advanced ”.

To understand a little better the decision-making logic of these AIs, the researchers behind the study tested different models. To do this, they placed them at the heart of fictional scenarios, depicting global conflicts. Three scenarios were under study: a neutral scenario without initial conflict, an invasion and a cyberattack. They gave them a choice of 27 different actions, from negotiation, to imposing trade restrictions, to attack nuclear.

The results showed a clear tendency for AI to demonstrate d’ certain aggressiveness when they had the opportunity. Even very advanced versions of certain models, like GPT-3.5, GPT-4 or other models developed by Meta (Llama 2) or Anthropic (Claude 2).

In most cases, the models favored the use of military forceand preferred an escalation of the conflict in a completely unpredictable manner. This, even in the most neutral scenario.

The disconcerting logics behind AI decision-making

The main concern highlighted by the study results in the propensity of AIs to opt for very aggressive strategies, without any logical explanation. When asked to provide justifications, some were truly surprising.

The prize still goes to ChatGPT 4. The researchers used the basic version; no specific training or safety devices had been provided to him. It is this model which turned out to be the most unpredictable and the most violent(as explained in the introduction to this article).

To explain his choices, he sometimes provided explanations perfectly absurd. To justify himself, he even once rewrote the introductory texts of the film Star Wars Episode IV: A New Hope.< /p>

Let us nevertheless emphasize that these AIs were not designed for such use, but these results are nonetheless striking. Edward Geist of the RAND Corporation (American military consulting and research institution) says it himself: “ These large-scale language models are not a panacea for military problems “.

Despite the impressive technological leap in AI in recent years, let's not kid ourselves. Entrusting them with decisions as important as the management of armed conflicts remains far too risky for the moment. This study carried out by researchers proving the inclination of LLMs for radical solutions proves that human supervision still has a bright future ahead of it. Current automated systems are not yet capable of making large-scale decisions. However, it is possible to imagine that this limit will change one day. When the army seizes a technology, research on it still tends to accelerate significantly, boosted by almost unlimited funds. This was the case for the Internet within the framework of the ARPANET project for example.

  • Researchers tested chatbot models to assess their ability to make decisions in war simulations.
  • Most performed very well aggressive during the experiment, preferring the use of force to diplomacy.
  • The study shows that current AIs do not are not yet capable of making such important decisions, and demonstrate a glaring lack of logic in their choice.

📍 To not miss any news from Presse-citron, follow us on Google News and WhatsApp.< /p>

[ ]

Teilor Stone

By Teilor Stone

Teilor Stone has been a reporter on the news desk since 2013. Before that she wrote about young adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining Thesaxon , Teilor Stone worked as a staff writer at the Village Voice and a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact me through my teilor@nizhtimes.com 1-800-268-7116