
This study debunks the myth of AI as an existential threat

© Image generated by AI DALL-E for Presse-Citron

From HAL 9000 (2001: A Space Odyssey) to Skynet (Terminator), by way of the Cylons (Battlestar Galactica), popular culture is full of artificial intelligences that turn against their creators. These stories, anchored in our collective imagination, have fueled a growing apprehension about the rise of large language models (LLMs) such as ChatGPT or Google's Gemini. What seemed like fiction ten years ago is now before our eyes, and the fantasy of the rebellious machine has never been more present than it is today.

However, a team of researchers led by Iryna Gurevych of the Technical University of Darmstadt and Harish Tayyar Madabushi of the University of Bath has shed new light on this burning question. Their work was published at the 62nd Annual Meeting of the Association for Computational Linguistics.

Unexpected Limitations: The Achilles Heel of LLMs

Contrary to widespread fears, the study reveals that these sophisticated models are in fact deeply constrained by their initial programming. Unable to learn new skills autonomously, they remain firmly under human control. Rare examples aside, such as the very recent EES algorithm designed at MIT, this is the reality for the vast majority of LLMs.

“There is concern that as models get larger, they will be able to solve new problems that we cannot currently predict, which poses the risk that these larger models could acquire dangerous capabilities, including reasoning and planning,” explains Tayyar Madabushi. The researcher then tempers: “Our study shows that the fear that a model will escape and do something completely unexpected, innovative and potentially dangerous is unfounded.”

This conclusion is based on a series of experiments carried out on four different LLMs. The researchers subjected these systems to tasks previously identified as “emergent”, that is, potentially capable of producing unexpected or deviant behavior. The results are clear: no evidence of differentiated thinking, or of any ability to act outside their programming, was observed.


The illusion of emergence: deconstructing the myth

The apparent sophistication of LLMs, capable of holding coherent and natural conversations, has led some researchers to speculate about the existence of emergent capabilities. One example often cited was an LLM able to answer questions about social situations without having been explicitly trained to do so. However, the study by Gurevych and Tayyar Madabushi shows that these feats can be explained by a combination of three factors: the ability to follow instructions, memorization, and linguistic mastery.

No trace of true intelligence or autonomous thought was detected. And this is where the difference lies between what we call AI today (in reality a simple algorithmic parrot) and a true artificial intelligence, which does not yet exist. A true artificial intelligence would learn by itself and form its own choices or opinions, which is not the case today. The term AI is, in truth, a misnomer: current systems only process and analyze data according to rules pre-established by human programmers, without real understanding or awareness.

The real danger: human use of AI

While the study dismisses the specter of uncontrollable AI, it does not minimize the risks associated with its use. Gurevych specifies: “Our results do not mean that AI poses no threat. Rather, we show that the alleged emergence of complex thinking skills associated with specific threats is not supported by evidence.”

The real challenges are intimately related to the use that we humans make of these technologies: increasing energy consumption, copyright issues, dissemination of false information or digital pollution in the broad sense of the term.

Thus, far from being an invincible colossus, AI appears as a powerful but limited tool, whose potential dangers lie more in its use than in a hypothetical rebellion. This study therefore invites us to keep our feet on the ground, to recalibrate our fears, and to focus our efforts on the ethical and responsible use of AI. A tool is never dangerous in itself; it becomes dangerous only when the person using it misuses it, an observation that has hardly changed in several thousand years.

  • A recent study demonstrates that advanced language models (LLMs) are incapable of developing threatening autonomous intelligence.
  • The apparently emergent abilities of LLMs can be explained by their initial programming and not by real intelligence.
  • The real danger of AI lies in its use by humans rather than in a hypothetical rebellion of the machines.




By Teilor Stone
