
This study debunks the myth of AI as an existential threat


© Image generated by AI DALL-E for Presse-Citron

From HAL 9000 (2001: A Space Odyssey) to Skynet (Terminator), by way of the Cylons (Battlestar Galactica), popular culture is full of examples of artificial intelligences turning against their creators. These stories, anchored in our collective imagination, have fueled growing apprehension about the rise of large language models (LLMs) such as OpenAI's GPT-4 or Google's Gemini. What seemed like fiction 10 years ago is now before our eyes, and the fantasy of the rebellious machine has never been as present as it is today.

However, a team of researchers led by Iryna Gurevych of the Technical University of Darmstadt and Harish Tayyar Madabushi of the University of Bath has shed new light on this burning question. Their work was published at the 62nd Annual Meeting of the Association for Computational Linguistics.

Unexpected Limitations: The Achilles Heel of LLMs

Contrary to widespread fears, the study reveals that these sophisticated models are in fact deeply constrained by their initial programming. Unable to learn new skills autonomously, they remain firmly under human control. Aside from rare exceptions, such as MIT's very recent EES algorithm, this is the reality for the vast majority of LLMs.

“There is concern that as models get larger, they will be able to solve new problems that we cannot currently predict, which poses the risk that these larger models could acquire dangerous capabilities, including reasoning and planning,” explains Tayyar Madabushi. The researcher then tempers: “Our study shows that the fear that a model will escape and do something completely unexpected, innovative and potentially dangerous is unfounded.”

This conclusion is based on a series of experiments conducted on four different LLMs. The researchers subjected these systems to tasks previously identified as “emergent”, that is, potentially capable of generating unexpected or deviant behaviors. The results are clear: no evidence of differentiated thought, or of an ability to act outside their programming, was observed.


The Illusion of Emergence: Deconstructing the Myth

The apparent sophistication of LLMs, which are capable of conducting coherent and natural conversations, has led some researchers to speculate about the existence of emergent capabilities. One frequently cited example was an LLM able to answer questions about social situations without having been explicitly trained to do so. However, the study by Gurevych and Tayyar Madabushi demonstrates that this prowess can be explained by the combination of three factors: the ability to follow instructions, memorization, and linguistic mastery.

No trace of true intelligence or autonomous thought has been detected. And this is where the difference lies between what we call AI today (in reality a simple algorithmic parrot) and real artificial intelligence, which does not yet exist. A true artificial intelligence would learn on its own and formulate choices or opinions, which is not the case today. The term AI is actually a misnomer, because current systems only process and analyze data according to rules pre-established by human programmers, without any real understanding or awareness.

The Real Danger: The Human Use of AI

While the study rules out the specter of an uncontrollable AI, it does not, however, minimize the risks linked to its use. Gurevych specifies: “Our results do not mean that AI poses no threat. Rather, we show that the presumed emergence of complex thinking skills associated with specific threats is not supported by evidence.”

The real challenges are intimately linked to the use that we humans make of these technologies: increasing energy consumption, copyright issues, the spread of false information or digital pollution in the broad sense of the term.

Thus, far from being an invincible colossus, AI appears to be a powerful but limited tool, whose potential dangers lie more in its use than in a hypothetical rebellion. This study therefore invites us to keep our feet on the ground, to recalibrate our fears and to focus our efforts on the ethical and responsible use of AI. A tool is never dangerous in itself; it becomes so when the person using it misuses it, an observation that has not changed much in a few thousand years.

  • A recent study shows that advanced language models (LLMs) are incapable of developing threatening autonomous intelligence.
  • The apparently emergent capabilities of LLMs can be explained by their initial programming and not by true intelligence.
  • The real danger of AI lies in its use by humans rather than in a hypothetical rebellion of machines.



Teilor Stone

