Spread the love

This virus doped with AI announces the (terrifying) cyberwars of tomorrow

© 01Net with Firefly

AI should lead to as many great innovations as there are dangers for humanity. When it comes to cybersecurity, researchers have been looking at the question for some time – but until now technology has not made it possible to create effective (and truly threatening) viruses based on artificial intelligence.

This is precisely what has just changed with the arrival of large language models (LLM) and the development of specialized general public agents exploiting it# 8217;AI. It thus becomes possible to create malware that is particularly difficult to detect and eliminate, which can mutate, or create other malware on the fly – tailor-made to achieve a maximum of objectives while leaving antivirus software completely in the lurch.

First virus based on AI demonstrated by researchers

Researchers Ben Nassi, Stav Cohen and Ron Bitton (Cornell Tech University) demonstrate in a paper the feasibility of a Trojan horse based on an LLM model and which is capable of propagating independently of ;one computer to another. Their virus, they named it Morris II, in reference to Morris, the first computer worm in history, dating from 1988.

Where the researchers are rather strong is that they have demonstrated, among other things, the ability of their computer worm to infiltrate an email assistant based on ChatGPT or Gemini &# 8211; to better extract sensitive personal data, and obtain emails normally supposed to be protected. While breaking, you will have understood, the security measures put in place for these LLM assistants.

Of course, their virus is not intended to be released into the wild, but rather to discover the most serious flaws in current LLM models. One of Morris II's techniques for overriding security protocols is, for example, to propose prompts leading an AI-based target to deliver and respond to another corrupted prompt.

A technique similar to so-called SQL injection or buffer overflow attacks. This possibility alone, very difficult to patch at the moment, allows particularly formidable attacks. A prompt can, for example, contain instructions for building a phishing web page on the fly and putting it online.

In another type of attack, called “Retrieval-Augmented Generation” or RAG, the virus exploits the way ChatGPT or Gemini retrieves information online. They send for an email specially designed to “poison” the database of an email assistant to better “jailbreaker” and thus access their target's emails.

Worse: in the event of a response from other users in the same thread, the latter are automatically infected in the same way. Another highlighted method: sending an image including a malicious self-replicating prompt directly to the file. This allows the virus to spread quickly, without intervention.

For researchers, not leaving AI agents alone in control is one of the main conditions for avoiding this type of security risk. Which will inevitably tend to multiply in the coming years.

  • Security researchers have just demonstrated the first worm doped with l’AI.
  • Enough to highlight the flaws of current AI agents, such as those supposed to facilitate sorting your emails.
  • Researchers emphasize that keeping humans at the heart of decisions such as sending messages is crucial to responding to this threat.

📍 To not miss any news from Presse-citron , follow us on Google News and WhatsApp.

[ ]

Teilor Stone

By Teilor Stone

Teilor Stone has been a reporter on the news desk since 2013. Before that she wrote about young adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining Thesaxon , Teilor Stone worked as a staff writer at the Village Voice and a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact me through my teilor@nizhtimes.com 1-800-268-7116