
Hacking becomes child's play with these ChatGPT-style tools built for cybercriminals

© Pexels/Pavel Danilyuk

ChatGPT and its rivals have been available for almost two years now, and we are still far from measuring the full consequences of these technologies on the world. In terms of cybersecurity, we know that they can provide valuable assistance to malicious actors, notably by helping them craft convincing phishing messages.

In an article published on The Conversation, Oli Buckley, professor at the University of East Anglia, and Jason R.C. Nurse, associate professor at the University of Kent, discuss a new breed of chatbots built specifically for cybercriminals, which assist them in hacks and scams of all kinds.

Chatbots that help with hacking

The researchers point to WormGPT and FraudGPT, two tools that help malicious actors design malware and detect security vulnerabilities in systems. They even provide advice on how to scam Internet users or compromise electronic devices.

There is also Love-GPT, a service used for romance scams. It helps create fake dating profiles that can chat with users on popular dating apps such as Tinder and Bumble.

The two researchers take advantage of their platform to offer valuable advice to Internet users. In particular, you should be careful when receiving messages, videos, or photos that seem legitimate, because they may be the work of an AI that produces increasingly credible content.

You should also avoid sharing sensitive or private information with ChatGPT and other language models. Cybercriminals can manipulate these systems into revealing data they are not supposed to disclose.

"Keep this in mind especially when considering their use in medical diagnostics, at work, and in other areas of life. As technology rapidly advances, we can at least take some reasonable precautions to protect ourselves against the threats we know and those to come," the two researchers advise.

If this subject interests you, we suggest reading this article, in which we examine how AI can help us better protect our personal data, as well as the new risks this technology creates.

The bottom line:

  • Cybercriminals have their own tools resembling ChatGPT
  • They help design malware and carry out certain scams
  • We should be wary of AI-generated content, which is increasingly credible




By Teilor Stone
