Can we blindly trust artificial intelligence? When it comes to information retrieval, apparently not. That, at least, is the lesson of the media scandal that broke in the United States a few days ago.
It all started on CNN, during a discussion of historical precedents for presidential pardons of family members (in reference to Joe Biden, who pardoned his son despite having said he never would). Ana Navarro-Cardenas, a well-known figure on the network, confidently cited a pardon that President Woodrow Wilson supposedly granted to his brother-in-law, a certain Hunter deButts. The claim, picked up by several media outlets, quickly spread on social networks.
Pressed about her sources, the commentator responded bluntly: “Blame ChatGPT.” That answer sparked a thorough investigation, which revealed not only that Hunter deButts never existed, but also that other widely reported “historical facts” were equally fictitious, among them George H.W. Bush's alleged pardon of his son Neil and Jimmy Carter's of his brother Billy.
Don't believe AI
This case is just the tip of the iceberg. Research shows that generative AI tools like ChatGPT make mistakes in more than three-quarters of cases when asked to cite specific sources. That is an alarming finding, given that these tools are increasingly used by journalists, researchers, and students.
The case of Jeff Hancock, founder of the Stanford Social Media Lab and a recognized expert on disinformation, is particularly revealing. He himself was caught out after using GPT-4 to generate a list of bibliographic references, ending up with nonexistent citations in an official document. If even experts can fall into this trap, what hope is there for the general public?
A systemic problem that threatens information
The fundamental difference between traditional search engines and AI-based “answer engines” lies in their approach. A classic Google search points to primary sources that the user can consult and verify. In contrast, generative AIs produce answers that sound coherent but are often impossible to verify.
This new reality poses a major problem: the ease of use of these tools encourages intellectual laziness. Why spend time checking sources when an AI gives us an immediate and apparently credible answer? This trend contributes to the general degradation of our information environment, already undermined by disinformation on social networks.
The consequences are felt well beyond the academic world. From seemingly innocuous errors, such as Google's AI claiming that male foxes are monogamous, to serious misunderstandings of current events, it is our very ability to distinguish truth from falsehood that is under threat. So while we wait for ChatGPT to improve, it is best to rely on good old Google and on your own ability to cross-reference information.
- A cascade of false information about US presidential pardons has revealed the dangers of relying on generative AI for information retrieval.
- Tools like ChatGPT are wrong more than 75% of the time when it comes to citing sources, and even experts are taken in.
- Unlike traditional search engines, generative AIs do not let you go back to primary sources, which endangers our ability to verify information.