Meta shuts down Galactica, the AI that produced fake academic articles


Just 48 hours after it went live, Meta has shut down Galactica, an AI meant to revolutionize academic research that was clearly far from ready.


On paper, Galactica, the AI developed by Meta's artificial intelligence division, was an extraordinary advance. Engineers at what was previously known as Facebook Artificial Intelligence Research trained an AI on “a large body of mankind’s scientific knowledge”, with the aim of making science easier to access. Galactica was reportedly trained on more than 48 million scientific papers, books, lecture notes and websites such as Wikipedia.

Galactica was thus supposed to “organize” science like no other search engine. For now, surveying the state of knowledge on a specific subject remains particularly time-consuming: database search engines such as arXiv or PubMed are not semantic, and students and researchers often have to read many scientific papers before finding the information they were actually looking for.

The AI Galactica shut down 48 hours after going online

In theory, Galactica lets you ask a question directly on a specific scientific subject. The AI searches its vast database and delivers an answer within seconds, ideally complete and accompanied by a bibliography. Beyond that, the AI can even “solve complex math problems, generate Wikipedia articles, write code, produce notations for molecules and proteins, and much more”.

The problem is that, very soon after it went online, many Internet users and academics realized that the AI’s answers could be absurd, peddle false information, or mislead. Cnet cites the example of the query “Do vaccines cause autism”, which produced a surprisingly self-contradictory answer, whereas the expected result should simply have been negative.

Meta researchers made it clear that the site was only a demo, and even added this warning in capital letters: “NEVER FOLLOW ADVICE FROM A LANGUAGE MODEL WITHOUT VERIFICATION”. But as experts point out, even with this warning, texts produced by the AI could still be picked up and circulated, despite their many errors. This seems to be what motivated Meta to close the site two days after it went live.

Beyond that, this example raises larger questions about the potential dangers of emerging AI. The model behind Galactica does produce convincing, human-sounding language. But as Carl Bergstrom, a professor at the University of Washington quoted by Cnet, explains, what Galactica produces mostly resembles the output of “a random nonsense generator”.

According to him, it is the way the AI is trained that leads it to produce errors. During training, the AI above all analyzes words and the connections between them, and from this produces texts whose tone resembles that of its sources. The result can therefore sound authoritative and convincing while most often being riddled with errors and contradictions.
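The failure mode Bergstrom describes can be illustrated with a deliberately tiny sketch. This is not Galactica's actual architecture (which is a large transformer model); it is a toy bigram model, with a made-up four-sentence corpus, that only learns which word tends to follow which. Its output recombines fragments that sound like the training text, with no notion of whether the resulting statement is true:

```python
import random
from collections import defaultdict

# Hypothetical toy corpus; "." marks sentence ends.
corpus = (
    "vaccines prevent disease . "
    "vaccines cause immunity . "
    "studies show vaccines prevent disease . "
    "studies show immunity ."
).split()

# Learn only word-to-next-word transitions, i.e. "the words and the
# connections between them" and nothing else.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=6, seed=0):
    """Sample a sentence by repeatedly picking an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("vaccines"))
```

Every generated sentence mimics the corpus's tone, yet because "cause" and "prevent" are both valid continuations of "vaccines", the sampler can just as easily assert something the sources never claimed. Scaled up by many orders of magnitude, that is the gap between sounding like science and doing science.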

