
OpenAI has a more effective technique for detecting text written by ChatGPT


While AI models like ChatGPT can help workers in many industries, they have also been a source of controversy, because it can be very difficult to tell human-written text from AI-generated text. This is a problem in schools and universities, for example, where students can submit assignments generated by ChatGPT. Solutions already exist that estimate the probability that a text was written by an artificial intelligence rather than by a human. However, this technology is not very effective, because it returns a probability rather than a clear yes-or-no answer.

OpenAI itself had launched an AI-generated text detector, but decided to abandon the technology. The detector, which was supposed to indicate whether a text had been generated with ChatGPT or a competing chatbot, was launched in early 2023 and withdrawn in July of that year because of its low accuracy.

A new method studied by OpenAI

While AIs trained to detect AI-generated text have so far proven ineffective, OpenAI has new ideas, and among them are “watermarks.” In essence, this technique consists of placing invisible markers in the text generated by ChatGPT so that the origin of that text can later be identified. And according to OpenAI, the results obtained are quite convincing. “Our teams have developed a text watermarking method that we continue to consider in our search for alternatives,” reads a blog post from the company, which reports very high accuracy even when someone tries to modify the text by paraphrasing parts of it.
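OpenAI has not published the technical details of its watermark, but published “green list” schemes for statistical text watermarking give a sense of how such invisible markers can work: the generator secretly nudges its word choices toward tokens that a keyed hash marks as “green,” and a detector that knows the key counts green tokens and checks whether the count is improbably high for human-written text. The Python sketch below is a minimal illustration of that general idea only; the key, green fraction, and threshold are assumptions, not OpenAI's method.

```python
# Minimal, illustrative sketch of "green list" text watermark detection.
# This is NOT OpenAI's (undisclosed) method; the key, green fraction and
# z-score threshold are assumptions chosen purely for illustration.
import hashlib
import math

GREEN_FRACTION = 0.5      # assumed share of the vocabulary marked "green" at each step
SECRET_KEY = b"demo-key"  # shared secret between the generator and the detector

def is_green(prev_token: int, token: int) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the previous token."""
    data = SECRET_KEY + prev_token.to_bytes(4, "big") + token.to_bytes(4, "big")
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION

def detect_watermark(tokens: list[int], z_threshold: float = 4.0) -> tuple[float, bool]:
    """Count green tokens and test whether the count is improbably high for human text."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0, False
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    # Under the "human text" hypothesis, each token lands in the green list
    # with probability GREEN_FRACTION, so a large z-score signals a watermark.
    z = (hits - GREEN_FRACTION * n) / math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return z, z > z_threshold
```

During generation, the model would slightly boost the probability of green tokens at every step, which is invisible to readers but detectable to anyone holding the key; heavy paraphrasing, translation, or rewording scrambles the token sequence and weakens the statistic, which is exactly the kind of tampering OpenAI warns about below.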


But for now, this tool is not available, and OpenAI explains why by pointing to weaknesses that remain in the new system. “[…] it is less resistant to globalized tampering, such as using translation systems, rewording with another generative model, or asking the model to insert a special character between every word and then deleting that character, which makes it trivial for bad actors to circumvent,” says the creator of ChatGPT. The company also fears that certain groups of users would be disproportionately affected by such a system. “For example, it could stigmatize the use of AI as a useful writing tool for non-native English speakers,” OpenAI writes.

OpenAI is also exploring other avenues, such as the use of metadata, which it says would completely eliminate the risk of false positives.
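OpenAI has not said how such metadata would work, but cryptographically signed provenance records are one common way to get that property: a signature either verifies or it does not, so unsigned human text can never be flagged by mistake. The sketch below is a hypothetical illustration using an HMAC; the key and record format are assumptions, not OpenAI's design.

```python
# Hypothetical sketch of signed provenance metadata (not OpenAI's design).
import hashlib
import hmac
import json

PROVIDER_KEY = b"provider-secret"  # assumed signing key held only by the AI provider

def attach_metadata(text: str, model: str) -> dict:
    """Return the text bundled with provenance metadata and an HMAC over both."""
    payload = json.dumps({"text": text, "model": model}, sort_keys=True).encode()
    signature = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "model": model, "signature": signature}

def verify_metadata(record: dict) -> bool:
    """True only for records signed by the provider; plain human text simply has no valid record."""
    payload = json.dumps({"text": record["text"], "model": record["model"]}, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The usual trade-off with this kind of scheme is that the proof travels with the metadata, not the words: copying the text into a plain document strips the record, so it can confirm an AI origin when present but cannot catch text from which it has been removed.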

  • Generative AI can improve worker productivity in many industries
  • But this also poses a problem, because it is very difficult to distinguish text generated by AI from text written by a human
  • OpenAI has already proposed a solution, which it abandoned after a few months
  • However, a new, more effective method is currently being studied by the company
  • Unfortunately, the new tool is not available (for the moment)




By Teilor Stone

Teilor Stone has been a reporter on the news desk since 2013. Before that she wrote about young adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining Thesaxon, Teilor Stone worked as a staff writer at the Village Voice and a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact her at teilor@nizhtimes.com or 1-800-268-7116.