The name David Mayer probably doesn't ring a bell, but it has sparked a controversy over ChatGPT's behavior whenever it is mentioned. A look back at a case that exposes the limits of AI.
The David Mayer case
It all started when users noticed that the chatbot refused to answer the question "Who is David Mayer?". It would begin a reply, then delete it midway, stating: "I can't produce an answer."
The mystery sparked a great deal of discussion, and the press picked up the story, so much so that OpenAI had to explain itself. The company stressed that the problem has since been resolved, adding:
There may be cases where ChatGPT does not provide certain information about people to protect their privacy. One of our tools has mistakenly flagged this name and prevented it from appearing in responses, which it should not have done.
Data protection experts have begun digging into the matter. As the Wall Street Journal reports, Guido Scorza, an Italian lawyer, tested the AI with his own name and ran into the same problem. Another blocked name is that of Jonathan Turley, an American law professor.
A flaw in AI
Beyond these specific cases, the story highlights how significant privacy issues are for these AIs. Several people have in fact filed lawsuits against OpenAI over this very subject.
To address this, the designers of these tools rely on techniques that are anything but novel; they date back to the early days of computing. They impose rules on the chatbots along these lines: if "such-and-such name" is mentioned, do not provide an answer and instead output: "I am not able to produce a response." The same mechanism is used for certain taboo subjects that the AI is not supposed to address.
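The rule described above can be sketched as a simple blocklist check wrapped around the model's output. This is a hypothetical illustration of the general technique, not OpenAI's actual implementation; the `BLOCKED_NAMES` entries and the `filter_response` function are assumptions made for the example.

```python
# Minimal sketch of a hard-coded name filter, as described above.
# Illustrative only: OpenAI has not published its real filtering code.

BLOCKED_NAMES = {"david mayer", "jonathan turley"}  # hypothetical blocklist

def filter_response(prompt: str, draft_response: str) -> str:
    """Return the model's draft, unless the prompt or draft mentions a blocked name."""
    text = (prompt + " " + draft_response).lower()
    if any(name in text for name in BLOCKED_NAMES):
        # Replace the whole answer with a canned refusal.
        return "I am not able to produce a response."
    return draft_response
```

For example, `filter_response("Who is David Mayer?", "He is ...")` would return the canned refusal, while an unrelated question would pass through untouched. A filter this blunt also explains the bug OpenAI described: one wrongly flagged entry silently blocks every answer involving that name.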
Have you ever noticed cases where ChatGPT and its rivals refuse to respond to your requests? Tell us in the comments.
What to remember:
- Netizens spotted that ChatGPT refused to answer the question: "Who is David Mayer?"
- The bug has since been fixed, and OpenAI says it stemmed from a tool meant to protect people's privacy
- The case illustrates the major privacy challenges facing these AIs