“If you believe what I believe, you should just leave the company.” Quoted in the New York Times, Suchir Balaji, who spent four years at OpenAI as an AI researcher, does not mince his words. Over time, the young researcher came to the conclusion that these new technologies would do society more harm than good.
A sincere commitment
Yet it all began with a revelation. As a teenager, he was captivated by the work of DeepMind, the company now owned by Google, which notably created an AI specialized in the game of Go. He came to believe that this kind of innovation could be used to tackle seemingly insoluble problems: curing diseases, slowing down aging, and advancing humanity.
Later, as a student at the prestigious University of California, Berkeley, he was part of a group of students who joined OpenAI in 2020. At the time, though, it was not yet the thriving company valued at $160 billion that we know today; it operated more like a research organization working on language models based on neural networks.
Suchir Balaji and his colleagues did not see themselves as competing with other Internet companies, since GPT-3, the ancestor of ChatGPT, was just a chatbot. “With a research project, you can, as a general rule, train on any data,” he explains. “That was the mood of the time.”
The Disappointment of Generative AI
As OpenAI evolved into a much more lucrative operation, he came to believe that using this data might constitute copyright infringement. The researcher argues that the tool violates the law because “generative models are designed to mimic online data so that they can replace ‘virtually everything’ on the internet, from news articles to online forums.”
Various news organizations, artists, and other services have sued OpenAI and its rivals for this very reason. “This is not a sustainable model for the Internet ecosystem as a whole,” Suchir Balaji says.
Faced with this criticism, OpenAI was quick to contest these assertions:
We build our AI models using publicly available data, in a manner protected by fair use and related principles, and supported by long-standing and widely accepted legal precedents. We consider this principle to be fair to creators, necessary for innovators, and essential for the competitiveness of the United States.
For his part, the researcher, who is now working on personal projects, calls for more extensive regulation of these technologies to prevent abuse.