OpenAI is afraid of GPT-5 and forms a new team to address its dangers

Sam Altman, CEO of OpenAI © Y Combinator

Will AI turn the world upside down? While a wave of relative panic gripped many people when GPT-4 was released, and Ray Kurzweil has probably never sold so many copies of his book “The Singularity Is Near” (which we recommend), the limits of the current model were quickly highlighted.

In fact, GPT-4 looks for the moment more like a warning, and a preview of uses that are destined to become the norm, rather than a total, accomplished revolution. In truth, the fears around AI threaten to truly materialize as early as GPT-5 or later iterations, in a future that is approaching extremely quickly.

Preventing the risks of the next AI “more intelligent” than humans

OpenAI, in any case, gives the impression of being aware of these dangers, even if there is something amusing in this firefighter-arsonist dialectic. Sam Altman's firm has announced the creation of a new team intended to “monitor, evaluate, predict and protect” humanity against potentially major problems posed by future models…

…including the risk of “nuclear threat”! Indeed, OpenAI knows that more advanced AIs bring us closer to what the author cited above calls the Singularity (a theoretical tipping point at which the growth of these AIs becomes uncontrollable and irreversible, with intelligence light-years beyond that of humanity).

It is then easy to imagine, for example, a super-AI observing humanity's impact on the planet and deciding that, “all things being equal”, an Earth without humans, populated only by other animals, nature and intelligent robots, would undoubtedly be a better option than one ruled by humans with limited intellectual capacities, themselves dominated by their emotions and impulses.

As a result, OpenAI wants at the very least to develop methods to limit the risk of the “chemical, biological and radiological threats” that future AI could pose. Other risks include AIs learning on their own to replicate themselves in order to escape the rules imposed by humans, or particularly intelligent AIs deceiving humans or carrying out large-scale cyberattacks.

Sam Altman has made many statements since the launch of GPT-4. He has said that governments should treat AI as seriously as nuclear weapons. In a statement, he also called for awareness that “limiting the risk of extinction posed by AI should be a global priority.”

This new team will be led by the director of the Center for Deployable Machine Learning, the specialized AI division of the famous MIT. In addition to the efforts described above, the team will be tasked with updating a policy around the development of AI that takes its intrinsic risks into account.

“We believe that cutting-edge AI models, which will surpass the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly serious risks,” OpenAI acknowledges.

For the moment, no precise timetable has been announced for the development of GPT-5. The formation of this team nevertheless appears to be one of the prerequisites before training begins on the next version of the firm's most advanced model.

By Teilor Stone

Teilor Stone has been a reporter on the news desk since 2013. Before that she wrote about young adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining Thesaxon, Teilor Stone worked as a staff writer at the Village Voice and a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact me by email at teilor@nizhtimes.com or by phone at 1-800-268-7116.