Sam Altman, the boss of OpenAI © Y Combinator
Will AI turn the world upside down? While relative panic gripped many people when GPT-4 was released, and Ray Kurzweil has probably never sold so many copies of his book "The Singularity Is Near" (which we recommend), the limits of the current model were quickly highlighted.
For the moment, GPT-4 looks more like a warning, and a preview of uses destined to become the norm, than a total, accomplished revolution. In truth, the fears around AI threaten to really materialize with GPT-5 or later iterations, in a future that is approaching extremely quickly.
Preventing the risks of the next AI "more intelligent" than humans
OpenAI, in any case, gives the impression of being aware of these dangers, even if there is something amusing in this firefighter-arsonist dialectic. Sam Altman's firm has announced the creation of a new team intended to "monitor, evaluate, predict and protect" humanity against potentially major problems with future models…
…including the risk of "nuclear threat"! Indeed, OpenAI knows that more advanced AIs bring us closer to what the author cited above calls the Singularity: a theoretical tipping point at which the growth of these AIs becomes uncontrollable and irreversible, with intelligence light years beyond that of humanity.
It is then easy to imagine, for example, a super-AI observing humanity's impact on the planet and deciding that, all things considered, an Earth without humans, populated only by other animals, nature, and intelligent robots, would be a better option than one ruled by humans with limited intellectual capacities, dominated by their emotions and impulses.
As a result, OpenAI wants at the very least to develop methods to limit the risk of "chemical, biological and radiological threats" that future AI could pose. Other risks include AIs learning on their own to replicate themselves in order to escape the rules imposed by humans, or particularly intelligent AIs deceiving humans or carrying out large-scale cyberattacks.
Sam Altman has made many statements since the launch of GPT-4. He has said that governments should treat AI as seriously as nuclear weapons. In one statement, he also called for awareness that "limiting the risk of extinction posed by AI should be a global priority."
This new team will be led by the director of the Center for Deployable Machine Learning, MIT's specialist AI division. In addition to the efforts described above, the team will be tasked with maintaining a policy for AI development that takes its intrinsic risks into account.
"We believe that cutting-edge AI models, which will surpass the capabilities of the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly serious risks," OpenAI acknowledges.
For the moment, no precise timetable has been announced for the development of GPT-5. It seems, however, that forming this team is one of the prerequisites before training begins on the next version of the firm's most advanced model.