
How Sora, OpenAI's video generator, could change the world in 2024

© Screenshot/OpenAI

Barely unveiled, and already under scrutiny. Sora, the new AI tool created by OpenAI (the company behind ChatGPT), is unparalleled in the AI landscape. The company struck a major blow four days ago by revealing the potential of its latest creation in a breathtaking video. Capable of generating strikingly realistic videos from a simple text prompt, Sora brings its share of enthusiasm, but also of fears. With 2024 a pivotal year in which voters in many countries will go to the polls across the globe, its capabilities are cause for concern. Could it be used to spread misleading content, deepfakes, and other misinformation? The question is worth asking.

An unprecedented technical breakthrough

Other video-generating AI systems already exist, such as Google's Imagen (shown being tested in this video) or Nvidia's VideoLDM (demonstrated in this video), but none really comes close to Sora.

If its renderings are so realistic, it is because it relies on two different technologies. It uses diffusion models, like DALL-E, which give it the ability to organize a large quantity of random pixels into an image. These images are then assembled into video sequences using another AI technique known as the "transformer architecture".
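To make the diffusion idea concrete, here is a deliberately simplified sketch, not OpenAI's actual method: a diffusion model starts from pure random noise and repeatedly nudges the sample toward a denoised prediction. In a real system, a trained neural network produces that prediction; in this toy, we stand in for the network by using a fixed target image, which is enough to show the iterative "noise to image" structure. The transformer part (which operates on video patches) is not shown.

```python
import numpy as np

def toy_diffusion_sample(target, steps=50, seed=0):
    """Toy diffusion-style sampler (illustrative only).

    Starts from random pixels and iteratively blends the sample
    toward a predicted clean image. A real model would *learn*
    that prediction from data; here `target` plays its role.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=target.shape)  # pure noise: random pixels
    for t in range(steps, 0, -1):
        alpha = t / steps              # remaining noise level, shrinks each step
        pred = target                  # stand-in for a trained denoiser's output
        x = alpha * x + (1 - alpha) * pred  # move the sample toward the prediction
    return x
```

After enough steps, the noise contribution shrinks to near zero and the sample converges on the predicted image, which is the essential mechanism the article describes: ordering random pixels into a coherent picture.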

The result? It is the most advanced text-to-video AI ever created. Rachel Tobac of SocialProof Security highlights the huge improvement Sora represents over previous attempts in the field, explaining that the generated renderings are "considerably more realistic and less fanciful." Even if the videos are far from perfect, you really have to hunt for details to spot the imperfections and other odd visual elements.

A future role in disinformation?

One of the most concerning aspects of Sora is its potential to contribute to the proliferation of deepfakes. For Hany Farid, a specialist in digital image analysis and the detection of digitally manipulated images at the University of California, Berkeley, this risk is not to be overlooked. "As is the case with other techniques in generative artificial intelligence, there is no reason to think that text-to-video conversion will not continue to improve rapidly, bringing us closer and closer to a time when it will be difficult to distinguish fake from real," he explains.

He imagines what Sora could enable if used in conjunction with other AI technologies. "If we combine this technology with AI-powered voice cloning, it could mark the beginning of a new era in the production of deepfakes, in which we would see people speaking words and performing acts they never did," he emphasizes.

For Tobac, this risk must also be taken seriously: "Sora truly has the ability to produce videos likely to fool the general public." She also rightly points out that "the video does not need to be perfect to be credible, because many people do not yet realize that videos can be manipulated as easily as photos." In the age of gullibility, will Sora one day reign supreme?

Regulation and security measures

Perfectly aware of these challenges, OpenAI has restricted access to Sora; the tool is not freely available. For now, the company has commissioned a group of "experts in areas such as misinformation, hateful content, and bias" to test Sora internally and assess its potential for misuse. The names of these experts are not currently known.

OpenAI wants to be reassuring on this precise point. A company spokesperson said: "We implemented many critical security measures before making Sora available in OpenAI's offerings." All that remains is to hope that the safeguards the company has already deployed on its other products, like ChatGPT, remain effective once Sora is released for commercial use.

Placing a watermark on generated videos, attesting that they were created by AI, is also among the measures envisaged to prevent abuse.

Could Sora be dangerous and one day influence public opinion? Yes, undoubtedly. Is it possible to act so that this never happens? Of course. The only solution lies in collaboration between governments, social networks, and OpenAI to prevent it. The integration of such powerful technology into our society is always accompanied by the same refrain: a tool is never bad in itself; the use we make of it, on the other hand, can be.

  • Sora, OpenAI's generative AI, represents a giant technical step forward.
  • The risk of seeing falsified content spread thanks to its capabilities is real, and several experts are already sounding the alarm.
  • OpenAI insists it is putting regulatory measures in place to prevent this, and the company is currently testing Sora internally.




By Teilor Stone
