Gina Neff, professor of technology at the University of Cambridge © BBC
Should we fear the impact of AI on democratic political systems? Should we fear instability in the decades to come? The advent of the internet and social networks has already created well-documented risks in various political spaces. From cyberwarfare to fake news, not since the 1930s has the question of propaganda and its effects on society arisen so acutely.
Gina Neff, professor of technology at the University of Cambridge (United Kingdom), discusses the risks AI poses to democratic life in a BBC podcast. She begins by returning to the New York Times' recent lawsuit against OpenAI. The firm headed by Sam Altman has been trying for several months to reach agreements with major press groups so that its models can legally train on subscription articles and content normally placed behind a paywall.
Will democratic societies (and newspapers) resist generative AI?
De facto, the firm's latest models are surprisingly capable of regurgitating versions of these articles without authorization. No agreement between OpenAI and the New York Times was reached, and the press group has just announced that it has filed suit against the world leader in AI, arguing, she explains, that ChatGPT "undermines the democratic exercise".
At issue: a direct loss of revenue for the New York Times and other groups, which risks hurting their profitability and therefore their ability to deliver independent, quality journalism. The expert fears, however, that the problem already runs deeper: "all the smaller companies looking to base their content creation on artificial intelligence must take an interest in these questions of intellectual property".
For the time being, the only horizon is the courts, which should soon rule on the matter. But tomorrow, no doubt (and here we go beyond this extract), there is also the question of which actors, in the training of these models, will have the most influence over the viewpoints reproduced in articles generated on modest-sized press sites, which are, overall, among the most read on the internet.
Another problem: the veracity of the data collected by these generative systems. We are already seeing errors, sometimes hard to detect, slip into this type of AI-generated content produced from users' prompts. These errors can then be picked up and amplified by internet users themselves on social networks, to the point of creating genuine waves of disruption.
However, even if safeguards (which can be circumvented in many cases) are in place to prevent it, the other great danger of generative chatbots is their ability to let actors quickly produce large amounts of content for malicious purposes, without requiring heavy investment or backing, whether from a state or from wealthy actors.
The whole question is whether companies (and internet users) are ready to keep things in perspective, when the most outrageous false information already sometimes enjoys worrying virality, and when other techniques make it possible to put words into the mouths of politicians and respected public figures in stunningly convincing ways.
Between new opportunities and destructive potential, AI resembles in many ways, for the communication sector, the advent of a new era comparable to the atomic age. "So far so good… What matters is not the fall, it's the landing." The full BBC extract is available via the link in the source of this article.
- In a BBC podcast, an expert sets out what she sees as the dangers of AI for democratic life.
- In focus, for the moment, is the question of intellectual property in the articles used to train artificial intelligence models like ChatGPT.
- However, AI also poses other real dangers, and it is far from certain that internet users are ready to separate the wheat from the chaff.