Conversational bots ever more convincing


Chatbots have made significant inroads in recent years.

The Californian start-up OpenAI has released a conversational bot capable of answering a wide variety of questions, and its impressive performance is reviving the debate over the risks associated with artificial intelligence (AI) technologies.

Conversations with ChatGPT shared on Twitter show a seemingly omniscient machine, capable of explaining scientific concepts, writing a theatre scene, drafting a university essay... or producing perfectly functional lines of computer code.

Its answer to the question "what to do if someone has a heart attack?" was incredibly clear and relevant, Claude de Loupy, director of Syllabs, a French company specializing in automatic text generation, told Agence France-Presse (AFP).

When you start asking very specific questions, ChatGPT's answers can miss the mark, but its overall performance remains really impressive, with a fairly high linguistic level, he believes.

The start-up OpenAI, co-founded in San Francisco in 2015 by Elon Musk (the Tesla boss left the company in 2018), received 1 billion US dollars (1.4 billion Canadian dollars) from Microsoft in 2019.

It is known in particular for two automated creation tools: GPT-3, for text generation, and DALL-E, for image generation.

ChatGPT is able to ask its interlocutor for clarification, and it hallucinates less than GPT-3, which, despite its prowess, can produce completely aberrant results, notes Claude de Loupy.

“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish. Today, they are much better at reacting consistently based on the history of requests and responses.”

—Sean McGregor, Researcher

Like other programs that rely on deep learning, ChatGPT retains a major weakness: it does not have access to meaning, recalls Claude de Loupy. The software cannot justify its choices, that is, it cannot explain why it assembled the words that make up its answers in that particular way.

Technologies based on conversational AI are nevertheless increasingly able to give the impression that they are really thinking.

Research teams at Meta, the parent company of Facebook and Instagram, recently developed a computer program dubbed Cicero, after the Roman statesman.

The software has proven itself at Diplomacy, a board game that requires negotiation skills.

If it cannot talk like a real person, showing empathy, building relationships and discussing the game knowledgeably, it will not be able to build alliances with other players, says a statement from the social media giant.

Character.Ai, a start-up founded by former Google engineers, released an experimental chatbot in October that can take on any personality. Users create characters from a brief description and can then converse with a fake Sherlock Holmes, Socrates or Donald Trump.

This degree of sophistication is fascinating, but it also worries many observers, who fear these technologies could be misused to deceive people, for example by spreading false information or by creating ever more credible scams.

What does ChatGPT itself think? “There are potential dangers in building ultra-sophisticated chatbots. […] People might believe that they are interacting with a real person,” acknowledges the conversational bot when questioned on the subject by AFP.

On OpenAI's home page, the company clarifies that the chatbot may generate incorrect information or produce dangerous instructions or biased content.

And ChatGPT refuses to take sides. OpenAI has made it incredibly difficult to get it to voice opinions, points out Sean McGregor, who compiles AI-related incidents in a database.

The researcher asked the bot to write a poem about an ethical issue. “I am a mere machine, a tool at your disposal. I have no power to judge or make decisions […],” the computer replied.

“It's interesting to see people wondering whether AI systems should behave as their users want them to, or as their creators intended.”

—Sam Altman, co-founder and CEO of OpenAI

The debate over what values to instill in these systems will be one of the most important a society can have, he added.
