
Could AI be more moral than us?

© Image generated by DALL-E AI for Presse-Citron

The explosion of AI models over the past year and a half, such as ChatGPT, Gemini, and the very recent Claude, has been accompanied by dozens of questions. How will our jobs be affected by this technology? How deeply can these AIs actually understand and interact with us? To what extent should we allow them to make decisions autonomously, and how can we ensure that those decisions align with our values as humans living in society?

Among these questions, one arises with particular acuity: the morality of artificial intelligences. A recent study highlighted the propensity of these chatbots to resort to violence whenever they could, so the question certainly deserves exploration. That is precisely what researchers from Georgia State University set out to do; their results were published on April 30 in Nature's journal Scientific Reports.

Testing AI morality

Eyal Aharoni, the study's principal investigator, explored this question with his team: how do external perceptions of ethical responses generated by an AI, in this case ChatGPT, compare to perceptions of responses written by humans? The research was based on an adaptation of the Turing test and aimed to determine whether AIs could, at least in appearance, be more moral than humans in certain contexts.

Aharoni therefore gave both students and ChatGPT a questionnaire on ethical dilemmas. The anonymized responses from the two groups were then evaluated by external participants according to criteria such as virtue, intelligence, and trustworthiness. The surprising result: ChatGPT's responses were rated higher than the humans' in terms of virtue and trustworthiness.

Surprising as this may be, remember that the study in no way establishes ChatGPT's intrinsic ethical superiority over the rest of us humans. It should rather be taken as an invitation to reflect on how AI could be used to guide ethical behavior, and on the growing place of artificial intelligence in our social spheres.

Biased perceptions of AI responses

Eyal Aharoni's study highlights a disturbing phenomenon: the participants' surprise when they discovered that one of the answers they had judged ethical came from an AI, in this case ChatGPT. From the outset, they had attributed all the answers to human beings. This bias reveals a potentially excessive confidence in the moral capacities of artificial intelligence.

According to Aharoni, this confidence could be explained by the perception of AI responses as more coherent and impartial than those of humans. Indeed, AIs are designed to avoid prejudice and adopt a neutral tone, which can create the illusion of superior morality. “What's interesting is that the reason people could tell the difference appears to be that they rated ChatGPT's responses as superior,” he explains. “If we had done this study five or ten years ago, we might have predicted that people could identify the AI because of the inferiority of its responses. Instead, we found the opposite: the AI, in a sense, performed too well.”

The supposed impartiality of AI, while attractive at first glance, in reality masks a crucial lack of contextual understanding and empathy, two elements essential to human ethical judgment.

What implications for our society?

If individuals tend to give more credence to the moral decisions of AIs than to those of humans, what will the repercussions be? Aharoni explains: “Our findings lead us to believe that a computer could technically pass a moral Turing test, that is, it could fool us with its moral reasoning. For this reason, we must try to understand its role in our society, because there will be times when people are unaware that they are interacting with a computer, and others when they know it and consult the computer for information because they trust it more than other people.”

Indeed, blind trust in AI could lead to situations where decisions made without human discernment lack the required nuance and ethical depth. “People are going to rely on this technology more and more, and the more we rely on it, the more the risk increases over time,” warns Aharoni.

This research thus puts its finger on an essential point that many of us will be confronted with: the weight we give to what artificial intelligences express. While they can be a fantastic help in some of our daily tasks, particularly in intellectual professions, they are not endowed (for the moment) with the capacities that constitute the essence of human morality: feeling, understanding, and acting wisely in an imperfect world. It is a theme explored at length in Quantic Dream's splendid video game Detroit: Become Human; if you haven't yet gotten your hands on it, we can only advise you to try it. You won't come out unscathed.

  • Researchers at Georgia State University published a study to test the morality of ChatGPT.
  • On the surface, the results might suggest that ChatGPT was more moral than the people tested. However, this interpretation is influenced by perception biases.
  • This excess confidence in AI systems could, in the long term, have harmful consequences.




By Teilor Stone
