© Olena Yakobchuk/Shutterstock
Are our chatbots overworked? Do they feel belittled or mistreated? These questions, which might make some of our readers burst out laughing, are at the heart of a mysterious hire at the AI company Anthropic. A look back at a very surprising decision and its rationale.
Protecting the “interests” of AI systems
Business Insider reports on Anthropic's hiring of researcher Kyle Fish, who is tasked with thinking about the “well-being” of AI. In short, as these technologies evolve, he will have to ensure that artificial intelligence is treated with sufficient respect.
The company says it is evaluating “what capabilities are needed for an AI system to be worthy of moral consideration” and what practical steps companies can take to protect the “interests” of AI systems.
The person concerned declined to respond to our colleagues. However, he has already spoken on this subject in the past:
I want to be the type of person who cares, early and seriously, about the possibility that a new species/type of AI being (Ed.) may have its own interests that matter morally.
AI soon to be sentient?
This concern is not entirely disinterested: he believes that if we treat AIs well, they might return the favor in the future, should they become more powerful than we are.
Among the proponents of this view, there is the idea that these technologies could soon be endowed with consciousness. They would thus be not only intelligent but also sentient.
Supporters of this idea compare these questions to earlier debates on animal rights. Quoted by the American site, Jeff Sebo, director of the Center for Mind, Ethics, and Policy at New York University, explains:
If you look ahead 10 or 20 years, when AI systems have many more of the computational and cognitive features associated with consciousness and sentience, you can imagine that similar debates will take place
A contested view
As the business outlet rightly points out, it is quite astonishing to worry about the future rights of machines when some of them are currently being used to roll back human rights. Brutal examples include programs used to deny health care to sick children, to spread disinformation online, or to guide missile-equipped combat drones that cause enormous damage. The message has been received.