
It is now possible to protect a work against artificial intelligence

© Bing Image Creator/© Microsoft

As opinions on artificial intelligence take shape, one point of agreement keeps emerging: the use of AI in the art world mostly pleases the people who use it. The general public is largely against the practice, since AI-generated images often have an unsettling quality, with odd details that are hard to put your finger on at first glance.

A case in point is the community manager at Xbox, who used an AI-generated image in a post meant to promote artists with few resources. The move did not sit well with players, who criticized the practice so strongly that the offending tweet was eventually deleted.

A problem of use without consent

At a time when consent is at the heart of social debates, AI seems to have little use for this kind of concern. Most AI models indiscriminately scrape data from the internet without considering whether artists consent to the use of their works. That data is then used to train the model to respond as faithfully as possible to a prompt (a command, in effect), offering the widest range of outputs while staying as close as possible to the request.

Worse, aspects of these works are sometimes reused for commercial purposes without any compensation for the original artist, who ends up robbed by an invisible entity. While some artificial intelligence companies strike deals to obtain the rights to certain content, the vast majority of images used to train AI models are still treated as if they were royalty-free, which of course they are not.

The artists strike back

There are, however, ways to protect your works from AI scraping while continuing to share them on social networks. To that end, a team of researchers at the University of Chicago has created a remarkable piece of software called Glaze. The software adds effects to the works fed to it, imperceptible to the human eye but disruptive enough that AI models cannot make use of them.
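Glaze's actual method is an optimized "style cloak" computed against a model's feature extractor; the toy sketch below (with illustrative names like `cloak` and `epsilon`, which are not from Glaze) only demonstrates the underlying idea: shift pixel values by an amount too small to see but large enough to change what a model extracts.

```python
import numpy as np

def cloak(image, epsilon=0.02, seed=0):
    """Add a tiny bounded perturbation to an image with values in [0, 1].

    NOTE: this is NOT Glaze's algorithm, only a conceptual illustration.
    Glaze optimizes a targeted perturbation against a feature extractor;
    here we just add bounded pseudo-random noise.
    """
    rng = np.random.default_rng(seed)
    # Each pixel moves by at most epsilon, far below what the eye notices.
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

# A dummy 4x4 grayscale "artwork".
art = np.full((4, 4), 0.5)
cloaked = cloak(art)

# The change is numerically real but visually negligible.
assert not np.allclose(cloaked, art)
assert np.abs(cloaked - art).max() <= 0.02
```

The key design point, which the real tool shares, is the bound on the perturbation: the cloaked image must remain faithful to the original for human viewers, so all of the "damage" has to fit inside a perceptually invisible budget.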


Since its launch, the software has been downloaded more than 1.6 million times, proof that the problem worries many artists. And while the solution is effective, its creators themselves admit that future AI models may be able to thwart this sleight of hand, and that other means will then have to be found to counter them.

How do I know if my images have already been used?

If you are only now learning that it is possible to protect your works from abusive use by artificial intelligence models, some of the images you have already shared may have been ingested by Midjourney, DALL-E and others.

The site haveibeentrained.com (temporarily unavailable at the time of writing) lets you upload your images and find out whether they have already been used to train an AI model. You can then take the necessary steps with artificial intelligence services to defend your interests.

  • Many artificial intelligence models indiscriminately use artists' works for training, integrating them into their datasets.
  • Artists' consent is very rarely requested, and they receive no remuneration.
  • Thus, software has now been developed to protect works against their use by an AI.

📍 So you don't miss any news from Presse-citron, follow us on Google News and WhatsApp.



By Teilor Stone

Teilor Stone has been a reporter on the news desk since 2013. Before that she wrote about young adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining Thesaxon, Teilor Stone worked as a staff writer at the Village Voice and a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact me at teilor@nizhtimes.com or 1-800-268-7116.