New Tool Lets Artists Fight AI Scrapers by "Poisoning" Data

The University of Chicago’s Glaze Project has released Nightshade v1.0, a tool that enables artists to sabotage generative AI models that use their work for training. Nightshade makes imperceptible pixel-level changes to images that feed misinformation to models trained on them, causing the models to learn incorrect associations and produce distorted results, such as treating the cubist style as cartoon imagery.
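For readers curious about the underlying idea, below is a minimal, purely illustrative Python sketch of pixel-level data poisoning. It is not Nightshade’s algorithm or code: the random linear encoder, the image sizes, the perturbation budget, and every variable name are hypothetical stand-ins chosen for this toy example. The sketch only shows the general mechanism of nudging an image, within a small pixel budget, so that a feature extractor "sees" a different concept.

```python
import numpy as np

# Illustrative sketch of the general "data poisoning" idea behind tools
# like Nightshade: add a small, bounded perturbation to an image so that
# a feature extractor maps it closer to a *different* concept, while the
# pixels barely change. The random linear "encoder" and all names here
# are stand-ins for illustration, NOT Nightshade's actual model or method.

rng = np.random.default_rng(0)

H, W = 64, 64            # toy image size
DIM = 128                # toy feature dimension
EPSILON = 4.0 / 255.0    # per-pixel perturbation budget ("imperceptible")
STEP = 0.5 / 255.0       # step size for each sign-gradient update
STEPS = 100

# Stand-in feature extractor: a fixed random linear map (hypothetical).
encoder = rng.normal(size=(DIM, H * W)) / np.sqrt(H * W)

def features(img: np.ndarray) -> np.ndarray:
    """Map an HxW image to a DIM-dimensional feature vector."""
    return encoder @ img.reshape(-1)

# "Source" artwork and an image standing in for the decoy concept
# (e.g. the style the poisoned model should drift toward).
source = rng.uniform(0.0, 1.0, size=(H, W))
decoy = rng.uniform(0.0, 1.0, size=(H, W))
target_feat = features(decoy)

# Projected sign-gradient descent on the perturbation: pull the source
# image's features toward the decoy concept while staying in the budget.
delta = np.zeros((H, W))
for _ in range(STEPS):
    residual = features(source + delta) - target_feat
    grad = (encoder.T @ residual).reshape(H, W)   # grad of 0.5*||residual||^2
    delta -= STEP * np.sign(grad)
    delta = np.clip(delta, -EPSILON, EPSILON)               # keep change small
    delta = np.clip(source + delta, 0.0, 1.0) - source      # keep pixels valid

poisoned = source + delta
before = np.linalg.norm(features(source) - target_feat)
after = np.linalg.norm(features(poisoned) - target_feat)

print(f"max pixel change:          {np.abs(delta).max():.4f}")
print(f"feature distance to decoy: {before:.3f} -> {after:.3f}")
```

In this toy linear setting the shift toward the decoy concept is modest; the real tool targets the feature space of actual text-to-image models, where small pixel changes can have a far larger effect on what the model learns.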

The tool is currently available for Windows systems and Apple Silicon Macs.

During testing, Nightshade demonstrated its effectiveness by poisoning an AI image generation model. Here are a few examples of the effects of this process.

Examples of AI Model “Poisoning”

One of the tests conducted with Nightshade poisoned an AI image generation model trained to produce cubist-style images. After Nightshade’s alterations were introduced, the model began interpreting those images as cartoons. The effect was surprising and intriguing, showing how data manipulation can steer the output of generative AI models.

Another example involved poisoning an AI model designed to recognize the style of impressionist paintings. After Nightshade’s modifications, the model began interpreting the images as abstract art, completely altering the expected results.

Nightshade v1.0 opens up new possibilities for artists who feel their creativity is being exploited. The tool gives them a way to push back against AI scrapers and influence the output of generative AI models.

FAQ

Question: What is Nightshade?
Answer: Nightshade is a tool developed by the University of Chicago’s Glaze Project that enables artists to sabotage AI generative models by introducing misinformation at the pixel level in images.

Question: How does Nightshade affect AI models?
Answer: Nightshade makes imperceptible pixel-level changes to images that mislead AI models trained on them. As a result, the models interpret the images differently and produce distorted outputs.

Question: Which operating systems support Nightshade?
Answer: Nightshade is currently available for Windows systems and Apple Silicon Macs.

Question: What are examples of AI model “poisoning” using Nightshade?
Answer: One example is an AI model trained to create cubist-style images: after Nightshade’s modifications, the model interprets those images as cartoons. Another is a model designed to recognize the impressionist painting style, which after Nightshade’s modifications interprets the images as abstract art.

Question: What possibilities does Nightshade open up for artists?
Answer: Nightshade v1.0 allows artists to defend against undesirable exploitation of their creativity by AI models. The tool enables them to influence the results of generative AI models, giving them control over how their work is interpreted.

Useful Links:

University of Chicago’s Glaze Project Website

Source: the blog karacasanime.com.ve