Nightshade vs image generative AI
[Image: example of art theft, with the Midjourney output on the right]
AI image generators such as Midjourney and Stable Diffusion have been around for a while. They are registered as research companies and as such are not required to pay artists for the use of their work, nor to give credit or ask consent. Yet they charge for the use of their programs. So this is my perspective, from perhaps a different angle.

When I began in art school (OCAD) I was quite an awful artist, but they saw potential in me. One of the things an artist learns to do, over a long period of time, is to bring forth from deep inside them an emotion or a message: something they know intimately and try to put on a 2D surface (in the case of painting) for others to see and, hopefully, feel. As an artist you have to learn techniques and methods that allow your emotions to be translated from inside you onto the canvas, using things like paint or drawing.

So imagine you wanted to show a childhood trauma, abuse, or perhaps the loss of a loved one. Suppose you created an image and when people saw it they felt... nothing. They could not detect this emotion of yours in any way. So you study, you experiment, and over time you begin to understand aspects of art such as colour theory and composition which help you translate your emotions onto a canvas. For example, the shape of your strokes affects emotion: a composition of mostly oval or rounded shapes is soothing to the eye, whereas a composition of sharp triangular shapes can create anxiety. Combining colour affects this too; cold blues or warm oranges subconsciously leave an impression, which the artist learns to use. When an artist creates an image it is not just a month of painting but a lifetime of learning to paint it so it works.

AI steals this lifetime of effort from the artist and uses it without payment or credit. It is not "inspired" by the artists; it is merely using their work to train itself.
I have tried Midjourney and it is impressive. With five or so words (prompts) I typed something like: castle, winding road, girl in red jacket, stormy skies. What I got was not what I pictured in my mind, but a very finished, impressive scene, all done in a few seconds. Essentially I got a happy accident: not something I envisioned, but something unexpected and great looking. I "settled" for the image. Some artists take quite a long time to make their art, not mere minutes, and the cost of that time and effort is reflected in the price they sell it for. My worry is that the simplification of art creation by these inexpensive tools will eventually push out the artists who work slowly, and with them the art that takes time: the time needed to transfer the soul onto the canvas for others to see and feel. It is already difficult to survive as an artist, and losing commissions or sales will quickly force artists to drop out of that career just to pay basic bills.
Anyway, I am not against technology and progress; I work in a virtual world, after all. But I would like it to be fair to artists. I have a friend who uses Midjourney to quickly create images he uses as reference in his own art, or to show the players of his Dungeons and Dragons game the scenes they are in. In the past people got hot and bothered over the invention of the camera, the appearance of the drum machine, auto-tune and so on, all feeling these would destroy the related medium, and they didn't. I think this is fundamentally different, though, in that it is seismic far beyond art. It just makes sense to figure things out before releasing them to the wild. It is out of the box now and won't go back in, but let's make it so huge swaths of the population are not devastated. Big companies just get bigger, wealthier and more powerful, and for the most part individuals can't stand up to them. They could pay artists a small percentage when their work is used, but they won't until they have to. AI companies will now have to consider that their entire models can be ruined by their practice of taking artists' work without consent. We can't appeal to them morally, so let's fuck up their wallet.
Below is a portion of an MIT Technology Review article about Nightshade.
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.
The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth.
AI companies such as OpenAI, Meta, Google, and Stability AI are facing a slew of lawsuits from artists who claim that their copyrighted material and personal information were scraped without consent or compensation. The hope is that Nightshade will help tip the power balance back from AI companies towards artists, by creating a powerful deterrent against disrespecting artists' copyright and intellectual property.
The team behind Nightshade also created Glaze, a tool that allows artists to "mask" their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows.
The team intends to integrate Nightshade into Glaze, and artists can choose whether they want to use the data-poisoning tool or not. The team is also making Nightshade open source, which would allow others to tinker with it and make their own versions. The more people use it and make their own versions of it, the more powerful the tool becomes, says Ben Zhao, the University of Chicago professor who leads the team. The data sets for large AI models can consist of billions of images, so the more poisoned images that are scraped into the model, the more damage the technique will cause.
Nightshade exploits a security vulnerability in generative AI models, one arising from the fact that they are trained on vast amounts of data—in this case, images that have been hoovered from the internet. Nightshade messes with those images.
Artists who want to upload their work online but don't want their images to be scraped by AI companies can upload them to Glaze and choose to mask them with an art style different from their own. They can then also opt to use Nightshade. When AI developers scrape the internet to get more data to tweak an existing AI model or build a new one, these poisoned samples make their way into the model's data set and cause it to malfunction.
Poisoned data samples can manipulate models into learning, for example, that images of hats are cakes, and images of handbags are toasters. The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample.
The researchers tested the attack on Stable Diffusion's latest models and on an AI model they trained themselves from scratch. When they fed Stable Diffusion just 50 poisoned images of dogs and then prompted it to create images of dogs itself, the output started looking weird: creatures with too many limbs and cartoonish faces. With 300 poisoned samples, an attacker can manipulate Stable Diffusion into generating images of dogs that look like cats.
Generative AI models are excellent at making connections between words, which helps the poison spread. Nightshade infects not only the word “dog” but all similar concepts, such as “puppy,” “husky,” and “wolf.” The poison attack also works on tangentially related images. For example, if the model scraped a poisoned image for the prompt “fantasy art,” the prompts “dragon” and “a castle in The Lord of the Rings” would similarly be manipulated into something else.
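To make the article's "invisible changes to the pixels" concrete, here is a minimal sketch of the general technique it is describing: an adversarial perturbation, where an image is nudged, within a budget too small for a human eye to notice, until a model reads it as a different concept. This is not Nightshade's actual algorithm (which targets text-to-image training and is more sophisticated); the classifier, the file name "dog.png", and the target class are assumptions chosen just for illustration.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# PGD-style perturbation sketch (NOT Nightshade itself): nudge an image,
# within an imperceptible L-infinity budget, until a pretrained classifier
# leans toward a different concept ("cat").
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)  # ImageNet stats
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

img = Image.open("dog.png").convert("RGB").resize((224, 224))  # hypothetical file
x = TF.to_tensor(img).unsqueeze(0)        # original pixels in [0, 1]
target = torch.tensor([281])              # ImageNet class 281: "tabby cat"

eps, step, iters = 8 / 255, 1 / 255, 40   # budget of +/- 8 pixel values
delta = torch.zeros_like(x, requires_grad=True)

for _ in range(iters):
    logits = model(((x + delta) - mean) / std)
    loss = torch.nn.functional.cross_entropy(logits, target)
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()           # step toward "cat"
        delta.clamp_(-eps, eps)                     # keep the change invisible
        delta.copy_(((x + delta).clamp(0, 1)) - x)  # stay in valid pixel range
    delta.grad.zero_()

poisoned = x + delta  # still looks like the dog to a person;
                      # the model now reads it as something else
```

The difference the article is getting at: a single image like this merely fools a model at inference time, but when thousands of such images are scraped into a training set, the model learns the wrong association itself, and undoing that means finding and deleting every corrupted sample.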