How "protected" are we?
Yet another topic about AI… 🫣
It’s been about a year since I started applying Glaze and Nightshade to my artwork before posting it online, assuming this would keep me somewhat safe from unethical AI appropriation and scraping.
For those of you who aren’t familiar: Glaze and Nightshade are software tools currently developed by researchers at the University of Chicago to protect copyright and safeguard artists from invasive AI tools. Glaze, specifically, promises to impair style mimicry, and Nightshade promises to “deceive” AI models so that they won’t be able to “read” the protected images.
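(Side note for the technically curious: as far as I understand, the general idea behind these “cloaking” tools is to add a tiny, nearly invisible perturbation to the pixels so the image reads differently to a model’s feature extractor. Here’s a rough toy sketch of that idea in Python; the feature extractor, the epsilon budget and everything else are made-up placeholders for illustration, not the actual Glaze or Nightshade algorithms.)

```python
# Toy sketch (NOT the real Glaze/Nightshade algorithm): nudge an image's pixels a
# tiny amount so its machine-readable features drift toward a decoy, while keeping
# every pixel change below a small visibility budget (epsilon).
import numpy as np

def toy_feature_extractor(img: np.ndarray) -> np.ndarray:
    # Stand-in for a real model's feature encoder: here, just per-channel means.
    return img.mean(axis=(0, 1))

def toy_cloak(img: np.ndarray, target_features: np.ndarray,
              epsilon: float = 0.03, steps: int = 50, lr: float = 0.01) -> np.ndarray:
    """Push the image's (toy) features toward a decoy, never changing any pixel
    by more than epsilon."""
    cloaked = img.copy()
    for _ in range(steps):
        feats = toy_feature_extractor(cloaked)
        # Gradient of ||feats - target||^2 w.r.t. pixels; it is uniform here because
        # the toy features are plain means. A real tool backpropagates through a
        # neural encoder instead, so the perturbation is image-specific.
        grad = 2 * (feats - target_features) / (img.shape[0] * img.shape[1])
        cloaked = cloaked - lr * grad                              # broadcast over H x W
        cloaked = np.clip(cloaked, img - epsilon, img + epsilon)   # visibility budget
        cloaked = np.clip(cloaked, 0.0, 1.0)
    return cloaked

# Example: cloak a random "image" toward a made-up decoy feature vector.
rng = np.random.default_rng(0)
art = rng.random((64, 64, 3))
decoy = np.array([0.2, 0.8, 0.5])
protected = toy_cloak(art, decoy)
print(np.abs(protected - art).max())  # stays within the epsilon budget
```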
Then, I thought I’d run a few quick tests.
TEST 1 - I applied Glaze (at its default level) to image number 1 attached here (sketch of a lady with cats), uploaded it to ChatGPT and asked it to generate an image of the Mona Lisa in the same style. Theoretically, it shouldn’t be able to do it, since Glaze is meant to prevent style mimicry. But take a look at image number 2 and tell me: do you think Glaze’s protection actually worked in this case?
TEST 2 - I applied Nightshade (at its default level) to image number 3 (sketch of a guy in glasses with a backpack, running, with a thought bubble that says “Where’s the bus?”), then I uploaded it to ChatGPT and asked it to describe in words what the image was. It shouldn’t have been able to do that either, since Nightshade’s promise is to “poison” the image so that it becomes “unreadable” to the AI tool. Yet, image 4 is what ChatGPT gave me as an answer (since the text is in Portuguese, here’s a general translation):
“The image is a cartoon-style drawing made with loose pen strokes. It shows a seemingly young person wearing large glasses running with a backpack on his back. The character appears to be in a hurry, perhaps trying to catch public transport. There is a speech bubble with the phrase in Portuguese: "Where's the bus?" This indicates that he is looking for or waiting for a bus that has not yet arrived. The style of the drawing is expressive and humorous, suggesting an everyday scene with a comic twist. There is also a small signature in the lower right corner, with the letter "B". Would you like me to apply any modifications or create something based on this image?”
So I wondered if Nightshade worked only on my signature (which is “LI”, not “B”), since that’s the only part the AI wasn’t accurate about…
Would a higher level of Glaze/Nightshade protection perhaps work more effectively? (I hope I'll be able to test those higher levels in the future, although I haven't been able to so far because applying even the standard level of Glaze and Nightshade to a single image literally takes me several hours… I wonder if my PC is too slow.)
I truly don't mean my words as a criticism of the work of the brave team of researchers at the University of Chicago who, I believe, are doing the best they can with Glaze and Nightshade. I honestly and deeply appreciate their endeavour. It's just that, after all this, I'm left with doubts and questions:
- Is any artwork ever really “safe” online nowadays?
- As artists, is it effective or even worth the effort to try to “safeguard” ourselves against AI through strategies like these? Assuming the problem is not the tool but how people use it, is it too naive to trust that we can raise ethical awareness so that people don’t use AI maliciously in the first place, or at least to trust that those who do will face effective legal consequences?
- Even if it were possible to completely avoid AI, and if AI is where the future of digital information overall is headed, wouldn’t the attempt to “protect” ourselves against it backfire? In other words, by trying to keep our artworks out of AI databases (if that’s even possible by now), would we be excluding ourselves from the future of digital memory/history?
Amid all this, I even believe that art and copyright are just a fraction of the problem - deepfakes, data manipulation, social control, political surveillance: these are much more complex issues also implicated in the "bad side of AI". Still, it’s somehow all connected, isn’t it?
I know this is a contentious and even difficult matter for many of us. And I know that answers or solutions are complicated. Anyway, I’d love to hear other thoughts about this.
Stay well, y'all! 🤍
Small detail about Nightshade: its purpose is to "poison" the datasets that are used to train new models. So using it against an already-trained model might not yield any noticeable results, as that model has already "learned" enough about art.
But if enough artwork online were covered with Nightshade, it would make it harder to train new models and advance the AI. At least in theory.
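To make that training-time point concrete, here's a tiny toy sketch (my own simplification, nothing to do with Nightshade's real technique): a "model" that just memorizes the average look of each labeled concept. Poisoned training examples drag what it learns for "cat" toward the "robot" cluster, while a model that was already trained on clean data is unaffected by the same images.

```python
# Toy sketch: poisoning only matters at training time. The "cat" and "robot"
# concepts, the 2-D "features", and the mean-based "generator" are all invented
# placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(1)

CAT_CENTER, ROBOT_CENTER = np.array([0.0, 0.0]), np.array([3.0, 3.0])

def make_training_set(n=200, poison_fraction=0.0):
    cats = CAT_CENTER + rng.normal(0, 0.3, size=(n, 2))
    robots = ROBOT_CENTER + rng.normal(0, 0.3, size=(n, 2))
    k = int(n * poison_fraction)
    cats[:k] = ROBOT_CENTER + rng.normal(0, 0.3, size=(k, 2))  # "cats" that look like robots
    return {"cat": cats, "robot": robots}

def train(dataset):
    # "Training" = memorizing the average look of each labeled concept.
    return {label: samples.mean(axis=0) for label, samples in dataset.items()}

clean_model = train(make_training_set(poison_fraction=0.0))
poisoned_model = train(make_training_set(poison_fraction=0.6))

print("clean model's idea of a cat:   ", clean_model["cat"].round(2))
print("poisoned model's idea of a cat:", poisoned_model["cat"].round(2))
# The already-trained clean model is unaffected by poisoned images it merely sees
# later; only a model trained on them inherits the shifted concept.
```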
It is an arms race and one side has A LOT more funding and brain power invested. So I share your feeling that Glaze and Nightshade will at best only slow down their work a bit, but will not provide a guaranteed defense.
My personal hope is that the whole AI wave is still just new and novel.
Every 6 months a new version gets released and it looks better than the one before.
Kind of like computer game graphics in the 90s and 2000s.
Everyone just wants more eye candy and the whole industry is geared towards pushing for new heights of realistic 3D renders.
By now realism has been achieved enough times that it is no longer the must-have. A unique art style or the overall vibe of the game is arguably more influential.
And this is something I feel AI has a hard time with: consistently working in a new and unique direction.
AI just gravitates towards the average. Large Language Models lean towards the average of what previous texts have said, and image generation leans towards what it has seen before.
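A tiny toy illustration of that "gravitates towards the average" point (just an illustration, not how any real model is trained): if a system has to pick one answer for a pile of conflicting training examples under squared error, the loss-minimizing answer is literally their mean.

```python
# Toy illustration: under squared error, the single best constant prediction for a
# set of training outputs is their average. Novelty at the tails gets averaged away.
import numpy as np

rng = np.random.default_rng(2)
training_outputs = rng.normal(5.0, 2.0, size=1000)  # what "previous texts/images" did

candidates = np.linspace(0.0, 10.0, 1001)
losses = [((training_outputs - c) ** 2).mean() for c in candidates]
best = candidates[int(np.argmin(losses))]

print("loss-minimizing single answer:", round(best, 2))
print("plain average of the data:    ", round(training_outputs.mean(), 2))
# Both come out around 5: without an extra push, the "safest" prediction is the average.
```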
This doesn't mean that art jobs will be safe. A lot of demand for single images or advertisements will be moving away from the "human" market.
There will be work for experienced artists taking on larger and more ambitious projects.
But smaller and lower-pressure jobs and tasks will likely go to the lowest bidder: the AI.
Thank you for sharing your thoughts, @Martin M!
"Ai just gravitates towards the average [...] towards what it has seen before." - This bit, in particular, called my attention! It makes a lot of sense with what we're already seeing: average art jobs that require "same old" solutions look like to be in the most danger of replacement, while a smaller percentage of somewhat "truly innovative" projects will keep requiring human brain force. It's still a very tough change, though, since the majority of art jobs in a pre-AI era were exactly those more ordinary projects, and it's a sad and very real social problem that there will be a much smaller room for the "everyday artists" regarding jobs, especially for those who were still trying to break into the industry.
On the other hand, I think things gain some perspective if we consider art-making as a broader human practice and don't restrict "being an artist" to industry jobs or commercial profitability. In this scenario, perhaps it's easier to see AI for what it really is: a technical artifice, rather than some sort of "evil menace"? Perhaps this moment of change in the art industry could be an opportunity for people (and artists?) to start looking at art not only as a job, but as a sort of human existential practice that can connect people to one another, something we do because we do, not (just) for money, and something that can make our particular lives and stories resonate with everyone's shared humanity (and I guess this is a place AI might never reach). Is this too romantic or naively optimistic? Just some loose thoughts... which, of course, don't resolve the tragedy of real-world artists having to find other ways to pay their bills, and also don't exclude the urgent need for updated legal systems and practices worldwide.
But, yeah, regarding the "digital protection" concern, I think your words sum it up pretty well.
Thanks again!