This tool strips away anti-AI protections from digital art


A new technique called LightShed will make it harder for artists to use existing protective tools to stop their work from being ingested for AI training. It’s the next step in a cat-and-mouse game between artists and AI proponents that has been playing out across technology, law, and culture for years. Generative AI models that create images need to be trained on a wide variety of visual material, and data sets used for this training allegedly include copyrighted art without permission. This has worried artists, who are concerned that the models will learn their style, mimic their work, and put them out of a job.

These artists got some potential defenses in 2023, when researchers created tools like Glaze and Nightshade to protect artwork by “poisoning” it against AI training (Shawn Shan was even named MIT Technology Review’s Innovator of the Year last year for his work on these). LightShed, however, claims to be able to subvert these tools and others like them, making it easy for the artwork to be used for training once again.

To be clear, the researchers behind LightShed aren’t trying to steal artists’ work. They just don’t want people to get a false sense of security. “You will not be sure if companies have methods to delete these poisons but will never tell you,” says Hanna Foerster, a PhD student at the University of Cambridge and the lead author of a paper on the work. And if they do, it may be too late to fix the problem.

AI models work, in part, by implicitly creating boundaries between what they perceive as different categories of images. Glaze and Nightshade change enough pixels to push a given piece of art over this boundary without affecting the image’s quality, causing the model to see it as something it’s not. These almost imperceptible changes are called perturbations, and they mess up the AI model’s ability to understand the artwork.

Glaze makes models misunderstand style (e.g., interpreting a photorealistic painting as a cartoon). Nightshade instead makes the model see the subject incorrectly (e.g., interpreting a cat in a drawing as a dog). Glaze is used to defend an artist’s individual style, whereas Nightshade is used to attack AI models that crawl the internet for art.

Foerster worked with a team of researchers from the Technical University of Darmstadt and the University of Texas at San Antonio to develop LightShed, which learns how to see where tools like Glaze and Nightshade splash this sort of digital poison onto art so that it can effectively clean it off. The group will present its findings at the USENIX Security Symposium, a leading global cybersecurity conference, in August.

The researchers trained LightShed by feeding it pieces of art with and without Nightshade, Glaze, and other similar programs applied. Foerster describes the process as teaching LightShed to reconstruct “just the poison on poisoned images.” Identifying a cutoff for how much poison will actually confuse an AI makes it easier to “wash” just the poison off.

LightShed is incredibly effective at this. While other researchers have found simple ways to subvert poisoning, LightShed appears to be more adaptable. It can even apply what it’s learned from one anti-AI tool (say, Nightshade) to others like Mist or MetaCloak without ever seeing them ahead of time.
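To make that training idea concrete, here is a minimal, purely illustrative sketch of a “reconstruct the poison” setup. It is not LightShed’s actual code: the tiny network, the names (PoisonEstimator, train_step, wash), the mean-squared-error loss, and the random tensors standing in for real pairs of clean and poisoned artwork are all assumptions made for illustration.

```python
# Illustrative sketch only (not the LightShed implementation): train a small
# model on (clean, poisoned) image pairs to predict the perturbation itself,
# then subtract the prediction to "wash" a poisoned image.
import torch
import torch.nn as nn

class PoisonEstimator(nn.Module):
    """Tiny convolutional net mapping a poisoned image to its estimated perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # output has image shape: the predicted poison
        )

    def forward(self, poisoned):
        return self.net(poisoned)

def train_step(model, optimizer, clean, poisoned):
    """One regression step where the target is the poison itself (poisoned - clean)."""
    predicted_poison = model(poisoned)
    loss = nn.functional.mse_loss(predicted_poison, poisoned - clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def wash(model, poisoned):
    """At inference time, subtract the estimated poison to recover a cleaned image."""
    with torch.no_grad():
        return (poisoned - model(poisoned)).clamp(0.0, 1.0)

# Toy usage: random tensors stand in for artwork pairs; a real pipeline would use
# images processed with tools such as Glaze or Nightshade.
model = PoisonEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(4, 3, 64, 64)
poisoned = (clean + 0.03 * torch.randn_like(clean)).clamp(0.0, 1.0)
for _ in range(5):
    train_step(model, optimizer, clean, poisoned)
restored = wash(model, poisoned)
```

The only point of the sketch is the training target: the model learns to predict the perturbation rather than the clean image, which mirrors the “reconstruct just the poison” framing above.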
While LightShed has some trouble performing against small doses of poison, those are less likely to kill the AI models’ ability to understand the underlying art, making it a win-win for the AI and a lose-lose for the artists using these tools.

Around 7.5 million people, many of them artists with small and medium-size followings and fewer resources, have downloaded Glaze to protect their art. Those who use tools like Glaze see them as an important technical line of defense, especially while the state of regulation around AI training and copyright is still up in the air.

The LightShed authors see their work as a warning that tools like Glaze are not permanent solutions. “It might need a few more rounds of trying to come up with better ideas for protection,” says Foerster.

The creators of Glaze and Nightshade seem to agree with that sentiment: the website for Nightshade warned that the tool wasn’t future-proof before work on LightShed ever began. And Shan, who led research on both tools, still believes defenses like his have meaning even if there are ways around them. “It’s a deterrent,” says Shan, a way to warn AI companies that artists are serious about their concerns. The goal, as he puts it, is to put up as many roadblocks as possible so that AI companies find it easier to just work with artists. He believes that “most artists kind of understand this is a temporary solution,” but that creating those obstacles against the unwanted use of their work is still valuable.

Foerster hopes to use what she learned through LightShed to build new defenses for artists, including clever watermarks that somehow persist with the artwork even after it’s gone through an AI model. While she doesn’t believe this approach will protect a work against AI forever, she thinks it could help tip the scales back in the artist’s favor once again.