The explosion of generative artificial intelligence (AI) tools has provoked both hopes and anxieties about the technology's potential benefits and harms. In advanced economies, people are almost equally worried and optimistic about it.

This is perhaps unsurprising. AI consumes vast amounts of natural resources yet promises to save the planet. It may improve human efficiency and productivity, while putting millions out of work. For many white-collar workers, AI use now seems non-optional. The messaging is clear: get on board or be left behind.

Amid this uncertainty and rapid technological uptake, concerned citizens have made efforts to resist AI. One form of AI resistance, aimed at sabotaging the functionality of AI large language models, is data poisoning. But how accessible is it to the everyday person? And what is at stake in its use?

What is AI resistance?

Acts of AI resistance range from social sanctions and boycotts to strikes, protests, public outcry and lawsuits. Driving these acts are perceived threats to jobs, ethics, safety, democracy and sovereignty, and the environment.

AI is also described as an existential risk to creative industries, including music, fiction and film. In the United Kingdom, generative AI has been characterised as “industrial scale theft” that threatens a £124.6 billion (A$237 billion) creative sector and more than 2.4 million jobs.

People have long used civil disobedience to address social injustices. Famously, Rosa Parks’ refusal to give up her bus seat in Montgomery, Alabama sparked a 13-month boycott by tens of thousands of Black residents. It ended only when racial segregation on public transport was ruled unconstitutional in the United States.

Acts of sabotage have also long been central to collective action against injustice. In fights for labour rights, workers have employed diverse tactics to reduce efficiency and productivity.
This has ranged from hotel workers putting salt in sugar bowls to farm workers breaking machinery. Data poisoning can be viewed as a modern version of these historic actions.

How does data poisoning work?

Data poisoning means deliberately inserting misleading, biased or nonsensical content into the data AI models learn from, in order to degrade their outputs. As few as 250 poisoned documents in a dataset could compromise outputs across AI models of any size.

There are various ways to poison data. Some require highly technical skills; others are accessible to anyone with an internet connection, provided their text or images are used as training data.

Researchers have developed several data poisoning tools that exploit the vulnerabilities of AI models. Glaze and Nightshade enable artists to subtly alter their images so they cannot be usefully exploited as training data. The tool CoProtector defends against the exploitation of open source code repositories such as GitHub. Monash University and the Australian Federal Police have created Silverer, which lets social media users doctor personal images to prevent them from being used in deepfakes.

[Figure: example AI model outputs generated from data poisoned with the Nightshade tool. Shan et al., arXiv (2023), CC BY]

But you don’t need a tool or advanced skills to affect AI. Simply creating websites with fictitious information, making jokes on Reddit, feeding models their own outputs, or editing Wikipedia can poison data.

Data poisoning is commonly presented as a dangerous act perpetrated by “cyber criminals” or “malicious actors”. But what if it’s used to protect human rights?

Is data poisoning legal? Is it ethical?

Legal obligations related to data poisoning are often directed at AI developers and organisations. The EU Artificial Intelligence Act requires that appropriate measures be adopted to prevent and detect data poisoning. The legal status of AI data poisoning by individual users is less clear.
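As a brief technical aside, the core mechanic of data poisoning described earlier (mislabelled or misleading examples slipped into training data) can be shown with a toy sketch. Everything below is invented for illustration: a tiny word-count “classifier” and a handful of fabricated reviews stand in for the far larger models and datasets real attacks target.

```python
from collections import Counter

def train(examples):
    """Count word occurrences per label (a toy bag-of-words 'model')."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    """Pick the label whose training-set words best match the text."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in model.items()}
    return max(scores, key=scores.get)

clean = [
    ("great movie loved it", "pos"),
    ("wonderful great acting", "pos"),
    ("terrible movie hated it", "neg"),
    ("awful terrible plot", "neg"),
]

# Poisoning: an attacker slips deliberately mislabelled documents
# into the training data scraped from the web.
poison = [
    ("great wonderful loved", "neg"),
    ("great great great wonderful", "neg"),
]

clean_model = train(clean)
poisoned_model = train(clean + poison)

print(classify(clean_model, "great wonderful film"))     # pos
print(classify(poisoned_model, "great wonderful film"))  # neg
```

Two mislabelled documents are enough to flip this toy model's verdict on clearly positive text, which is the same leverage, at a vastly larger scale, that poisoning attacks on real models exploit.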
Criminal penalties may apply under US or UK computer fraud and misuse laws. Interference with an AI model would also likely breach the terms of service of AI companies.

Even if AI data poisoning is unlawful, questions can still be asked about its ethical status. Philosophers have long recognised that civil disobedience can be justifiable in circumstances where legally sanctioned practices produce serious injustice.

If AI companies are operating with state approval in ways that undermine citizens’ rights to privacy, copyright, safe and secure work, quality education, and social and sexual safety, data poisoning may constitute ethical civil disobedience. For philosopher John Rawls, “[civil disobedience] is one of the stabilising devices of a constitutional system, although by definition an illegal one”.

If the intention is to prevent mass unemployment, preserve the integrity of elections, and protect against social harms (suicide, child abuse, increased human isolation, loss of human creativity and environmental degradation), data poisoning could align with the principles of justice that underpin democratic social institutions.

A significant problem with data poisoning is that even when models become compromised and their outputs grow inconsistent, misleading or nonsensical, users tend to keep trusting AI systems. Data poisoning could then contribute to the very harms it seeks to resist, amplifying the inaccuracy of systems humans increasingly rely on, irrespective of their quality and effects.

Data poisoning is not simply an immoral cyber crime. It can be an ethically complex strategy for addressing social injustices. AI development needs to be of collective benefit and aligned with public values and interests.
If AI company employees are asking “Are we the baddies?”, history may prove that in some cases data poisoners are on the side of good.

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.