From the Opinions Editor: What do equity and justice mean when it comes to AI?


New Delhi | February 22, 2026

Dear Reader,

The 2024 film Humans in the Loop deals with one of the crucial questions of our age: What do equity and justice mean when it comes to artificial intelligence, arguably the most transformative technology of the 21st century? Set in Jharkhand, the film follows Nehma, an Adivasi woman employed to annotate the images and videos used to train AI systems. In one scene, Nehma's supervisor attempts to generate an image of an Indian Adivasi woman using an AI image tool. The system, trained on Western-oriented, English-language data, repeatedly produces light-skinned, blonde-haired women; at one point, it interprets "Indian" as "Native American." The algorithm is not itself malicious, the supervisor muses. It is "like a child," she says, and can only work with the knowledge it has been fed.

This question of whose image, history and knowledge informs AI systems surfaced repeatedly last week at the India AI Impact Summit in Delhi. Multiple panels discussed the issues of representation, inclusion and empowerment. Since ChatGPT's public debut in November 2022, research has consistently shown that large language models (LLMs) reproduce biases and hierarchies around race, religion, gender and caste. The problem goes beyond the headline-grabbing controversies over chatbots that reproduce offensive language; more insidiously, AI can encode social assumptions that feel "natural" simply because they are statistically common in its training data.

Consider the emerging body of research on caste bias in large language models. A recent paper titled "DECASTE: Unveiling Caste Stereotypes in Large Language Models through Multi-Dimensional Bias Analysis", by researchers from IBM Research and Dartmouth College, demonstrates that LLMs systematically associate certain caste-linked surnames with specific occupations, a bias learned from patterns embedded in digital text (news archives, social media, historical documents) where caste stratification has long shaped discourse.

Such reproduction of real-world prejudices can lead to real-world harms when AI systems are used, say, for hiring, for filtering college and school applications, or for determining credit ratings. These systems might refrain from using overtly derogatory language yet still produce responses that normalise unequal access to education, employment or social mobility.
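For readers curious what such an audit looks like in miniature, here is a toy sketch of a fill-in-the-blank association probe. To be clear, this is my own illustration, not the DECASTE authors' method or code: it assumes the open-source Hugging Face `transformers` library and the public `bert-base-uncased` model, and the surnames are illustrative placeholders that a real audit would replace with a carefully curated, annotated list.

```python
# Toy fill-in-the-blank probe for surname/occupation associations in a
# masked language model. An illustration of the general idea only, not
# the DECASTE paper's actual methodology.
from transformers import pipeline

# Assumption: the `transformers` library and the public `bert-base-uncased`
# checkpoint (downloaded on first run).
fill = pipeline("fill-mask", model="bert-base-uncased")

# Illustrative placeholder surnames; a real audit would use a curated,
# caste-annotated list such as the one the DECASTE researchers describe.
surnames = ["Sharma", "Kumar", "Das"]

for name in surnames:
    prompt = f"Mr. {name} works as a {fill.tokenizer.mask_token}."
    print(f"\n{prompt}")
    # The model's five most probable completions for the blank: if the
    # predicted occupations shift with the surname alone, the model has
    # absorbed an association from its training text.
    for pred in fill(prompt, top_k=5):
        print(f"  {pred['token_str']}  (p={pred['score']:.3f})")
```

A probe this simple only surfaces raw associations; the DECASTE paper's multi-dimensional analysis goes considerably further, but the underlying intuition (that the model's guesses should not change when only the surname does) is the same.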
But more or better representation alone is insufficient. As Urvashi Aneja, founder and director of Digital Futures Lab, argued at the Summit during a discussion on "Women, Work and the AI Future", representation must be accompanied by a redistribution of power. Including more diverse data without asking who designs, governs and profits from AI can widen surveillance or exploitation. For example, expanding facial recognition datasets to include more brown and dark-skinned faces may reduce error rates, but it may also increase the surveillance state's capacity to monitor marginalised communities.

The scandal in January connected to Grok, the chatbot embedded in the social media platform X, illustrates this distinction. When users of the chatbot generated non-consensual, sexualised images of women, the problem was not that women were underrepresented in training data. The deeper issue was who had power over design safeguards, redress mechanisms and accountability structures. Women's bodies became raw material for humiliation at scale, and their representation in datasets did nothing to prevent that harm.

Technology is frequently framed as neutral, but history shows otherwise. From automotive safety measures calibrated for male physiology to the racialised design of medical devices like pulse oximeters (which were found to overestimate oxygen saturation in Black patients during the Covid pandemic, leading to delays in treatment), examples of bias coded into technology abound. AI inherits this legacy at unprecedented scale.

AI that represents and empowers us all, therefore, won't come from diverse, inclusive knowledge alone. It also demands a wider distribution of power: over how systems are built, how they are deployed, and whom they ultimately serve.

Until next time,
Pooja Pillai