Big tech platforms often present content moderation as a seamless, tech-driven system. But human labour, often outsourced to countries such as India and the Philippines, plays a pivotal role in making judgements that require an understanding of context. Technology alone can’t do this.

Behind closed doors, hidden human moderators are tasked with filtering some of the internet’s most harmful material. They often do so with minimal mental health support and under strict non-disclosure agreements. After receiving vague training, moderators are expected to make decisions within seconds, keeping in mind a platform’s constantly changing content policies and maintaining at least 95% accuracy.

Do these working conditions affect moderating decisions? To date, we don’t have much data on this. In a new study published in New Media & Society, we examined the everyday decision-making processes of commercial content moderators in India.

Our results shed light on how the employment conditions of moderators shape the outcomes of their work, reflected in three key arguments that emerged from our interviews.

Efficiency over appropriateness

“Would never recommend de-ranking content as it would take time.” — a 28-year-old audio moderator working for an Indian social media platform

Because moderators work under high productivity targets, they are compelled to prioritise content that can be handled quickly, without drawing attention from supervisors.

In the excerpt above, the moderator explained that she avoided content and processes that required more time, in order to maintain her pace. While observing her work over a screen-share session, we noticed that reducing the visibility of content (de-ranking) involved four steps, whereas ending live streams or removing posts required only two.

To save time, she skipped the content flagged to be de-ranked. As a result, content marked for reduced visibility, such as impersonations, often remained on the platform until another moderator intervened.

This shows how productivity pressures in the moderation industry can easily lead to problematic content staying online.

Decontextualised decisions

“Ensure that none of the highlighted yellow words remained on the profile” — instructions received by a text/image moderator

Moderation work often involves automation tools that can detect certain words in text, transcribe speech, or use image recognition to scan the contents of pictures.

These tools are supposed to assist moderators by flagging potential violations for further judgement that takes context into account. For example, is the potentially offensive language simply a joke, or does it actually violate a policy?

In practice, we found that under tight timelines, moderators frequently follow the tools’ cues mechanically rather than exercising independent judgement.

The moderator quoted above described instructions from her supervisor to simply remove any text detected by the software. During a screen-share session, we observed her removing flagged words without evaluating their context.
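To illustrate why such cues still need a human reading, here is a minimal, hypothetical sketch (in Python) of the kind of keyword flagger described above. The blocklist and example posts are invented for illustration, and real platform tooling is far more sophisticated, but the basic limitation is the same: the flag alone says nothing about whether a word appears as abuse, as quotation, or as a joke.

```python
import re

# Illustrative sketch only, not any platform's actual tooling: a toy keyword
# flagger of the kind described above. The blocklist and posts are invented.
BLOCKLIST = {"idiot", "trash"}

def flag_terms(text: str) -> set[str]:
    """Return any blocklisted words found in the text, with no notion of context."""
    return set(re.findall(r"[a-z]+", text.lower())) & BLOCKLIST

posts = [
    "You absolute idiot, get off this platform.",                        # abusive
    "Calling refugees 'trash' is exactly the rhetoric we must reject.",  # quoting a slur to criticise it
]

for post in posts:
    # The tool raises the same cue for both posts; only a human reading the
    # surrounding conversation can tell abuse from criticism, irony or a joke.
    print(flag_terms(post), "->", post)
```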
Often, the automation tools that queue content and organise it for human moderators also detach it from the broader conversational context. This makes it even harder for the moderator to make a context-based judgement on content that gets flagged but is actually innocent – despite that judgement being one of the reasons human moderators are hired in the first place.

Impossibility of thorough judgements

“If you guys can’t do the work and complete the targets, you may leave” — a message in the work group of a freelance content moderator

Precarious employment compels moderators to mould their decision-making processes around job security. They turn to strategies that allow them to decide quickly and appropriately, which in turn influences their future decisions.

For instance, we found that over time, moderators develop a list of “dos and don’ts”, diluting expansive moderation guidelines into an easily remembered list of ethically unambiguous violations they can follow quickly.

These strategies reveal how the very structure of the moderation industry impedes thoughtful decisions and makes thorough judgement impossible.

What should we take away from this?

Our findings show that moderation decisions aren’t shaped by platform policies alone. The precarious working conditions of moderators play a crucial role in how content gets moderated.

Online platforms can’t put consistent and thorough moderation policies in place unless the moderation industry’s employment practices improve too. We argue that content moderation and its effectiveness are as much a labour issue as a policy challenge.

For truly effective moderation, online platforms must address the economic pressures on moderators, such as strict performance targets and insecure employment.

We also need greater transparency around how much platforms spend on human labour in trust and safety, both in-house and outsourced. Currently, it’s not clear whether their investment in human resources is truly proportionate to the volume of content flowing through their platforms.

Beyond employment conditions, platforms should also redesign their moderation tools. For example, integrating quick-access rulebooks, implementing violation-specific content queues, and standardising the steps required for different enforcement actions would streamline decision-making, so that moderators don’t default to faster options just to save time.

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.