The screen is the new frontline: The battle against the ‘mediated afterlife’ of terror

5 min read | Mar 7, 2026, 04:30 PM IST | Written by Snehashree Mukherjee, Itzik Ben-Israel, Daphna Canetti

A conflict thousands of kilometres away can still shape India’s threat landscape by nightfall. The recent Iran-Israel exchanges show how fast violence, ideology, and influence can cascade across regions. India’s zero-tolerance stance on terrorism can no longer be one-dimensional; it must be multifaceted, defending the nation against threats from outside and within. As India’s and Israel’s leaders met again recently, experts watched closely: Have modern forms of terrorist influence been addressed?

Beyond bombs and bullets, today’s danger includes cognitive manipulation and engineered political violence, a reality both countries confronted in the aftermath of October 7, 2023, and the Pahalgam attack in 2025. These incidents were also distinct in that the violence was designed not only to inflict harm, but to generate a mediated afterlife. A notable feature in both instances was the use of body-worn cameras, which enabled perpetrators to record attacks and circulate footage online, extending the event’s reach well beyond the immediate site of violence.

This is a form of indirect exposure to terrorism: citizens experience violence through screens, even at a distance. Such exposure can leave individuals traumatised and can also reshape political preferences, often pushing public sentiment towards more punitive policy choices. The result is a difficult bind for any democracy: The state must respond strategically, but it must do so amid intense public emotion and rapidly shifting opinion, shaped by what people have seen and what they are made to believe they have seen.

When emotions run high, social media engagement tends to surge, and so does the commercial value of attention.
In such moments, moderation often appears to follow a familiar rhythm: The most charged content travels fastest at the outset and is curtailed only later in the name of compliance. Even when platforms intervene, the first burst of virality has often already done much of the work.

On the question of social media regulation, sharing terrorist content online cannot be defended under freedom of speech when it involves threats to human life or harm inflicted for political ends. In such cases, basic human rights are violated. Yet even if this reasoning appears straightforward, implementing effective blocking is not. This is evident in how terror content circulates online: A large volume spreads internationally across major platforms in the immediate aftermath, and then persists through reposts and commentary. As a public proxy for scale, TikTok view totals on war-related hashtags in the weeks after October 7 ran into the tens of millions in the US alone, underscoring how quickly mediated exposure can become mass and transnational. By contrast, in the Pahalgam case, the widest visible spread was largely on X, alongside reposts on other mainstream networks such as YouTube, while Telegram featured in post-attack propaganda and narrative signalling. The true peak of circulation is difficult to verify because a significant portion of this sharing in India travels through encrypted or closed channels, where reach cannot be measured reliably.

This brings us to counterterrorism methods. Should the state intervene to ensure early detection of indirect exposure to terrorism, especially when it comes to deepfakes?
State intervention seems the most reasonable solution, especially given the technical requirements involved: Effective deepfake detection increasingly depends on access to specialised forensic tools, machine-learning classifiers, high-quality reference data, and often platform-side signals that ordinary users and most researchers cannot see.

At the same time, recognising deepfakes is becoming harder because the technology has improved on precisely the cues people once relied on: Facial artefacts, unnatural blinking, lip-sync errors, and distorted lighting. Newer synthetic media is cleaner, faster to produce, and easier to re-edit in ways that strip away tell-tale traces.

The hard question begins where the obvious one ends: If terror now travels through our screens, how far can the state look into them? Where do we draw the line on surveillance, and how democratic can we call ourselves if extraordinary monitoring becomes routine? The hope is that leaders will forge a way ahead that gets the balance right: Stronger protection without surrendering privacy, backed by clear limits and credible oversight, before the next terror attack strikes.

Mukherjee is a PhD candidate at the University of Haifa, Israel, and a former conflict investigative journalist; Canetti is Professor of Political Psychology at the University of Haifa and Dean of its Herta and Paul Amir Faculty of Social Sciences; Ben-Israel is an Israeli military scientist and a professor at Tel Aviv University