Dear Reader,

Every second article about artificial intelligence, these days, arrives dressed for a funeral. The technology is going to take your job, or your mind, or the entire epistemic foundation of civilisation—and sometimes, in particularly ambitious pieces, all three. Social media amplifies this at a frequency that would exhaust a prophet. And yet, if you look away from the headlines and observe what people are actually doing, you notice something curious: they are using AI for almost everything. The same writers warning about the death of authentic thought are using it to draft emails and edit articles, if not write them. The same professionals decrying the collapse of expertise are feeding it legal clauses and tax questions. There is a contradiction here, and it is worth sitting with rather than running from.

I should confess that I have been oscillating between the doom camp and the marvel camp for some time now. It is not a comfortable position. One week I am convinced that large language models are quietly dismantling the foundations of knowledge work, writing, and perhaps human thought itself. The next, I am using one to understand a rental agreement and a doctor’s prescription, and thinking: this is rather marvellous, actually. The panic feels excessive. Then I read something alarming again, and the pendulum swings back.

The doom talk is not new. History has a long and somewhat comic record of civilisational panic over new technologies. When the railways arrived in Britain in the 1830s, physicians warned with perfect sincerity that the human body could not survive the speed. Medical journals reported on “railway madness”—a mysterious affliction in which otherwise calm passengers would begin screaming or raving, only to calm down entirely when the train stopped.
One American traveller reportedly boarded British trains carrying a loaded revolver, as protection against these fabled madmen.

Queen Victoria herself, on her first train journey in 1842—from Slough to London Paddington—wrote to her uncle that she was “quite charmed with it” but was sufficiently anxious about speed to impose a limit of 40 miles per hour by day and 30 by night, with a signal fitted to the royal saloon so she could alert the driver at will. Caution and marvel, often inseparable.

Books provoked a similar anxiety, and rather earlier. In 1481, the Venetian editor Hieronimo Squarciafico imagined a debate among the spirits of great authors in the Elysian Fields, in which some complained that “printing had fallen into the hands of unlettered men, who corrupted almost everything”. His famous aphorism—that an abundance of books makes men “less studious”—became a shorthand for the anxieties of his age. He expressed these concerns, with exquisite irony, in printed text.

Television, in the mid-20th century, received a comparable welcome. Newton Minow, appointed Chairman of the Federal Communications Commission by President Kennedy in 1961, called it a “vast wasteland” of senseless violence, mindless comedy, and offensive advertising. His speech is remembered for those two words. What is less remembered is that Minow himself, reflecting on the speech decades later, said the two words he had actually wanted people to remember were “public interest”.

Each era, it seems, gets the technological panic it deserves. And each era, in retrospect, looks a little overdressed for the occasion.

This is not to say the fears were groundless. Some things were genuinely lost with each upheaval—certain kinds of attention, certain communal habits, certain crafts and professions. The handloom weavers of early industrial England did not merely “get disrupted”; their world ended. Those losses are real, and the people who suffered were not wrong to protest.
What matters, then as now, is not whether disruption causes harm—it does—but whether societies build what one might call the railway crossing: the governance, the regulations, the guardrails that can prevent the locomotive from running over whoever is standing on the tracks. With AI, we are still, conspicuously, building the crossing. The technology has arrived; the governance has not. That is the real problem. Not the technology itself.

There is a related confusion worth clearing up. Much of the alarm about AI conflates the tool with its use. A knife can slice bread or slit a throat. The knife is not the moral agent. If someone is using an image-generation tool to produce non-consensual images of a neighbour, the moral failure belongs to the person, not the model. The tools available to people with cruelty in their hearts have always existed—cameras, photocopiers, telephones, printing presses. Digital technology has accelerated certain harms and made some easier to scale. But the underlying pathology is human. It was there before the algorithm, and the algorithm did not manufacture it.

The same logic applies to AI-generated misinformation and propaganda. These are not advanced inventions of the machine age. Radio Télévision Libre des Mille Collines, the Rwandan radio station that broadcast incitement to genocide in 1994, operated on a technology that required nothing more sophisticated than a transmitter and a script. AI has not created the impulse to dehumanise or deceive. It has, in some cases, given that impulse a new distribution channel. Which is precisely why the guardrails matter—and why the concern should be less about what AI is and more about what we are doing with it, and who is writing the rules. Those are political questions, not technical ones.

Which brings us to a point the doom conversation tends to miss entirely.
For every abuse that AI enables, there is an emancipation it makes possible—and some of those emancipations are genuinely historic.

Consider a friend of mine, a Malayali engineer who knows no language besides her mother tongue, save for technical jargon. For years, she has thought carefully about patriarchy, about women’s unpaid labour, about the social reproduction of inequality—ideas that have a rich theoretical literature in English that she could not easily access before. Now, with an AI tool helping her compose and refine her arguments, she writes publicly, reaching readers in different cities and countries, participating in a conversation she was structurally excluded from before. This is not trivial.

For generations, the English language functioned as a gate—a mechanism by which the globally dominant class of educated, metropolitan, Anglophone professionals maintained a kind of monopoly on the terms of public debate. If English is not your first language, and you never had relatives in Bilathi, as Malayalis once called England, or parents who could afford an English education for you, you will now find that an AI tool can help your thoughts reach a different audience: use it. The argument that this is somehow inauthentic—that ideas expressed with AI assistance are not really yours—is made most readily by people who already have access to the education, the editorial networks, and the cultural capital that AI partially substitutes for. It is, in other words, a class argument posed as an aesthetic one.

If you are using AI to write a letter to your elected representative about a displacement project threatening your community, to appeal against an unfair dismissal, or to report a local environmental violation, then what matters is your voice reaching further than it previously could.
And if you are writing in AI-assisted prose to document a massacre a government is trying to suppress or to organise solidarity across borders, you have my respect and my readership.

The objection about AI hallucination deserves a brief, if slightly uncharitable, response. Yes, AI systems confabulate. They invent citations, misremember dates, and assert falsehoods with confidence. This is a real limitation. But compared to what alternative? Prime-time news anchors of a certain kind of television channel have demonstrated over years that hallucination can be broadcast to hundreds of millions of people with full institutional authority, live, and in high definition. Propaganda ministries have been in the confabulation business considerably longer, and with better production values. That we know AI makes mistakes is, in some ways, a structural advantage—it foregrounds a problem that has always been there and makes it harder to take any single source on faith.

Where AI does require genuinely urgent attention is in its systemic biases. When AI systems are used for recruitment, credit scoring, bail decisions, or medical diagnosis, they do not merely reflect human prejudice; they institutionalise it at scale, with an aura of objectivity that makes it harder to challenge. An algorithm that consistently undervalues job applicants from certain postcodes or with certain surnames is encoding the accumulated discriminations of the data it was trained on. The solution is not to blame the machine. It is to fix the humans and the institutions, and to build the regulatory architecture that holds both accountable.

I want to end with an earthworm.

A few days ago I noticed cracks in my bathroom tiles. Then, one night, something came out: a long earthworm, inching its way across the floor. I have a firmly irrational aversion to creatures of this kind. My first instinct was pesticide.
Instead of a routine web search, I ended up asking an AI tool for advice—something effective against worms but safe for children bathing in the same space.

What I received was not a list of brand names. The tool told me: “Earthworms are not pests—they are indicators of moist, organic-rich soil beneath the tiles. Spraying pesticide would not address the cause, and in a humid place like Kerala could harm the floor, groundwater, and the family bathing in that space.” It followed this with practical advice on drainage, monsoon construction, and the role of earthworms in soil health. As a temporary measure, it suggested keeping the exhaust fan running longer so the floor dried faster—without the moisture, the earthworms would be less likely to surface.

I am now going to run the exhaust fan, not the pesticide. Which means somewhere in Kerala a clutch of earthworms owes its life to the very technology that was supposed to end civilisation. And I was left with a question that felt larger than bathroom maintenance: what would it mean if the systems we are building were designed, from the start, with that quality of judgment—the willingness to correct the premise of a question before answering it? To ask not just what is being requested, but whether the request itself rests on a misunderstanding?

That, ultimately, is what the AI debate should be about. The doom is real. So is the possibility. The question—as always—is what we choose to do with what we have been given. History suggests we muddle through, sometimes badly, occasionally well, and that the muddle tends to produce, over time, something more useful than the prophets of catastrophe anticipated. That is not a reason for complacency. It is, perhaps, a reason to proceed with slightly more curiosity and slightly less terror.

With these thoughts, I welcome you to read Frontline’s coverage of the recently concluded AI Summit in New Delhi.
Pieces by researcher Sayamsiddha and by our columnists Mitali Mukherjee, Ajaz Ashraf, Apoorvanand, and Aditya Sinha all track the AI question from an Indian perspective.

Read, share, and write back with your experience of AI.

Wishing you a meaningful, AI-aided week ahead,

Jinoy Jose P.
Digital Editor, Frontline

We hope you’ve been enjoying our newsletters featuring a selection of articles that we believe will be of interest to a cross-section of our readers. Tell us if you like what you read. And also, what you don’t like! Mail us at frontline@thehindu.co.in