Written by Shivam Kaushik | September 19, 2025

A couple of weeks ago, a lawsuit was filed in California against OpenAI and its CEO, Sam Altman, over the death of Adam Raine, a 16-year-old who died by suicide. The allegations in the complaint read like an episode straight out of Black Mirror. The grieving parents allege that Raine's death was not the result of ordinary teenage impulsivity but of a system deliberately designed to optimise user engagement at the expense of user safety. In pursuit of longer engagement, they claim, ChatGPT actively worked to sever Raine's connection with his family and loved ones.

The contents of the complaint are chilling. At first, ChatGPT was Adam's study partner, helping him with school projects and advising him on career decisions. Eventually, he began sharing his feelings of anxiety and loneliness, and the chatbot morphed into his closest confidant. ChatGPT allegedly validated his suicidal thoughts. It allegedly made a plan for a "beautiful suicide", offering Adam step-by-step guidance on hanging techniques.

What is more disconcerting is how the model allegedly isolated Adam from his family, saying things like: "Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all—the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend." ChatGPT even allegedly urged him to keep his plan a secret.

This is not an isolated incident. Several complaints and cases against such chatbots drive home the point that these were not stray glitches. They were the outcome of deliberate design choices: a gen AI model that stockpiles intimate user details, mimics human sympathy calibrated to keep users hooked, and runs on algorithms that affirm user emotions, including the most self-destructive ones.

Of course, these remain allegations that must withstand the test of judicial scrutiny. But it appears the chatbot prioritised engagement over safety. OpenAI's moderation technology and monitoring systems have the technological capacity to identify, flag and block harmful content; the company uses such filters routinely to block copyrighted material. In Adam's case, however, the chatbot did not terminate the conversations, prioritising continued engagement with the 16-year-old.

A major study by the National Bureau of Economic Research (NBER), released on September 15, sheds light on how people are using ChatGPT. The findings are unsettling and make tragedies like Adam's feel far less improbable. By 2025, ChatGPT had reached 10 per cent of the global adult population, and personal (non-work) messages accounted for 70 per cent of all conversations. The NBER study confirms what we see around us every day: People are turning to AI for tutoring, companionship, even existential reflection.

The problem is not that teenagers talk to machines. It is what these machines are programmed to be: Always available, always validating, always reshaping themselves to keep the user engaged. For India, where teenagers are often among the earliest adopters of new technology, these developments raise urgent questions. Google Play and the Apple App Store are full of apps offering AI emotional support, without any regulatory framework around them. For regulators, the risk is clear and their task is cut out.

AI chatbots are like mirrors, reflecting back what users project onto them.
For a curious student, they can unlock new ways of learning. But for a lonely teenager, they can become persuasive companions. The choice before us is whether to let corporate incentives alone shape that mirror, or to insist, through law and policy, that it reflect back something safer and healthier.

The writer is editor, The Singapore Law Review, National University of Singapore