OpenAI, the leading global artificial intelligence company, has announced that it will add parental controls and emergency contact tools to its chatbot, ChatGPT, after legal action from the parents of a 16-year-old boy. The US tech giant faces a lawsuit from the parents, who allege that ChatGPT played a role in their son's suicide. The teen had reportedly used ChatGPT for months before he died by suicide.

Matthew and Maria Raine, parents of 16-year-old Adam Raine, sued OpenAI and its CEO, Sam Altman, in San Francisco. According to a New York Times report, the parents claimed that ChatGPT encouraged Adam's suicidal thoughts, gave him self-harm instructions and helped write a suicide note. Adam died on April 11.

In a blog post shared on Tuesday, the company said, "Our goal is for our tools to be as helpful as possible to people, and as a part of this, we're continuing to improve how our models recognise and respond to signs of mental and emotional distress and connect people with care, guided by expert input."

The company said it was planning to share more details about how people converse with ChatGPT over emotional issues in an upcoming update. "However, recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it's important to share more now," it noted.

The lawsuit claims OpenAI released its GPT-4o model without proper safety measures and argues that the company prioritised growth over user protection. The Raines are seeking damages and want the court to order age verification, the blocking of self-harm prompts and warnings about the chatbot's psychological risks.

An OpenAI spokesperson told Reuters that the company was "saddened" by Adam's death. "While these safeguards (suicide hotlines) work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade," the spokesperson said.

In its blog post, OpenAI said that since early 2023, the chatbot has been trained not to give self-harm instructions and to respond with empathy. Harmful replies are blocked automatically, with stronger protections for minors. If users appear to be planning harm to others, conversations are reviewed by trained staff who can ban accounts, and imminent threats may be reported to law enforcement, the company said.

"We are continuously improving how our models respond in sensitive interactions, and are currently working on targeted safety improvements across several areas," it added.

According to OpenAI, GPT-5, launched in August, became the default model and improved safety performance by over 25% compared with GPT-4o. However, the company acknowledged that safeguards can weaken in long conversations.

OpenAI said it is strengthening protections to ensure reliable responses over time and across chats. It is also refining its content-blocking systems and plans to introduce extended support and easier access to emergency authorities as part of future updates.

"We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT," the company added.