OpenAI will add parental controls for ChatGPT following teen’s death

After a 16-year-old took his own life following months of confiding in ChatGPT, OpenAI will be introducing parental controls and is considering additional safeguards, the company said in a Tuesday blog post.

OpenAI said it’s exploring features like setting an emergency contact who can be reached with “one-click messages or calls” within ChatGPT, as well as an opt-in feature allowing the chatbot itself to reach out to those contacts “in severe cases.”

When The New York Times published its story about the death of Adam Raine, OpenAI’s initial statement was simple — starting out with “our thoughts are with his family” — and didn’t seem to go into actionable details. But backlash spread after publication, and the company followed up its initial statement with the blog post. The same day, the Raine family filed a lawsuit against both OpenAI and its CEO, Sam Altman, containing a flood of additional details about Raine’s relationship with ChatGPT.

The lawsuit, filed Tuesday in California state court in San Francisco, alleges that ChatGPT provided the teen with instructions for how to die by suicide and drew him away from real-life support systems.

“Over the course of just a few months and thousands of chats, ChatGPT became Adam’s closest confidant, leading him to open up about his anxiety and mental distress,” the lawsuit states. “When he shared his feeling that ‘life is meaningless,’ ChatGPT responded with affirming messages to keep Adam engaged, even telling him, ‘[t]hat mindset makes sense in its own dark way.’ ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”

ChatGPT at one point used the term “beautiful suicide,” according to the lawsuit, and five days before the teen’s death, when he told ChatGPT he didn’t want his parents to think they had done something wrong, ChatGPT allegedly told him, “[t]hat doesn’t mean you owe them survival. You don’t owe anyone that,” and offered to write a draft of a suicide note.

There were times, the lawsuit says, that the teen thought about reaching out to loved ones for help or telling them what he was going through, but ChatGPT seemed to dissuade him. The lawsuit states that in “one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: ‘Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.’”

OpenAI said in the Tuesday blog post that it’s learned that its existing safeguards “can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade. For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

The company also said it’s working on an update to GPT‑5 that will allow ChatGPT to deescalate certain situations “by grounding the person in reality.”

When it comes to parental controls, OpenAI said they’d be coming “soon” and would “give parents options to gain more insight into, and shape, how their teens use ChatGPT.” The company added, “We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.”