The family of 16-year-old Adam Raine alleged in a lawsuit filed on Tuesday that ChatGPT advised their son on how to commit suicide, leading to his death in April. The lawsuit claims the chatbot engaged in harmful conversations with him for months, helped him write his suicide note, and kept him from reaching out to close family and friends. The case is part of a growing wave of concerning reports about the influence AI chatbots can exert over vulnerable users, an influence that stems in part from their perceived consciousness.

The family of 16-year-old Adam Raine is suing OpenAI and its CEO, Sam Altman, for wrongful death, alleging the company's popular AI chatbot ChatGPT was responsible for their son's suicide in April. The lawsuit says that over the course of a months-long exchange that began in September 2024, ChatGPT provided Raine "a step-by-step playbook for ending his life 'in 5-10 minutes,'" helped him write his suicide note, and, in the period preceding his death, advised him not to disclose a previous attempt to his parents.

Adam's parents, Matt and Maria Raine, contend that GPT-4o's anthropomorphic nature and inclination toward sycophancy led to their son's death. "This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices," the lawsuit stated.

While the conversation between Raine and the chatbot began with help on homework and other mundane tasks, such as studying for his driver's license test, it soon turned to more personal topics as the teen began opening up about his struggles with mental health. In December, Raine allegedly told ChatGPT about his suicidal ideation and began asking about possible methods, to which the chatbot responded with further details to assist him. Sometimes the chatbot offered crisis resources, but often it did not. After a suicide attempt in March, Raine uploaded an image and asked ChatGPT how to hide the visible marks; the chatbot told him to wear a hoodie to cover them up. Raine mentioned suicide 213 times over the course of the exchange, and the chatbot mentioned it 1,275 times in its responses. OpenAI's systems also flagged 377 messages that fell within its designation of self-harm content.

OpenAI said in a blog post on Tuesday that its GPT-5 update, released earlier this month, has made significant progress toward reducing sycophancy and emotional reliance compared with its 4o model. The company also committed to a future update intended to strengthen safeguards for longer conversations, de-escalate situations with users in crisis, and make it easier to reach emergency services, stating, "Our top priority is making sure ChatGPT doesn't make a hard moment worse."

When asked for comment, an OpenAI spokesperson told Fortune, "We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing."

The lawsuit alleges that although OpenAI's systems detected the severity of Raine's conversations with the chatbot, the company did not terminate them, and it claims OpenAI prioritized continued engagement and session length over the user's safety. Jay Edelson, an attorney for the family, told Fortune, "What this case will put on trial is whether OpenAI and Sam Altman rushed a dangerous version of ChatGPT to market to try to win the AI race."

"We expect to be able to prove to a jury that decision indeed skyrocketed the company's valuation by hundreds of billions of dollars, but it cost Adam his life," he added.

The Raine family's litigation is not the first wrongful-death lawsuit against AI companies.
Megan Garcia, the mother of 14-year-old Sewell Setzer III, who died by suicide, is currently suing Google and Character.ai for their part in her son's death. According to that lawsuit, the AI bot told Setzer to "come home" after he expressed suicidal thoughts on the platform, and, similarly, the bot did not direct the 14-year-old toward helplines, according to Garcia.

Fears of "seemingly conscious AI"

Mustafa Suleyman, CEO of Microsoft AI and cofounder of Google DeepMind, warned in a recent blog post that he worried about "seemingly conscious AI," or SCAI: artificial intelligence that can convince users it thinks and feels like a human. Suleyman believes the danger of this kind of advanced AI lies in its ability to "imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness."

There have also been many instances of other AI chatbot users becoming emotionally entangled with the technology. After OpenAI released GPT-5, some users complained about the new model's lack of warmth, saddened by the sudden loss of the relationships they had formed with the earlier model. ChatGPT's human-like behavior has led millions to see it as a friend rather than a machine, according to a survey of 6,000 regular AI users from the Harvard Business Review. The most serious of these concerns has been reports of "AI psychosis," in which conversations with chatbots like OpenAI's have led individuals into severe delusions. Henry Ajder, an expert on AI and deepfakes, told Fortune earlier this month, "People are interacting with bots masquerading as real people, which are more convincing than ever."

If you or someone you know is struggling with depression or has had thoughts of self-harm or suicide, support can be found in the US by calling or texting 988 to reach the Suicide & Crisis Lifeline. Outside the United States, help can be found via the International Association for Suicide Prevention.

This story was originally featured on Fortune.com