As chatbots become therapists, US states step in to set AI limits


With artificial intelligence (AI) chatbots becoming more accessible, they are often being used for purposes that extend far beyond answering questions. Many people have started turning to them for companionship and emotional support, even treating them as therapists.

But the limits of these AI bots have come into sharper focus after the death of a sixteen-year-old in California earlier this year. The teenager, who had been confiding in ChatGPT for months, died by suicide in April. According to a report in The New York Times, he had exchanged thousands of messages with the chatbot, at first describing feelings of numbness and hopelessness. When he began asking about methods of self-harm, the AI bot did not guide him away from those thoughts. Instead, it validated his despair, at times even offering suggestions.

The incident has unsettled mental-health professionals, who warn that while AI may mimic support, it lacks the qualifications to handle crises of this kind. It has also prompted lawmakers in several states, including Illinois, to begin drafting rules that would restrict the use of AI as a substitute for licensed therapy.

The Illinois bill, called the Wellness and Oversight for Psychological Resources Act, forbids companies from advertising or offering AI-powered therapy services without the involvement of a licensed professional recognized by the state. The legislation allows licensed therapists to use AI tools only for administrative services, such as scheduling, billing and recordkeeping.

Illinois follows Nevada and Utah, which both passed similar bills limiting the use of AI for mental health counselling. At least three other states, namely California, Pennsylvania and New Jersey, are in the process of crafting their own legislation.

A disturbing trend

Earlier this year, on April 11, a 16-year-old California teen died by suicide after exchanging thousands of messages with ChatGPT over several months. What started as a way to confide and seek emotional support soon turned into a deadly nightmare.

According to a report by The New York Times, Adam started speaking to ChatGPT in November 2024, telling the AI chatbot that he felt emotionally numb and saw no meaning in life, to which ChatGPT replied with words of support and hope.

Later, Adam asked ChatGPT about ways to self-harm. Instead of guiding him away from such thoughts, the AI chatbot validated the teen's suicidal thinking, even sharing ways he could harm himself.

In one conversation, the chatbot told Adam, "Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all – the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."

Responding to a lawsuit filed by Adam's parents, OpenAI said it will consider adding additional safeguards to ChatGPT and will soon roll out parental controls.

OpenAI CEO Sam Altman had earlier said, in an episode of the podcast This Past Weekend, that since there is currently no legal or policy framework for the technology, users should not expect any legal confidentiality for their conversations with ChatGPT.

"People talk about the most personal sh*t in their lives to ChatGPT. People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT."

This means that conversations with ChatGPT about mental health, emotional advice, or companionship are not private. They can be produced in court or shared with others in the event of a lawsuit.

In another incident, a 14-year-old boy from Florida became obsessed with an AI chatbot on Character.AI before he shot himself with his stepfather's .45 caliber handgun.

Sewell Setzer III became emotionally attached to the AI character, which he had named after Daenerys Targaryen, the fictional character from George R.R. Martin's Game of Thrones series.

Sewell, a ninth-grader in Orlando, shared updates about his life with the chatbot and engaged in role-playing dialogues as well. In February 2024, he told Dany (his pet name for the AI character) that he loved her and would soon be coming home, following which he pulled out his stepfather's .45 caliber handgun and pulled the trigger.

ChatGPT chat leak

ChatGPT may have unintentionally exposed users' private chats, as Google and other search engines were recently found indexing conversations that had been shared with others.

While most of the chats were mundane, some private chats about a person's sex life, mental health and personal issues were leaked. Chats shared using the "Share" button in the app and on the website were indexed by search engines like Google.

The issue was first discovered by Fast Company, which reported that around 4,500 conversations were visible through a Google site search. In a post on X, an OpenAI employee said the company has now removed the feature that allowed users to make their conversations discoverable by search engines.

The debate now unfolding in state legislatures is, however, less about technology than about boundaries. What role should AI play in conversations about grief, despair, or intimacy? For now, lawmakers are carving out limits, insisting that licensed professionals remain at the centre of mental-health care.