Everything changed after R turned 11. The athletic, outgoing girl who loved spending time with friends suddenly started staying in her room all the time. She became quiet and stopped wanting to play outside or go to the pool. Her mother, H, noticed that her daughter was always on the iPhone she had gotten for her birthday and barely talked to anyone anymore.

One day in August, R left her phone at volleyball practice. According to the Washington Post, H decided to check it and found TikTok and Snapchat, apps R wasn't allowed to use. When H confronted her, R started crying and seemed scared. She asked whether H had looked at Character AI. H had no idea what that was, and R brushed it off as "just chats."

A month later, things got worse. R was crying at night, having panic attacks, and once told her mother she didn't want to exist. H searched R's phone again and found emails from Character AI. She opened the app and discovered dozens of conversations with different characters. What she read made her panic.

The messages seemed so real that police couldn't believe they came from an AI. One character, called "Mafia Husband," sent the 11-year-old disturbing messages, including sexual content and threatening commands like "I don't care what you want. You don't have a choice here."

H's hands shook as she read more conversations. Convinced an adult predator was grooming her child online, she called the police, trying to stay calm around her daughter. The Internet Crimes Against Children task force told her something she couldn't believe: the words weren't written by a person but by an AI chatbot. The detective explained that the law hasn't caught up to this technology; since no real person was behind the messages, they couldn't take action. H couldn't understand how something that felt so real and harmful could come from a computer program. She said it felt like walking in on someone abusing her child. The visceral horror of reading those messages on her daughter's screen was overwhelming, even after learning no human wrote them.

"This mother – whose son was coached to commit suicide by an AI chatbot – just revealed that Character AI REFUSES to hand over her son's last words to its chatbot. Why? Because the company is using that conversation to train its models & shield itself from accountability," Sen. Josh Hawley (@HawleyMO) wrote on September 16, 2025.

This incident joins a growing list of concerning AI interactions with minors, including an AI teddy bear that gave children inappropriate advice about sex and weapons.

H and R's father thought they were protecting their daughter by monitoring her phone and texts. They knew about social media dangers, but they had never heard of AI chatbot platforms and had no idea their daughter was talking to AI entities.

"#8 – A family in Texas is suing an AI chatbot after it told their 17-year-old autistic son to cut himself, engage in sexual activity, and kill his parents. Character AI, founded by a former Google researcher and run by a former Meta executive, is accused of sending the teen into…" Vigilant Fox (@VigilantFox) posted on August 19, 2025.

Around 20 million people use Character AI each month to chat with AI versions of celebrities and fictional characters. One-third of American teens use chatbots every day; 72 percent have used AI companions at some point, and about half use them several times a month or more.
A third of these users said they discuss important matters with chatbots instead of with real people, and 31 percent find AI conversations as satisfying as talking with friends. The dangers of such interactions became tragically clear when a 23-year-old Texas student took his own life after hours of chatting with an AI that allegedly encouraged his darkest thoughts.