Users increasingly share personal issues with AI, but current laws offer no legal protections, Sam Altman has warned.

The tech industry has yet to resolve how to protect user privacy in sensitive interactions with AI, Sam Altman, CEO of industry leader OpenAI, has admitted. Current systems lack adequate safeguards for confidential conversations, he warned, amid a surge in the use of AI chatbots by millions of users – including children – for therapy and emotional support.

Speaking on the This Past Weekend podcast published last week, Altman said users should not expect legal confidentiality when using ChatGPT, citing the absence of a legal or policy framework governing AI.

“People talk about the most personal sh** in their lives to ChatGPT,” he said.

Many AI users – particularly young people – treat the chatbot like a therapist or life coach, seeking advice on relationship and emotional issues, Altman said. However, unlike conversations with lawyers or therapists, which are protected by legal privilege or confidentiality rules, no such protections currently exist for interactions with AI. “We haven’t figured that out yet for when you talk to ChatGPT,” he added. Altman said the issue of confidentiality and privacy in AI interactions needs urgent attention.
“So if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that, and I think that’s very screwed up,” he said.

OpenAI says it deletes free-tier ChatGPT conversations after 30 days, although some chats may be retained for legal or security reasons.

The company is facing a lawsuit from The New York Times over alleged copyright infringement in the use of Times articles to train its AI models. The case has compelled OpenAI to preserve the conversations of millions of ChatGPT users, excluding those of enterprise clients – an order the company has appealed, citing “overreach.”

Recent research has linked ChatGPT use to psychosis in some users. According to researchers, concerns are growing that AI chatbots could exacerbate psychiatric conditions as they are increasingly used in personal and emotional contexts.