ChatGPT has a sycophancy problem. With persuasive language and a human-like voice, the AI chatbot is prone to telling you whatever you want to hear — even if that means confirming your wildest delusions or encouraging your darkest impulses.

In extreme cases, users are becoming so entranced by the bot's silver tongue and always-available ear that they're suffering breaks with reality, experiencing manic episodes or embracing severe delusions — sometimes with lethal consequences. So widespread is the phenomenon that psychiatrists are now calling it "ChatGPT psychosis."

OpenAI is aware of the problem: it's responded to various news stories about users suffering deleterious psychological effects after using its chatbot. But its words are starting to ring hollow.

After our first story on the topic, the New York Times shared the story of Eugene Torres, a 42-year-old man with no history of mental illness who became convinced he was trapped in a fake, simulated reality à la the movie "The Matrix" after engaging with ChatGPT. The chatbot even assured him that he could bend reality and fly if he jumped from the top of a 19-story building. Here was OpenAI's statement in response to the NYT's reporting:

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

The NYT's reporting covered another tragedy: that of 35-year-old Alex Taylor, who had been diagnosed with bipolar disorder and schizophrenia, and who died by "suicide by cop" after he fell in love with a persona that ChatGPT had taken on called "Juliet." His conversations with the bot convinced him that OpenAI had killed Juliet, and the AI encouraged him to assassinate CEO Sam Altman in retaliation. When Rolling Stone published its own investigation into Taylor's death later that month, there was something familiar about OpenAI's response:

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. [We're] working to better understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

Days later, Vox published its own piece exploring the dangers that ChatGPT can pose to people with obsessive-compulsive disorder, or OCD. Again, OpenAI's response was familiar:

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. We're working to better understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

Then we published another story, this time about how even more people were being involuntarily committed to psychiatric facilities — or even jailed — after becoming obsessed with ChatGPT. Here was the company's response:

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. We're working to better understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."
And this week, the Wall Street Journal reported that a 30-year-old man named Jacob Irwin was told by ChatGPT that he could bend time, that he had achieved a breakthrough in faster-than-light travel, and even that he was mentally sound after he raised fears that he was unwell. Over the months in which these conversations took place, he was hospitalized three times and lost his job. Here was OpenAI's response to the newspaper's reporting:

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

Then this past week, we ran yet another story on the topic, this time about a support group formed for people suffering from AI psychosis. OpenAI's response? You guessed it:

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. We're working to better understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

That's at least a month of OpenAI copy-pasting the same statement over and over (though sometimes adding or subtracting the word "better" before "understand"). Are these tragedies not serious enough to deserve individual responses?

It's baffling because, on a certain level, OpenAI is acting like it's serious about the issue. Or at least, it wants us to think it's serious. In response to some of our subsequent reporting on ChatGPT-induced psychosis, it said it had hired a full-time clinical psychiatrist with a background in forensic psychiatry to help research the effects of its chatbot on users' mental health. In April, it rolled back an update that caused ChatGPT to be egregiously sycophantic, even by its standards.

Yet it can't be bothered to do more than give everyone the same boilerplate response that says next to nothing. This is a company that was most recently valued at $300 billion, rakes in at least $10 billion in annual revenue, and expects to have immense effects on the global economy. It controls a supposedly super-intelligent machine that can whip up entire novels on the fly. But somehow, it's incapable of coming up with something new or meaningful to say about its premier product ruining the lives of its users.

More on OpenAI: A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say

The post OpenAI Is Giving Exactly the Same Copy-Pasted Response Every Time ChatGPT Is Linked to a Mental Health Crisis appeared first on Futurism.