ChatGPT accuses professor of harassing a student on a trip to Alaska. But he’s never taught at the school it named or been to Alaska

Jonathan Turley teaches law at George Washington University. Earlier this year, he got some shocking news: ChatGPT, the popular AI chatbot, was telling people he had sexually harassed a student. The AI said it happened on a school trip to Alaska. But here’s the thing. None of it was real.

According to the New York Post, a UCLA law professor named Eugene Volokh discovered the problem while testing ChatGPT. He asked the bot to list examples of law professors who had been accused of harassment. ChatGPT gave him five names, and Turley was one of them. The bot said Turley taught at Georgetown University Law Center and harassed a student during a trip organized by the school. ChatGPT claimed Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska. The bot even said this information came from a Washington Post article published in March 2018.

But Turley says everything about this story is wrong. He never worked at Georgetown. He never went to Alaska with any students. The Washington Post never wrote that article. And nobody has ever accused him of harassment.

When AI makes stuff up, real people get hurt

Turley told reporters the fake accusation really scared him. “First, I have never taught at Georgetown University,” the aghast lawyer declared. “Second, there is no such Washington Post article.” He added, “Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student and I’ve never been accused of sexual harassment or assault.”

Other people have run into similar problems with ChatGPT. A radio host from Georgia named Mark Walters sued OpenAI, the company behind ChatGPT, after the bot falsely said he stole money from a gun rights group. His lawsuit was one of the first attempts to hold an AI company responsible for spreading lies. OpenAI said ChatGPT warns users that it might not always be accurate, but lawyers still disagree about whether that’s enough protection.

“…I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChapGPT relied on a cited Post article that was never written and quotes a statement that was…” — Jonathan Turley (@JonathanTurley) April 6, 2023

What happened to Turley shows a big problem with AI right now. These chatbots can make up completely false stories but present them in a way that sounds real and believable. They cite fake sources and give specific details that make the lies seem true. For someone like Turley, whose career depends on his reputation, this kind of false story can do real damage. He pointed out that once a lie like this gets online, it can spread to thousands of websites before the person even finds out about it.

OpenAI says it knows AI hallucinations are a problem, and the company is trying to fix it. But what happened to Turley proves that these systems can still hurt real people by spreading complete lies about their lives. ChatGPT has been connected to other troubling incidents as well, making people wonder whether the technology is safe to use without better safeguards.