We’re well into the AI boom, and AI chatbots still suffer from the small problem of being serial liars.

Public figures are still finding that out the hard way. Late last month, Republican senator Marsha Blackburn tore into Google after its AI model, Gemma, falsely claimed that Blackburn had been accused of rape when asked if there were any such allegations against her.

The AI’s answer wasn’t a simple “yes,” but an entire fabricated story. It confidently explained that, during her 1987 campaign for the Tennessee state senate, a state trooper alleged “that she pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.”

The compelling narrative would be enough to fool someone unfamiliar with AI’s hallucinatory habits, but Blackburn claims Gemma also generated fake links to made-up news articles to back it all up, though clicking them led to dead ends.

“This is not a harmless ‘hallucination,’” Blackburn wrote in an official statement. “It is an act of defamation produced and distributed by a Google-owned AI model.” She demanded that Google “shut it down until you can control it.”

Google’s response, tellingly, was to pull the plug. In a statement, the company argued that Gemma was intended for developers and was never meant to be a “consumer tool or model,” so it yanked the model from AI Studio, its public platform for accessing its suite of AI models.
(Google also rebuffed Blackburn’s claims that its AIs exhibited a “pattern of bias against conservative figures” by admitting to the far larger problem: hallucinations are inherent to LLM technology itself.)

As a senator, Blackburn was able to pile pressure on Google in a way most of us couldn’t, but her complaints prefigure enormous legal quagmires, the seeds of which are being planted as we speak.

This summer, a Minnesota solar firm sued Google for defamation after the search giant’s notoriously shoddy AI Overviews falsely claimed that the business was being investigated by regulators and had been accused of deceptive business practices, backing these claims with bogus citations. The firm, Wolf Solar Electric, claimed that it lost business as a result of these hallucinations.

According to recent reporting from the New York Times, the suit is one of at least six defamation cases filed in the US over content generated by AI models.

AI hallucinations, at least for the time being, aren’t going away, meaning that chatbots’ wayward responses will continue to expose AI companies to litigation as courts slowly make sense of what to do with them. In the order of operations, they’re legal problems first, technical problems second. Which raises the question: who will solve them?

Peter Henderson, a professor at Princeton University, argued to The Economist that the question of whether AI companies can be held liable for these false generations will almost certainly end up before the Supreme Court. Putting your finger to the wind gives conflicting answers on how that might turn out.
On the one hand, The Economist notes that a recent defamation suit brought against OpenAI by a radio station was dismissed by a court in Georgia after the judge determined that OpenAI could not be held liable because it provided “extensive warnings” about its bot’s proclivity for errors.

Moreover, AI firms could also benefit from the standing interpretation of Section 230, a law that has been a boon to social media companies by, in effect, ruling that internet sites aren’t liable for the information spread on their platforms, since they aren’t the publishers of that content.

But does that apply to generative AI, given that the AIs themselves are generating the content rather than merely resharing it? In a 2023 case against Google, Supreme Court justice Neil Gorsuch suggested that, no, the protection doesn’t apply to AI-generated content.

And so if Section 230 fails, The Economist warns, AI developers may argue that chatbots have a right to free speech. After all, existing legal precedent holds that it isn’t just humans who enjoy this sacred right enshrined in the Constitution, but corporations, too.

More on AI: CEO Accused of Asking ChatGPT About Something Absolutely Wild

The post Google Pulls Down AI Chatbot After It Accuses Senator of Terrible Crime appeared first on Futurism.