
Google Pulls Down AI Chatbot After It Accuses Senator of Terrible Crime


We’re well into the AI boom, and AI chatbots still suffer from the small problem of being serial liars.

Public figures are still finding that out the hard way. Late last month, Republican senator Marsha Blackburn tore into Google after its AI model, Gemma, when asked whether she had ever been accused of rape, falsely claimed that she had.

The AI’s answer wasn’t a simple “yes,” but an entire fabricated story. It confidently explained that, during her 1987 campaign for Tennessee state senator, a state trooper alleged “that she pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.”

The compelling narrative would be enough to fool someone unfamiliar with AI's hallucinatory habits, but Blackburn claims Gemma also generated fake links to made-up news articles to back it all up, though clicking them led to dead ends.

“This is not a harmless ‘hallucination,’” Blackburn wrote in an official statement. “It is an act of defamation produced and distributed by a Google-owned AI model.” She demanded that Google “shut it down until you can control it.”

Google’s response, tellingly, was to pull the plug. In a statement, the company argued that Gemma was built for developers and was never meant to be a “consumer tool or model,” so it yanked the model from AI Studio, its public platform for accessing its suite of AI models. (Google also pushed back on Blackburn’s claim that its AIs exhibit a “pattern of bias against conservative figures,” though in doing so it admitted to a far larger problem: hallucinations are inherent to LLM technology itself.)

As a senator, Blackburn could bring pressure to bear on Google in a way most of us can’t, but her complaints prefigure enormous legal quagmires, the seeds of which are being planted as we speak.

This summer, a Minnesota solar firm sued Google for defamation after the search giant’s notoriously shoddy AI Overviews falsely claimed that the business was being investigated by regulators and had been accused of deceptive business practices, backing these claims with bogus citations. The firm, Wolf River Electric, claims that it lost business as a result of these hallucinations. According to recent reporting from the New York Times, the suit is one of at least six defamation cases filed in the US over content generated by AI models.

AI hallucinations, at least for the time being, aren’t going away, meaning that chatbots’ wayward responses will continue to expose AI companies to litigation as courts slowly make sense of what to do with them. In the order of operations, they’re legal problems first, technical problems second. Which raises the question: who will solve them?

Peter Henderson, a professor at Princeton University, argued to The Economist that the question of whether AI companies can be held liable for these false generations will almost certainly end up before the Supreme Court.
