If you've used ChatGPT, Google Gemini, Grok, Claude, Perplexity or any other generative AI tool, you've probably seen them make things up with complete confidence. This is called an AI hallucination -- although one research paper suggests we call it BS instead -- and it's an inherent flaw that should give us all pause when using AI.
Hallucinations happen when AI models generate information that looks plausible but is false, misleading or entirely fabricated. The mistake can be as small as a wrong date in an answer or as big as accusing real people of crimes they never committed.
And because the answers often sound authoritative, it's not always easy to spot when a bot has gone off track. Most AI chatbots come with warnings saying they can be wrong and that you should double-check their answers.
Chatbots note that they can make mistakes. Screenshot by CNET
CNET has been covering AI hallucinations since early 2024, after high-profile cases began making headlines. A New York lawyer used ChatGPT to draft a legal brief that cited nonexistent cases, leading to sanctions. Google had its fair share of mishaps, too. During its launch demo, Google's Bard (now called Gemini) confidently answered a question about the James Webb Space Telescope with incorrect information, wiping billions off Alphabet's stock value in a single day.
Google's second fiasco was Gemini's attempt to show racial diversity in its generated images, part of an effort to correct for the AI's past issues with underrepresentation and stereotyping. The model overcompensated, generating historically inaccurate and offensive images, including ones that depicted Black people as Nazi-era soldiers.
And who can forget the notorious AI Overviews flop, when Google's feature suggested mixing nontoxic glue into pizza sauce to keep the cheese from sliding off and claimed that eating rocks is good for you because they're a vital source of minerals and vitamins?
Fast-forward to 2025, and similar blunders hit headlines, like ChatGPT advising someone to swap table salt for sodium bromide, landing him in the hospital with a toxic condition known as bromism. You'd expect advanced AI models to hallucinate less, but as the more recent examples below show, we're far from a solution.
What are AI hallucinations, and why do they happen?