ZDNET's key takeaways
AI hallucinations persist, but accuracy is improving across major tools.
Simple questions still expose surprising and inconsistent AI errors.
Always verify AI answers, especially for facts, images, and legal info.
One of the most frustrating flaws of today's generative AI tools is that they simply get the facts wrong. AIs can hallucinate, meaning the information they deliver contains factual mistakes or other errors.
Typically, these mistakes take the form of made-up details that appear when the AI can't otherwise answer a question. In those instances, it has to devise some type of response, even if the information is wrong. Sometimes you can spot an obvious mistake; other times, you may be completely unaware of the errors.
Also: Stop saying AI hallucinates - it doesn't. And the mischaracterization is dangerous
I wanted to see which AI tools fared best at providing accurate and reliable answers. To find out, I tested several of the leading AIs: ChatGPT, Google Gemini, Microsoft Copilot, Claude AI, Meta AI, and Grok AI.