
OpenAI's fix for hallucinations is simpler than you think

ZDNET's key takeaways

OpenAI says AI hallucination stems from flawed evaluation methods.

Models are trained to guess rather than admit ignorance.

The company suggests revising how models are trained.

Even the biggest and most advanced generative AI models occasionally hallucinate, or generate inaccurate information presented as fact. Now, OpenAI claims to understand why -- while offering a possible solution.

In a research paper published last week, a team of researchers from the company argued that hallucination stems not from the quality of a model's training data, but from flawed evaluation incentives: the benchmarks used widely throughout the industry reward guessing over admitting uncertainty.

"Language models are optimized to be good test-takers, and guessing when uncertain improves test performance," the authors write in the paper.
