In a paper published earlier this month, OpenAI researchers said they'd found the reason why even the most powerful AI models still suffer from rampant "hallucinations," in which products like ChatGPT confidently make assertions that are factually false.
They found that the way we evaluate the output of large language models, like the ones driving ChatGPT, means they're "optimized to be good test-takers" and that "guessing when uncertain improves test performance."
In simple terms, the creators of AI incentivize them to guess rather than admit they don't know the answer — which might be a good strategy on an exam, but is outright dangerous when giving high-stakes advice about topics like medicine or law.
While OpenAI claimed in an accompanying blog post that "there is a straightforward fix" — tweaking evaluations to "penalize confident errors more than you penalize uncertainty and give partial credit for appropriate expressions of uncertainty" — one expert is warning that the strategy could collide with devastating business realities.
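To see why the proposed tweak changes a model's incentives, consider the expected score of guessing versus abstaining under the two grading schemes. The sketch below is purely illustrative — the specific penalty and partial-credit values are assumptions, not numbers from OpenAI's paper:

```python
# Illustrative sketch (not OpenAI's benchmark code): compare the expected
# score of guessing vs. answering "I don't know" under two grading schemes.

def expected_score(p_correct, abstain, wrong_penalty, abstain_credit):
    """Expected score for one question.
    p_correct: the model's chance of guessing right.
    abstain: whether the model answers "I don't know" instead of guessing.
    """
    if abstain:
        return abstain_credit
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

p = 0.3  # model is only 30% confident in its best guess

# Today's binary grading: wrong answers cost nothing beyond the missed point,
# and abstaining earns nothing -- so guessing always dominates.
binary_guess = expected_score(p, False, wrong_penalty=0.0, abstain_credit=0.0)
binary_abstain = expected_score(p, True, wrong_penalty=0.0, abstain_credit=0.0)

# The proposed grading (assumed values): confident errors are penalized and
# expressed uncertainty gets partial credit -- now abstaining wins when
# confidence is low.
fixed_guess = expected_score(p, False, wrong_penalty=1.0, abstain_credit=0.25)
fixed_abstain = expected_score(p, True, wrong_penalty=1.0, abstain_credit=0.25)

print(binary_guess > binary_abstain)  # guessing wins under binary grading
print(fixed_abstain > fixed_guess)    # abstaining wins under the fixed scheme
```

Under binary grading the low-confidence guess is worth 0.3 expected points versus 0 for abstaining; under the penalized scheme the same guess is worth −0.4 versus 0.25 for abstaining, flipping the incentive.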
In an essay for The Conversation, University of Sheffield lecturer and AI optimization expert Wei Xing argued that the AI industry wouldn't be economically incentivized to make these changes, as doing so could dramatically increase costs.
Worse yet, having an AI repeatedly admit it can't answer a prompt with a sufficient degree of confidence could deter users, who prefer a confident-sounding answer, even if it's ultimately incorrect.
Even if ChatGPT admitted it didn't know the answer just 30 percent of the time, users could quickly become frustrated and move on, Xing argued.
"Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly," the researcher wrote.
While there are "established methods for quantifying uncertainty," AI models could end up requiring "significantly more computation than today’s approach," he argued, "as they must evaluate multiple possible responses and estimate confidence levels."
"For a system processing millions of queries daily, this translates to dramatically higher operational costs," Xing wrote.