Insiders at the Food and Drug Administration are ringing alarm bells over the agency's use of an AI to fast-track drug approvals.
As CNN reports, six current and former FDA officials are warning that the AI, dubbed Elsa and unveiled just weeks ago, is "hallucinating" completely made-up studies.
It's a terrifying possibility: in a worst-case scenario, dangerous drugs could mistakenly receive the FDA's stamp of approval.
It's part of a high-stakes and greatly accelerated effort by the US government to embrace deeply flawed AI tech. Elsa, much like other currently available AI chatbots, often makes stuff up.
"Anything that you don’t have time to double-check is unreliable," one FDA employee told CNN. "It hallucinates confidently."
Health and Human Services secretary Robert F. Kennedy Jr., a prominent figure in the anti-vaccine movement who has no relevant credentials for the job and frequently promotes discredited conspiracy theories, lauded the administration's embrace of AI as a sign that the "AI revolution has arrived."
"We are using this technology already at HHS to manage health care data, perfectly securely, and to increase the speed of drug approvals," he told Congress last month.
But reality is rapidly catching up — which shouldn't be a surprise to anybody who's used a large language model-based tool before. Given the tech's track record so far, the medical community's embrace of AI has already been mired in controversy, with critics pointing out the risks of overrelying on the tech.
Instead of saving scientists time, Elsa is doing the exact opposite, echoing a common theme in companies' already backfiring attempts to shoehorn the tech into every aspect of their operations.