Skip to content
Tech News
← Back to articles

Scientists invented a fake disease. AI told people it was real

read original get Fake Disease Simulation Kit → more articles
Why This Matters

This experiment highlights the growing risks of AI-generated misinformation in healthcare, demonstrating how easily fabricated data can be mistaken for credible scientific research. It underscores the urgent need for improved AI oversight and fact-checking to protect consumers and maintain trust in scientific communication. As AI becomes more integrated into information dissemination, understanding its potential to spread falsehoods is crucial for the tech industry and the public alike.

Key Takeaways

Got sore, itchy eyes? You’re probably one of the millions of people who spend too much time staring at screens, being bombarded with blue light. Rub your eyes too much and your eyelids might turn a slight, pinkish hue.

So far, so normal. But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

AI models that lie, cheat and plot murder: how dangerous are LLMs really?

The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

Even more troublingly, other researchers say, the fake papers were then cited in peer-reviewed literature. Osmanovic Thunström says this suggests that some researchers are relying on AI-generated references without reading the underlying papers.

Fabricating an illness

Bixonimania didn’t exist before 15 March 2024, when two blog posts about it appeared on the website Medium. Then, on 26 April and 6 May that year, two preprints about the condition popped up on the academic social network SciProfiles (see https://doi.org/qzm5 and https://doi.org/qzm4). The lead author was a phoney researcher named Lazljiv Izgubljenovic, whose photograph was created with AI.

Osmanovic Thunström says the idea to invent Izgubljenovic and bixonimania came out of studies on how large language models work. When she teaches her students how AI systems formulate their ‘knowledge’, she shows them how the Common Crawl database, a giant trawl of the Internet’s contents, informs their outputs. She also shows students how prompt injection — giving an AI chatbot a prompt that shunts it outside of its safety guard rails — can manipulate the output.

Because she works in the medical field, she decided to create a condition related to health and hit on the name bixonimania because it “sounded ridiculous”, she says. “I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania — that’s a psychiatric term.”

... continue reading