Conspiracy Theorists Are Creating Special AIs to Agree With Their Bizarre Delusions

Conspiracy theorists are using AI chatbots not only to convince themselves of their harebrained beliefs, but also to recruit other users on social media.

As independent Australian news site Crikey reports, conspiracy theorists are having extensive conversations with AI chatbots to "prove" their beliefs. Then, they post the transcripts and videos on social media as "proof" for others.

According to the outlet's fascinating reporting, there are already several bots specifically trained on harebrained conspiracy theories, including a custom bot designed to convince parents not to vaccinate their children.

The news highlights a troubling trend, with countless ChatGPT users developing bizarre delusions and even spiraling into severe mental health crises, as we reported last week. Experts have warned that AI chatbots are designed to be incredibly sycophantic, predisposing them to agree with users even when doing so is clearly harmful.

Much like users suffering from delusions of spiritual awakening, messianic complexes, and boundless paranoia, conspiracy theorists are finding the perfect conversational partner in tools like ChatGPT.

Since these chatbots were trained on the open web, an enormous dataset that includes unfounded conspiracy theories such as the belief that vaccines cause autism, they can easily be coaxed into furthering those theories.

As Crikey reports, one chatbot called Neo-LLM was trained by a Texan anti-vaxxer using over 100,000 dubious articles from the far-right conspiracy theory news website Natural News.

It's unclear how many users have downloaded the chatbot, but promotional videos have garnered tens of thousands of views.

In short, it's an alarming trend that shows the dangers of powerful AI chatbot tech falling into the wrong hands. In particular, people suffering from mental health issues can be convinced they're talking to a real authority, rather than a parroting language model that continuously calculates the probability of the next word.

That kind of delusion can have devastating consequences. As the New York Times reported last week, a 35-year-old man — who had previously been diagnosed with bipolar disorder and schizophrenia before becoming obsessed with ChatGPT — was shot and killed by police after he charged at them with a knife following a mental health crisis centered on the bot.
