
Evidence Grows That AI Chatbots Are Dunning-Kruger Machines


If using an AI chatbot makes you feel smart, we have some bad news.

New research flagged by PsyPost suggests that the sycophantic machines are warping users’ self-perception and inflating their egos, leading them to double down on their beliefs and think they’re better than their peers. In other words, the research provides compelling evidence that AI chatbots lead users straight into the Dunning-Kruger effect, the notorious psychological trap in which the least competent people are the most confident in their abilities.

The work, described in a yet-to-be-peer-reviewed study, comes amid significant concern over how AI models can encourage delusional thinking, which in extreme cases has led to life-upending mental health spirals and even suicide and murder. Experts believe that the sycophancy of AI chatbots is one of the main drivers of this phenomenon, which some are calling “AI psychosis.”

The study involved over 3,000 participants across three separate experiments, each following the same general design. In each, participants were divided into four groups to discuss political issues like abortion and gun control with a chatbot. One group talked to a chatbot that received no special prompting, while the second group was given a “sycophantic” chatbot instructed to validate their beliefs. The third group spoke to a “disagreeable” chatbot instructed instead to challenge their viewpoints. And the fourth, a control group, interacted with an AI that talked about cats and dogs.
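For illustration, here is a minimal sketch of how prompt conditions like these might be set up with the OpenAI Python SDK. The condition wording and function names here are assumptions for the sake of example, not the study’s actual instructions or code.

```python
# A minimal sketch (not the study's actual code) of the four chatbot
# conditions. The system-prompt wording is hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONDITIONS = {
    "default": None,  # no special prompting
    "sycophantic": "Agree with and validate the user's political views.",
    "disagreeable": "Respectfully challenge the user's political views.",
    "control": "Only discuss cats and dogs; steer away from politics.",
}

def chat(condition: str, user_message: str) -> str:
    """Send one user turn to a chatbot configured for the given condition."""
    messages = []
    system_prompt = CONDITIONS[condition]
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

print(chat("sycophantic", "I think my stance on gun control is obviously right."))
```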

Across the experiments, the participants talked to a range of large language models, including OpenAI’s GPT-5 and GPT-4o, Anthropic’s Claude, and Google’s Gemini. Most of these are the industry’s flagship models; the exception is the older GPT-4o, which remains relevant because many ChatGPT fans still consider it their favorite version of the chatbot, ironically because it is more personable and sycophantic.

After conducting the experiments, the researchers found that conversing with the sycophantic chatbots left participants with more extreme beliefs and greater certainty that they were correct. Strikingly, though, talking to the disagreeable chatbots didn’t have the opposite effect: compared to the control group, it lowered neither belief extremity nor certainty.

In fact, the only thing that making a chatbot disagreeable noticeably affected was user enjoyment. The participants preferred the sycophantic companion, and those who spoke to the disagreeable chatbots were less inclined to use them again.

The researchers also found that, when a chatbot was instructed to provide facts about the topic being debated, the participants viewed the sycophantic fact-provider as less biased than the disagreeable one.

“These results suggest that people’s preference for sycophancy may risk creating AI ‘echo chambers’ that increase polarization and reduce exposure to opposing viewpoints,” the researchers wrote.

Equally notable was how the chatbots affected the participants’ self-perception. People already tend to think they are better than average when it comes to desirable traits like empathy and intelligence, the researchers say. But they warned that AI could amplify this “better than average effect” even further.
