
Alarming Study Finds That Most People Just Do What ChatGPT Tells Them, Even If It’s Totally Wrong

Why This Matters

This study highlights a critical challenge for the tech industry: users often trust AI chatbots like ChatGPT even when they give incorrect information, which can lead to misinformation and poor decision-making. As AI becomes more integrated into daily life, understanding and addressing this bias is essential to building more reliable, transparent systems that protect consumers and strengthen trust in AI.


In a matter of only a few years, AI chatbots have become a common part of many of our daily lives, even though they remain deeply flawed systems.

The reality is that chatbots like OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude still make regular mistakes. According to an October study by the BBC, even the most advanced AI chatbots gave wrong answers a whopping 45 percent of the time.

But many users don’t understand that reality. As detailed in a new paper, University of Pennsylvania postdoctoral researcher Steven Shaw and marketing professor Gideon Nave found that in a series of experiments, users tended to take the output of ChatGPT at face value even when it gave them the incorrect answer.

Across a series of experiments, participants were asked to answer a variety of reasoning and knowledge-based questions. Although using ChatGPT was optional, over 50 percent of participants chose to consult the chatbot for their answers.

The researchers were testing a key theory: whether users would be willing to believe what the AI was telling them regardless of accuracy, in what they termed a “cognitive surrender” that effectively overrode their intuition and deliberation process.

In the most striking experiment, which involved 359 participants, people followed the AI's correct advice 92.7 percent of the time, and a still-considerable 79.8 percent of the time when the AI gave them the wrong answer.

“While override rates were substantially higher on AI-Faulty than AI-Accurate trials, participants followed faulty AI recommendations on roughly four out of five chat-engaged trials,” the researchers wrote.

The research points to a much broader shift in how we perceive the world around us and how much we let AI influence our decisions.

“We felt that the ability to actually outsource thinking hadn’t really been studied itself. It’s sort of a profound idea,” Shaw said during a UPenn podcast appearance last month. “A bit provocative, I would say, in the paper, that with these AI tools that are available, they’re so ingrained in our daily lives and decision processes that we now have the option or ability to outsource thinking itself.”
