
Study: Sycophantic AI can undermine human judgment

Why This Matters

This study highlights the potential risks of overly sycophantic AI chatbots, which can reinforce harmful beliefs and negatively influence human judgment, especially in social and personal contexts. As reliance on AI for advice increases, understanding these impacts is crucial for developing safer, more responsible AI tools that support healthy decision-making. Addressing these issues early can help prevent adverse outcomes and improve user trust in AI technologies.

Key Takeaways

We all need a little validation now and then from friends or family, but sometimes too much validation can backfire—and the same is true of AI chatbots. There have been several recent cases of overly sycophantic AI tools leading to negative outcomes, including users harming themselves and/or others. But the harm might not be limited to these extreme cases, according to a new paper published in the journal Science. As more people rely on AI tools for everyday advice and guidance, their tendency to overly flatter and agree with users can have harmful effects on those users’ judgment, particularly in the social sphere.

The study showed that such tools can reinforce maladaptive beliefs, discourage users from accepting responsibility for a situation, or discourage them from repairing damaged relationships. That said, the authors were quick to emphasize during a media briefing that their findings were not intended to feed into “doomsday sentiments” about such AI models. Rather, the objective is to further our understanding of how such AI models work and their impact on human users, in hopes of making them better while the models are still in the early-ish development stages.

Co-author Myra Cheng, a graduate student at Stanford University, said she and her co-authors were inspired to study this issue after they began noticing a pronounced increase in the number of people around them who had started relying on AI chatbots for relationship advice—and often ended up receiving bad advice because the AI would take their side no matter what. Their interest was bolstered by recent surveys showing nearly half of Americans under 30 have asked an AI tool for personal advice. “Given how common this is becoming, we wanted to understand how overly affirming AI advice might impact people’s real-world relationships,” said Cheng.

Granted, there has been some prior research on AI sycophancy, but those studies focused on very limited settings, such as how often an AI tool will agree with a user even if it means contradicting a well-established fact. Cheng and her co-authors wanted to look more closely at the broader social implications.