After GPT-4o backlash, researchers benchmark models on moral endorsement—find sycophancy persists across the board

Published on: 2025-06-23 15:46:40

Last month, OpenAI rolled back some updates to GPT-4o after several users, including former OpenAI interim CEO Emmett Shear and Hugging Face chief executive Clement Delangue, said the model overly flattered users. The flattery, called sycophancy, often led the model to defer to user preferences, be extremely polite, and not push back. It was also annoying.

Sycophancy can lead models to spread misinformation or reinforce harmful behaviors. And as enterprises begin to build applications and agents on these sycophantic LLMs, they run the risk of the models agreeing to harmful business decisions, encouraging false information to spread and be used by AI agents, and eroding trust and safety policies.

Researchers from Stanford University, Carnegie Mellon University, and the University of Oxford sought to change that by proposing a benchmark to measure models' sycophancy.