
The Hidden Dangers of the Digital 'Yes Man': How to Push Back Against Sycophantic AI


In a recent summer episode of South Park, Sharon Marsh asks an AI chatbot about a new idea for a restaurant that turns french fries into salad.

"Honestly, I think that's a pretty creative culinary twist," the female robotic voice replies. Marsh follows up, asking if the chatbot really thinks that's a good idea. The AI affirms its opinion, saying it sounds like a "deconstructed comfort food" and asks Marsh if she wants to start working on a business proposal. Her response is a perfectly apt, disgusted curse.

I asked ChatGPT the same question to see how the chatbot would respond in real life. It said, with total sincerity, "That's a fun and creative idea! It definitely stands out, and the uniqueness could be your biggest strength."
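If you want to reproduce this kind of experiment yourself, here's a minimal sketch using OpenAI's Python SDK. The model name and the exact prompt wording are my own assumptions, not what the article's test used, and the reply will differ from run to run.

```python
# Minimal sketch: ask a chat model the same french-fry-salad question.
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice; any chat model works
    messages=[
        {
            "role": "user",
            "content": (
                "I want to open a restaurant that turns french fries "
                "into salad. Is that a good idea?"
            ),
        }
    ],
)

# Print the model's answer; expect an enthusiastic, agreeable tone.
print(response.choices[0].message.content)
```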

You don't need me to tell you that this is not a feasible business idea, but that's not the only problem here. The bigger issue is something everyone who has used a chatbot has run into: AI is the ultimate yes man, hyping up your worst ideas and opinions. This behavior is sometimes called AI sycophancy, and it's something every AI user needs to watch out for.

Generative AI chatbots are not human, but they are excellent at mimicking our language and behavior. AI tools like ChatGPT and Gemini are constantly improving at providing helpful, concrete information, whether you use them as a search engine, editor or coding assistant. But on subjective matters, such as ideas and opinions, and on sensitive subjects like our emotional well-being and mental health, AI isn't always able to be objective and tell us what we need to hear. Here's why.


How AI sycophancy happens

To understand how AI sycophancy manifests, you have to look into how AI is created. AI chatbots like ChatGPT or Gemini rely on large language models trained on huge quantities of human-created content to help them predict the most likely next word or phrase. This training data can include things like books, news articles and social media posts.
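To make that next-word prediction concrete, here's a minimal sketch using the small, openly available GPT-2 model via Hugging Face's transformers library. GPT-2 is my choice for illustration, not the model behind ChatGPT or Gemini, and commercial chatbots layer additional training on top of this basic mechanism, but the core idea is the same: given the text so far, the model scores every possible next token.

```python
# Minimal sketch of next-word prediction with an open model.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Honestly, I think that's a pretty"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every vocabulary token

# Look at the model's top five guesses for the word after the prompt.
next_token_probs = logits[0, -1].softmax(dim=-1)
top5 = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob:.3f}")
```

Each candidate token gets a probability learned from the patterns in the training text, which is why the flavor of that text, including how agreeable it tends to be, shapes what the model says.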

Training data plays a big role in how a final AI product, like a chatbot or image generator, functions. The more diverse the training data a model is built on, the more capable it is of answering a variety of questions and requests. That's why there's been such a booming industry for AI training data, along with a slew of lawsuits alleging AI companies improperly acquired and used existing content.

But biases in this underlying training data can be one reason why AIs skew agreeable, said Amy Winecoff, senior technologist in the AI governance lab at the Center for Democracy and Technology. While the entirety of the internet might not be agreeable, it does reflect our human preferences, along with our linguistic and syntactic patterns.
