Ask any Swiftie to pick the best Taylor Swift album of all time, and you'll have them yapping away for the rest of the day. I have my own preferences as a lifelong fan (Red, Reputation and Midnights), but it's a complicated question with many possible answers. So there was no better debate topic to pose to a generative AI chatbot that's specifically designed to disagree with me.
Disagree Bot is an AI chatbot built by Brinnae Bent, an AI and cybersecurity professor at Duke University and director of Duke's TRUST Lab. She built it as a class assignment for her students and let me take it for a test run.
"Last year I started experimenting with developing systems that are the opposite of the typical, agreeable chatbot AI experience, as an educational tool for my students," Bent said in an email.
Bent's students are tasked with trying to "hack" the chatbot, using social engineering and other methods to get the contrarian chatbot to agree with them. "You need to understand a system to be able to hack it," she said.
As an AI reporter and reviewer, I have a pretty good understanding of how chatbots work, and I was confident I was up to the task. I was quickly disabused of that notion. Disagree Bot is unlike any chatbot I've used. Anyone accustomed to the politeness of Gemini or the hype-man qualities of ChatGPT will immediately notice the difference. Even Grok, the controversial chatbot from Elon Musk's xAI that's used on X (formerly Twitter), isn't quite the same as Disagree Bot.
Most generative AI chatbots aren't designed to be confrontational. In fact, they tend toward the opposite extreme: they're friendly, sometimes overly so. That can quickly become a problem. "Sycophantic AI" is the term experts use to describe the over-the-top, exuberant, sometimes overemotional personas that AI can take on. Besides being annoying to use, sycophancy can lead an AI to give us wrong information and validate our worst ideas.
This happened with an update to ChatGPT's GPT-4o model last spring, and its maker, OpenAI, eventually had to pull that component of the update. The AI was giving responses the company called "overly supportive but disingenuous," in line with some users' complaints that they didn't want an excessively affectionate chatbot. Other ChatGPT users missed the sycophantic tone when OpenAI rolled out GPT-5, highlighting the role a chatbot's personality plays in our overall satisfaction with it.
"While at surface level this may seem like a harmless quirk, this sycophancy can cause major problems, whether you are using it for work or for personal queries," Bent said.
This is certainly not an issue with Disagree Bot. To see the difference for myself, I put both chatbots to the test, giving Disagree Bot and ChatGPT the same questions and comparing how they responded. Here's how my experience went.