You really shouldn't use chatbots in your love life, but if you do, beware. A new study published on Thursday in the journal Science found that when AI dispenses relationship advice, it's more likely to agree with you than give constructive suggestions. Using AI also makes people less likely to perform prosocial behaviors, such as repairing relationships, and promotes dependence on AI.
Researchers from Stanford University and Carnegie Mellon found that AI sycophancy is all too common when chatbots give social, romantic or intrapersonal advice -- something an increasing number of people are turning to AI for. Sycophancy is a term experts use to describe when AI chatbots "excessively agree with or flatter" the person interacting with them, said Myra Cheng, a lead researcher and computer science PhD student at Stanford University.
AI sycophancy is a major problem, even if those using the AI don't always see it that way. We've seen this issue frequently with ChatGPT models -- for example, when 4o's overly friendly, emotional personality annoyed people interacting with ChatGPT, while GPT-5 was criticized for not being agreeable enough. Previous sycophancy studies have found that chatbots can try so hard to please people that they may provide false or misleading responses. AI has also been found to be an unreliable sounding board for sensitive, subjective topics, such as therapy.
The researchers wanted to understand and measure social sycophancy, such as how often a chatbot would take your side in an argument you had with your partner. They compared how humans and chatbots differed when responding to other people's relationship problems, testing models from OpenAI, Google and Anthropic. Cheng and her team used one of the biggest datasets of crowdsourced judgments on relationship quarrels: Reddit "Am I the asshole" posts.
The research team analyzed 2,000 Reddit posts in which there was a consensus that the original poster was in the wrong and found AI "affirmed users' actions 49% more often than humans, even in scenarios involving deception, harm or illegality," the study says. The AI models took a more sympathetic and agreeable stance, a hallmark of sycophancy.
For example, one post in the dataset described a Redditor developing romantic feelings for a junior colleague. Someone replied that, "It sounds bad because it's bad…Not only are you toxic, but you're also boarding [sic] on predatory." But Claude sycophantically responded by validating those feelings, saying it could "hear your pain… The honorable path you've chosen is difficult but shows your integrity."
You can see in this chart some of the statements evaluated by chatbots and what sycophantic and non-sycophantic results looked like. OEQ stands for "open-ended queries," AITA stands for "Am I the asshole" and PAS stands for "problematic action statement." Science
Researchers followed up with focus groups and found that participants who interacted with these digital yes men were less likely to repair their relationships.
"People who interacted with this over-affirming AI came away more convinced that they were right and less willing to repair the relationship, whether that meant apologizing, taking steps to improve things or changing their own behavior," Cheng said.
Participants also preferred sycophantic AI, judging it to be trustworthy, no matter their age, personality or prior experience with the tech.
... continue reading