Your AI assistant isn't confused; it just wants to agree with you

This behavior is known in research circles as sycophancy, Olson explains, referring to the well-documented tendency of large language models to agree with users rather than assert correct but potentially unpopular answers.