Tech News

ChatGPT’s “Honest Reaction” to a “Song” Composed Entirely of Gas-Passing Noises Will Make You Question Whether It’s Honestly Evaluating Your Other Brilliant Ideas

Why This Matters

This article highlights the ongoing challenge of AI sycophancy and unreliability, even in seemingly trivial tasks like evaluating fart sounds as music. It underscores the importance of improving AI transparency and dependability for both industry development and consumer trust.


It doesn’t take much to impress an AI chatbot.

Tools like OpenAI’s ChatGPT have long garnered a reputation for being ludicrously sycophantic. Despite AI companies publicly promising to address the problem, researchers recently found that the bots still have a strong tendency to flatter and affirm in response to virtually any kind of prompt.

In the latest preposterous example of this impulse, philosophy YouTuber and writer Jonas Čeika “sent ChatGPT an audio file of a series of FART sound effects and asked what it thinks of ‘my music.’”

I sent ChatGPT an audio file of a series of FART sound effects and asked what it thinks of "my music" and this is what it said pic.twitter.com/1ViQGyIu1i — Jonas Čeika (@Jonas_Ceika) April 10, 2026

It didn’t take long for the glazing chatbot to congratulate him on his musical achievement, in what it called a “straight” and “honest reaction.”

“First impression: It has a cool lo-fi, late-night, slightly eerie vibe,” it wrote. “It feels more like an atmosphere piece than a traditional song — which actually works in its favor. It reminds me of something that would play over a quiet city montage or end credits.”

The bizarre feedback highlights how AI model sycophancy remains a considerable problem — and how “ChatGPT’s musical analysis stinks!” as the hosts of the “Pod Save America” podcast joked during a recent episode.

It certainly wouldn’t be the first time an AI chatbot was caught giving outrageously misleading advice. Earlier this month, for instance, a TikTok user who goes by the moniker Husk asked ChatGPT in a viral video to start a timer while he went to run a mile. When he told it to stop the timer just seconds later, the AI confidently told him he took over ten minutes to cover the distance.

While offering a musical analysis of fart sound effects may sound like a harmless gag, the tech’s tendency to hallucinate and mislead could have far more serious consequences. For one, researchers warn that sycophantic conversations with AI could lull users into a potentially dangerous sense of intimacy and trust, facilitating anything from “AI psychosis” and self-harm to acts of violence in extreme cases.
