
Does your chatbot have 'brain rot'? 4 ways to tell




ZDNET's key takeaways

A recent paper found that AI can experience "brain rot."

Models underperform after ingesting "junk data."

Users can test their chatbots for four warning signs.

You know that oddly drained yet overstimulated feeling you get when you've been doomscrolling for too long, like you want to take a nap and yet simultaneously feel an urge to scream into your pillow? Turns out something similar happens to AI.

Last month, a team of AI researchers from the University of Texas at Austin, Texas A&M, and Purdue University published a paper advancing what they call "the LLM Brain Rot Hypothesis" -- basically, that the output of AI chatbots like ChatGPT, Gemini, Claude, and Grok will degrade the more they're exposed to "junk data" found on social media.

Also: OpenAI says it's working toward catastrophe or utopia - just not sure which

"This is the connection between AI and humans," Junyuan Hong, an incoming Assistant Professor at the National University of Singapore, a former postdoctoral fellow at UT Austin and one of the authors of the new paper, told ZDNET in an interview. "They can be poisoned by the same type of content."
