

LLMs can be absolutely exhausting

15 Mar, 2026

Some days I get in bed after a tortuous 4-5 hour session working with Claude or Codex, wondering what the heck happened. It's easy to blame the model; there are so many options to choose from:

They're dumbing down the model to save money. Context rot! Codex/Claude Code/[insert harness] is getting bloated.

It's not uncommon for me to come back to the problem the next day, my own context window cleared from rest, and find a fast and fulfilling path forward with the help of the LLM. What's going on?

I'm tired & experiments are too slow

As I get more tired, the quality of my prompts degrades

This one seems pretty obvious. If I'm mentally fatigued, I write worse prompts, and because of that, the AI does a worse job. Here's an example of what happens when I'm really tired: I kick off a somewhat meaty prompt (after 30% of the context window was already spent aligning with the AI on the problem), realize right after submitting that I missed some key context, interrupt the LLM, provide the context, and then have it proceed. Without a doubt, interrupting Claude Code or "steering" in Codex leads to worse outcomes.

Feedback loop is too slow and context is bloated

Some of the work I'm doing right now requires parsing some large files. There are bugs in that parsing logic that I'm trying to work through with the LLM. The problem is, every tweak requires re-parsing, and it's a slow process. I liken it to a slot machine that takes 10 minutes to spin. To add insult to injury, some of these tasks take quite a bit of context to get rolling on a new experiment, and by the end of the parsing job, the LLM is 2% away from compaction. That leaves me with either a very dumb AI or an AI pretending to know what happened in the experiment it just completed.
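One habit that can shorten a loop like this (a sketch of my own, not something from the article; the file names and line budget are assumptions) is to debug the parsing logic against a small slice of the input first, and only spin the ten-minute slot machine on the full file once the slice parses cleanly:

```python
from pathlib import Path

def make_sample(src: Path, dst: Path, max_lines: int = 1000) -> Path:
    """Write the first max_lines lines of a large input file to dst,
    so each parser tweak can be exercised in seconds, not minutes."""
    with src.open() as fin, dst.open("w") as fout:
        for i, line in enumerate(fin):
            if i >= max_lines:
                break
            fout.write(line)
    return dst
```

Besides speeding up the loop, a small sample keeps each failed attempt's output short, which in turn keeps the LLM's context window from bloating toward compaction.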

... continue reading