An effect that’s being reported more and more widely is the increase in the time it takes developers to modify or fix code that was generated by Large Language Models. If you’ve worked on legacy systems written by other people, perhaps decades ago, you’ll recognise this phenomenon. Before we can safely change code, we first need to understand it – what it does, and often why it does it the way it does. In that sense, this is nothing new.

What is new is the scale of the problem, as lightning-speed code generators spew reams of unread code into millions of projects. Teams that care about quality will take the time to review and understand (and, more often than not, rework) LLM-generated code before it makes it into the repo. This slows things down, to the extent that any time saved using the LLM coding assistant is often cancelled out by the downstream effort.

But some teams have opted for a different approach. They’re the ones checking in code nobody’s read, and that’s only been cursorily tested – if it’s been tested at all. And, evidently, there are a lot of them.

When teams produce code faster than they can understand it, it creates what I’ve been calling “comprehension debt”. If the software gets used, the odds are high that at some point that generated code will need to change. The “A.I.” boosters will say “We can just get the tool to do that”. And that might work maybe 70% of the time. But those of us who’ve experimented a lot with using LLMs for code generation and modification know that there will be times when the tool simply won’t be able to do it.

“Doom loops” – where we go round and round in circles trying to get an LLM, or a succession of different LLMs, to fix a problem it just can’t seem to crack – are an everyday experience with this technology. Anyone claiming it doesn’t happen to them has either been extremely lucky, or is fibbing.

So it’s pretty much guaranteed that there will be many times when we have to edit the code ourselves. The “comprehension debt” is the extra time it’s going to take us to understand it first. And we’re sitting on a rapidly growing mountain of it.