Dividing labor with LLMs. As LLMs get better at writing code, it seems inevitable that there will be less work for human programmers. Thomas Dohmke is right that low-level coding skills will matter less and that “the future belongs to developers who can model systems, anticipate edge cases, and translate ambiguity into structure—skills that AI can’t automate.”
Dohmke says “We need to teach abstraction, decomposition, and specification not just as pre-coding steps, but as the new coding.” As someone who has been teaching these things as the core of coding for a few decades now, influenced by colleagues and my own teachers, I can’t disagree with that.
But this “division of labor” viewpoint, in which the human makes the big, high-level decisions and the LLM fills in the details, may not be the best way to think about AI in coding. First, it ignores the fact that LLM capabilities will undoubtedly improve, so the line between what LLMs can do and what must be left to a human coder is constantly shifting. Second, it assumes that all tasks at a particular level are alike, when in fact some high-level design might be amenable to LLMs, and some low-level coding may require human skills.
Back to basics. Instead, we might ask: how should we build software so that humans and LLMs can work together more effectively? Most observers realize that the advent of LLMs is going to produce radical changes in software development. What they may not yet realize is that, paradoxically, LLMs will force us to re-adopt classic software engineering principles that we have only paid lip service to in recent years.
The end of software design? When ChatGPT came out, I worried that AI would suck all the air out of the room and it’d be ages before anyone would care about software design again. But then it began to dawn on me that the success of AI will be more dependent on good software design than most people realize, and that the advent of AI is more likely to revive interest in software fundamentals. A new era for software design?
LLM coders are amazing. Coding has turned out to be one of the major successes of LLMs. Their capabilities are spectacular. Now that LLMs can one-shot entire applications, it’s hard to remember that just a few years ago researchers were struggling to synthesize ten-line functions.
Hype and reality. But, as anyone who has used LLM coders knows, the reality is not as pretty as the hype would suggest. LLMs quickly hit a complexity barrier when asked to build a full application, and when they operate as programming partners in a real codebase they tend to fumble around, making changes in arbitrary places, breaking existing functionality, introducing security vulnerabilities, and corroding the quality of the code, so that coding takes more time, not less. In short, they behave just like novice programmers.
Today’s playbook. Most companies in this sector are following the same playbook. They hope to overcome these problems by stacking one agent on top of another with ever more elaborate guidance and iterative strategies, and by giving these agents more control and wider access. It’s possible that this will work, but I’m skeptical. The evidence so far suggests that granting more control merely increases the risk of catastrophic errors. Widening access makes it harder to localize changes (and to maintain structure), and it means more tokens read and higher costs for the user. It’s common for agentic tools to ingest the entire codebase, or to reprocess a file many times, even to make a tiny change.
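To make the cost point concrete, here is a rough back-of-envelope sketch. The codebase size, module size, re-read count, and per-token price below are purely illustrative assumptions, not measurements of any particular tool, but they show how quickly whole-codebase context dwarfs a properly scoped change.

```python
# Back-of-envelope estimate of how much context an agent reads per change.
# All numbers are illustrative assumptions, not measurements of any real tool.

CODEBASE_TOKENS = 400_000        # assumed size of the whole codebase, in tokens
MODULE_TOKENS = 8_000            # assumed size of the one module actually affected
READS_PER_CHANGE = 5             # assumed number of times the agent re-reads its context
PRICE_PER_MILLION_INPUT = 3.00   # assumed input price, dollars per million tokens

def cost(tokens_read: int) -> float:
    """Dollar cost of reading `tokens_read` input tokens at the assumed price."""
    return tokens_read * PRICE_PER_MILLION_INPUT / 1_000_000

whole_codebase = CODEBASE_TOKENS * READS_PER_CHANGE   # 2,000,000 tokens read
scoped_module = MODULE_TOKENS * READS_PER_CHANGE      # 40,000 tokens read

print(f"Whole codebase as context: {whole_codebase:,} tokens, about ${cost(whole_codebase):.2f} per change")
print(f"Scoped module as context:  {scoped_module:,} tokens, about ${cost(scoped_module):.2f} per change")
```

Under these assumed numbers, reading everything costs roughly fifty times more per change than reading only the affected module, which is why unscoped access compounds both cost and the difficulty of keeping changes localized.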
A new approach. A different approach is possible, however. Rather than seeking to adapt LLMs to the messy reality of software development, we might recognize that LLMs, and their failure to work effectively in complex codebases, are exposing flaws in how we develop software. They are the canary in the coal mine, sending us a signal that all is not well in the way we write code.