There’s a strange dynamic in AI discourse where you’re only allowed to hold one of two positions: either large language models will automate all knowledge work, collapse employment, and fundamentally restructure civilisation within a decade, or they’re stochastic parrots that can’t really do anything useful and the whole thing is a bubble. The measured take, that LLMs are a significant productivity tool comparable to previous technological shifts but not a rupture in the basic economic fabric, doesn’t generate much engagement. It’s boring.
I want to make the case for boring.
The Python Analogy
Consider how we talk about LLMs as a new abstraction layer for programming. You write intent in English, the model translates it to code, you debug at the level of English when things go wrong. This is framed as revolutionary, but there’s another way to see it: it’s the same transition we’ve made repeatedly throughout computing history. Assembly programmers became C programmers became Python programmers. The abstraction rose, individual productivity increased, more total software got written, and roughly similar numbers of people were employed writing it.
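A deliberately trivial sketch makes the ladder concrete: the same task written at three levels, each one hiding more of the machinery below it.

    # One task, three rungs of the abstraction ladder:
    # sum the squares of the even numbers in a list.
    nums = [3, 1, 4, 1, 5, 9, 2, 6]

    # Rung 1, C-style: explicit indices and mutable state.
    total = 0
    i = 0
    while i < len(nums):
        if nums[i] % 2 == 0:
            total += nums[i] * nums[i]
        i += 1

    # Rung 2, Python-style: the loop machinery disappears into the language.
    total = sum(n * n for n in nums if n % 2 == 0)

    # Rung 3, English-style: "sum the squares of the even numbers" goes to an
    # LLM, which emits something like the line above. The intent is the same;
    # what changes is how much of the mechanism you spell out yourself.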
If English-to-code is just another abstraction layer, maybe the equilibrium looks like “same number of software engineers, each individually more productive, much more total software in the world.” That’s a big deal, but it’s not mass unemployment. It’s not the end of programming as a career. It’s what happens every time we get better tools.
The counterargument is that previous transitions still required learning a formal language with precise syntax, whereas English is natural and everyone speaks it already. This should dramatically lower barriers to entry. Perhaps. Though I suspect the binding constraint was never syntax but the underlying skill of thinking precisely about systems, edge cases, state management, and failure modes. The compiler was pedagogical in that it forced you to confront ambiguity. If the LLM just does something plausible when your specification is vague, you may never develop that precision, and the systems you build may be worse for it. Whether that matters depends on the use case.
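A small, constructed example of what "does something plausible" looks like. The request "remove the duplicates from this list of names" sounds fully specified, but it leaves the actual decisions (does case matter? does order?) to whoever, or whatever, fills in the gaps:

    # "Remove the duplicates from this list of names": two plausible readings.
    def dedupe(items, key=lambda x: x):
        """Keep the first occurrence of each item, as judged by `key`."""
        seen = set()
        out = []
        for item in items:
            k = key(item)
            if k not in seen:
                seen.add(k)
                out.append(item)
        return out

    names = ["Alice", "bob", "Bob", "alice", "Alice", "Carol"]

    # Reading 1: exact-string duplicates, original order preserved.
    print(dedupe(names))                 # ['Alice', 'bob', 'Bob', 'alice', 'Carol']

    # Reading 2: case-insensitive duplicates, keeping the first spelling seen.
    print(dedupe(names, key=str.lower))  # ['Alice', 'bob', 'Carol']

    # Both are defensible; the vague English never forced you to choose between them.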
The Specification Problem
Here’s a more useful way to think about which work is vulnerable to automation. Consider knowledge work as sitting on a spectrum from well-specified to ambiguous.
At the well-specified end, you have tasks where the inputs are clean, the desired output is clear, and success criteria are obvious: processing a standard form, writing code to a precise spec, translating a document, summarising a report. LLMs are excellent at this, and there’s strong evidence they can automate significant portions of it.
At the ambiguous end, you have tasks where the context is messy, the right approach isn’t obvious, and success depends on knowledge that isn’t written down anywhere. “Put together something on the competitive landscape”, say, where you need to know what the CEO actually cares about, what was discussed in a meeting last month, which competitors are politically sensitive to mention, and what level of detail is appropriate for this audience. The specification itself is the work, and it requires drawing on institutional history, unstated preferences, and accumulated judgement that no system has access to.