This is a lightly edited transcript of a talk I gave at QCon London on 17 March 2026.
AI is approaching perfection on exactly the tasks that used to make up the first decade of an engineering career, and those tasks were never just tasks. They were the mechanism that built judgment, intuition, and the ability to supervise the systems we now delegate to AI. The ladder isn’t just missing rungs. It’s missing the process that created the people who built the ladder.
At a conference in New York last year, Dario Amodei, the co-inventor of reinforcement learning from human feedback (RLHF) and co-founder of Anthropic, predicted that within three to six months AI would be writing 90% of all code; and within twelve months, essentially all of it. At the six-month mark, Amodei claimed vindication, saying the 90% figure was “absolutely true,” at least within Anthropic.
Act I: A Prediction Expires
Twelve months on, almost to the day, and, well, that’s not exactly what has happened.
A detailed analysis by Redwood Research concluded that the 90% number was misleading. If you count only code committed to repositories, it was closer to 50%. Which is still a pretty shocking number. The figure inflates to 90% only by including throwaway scripts, one-off explorations, and any AI-generated text that was even transiently useful. Meanwhile, Google reports that just over 25% of internal code was AI-generated; Microsoft reports around 30%; and GitHub Copilot shows a suggestion acceptance rate of roughly 30% among enterprise users.
These figures haven’t really changed over the last six months. That’s nowhere near “essentially all.”
But AI is writing a lot of code, and the landscape of what it means to be a developer has changed massively. Amodei isn’t making it up: Anthropic’s own internal data shows engineers using Claude in 59% of their daily work and reporting 50% productivity gains. Roughly 4% of all public GitHub commits are now authored by Claude Code. The tools are real. The productivity is real, at least in certain contexts.
The problem is that “writing code” was never the point. The conversation most people are having right now is about AI writing code. The conversation we should be having is about what software engineering looks like when software engineers don’t have to write code. And behind that sits a structural question that both sides of the code debate are ignoring: if AI handles the work that used to train engineers, where does the next generation of engineers come from?
Amodei’s own caveat to his claim proves the point. He said the programmer still needs to specify the overall design, how code collaborates, and whether the design is secure. He separated the writing from the engineering. But until about three to six months ago, the writing was how people learned the engineering.