Software is quietly becoming a probabilistic system, and almost no one is saying it out loud.
We built our profession around deterministic code. Write it, test it, ship it, know it works - but in my experience that contract is breaking. Inside the top few percent of operators at truly AI-native companies, the codebase has started to become something you believe works, with a probability you can no longer precisely state. The workday is changing as a consequence, and so are the roles, the organizations, the training pipelines, and the nature of what it means to ship.
I noticed because I built one.
A few months ago, in the evenings after my day job running Modular, I started building a side project called Compound Loop - a system that orchestrates multiple frontier models against each other to write, review, and merge code more or less autonomously. I would set it running on a real problem before I went to bed, and I would wake up and triage a stack of pull requests that had not existed the night before. Some were excellent, some were wrong, and some surfaced a question I did not know to ask. By 8 a.m. I was not catching up on yesterday's work - I was deciding which of the overnight jobs to keep, while the system kept analyzing logs and adding more PRs. The continuous compounding nature of it was, and still is, infectious to watch.
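The overnight pattern described here can be sketched as a minimal write-review-merge loop. To be clear, this is a hypothetical illustration, not Compound Loop's actual design: `call_model` is a stub standing in for real frontier-model API calls, and the writer/reviewer roles and triage queue are assumptions made for the sketch.

```python
def call_model(role: str, prompt: str) -> str:
    """Placeholder for a frontier-model API call.

    In a real system this would hit a model endpoint; here it returns
    canned responses so the control flow runs standalone.
    """
    if role == "writer":
        return f"PATCH: {prompt}"
    return "APPROVE"  # reviewer stub: accepts every patch


def compound_loop(tasks):
    """Run each task through a writer model, then a reviewer model.

    Approved patches are "merged" (collected); everything else lands in
    a triage queue - the stack of PRs waiting for a human in the morning.
    """
    merged, needs_triage = [], []
    for task in tasks:
        patch = call_model("writer", task)
        verdict = call_model("reviewer", f"Review this patch:\n{patch}")
        if verdict.startswith("APPROVE"):
            merged.append(patch)
        else:
            needs_triage.append((task, patch, verdict))
    return merged, needs_triage


merged, triage = compound_loop(["fix flaky test", "speed up log parser"])
print(len(merged), len(triage))  # prints: 2 0 (the stub reviewer approves all)
```

A production version would add the parts that make the morning triage interesting: real model calls with different providers playing writer and reviewer, a test-runner gate before merge, and persistence so runs compound overnight.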
For the first time in the history of knowledge work, the person who went home did not take the only copy of their brain with them. 9-9-6 as a concept is dead, and we are simply 24-7 employees now - but the 24-7 employee is not a person working 24 hours, it is a person whose agents work with enormous parallelization. Most teams in 2026 still bottleneck on coordination rather than typing, and most organizations have barely begun to restructure, but the frontier is always where the future shows up first, and the frontier is already here. This essay is not a description of the industry at large, but rather a description of what is already happening inside the most AI-native teams, and where I believe that pulls the rest of the industry.
Roles are not just collapsing upward - they are splitting
Inside the most AI-native teams, the pattern is messier than the clean "everyone levels up" story most commentary is selling. Some operators really are moving up the stack: the best engineers are becoming more effective product managers, working one abstraction layer above the code itself; the best product managers are becoming system architects; and the best architects are thinking about distribution, growth, and the shape of the market. For this group - maybe the top tier of any team - the work is more leveraged than it has ever been, and they are having the best years of their careers.
But that is not the whole picture, and pretending it is does a disservice to everyone else. Alongside the upward shift, a downward pressure is fragmenting roles in ways the headlines are not covering. Plenty of engineers are not becoming architects - instead they are becoming spec writers, reviewers, and agent babysitters, operators who spend their days translating intent into machine-readable prompts and then grading the machine's work against standards they themselves might not fully possess. Some of that work is genuinely important, but some of it is the 2026 equivalent of data entry, dressed up in new terminology.
We need to be honest about what that means for the people doing it. These fragmented roles will be paid less, valued less, and in many cases become career dead ends - a layer of output-wrangling work the system needs but does not reward. The pay gap between the top tier running fleets of agents effectively and the middle tier managing their exhaust will be wider than the pay gap between engineers and sales reps was in the previous era. That gap is already opening inside the companies I watch closely, and I don't believe it is going to close on its own.
One honest note on where the scarce work has moved. In AI infrastructure, kernel performance and compiler design and hardware abstraction remain deeply defensible moats, because there is still a high degree of determinism needed at the lowest levels of systems engineering. But at the level of building software on top of those moats, the center of gravity has shifted hard toward the human inputs a machine cannot yet replicate, and that shift is real and accelerating.