Opening

Look at a modern CPU die. See those frame pointer registers? That stack management hardware? That’s real estate. Silicon. Transistor budget. It’s a monument to C and to the everything-is-a-function paradigm. Not because this paradigm discovered some fundamental truth about computation. Because it won, and winners get to reshape reality in their image.

Those transistors could have been general-purpose registers. Could have been message-passing hardware. Could have been anything. Instead, they’re dedicated to making one particular abstraction - blocking, synchronous function calls - cheaper to execute.

This is Hardware Stockholm Syndrome: we optimized the hardware for C, then pointed at the hardware and said “see, C is efficient!” We forgot we made it that way.

The Capture

How did we get here?

In the beginning, CPUs gave you the basics: registers, memory access, CALL and RETURN instructions. The rest was up to you. Assembly programmers decided where to put parameters, where to put return values, how to manage their stack (if they used one at all).

Then C came along with a rigid contract: everything is a function. Parameters go here. Return values go there. Stack frames look like this. The calling convention is standardized.

This was expensive in the early days. Every function call meant:

1. Save the frame pointer
2. Set up a new stack frame
3. Copy parameters into position
4. Execute
5. Copy the return value into position
6. Tear down the stack frame
7. Restore the old frame pointer

That’s a lot of overhead for “do this thing, then come back.”

So what happened? Did we question whether everything needed to be a function? No. We changed the hardware. Frame pointer registers appeared. Stack management got hardware support. Calling conventions got baked into instruction sets. The cost of the function paradigm dropped - not to zero, but low enough that we stopped noticing.

But that wasn’t the end of it.
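The seven-step protocol above can be made concrete. Below is a minimal sketch in C; the comment shows the kind of frame-pointer-based x86-64 code a compiler may emit for it. The assembly is illustrative only (not the output of any particular compiler), and the function name `add_one` is invented for this example.

```c
#include <assert.h>

/* Under the stack-frame discipline C standardized, even a one-line
   body pays the full call protocol. Illustrative frame-pointer-based
   x86-64 code for this function:

     push  rbp               ; 1. save the caller's frame pointer
     mov   rbp, rsp          ; 2. set up a new stack frame
     mov   [rbp-4], edi      ; 3. copy the parameter into position
     mov   eax, [rbp-4]      ; 4. execute the body ...
     add   eax, 1            ; 5. ... leaving the result in the
                             ;    return-value register
     pop   rbp               ; 6-7. tear down the frame, restore
     ret                     ;      the old frame pointer
*/
int add_one(int x) {
    return x + 1;
}
```

Every single call re-runs that save/set-up/tear-down dance - which is exactly the overhead the hardware was later redesigned to subsidize.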
Treating everything like a sequential function creates a deeper problem: what happens when one synchronous chain of functions tries to hog the single CPU?

You need a traffic cop. An enforcer. Enter the preemptive operating system.

Programmers just wanted libraries of shared code. What they got was an operating system with god-like powers that steps in, interrupts their synchronous chain, saves its entire state, and pulls out another synchronous chain to run.

The operating system transformed synchronous code into expensive state machines. Save all registers. Save the stack pointer. Save the program counter. Save everything. Then restore it all when it’s that chain’s turn again.

This is massively expensive. But it was necessary - because we insisted everything be synchronous and sequential, and we only had one CPU to time-share.

The Validation Loop

And then something insidious happened. The hardware got better at this.

Context switching got faster. Cache hierarchies evolved to make stack access cheaper. Branch predictors learned the patterns of function calls and returns. Virtual memory systems optimized for the access patterns of synchronous, stack-based execution.

Suddenly, C programs ran fast. Really fast.

Look, people said, C is efficient! It maps naturally to hardware! It must be the right abstraction!

But this was circular reasoning wearing a lab coat. C wasn’t efficient because it matched some Platonic ideal of computation. C was efficient because we’d spent decades and billions of transistors making it efficient. We’d redesigned the hardware to fit the paradigm, then pointed at the fit and called it destiny.

The frame pointer? Not a fundamental truth of computing - a subsidy for function-based programming. The call stack? Not the only way to organize execution - just the way we optimized for. Preemptive multitasking? Not inevitable - a patch over the synchronous-sequential assumption when you only have one expensive CPU to share.
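The “save everything, restore everything” cost described above is easy to see in data-structure form. Here is a hedged sketch in C of the per-task snapshot a preemptive kernel must maintain; the type and field names (`task_context`, `gp_regs`, etc.) are invented for illustration, not any real kernel’s layout, and real kernels save considerably more (floating-point/SIMD state, segment registers, and so on).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Everything the OS must snapshot to pause one synchronous chain
   and resume another. Field names are illustrative. */
typedef struct {
    uint64_t gp_regs[16];     /* all general-purpose registers     */
    uint64_t stack_pointer;   /* where this chain's stack stopped  */
    uint64_t program_counter; /* where its code stopped            */
    uint64_t flags;           /* condition codes, interrupt state  */
} task_context;

/* A context switch in miniature: copy out the running chain's
   state, copy in the next chain's state. */
void context_switch(task_context *save_to,
                    const task_context *restore_from,
                    task_context *cpu) {
    memcpy(save_to, cpu, sizeof *cpu);      /* save everything    */
    memcpy(cpu, restore_from, sizeof *cpu); /* restore everything */
}
```

Even this toy version moves 160 bytes per task per switch; the real thing also flushes pipelines and churns caches and TLBs - the “massively expensive” part.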
Every optimization made C look more “natural.” Every hardware feature designed to speed up function calls reinforced the idea that functions were the right atomic unit. The paradigm and the platform became mutually reinforcing, a feedback loop that made alternatives look strange, inefficient, impractical.

We stopped asking “is this the right abstraction?” and started asking “how do we make this abstraction faster?”

The 1972 Economics

But why? Why did this particular paradigm win? Context matters.

1972. CPUs were still expensive. A PDP-11 minicomputer - the machine Unix and C were developed on - cost tens of thousands of dollars. Not house-expensive like the mainframes of the 1960s, but still far too expensive to give each programmer their own dedicated machine. You had to time-share.

Time-sharing means: maximize CPU utilization. Keep that expensive silicon busy. When one program waits for I/O, switch to another program. Slice up the CPU’s time among many users.

And for time-sharing, synchronous sequential execution makes sense. Each user gets the illusion of a dedicated machine, running their code in sequence. The operating system juggles them behind the scenes.

The function-based paradigm fit this economic reality. Programs were synchronous chains. The OS was a scheduler. Everything bottlenecked through one expensive resource.

This wasn’t wrong. This was rational for 1972.

The Pong Counterexample

But here’s the thing: even in 1972, this wasn’t the only way to build things.

Pong shipped in 1972. The same year C was taking shape.

Pong had no software. Zero. It was built entirely from hardware logic chips - flip-flops, counters, comparators. Massively parallel. Every component doing its own thing simultaneously. The ball position counter running in parallel with the paddle position logic, running in parallel with the collision detection, running in parallel with the score display.

No functions. No call stack. No operating system.
No synchronization beyond the electrical signals propagating through the circuits.

And it worked. Beautifully. Reliably. Millions of units sold.

You could argue: well, that’s just hardware, not a general-purpose computer. But that’s the point. We chose to make general-purpose computers work like synchronous, sequential, function-based machines. We could have explored other models - asynchronous, message-passing, massively parallel. The hardware people were already doing it.

We didn’t progress forward from 1972. We moved sideways into one particular paradigm and then optimized the hell out of it.

The function-based approach wasn’t inevitable. It was economically rational for expensive, time-shared CPUs. It had familiar mathematical notation. It won.

But Pong is a reminder: there were other paths. We just didn’t take them.

What Changed

Fast forward to 2025. CPUs are cheap. Absurdly cheap. A Raspberry Pi Zero costs $5 and has more computing power than the machines that sent humans to the moon. You can buy a dozen CPUs for the price of lunch.

The economic constraint that made time-sharing rational? Gone. The reason to treat combinations of CPUs in a synchronous, sequential manner? Gone. The justification for preemptive operating systems that transform synchronous code into expensive state machines? Gone.

But our atomic units? Still synchronous. Still blocking. Still function-based.

We’re still building programs as if we were time-sharing expensive CPUs. We’re still using preemptive multitasking to fake asynchrony on top of synchronous primitives. We’re still paying the cost of context switches and stack management.

Why? Because the hardware is optimized for it. Because our tools assume it. Because our mental models are built around it. Because we forgot it was a choice.

First Principles Reset

So let’s start over. From first principles. What do we have in 2025?

Cheap CPUs. Abundant.
Distributed everywhere - in our pockets, on our wrists, embedded in devices throughout our environment.

Networks. Ubiquitous connectivity. Messages flowing between nodes constantly.

Asynchronous events. Sensors triggering. Users clicking. Data arriving. Nothing waits for anything else.

Parallel execution. Multi-core is standard. GPUs have thousands of specialized cores. The hardware is massively concurrent.

What don’t we have?

A programming paradigm where these things are atomic units. Notation that treats asynchronous message-passing as fundamental rather than exceptional. Abstractions built for networks of cheap CPUs instead of time-slices of expensive ones. A mental model where parallel, loosely-coupled execution is the default, not something you bolt on with “concurrency primitives.”

The function - blocking, synchronous, sequential - is a 1972 solution. It fit the constraints of expensive, time-shared CPUs. It matched familiar mathematical notation. It won, reshaped the hardware in its image, and became invisible.

But the constraints changed. The hardware changed. The problems changed. Our atomic units didn’t.

We’re still building intricate Swiss watches with carefully synchronized gears when we could be building LEGO-like compositions of loosely-coupled, asynchronous components. We’re still treating message-passing, parallelism, and asynchrony as “advanced topics” that require special handling, when they’re the reality of the systems we’re building.

The question isn’t “how do we make functions handle async better?” The question is “what would programming look like if we started from today’s reality instead of 1972’s constraints?”

Closing

In the 1960s, engineers looked at piles of transistors and asked: “What do we have? What don’t we have?” That question led to integrated circuits, then CPUs, then the computing revolution.

What can that question lead to in 2025, if we’re willing to ask it honestly?

Not by throwing everything away. Not by starting from scratch.
But by recognizing that C’s paradigm was a choice, not destiny. That the hardware Stockholm Syndrome is real - we fell in love with our constraints and forgot they were constraints. The first step to escape is recognizing the cage.

The function-based paradigm is a tool. A powerful tool, optimized by decades of hardware and software evolution. But it’s not the foundation of computation. It’s not the only way. It’s not even the best fit for the problems we’re solving today.

It’s just what we’re used to. And that’s worth questioning.

See Also

Part 1: The Marketing of C

Email: [email protected]
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas (earlier)
Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html