If you’ve been following my recent posts on Metaduck, you know I spend my days building infrastructure for AI agents and wrangling LLMs into production. As software engineers, we’re used to treating neural networks as massive, static mathematical graphs powered by backpropagation and matrix multiplication running on beefy GPUs (with PyTorch or similar frameworks).
But lately, I’ve been diving deep into computational neuroscience (shoutout to the Computational Psychiatry course that sent me down this rabbit hole). And it turns out, the biological wetware in our skulls works fundamentally differently than the models we build in PyTorch.
Today, I want to give you an engineer-friendly introduction to how the human brain actually processes information, how it learns without using calculus, and what this means for the future of AI. Let’s debug the most complex system we know.
1. The Mind’s Eye and the Hallucination Machine
In AI, we usually think of perception as a bottom-up process: you feed an image of an apple into a CNN, the early layers detect edges and colors, and the final layer outputs an “embedding”—a dense vector representing the abstract concept of “apple-ness.”
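To make that bottom-up pipeline concrete, here is a toy numpy sketch of my own (not taken from any real vision model): a hand-coded edge-detection kernel stands in for an early CNN layer, and a simple pooling step stands in for the final embedding.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 6x6 "image": a bright square on a dark background.
image = np.zeros((6, 6))
image[1:5, 1:5] = 1.0

# Early layer: a vertical-edge detector (Sobel-like kernel).
edge_kernel = np.array([[1, 0, -1],
                        [2, 0, -2],
                        [1, 0, -1]], dtype=float)

edges = np.maximum(convolve2d(image, edge_kernel), 0)  # ReLU nonlinearity

# Final "layer": pool the feature map down into a small dense vector,
# playing the role of the abstract embedding.
embedding = edges.mean(axis=0)
print(embedding.shape)
```

Real CNNs learn their kernels instead of hard-coding them, and stack dozens of such layers, but the information flow is the same: raw pixels in at the bottom, a dense abstract vector out at the top.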
But human perception isn’t just a passive conveyor belt. It’s heavily top-down. When you imagine an apple (your “mind’s eye”), your brain runs the process in reverse. Your higher-level executive networks and memory centers (the embedding layer) send signals back down into your Primary Visual Cortex (V1), forcing those early edge-and-color neurons to fire. You literally simulate the sensory experience. (Fun fact: if this feedback loop is physically missing or disconnected, you have Aphantasia—a complete inability to voluntarily visualize mental imagery [1].)
This top-down feedback loop is the core of Predictive Coding [2]. The brain isn’t passively waiting for pixels from the optic nerve. It is a “hallucination machine” constantly generating a simulation of the world. When you look at a tree, your brain predicts what it’s going to see. It compares that top-down prediction with the bottom-up sensory data from your eyes. The brain doesn’t pass the whole image up the chain; it only passes up the prediction errors (the diffs). Your conscious experience is just your brain’s best guess of reality, corrected by sensory diffs.
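The prediction-error idea fits in a few lines of numpy. This is a deliberately simplified, scalar sketch of my own (real predictive-coding models are hierarchical, with errors flowing up and predictions flowing down at every level): an internal estimate generates the prediction, and only the diff between prediction and noisy sensory input updates the belief.

```python
import numpy as np

rng = np.random.default_rng(0)

true_signal = 5.0      # what the world actually delivers (e.g., brightness)
estimate = 0.0         # the brain's current top-down belief
learning_rate = 0.3

errors = []
for step in range(20):
    sensory = true_signal + rng.normal(0, 0.1)    # noisy bottom-up input
    prediction_error = sensory - estimate         # the "diff" passed upward
    estimate += learning_rate * prediction_error  # belief corrected by the diff
    errors.append(abs(prediction_error))

print(round(estimate, 2))      # the belief converges near the true signal
print(errors[0] > errors[-1])  # prediction errors shrink over time
```

Notice what gets transmitted: never the raw signal, only the error. Once the internal model is accurate, the diffs are tiny and the channel goes nearly quiet, which is exactly the bandwidth saving predictive coding buys the brain.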
2. Why Backpropagation Fails Biology
So, how does this meat-computer learn?
In standard Machine Learning, we use Backpropagation. It acts like an omniscient global coach: when the network makes a mistake, we calculate a global error gradient and pass it backward through every layer, updating every single weight using exact calculus.
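For contrast, here is what that global coach looks like in code: a minimal two-layer network trained with hand-written backpropagation in numpy (a toy sketch of my own, not production code). The key thing to notice is that one global error signal flows backward through every layer, and every weight is updated from it via the chain rule.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny network: 2 inputs -> 3 hidden units (tanh) -> 1 output, on XOR data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 3))
W2 = rng.normal(0, 1, (3, 1))
lr = 0.1

losses = []
for _ in range(3000):
    # Forward pass.
    h = np.tanh(X @ W1)
    out = h @ W2
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: one global error gradient, chain rule all the way down.
    grad_out = 2 * (out - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h ** 2))  # tanh derivative

    # Every single weight is updated from the same global signal.
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(losses[0] > losses[-1])  # True: the global coach drove the error down
```

This is precisely what biology cannot do: a real neuron has no way to know the exact derivative of a loss computed many synapses downstream, which is why the brain must learn by some other rule.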