I’ve been reading this week about how humans learn and about effective ways of transferring knowledge. AI has also been in the back of my mind, and I’ve come to a realization: not only is our industry building AI tools poorly, we’re building them backwards. Honestly, that’s depressing, because there is so much unrealized potential available to us. Is it not enough that we built the LLMs unethically, and that they waste far more energy than they return in value? It doesn’t take much extra effort to build tooling in a way that facilitates how humans work together: tooling that improves our capabilities by making everybody more effective, rather than by deskilling practitioners’ critical reasoning loops. Here’s how that might look.
First: How we learn. My favorite (evidence-backed) theory of how humans learn is Retrieval Practice.
The short of it is that humans don’t really learn when we download information into our brains; we learn when we expend effort to pull that information back out. This has big implications for designing collaborative tooling!
Second: What we learn. It turns out that the “thing” we learn most effectively is not knowledge as we typically think of it; it’s process. This should be intuitive if we put it in a more natural context. Imagine learning to bake for a moment: do you teach someone to bake a cake by handing them a fact sheet of ingredients and having them memorize it? Or do you teach them the process?
Third: How we level up. Humans are really bad at “novel” innovation, which is a bit tragic, because novel innovation seems to be the thing the tech industry has in mind when it talks about developer productivity. We surround ourselves with the myth of the solo genius, we benchmark developers on individual contributions, and we expect people to implement code by themselves. Yet it turns out that sustained solo innovation is both extremely rare and not that important in the grand scheme of things. It’s the sprinkles on the cupcake, not the cake itself; simply put, it’s not how innovation generally happens.
However! We’re really good at cumulative iteration. Humans are turbo-optimized for communities, basically. This is why brainstorming is so effective… but usually only in a group. There is an entire theory in cognitive psychology, cumulative culture, that addresses this directly and shows empirically how humans work in groups. Humans learn collectively and innovate collectively via copying, mimicry, and iteration on top of prior art. You know that quote about standing on the shoulders of giants? It turns out it’s not just a fun quote; it’s fundamentally how humans work.
Also, innovation and problem solving? Basically the same thing. If you get good at problem solving, propagating learning, and integrating that learning into the collective knowledge of the group, then the infamous Innovator’s Dilemma disappears.
So, when we combine all of those bits of information, what do we get?
- Humans learn and teach via process
- Processes need to take a goldilocks amount of effort to be effective
- Cumulative iteration > solo developer problem solving
- We build tools to help us think, not to think for us
Now, here’s the main pattern I see AI tooling doing: