I’ve been surprised by one aspect of using large language models more than any other.
They often put into words things I have long understood, but could not write down clearly. When that happens, it feels less like learning something new and more like recognition. A kind of “yes, that’s it” moment.
I have not seen this effect discussed much, but I think it matters. I also think it has improved how I think.
Much of what we know is tacit
As programmers and developers, we build up a lot of understanding that never quite becomes explicit.
You know when a design is wrong before you can say why. You sense a bug before you can reproduce it. You recognize a bad abstraction instantly, even if it takes an hour to explain the problem to someone else.
This is not a failure. It is how expertise works. The brain compresses experience into patterns that are efficient for action, not for speech. Those patterns are real, but they are not stored in sentences.
The problem is that reflection, planning, and teaching all require language. If you cannot express an idea, you cannot easily inspect it or improve it.
LLMs are good at the opposite problem
Large language models are built to do exactly the opposite: turn vague structure into words.