Modern AI appears remarkably human. We can talk to LLMs like friends, get instant relationship advice, and send them photos of our outfits. And just like humans, these models aren't perfect. They sometimes misremember facts, make simple arithmetic errors, and over-agree with our favorite conspiracy theories.
We argue this similarity between humans and AI is largely superficial. In fact, we believe that AI systems will become less human-like as they continue to scale and improve in their capabilities. This has important implications for alignment and interpretability as AI plays an increasingly powerful role in society.
Specifically, we argue that humans and LLMs are fundamentally different computational architectures, each operating under distinct constraints and algorithms. Even if their outputs become more similar with scale, the underlying mechanisms remain separable. To make this distinction clearer, we propose a process-focused framework for evaluating LLM-human likeness.
Humans vs. LLMs
When a human sits down to write even a simple email, they draw on a small, personal set of memories: their relationship with the recipient, past interactions, and immediate goals. They think briefly, using limited attention and working memory, and then craft a message that fits the social context. An LLM can also produce a useful email, but it does so under entirely different constraints. Instead of lived experience, it relies on patterns extracted from billions of text examples - far more than any human could ever encounter - and maps the prompt into a high-dimensional representation to generate the next token. In other words, humans and LLMs may produce similar outputs, but the computational resources, amount and nature of available data, time pressures, and underlying reasoning algorithms they use are fundamentally different. The following sections outline these contrasts in turn.
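To make the mechanics concrete, here is a minimal sketch of what "generate the next token" looks like in practice. It uses the Hugging Face transformers library and GPT-2 purely as an illustration; the specific model and framework are our choice of example, not something the argument depends on.

```python
# Minimal next-token prediction sketch (illustrative only; GPT-2 is an
# arbitrary small open model, not a claim about any particular system).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Hi Sam, just following up on"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model maps the token sequence into high-dimensional hidden states
    # and returns a score (logit) for every token in its vocabulary.
    logits = model(**inputs).logits

# Convert the scores at the final position into a probability distribution
# and take the single most likely continuation.
probs = torch.softmax(logits[0, -1], dim=-1)
next_token_id = int(torch.argmax(probs))
print(tokenizer.decode([next_token_id]))
```

No store of lived memories is consulted here: the entire computation is a mapping from the prompt, through learned weights, to a distribution over possible next tokens.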
Computational constraints
Humans: Cognitive theorists like Simon, Gigerenzer, and Griffiths argue that human reasoning is best understood as bounded - we make good-enough decisions with limited computational resources. Even though the brain is massively parallel, with ~10¹¹ neurons and ~10¹⁴-10¹⁵ synapses, its effective, moment-to-moment computation is bottlenecked by metabolic limits, slow spike transmission, and small-capacity memory buffers. A useful analogy: the brain's "parameter count" (its synapses) is enormous, but the working compute it can bring to bear at any given moment is restricted, which makes online reasoning resource-limited.
LLMs: In principle, LLMs can be scaled in ways biological brains cannot: more parameters, more training compute, more depth. They are not bound by biology, although in practice they are constrained by hardware, energy, and training budgets. Today's 70-billion-parameter models are still small next to the brain's synapse count, but unlike a brain, an LLM can keep growing for as long as we are willing to build and pay for it.
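A rough back-of-the-envelope comparison makes the scale gap concrete. The figures below are the same order-of-magnitude estimates quoted above, and treating one synapse as one "parameter" is a loose analogy rather than a claim of equivalence.

```python
# Back-of-the-envelope scale comparison (loose analogy: synapse ~ parameter).
llm_parameters = 70e9        # a current 70-billion-parameter model
synapses_low = 1e14          # lower estimate of human synapse count
synapses_high = 1e15         # upper estimate

print(f"Synapses per LLM parameter: {synapses_low / llm_parameters:,.0f}"
      f" to {synapses_high / llm_parameters:,.0f}")
# -> roughly 1,400 to 14,000 synapses for every parameter in the model
```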
Data constraints
Humans: We are constantly hit with a large, continuous stream of sensory input, but we can process and store only a small fraction of it. Vision alone delivers millions of bits of information per second, and almost all of them are discarded automatically. What gets through is filtered by attention, relevance to survival, prior knowledge, and social context. So while humans technically encounter huge amounts of information, the part that enters memory and shapes learning is tiny compared to what AI systems can retain (see Figure 1).
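An order-of-magnitude sketch shows just how aggressive this filtering is. The exact figures below are rough, commonly cited estimates and are our assumptions for illustration, not measurements from the text.

```python
# Order-of-magnitude sketch of sensory filtering. Both figures are rough,
# commonly cited estimates, used only to illustrate the size of the gap.
visual_input_bits_per_sec = 1e7   # the eye delivers on the order of millions of bits/s
retained_bits_per_sec = 50        # rough estimate of consciously processed throughput

fraction_kept = retained_bits_per_sec / visual_input_bits_per_sec
print(f"Fraction of visual input that survives filtering: {fraction_kept:.0e}")
# -> about 5e-06: only a few bits per million make it through
```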