
Revisiting Minsky's Society of Mind in 2025


A Teenager’s Frustration, a Researcher’s Revelation

(Note: this is a deeply nerdy / technical post, with most applicability to folks building AI systems, and of little relevance to most users of AI.)

In the late 90s, as a tech-obsessed teenager, I picked up Marvin Minsky’s 1986 book The Society of Mind expecting profound answers about intelligence. It was exciting: Minsky made AI seem so tractable, with beautiful essays arguing that the mind is composed of countless simple “agents” – little processes that individually do almost nothing, yet collectively produce what we call thinking. Between this and Hofstadter’s Gödel, Escher, Bach, it seemed like intelligence as an emergent phenomenon was just around the corner. But in the many years that followed, any progress in AI came from systems that felt very different from the one Minsky described.

Eventually, I dismissed Minsky’s theory as an interesting relic of AI history, far removed from the sleek deep learning models and monolithic AI systems rising to prominence.

Fast forward to 2025, and my perspective has flipped. After a decade of working with large language models and AI systems, I’m struck by how prescient Minsky’s ideas now seem. The AI field is running into the limits of gigantic, monolithic models – like today’s large language models (LLMs) that try to do everything in one go – and increasingly looking toward modular, multi-agent approaches. Techniques that once sounded fanciful in Society of Mind, like collections of specialized “mini-AIs” and internal self-monitoring agents, are re-emerging as practical strategies for building more robust, scalable, and aligned AI.

As a historian of technology, I’ve seen foundational ideas cycle back into relevance, and Minsky’s vision of a “society” inside a mind is a prime example. Today, AI researchers and engineers are essentially operationalizing the Society of Mind – often without realizing it – through architectures that value diversity of components over any single all-powerful algorithm.

Let’s explore how Minsky’s insights on modularity, agent-like architectures, and internal oversight map onto current developments: from Mixture-of-Experts models and multi-agent systems (HuggingGPT, AutoGen) to new approaches in AI alignment and debates over centralized vs. decentralized designs.

Minsky’s Vision: Mind as a Society of Simple Agents

Minsky’s core proposal in The Society of Mind is elegant and radical: “The power of intelligence stems from our vast diversity, not from any single, perfect principle.”

Rather than a single, unified “genius” solving problems, our minds are portrayed as assemblies of many tiny agents, each with limited ability. These agents form hierarchies and teams (which Minsky calls “agencies”), where each sub-agent handles a piece of a task, and higher-level agents coordinate or choose which agents should act. Intelligence, in this view, emerges from the interplay of lots of simple parts, much like a society’s culture emerges from many individuals. No single component understands the whole, yet together they achieve complex, adaptive behavior.
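Minsky’s agencies were a theory of mind, not a software design, but the coordination pattern he describes – narrow specialists, with a higher level that only chooses who acts – can be sketched in a few lines of Python. The names `Agent` and `Agency` and the toy tasks below are illustrative, not from the book:

```python
# Toy illustration of Minsky's "society" pattern (not his actual formalism):
# each agent is a narrow specialist, and the agency's only job is to pick
# which sub-agent should act. No single component understands the whole.

class Agent:
    """A tiny specialist: it either recognizes a task or passes."""
    def __init__(self, name, can_handle, act):
        self.name = name
        self.can_handle = can_handle  # predicate: is this task mine?
        self.act = act                # the one narrow thing it can do

class Agency:
    """A higher-level agent that coordinates rather than computes."""
    def __init__(self, agents):
        self.agents = agents

    def handle(self, task):
        for agent in self.agents:
            if agent.can_handle(task):
                return agent.act(task)
        raise ValueError(f"no agent for task: {task!r}")

# Two near-trivial agents; any capability lives in the combination.
adder = Agent("adder",
              can_handle=lambda t: isinstance(t, tuple),
              act=lambda t: sum(t))
reverser = Agent("reverser",
                 can_handle=lambda t: isinstance(t, str),
                 act=lambda t: t[::-1])

mind = Agency([adder, reverser])
print(mind.handle((2, 3)))    # 5
print(mind.handle("minsky"))  # yksnim
```

Agencies can themselves contain agencies, so the same dispatch step composes into the hierarchies Minsky describes – which is also, loosely, the shape of modern router-plus-experts and orchestrator-plus-tools systems discussed later.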
