
When everyone has AI and the company still learns nothing

Why This Matters

This article highlights the gap between individual AI adoption and organizational learning: widespread AI use does not automatically translate into organizational growth or innovation. Realizing AI's full potential requires integrating it into broader company strategy, not just handing out licenses.

Key Takeaways

Are people using AI, or is the organization learning from it? What changed because we spent those tokens? And who moves discoveries from individuals to teams to organizational capabilities?

Ethan Mollick has been writing about AI adoption in organizations for a while now. In Making AI Work: Leadership, Lab, and Crowd, he makes the point that individual productivity gains from AI do not automatically become organizational gains. People may get faster, write better, analyze more, automate more, or quietly become cyborg versions of themselves. The company may still learn almost nothing.

A lot of companies are now entering the phase where GitHub Copilot licenses are provisioned, ChatGPT Enterprise exists somewhere in the stack, Claude or Gemini or Cursor shows up in pockets, and every team has at least one person who is much further along than the official enablement material assumes. Some of this is visible, yet much of it is not. Management sees license usage ("Where is the ROI for the €2 million we paid Anthropic last year?"), maybe prompt counts, maybe a survey, maybe a few internal PoCs that feel encouraging enough to put into a steering committee deck. In other companies, AI went straight to IT and died.

I think everyone knows this is the phase where it gets complicated. Really complicated. The "messy middle" of AI adoption starts when AI use is everywhere, uneven, partially hidden, difficult to compare, and not yet connected to organizational learning.

Everyone has Copilot now

The first phase of AI adoption is (mostly) comfortable because it looks like other enterprise rollouts. You buy seats. You define acceptable use. You run training. You create a champion network. You ask people to share use cases in a Teams channel, which will briefly look alive and then become one more corporate attic full of good intentions.

The second phase is much stranger: one team uses Copilot as autocomplete and calls it a day. Another team runs Claude Code in tight loops, with tests, reviews, and constant steering. A product owner suddenly prototypes real software instead of mocking screens in Figma. A senior engineer delegates a root-cause analysis to an agent and comes back to a valid solution in under an hour, a task that would have taken two weeks without AI. A junior person produces polished code but has no idea which architectural assumptions got smuggled into the system. A support team quietly turns recurring tickets into workflow automation, because they know exactly where the work hurts and nobody in the Center of Excellence ever asked the right question.

All of these things can happen in the same company at the same time. That is what makes the messy middle messy: the adoption unit is no longer the organization, and maybe not even the team. It is the loop inside the work!

Mollick’s Leadership, Lab, and Crowd frame is useful here. Leadership sets direction and permission. The Crowd discovers use cases because the Crowd does the actual work. The Lab turns those discoveries into shared practices, tools, benchmarks, and new systems. But the part I keep getting stuck on is the same one that shows up in agentic engineering again and again: how does the learning actually travel?

The old change machinery is too slow for this
