
Apple's accidental moat: How the "AI Loser" may end up winning



A few weeks ago I wrote about how I think intelligence is becoming a commodity. The idea is straightforward, and widespread by now: when everyone races to build the best model, the frontier improves, but so does every other model eventually. Every dollar spent on a bigger training run makes the previous one cheaper. The gap between frontier, second-best, and open-source alternatives is collapsing fast (Gemma4, Kimi K2.5 and GLM 5.1 are actually becoming my bedside models these days). Better still, as models improve, the amount of intelligence that can be deployed locally on lower-end hardware keeps growing.

The irony is that this commoditisation of intelligence is benefiting the company everyone was framing as the “AI loser”: Apple.

The company that “lost”

There’s a version of the last three years in which Apple genuinely failed at AI. They had Siri before anyone had a serious voice assistant, and then watched ChatGPT eat their lunch from its very first release (even before it introduced native voice interaction). Apple has no flagship frontier model (not even a vanity open-source one), and no $500B compute commitment with the usual suspects. Meanwhile, the rest of the AI labs and big tech companies were racing to win the next state-of-the-art benchmark by burning bags of cash.

What this also meant is that while those companies were burning money at a rate that would make a sovereign wealth fund uncomfortable, Apple was (and still is) sitting on a pile of undeployed cash (to the point of increasing its stock buybacks), giving it optionality.

To me, OpenAI is the most paradigmatic example of this “infinite money burning machine”. OpenAI raised at a $300B valuation and then shut down Sora, the video product they had been positioning as a creative-industry flagship, because it was running at roughly $15M a day in costs against $2.1M in daily revenue. Disney had already signed a three-year licensing deal for Sora to generate content from Marvel, Pixar, and Star Wars characters, and was finalising a $1B equity stake in OpenAI. When Sora died, so did the billion: a $1B investment evaporated because the product it was staked on couldn’t pay for itself (shrinking the buffer that covers OpenAI’s daily burn).
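To get a feel for those numbers, here is a back-of-the-envelope sketch using only the figures quoted above ($15M/day in costs against $2.1M/day in revenue); the annualised loss is my own illustrative extrapolation, not a reported figure.

```python
# Back-of-the-envelope Sora unit economics, using the figures cited
# in the text. The annualised loss is an extrapolation, not a
# reported number.
daily_costs = 15_000_000    # USD per day, as cited
daily_revenue = 2_100_000   # USD per day, as cited

daily_loss = daily_costs - daily_revenue
annual_loss = daily_loss * 365
cost_coverage = daily_revenue / daily_costs  # fraction of costs covered

print(f"Daily loss:      ${daily_loss:,.0f}")
print(f"Annualised loss: ${annual_loss:,.0f}")
print(f"Revenue covers {cost_coverage:.0%} of costs")
```

At that rate, revenue covers roughly a seventh of costs, and the gap compounds to several billion dollars a year if nothing changes.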

On the infrastructure side: OpenAI signed non-binding letters of intent with Samsung and SK Hynix for up to 900,000 DRAM wafers per month, roughly 40% of global output. These were, of course, non-binding. Micron, reading the demand signal, shut down its 29-year-old Crucial consumer memory brand to redirect all capacity toward AI customers. Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree on terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.
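To put the wafer figure in perspective, the two numbers quoted above imply a total for global DRAM output; this is my own inference from those figures, not a reported statistic.

```python
# Implied global DRAM wafer output, inferred from the figures cited
# in the text (not a reported statistic).
openai_wafers = 900_000   # wafers per month, per the letters of intent
share_of_global = 0.40    # "roughly 40%" of global output, as cited

implied_global_output = openai_wafers / share_of_global
print(f"Implied global output: {implied_global_output:,.0f} wafers/month")
```

In other words, a single buyer's non-binding letters of intent were large enough to move planning assumptions for nearly half the world's supply.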

I don’t know about you, but I don’t see these as the behaviours of someone who is winning the AI race, regardless of how well their models do on benchmarks or how much they are burning on infrastructure. A small miscalculation in expected revenue and you are out of the game (I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt within the next 18-24 months, but I am terrible at predictions).

From intelligence to capabilities

My sense is that the labs’ bet was always that raw model capability, i.e. intelligence, along with the infrastructure required to run it, would stay scarce. Whoever secured the best model and the infrastructure to run it at scale would have the strongest moat. But I am afraid that having the best model may, in itself, not be enough moving forward: smaller, cheaper models keep catching up to the previous generation of frontier models.
