After the AI boom: what might we be left with?

Some argue that even if the current AI boom leads to an overbuild, it might not be a bad thing – just as the dotcom bubble left behind the internet infrastructure that powered later decades of growth.

It’s a tempting comparison, but the parallels only go so far.

The dotcom era’s overbuild created durable, open infrastructure – fibre networks and interconnects built on open standards like TCP/IP and HTTP. Those systems had multi-decade lifespans and could be reused for whatever came next. Much of the fibre laid in the 1990s still carries traffic today, upgraded simply by swapping out the electronics at each end. That overinvestment became the backbone of broadband, cloud computing, and the modern web.

Most of today’s AI investment, by contrast, is flowing into proprietary, vertically integrated systems rather than open, general-purpose infrastructure. The bulk of the spending goes on extremely expensive GPUs with useful lives of roughly one to three years: they become obsolete quickly and wear out under constant, high-intensity use. These chips aren’t general-purpose compute engines; they’re purpose-built for training and running generative AI models, tuned to the specific architectures and software stacks of a few major vendors such as Nvidia, Google, and Amazon.

These chips live inside purpose-built AI data centres – engineered for extreme power density, advanced cooling, and specialised networking. Unlike the general-purpose facilities of the early cloud era, these sites are tightly coupled to the hardware and software of whoever built them. Together, they form a closed ecosystem optimised for scale but hard to repurpose.

That’s why, if the AI bubble bursts, we could be left with little more than a pile of short-lived, highly specialised silicon and silent cathedrals of compute – monuments to a bygone era.

The possible upside

Still, there’s a more positive scenario.

If investment outruns demand, surplus capacity could push prices down, just as the post-dotcom bandwidth glut did in the early 2000s. Cheap access to this kind of compute might open the door to new experimentation – not just in generative AI, but in other high-compute domains such as simulation, scientific research, and data-intensive analytics. Even if the hardware is optimised for GenAI, falling prices could still make large-scale computation more accessible overall. A second-hand market in AI hardware could also emerge, spreading access to powerful compute far more widely.
