
Will AI be the basis of many future industrial fortunes, or a net loser?


Fortunes are made by entrepreneurs and investors when revolutionary technologies enable waves of innovative, investable companies. Think of the railroad, the Bessemer process, electric power, the internal combustion engine, or the microprocessor—each of which, like a stray spark in a fireworks factory, set off decades of follow-on innovations, permeated every part of society, and catapulted a new set of inventors and investors into power, influence, and wealth.

Yet some technological innovations, though societally transformative, generate little in the way of new wealth; instead, they reinforce the status quo. Fifteen years before the microprocessor, another revolutionary idea, shipping containerization, arrived at a less propitious time, when technological advancement was a Red Queen’s race, and inventors and investors were left no better off for non-stop running.

Anyone who invests in the new new thing must answer two questions: First, how much value will this innovation create? And second, who will capture it? Information and communication technology (ICT) was a revolution whose value was captured by startups and led to thousands of newly rich founders, employees, and investors. In contrast, shipping containerization was a revolution whose value was spread so thin that in the end, it made only a single founder temporarily rich and only a single investor a little bit richer.

Is generative AI more like the former or the latter? Will it be the basis of many future industrial fortunes, or a net loser for the investment community as a whole, with a few zero-sum winners here and there?

There are ways to make money investing in the fruits of AI, but they will depend on assuming the latter—that it is once again a less propitious time for inventors and investors, that AI model builders and application companies will eventually compete each other into an oligopoly, and that the gains from AI will accrue not to its builders but to customers. A lot of the money pouring into AI is therefore being invested in the wrong places, and aside from a couple of lucky early investors, those who make money will be the ones with the foresight to get out early.

The microprocessor was revolutionary, but the people who invented it at Intel in 1971 did not see it that way—they just wanted to avoid designing desktop calculator chipsets from scratch every time. But outsiders realized they could use the microprocessor to build their own personal computers, and enthusiasts did. Thousands of tinkerers found configurations and uses that Intel never dreamed of. This distributed and permissionless invention kicked off a “great surge of development,” as the economist Carlota Perez calls it, triggered by technology but driven by economic and societal forces.[1]

There was no real demand for personal computers in the early 1970s; they were expensive toys. But the experimenters laid the technical groundwork and built a community. Then, around 1975, a step-change in the cost of microprocessors made the personal computer market viable. The Intel 8080 had an initial list price of $360 ($2,300 in today’s dollars). MITS could barely turn a profit on its Altair at a bulk price of $75 each ($490 today). But when MOS Technology started selling its 6502 for $25 ($150 today), Steve Wozniak could afford to build a prototype Apple. The 6502 and the similarly priced Zilog Z80 forced Intel’s prices down. The nascent PC community started spawning entrepreneurs and a score of companies appeared, each with a slightly different product.

You couldn’t have known in the mid-1970s that the PC (and PC-like products, such as ATMs, POS terminals, smartphones, etc.) would revolutionize everything. While Steve Jobs was telling investors that every household would someday have a personal computer (a wild underestimate, as it turned out), others questioned the need for personal computers at all. As late as 1979, Apple’s ads didn’t tell you what a personal computer could do—they asked what you did with it.[2] The established computer manufacturers (IBM, HP, DEC) had no interest in a product their customers weren’t asking for. Nobody “needed” a computer, and so PCs weren’t bought—they were sold. Flashy startups like Apple and Sinclair used hype to get noticed, while companies with footholds in consumer electronics like Atari, Commodore, and Tandy/RadioShack used strong retail connections to put their PCs in front of potential customers.

The market grew slowly at first, accelerating only as experiments led to practical applications like the spreadsheet, introduced in 1979. As use grew, seeing the technology in use reduced uncertainty, leading to more adoption in a self-reinforcing cycle. This kind of gathering momentum takes time in every technological wave: It took almost 30 years for electricity to reach half of American households, for example, and it took about the same amount of time for personal computers.[3] When a technological revolution changes everything, it takes a huge amount of innovation, investment, storytelling, time, and plain old work. It also sucks up all the money and talent available. Like Kuhn’s paradigms in science, any technology not part of the wave’s techno-economic paradigm will seem like a sideshow.[4]

The nascent growth of PCs attracted investors—venture capitalists—who started making risky bets on new companies. This development incentivized more inventors, entrepreneurs, and researchers, which in turn drew in more speculative capital.
