
Asset Manager Warns That OpenAI Is Likely Headed for Financial Disaster


Just over three years ago, OpenAI opened the floodgates with the launch of ChatGPT. The frantic industry-wide race that followed has resulted in soaring valuations for AI companies, tens of billions of dollars invested in data center infrastructure — and plenty of skepticism as well.

For one, experts have pointed out that OpenAI’s business fundamentals are inherently different from those of its competitors like Google. These legacy businesses can tap existing revenue sources to bankroll their major AI capital expenditures.

The Sam Altman-led OpenAI, however, has raised record amounts of cash and has vowed to spend well over $1 trillion before the end of the decade without the advantage of an existing business that generates ongoing revenue. (The company’s recent announcement that it’s stuffing ads into ChatGPT is likely a bid to shift that reality.)

The gap between the AI industry's promises of a human-level AI-driven future and reality, in other words, has never been wider, much like the enormous gulf between AI company valuations and their lagging revenues.

To many onlookers, that kind of hubris could end in disaster. As former Fidelity manager George Noble, who has spent decades in asset management, notes in a lengthy tweet, the company may already be “FALLING APART IN REAL TIME.”

“I’ve watched companies implode for decades,” he wrote. “This one has all the warning signs.”

Noble pointed out that, in addition to stalling subscriber growth, OpenAI is reportedly losing a staggering $12 billion per quarter, as well as "burning $15 million per day on [text-to-video generator app] Sora alone." Noble also cast doubt on the AI industry's promises of scaling up operations to meet demand, an immensely costly enterprise that's bound to become even more expensive as AI models demand ever more power.

Whether the models' utility will increase at the same rate remains a major point of contention, with some warning that we may have hit a point of diminishing returns, in which each new iteration of the same AI model provides smaller and smaller benefits.

“Here’s the big math problem nobody wants to discuss,” Noble said. “It’s going to cost 5x the energy and money to make these models 2x better.”

“The low-hanging fruit is gone,” he added. “Every incremental improvement now requires exponentially more compute, more data centers, more power.”
