The Stargate AI data center in Abilene, Texas, US, on Wednesday, Sept. 24, 2025. Kyle Grillot | Bloomberg | Getty Images
West Texas dust, iron-tinged and orange-red, rides the wind and sticks like a film to everything you touch. It clings to skin and the inside of your mouth, a fine grit that turns every breath into a reminder of where you are.

This is the landscape where OpenAI CEO Sam Altman is orchestrating something called Stargate — a fast-expanding constellation of data centers, backed by partners including Oracle, Nvidia, and SoftBank.

Six thousand workers' vehicles pour into the site each morning. Tires raise a constant veil of grit over a construction footprint the size of a small city — more people working this single campus than OpenAI employs across its entire payroll.

Rain comes in flashes. One minute the roads are powder; the next they're mud — thick, adhesive, the kind that tugs at boots and gums up machinery. Then the storm moves on, the sun returns, and the surface hardens again, cracked and chalky, as if the place is trying to erase the evidence that water ever touched it.

And at dusk, the same conditions that make living there punishing turn the sky into a blaze. Shorter wavelengths fall away; reds and oranges remain.

"This is what it takes to deliver AI," Altman told CNBC on site in September. "Unlike previous technological revolutions or previous versions of the internet, there's so much infrastructure that's required. And this is a small sample of it."

A small sample: At roughly $50 billion per site, OpenAI's Stargate projects add up to about $850 billion in spending — nearly half of the $2 trillion global AI infrastructure surge HSBC now forecasts.

The Abilene campus already has one data center online, with a second nearly complete. OpenAI CFO Sarah Friar told CNBC the site could ultimately scale past a gigawatt of capacity — enough electricity to power about 750,000 homes, roughly as many households as Seattle and San Francisco combined.

"The shovels that are going in the ground here today, they're really about compute that comes online in 2026," she said in September. "That first Nvidia push will be for Vera Rubins, the new frontier accelerator chips. But then it's about what gets built for '27, '28, and '29. What we see today is a massive compute crunch."

"We are growing faster than any business I've ever heard of before," Altman said, squinting against the sun. "And we would be way bigger now if we had way more capacity."

Land is cheap. Governments are willing. And the grid, for now, can be persuaded to bend.

Altman is not alone in building kingdoms.
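The headline numbers are easy to sanity-check. Below is a minimal back-of-envelope sketch; the per-site cost, total spend, HSBC forecast, and homes figure come from the reporting above, while the per-home comparison is an implied value used only to show the gigawatt claim is plausible.

```python
# Back-of-envelope check of the figures cited above. Dollar and capacity
# numbers come from the article; the per-home draw is implied, not reported.

cost_per_site_usd = 50e9        # "roughly $50 billion per site"
total_spend_usd = 850e9         # "about $850 billion in spending"
hsbc_forecast_usd = 2e12        # "$2 trillion global AI infrastructure surge"

implied_sites = total_spend_usd / cost_per_site_usd          # ~17 sites
share_of_forecast = total_spend_usd / hsbc_forecast_usd      # ~0.43, "nearly half"

site_capacity_w = 1e9           # "past a gigawatt of capacity"
homes_powered = 750_000         # figure cited by OpenAI's CFO
implied_draw_per_home_w = site_capacity_w / homes_powered    # ~1,333 W average

print(f"Implied Stargate-scale sites: ~{implied_sites:.0f}")
print(f"Share of HSBC's forecast: ~{share_of_forecast:.0%}")
print(f"Implied average draw per home: ~{implied_draw_per_home_w:.0f} W")
```

Run as written, it implies roughly 17 Stargate-scale sites, a bit over 40% of HSBC's forecast, and an average draw of about 1.3 kilowatts per home, in line with typical U.S. household electricity use.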
The money
AWS site lead Josh Sallabedra with MacKenzie Sigalos. Katie Tarasov | CNBC
"There is something inherently uncomfortable as a credit investor about the transformation of the sort we're facing that is going to require an enormous amount of capital," Daniel Sorid, head of U.S. investment grade credit strategy at Citi , told investors on a video call earlier this month. You can see that discomfort in the derivatives market. Credit-default swaps — insurance that pays out if a borrower can't service its debt — have widened to multi-year highs for Oracle. Barclays and Morgan Stanley have told clients to buy protection, and in late October, a liquid CDS market tied to Meta began actively trading for the first time as investors rushed to hedge what's becoming a hyperscaler debt boom. There's precedent for debt-funded buildouts outrunning near-term demand. In the dot-com era, telecoms levered up to lay fiber fast. When conditions tightened, many had to restructure. The network survived — but the outcomes ranged from many early investors booking losses, to equity wipeouts.
OpenAI and the tangled web
At the center of this infrastructure arms race sits OpenAI — and a web of interlocking deals that has reshaped the competitive landscape for AI. In the span of just two months this fall, the company announced partnerships totaling roughly $1.4 trillion in headline commitments — a figure that prompted skeptics to warn of an AI bubble and raised basic questions about whether the power, land, and supply chains exist to match the ambition.

The deals came in rapid succession. In September, OpenAI announced a $100 billion equity-and-supply agreement with Nvidia — the chip giant taking an ownership stake in OpenAI in exchange for 10 gigawatts of its next-generation systems. In October, OpenAI teamed up with AMD to deploy its Instinct GPUs, with the agreement structured to potentially give OpenAI a 10% stake in the chipmaker. Days later, Broadcom agreed to supply 10 gigawatts of custom chips co-designed with OpenAI. And in November, OpenAI signed its first cloud contract with Amazon Web Services, further loosening Microsoft's once-exclusive grip.

"We have to do this," OpenAI President Greg Brockman told CNBC in October, referring to the company's scramble to secure the raw computing power behind its ambitions. "This is so core to our mission. If we really want to be able to scale to reach all of humanity, this is what we have to do."

Nvidia is effectively financing demand for its own chips, Oracle is building the sites, AMD and Broadcom are positioning themselves as alternative suppliers, and OpenAI is anchoring the demand. Critics call it a circular economy: capital, capacity, and revenue all recycling through the same small set of players. It works as long as growth holds — but if demand slips or funding tightens, the stress can propagate fast through a web of shared exposures.

Already, Nvidia has cautioned investors there was "no assurance" it would enter a definitive agreement with OpenAI or complete the investment on expected terms, a reminder that headline AI pacts often start as frameworks.
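To make the circularity concrete, here is a minimal sketch that lays out the deal web described above as a simple adjacency list. It encodes only relationships named in this article, with the headline terms as labels; the structure, not the numbers, is the point.

```python
# Toy adjacency list of the deal web described in this article. Each company
# maps to the counterparties it has direct exposure to under the headline deals.

deal_web = {
    "Nvidia": [("OpenAI", "equity stake tied to 10 GW of next-generation systems")],
    "AMD": [("OpenAI", "Instinct GPU deployments; structure could give OpenAI ~10% of AMD")],
    "Broadcom": [("OpenAI", "10 GW of co-designed custom chips")],
    "Oracle": [("OpenAI", "builds the Stargate data center sites")],
    "Amazon Web Services": [("OpenAI", "OpenAI's first AWS cloud contract")],
    "OpenAI": [("all of the above", "anchors the demand behind the spending")],
}

# Every edge runs through a single counterparty. That is the shared-exposure
# concern: if OpenAI's demand or funding slips, stress reaches each supplier
# at once instead of being spread across independent customers.
for company, edges in deal_web.items():
    for counterparty, terms in edges:
        print(f"{company} -> {counterparty}: {terms}")
```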