
OpenAI's hyperscaler ambitions are being put to the test with its latest megadeals



Sam Altman didn't set out to compete with Nvidia. OpenAI began with a simple bet that better ideas, not better infrastructure, would unlock artificial general intelligence. But that view shifted years ago, as Altman realized that more compute, or processing power, meant more capability — and ultimately, more dominance.

On Monday morning, he unveiled his latest blockbuster deal, one that moves OpenAI squarely into the chipmaking business and further into competition with the hyperscalers. OpenAI is partnering with Broadcom to co-develop racks of custom AI accelerators, purpose-built for its own models. It's a big shift for a company that once believed intelligence would come from smarter algorithms, not bigger machines.

"In 2017, the thing that we found was that we were getting the best results out of scale," the OpenAI CEO said in a company podcast on Monday. "It wasn't something we set out to prove. It was something we really discovered empirically because of everything else that didn't work nearly as well."

That insight — that the key was scale, not cleverness — fundamentally reshaped OpenAI. Now, the company is extending that logic even further, teaming up with Broadcom to design and deploy racks of custom silicon optimized for OpenAI's workloads. The deal gives OpenAI deeper control over its stack, from training frontier models to owning the infrastructure, distribution, and developer ecosystem that turns those models into lasting platforms. Altman's rapid series of deals and product launches is assembling a complete AI ecosystem, much like Apple did for smartphones and Microsoft did for PCs, with infrastructure, hardware, and developers at its core.


Hardware

Through its partnership with Broadcom, OpenAI is co-developing custom AI accelerators, optimized for inference and tailored specifically to its own models. Unlike Nvidia and AMD chips, which are designed for broader commercial use, the new silicon is built for vertically integrated systems, tightly coupling compute, memory, and networking into full rack-level infrastructure. OpenAI plans to begin deploying them in late 2026.

The Broadcom deal is similar to what Apple did with its M-series chips: control the semiconductors, control the experience. But OpenAI is going even further, engineering every layer of the hardware stack, not just the chip. The Broadcom systems are built on Broadcom's Ethernet stack and designed to accelerate OpenAI's core workloads, giving the company a physical advantage that's deeply entangled with its software edge.

At the same time, OpenAI is pushing into consumer hardware, a rare move for a model-first company. Its $6.4 billion all-stock acquisition of Jony Ive's startup, io, brought the legendary Apple designer into its inner circle. It was a sign that OpenAI doesn't just want to power AI experiences — it wants to own them. Ive and his team are exploring a new class of AI-native devices designed to reshape how people interact with intelligence, moving beyond screens and keyboards toward more intuitive, engaging experiences. Early concepts reportedly include a screenless, wearable device that uses voice input and subtle haptics, envisioned more as an ambient companion than a traditional gadget.

OpenAI's twin bet on custom silicon and emotionally resonant consumer hardware gives it two more powerful layers of the stack under its direct control.


Blockbuster deals

OpenAI's chips, data centers and power fold into one coordinated campaign called Stargate, which provides the physical backbone of its AI ambitions. In the past three weeks, that campaign has gone into overdrive with several major deals:

OpenAI and Nvidia have agreed to a framework for deploying 10 gigawatts of Nvidia systems, backed by a proposed $100 billion investment.

AMD will supply OpenAI with multiple generations of its Instinct GPUs under a 6-gigawatt deal. OpenAI can acquire up to 10% of AMD if certain deployment milestones are met.

Broadcom's custom inference chips and racks are slated to begin deployment in late 2026, as part of Stargate's first 10‑gigawatt phase.

Taken together, it is OpenAI's push to root the future of AI in infrastructure it can call its own.

"We are able to think from etching the transistors all the way up to the token that comes out when you ask ChatGPT a question, and design the whole system," Altman said. "We can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models — all of that."

Whether or not OpenAI can deliver on every promise, the scale and speed of Stargate are already reshaping the market, adding hundreds of billions of dollars in market cap for its partners and establishing OpenAI as the de facto market leader in AI infrastructure. None of its rivals appears able to match the pace or ambition. And that perception alone is proving a powerful advantage.
