
Google's decade-long bet on custom chips is turning into the company's secret weapon in the AI race




Nvidia has established itself as the undisputed leader in artificial intelligence chips, selling large quantities of silicon to most of the world's biggest tech companies en route to a $4.5 trillion market cap. One of Nvidia's key clients is Google, which has been loading up on the chipmaker's graphics processing units, or GPUs, to try to keep pace with soaring demand for AI compute power in the cloud.

While there's no sign that Google will slow its purchases of Nvidia GPUs, the internet giant is increasingly showing that it's not just a buyer of high-powered silicon. It's also a developer. On Thursday, Google announced that its most powerful chip yet, called Ironwood, will be made widely available in the coming weeks. It's the seventh generation of Google's Tensor Processing Unit, or TPU, the company's custom silicon that's been in the works for more than a decade.

TPUs are application-specific integrated circuits, or ASICs, which play a crucial role in AI by providing highly specialized and efficient hardware for particular tasks. Google says Ironwood is designed to handle the heaviest AI workloads, from training large models to powering real-time chatbots and AI agents, and is more than four times faster than its predecessor. AI startup Anthropic plans to use up to 1 million of them to run its Claude model.

For Google, TPUs offer a competitive edge at a time when all the hyperscalers are rushing to build mammoth data centers and AI processors can't be manufactured fast enough to meet demand. Other cloud companies are taking a similar approach but are well behind in their efforts. Amazon Web Services made its first cloud AI chip, Inferentia, available to customers in 2019, followed by Trainium three years later. Microsoft didn't announce its first custom AI chip, Maia, until the end of 2023.
"Of the ASIC players, Google's the only one that's really deployed this stuff in huge volumes," said Stacy Rasgon, an analyst covering semiconductors at Bernstein. "For other big players, it takes a long time and a lot of effort and a lot of money. They're the furthest along among the other hyperscalers." Google didn't provide a comment for this story.


Originally built for internal workloads, Google's TPUs have been available to cloud customers since 2018. Of late, Nvidia has shown some level of concern. When OpenAI signed its first cloud contract with Google earlier this year, the announcement spurred Nvidia CEO Jensen Huang to initiate further talks with the AI startup and its CEO, Sam Altman, according to reporting by The Wall Street Journal.

Unlike Nvidia, Google isn't selling its chips as hardware, but rather providing access to TPUs as a service through its cloud, which has emerged as one of the company's big growth drivers. In its third-quarter earnings report last week, Google parent Alphabet said cloud revenue increased 34% from a year earlier to $15.15 billion, beating analyst estimates. The company ended the quarter with a business backlog of $155 billion.

"We are seeing substantial demand for our AI infrastructure products, including TPU-based and GPU-based solutions," CEO Sundar Pichai said on the earnings call. "It is one of the key drivers of our growth over the past year, and I think on a going-forward basis, I think we continue to see very strong demand, and we are investing to meet that."

Google doesn't break out the size of its TPU business within its cloud segment. Analysts at D.A. Davidson estimated in September that a "standalone" business consisting of TPUs and Google's DeepMind AI division could be valued at about $900 billion, up from an estimate of $717 billion in January. Alphabet's current market cap is more than $3.4 trillion.

'Tightly targeted' chips

Customization is a major differentiator for Google. One critical advantage, analysts say, is the efficiency TPUs offer customers relative to competing products and services. "They're really making chips that are very tightly targeted for their workloads that they expect to have," said James Sanders, an analyst at Tech Insights.

Rasgon said that efficiency is going to become increasingly important because, with all the infrastructure that's being built, the "likely bottleneck probably isn't chip supply, it's probably power."

On Tuesday, Google announced Project Suncatcher, which explores "how an interconnected network of solar-powered satellites, equipped with our Tensor Processing Unit (TPU) AI chips, could harness the full power of the Sun." As part of the project, Google said it plans to launch two prototype solar-powered satellites carrying TPUs by early 2027. "This approach would have tremendous potential for scale, and also minimizes impact on terrestrial resources," the company said in the announcement. "That will test our hardware in orbit, laying the groundwork for a future era of massively-scaled computation in space."

Dario Amodei, co-founder and chief executive officer of Anthropic, at the World Economic Forum in 2025. Stefan Wermuth | Bloomberg | Getty Images

Google's largest TPU deal on record landed late last month, when the company announced a massive expansion of its agreement with OpenAI rival Anthropic valued in the tens of billions of dollars. With the partnership, Google is expected to bring well over a gigawatt of AI compute capacity online in 2026.

"Anthropic's choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years," Google Cloud CEO Thomas Kurian said at the time of the announcement. Google has invested $3 billion in Anthropic. And while Amazon remains Anthropic's most deeply embedded cloud partner, Google is now providing the core infrastructure to support the next generation of Claude models.

"There is such demand for our models that I think the only way we would have been able to serve as much as we've been able to this year is this multi-chip strategy," Anthropic Chief Product Officer Mike Krieger told CNBC. That strategy spans TPUs, Amazon Trainium and Nvidia GPUs, allowing the company to optimize for cost, performance and redundancy.

Krieger said Anthropic did a lot of up-front work to make sure its models can run equally well across the silicon providers. "I've seen that investment pay off now that we're able to come online with these massive data centers and meet customers where they are," Krieger said.

Hefty spending is coming
