
Anthropic expands partnership with Google and Broadcom for next-gen compute

Why This Matters

Anthropic's expanded partnership with Google and Broadcom to develop next-generation TPU infrastructure marks a major step in scaling AI capability to meet soaring customer demand. The investment improves AI training and deployment efficiency while also reflecting a strategic focus on strengthening American computing infrastructure, benefiting both the tech industry and consumers through more powerful and resilient AI services.

Key Takeaways

We have signed a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity that we expect to come online starting in 2027. This significant expansion of our compute infrastructure will power our frontier Claude models and help us serve extraordinary demand from customers worldwide.

“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,” said Krishna Rao, CFO of Anthropic. “We are making our most significant compute commitment to date to keep pace with our unprecedented growth.”

Demand from Claude customers has accelerated in 2026. Our run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025. When we announced our Series G fundraising in February, we shared that more than 500 business customers were each spending over $1 million with us on an annualized basis. Today that number exceeds 1,000, having doubled in less than two months.

The vast majority of the new compute will be sited in the United States, making this partnership a major expansion of our November 2025 commitment to invest $50 billion in strengthening American computing infrastructure.

The partnership deepens our existing work with Google Cloud—building on the increased TPU capacity we announced last October—as well as our relationship with Broadcom.

We train and run Claude on a range of AI hardware—AWS Trainium, Google TPUs, and NVIDIA GPUs—which means we can match workloads to the chips best suited for them. This diversity of platforms translates to better performance and greater resilience for customers who depend on Claude for critical work. Amazon remains our primary cloud provider and training partner, and we continue to work closely with AWS on Project Rainier. Claude remains the only frontier AI model available to customers on all three of the world's largest cloud platforms: Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry).