
Is xAI a neocloud now?

Why This Matters

The partnership between xAI and Anthropic marks a strategic shift: xAI is monetizing its data center capacity, transforming from a pure AI developer into a major compute provider. The move underscores Elon Musk's focus on building infrastructure and data centers, and could reshape how the AI industry approaches resource management and commercialization. It also signals a broader industry trend toward treating data center development as a core business activity in its own right.

Key Takeaways

On Wednesday, xAI and Anthropic announced a surprise partnership that has the Claude-maker buying out “all of the compute capacity at [xAI’s] Colossus 1 data center,” roughly 300MW that allowed Anthropic to immediately raise its usage limits. It’s a huge deal for xAI, likely worth billions of dollars. More importantly, it immediately monetized one of the company’s most impressive accomplishments, turning xAI from a consumer to a provider of compute.

It’s tempting to see the arrangement as a shot at OpenAI amid the ongoing lawsuit. But Musk’s explanation on X was that xAI had already moved training to a newer data center, Colossus 2, and simply didn’t need both.

In the short term, there’s an obvious logic at work. xAI’s existing products are mostly focused on Grok, which has seen plummeting usage since the image generation debacles earlier this year. If xAI’s data center buildout is that much more than what Grok needs to operate, partnering with Anthropic adds a lot of green to the balance sheet. This is especially useful as the company, now combined with SpaceX, speeds towards an IPO. More broadly, having Anthropic lined up as a customer makes it easier to believe that SpaceX’s orbital data center play might actually work.

But beyond the short-term benefit, the Anthropic partnership sends an unusual message about where Elon Musk’s priorities really lie. It suggests the company’s real business may be more about building data centers than training AI models.

It’s rare to see a major tech company treat compute resources this way. Companies like Google and Meta, which are also training models, are building more data centers too. It’s an easy point to miss, because so many of these companies are working as enterprise AI vendors, online services, and cloud providers all at once. But when forced to choose between selling spare compute to customers and reserving it to build their own tools, they reliably reserve it for themselves.

Just last month, Sundar Pichai admitted on a call that Google Cloud revenue was lower than it could have been because the company was “capacity constrained” — and when given the choice of renting out their GPUs or using them to develop AI products, Google chose the AI products.

Facebook has faced a more extreme version of the same constraint, spinning up an entirely new cloud apparatus just to ensure they would have enough GPU power to chase Zuckerberg’s AI ambition. As he put it when announcing Meta Compute in January, “How we engineer, invest, and partner to build this infrastructure will become a strategic advantage.”

