
Meta expands Nvidia deal to use millions of AI chips in data center build-out, including standalone CPUs


Meta will use millions of Nvidia chips in its artificial intelligence data centers, including Nvidia's new standalone CPUs and next-generation Vera Rubin systems, in a sweeping new deal announced Tuesday.

Meta CEO Mark Zuckerberg said in a statement that the expanded partnership continues his company's push "to deliver personal superintelligence to everyone in the world," a vision he announced in July.

Financial terms of the deal were not provided.

Shares of Meta and Nvidia climbed during extended trading on Tuesday. Advanced Micro Devices stock sank about 4% on the news.

In January, Meta announced plans to spend up to $135 billion on AI in 2026. "The deal is certainly in the tens of billions of dollars," said chip analyst Ben Bajarin of Creative Strategies. "We do expect a good portion of Meta's capex to go toward this Nvidia build-out."

The relationship itself is not new: Meta has used Nvidia graphics processing units for at least a decade. The deal, however, marks a significantly broader technology partnership between the two Silicon Valley giants.

The most significant new element of the deal is the standalone CPUs: Meta will become the first company to deploy Nvidia's Grace central processing units as standalone chips in its data centers, rather than paired with GPUs in a server. Nvidia said it is the first large-scale deployment of Grace CPUs on their own.

"They're really designed to run those inference workloads, run those agentic workloads, as a companion to a Grace Blackwell/Vera Rubin rack," Bajarin said. "Meta doing this at scale is affirmation of the soup-to-nuts strategy that Nvidia's putting across both sets of infrastructure: CPU and GPU."

Meta plans to deploy the next-generation Vera CPUs in 2027.