
Nvidia Will Spend $26 Billion to Build Open-Weight AI Models, Filings Show


Nvidia will spend $26 billion over the next five years to build open source artificial intelligence models, according to a 2025 financial filing. Executives confirmed the news, which has not been previously reported, in interviews with WIRED.

The sizable investment could see Nvidia evolve from a chipmaker with an impressive software stack into a bona fide frontier lab capable of competing with OpenAI and DeepSeek. It’s a strategic move that could further entrench Nvidia’s place as the AI world’s leading chip manufacturer, since the models are tuned to the company’s hardware.

Open source models are those whose weights, the parameters that determine a model’s behavior, are released publicly, sometimes along with details of the architecture and training. This allows anyone to download a model and run it on their own machine or in the cloud. In Nvidia’s case, the company also reveals the technical innovations involved in building and training its models, making it easier for startups and researchers to modify and build upon its work.

On Wednesday, Nvidia also released Nemotron 3 Super, its most capable open-weight AI model to date. The new model has 128 billion parameters (a measure of the model’s size and complexity), making it roughly equivalent to the largest version of OpenAI’s GPT-OSS, though the company claims it outperforms GPT-OSS and other models across several benchmarks.

Specifically, Nvidia claims Nemotron 3 Super received a score of 37 on the Artificial Intelligence Index, which scores models across 10 different benchmarks. GPT-OSS scored 33—but several Chinese models scored higher. Nvidia says Nemotron 3 Super was secretly tested on PinchBench, a new benchmark that assesses a model’s ability to control OpenClaw, and ranks number one on that test.

Nvidia also introduced a number of technical tricks that it used to train Nemotron 3. These include architectural and training techniques that improve the model’s reasoning abilities, long-context handling, and responsiveness to reinforcement learning.

“Nvidia is taking open model development much more seriously,” says Bryan Catanzaro, VP of applied deep learning research at Nvidia. “And we are making a lot of progress.”

Open Frontier

Meta was the first big AI company to release an open model, Llama, in 2023. CEO Mark Zuckerberg recently rebooted the company’s AI efforts, however, and signaled that it might not make future models fully open. OpenAI offers an open-weight model, called GPT-OSS, but it is inferior to the company’s best proprietary offerings and not well suited to modification.

The best US models, from OpenAI, Anthropic, and Google, can be accessed only through the cloud or via a chat interface. By contrast, the weights for many top Chinese models, from DeepSeek, Alibaba, Moonshot AI, Z.ai, and MiniMax, are released openly and for free. As a result, many startups and researchers around the world are currently building on top of Chinese models.
