Nvidia plans to invest significantly in building open source AI models, executives confirmed in interviews with WIRED. The news has not been previously announced.
This significant investment will transform Nvidia from a chipmaker with an impressive software stack into a bona fide frontier lab capable of competing with OpenAI and DeepSeek. It's a strategic move that could further strengthen Nvidia's position as the leading maker of artificial intelligence chips, since the models are tuned to the company's hardware.
Open source models are those in which the weights, or parameters, that determine a model's behavior are made publicly available, sometimes along with details of the architecture and training. This means anyone can download a model and run it on their own computer or in the cloud. In Nvidia's case, the company is also revealing technical innovations related to building and training its models, making it easier for startups and researchers to modify and build on the company's work.
On Wednesday, Nvidia also released Nemotron 3 Super, its most powerful open-weight AI model yet. The new model has 128 billion parameters (a measure of a model's size and complexity), making it roughly equivalent to the largest version of OpenAI's GPT-OSS, although the company claims it outperforms GPT-OSS and other models in several tests.
Specifically, Nvidia claims that Nemotron 3 Super achieved a score of 37 on its artificial intelligence index, which evaluates models across 10 different benchmarks. GPT-OSS scored 33, but several Chinese models scored higher. Nvidia says Nemotron 3 Super was also tested privately on PinchBench, a new benchmark that assesses a model's ability to control OpenClaw, and ranks first on that test.
Nvidia also introduced a number of technical tricks it used to train Nemotron 3, including architectural and training techniques that improve the model's reasoning ability, long-context handling, and responsiveness to reinforcement learning.
“Nvidia is taking the development of open models much more seriously,” says Bryan Catanzaro, vice president of applied research for deep learning at Nvidia. “And we are making great progress.”
Open frontier
Meta became the first major AI company to embrace open models when it open-sourced Llama in 2023. CEO Mark Zuckerberg, however, recently rebooted the company's artificial intelligence efforts and has signaled that it may not make future models fully open. OpenAI offers an open-weight model called GPT-OSS, but it is inferior to the company's best proprietary offerings and does not lend itself well to modification.
The top US models from OpenAI, Anthropic, and Google can only be accessed via the cloud or a chat interface. However, the weights for many top Chinese models, from DeepSeek, Alibaba, Moonshot AI, Z.ai, and MiniMax, are made openly available for free. As a result, many startups and researchers around the world now rely on Chinese models.
“It’s in our interest to help grow the ecosystem,” says Catanzaro, who joined Nvidia in 2011 and helped spearhead the company’s shift from making gaming graphics cards to making silicon for artificial intelligence. Nvidia released the first Nemotron model in November 2023. Catanzaro adds that Nvidia recently completed initial training of a 550-billion-parameter model. (Initial training involves feeding huge amounts of data into a model spread across a huge number of specialized chips working in parallel.) Since then, Nvidia has released a number of models specialized for use in areas such as robotics, climate modeling, and protein folding.
Kari Briski, Nvidia’s vice president of generative AI software for enterprises, says the company’s future artificial intelligence models will help it improve not only its chips but also the supercomputer-sized data centers it builds. “We’re building it to stretch our systems and test not only compute, but also storage and networking, and to kind of lay out a roadmap for the hardware architecture,” she says.
