Jensen Huang Says Nvidia’s New Vera Rubin Chips Are in ‘Full Production’
Nvidia CEO Jensen Huang says the company’s next-generation AI superchip platform, Vera Rubin, is scheduled to start reaching customers later this year. “Today, I can tell you that the Vera Rubin is in full production,” Huang said during a press event Monday at CES, the annual technology trade show in Las Vegas.
Rubin will reduce the cost of running AI models to about one-tenth of the cost on Nvidia’s current flagship chip system, Blackwell, the company told analysts and reporters during a call Sunday. Nvidia also said that Rubin can train some large models using roughly a quarter of the number of chips that Blackwell requires. Combined, these gains could make advanced AI systems much cheaper to run and make it harder for Nvidia customers to justify turning away from its hardware.
Nvidia said on the call that two of its existing partners, Microsoft and CoreWeave, will be among the first companies to begin offering services powered by Rubin chips later this year. Nvidia added that two major AI data centers that Microsoft is currently building in Georgia and Wisconsin will eventually include thousands of Rubin chips. Some Nvidia partners have already begun running next-generation AI models on early Rubin systems, the company said.
The semiconductor giant also said it is working with Red Hat, which makes open source enterprise software for banks, automakers, airlines and government agencies, to introduce more products that will run on the new Rubin chip system.
Nvidia’s latest chip platform is named after Vera Rubin, the American astronomer who reshaped how scientists understand the properties of galaxies. The system includes six different chipsets, including the Rubin GPU and the Vera CPU, both built on TSMC’s 3-nanometer process and paired with the most advanced high-bandwidth memory technology available. Nvidia’s sixth-generation interconnect and switching technologies link the chipsets together.
Every part of this chip system is “absolutely revolutionary and the best of its kind,” Huang announced during the company’s CES press conference.
Nvidia has been developing the Rubin system for years, and Huang first announced that the chips would be coming during a keynote in 2024. Last year, the company said that systems built on Rubin would start arriving in the second half of 2026.
It’s unclear exactly what Nvidia means by saying Vera Rubin is in “full production.” Typically, production of advanced chips of this level — which Nvidia is building with its longtime partner TSMC — starts out in low quantities while the chips undergo testing and validation and ramps up at a later stage.
“The CES announcement about Rubin is meant to tell investors we’re on the right track,” says Austin Lyons, an analyst at Creative Strategies and author of the semiconductor industry newsletter Chipstrat. There were rumors on Wall Street that the Rubin GPU was behind schedule, Lyons says, so Nvidia is now pushing back by saying it has completed key development and testing steps and is confident that Rubin is still on track to begin ramping up production in the second half of 2026.
In 2024, Nvidia had to delay delivery of its then-new Blackwell chips due to a design flaw that caused them to overheat when connected together in server racks. Blackwell shipments were back on schedule by mid-2025.
With the rapid expansion of the AI industry, software companies and cloud service providers have had to compete fiercely for access to Nvidia’s latest GPUs. Demand for Rubin will likely be quite high. But some companies are also hedging their bets by investing in their own custom chip designs. For example, OpenAI said it is working with Broadcom to build custom silicon for next-generation AI models. These partnerships highlight the long-term risk facing Nvidia: customers who design their own chips could gain a level of control over their hardware that the company does not provide.
But Lyons says today’s announcements show how Nvidia is evolving beyond just offering GPUs to become a “full AI ecosystem architect, spanning compute, networking, memory hierarchy, storage, and software orchestration.” Even as hyperscalers pour money into custom silicon, Nvidia’s tightly integrated platform is “becoming more difficult to replace,” he adds.
2026-01-05 23:05:00



