Meta and Oracle choose NVIDIA Spectrum-X for AI data centers

Meta and Oracle are upgrading their AI data centers with NVIDIA’s Spectrum-X Ethernet network switches — technology designed to handle the increasing demands of large-scale AI systems. Both companies are adopting Spectrum-X as part of an open networking framework intended to improve AI training efficiency and accelerate deployment across massive compute clusters.
Trillion-parameter models turn data centers into “giga-scale AI factories,” said Jensen Huang, founder and CEO of NVIDIA, adding that Spectrum-X acts as the “nervous system” that connects millions of GPUs to train the largest models ever built.
Oracle plans to use Spectrum-X Ethernet with its Vera Rubin architecture to build large-scale AI factories. The new setup will allow the company to connect millions of GPUs more efficiently, helping customers train and deploy new AI models faster, said Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure.
Meanwhile, Meta is expanding its AI infrastructure by integrating Spectrum-X Ethernet switches into the Facebook Open Switching System (FBOSS), its internal platform for managing network switches at scale. According to Jaya Nagarajan, Meta’s vice president of network engineering, the company’s next-generation network must be open and efficient to support larger AI models and provide services to billions of users.
Building flexible artificial intelligence systems
According to Joe DeLaere, who leads NVIDIA’s Accelerated Data Center Compute Solutions group, flexibility is key as data centers become more complex. He explained that NVIDIA’s MGX system offers a modular, building-block design that allows partners to combine different central processing units (CPUs), graphics processing units (GPUs), storage and networking components as needed.
The system also promotes interoperability, allowing organizations to use the same design across multiple generations of devices. “It provides flexibility, faster time to market, and future readiness,” DeLaere said.
As AI models grow in size, energy efficiency is becoming a major challenge for data centers. NVIDIA works “from chip to network” to optimize power usage and scalability, collaborating closely with power and cooling vendors to maximize performance per watt, DeLaere said.
One example is the shift to 800-volt DC power delivery, which reduces heat loss and improves efficiency. The company also offers power smoothing technology to reduce surges on the electrical grid, an approach that can cut peak power needs by up to 30 percent and allow more computing capacity within the same footprint.
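To illustrate the arithmetic behind that claim, the sketch below works through how a fixed facility power budget translates into rack capacity with and without surge smoothing. Only the "up to 30 percent" peak reduction comes from the article; the power budget and per-rack figures are assumptions chosen for the example.

```python
# Hypothetical illustration of power smoothing's effect on capacity.
# Only the 30% peak reduction is from the article; other numbers are assumed.
SITE_POWER_BUDGET_MW = 100.0   # fixed facility power envelope (assumed)
PEAK_PER_RACK_KW = 120.0       # unsmoothed worst-case rack draw (assumed)
PEAK_REDUCTION = 0.30          # surge cut attributed to power smoothing

# Capacity is limited by worst-case draw, so smoothing the peaks
# lets more racks fit under the same budget.
racks_without = SITE_POWER_BUDGET_MW * 1000 / PEAK_PER_RACK_KW
smoothed_peak_kw = PEAK_PER_RACK_KW * (1 - PEAK_REDUCTION)
racks_with = SITE_POWER_BUDGET_MW * 1000 / smoothed_peak_kw

print(f"racks without smoothing: {racks_without:.0f}")
print(f"racks with smoothing:    {racks_with:.0f}")
print(f"extra capacity:          {racks_with / racks_without - 1:.0%}")
```

Under these assumed numbers, a 30 percent cut in peak draw yields roughly 43 percent more rack capacity in the same footprint, since capacity scales with 1/(1 − 0.30).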
Scale up, out, and across
NVIDIA’s MGX platform also plays a role in how data centers scale. Gilad Shainer, senior vice president of networking at the company, told the media that MGX racks host compute and switching components, supporting NVLink for scale-up connectivity within a rack and Spectrum-X Ethernet for scale-out growth across racks.
He added that MGX can connect multiple AI data centers together as a unified system – which is what companies like Meta need to support massive distributed AI training operations. Depending on the distance, they can connect sites through dark fiber or additional MGX-based switches, enabling high-speed communications across regions.
Meta’s adoption of Spectrum-X reflects the growing importance of open networks. Shainer said the company will use FBOSS as its network operating system, but noted that Spectrum-X supports several other systems, including Cumulus, SONiC and Cisco’s NOS through partnerships. This flexibility allows hyperscalers and enterprises to unify their infrastructure with the systems that best fit their environments.
Expanding the AI ecosystem
NVIDIA sees Spectrum-X as a way to make AI infrastructure more efficient and accessible at every scale. The Ethernet platform is specifically designed for AI workloads such as training and inference, providing up to 95 percent effective bandwidth and outperforming traditional Ethernet by a wide margin, Shainer said.
He added that NVIDIA’s partnerships with companies like Cisco, xAI, Meta, and Oracle Cloud Infrastructure are helping bring Spectrum-X to a broader range of environments — from hyperscalers to enterprises.
Preparing for Vera Rubin and beyond
NVIDIA’s upcoming Vera Rubin architecture is expected to be commercially available in the second half of 2026, with the Rubin CPX product arriving by the end of the year, DeLaere said. Both will work alongside Spectrum-X networks and MGX systems to support the next generation of AI factories.
He also explained that Spectrum-X and XGS share the same underlying hardware but use different algorithms tuned for different distances: Spectrum-X for traffic within a data center and XGS for communication between data centers. This approach minimizes latency and allows multiple sites to work together as one large AI supercomputer.
Collaboration across the energy chain
To support 800V DC transmission, NVIDIA is working with partners from the chip level to the grid. The company collaborates with Onsemi and Infineon on power components, with Delta, Flex and Lite-On on rack level, and with Schneider Electric and Siemens on data center designs. A technical white paper detailing this approach will be released at the OCP Summit.
DeLaere described this as a “holistic design from silicon to power delivery,” ensuring all systems work together seamlessly in the high-density AI environments that companies like Meta and Oracle operate in.
Performance advantages at scale
Spectrum-X Ethernet is specifically designed for distributed computing and AI workloads. It provides adaptive routing and telemetry-based congestion control to eliminate network hotspots and deliver stable performance, Shainer said. These features enable faster training and inference while allowing multiple workloads to run simultaneously without interference.
He added that Spectrum-X is the only Ethernet technology proven to scale at extreme levels, helping organizations get the best performance and return on their GPU investments. For hyperscalers like Meta, this scalability helps manage increasing AI training requirements and maintain infrastructure efficiency.
Hardware and software work together
While NVIDIA’s focus is mostly on hardware, DeLaere said software optimization is just as important. The company continues to improve performance through co-design — aligning hardware and software development to maximize the efficiency of AI systems.
NVIDIA is investing in FP4 precision, frameworks like Dynamo and TensorRT-LLM, and algorithms like speculative decoding to improve throughput and AI model performance. These updates ensure that systems like Blackwell continue to deliver better results over time for hyperscalers like Meta that rely on consistent AI performance, he said.
Networking for the trillion-parameter era
The Spectrum-X platform – which includes Ethernet switches and SuperNICs – is NVIDIA’s first Ethernet system designed specifically for AI workloads. It’s designed to efficiently connect millions of GPUs while maintaining predictable performance across AI data centers.
With congestion control technology achieving up to 95 percent effective bandwidth, Spectrum-X represents a significant leap over standard Ethernet, which typically reaches only about 60 percent due to flow collisions. Its XGS technology also supports long-distance AI data center links, connecting facilities across regions into unified “AI super factories.”
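The gap between those two utilization figures can be made concrete with a quick sketch. The 95 and 60 percent numbers are from the article; the 800 Gb/s line rate is an assumed example, not a figure NVIDIA quotes here.

```python
# Effective bandwidth comparison. The 95% / 60% utilization figures come
# from the article; the 800 Gb/s line rate is an assumed example.
LINE_RATE_GBPS = 800.0

spectrum_x = LINE_RATE_GBPS * 0.95   # Spectrum-X: ~95% effective bandwidth
standard = LINE_RATE_GBPS * 0.60     # standard Ethernet: ~60% typical

print(f"Spectrum-X effective: {spectrum_x:.0f} Gb/s")
print(f"Standard Ethernet:    {standard:.0f} Gb/s")
print(f"Relative advantage:   {spectrum_x / standard:.2f}x")
```

Because the ratio depends only on the two utilization percentages, the roughly 1.58x advantage in delivered bandwidth holds at any line rate; for communication-bound training steps, that translates directly into shorter time spent waiting on the network.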
By connecting the full NVIDIA stack — GPUs, CPUs, NVLink, and software — Spectrum-X delivers the consistent performance needed to support trillion-parameter models and the next wave of generative AI workloads.
(Photo by Nvidia)
See also: OpenAI and Nvidia plan $100 billion chip deal for the future of AI
2025-10-13 15:00:00