NVIDIA Helps Accelerate the Next Industrial Revolution for Data Centers, Contributes to the Blackwell Platform
To drive the development of open, efficient, and scalable data center technologies, NVIDIA announced that it has contributed foundational elements of its NVIDIA Blackwell accelerated computing platform design to the Open Compute Project (OCP) and broadened NVIDIA Spectrum-X support for OCP standards.

At this year’s OCP Global Summit, NVIDIA will be sharing key portions of the NVIDIA GB200 NVL72 system electro-mechanical design with the OCP community—including the rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, and NVIDIA NVLink cable cartridge volumetrics—to support higher compute density and networking bandwidth.

NVIDIA has already made several official contributions to OCP across multiple hardware generations, including its NVIDIA HGX H100 baseboard design specification, to help provide the ecosystem with a wider choice of offerings from the world’s computer makers and expand the adoption of AI.

In addition, expanded NVIDIA Spectrum-X Ethernet networking platform alignment with OCP Community-developed specifications enables companies to unlock the performance potential of AI factories deploying OCP-recognized equipment while preserving their investments and maintaining software consistency.

“Building on a decade of collaboration with OCP, NVIDIA is working alongside industry leaders to shape specifications and designs that can be widely adopted across the entire data center,” said Jensen Huang, founder and CEO of NVIDIA. “By advancing open standards, we’re helping organizations worldwide take advantage of the full potential of accelerated computing and create the AI factories of the future.”

NVIDIA’s accelerated computing platform was designed to power a new era of AI, the company said.

GB200 NVL72 is based on the NVIDIA MGX modular architecture, which enables computer makers to quickly and cost-effectively build a vast array of data center infrastructure designs.

The liquid-cooled system connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs in a rack-scale design. With a 72-GPU NVIDIA NVLink domain, it acts as a single, massive GPU and delivers 30x faster real-time inference on trillion-parameter large language models than the NVIDIA H100 Tensor Core GPU.

The NVIDIA Spectrum-X Ethernet networking platform, which now includes the next-generation NVIDIA ConnectX-8 SuperNIC, supports OCP’s Switch Abstraction Interface (SAI) and Software for Open Networking in the Cloud (SONiC) standards. This allows customers to use Spectrum-X’s adaptive routing and telemetry-based congestion control to accelerate Ethernet performance for scale-out AI infrastructure.

As the world transitions from general-purpose to accelerated and AI computing, data center infrastructure is becoming increasingly complex, according to NVIDIA. To simplify the development process, NVIDIA is working closely with more than 40 global electronics makers that supply key components for building AI factories.

Additionally, a broad array of partners is innovating and building on top of the Blackwell platform, including Meta, which plans to contribute its Catalina AI rack architecture, based on GB200 NVL72, to OCP. This contribution gives computer makers flexible options for building high-compute-density systems that meet the growing performance and energy-efficiency needs of data centers.

For more information about this news, visit www.nvidia.com.

