NVIDIA Mellanox 920-9B210-00FN-0D0 InfiniBand Switch Launched: Redefining HPC and AI Fabric

April 15, 2026


As AI clusters and high-performance computing (HPC) environments scale to unprecedented levels, the demand for ultra-low latency, lossless, and high-throughput networking has never been greater. Addressing these exact challenges, NVIDIA has introduced the Mellanox 920-9B210-00FN-0D0 InfiniBand switch — a new benchmark in NDR architecture. This product release covers everything network architects and IT managers need to know, from why this switch matters to its core specifications and deployment advantages.

Why the 920-9B210-00FN-0D0? Solving the Bandwidth Bottleneck

Traditional Ethernet fabrics often struggle with tail latency and packet loss, especially under the incast traffic patterns common in distributed training and storage workloads. The NVIDIA Mellanox 920-9B210-00FN-0D0 avoids these issues with a credit-based, lossless InfiniBand fabric, purpose-built for environments where every microsecond counts. By combining NVIDIA SHARP in-network aggregation with adaptive routing and congestion control, the switch delivers deterministic performance even at scale.

Key Features: NDR Speed, Compatibility, and OPN Flexibility
  • Ultra-High Throughput: The 920-9B210-00FN-0D0 (MQM9790-NS2F) delivers 400Gb/s of NDR bandwidth per port, doubling previous-generation HDR (200Gb/s) solutions. This makes it ideal for GPU clusters and computational storage.
  • Part Number Clarity: For procurement and integration, the 920-9B210-00FN-0D0 InfiniBand switch OPN is the official ordering part number, simplifying quotes and BOM management across global supply chains.
  • Complete Technical Resources: Engineers can refer to the 920-9B210-00FN-0D0 datasheet and 920-9B210-00FN-0D0 specifications for detailed thermal, power, and performance metrics before deployment.
  • Cost & Availability: Early market indications suggest the 920-9B210-00FN-0D0 price remains competitive for NDR-class switches, and units are already listed for sale through authorized distributors.
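The per-port figures above translate directly into fabric-level capacity. A back-of-envelope sketch, assuming the 64-port NDR configuration typical of Quantum-2-class switches (the port count is an assumption, not stated above; the official datasheet remains authoritative):

```python
# Back-of-envelope aggregate bandwidth for an NDR-class switch.
# Assumption: 64 ports at 400 Gb/s each (typical Quantum-2 layout).
PORTS = 64
NDR_GBPS = 400   # per port, one direction
HDR_GBPS = 200   # previous generation, for comparison

aggregate_tbps = PORTS * NDR_GBPS / 1000   # unidirectional, Tb/s
bidirectional_tbps = 2 * aggregate_tbps

print(f"Per-port speedup over HDR: {NDR_GBPS / HDR_GBPS:.0f}x")
print(f"Aggregate (one direction): {aggregate_tbps:.1f} Tb/s")
print(f"Aggregate (bidirectional): {bidirectional_tbps:.1f} Tb/s")
```

Under these assumptions the arithmetic yields a 2x per-port step over HDR and 51.2 Tb/s of bidirectional switching capacity, which is why NDR fabrics can absorb incast bursts that choke older generations.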

Deployment and Compatibility

Interoperability is critical in multi-vendor data centers. The 920-9B210-00FN-0D0 is compatible with NVIDIA ConnectX-7 adapters and BlueField-3 DPUs, as well as third-party InfiniBand components that adhere to the NDR standard. It also integrates with NVIDIA's Unified Fabric Manager (UFM), enabling fabric-wide telemetry and automated failover. IT managers will appreciate the reduced operational overhead compared to legacy switching platforms.
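Once the fabric is cabled, standard InfiniBand diagnostics such as `ibstat` report the state and rate of each link. A minimal sketch that parses `ibstat`-style output to confirm a port has come up at NDR speed; the sample text is abbreviated and illustrative, not captured from real hardware:

```python
import re

def parse_ibstat(text):
    """Extract (state, rate) per CA port from `ibstat`-style output."""
    results = {}
    ca = port = state = None
    for line in text.splitlines():
        stripped = line.strip()
        m = re.match(r"CA '([^']+)'", stripped)
        if m:
            ca = m.group(1)
            continue
        m = re.match(r"Port (\d+):", stripped)
        if m:
            port = int(m.group(1))
            state = None
            continue
        if stripped.startswith("State:"):
            state = stripped.split(":", 1)[1].strip()
        elif stripped.startswith("Rate:") and ca is not None and port is not None:
            rate = int(stripped.split(":", 1)[1].strip())
            results[f"{ca}/port{port}"] = (state, rate)
    return results

# Abbreviated, illustrative ibstat-style output for a ConnectX-7 NDR link.
SAMPLE = """\
CA 'mlx5_0'
    Number of ports: 1
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 400
"""

ports = parse_ibstat(SAMPLE)
for name, (state, rate) in ports.items():
    ok = state == "Active" and rate == 400
    print(f"{name}: {state} at {rate} Gb/s -> {'NDR OK' if ok else 'CHECK LINK'}")
```

In practice the same check is usually run across every host after bring-up; a link negotiating below 400 Gb/s typically points at a cable or transceiver issue rather than the switch itself.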

Technical Specifications at a Glance
Parameter          Detail
Model              NVIDIA Mellanox 920-9B210-00FN-0D0
Data Rate          400Gb/s NDR (per port)
Base Part Number   MQM9790-NS2F
OPN                920-9B210-00FN-0D0

For architects evaluating next-gen fabrics, the 920-9B210-00FN-0D0 offers a clear migration path from HDR to NDR without forklift upgrades. The complete set of 920-9B210-00FN-0D0 specifications also confirms backward compatibility with previous InfiniBand speeds, protecting existing investments. Whether you are building a new AI supercomputer or modernizing an HPC data center, this switch delivers the performance and reliability that only NVIDIA Mellanox can provide.