NVIDIA Quantum-2 QM9700-NS2R 64-Port 400G InfiniBand Managed Switch

Product Details:

Brand: Mellanox
Model Number: MQM9700-NS2R (920-9B210-00RN-0M2)
Documentation: MQM9700 series.pdf

Payment & Ordering:

Minimum Order Quantity: 1 unit
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Based on stock availability
Payment Terms: T/T
Supply Ability: Supplied per project/batch
Contact us for the best price

Details

Model Number: MQM9700-NS2R (920-9B210-00RN-0M2)
Transfer Rate: 400G
Ports: 64
Technology: InfiniBand
Maximum Speed: NDR
Shipping Package: Boxed
Highlights:
  • NVIDIA Quantum-2 400G InfiniBand switch
  • 64-port InfiniBand managed switch
  • Mellanox Quantum-2 network switch

Product Description

Industry-leading NDR 400Gb/s per port | 51.2 Tb/s aggregate throughput | SHARPv3 in-network computing | Ultra-low latency for AI & HPC fabrics

The NVIDIA Quantum-2 QM9700-NS2R is a fully managed 1U InfiniBand switch delivering an unprecedented 64 ports of 400Gb/s (NDR) non-blocking bandwidth. Designed for extreme-scale AI, scientific research, and high-performance computing (HPC) clusters, it enables massive scalability with 51.2 Tb/s aggregate bidirectional throughput and over 66.5 billion packets per second. Leveraging SHARPv3, adaptive routing, and RDMA, the QM9700-NS2R accelerates data movement and in-network computing for the most demanding workloads.

Key Facts at a Glance
  • Switch Radix: 64 non-blocking 400G ports (32 OSFP connectors)
  • Throughput: 51.2 Tb/s aggregate bidirectional
  • Packet Rate: >66.5 billion packets per second (BPPS)
  • SHARPv3: 32x AI acceleration improvement vs prior generation
  • Airflow: Connector-to-power (C2P) reverse airflow, ideal for hot-aisle containment
  • Management: On-board subnet manager for up to 2,000 nodes, MLNX-OS, CLI, WebUI, SNMP, JSON API
  • Power & Cooling: 1+1 redundant hot-swap PSUs and hot-swappable fan units; the NS2R model ships with C2P (reverse) airflow
Product Overview

Built on the NVIDIA Quantum-2 platform, the QM9700 series redefines data center switching density and efficiency. The QM9700-NS2R (managed, C2P airflow) integrates 64 ports of 400Gb/s InfiniBand in a compact 1U chassis. It supports port-split technology to deliver up to 128 ports of 200Gb/s, offering flexible topologies like Fat Tree, DragonFly+, SlimFly, and multi-dimensional Torus. Backward compatibility with previous InfiniBand generations ensures smooth integration into existing infrastructure. With advanced telemetry, congestion control, and self-healing network capabilities, this switch maximizes application throughput while simplifying operations.
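To illustrate how the double-density radix reduces network layers, here is a quick sizing sketch for a non-blocking two-tier fat tree built from 64-port switches. This is illustrative back-of-the-envelope math, not an official NVIDIA sizing tool:

```python
def two_tier_fat_tree_nodes(radix: int) -> int:
    """Maximum end nodes in a non-blocking two-tier fat tree (folded Clos).

    Each leaf switch dedicates half its ports to hosts and half to spine
    uplinks; with radix-k switches that yields k leaves x k/2 hosts = k^2/2.
    """
    hosts_per_leaf = radix // 2
    leaf_count = radix
    return hosts_per_leaf * leaf_count

print(two_tier_fat_tree_nodes(64))   # native 64-port NDR radix -> 2048
print(two_tier_fat_tree_nodes(128))  # with 2x200G port splitting -> 8192
```

Doubling the effective radix via port splitting quadruples the two-tier node count, which is how fewer switch tiers (and less cabling) cover the same cluster.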

Key Features & Capabilities
High Density
64 x 400Gb/s Ports

32 OSFP cages supporting 400G NDR, or 128 x 200G via splitter cables, make this the densest top-of-rack InfiniBand switch in a 1U form factor.
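The split arithmetic is easy to sanity-check: total fabric capacity is identical whether the 32 OSFP cages run as 64 x 400G or 128 x 200G. A minimal helper (an illustrative sketch, not an NVIDIA tool):

```python
def aggregate_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate bidirectional switch throughput in Tb/s (ports x speed x 2)."""
    return ports * gbps_per_port * 2 / 1000

# Native NDR and the 2x200G split configuration yield the same 51.2 Tb/s
assert abs(aggregate_tbps(64, 400) - 51.2) < 1e-9
assert abs(aggregate_tbps(128, 200) - 51.2) < 1e-9
```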

In-Network Computing
SHARPv3 Technology

Third-generation NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol accelerates AI collective operations by up to 32X, reducing data movement and latency.
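To see why in-network reduction helps at scale, compare communication step counts under a simplified model. This is a toy calculation, not NCCL's actual scheduling or SHARP's exact protocol:

```python
def ring_allreduce_steps(nodes: int) -> int:
    """Sequential communication steps in a classic ring allreduce:
    (N-1) reduce-scatter steps plus (N-1) allgather steps."""
    return 2 * (nodes - 1)

# With in-network aggregation (as in SHARP), each host performs one send
# and receives the reduced result, independent of cluster size -- the
# switches aggregate data in flight.
print(ring_allreduce_steps(1024))  # -> 2046 steps without in-network compute
```

The host-side step count grows linearly with cluster size for ring collectives, while switch-side aggregation keeps per-host work roughly constant.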

Ultra-Low Latency
RDMA + Adaptive Routing

Remote Direct Memory Access (RDMA) and adaptive routing eliminate bottlenecks, while enhanced virtual lane mapping and congestion control maintain consistent performance.

Reliability
Self-Healing Fabric

Automatic failover, link-level retransmission, and advanced monitoring capabilities ensure non-disruptive operations for mission-critical workloads.

Management
Integrated Subnet Manager

On-board subnet manager supports up to 2000 nodes out-of-the-box; full chassis management via CLI, WebUI, SNMP, and JSON/REST API.

Flexible Topologies
Fat Tree to DragonFly+

Optimized for scalable, cost-effective cluster designs; double-density radix reduces network layers and lowers total cost of ownership.

Advanced Networking Technology

The Quantum-2 platform integrates NVIDIA’s latest 400G SerDes technology, delivering 51.2 Tb/s of switching capacity. Key innovations include SHARPv3 in-network computing, which offloads collective operations from compute nodes, drastically improving AI training efficiency. The switch leverages Remote Direct Memory Access (RDMA) to bypass kernel overhead, achieving microsecond-scale latency. Adaptive routing dynamically distributes traffic across multiple paths to avoid hotspots, while enhanced quality of service (QoS) and virtual lane (VL) mapping guarantee bandwidth for critical applications. The QM9700-NS2R also incorporates advanced telemetry for real-time fabric monitoring and self-healing capabilities that automatically re-route traffic around link failures.
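Adaptive routing can be pictured with a toy policy: hash each flow onto a preferred path, then divert it when that path is congested. This is only an illustrative sketch with an assumed congestion threshold; Quantum-2 implements adaptive routing in switch hardware using real-time congestion telemetry:

```python
import hashlib

def pick_path(flow_id: str, path_loads: list) -> int:
    """Toy adaptive-routing policy: hash the flow to a preferred path,
    falling back to the least-loaded path when the preferred one is hot."""
    h = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    preferred = h % len(path_loads)
    if path_loads[preferred] < 0.8:  # assumed congestion threshold
        return preferred
    return min(range(len(path_loads)), key=lambda i: path_loads[i])

loads = [0.95, 0.10, 0.40, 0.95]
# Whatever the hash prefers, the chosen path is never a congested one
assert loads[pick_path("gpu0->gpu7", loads)] < 0.8
assert pick_path("anything", [0.9, 0.9, 0.9, 0.1]) == 3
```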

Typical Deployments
  • AI & Machine Learning Clusters: High-radix NDR switches enable massive GPU supercomputing pods with SHARPv3 accelerating NCCL collectives.
  • HPC Research Centers: SlimFly or Fat Tree topologies connect thousands of nodes with ultra-low latency for weather simulation, genomics, and physics.
  • Enterprise Data Centers: Consolidate East-West traffic with 400G spine-and-leaf architecture, reducing tier count and operational costs.
  • Cloud & Hyperscale: DragonFly+ and multi-dimensional torus for maximum scalability with high bandwidth density per rack unit.
  • Storage & IO Expansion: Connect high-performance storage systems using NVMe over Fabrics (NVMe-oF) via InfiniBand.
Compatibility & Ecosystem

The QM9700-NS2R is fully interoperable with NVIDIA InfiniBand adapters (ConnectX-6, ConnectX-7, ConnectX-8), cables (active/passive copper, active fiber, optical modules), and previous FDR/EDR/HDR generations. It runs MLNX-OS with extensive API support for automation frameworks. Compatible with NVIDIA Unified Fabric Manager (UFM) for advanced monitoring, predictive analytics, and telemetry. The switch integrates seamlessly with major HPC schedulers and open network automation tools.

Compatibility Matrix
  • Host Channel Adapters: NVIDIA ConnectX-6 / ConnectX-7 / ConnectX-8 InfiniBand, NDR 400G HCAs
  • Cables & Transceivers: OSFP passive copper (up to 2.5 m), active copper, active fiber (up to 500 m), optical modules (QSFP-DD to OSFP adapters for 200G split)
  • Previous InfiniBand Speeds: HDR (200 Gb/s), EDR (100 Gb/s), FDR (56 Gb/s), backward compatible
  • Management & Automation: MLNX-OS, UFM, Prometheus/Grafana via SNMP, JSON-RPC, Ansible modules
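For cable planning, the reach figures listed for passive copper and active fiber translate into a simple selection rule. This is a hypothetical helper using only the distances stated above; active copper reach is not listed here, so it is intentionally omitted:

```python
def pick_400g_cable(distance_m: float) -> str:
    """Suggest an OSFP cable class for a 400G link based on run length,
    using the reach figures from the compatibility matrix:
    passive copper up to 2.5 m, active fiber up to 500 m."""
    if distance_m <= 2.5:
        return "OSFP passive copper DAC"
    if distance_m <= 500:
        return "OSFP active fiber / optical module"
    raise ValueError("distance exceeds listed 400G cable reach")

print(pick_400g_cable(1.0))  # in-rack link -> passive copper
print(pick_400g_cable(30))   # cross-row link -> active fiber
```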
Technical Specifications
  • Ports & Speed: 64 non-blocking ports of 400 Gb/s (NDR) InfiniBand; 32 OSFP connectors; supports 128 ports @ 200 Gb/s via splitter cables
  • Switching Capacity: 51.2 Tb/s aggregate bidirectional throughput; >66.5 billion packets per second (BPPS)
  • Latency: Sub-130 ns port-to-port with dynamic routing (typical)
  • Processor & Memory: x86 Coffee Lake Core i3, 8 GB DDR4 SO-DIMM (2666 MT/s), 16 GB M.2 SSD
  • Management Interfaces: 1x USB 3.0, 1x USB (I2C), 1x RJ45 (Ethernet), 1x RJ45 (UART)
  • Power Supply: 1+1 redundant, hot-swappable, 200–240 V AC, 80 PLUS Gold+ certified
  • Cooling & Airflow: Connector-to-power (C2P) reverse airflow (NS2R model); hot-swappable fan units, front/rear options
  • Dimensions (H x W x D): 1.7 in (43.6 mm) x 17.0 in (438 mm) x 26.0 in (660.4 mm)
  • Weight: 14.5 kg (31.97 lb)
  • Operating Conditions: Temperature 0°C to 40°C; humidity 10% to 85% non-condensing; altitude up to 3,050 m
  • Regulatory & Safety: RoHS, CE, FCC, VCCI, cTUVus, CB, RCM, ENERGY STAR
  • Warranty: 1-year manufacturer warranty (extendable)
Selection Guide: QM9700 Series
  • MQM9700-NS2R: 64 ports 400Gb/s InfiniBand, managed switch, 32 OSFP ports; full on-board subnet manager, MLNX-OS; Connector-to-Power (C2P) reverse airflow
  • MQM9700-NS2F: 64 ports 400Gb/s InfiniBand, managed switch (same features); Power-to-Connector (P2C) forward airflow
  • MQM9790-NS2R: 64 ports 400Gb/s, unmanaged switch (external UFM management); C2P reverse airflow
  • MQM9790-NS2F: 64 ports 400Gb/s, unmanaged switch; P2C forward airflow

The QM9700-NS2R is ideal for customers requiring advanced on-box management (subnet manager) and connector-to-power airflow (exhaust to cold aisle). Confirm airflow compatibility with your data center thermal design before ordering.

Why Choose Starsurge for NVIDIA Quantum-2
Global Logistics & Fast Delivery

Warehouses and fulfillment partners enable rapid worldwide shipping with secure packaging.

Technical Pre-Sales Consulting

Our in-house engineers help validate network topologies, cabling plans, and firmware requirements.

Competitive Pricing & Warranty

Authorized partner pricing with extended warranty options and advanced replacement programs.

Multilingual Support

Dedicated support in English, Mandarin, Cantonese and regional languages for seamless procurement.

Service & Support

Starsurge provides end-to-end lifecycle support: from design consultation to deployment and post-sales maintenance. Our services include on-site installation guidance, RMA processing, firmware upgrade assistance, and customized cabling solutions. For high-volume projects, we offer dedicated account management and 24/7 technical escalation. The QM9700-NS2R ships with 1-year hardware warranty; extended support packages are available upon request.

Frequently Asked Questions
Q1: What is the difference between QM9700-NS2R and QM9700-NS2F?

The only difference is the airflow direction: NS2R uses connector-to-power (C2P) reverse airflow, while NS2F uses power-to-connector (P2C) forward airflow. Choose according to your rack thermal design (hot aisle/cold aisle).

Q2: Can I use QSFP56 or QSFP28 cables with this switch?

Yes, the QM9700 supports backward compatibility using appropriate adapter cables or breakout options for HDR (200G), EDR (100G), and FDR (56G) speeds. Please consult the compatibility matrix or contact Starsurge for validated cable SKUs.

Q3: Does the managed switch require an external subnet manager?

No, the QM9700-NS2R features an integrated on-board subnet manager capable of managing up to 2000 nodes, simplifying small to medium deployments. For larger fabrics, external SM or UFM can be used.

Q4: Is this switch compatible with non-NVIDIA GPUs or servers?

Yes, the InfiniBand fabric is agnostic to server vendor. Any server with a supported InfiniBand HCA (e.g., ConnectX series) can connect seamlessly.

Q5: What is the typical power consumption?

Typical power consumption varies with port configuration and cabling. The PSUs are 1+1 redundant, 200–240 V AC, 80 PLUS Gold+ rated. Contact us for detailed power planning.

Important Precautions
  • Ensure airflow direction (C2P for NS2R) matches your rack ventilation strategy to prevent overheating.
  • Use only approved NVIDIA OSFP modules or qualified passive/active copper/fiber cables for 400G performance.
  • Installation should follow ESD precautions and be performed by qualified network personnel.
  • Firmware updates must be executed using MLNX-OS guidelines to avoid interruption; schedule maintenance windows accordingly.
About Hong Kong Starsurge Group Co., Limited

Founded in 2008, Starsurge is a technology-driven provider of network hardware, IT services, and system integration solutions. We serve government, healthcare, manufacturing, education, finance, and enterprise sectors worldwide. With an experienced sales and technical team, we deliver reliable networking equipment including switches, NICs, wireless controllers, cables, and IoT solutions. Our customer-first approach ensures scalable, efficient, and future-ready infrastructure. Multilingual support and global delivery capabilities make Starsurge your trusted partner for NVIDIA and data center solutions.

Buyer Checklist: QM9700-NS2R Deployment
  • ☑ Confirm airflow direction (C2P) fits your data center layout
  • ☑ Verify power input: 200–240V AC with redundant feeds
  • ☑ Select appropriate OSFP cables/transceivers: active/passive copper or fiber, based on distance
  • ☑ Plan network topology (Fat Tree, DragonFly+, etc.) and scaling nodes
  • ☑ Ensure host adapters are NDR-capable (ConnectX-7 or newer for 400G)
  • ☑ Allocate management IP and review MLNX-OS licensing (no extra license for core switching)
  • ☑ Prepare rack space: 1U height, depth up to 660mm including cable management
Related Products
NVIDIA ConnectX-7 NDR 400G InfiniBand Adapter

Single/dual-port, PCIe 5.0, acceleration for AI and HPC.

NVIDIA Quantum-2 MQM9790-NS2F

Unmanaged version for external UFM management, ideal for large-scale deployments.

OSFP to 2x QSFP112 200G Breakout Cables

Enables 200G connectivity to legacy servers or lower-speed infrastructure.

NVIDIA Unified Fabric Manager (UFM)

Advanced telemetry, predictive monitoring, and fabric orchestration.

Want more details about this product?
For the NVIDIA Quantum-2 QM9700-NS2R 64-Port 400G InfiniBand Managed Switch, please send us your requirements, such as configuration, dimensions, quantity, and materials.

We look forward to your reply.