Brand: Nvidia
NVIDIA Quantum-2 QM9790 InfiniBand Switch 64-Port 400Gb/s NDR
Warranty:
1 year, with effortless warranty claims and global coverage
Description
Product Overview: Revolutionary 400Gb/s InfiniBand Switching Technology
The NVIDIA Quantum-2 QM9790 InfiniBand Switch represents the pinnacle of data center networking innovation, delivering unprecedented performance for artificial intelligence (AI), high-performance computing (HPC), and cloud computing applications. As the industry-leading switch platform in both power efficiency and port density, the QM9790 provides AI developers and scientific researchers with the highest networking performance available to tackle the world’s most challenging computational problems.
Engineered on the revolutionary NVIDIA Quantum-2 architecture, this 64-port NDR (Next Data Rate) InfiniBand switch delivers an astounding 51.2 terabits per second (Tb/s) of aggregate bidirectional throughput packed into a space-efficient 1U rack-mountable chassis. Each of the 64 ports operates at a blazing 400 gigabits per second (Gb/s), providing the bandwidth necessary for extreme-scale systems that process high-resolution simulations, massive datasets, and complex parallelized algorithms requiring real-time information exchange.
The QM9790 extends NVIDIA’s groundbreaking In-Network Computing technologies and introduces SHARPv3 (Scalable Hierarchical Aggregation and Reduction Protocol version 3), the third generation of NVIDIA’s revolutionary in-network processing technology. This advancement delivers 32 times more AI acceleration power compared to previous generations, dramatically boosting application performance by performing complex computations directly within the network infrastructure as data traverses the data center fabric.
Key Features and Technical Capabilities
Unmatched Performance Specifications
The NVIDIA Quantum-2 QM9790 sets new benchmarks in networking performance with specifications that redefine what’s possible in data center infrastructure:
Port Configuration and Throughput:
- 64 ports of 400Gb/s NDR InfiniBand in a single 1U chassis
- 32 OSFP (Octal Small Form-Factor Pluggable) connectors supporting dual-port configuration
- Non-blocking switching capacity of 51.2Tb/s aggregate bidirectional throughput
- 66.5 billion packets per second (BPPS) processing capacity
- Port-split technology supporting up to 128 ports at 200Gb/s for double-density configurations
- Backward compatibility with 40, 56, 100, and 200 Gb/s speeds for flexible migration paths
Performance Metrics:
- Sub-500 nanosecond switch latency for latency-sensitive applications
- Cut-through switching architecture minimizing end-to-end latency
- Wire-speed performance across all ports simultaneously with zero packet loss
- Advanced quality of service (QoS) ensuring predictable performance for mission-critical workloads
SHARPv3: Revolutionary In-Network Computing
The third-generation SHARP technology represents a quantum leap in network intelligence, transforming the switch from a passive data forwarder into an active computational participant:
SHARPv3 Capabilities:
- 32x more acceleration engines compared to previous generation
- Hardware-accelerated MPI collective operations including Allreduce, Broadcast, and Gather
- Reduction and aggregation operations performed directly in the network fabric
- Dramatically reduced CPU overhead by offloading communication operations
- Scales to very large node counts, aggregating both small and large data directly in the network
- Real-time data processing as information moves through the data center network
- Significant reduction in data volume traversing the network infrastructure
SHARPv3 fundamentally changes how distributed applications communicate by performing reduction operations at wire speed within the network switches themselves, rather than requiring every node to participate in traditional software-based collective operations. This innovation delivers transformative performance improvements for AI training workloads, where collective communication often represents the primary bottleneck in scaling efficiency.
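To make the traffic savings concrete, here is a minimal back-of-the-envelope model in Python. It is a simplification for illustration only, not NVIDIA's protocol: it compares the bytes each node must transmit in a software ring-allreduce against a SHARP-style in-network reduction, where every node injects its gradient once and the fabric returns the reduced result.

```python
def ring_allreduce_sent_bytes(grad_bytes: float, nodes: int) -> float:
    """Bytes each node transmits in a software ring-allreduce:
    2 * (N - 1) / N * S (a reduce-scatter pass plus an all-gather pass)."""
    return 2 * (nodes - 1) / nodes * grad_bytes

def in_network_sent_bytes(grad_bytes: float) -> float:
    """With switch-side aggregation, each node sends its gradient into
    the fabric once; the switches reduce and return the result."""
    return float(grad_bytes)

# At large node counts the per-node transmit volume roughly halves, and
# latency no longer grows with the number of ring steps.
ratio = ring_allreduce_sent_bytes(1e9, 1024) / in_network_sent_bytes(1e9)
```

Beyond the raw byte count, the bigger practical win is that collective latency stops scaling with node count, which is why gradient synchronization benefits so visibly at cluster scale.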
Advanced Networking Technologies
Remote Direct Memory Access (RDMA): The QM9790 provides full support for RDMA operations, enabling:
- Zero-copy data transfers between application memory spaces
- Kernel bypass eliminating operating system overhead
- CPU offload freeing processor resources for computation
- Microsecond-level latency for memory-to-memory transfers
- GPUDirect RDMA support for direct GPU-to-GPU communication without CPU involvement
- GPUDirect Storage capabilities for direct storage-to-GPU data paths
Adaptive Routing and Congestion Management:
- Intelligent packet routing dynamically selecting optimal paths through the fabric
- Congestion detection and avoidance maintaining consistent low latency
- Enhanced virtual lane (VL) mapping with up to 16 virtual lanes per physical link
- Credit-based flow control preventing buffer overflow and packet loss
- Priority-based traffic shaping ensuring critical workloads receive guaranteed bandwidth
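The credit-based flow control bullet above can be illustrated with a toy Python model. The class and names are ours, not the actual InfiniBand link-layer protocol: the sender holds one credit per free receiver buffer slot, so by construction it can never overflow the receiver.

```python
from collections import deque

class CreditedLink:
    """Toy credit-based flow control: one credit == one free receiver
    buffer slot, so the sender cannot overflow the receiver's buffer."""

    def __init__(self, buffer_slots: int):
        self.credits = buffer_slots
        self.rx_queue: deque = deque()

    def send(self, packet) -> bool:
        """Transmit only while credits remain; otherwise hold the packet."""
        if self.credits == 0:
            return False
        self.credits -= 1
        self.rx_queue.append(packet)
        return True

    def receiver_drain(self) -> None:
        """Receiver consumes a packet, freeing a slot and returning a credit."""
        self.rx_queue.popleft()
        self.credits += 1
```

Because transmission is gated on credits rather than on detecting loss after the fact, the link stays lossless under congestion, which is the property the bullet list describes.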
Self-Healing Network Capabilities: The QM9790 incorporates NVIDIA’s SHIELD technology for unprecedented reliability:
- Fast link recovery automatically detecting and bypassing failed connections
- Automatic rerouting maintaining application connectivity during failures
- Hot-swappable components for maintenance without downtime
- Redundant power supplies (1+1 configuration) ensuring continuous operation
- Predictive failure analysis enabling proactive maintenance
Technical Specifications
Hardware Specifications Table
| Category | Specification | Details |
|---|---|---|
| Form Factor | Chassis Design | 1U rack-mountable standard design |
| | Dimensions | 17.2″ W × 1.7″ H × 26″ D (438mm × 43.6mm × 660mm) |
| | Weight | Single PSU: 13.6 kg / Dual PSU: 14.8 kg |
| | Mounting | 19-inch standard rack mount |
| Port Configuration | Total Ports | 64 × 400Gb/s NDR InfiniBand ports |
| | Physical Connectors | 32 × OSFP (Octal Small Form-Factor Pluggable) |
| | Port Density Options | 64 × 400Gb/s or 128 × 200Gb/s (with port splitting) |
| | Supported Speeds | 400, 200, 100, 56, 40 Gb/s per port |
| Performance | Switching Capacity | 51.2 Tb/s aggregate bidirectional throughput |
| | Packet Processing | 66.5 billion packets per second (BPPS) |
| | Switching Latency | Sub-500 nanoseconds (port-to-port) |
| | Switching Architecture | Non-blocking, cut-through |
| Power | Input Voltage | 200-240 VAC, 50/60 Hz, 10 A |
| | Power Supply Config | 1+1 redundant, hot-swappable, 80 PLUS Gold certified |
| | Typical Power (Passive) | 640W (ATIS measurement) |
| | Maximum Power (Active) | 1,610W (with active cables/optics) |
| | Energy Efficiency | ENERGY STAR certified |
| Cooling | Airflow Options | Front-to-rear or rear-to-front configurations |
| | Fan Configuration | 6+1 redundant hot-swappable fan modules |
| | Acoustic Level | 78.4 dBA at room temperature |
| Management | Management Type | Externally managed (requires UFM or compatible) |
| | Management Ports | 1× RJ45 (Ethernet), 1× RJ45 (UART), 1× USB 3.0, 1× USB (I2C) |
| | Management Software | Compatible with NVIDIA UFM (Unified Fabric Manager) |
| Environmental | Operating Temperature | 0°C to 35°C (forward airflow) / 0°C to 40°C (reverse airflow) |
| | Storage Temperature | -40°C to 70°C (non-operational) |
| | Operating Humidity | 10% to 85% non-condensing |
| | Storage Humidity | 10% to 90% non-condensing |
| | Operating Altitude | Up to 3,050 meters (10,000 feet) |
Compliance and Certifications Table
| Category | Standards |
|---|---|
| Safety | CB, cTUVus, CE, CU, S-Mark |
| EMC (Electromagnetic Compatibility) | CE, FCC, VCCI, ICES, RCM, CQC, BSMI, KCC, TEC, ANATEL |
| Environmental | RoHS compliant |
| Energy Efficiency | 80 PLUS Gold, ENERGY STAR |
| Warranty | 1 year standard manufacturer warranty |
Connectivity Options and Cable Support
The QM9790’s 32 OSFP connectors provide maximum flexibility for various deployment scenarios:
Supported Cable Types:
- Passive Copper Direct Attach Cables (DAC): Cost-effective short-reach connectivity (up to 5 meters)
- Active Copper Cables: Extended reach copper connectivity (up to 10 meters) with signal amplification
- Active Optical Cables (AOC): Lightweight, flexible optical connectivity (up to 100 meters)
- Optical Transceivers: Long-reach fiber connectivity supporting single-mode and multi-mode fiber:
  - SR8 (850nm multimode): up to 100 meters
  - DR8 (1310nm single-mode): up to 2 kilometers
  - FR8 (1310nm single-mode): up to 2 kilometers with error correction
OSFP Connector Advantages:
- Dual-port configuration: Each OSFP cage supports two independent 400Gb/s ports
- High-density design: 64 ports in 32 physical connectors maximizing port count
- Hot-pluggable: Install and remove cables without powering down the switch
- Future-ready: Architecture supports potential future speed upgrades
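The cage-and-port arithmetic behind these figures checks out in a few lines (all values are the ones quoted in this listing):

```python
OSFP_CAGES = 32
PORTS_PER_CAGE = 2       # each OSFP cage carries two independent ports
NDR_PORT_GBPS = 400

ndr_ports = OSFP_CAGES * PORTS_PER_CAGE        # 64 NDR ports
split_ports = ndr_ports * 2                    # 128 ports when split to 200Gb/s
# Aggregate bidirectional throughput: ports * speed * 2 directions, in Tb/s
aggregate_tbps = ndr_ports * NDR_PORT_GBPS * 2 / 1000
```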
Use Cases and Applications
1. Artificial Intelligence and Machine Learning Infrastructure
The QM9790 is purpose-built for modern AI/ML workloads that demand extreme bandwidth and minimal latency:
AI Training Clusters:
- Interconnect thousands of GPUs for distributed training of large language models (LLMs)
- Enable efficient multi-node training with SHARPv3-accelerated gradient aggregation
- Support emerging AI frameworks requiring collective communication primitives
- Minimize time-to-solution for deep learning research and development
- Scale from departmental GPU clusters to supercomputing-class AI infrastructure
AI Inference Deployments:
- High-throughput, low-latency connectivity for real-time inference services
- Support for large-scale recommendation systems processing billions of requests
- Enable fast model serving with direct GPU-to-GPU communication
- Optimize batch inference workloads across distributed GPU resources
Benefits for AI Workloads:
- SHARPv3 accelerates gradient synchronization, reducing training time by up to 2x
- Ultra-low latency maintains GPU utilization during distributed operations
- High bandwidth prevents network from becoming the bottleneck in multi-GPU systems
- GPUDirect RDMA eliminates CPU overhead in GPU-to-GPU transfers
2. High-Performance Computing (HPC)
Scientific computing applications benefit tremendously from the QM9790’s advanced capabilities:
Simulation and Modeling:
- Computational fluid dynamics (CFD) simulations
- Weather forecasting and climate modeling
- Molecular dynamics and drug discovery
- Astrophysics and cosmology simulations
- Finite element analysis (FEA) for engineering
Research Computing:
- University research clusters supporting multiple scientific disciplines
- National laboratory supercomputing facilities
- Corporate research and development environments
- Genomics and bioinformatics data processing
HPC Performance Advantages:
- MPI collective operations accelerated by SHARPv3
- Adaptive routing optimizes job placement and communication patterns
- Self-healing network maintains availability for long-running simulations
- RDMA reduces latency for tightly-coupled parallel applications
3. Cloud Service Provider Infrastructure
Cloud providers require extreme scalability and multi-tenancy support:
Private Cloud Deployments:
- On-premises cloud infrastructure for enterprises
- Departmental cloud resources for large organizations
- Research cloud platforms for academic institutions
Public Cloud Infrastructure:
- Hyperscale data center networking
- GPU-as-a-Service (GPUaaS) offerings
- Bare-metal cloud instances with RDMA networking
- Container orchestration platforms requiring high-performance networking
Cloud Benefits:
- Performance isolation ensures tenant workloads don’t interfere
- Software-defined networking (SDN) capabilities for cloud orchestration
- High port density reduces physical infrastructure requirements
- Energy efficiency lowers operational expenses
4. Data Analytics and Big Data Processing
Modern analytics platforms demand high-throughput, low-latency networking:
Analytics Workloads:
- Apache Spark distributed processing
- Hadoop ecosystem clusters
- Real-time stream processing
- In-memory analytics platforms
- Data warehouse acceleration
Financial Services:
- High-frequency trading infrastructure
- Risk modeling and simulations
- Fraud detection systems
- Regulatory compliance analytics
5. Storage Area Networks (SAN)
The QM9790 excels in storage-intensive environments:
Storage Applications:
- All-flash array connectivity
- Distributed storage systems (Ceph, Lustre, GPFS)
- NVMe-oF (NVMe over Fabrics) deployments
- Backup and disaster recovery infrastructure
- Media and entertainment post-production workflows
Storage Benefits:
- RDMA enables zero-copy data transfers to storage
- GPUDirect Storage provides direct storage-to-GPU paths
- Low latency maintains performance for latency-sensitive databases
- High bandwidth supports massive parallel file systems
Network Topologies Supported
The QM9790’s flexible architecture supports various network topologies optimized for different deployment scenarios:
Fat-Tree Topology:
- Traditional data center topology with predictable performance
- Full bisection bandwidth for non-blocking communication
- Simple to design and implement
- Scales to thousands of endpoints
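For a sense of scale, the textbook full-bisection fat-tree formulas applied to a 64-port radix give the following endpoint counts. This is a standard estimate, not a vendor sizing guide, and real designs often oversubscribe:

```python
def fat_tree_endpoints(radix: int, tiers: int) -> int:
    """Maximum endpoints in a non-blocking fat-tree built from switches
    with `radix` ports: 2 * (radix / 2) ** tiers."""
    return 2 * (radix // 2) ** tiers

two_tier = fat_tree_endpoints(64, 2)    # leaf-spine, half ports down/up
three_tier = fat_tree_endpoints(64, 3)  # adds a core tier
```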
Dragonfly+ Topology:
- Optimized for large-scale systems requiring lower cost
- Reduced cable and switch count compared to Fat-Tree
- Excellent performance for many-to-many communication patterns
- Popular in supercomputing installations
SlimFly Topology:
- Maximizes performance per dollar and per watt
- Reduced diameter compared to traditional topologies
- Optimal for budget-constrained deployments
- Excellent for HPC workloads
Multi-Dimensional Torus:
- Direct network topology with high scalability
- Multiple redundant paths for fault tolerance
- Common in supercomputing environments
- Predictable nearest-neighbor performance
Management and Monitoring
External Management with NVIDIA UFM
The QM9790 is an externally managed switch, requiring NVIDIA Unified Fabric Manager (UFM) or compatible management platform:
UFM Capabilities:
- Fabric Discovery: Automatic topology mapping and device inventory
- Performance Monitoring: Real-time bandwidth utilization and latency tracking
- Predictive Analytics: Proactive identification of potential issues
- Configuration Management: Centralized switch configuration across the fabric
- Firmware Updates: Coordinated firmware upgrades with minimal disruption
- Telemetry Collection: Comprehensive data for capacity planning
- Visualization: Graphical topology views and performance dashboards
Management Interfaces:
- Command-line interface (CLI) for scripting and automation
- RESTful API for integration with orchestration platforms
- Simple Network Management Protocol (SNMP) for legacy monitoring systems
- JavaScript Object Notation (JSON) for modern automation workflows
Monitoring and Diagnostics
Built-in Diagnostics:
- Port-level statistics (bandwidth, errors, discards)
- Buffer utilization monitoring
- Temperature sensors throughout the chassis
- Power supply status and power consumption tracking
- Fan speed monitoring and failure detection
Advanced Telemetry:
- Per-flow telemetry for troubleshooting
- Congestion detection and reporting
- What-Just-Happened (WJH) event logging
- Performance counter exports for external analytics
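As a sketch of how exported performance counters might be consumed downstream, link utilization can be derived from two successive samples. The field names here are illustrative placeholders, not the actual telemetry schema exported by the switch or UFM:

```python
LINK_GBPS = 400  # NDR line rate for one port

# Two hypothetical counter snapshots, one second apart.
sample_t0 = {"tx_bytes": 1_000_000_000, "symbol_errors": 2, "ts": 0.0}
sample_t1 = {"tx_bytes": 6_000_000_000, "symbol_errors": 2, "ts": 1.0}

def utilization(prev: dict, cur: dict, link_gbps: int = LINK_GBPS) -> float:
    """Fraction of line rate used on the link between two counter samples."""
    dt = cur["ts"] - prev["ts"]
    bits_moved = (cur["tx_bytes"] - prev["tx_bytes"]) * 8
    return bits_moved / (dt * link_gbps * 1e9)

def new_errors(prev: dict, cur: dict) -> int:
    """Symbol errors accumulated between samples (should stay near zero)."""
    return cur["symbol_errors"] - prev["symbol_errors"]
```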
Installation and Deployment
Physical Installation
Rack Mounting:
- Standard 19-inch rack compatible
- 1U form factor conserves valuable rack space
- Includes rack-mount kit with installation hardware
- Front-to-rear or rear-to-front airflow options match data center cooling design
Cable Management:
- 32 OSFP ports provide high-density connectivity
- Recommended: use proper cable management for clean installation
- Support for both top-of-rack and end-of-row deployment models
- Color-coded or labeled cables improve troubleshooting
Power Requirements
Electrical Specifications:
- Input: 200-240VAC, 50/60Hz, single-phase
- Recommended: dual power feeds for redundancy
- Power supply: 1+1 redundant hot-swappable units
- Circuit breaker: 15A minimum per power supply
Power Budgeting:
- Typical operation (passive cables): 640W
- Maximum load (active optics): 1,610W
- Plan for maximum power in data center capacity calculations
- 80 PLUS Gold efficiency reduces energy costs
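A quick check of the feed sizing implied by these numbers, using 230 V as a representative line voltage within the listed 200-240 VAC range:

```python
SWITCH_MAX_W = 1610   # worst case: all ports populated with active optics
LINE_VOLTS = 230      # representative value in the 200-240 VAC range

def feed_current_amps(switches: int,
                      watts_each: float = SWITCH_MAX_W,
                      volts: float = LINE_VOLTS) -> float:
    """Current a single power feed must carry for the given switch count."""
    return switches * watts_each / volts

# One fully loaded switch draws 1610 W / 230 V = 7 A, comfortably within
# the 15 A minimum breaker recommended above.
one_switch_amps = feed_current_amps(1)
```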
Cooling and Environmental
Airflow Configuration:
- Choose front-to-rear or rear-to-front based on data center design
- Hot aisle/cold aisle deployment recommended
- 6+1 redundant fans ensure adequate cooling
- Fan failure detection with automatic speed adjustment
Environmental Planning:
- Operating temperature: 0-35°C (forward airflow)
- Maintain recommended temperature for optimal performance
- Acoustic level: 78.4 dBA (consider placement in equipment rooms)
- Avoid installations exceeding 3,050m altitude
Comparison with Previous Generation
QM9790 (Quantum-2) vs. QM8790 (Quantum) Comparison
| Feature | QM9790 (Quantum-2 NDR) | QM8790 (Quantum HDR) | Improvement |
|---|---|---|---|
| Port Speed | 400 Gb/s | 200 Gb/s | 2× faster |
| Total Ports | 64 ports | 40 ports | 60% more ports |
| Switching Capacity | 51.2 Tb/s | 16 Tb/s | 3.2× higher throughput |
| SHARP Technology | SHARPv3 | SHARPv2 | 32× more acceleration |
| Power (Typical) | 640W | ~450W | More efficient per Gb/s |
| Form Factor | 1U | 1U | Same physical footprint |
The QM9790 delivers transformational improvements while maintaining the same 1U form factor, enabling seamless infrastructure upgrades without requiring data center redesign.
Frequently Asked Questions (FAQ)
Q1: What is the difference between the QM9700 and QM9790 models?
A: Both models feature identical hardware specifications with 64 ports of 400Gb/s NDR InfiniBand. The primary difference is management: the QM9700 includes an onboard subnet manager (internally managed) with Intel Core i3 CPU for standalone operation supporting up to 2,000 nodes, while the QM9790 is externally managed, requiring NVIDIA UFM or compatible management platform but offering greater scalability for large deployments exceeding 2,000 nodes.
Q2: Can the QM9790 be used with existing HDR or EDR InfiniBand equipment?
A: Yes, the QM9790 is backward compatible with previous InfiniBand generations. Ports can operate at 200Gb/s (HDR), 100Gb/s (EDR), 56Gb/s (FDR), and 40Gb/s (QDR) speeds, enabling gradual migration from older infrastructure. However, to achieve 400Gb/s NDR speeds, you must use NDR-compatible adapters and cables throughout the connection path.
Q3: What cables and transceivers are compatible with the QM9790?
A: The QM9790’s OSFP ports support multiple connectivity options:
- Passive DAC: Cost-effective for distances up to 5 meters (rack-scale connections)
- Active Copper: Extended copper reach up to 10 meters with active signal conditioning
- Active Optical Cables (AOC): Lightweight, flexible connectivity up to 100 meters
- Optical Transceivers: SR8 (multimode up to 100m), DR8/FR8 (single-mode up to 2km)
Always verify compatibility with NVIDIA’s Cable and Transceiver Compatibility Matrix for your specific deployment.
Q4: How does SHARPv3 improve AI training performance?
A: SHARPv3 dramatically accelerates AI training by performing gradient aggregation operations (Allreduce, Reduce-Scatter) directly in the network switches at wire speed, rather than requiring each compute node to participate in software-based collective operations. This innovation:
- Reduces training time by up to 2× for large-scale distributed training
- Minimizes CPU overhead, freeing resources for computation
- Eliminates bandwidth amplification (multiple messages traversing the same links)
- Enables near-linear scaling to thousands of GPUs
Q5: What management software is required for the QM9790?
A: The QM9790 requires external management software. NVIDIA Unified Fabric Manager (UFM) is the recommended solution, providing comprehensive fabric discovery, monitoring, configuration, and analytics. UFM is available in several editions:
- UFM Enterprise: Full-featured management for large-scale deployments
- UFM Cyber-AI: Adds security and anomaly detection capabilities
- UFM Telemetry: Focused on performance monitoring and telemetry export
Alternative compatible management platforms include OpenSM (open-source subnet manager) for basic configurations.
Q6: Can the QM9790 support both InfiniBand and Ethernet protocols?
A: No, the QM9790 is a pure InfiniBand switch supporting NDR, HDR, EDR, FDR, and QDR InfiniBand protocols. It does not support Ethernet. For Ethernet connectivity, consider NVIDIA’s Spectrum family of Ethernet switches. However, if you need to interconnect InfiniBand and Ethernet networks, NVIDIA offers gateway solutions and adapters with RoCE (RDMA over Converged Ethernet) support.
Q7: What is the typical lifespan and reliability of the QM9790?
A: NVIDIA InfiniBand switches are designed for 24/7/365 operation in mission-critical data center environments. Key reliability features include:
- Mean Time Between Failures (MTBF) designed for multi-year continuous operation
- Redundant, hot-swappable power supplies (1+1 configuration)
- Redundant, hot-swappable cooling fans (6+1 configuration)
- Self-healing network capabilities (SHIELD technology)
- Component-level monitoring and predictive failure detection
The QM9790 includes a 1-year manufacturer warranty, with extended warranty and support options available.
Q8: How much power does the QM9790 consume, and what are the cooling requirements?
A: Power consumption varies based on configuration:
- Typical operation (passive copper cables): 640W
- Maximum consumption (all ports with active optics): 1,610W
For cooling, the switch features 6+1 redundant fans with front-to-rear or rear-to-front airflow options. Ensure adequate data center cooling to maintain the operating temperature range of 0-35°C (forward airflow) or 0-40°C (reverse airflow). The switch produces approximately 78.4 dBA of acoustic noise at room temperature.
Q9: Is the QM9790 suitable for cloud environments and multi-tenant deployments?
A: Absolutely. The QM9790 includes advanced features specifically designed for cloud environments:
- Performance isolation: Prevents tenant workloads from interfering with each other
- Virtual fabric partitioning: Logical separation of resources for different tenants
- Quality of Service (QoS): Guarantees bandwidth and latency for SLA compliance
- Containerized workload support: Integration with Kubernetes and container orchestration
These capabilities make the QM9790 ideal for private clouds, public cloud infrastructure, and GPU-as-a-Service offerings.
Q10: Can I upgrade from a QM8790 (HDR) to QM9790 (NDR) without replacing all equipment simultaneously?
A: Yes, gradual migration is fully supported due to backward compatibility. A migration strategy typically follows these phases:
- Phase 1: Deploy QM9790 switches while maintaining existing QM8790 switches in the fabric
- Phase 2: Upgrade critical compute nodes to NDR-capable adapters (ConnectX-7 or newer)
- Phase 3: Replace cables with NDR-rated connectivity (maintaining HDR cables for non-upgraded links)
- Phase 4: Gradually replace remaining switches and adapters based on workload priorities
During migration, QM9790 ports automatically negotiate to the highest common speed supported by both endpoints, ensuring continuous operation throughout the upgrade process.
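The highest-common-speed negotiation described above can be modeled in a couple of lines. This is a simplification of real link training, using the per-port speeds quoted in this listing:

```python
# InfiniBand generations and per-port speeds quoted in this listing.
SPEEDS_GBPS = {"QDR": 40, "FDR": 56, "EDR": 100, "HDR": 200, "NDR": 400}

def negotiated_gbps(end_a: str, end_b: str) -> int:
    """Speed two link partners settle on: the highest rate both support."""
    return min(SPEEDS_GBPS[end_a], SPEEDS_GBPS[end_b])

# e.g. an NDR switch port facing an HDR adapter runs the link at 200 Gb/s.
```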
Why Choose the NVIDIA Quantum-2 QM9790 from itctshop.com?
When you purchase the NVIDIA Quantum-2 QM9790 InfiniBand Switch from itctshop.com, you’re partnering with a trusted provider of enterprise networking solutions:
Authorized NVIDIA Partner: Genuine NVIDIA products with full manufacturer warranty
Expert Technical Support: Our team understands HPC and AI networking requirements
Competitive Pricing: Enterprise-volume pricing for your data center projects
Fast Shipping: Expedited delivery options for time-sensitive deployments
Configuration Services: Assistance with network design and implementation planning
Ongoing Support: Post-purchase technical assistance for your deployment
Conclusion: Future-Proof Your Data Center Infrastructure
The NVIDIA Quantum-2 QM9790 InfiniBand Switch represents the cutting edge of data center networking technology, delivering unmatched performance, density, and efficiency for the most demanding AI, HPC, and cloud computing workloads. With 64 ports of 400Gb/s NDR InfiniBand, revolutionary SHARPv3 in-network computing, and comprehensive advanced features, the QM9790 provides the foundation for extreme-scale systems that push the boundaries of what’s computationally possible.
Whether you’re building a next-generation AI training cluster, expanding a scientific research supercomputer, deploying cloud infrastructure, or creating a high-performance storage network, the QM9790 delivers the bandwidth, latency, and intelligence required for success. Its backward compatibility ensures investment protection while enabling seamless migration paths from previous-generation infrastructure.
Transform your data center networking with the NVIDIA Quantum-2 QM9790. Order today from itctshop.com and experience the future of high-performance interconnects.
Last updated: December 2025