Description
The HGX H100 Optimized X13 8U 8GPU is a high-performance server designed for advanced AI workloads, deep learning training, and serving large language models (LLMs) with inference engines such as vLLM. Built with enterprise-grade components and optimized interconnects, this system delivers exceptional computational power, storage throughput, and network performance, making it well suited to data centers, AI research labs, and supercomputing environments.
Chassis and General Specifications
- Form Factor: 8U rack-mountable server
- GPU Support: 8× NVIDIA H100 SXM5 GPUs
- RoHS Compliant: Yes
- System Models: X13DGH-T-P.1560TS-R4K12B, SYS-821GE-TNHR (RoHS), and EWCSC
- Warranty: 3 years parts and labor, 1 year CRS, under limited warranty
This server is engineered to provide exceptional reliability and long-term stability under heavy AI and HPC workloads.
Processor (CPU)
- Model: Intel Xeon Platinum 8480+
- Configuration: Dual processor (2P)
- Total Cores: 56 cores per CPU (112 cores total)
- Base Clock: 2.0 GHz
- TDP: 350W per processor
- Cache: 320MB L3
- SKU: P4X-8470Y-X-SN3L-MCC (x2)
These processors are specifically chosen for massive parallelism, enabling large-scale AI training and inference with minimal latency.
Memory (RAM)
- Module: MEM-DR564L-HL64
- Type: DDR5-4800 ECC RDIMM 2Rx8
- Capacity per Module: 64GB
- Number of Modules: 16
- Total Memory: 1TB (1024GB)
The system’s 1TB of high-speed DDR5 memory ensures efficient handling of large datasets and AI models, while ECC support guarantees data integrity and system stability.
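As a rough illustration of what this memory configuration implies, the sketch below estimates total capacity and theoretical peak bandwidth. The channel topology is an assumption, not stated in the listing: it presumes the 16 DIMMs populate one DIMM per channel across the 8 memory channels of each of the two sockets.

```python
# Back-of-envelope capacity and peak-bandwidth estimate for the DDR5 config
# described above. Assumed (not in the listing): 1 DIMM per channel,
# 8 channels per socket, 64-bit (8-byte) data path per channel.
MT_PER_S = 4800          # DDR5-4800 transfer rate (mega-transfers/second)
BYTES_PER_TRANSFER = 8   # 64-bit channel width
CHANNELS = 8 * 2         # 8 channels per socket, dual socket

total_tb = 64 * 16 / 1024                                   # 16 x 64GB modules
peak_gbs = MT_PER_S * BYTES_PER_TRANSFER * CHANNELS / 1000  # GB/s, theoretical

print(f"Total capacity: {total_tb:.0f} TB")    # → 1 TB
print(f"Peak bandwidth: {peak_gbs:.1f} GB/s")  # → 614.4 GB/s
```

Real-world sustained bandwidth will be lower than this theoretical peak, but the estimate shows why a fully populated 1-DIMM-per-channel layout matters for feeding 112 cores.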
Storage
- Drive Model: HDS-25N4-003T8-E1-TXD-NON-008
- Type: 2.5″ NVMe PCIe 4.0 SSD
- Capacity per Drive: 3.84TB
- Number of Drives: 8
- Endurance: 1 DWPD (Drive Write Per Day), enterprise-grade TLC NAND
- Form Factor: 7mm
With 30.72TB total NVMe storage, this system provides high-speed, low-latency data access for AI training, HPC simulations, and large-scale database operations.
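To put the 1 DWPD endurance rating in concrete terms, the sketch below computes the array capacity and the implied total bytes written per drive. The 5-year rating period is an assumption (a common warranty window for enterprise NVMe, but not stated in the listing).

```python
# Endurance arithmetic for the NVMe array described above.
# Assumed (not in the listing): the 1 DWPD rating applies over 5 years.
DRIVE_TB = 3.84
DRIVES = 8
DWPD = 1.0      # drive writes per day
YEARS = 5       # assumed rating period

fleet_tb = DRIVE_TB * DRIVES                    # total usable capacity
daily_writes_tb = fleet_tb * DWPD               # sustainable writes per day
tbw_per_drive = DRIVE_TB * DWPD * 365 * YEARS   # total TB written per drive

print(f"Array capacity: {fleet_tb:.2f} TB")                   # → 30.72 TB
print(f"Sustainable writes: {daily_writes_tb:.2f} TB/day")    # → 30.72 TB/day
print(f"Rated endurance per drive: {tbw_per_drive:.0f} TBW")  # → 7008 TBW
```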
Graphics Processing Units (GPUs)
- Model: NVIDIA H100 SXM5
- Quantity: 8 GPUs
- Interconnect Bandwidth: 900GB/s per GPU via NVLink / NVSwitch
- AI Optimization: Compatible with vLLM and with DeepSpeed tensor parallelism
The GPUs deliver extreme performance for AI workloads, making it possible to train very large language models and complex neural networks efficiently.
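The value of 8-way tensor parallelism can be seen in simple memory arithmetic: sharding a model's weights across all eight GPUs leaves most of each GPU's HBM free for activations and KV cache. The sketch below assumes the 80GB H100 SXM5 variant (capacity is not stated in the listing) and uses a hypothetical 70B-parameter model in fp16 as the example.

```python
# Per-GPU weight footprint under 8-way tensor parallelism.
# Assumed (not in the listing): 80GB HBM per H100 SXM5, fp16/bf16 weights
# (2 bytes per parameter). The 70B model size is a hypothetical example.
PARAMS_B = 70            # hypothetical model size, billions of parameters
BYTES_PER_PARAM = 2      # fp16 / bf16
GPUS = 8
HBM_GB = 80              # assumed H100 SXM5 capacity

weights_gb = PARAMS_B * BYTES_PER_PARAM   # total weight memory
per_gpu_gb = weights_gb / GPUS            # sharded across 8 GPUs
headroom_gb = HBM_GB - per_gpu_gb         # left for KV cache, activations

print(f"Weights per GPU: {per_gpu_gb:.1f} GB")    # → 17.5 GB
print(f"Headroom per GPU: {headroom_gb:.1f} GB")  # → 62.5 GB
```

A single 80GB GPU could not even hold this model's fp16 weights (140GB); sharded eight ways, each GPU carries 17.5GB, and the 900GB/s NVLink fabric keeps the cross-GPU communication that tensor parallelism requires from becoming the bottleneck.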
Networking and Connectivity
- Network Card: AOC-MH25G-m2S
- Type: Dual-port 25GbE SFP28 NIC
- Quantity: 2
- RoHS Compliant: Yes
This configuration ensures ultra-fast data transfer between servers and cluster nodes, supporting high-throughput AI pipelines and large-scale parallel processing.
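For a sense of scale, the sketch below computes the aggregate line rate of the four 25GbE ports and, as a hypothetical illustration, the minimum time to move a 1TB dataset at that rate (ignoring protocol overhead).

```python
# Aggregate NIC bandwidth for the configuration described above.
CARDS = 2
PORTS_PER_CARD = 2
GBPS_PER_PORT = 25

agg_gbps = CARDS * PORTS_PER_CARD * GBPS_PER_PORT  # total line rate, Gb/s
agg_gb_per_s = agg_gbps / 8                        # in GB/s

dataset_gb = 1000                                  # hypothetical 1 TB dataset
seconds = dataset_gb / agg_gb_per_s                # best-case transfer time

print(f"Aggregate: {agg_gbps} Gb/s ({agg_gb_per_s} GB/s)")  # → 100 Gb/s (12.5 GB/s)
print(f"1 TB transfer: {seconds:.0f} s")                    # → 80 s
```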
Additional Features
- TPM Module: AOM-TPM-9670V-P, SPI TPM 2.0 using SLB9670, RoHS
- RAID Support: Intel VROC Premium (RAID 0, 1, 5, 10) via AOC-VROCPREMOD
- Software License: SFT-DCMS-SINGLE, 1-year Soika Enterprise subscription
- Super Cluster Interconnection: Local cluster support for high-performance AI workloads
Key Advantages
- Optimized for AI training and inference, including very large language models served with engines such as vLLM
- Enterprise-class reliability and durability for continuous operation under heavy workloads
- Fully compliant with RoHS environmental standards
- Ideal for deployment in AI research labs, data centers, and supercomputing environments
- Scalable and future-proof, supporting next-generation AI models and HPC applications
Summary
The HGX H100 Optimized X13 8U 8GPU is an elite-class server designed for organizations that demand maximum GPU acceleration, high memory bandwidth, and ultra-fast NVMe storage. Its combination of dual Intel Xeon Platinum 8480+ processors, 1TB DDR5 ECC RAM, 8 NVIDIA H100 GPUs, and enterprise-grade NVMe storage makes it a complete solution for modern AI workloads and large-scale computational tasks.