NVIDIA DGX H100 (8×H100 SXM5 AI Supercomputing Platform)

USD 520,000

Shipping:

Worldwide

Description

The NVIDIA DGX H100 is a purpose-built AI system for heavy AI workloads, deep learning training, and large-scale model inference. It comes with 8 NVIDIA H100 SXM5 GPUs, each with 80GB of high-bandwidth HBM3 memory, interconnected over high-speed NVLink, making it well suited to training and serving large AI models.

It uses 2 Intel Xeon Platinum 8480+ CPUs (56 cores each, 112 cores total) and 2TB of DDR5 memory, providing ample processing power and memory for demanding tasks. Storage is handled by 8 NVMe SSDs (3.84TB each), for a total of ~30.7TB of fast storage for AI datasets.
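The headline capacities follow directly from the per-part counts in the specification list further down; a quick sanity check in plain Python, using only figures from this listing:

```python
# Sanity-check the aggregate capacities quoted in this listing.
dimm_gb, dimm_count = 64, 32        # 64GB DDR5-4800 RDIMMs x32
ssd_tb, ssd_count = 3.84, 8         # 3.84TB NVMe SSDs x8
gpu_hbm_gb, gpu_count = 80, 8       # 80GB HBM3 per H100 x8
cores_per_cpu, cpu_count = 56, 2    # Xeon Platinum 8480+ x2

system_memory_tb = dimm_gb * dimm_count / 1024   # 2.0 TB system memory
nvme_storage_tb = ssd_tb * ssd_count             # 30.72 TB (~30TB)
total_hbm_gb = gpu_hbm_gb * gpu_count            # 640 GB HBM3
total_cores = cores_per_cpu * cpu_count          # 112 CPU cores

print(system_memory_tb, nvme_storage_tb, total_hbm_gb, total_cores)
# → 2.0 30.72 640 112
```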

The DGX H100 supports popular AI frameworks such as vLLM, Megatron, DeepSpeed, HuggingFace Transformers, TensorRT-LLM, and Triton Inference Server, making it flexible for research and production.
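As an illustration of how these frameworks are typically used on an 8-GPU system like this, here is a minimal vLLM sketch that shards one large model across all 8 GPUs with tensor parallelism. This is a hypothetical usage sketch, not part of the product: the model name is a placeholder, and it assumes vLLM is installed and all 8 H100s are visible to the process.

```python
# Minimal vLLM sketch: shard one large model across all 8 H100 GPUs.
# Assumes `pip install vllm`; the model name below is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-70b-hf",  # placeholder HF-format model
    tensor_parallel_size=8,             # one tensor-parallel shard per H100
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain NVLink in one sentence."], params)
print(outputs[0].outputs[0].text)
```

With `tensor_parallel_size=8`, each layer's weights are split across the GPUs and activations move over NVLink, which is what makes the 900GB/s NVLink fabric listed below relevant for single-model inference.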

Networking is handled by high-speed ConnectX-7 NICs, with NVLink and InfiniBand for connecting multiple systems together. The system also includes secure boot, TPM 2.0, RAID support, and redundant 3000W power supplies for reliability. It runs NVIDIA DGX OS 6, ready for container-based AI workflows.

Specifications
8GPU, HGX H100 SXM5, RoHS NVIDIA DGX H100 System (x1)
8GPU, HGX H100 SXM5, RoHS EWCSC (x1)
3 years labor, 3 years parts, 1 year CRS under limited warranty
CPU-Intel-Xeon-8480+ (x2)
Intel Xeon Platinum 8480+, 2P, 56C, 2.0GHz, 350W TDP
MEM-DR564MC-ER64 (x32)
64GB DDR5-4800 ECC RDIMM 2Rx4 – Total: 2TB system memory
HDS-25N4-003T8-E1-TXD-NON-008 (x8)
SSD 2.5″ NVMe PCIe Gen4 3.84TB, 1DWPD, TLC NAND, 7mm – Total: ~30TB usable NVMe
GPU-H100 SXM5 Gen5: 900GB/s via NVLink 4.0 / NVSwitch
80GB HBM3 per GPU (Total: 640GB HBM3)
Aggregate AI performance: up to 32 petaFLOPS FP8 (per NVIDIA's DGX H100 specification)
Networking
ConnectX-7 NICs: 2x 400GbE or 8x 100GbE
NVIDIA NVLink and InfiniBand Switch Fabric for multi-node DGX SuperPODs
TPM: Secure Boot with UEFI and TPM 2.0
RAID: Software RAID 0/1 for boot drives
Power Supply: Redundant 3000W (Titanium-rated) hot-swappable PSUs
SFT-DGXOS-SINGLE (x1)
NVIDIA Base Command OS (DGX OS 6) with container orchestration
LLM Framework Support: vLLM, NeMo Megatron, DeepSpeed, HuggingFace Transformers, TensorRT-LLM, Triton Inference Server
Interconnection: Full-bandwidth NVLink and InfiniBand-based scale-out support
Brand

NVIDIA


Shipping & Payment

Worldwide Shipping Available
We accept Visa, Mastercard, and American Express.
International Orders
For international shipping, you must have an active account with UPS, FedEx, or DHL, or provide a US-based freight forwarder address for delivery.