NVIDIA DGX B200 (AI Supercomputer – 8× Blackwell B200 SXM5 GPUs, 2× Intel Xeon 8570, 2TB DDR5, 34TB NVMe)

USD 600,000

Category:

Brand: Nvidia

Shipping: Worldwide

Description

The NVIDIA DGX B200 is a cutting-edge AI supercomputing platform purpose-built to meet the demands of the most advanced generative AI workloads. Engineered around the revolutionary NVIDIA Blackwell architecture, this system delivers unmatched performance, memory bandwidth, and scalability for enterprises deploying large language models (LLMs), multimodal AI, and real-time inference.

With eight NVIDIA Blackwell GPUs, dual 5th Gen Intel® Xeon® processors, and a fully integrated software stack, DGX B200 sets a new standard for AI infrastructure—offering up to 144 petaFLOPS of inference performance, 72 petaFLOPS of training performance, and 1.4TB of GPU memory. It’s the foundational building block for NVIDIA’s DGX BasePOD™ and DGX SuperPOD™, enabling hyperscale-grade AI operations in a turnkey solution.

Key Capabilities

Hardware Architecture

  • GPU Configuration: 8× NVIDIA Blackwell SXM GPUs
  • GPU Memory: 1,440GB total HBM3e
  • Memory Bandwidth: 64TB/s aggregate HBM3e bandwidth
  • GPU Interconnect: NVLink 5.0 + NVSwitch – 14.4TB/s all-to-all bandwidth
  • CPU: 2× Intel® Xeon® Platinum 8570 (112 cores total, up to 4.0GHz boost)
  • System Memory: 2TB DDR5, configurable up to 4TB
  • Power Usage: ~14.3kW max system draw
  • Rack Size: 10U form factor
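The aggregate GPU memory and bandwidth figures above divide evenly across the eight GPUs. A quick arithmetic sketch (the per-GPU values here are derived from the listed totals, not quoted from the listing):

```python
# Per-GPU figures implied by the aggregate numbers above.
# (Derived values; treat them as a sanity check on the listed totals.)
NUM_GPUS = 8
TOTAL_HBM3E_GB = 1440   # listed total GPU memory
TOTAL_BW_TBPS = 64      # listed aggregate HBM3e bandwidth

per_gpu_memory_gb = TOTAL_HBM3E_GB / NUM_GPUS  # 180 GB per GPU
per_gpu_bw_tbps = TOTAL_BW_TBPS / NUM_GPUS     # 8 TB/s per GPU

print(f"{per_gpu_memory_gb:.0f} GB and {per_gpu_bw_tbps:.0f} TB/s per GPU")
# prints "180 GB and 8 TB/s per GPU"
```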

Storage & Networking

  • OS Drives: 2× 1.9TB NVMe M.2
  • Internal Storage: 8× 3.84TB NVMe U.2 SSDs (~30TB usable)
  • Networking:
    • 4× OSFP ports (8× single-port NVIDIA ConnectX-7 VPI) – up to 400Gb/s InfiniBand/Ethernet
    • 2× dual-port QSFP112 NVIDIA BlueField-3 DPUs – up to 400Gb/s
    • 10Gb/s onboard NIC + 100Gb/s dual-port Ethernet NIC for management
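The drive counts above also account for the "34TB NVMe" in the product title. A small sketch of the totals (the "~30TB usable" figure depends on RAID and filesystem overhead, which the listing does not specify):

```python
# Raw NVMe capacity from the drive counts above.
os_drives_tb = 2 * 1.9      # M.2 boot drives
data_drives_tb = 8 * 3.84   # U.2 data SSDs

print(f"data: {data_drives_tb:.2f} TB raw")              # 30.72 TB raw (~30TB usable)
print(f"total: {os_drives_tb + data_drives_tb:.2f} TB")  # 34.52 TB, the "34TB NVMe" in the title
```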

Security & Management

  • Secure Boot with TPM 2.0
  • Host BMC with RJ45 interface
  • Software RAID 0/1 for boot drives
  • Enterprise-grade support: 3-year hardware/software coverage, 24/7 portal access, live agent support

AI Software Stack

The DGX B200 is not just hardware—it’s a complete AI factory platform. It comes pre-integrated with:

  • NVIDIA Base Command OS (DGX OS 6) – optimized for container orchestration and AI workload management
  • NVIDIA AI Enterprise – full-stack software suite for model development, deployment, and monitoring
  • NVIDIA Mission Control – orchestration engine for infrastructure resilience, workload automation, and cluster provisioning
  • NVIDIA NIM™ Microservices – for secure, scalable, and fast model deployment
  • Run:ai Technology – for seamless workload scheduling and resource optimization

Performance Highlights

  Metric              Value
  ------------------  ---------------
  Training (FP8)      72 petaFLOPS
  Inference (FP4)     144 petaFLOPS
  GPU Memory          1.4TB HBM3e
  Memory Bandwidth    64TB/s
  NVLink Bandwidth    14.4TB/s
  System Power        ~14.3kW
  Rack Units          10U
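The training and inference peaks above differ by exactly the usual factor-of-two throughput step per halved precision. A sketch of that relationship (the figures themselves are the listed peaks):

```python
# FP4 inference peak is exactly twice the FP8 training peak,
# matching the 2x throughput gain per halved precision.
training_fp8_pflops = 72
inference_fp4_pflops = 144

ratio = inference_fp4_pflops / training_fp8_pflops
print(ratio)  # prints 2.0
```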

Ideal Use Cases

  • Training trillion-parameter LLMs
  • Real-time inference for multimodal AI
  • Fine-tuning domain-specific models
  • Scientific computing and simulation
  • Enterprise AI factories and hyperscale deployments

The NVIDIA DGX B200 is more than a server—it’s a strategic investment in the future of AI. Whether you’re building a private AI cloud, scaling out a SuperPOD, or deploying mission-critical models, the DGX B200 delivers the performance, reliability, and software integration needed to stay ahead in the AI race.

Brand

Nvidia

Reviews (0)

There are no reviews yet.

Shipping & Payment

Worldwide Shipping Available

We accept: Visa, Mastercard, American Express

International Orders
For international shipping, you must have an active account with UPS, FedEx, or DHL, or provide a US-based freight forwarder address for delivery.