DDN EXAScaler – Scalable Parallel File System Appliances for AI, HPC, and Data-Intensive Workloads

Call for price inquiry

Shipping: Worldwide

Description

DDN EXAScaler is a flagship family of high-performance parallel file system appliances engineered to meet the extreme demands of artificial intelligence (AI), high-performance computing (HPC), and data analytics. Built on the industry-leading Lustre-based EXA6 architecture, EXAScaler delivers unmatched throughput, ultra-low latency, and massive concurrency for GPU-accelerated environments.

Designed for seamless integration with NVIDIA DGX BasePOD and SuperPOD deployments, EXAScaler supports GPU Direct Storage (GDS), Hot Nodes caching, and end-to-end NVMe data paths—enabling direct communication between NVMe SSDs and GPUs without CPU bottlenecks. This architecture dramatically reduces IO latency and accelerates deep learning, generative AI, and simulation workloads.
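
For readers who want to see what this data path looks like from application code, the sketch below uses NVIDIA's cuFile API (the programming interface behind GPUDirect Storage) to read a file straight into GPU memory without a host bounce buffer. It is a minimal illustration under stated assumptions, not DDN sample code: the mount path, file name, and transfer size are placeholders, and error handling is trimmed for brevity.

    /* Minimal GPUDirect Storage read sketch using NVIDIA's cuFile API.
       Assumes a GDS-capable client mount (path below is a placeholder) and the
       CUDA Toolkit with libcufile. Build, e.g.: gcc gds_read.c -lcufile -lcudart */
    #define _GNU_SOURCE                                     /* for O_DIRECT */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <cufile.h>

    int main(void) {
        const char  *path = "/mnt/exascaler/dataset.bin";   /* hypothetical mount */
        const size_t size = 64UL << 20;                     /* 64 MiB transfer */

        int fd = open(path, O_RDONLY | O_DIRECT);           /* O_DIRECT: bypass page cache */
        if (fd < 0) { perror("open"); return 1; }

        cuFileDriverOpen();                                 /* bring up the GDS driver */

        CUfileDescr_t descr = {0};
        descr.handle.fd = fd;
        descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
        CUfileHandle_t fh;
        cuFileHandleRegister(&fh, &descr);                  /* register the file with cuFile */

        void *gpu_buf = NULL;
        cudaMalloc(&gpu_buf, size);                         /* destination lives in GPU memory */
        cuFileBufRegister(gpu_buf, size, 0);                /* pin/register the buffer for DMA */

        /* Data moves NVMe -> GPU directly, with no staging copy through host memory */
        ssize_t n = cuFileRead(fh, gpu_buf, size, 0 /* file offset */, 0 /* buffer offset */);
        printf("read %zd bytes directly into GPU memory\n", n);

        cuFileBufDeregister(gpu_buf);
        cudaFree(gpu_buf);
        cuFileHandleDeregister(fh);
        cuFileDriverClose();
        close(fd);
        return 0;
    }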

EXAScaler appliances are available in multiple form factors and configurations, allowing organizations to scale from departmental AI clusters to enterprise-grade, multi-tenant GPU platforms. With support for InfiniBand HDR/NDR, 200/100GbE, and advanced data orchestration tools like Stratagem and Hot Pools, EXAScaler ensures optimal performance, data integrity, and operational efficiency across hybrid and cloud environments.

EXAScaler Product Family – Model Comparison

  • EXAScaler EXA6 – Form factor: 2U / 4U appliance. Target use case: DGX BasePOD, SuperPOD, Enterprise AI. Key features: Parallel Lustre, GDS, Hot Nodes, 115 GB/s read, 3M+ IOPS.
  • EXAScaler ES200X2 – Form factor: 2U appliance. Target use case: Departmental AI, Edge Inference. Key features: NVMe-oF RoCE, FlashLink, 0.05ms latency, scalable IOPS.
  • EXAScaler ES400X2 – Form factor: 4U appliance. Target use case: Mid-range AI training. Key features: SmartMatrix full-mesh, 32 controllers, 100GbE RDMA.
  • EXAScaler SFA400X2 – Form factor: 4U appliance. Target use case: Mixed AI/HPC workloads. Key features: NVMe block storage, hybrid expansion, high SSD density.
  • EXAScaler GRIDScaler – Form factor: Multi-node. Target use case: Enterprise-scale AI, multi-tenant clusters. Key features: IBM Spectrum Scale (GPFS) parallel file system, GDS, 8× HDR IB, 115 GB/s read, validated with DGX H100.

Key Architectural Highlights

  • Parallel Lustre File System (EXA6): Fully optimized for AI/HPC workloads with scalable metadata and data paths (see the striping sketch after this list).
  • GPU Direct Storage (GDS): Enables direct NVMe-to-GPU data transfer, reducing latency and CPU overhead.
  • Hot Nodes Caching: Automatically caches frequently accessed data on GPU-local NVMe, accelerating IO.
  • Stratagem Policy Engine: Advanced data placement and residency control for hybrid and multi-cloud environments.
  • Hot Pools Tiering: Intelligent movement of data between flash and disk tiers for optimal performance and cost-efficiency.
  • T10DIF Data Integrity: Ensures end-to-end protection from application to disk.
  • InfiniBand HDR/NDR & 200/100GbE: High-bandwidth, low-latency connectivity for AI pipelines and GPU clusters.
  • EXAScaler Management Framework (EMF): Modern CLI/API orchestration, observability, and automated upgrades across clusters.
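
As a concrete illustration of the parallel data paths mentioned in the first bullet above, the following sketch uses the standard Lustre user library (liblustreapi) to create a file striped across several OSTs so that reads and writes fan out in parallel. The mount path, stripe size, and stripe count are placeholder assumptions for a generic Lustre/EXAScaler client, not DDN-specific tuning guidance.

    /* Sketch: create a file striped across multiple Lustre OSTs with liblustreapi,
       so I/O is spread over parallel data paths. Path and striping parameters are
       illustrative only. Build on a Lustre client node: gcc stripe.c -llustreapi */
    #include <stdio.h>
    #include <lustre/lustreapi.h>

    int main(void) {
        const char *path = "/mnt/exascaler/checkpoint.dat";  /* hypothetical client mount */

        /* 8 stripes of 4 MiB each; -1 lets Lustre choose the starting OST */
        int rc = llapi_file_create(path,
                                   4ULL << 20,         /* stripe_size: 4 MiB   */
                                   -1,                 /* stripe_offset: any   */
                                   8,                  /* stripe_count: 8 OSTs */
                                   LOV_PATTERN_RAID0); /* plain striping       */
        if (rc) {
            fprintf(stderr, "llapi_file_create failed: %d\n", rc);
            return 1;
        }
        printf("created %s striped across 8 OSTs\n", path);
        return 0;
    }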

Strategic Positioning for TIADA

DDN EXAScaler represents the pinnacle of AI-optimized storage infrastructure. For TIADA’s enterprise clients, it offers:

  • Validated integration with NVIDIA DGX H100 systems
  • Scalable performance for multi-tenant AI platforms
  • Turnkey deployment with full orchestration and observability
  • Competitive edge over traditional NAS and block storage solutions such as Huawei OceanStor Dorado or Pure Storage

Main Models of DDN EXAScaler EXA6

1. DDN ES/AI200X2

  • Compact, entry-level appliance
  • Ideal for Edge AI, departmental inference, Dev/Test environments
  • Scale-out architecture with Lustre EXA6 support
  • GPU hot-node caching support

2. DDN ES/AI400X2

  • Mid-range model designed for DGX BasePOD deployments
  • Scales linearly with additional appliances
  • Performance: 90+ GB/s Read, 65+ GB/s Write, ~3M IOPS
  • Fully validated with NVIDIA DGX H100 systems
  • Supports GPU Direct Storage (GDS)

3. DDN SFA400X2

  • Pure NVMe block storage for parallel file systems
  • Scale-out building block for large AI pipelines
  • Suitable for hybrid expansion and high-throughput AI workloads
  • Delivers strong throughput and IOPS scalability

4. DDN GRIDScaler (IBM Spectrum Scale-based)

  • Enterprise-grade Parallel File/NAS appliance
  • Built on IBM Spectrum Scale (GPFS)
  • Designed for multi-tenant AI/HPC platforms
  • Delivers high throughput with 0.61ms SPEC SFS2014 response time
  • GPU integration paths for advanced AI workloads

Comparison Table: DDN EXAScaler EXA6 Models

  • ES/AI200X2 – Form factor / interfaces: 2U appliance, 4× HDR/HDR100 IB or 4× 200/100GbE. Performance: high IOPS, compact. AI/HPC use case: small GPU clusters, Dev/Test, Edge AI. Special features: Lustre EXA6, Hot Nodes GPU caching.
  • ES/AI400X2 – Form factor / interfaces: 2U appliance, 8× HDR/HDR100 IB or 8× 200/100GbE. Performance: 90+ GB/s read, 65+ GB/s write, ~3M IOPS. AI/HPC use case: DGX BasePOD, mid-size AI training clusters. Special features: NVIDIA DGX H100 validated, GDS support.
  • SFA400X2 – Form factor / interfaces: block storage, 8× HDR/HDR100 IB or 8× 200/100GbE. Performance: high throughput; scalable. AI/HPC use case: large-scale AI pipelines, hybrid expansion. Special features: pure NVMe block storage, file system backend.
  • GRIDScaler – Form factor / interfaces: parallel NAS/file appliance with IBM Spectrum Scale. Performance: SPEC SFS2014 0.61ms response, high throughput. AI/HPC use case: enterprise-scale, multi-tenant AI/HPC. Special features: GPU integration, end-to-end parallel paths.

Brand

DDN

Shipping & Payment

Worldwide Shipping Available
We accept: Visa, Mastercard, American Express
International Orders
For international shipping, you must have an active account with UPS, FedEx, or DHL, or provide a US-based freight forwarder address for delivery.