NVIDIA DGX Systems

The vanguard of enterprise-grade AI computing. Engineered with meticulous precision, DGX systems are robust building blocks for HPC and supercomputing, equipped with the latest generation of NVIDIA GPUs and known for their unrivaled processing power.

The proven standard for running AI

Built from the ground up, the NVIDIA DGX platform combines the best of NVIDIA software, infrastructure, and expertise in a modern, unified AI development solution that spans from the cloud to on-premises data centers.

  • 8x NVIDIA H100 GPUs with 640 gigabytes of total GPU memory
    18x NVIDIA® NVLink® connections per GPU, 900 gigabytes per second of bidirectional GPU-to-GPU bandwidth
  • 4x NVIDIA NVSwitches
    7.2 terabytes per second of bidirectional GPU-to-GPU bandwidth, 1.5x more than the previous generation
  • 10x NVIDIA ConnectX®-7 400-gigabit-per-second network interfaces
    1 terabyte per second of peak bidirectional network bandwidth
  • Dual Intel Xeon Platinum 8480C processors, 112 cores total, and 2 terabytes of system memory
    Powerful CPUs for the most intensive AI jobs
  • 30 terabytes of NVMe SSD storage
    High-speed storage for maximum performance
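
For readers who want to sanity-check this configuration on a running system, the short Python sketch below enumerates the visible GPUs and their memory. It assumes PyTorch with CUDA support is available (as it is in the NGC containers typically used on DGX); it is an illustration, not an official validation tool.

```python
# Minimal sketch: enumerate GPUs and total GPU memory on a DGX-class node.
# Assumes PyTorch with CUDA support (e.g. from an NGC container).
import torch

def summarize_gpus() -> None:
    count = torch.cuda.device_count()
    total_gib = 0.0
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        mem_gib = props.total_memory / 1024**3
        total_gib += mem_gib
        print(f"GPU {i}: {props.name}, {mem_gib:.0f} GiB")
    # On a DGX H100 this should report 8 GPUs and roughly 640 GB in total.
    print(f"{count} GPUs, ~{total_gib:.0f} GiB total GPU memory")

if __name__ == "__main__":
    summarize_gpus()
```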

Features

Several characteristics set NVIDIA DGX apart, each contributing to its status as a pioneering solution in the AI and HPC landscape.

Integrated AI Infrastructure

NVIDIA DGX is an integrated hardware and software solution, specifically engineered for AI workloads. It’s more than just a collection of GPUs – it’s a cohesive, purpose-built system optimized for AI performance.


Pre-configured Software Stack

NVIDIA DGX comes with NVIDIA AI Enterprise, a fully integrated and optimized software stack that includes AI frameworks, libraries, and software development kits. The DGX software stack is performance-tuned, tested, and optimized for DGX hardware to deliver maximum performance.
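
As a simple illustration, the snippet below shows how a user might confirm from Python which GPU-accelerated components of the stack are visible inside a DGX container. This is a hedged sketch that assumes a PyTorch-based NGC container; exact versions and components vary by release.

```python
# Sketch: report the GPU-stack versions exposed by a PyTorch NGC container.
# Assumes the container ships PyTorch built against CUDA, cuDNN and NCCL.
import torch

print("PyTorch:", torch.__version__)
print("CUDA (build version):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("NCCL available:", torch.distributed.is_nccl_available())
print("GPUs visible:", torch.cuda.device_count())
```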


Powerful GPU Technology

DGX systems leverage NVIDIA’s industry-leading GPU technology, designed for the most computationally intensive tasks. This includes the H100 Tensor Core GPUs, which offer significant advantages in AI model training and inference.
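
H100 Tensor Cores are exercised most effectively through reduced-precision math. The fragment below is a minimal, hedged illustration of a BF16 autocast training step in PyTorch on a CUDA device; the model and data are placeholders, and many operations (such as TF32 matmuls) already use Tensor Cores without any code changes.

```python
# Sketch: forward/backward pass under BF16 autocast so matmuls and activations
# can execute on H100 Tensor Cores. Model and data are placeholders.
import torch
import torch.nn as nn

device = "cuda"  # assumes an H100 (or other CUDA GPU) is present
model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(64, 4096, device=device)
target = torch.randint(0, 10, (64,), device=device)

with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = nn.functional.cross_entropy(model(x), target)

loss.backward()      # gradients are computed for the FP32 parameters
optimizer.step()
optimizer.zero_grad()
```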


Multi-GPU Scaling

DGX systems can effectively leverage multiple GPUs, thanks to NVIDIA’s NVLink and NVSwitch technologies. This allows performance to scale efficiently across GPUs, making DGX systems well suited to the most demanding AI workloads.
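
To make this concrete, the hedged sketch below spawns one process per GPU and performs an NCCL all-reduce, the collective at the heart of data-parallel training; on DGX systems NCCL routes this traffic over NVLink/NVSwitch automatically. The tensor size and rendezvous address are placeholder values, and this is illustrative rather than a benchmark.

```python
# Sketch: one process per GPU performing an NCCL all-reduce. On DGX systems
# NCCL carries this collective over NVLink/NVSwitch.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")  # placeholder rendezvous
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Each GPU contributes a ~256 MB tensor; afterwards every GPU holds the sum.
    t = torch.full((64 * 1024 * 1024,), float(rank), device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    torch.cuda.synchronize()
    if rank == 0:
        print(f"all-reduce across {world_size} GPUs, t[0] = {t[0].item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```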


POD Scalability

One of the most significant advantages of the DGX system is its ability to scale. Individual DGX units can be interconnected to form a DGX BasePOD (4, 8, 16, or 32 systems) and ultimately scale to a DGX SuperPOD (127 systems). These larger AI infrastructures rank among the world’s top supercomputers.

The NVIDIA DGX family provides the flexibility to start with a single system and grow as needs evolve. NVIDIA networking and interconnect technologies, such as InfiniBand and NVLink, provide extremely high-bandwidth communication between DGX systems, maintaining peak performance at SuperPOD scale.
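
The same collective code used within a single DGX extends across nodes in a BasePOD or SuperPOD: a launcher such as torchrun sets the rank and world-size environment variables, and NCCL keeps intra-node traffic on NVLink/NVSwitch while carrying inter-node traffic over InfiniBand. The sketch below is a hedged illustration of that multi-node initialization; it assumes it is started by such a launcher with one process per GPU.

```python
# Sketch: multi-node initialization as driven by a launcher (e.g. torchrun),
# which sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT.
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl", init_method="env://")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# A cross-node all-reduce: NVLink/NVSwitch inside each DGX, InfiniBand
# (via the ConnectX-7 adapters) between DGX systems.
t = torch.ones(1024 * 1024, device="cuda")
dist.all_reduce(t)
if dist.get_rank() == 0:
    print("world size:", dist.get_world_size(), "t[0] =", t[0].item())
dist.destroy_process_group()
```

In practice this would be launched with one process per GPU on each DGX system, for example via torchrun with --nnodes and --nproc_per_node set accordingly.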


Information

Proven reference architectures for AI infrastructure delivered with leading storage providers

NVIDIA DGX Systems

NVIDIA DGX H100 is a fully integrated hardware and software solution on which to build your AI Center of Excellence.

Download Datasheet

NVIDIA DGX SuperPOD

NVIDIA DGX SuperPOD delivers HPC and AI at data center scale. It supports hybrid deployments and offers leadership-class accelerated infrastructure and performance for the most challenging AI workloads, with industry-proven results.

Download Datasheet

Talk to an Expert

Let’s discuss how we can help you


Guaranteed: no automatic newsletter subscription.