AceleMax GPU Server

AceleMax DGS-428A

4U Dual AMD EPYC Processor 8x Double-Width PCIe 4.0 GPU Server

  • Supports up to 8 double-width PCIe GPUs in a 4U chassis
  • Supports two AMD EPYC™ 7002 or 7003 series processors
  • Designed for VDI, machine intelligence, deep learning, machine learning, artificial intelligence, neural networks, advanced rendering and compute workloads


The AceleMax DGS-428A is a 4U PCIe 4.0 GPU server powered by AMD EPYC™ 7002 or 7003 series dual-socket processors, delivering up to 2X the performance and 4X the floating-point capability compared to the previous 7001 generation.


Designed for AI, HPC and virtual desktop infrastructure (VDI) applications in data center or enterprise environments that require powerful CPU cores, multiple GPUs and faster transmission speeds, the DGS-428A delivers GPU-optimized performance with support for up to eight high-performance, direct-attached, double-slot GPUs, including the latest NVIDIA A100 PCIe GPUs built on the NVIDIA Ampere architecture. This GPU density also benefits virtualization: GPU resources can be consolidated into a shared pool and used more efficiently.


The DGS-428A also features up to 11 PCIe 4.0 slots and up to 160 PCIe lanes for compute, graphics, storage and networking expansion. PCIe 4.0 provides transfer speeds of up to 16 GT/s — double the bandwidth of PCIe 3.0 — and delivers lower power consumption, better lane scalability and backwards compatibility.
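
As a rough illustration of where these figures come from, the short sketch below estimates per-lane and x16 link bandwidth for PCIe 3.0 and 4.0 from their transfer rates and 128b/130b line encoding (a back-of-the-envelope calculation, not a vendor specification):

```python
# Back-of-the-envelope PCIe bandwidth estimate. Both PCIe 3.0 and 4.0 use
# 128b/130b encoding; higher-level protocol overhead is ignored here.

def pcie_bandwidth_gb_s(transfer_rate_gt_s: float, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s, per direction."""
    encoding_efficiency = 128 / 130                    # 128b/130b line code
    bits_per_second = transfer_rate_gt_s * 1e9 * encoding_efficiency * lanes
    return bits_per_second / 8 / 1e9                   # bits -> GB/s

for gen, rate in (("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)):
    per_lane = pcie_bandwidth_gb_s(rate, 1)
    x16 = pcie_bandwidth_gb_s(rate, 16)
    print(f"{gen}: ~{per_lane:.2f} GB/s per lane, ~{x16:.1f} GB/s per x16 link "
          f"per direction (~{2 * x16:.0f} GB/s bidirectional)")
```

Run as written, this gives roughly 31.5 GB/s per direction (about 63 GB/s bidirectional) for a PCIe 4.0 x16 link, which is the basis for the "up to 64GB/s" figure quoted for the EPYC processors below, and exactly twice the corresponding PCIe 3.0 numbers.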


For networking, the DGS-428A supports a flexible AIOM/OCP 3.0 networking card for up to 100 Gigabit Ethernet to meet the demands of high-bandwidth applications. For storage, it accommodates up to 24 x 2.5″ SAS/SATA drive bays, with 4 x 2.5″ SATA drives and 4 x 2.5″ NVMe SSDs supported natively.

Built with AMD EPYC™ 7003 Series Processors

Providing incredible compute, IO and bandwidth capability – designed to meet the huge demand for more compute in big data analytics, HPC and cloud computing.

  • Built on 7nm advanced process technology, allowing for denser compute capabilities with lower power consumption
  • Up to 64 cores per CPU, built using high-performance Zen 2 (7002 series) or Zen 3 (7003 series) cores and AMD’s innovative chiplet architecture
  • Supporting PCIe Gen 4.0 with a bandwidth of up to 64GB/s per x16 link, twice that of PCIe Gen 3.0
  • Embedded security protection to help defend your CPU, applications, and data

NVIDIA A100 PCIe GPU

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration and flexibility to power the world’s highest-performing elastic data centers for AI, data analytics and HPC applications. As the engine of the NVIDIA data center platform, the A100 GPU provides up to 20X higher performance and 2.5X the AI performance of V100 GPUs, and can efficiently scale up to thousands of GPUs or be partitioned into seven isolated GPU instances with the new Multi-Instance GPU (MIG) capability to accelerate workloads of all sizes.


The NVIDIA A100 GPU features third-generation Tensor Core technology that supports a broad range of math precisions, providing a unified workload accelerator for data analytics, AI training, AI inference and HPC. It also supports new features such as Multi-Instance GPU (MIG), delivering optimal utilization with right-sized GPU instances and up to seven simultaneous instances per GPU; sparsity acceleration, harnessing sparsity in AI models for up to 2X AI performance; and third-generation NVLink and NVSwitch, delivering efficient scaling that lets multiple GPUs act as one larger accelerator, with 2X more interconnect bandwidth than the V100 GPU. Accelerating both scale-up and scale-out workloads on one platform enables elastic data centers that can dynamically adjust to shifting application workload demands, simultaneously boosting throughput and driving down data center cost.
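
Since MIG partitioning is handled by NVIDIA's driver and management tooling rather than by the server itself, the snippet below is only a minimal sketch of how a host could enumerate installed GPUs and check whether MIG mode is enabled, using the NVML Python bindings (pynvml); the constant names and return formats assume a reasonably recent NVIDIA driver and pynvml release.

```python
# Minimal sketch: list GPUs and report MIG mode via the NVML Python bindings
# (pip install nvidia-ml-py). Creating or destroying MIG instances is normally
# done by an administrator with nvidia-smi; this only inspects current state.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):                   # older pynvml returns bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        try:
            current, _pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig = "not supported"                     # e.g. pre-Ampere GPUs
        print(f"GPU {i}: {name}, {mem.total / 2**30:.0f} GiB, MIG {mig}")
finally:
    pynvml.nvmlShutdown()
```

On an A100 with MIG enabled, each physical GPU can then expose up to seven independent GPU instances to containers or virtual machines.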

SYSTEM

4U Rackmount

PROCESSORS

Dual AMD EPYC™ 7002 or 7003 Processors

GPU

Supports 8 double-width GPUs

MEMORY

Up to 32 DIMM slots, up to 8TB of DDR4-3200 memory

DRIVES

4x SATA + 4x U.2 NVMe

I/O

9x PCIe 4.0 x16 + 1x PCIe 4.0 x8
Option: 10x PCIe 4.0 x16 AIOM NIC

COOLING FANS

8x removable heavy-duty fans

POWER SUPPLIES

2000W (2+2) redundant power supplies, Titanium level

Processor

  • Dual AMD EPYC™ 7002/7003 Series Processors, up to 280W TDP
  • Socket SP3
  • Up to 64 cores, 128 threads per processor

Memory

  • 32 x DIMM slots
  • Up to 8TB 3DS ECC DDR4-3200MHz RDIMM/LRDIMM
  • 8-Channel memory architecture

Graphics Processing Unit (GPU)

  • Up to 8 double-width PCIe 4.0 GPU cards in a 4U chassis, including NVIDIA A100 and A40 PCIe GPUs built on the NVIDIA Ampere architecture, as well as V100, V100S, T4 and Quadro® RTX GPUs
  • With eight A100 PCIe GPUs in a 4U chassis: up to 55,296 FP32 CUDA cores, 27,648 FP64 CUDA cores, 3,456 Tensor Cores and 320GB of GPU memory, delivering 77.6 TF peak FP64 double-precision performance, 156 TF peak FP64 Tensor Core double-precision performance, 156 TF peak FP32 single-precision performance, 2,496 TF peak BF16 performance, 2,496 TF peak FP16 Tensor Core half-precision performance and 9,984 TOPS peak INT8 Tensor Core inference performance (see the worked totals after this list)
  • On-board Aspeed AST2500 graphics controller
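
These aggregate figures are simply the published per-GPU NVIDIA A100 PCIe (40GB) numbers multiplied by the eight cards the chassis can hold; the quick sanity check below assumes those per-GPU values, with the INT8 figure taken at the with-sparsity rate:

```python
# Sanity check: scale per-GPU NVIDIA A100 PCIe (40GB) figures to 8 cards.
# Per-GPU values follow NVIDIA's published A100 specifications; the INT8
# TOPS number is the rate with structural sparsity enabled.
A100_PCIE_40GB = {
    "FP32 CUDA cores":              6912,
    "FP64 CUDA cores":              3456,
    "Tensor Cores":                 432,
    "FP64 peak (TF)":               9.7,
    "FP64 Tensor Core peak (TF)":   19.5,
    "FP32 peak (TF)":               19.5,
    "BF16 Tensor Core peak (TF)":   312,
    "FP16 Tensor Core peak (TF)":   312,
    "INT8 Tensor Core peak (TOPS)": 1248,   # with sparsity
    "GPU memory (GB)":              40,
}

NUM_GPUS = 8
for metric, per_gpu in A100_PCIE_40GB.items():
    print(f"{metric}: {per_gpu * NUM_GPUS:g} across {NUM_GPUS} GPUs")
```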

Expansion Slots

  • 10 x PCIe 4.0 x16 (FH, FL) slots
  • 1 x PCIe 4.0 x8 (FH, FL) slot

Storage

  • Up to 24 x 2.5″ SAS/SATA drive bays
  • 4 x 2.5″ SATA supported natively
  • 4 x 2.5″ NVMe supported natively

Network Controller

  • 2 x RJ45 GbE LAN ports (rear)
  • 1 x GbE management LAN

Power Supply

2,000W Titanium-level redundant power supplies with PMBus

System Dimension

7″ x 17.2″ x 29″ / 178mm x 437mm x 737mm (H x W x D)