AceleMax DGS-214A

2U Single AMD EPYC Processor 4x Double-Width / 8x Single-Width PCIe 4.0 GPU Server

  • Supports up to 4 double-width or 8 single-width PCIe GPUs in a 2U chassis
  • Supports one AMD EPYC™ 7002 or 7003 series processor
  • Designed for VDI, machine intelligence, deep learning, machine learning, artificial intelligence, neural network, advanced rendering and compute


The AceleMax DGS-214A is a 2U 4/8 PCIe 4.0 GPU server powered by a single AMD EPYC™ 7002 or 7003 series processor, delivering up to 2X the performance and 4X the floating-point capability of the previous-generation EPYC 7001 series.


Designed for AI, HPC and virtual desktop infrastructure (VDI) applications in data center or enterprise environments that require powerful CPU cores, multiple GPUs and fast transmission speeds, the DGS-214A delivers GPU-optimized performance with support for up to four high-performance double-width or eight single-width GPUs, including the latest NVIDIA A100 PCIe GPUs built on the NVIDIA Ampere architecture. This GPU density also benefits virtualization: GPU resources can be consolidated into a shared pool so users can allocate them more efficiently.


The DGS-214A also features up to 11 PCIe 4.0 slots for compute, graphics, storage and networking expansion. PCIe 4.0 provides transfer speeds of up to 16 GT/s per lane, double the bandwidth of PCIe 3.0, while delivering lower power consumption, better lane scalability and backwards compatibility.
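
To put that doubling in concrete terms, the short Python sketch below (illustrative only, not part of the product or its software) estimates the usable per-direction bandwidth of an x16 link from each generation's 8 GT/s and 16 GT/s line rates and 128b/130b encoding, before protocol overhead:

    def pcie_bandwidth_gb_s(transfer_rate_gt_s, lanes):
        """Approximate usable bandwidth in GB/s, ignoring TLP/packet overhead."""
        encoding_efficiency = 128 / 130   # PCIe 3.0 and 4.0 both use 128b/130b encoding
        bits_per_byte = 8
        return transfer_rate_gt_s * encoding_efficiency / bits_per_byte * lanes

    print(f"PCIe 3.0 x16: {pcie_bandwidth_gb_s(8, 16):.1f} GB/s")    # ~15.8 GB/s
    print(f"PCIe 4.0 x16: {pcie_bandwidth_gb_s(16, 16):.1f} GB/s")   # ~31.5 GB/s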


For networking, the DGS-214A accepts an OCP 3.0 network interface card supporting up to 200 Gigabit Ethernet to meet the demands of high-bandwidth applications. With a flexible chassis design, the DGS-214A accommodates up to eight hot-swappable 3.5-inch or 2.5-inch drives, four of which can be configured as NVMe SSDs.

NVIDIA A100 PCIe GPU

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration and flexibility to power the world’s highest-performing elastic data centers for AI, data analytics and HPC applications. As the engine of the NVIDIA data center platform, the A100 provides up to 20X higher overall performance and 2.5X higher AI performance than V100 GPUs, and can efficiently scale up to thousands of GPUs or be partitioned into seven isolated GPU instances with the new Multi-Instance GPU (MIG) capability to accelerate workloads of all sizes.
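
As an informal illustration of MIG partitioning, the Python sketch below assumes the pynvml (nvidia-ml-py) bindings and MIG instances already created with nvidia-smi; it simply enumerates the MIG devices exposed by the first GPU in the system and is a hypothetical example, not vendor-supplied tooling:

    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)             # first physical GPU
    current_mode, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
    print("MIG enabled:", current_mode == pynvml.NVML_DEVICE_MIG_ENABLE)

    # Walk the MIG slots; unpopulated slots raise an NVML error and are skipped.
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total / 2**30:.1f} GiB of dedicated memory")

    pynvml.nvmlShutdown()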


The NVIDIA A100 GPU features third-generation Tensor Core technology that supports a broad range of math precisions, providing a unified workload accelerator for data analytics, AI training, AI inference and HPC. It also introduces Multi-Instance GPU (MIG), which right-sizes GPU resources and runs up to seven simultaneous instances per GPU for optimal utilization; sparsity acceleration, which exploits sparsity in AI models for up to 2X AI performance; and third-generation NVLink and NVSwitch, which scale multiple GPUs efficiently into what is effectively one larger GPU with 2X the interconnect bandwidth of the V100 GPU. Accelerating both scale-up and scale-out workloads on one platform enables elastic data centers that can dynamically adjust to shifting application workload demands, simultaneously boosting throughput and driving down data center cost.
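
As a concrete example of the kind of work those mixed precisions accelerate, here is a minimal, hypothetical PyTorch training step using automatic mixed precision; the model, data and sizes are placeholders, not a benchmark or vendor sample:

    import torch
    import torch.nn as nn

    device = "cuda"
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()                    # rescales FP16 gradients
    inputs = torch.randn(64, 1024, device=device)
    targets = torch.randint(0, 10, (64,), device=device)

    # Forward pass runs in FP16 where safe; Tensor Cores handle the matmuls.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.cross_entropy(model(inputs), targets)

    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad(set_to_none=True)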


Combined with the NVIDIA software stack, the A100 GPU accelerates all major deep learning and data analytics frameworks and over 700 HPC applications. NVIDIA NGC, a hub for GPU-optimized software containers for AI and HPC, simplifies application deployments so researchers and developers can focus on building their solutions.


Applications:

AI, HPC, VDI, machine intelligence, deep learning, machine learning, artificial intelligence, neural networks, advanced rendering and compute.

Processor

  • Single AMD EPYC™ 7002 or 7003 series processor, 7nm, Socket SP3, up to 280W TDP
  • Up to 64 cores and 128 threads per processor, with up to 256MB L3 cache

Memory

  • 8x DDR4 DIMM slots
  • 8-Channel memory architecture

Graphics Processing Unit (GPU):

  • 4x NVIDIA A100, A40, A10, A30 or A16 (PCIe 4.0), V100/V100S, T4, or Quadro RTX 6000/8000 (passive PCIe) GPUs

Expansion Slots

  • 4x PCIe 4.0 x16 slots (x16 link) or 8x PCIe 4.0 x16 slots (x16 link)
  • 2x PCIe 4.0 x16 slots (x16 link) on butterfly riser
  • 1x PCIe 4.0 x8 slot (x8 link) on front riser, OCP 3.0 slot (x8 link), or Hyper M.2 card

Storage

  • 8x 2.5”/3.5” hot-swap storage bays (2x NVMe supported by default)
  • 1x on-board M.2 (up to 22110) by default; optional Hyper M.2 card supports up to 4x M.2 (up to 22110)

Network Controller

  • 2x 1GbE LAN ports
  • 1x GbE management LAN

Power Supply

1+1 redundant 2,200W PSUs, 80 PLUS Platinum

System Dimension

3.46″ x 17.22″ x 31.5″ / 88mm x 440mm x 800mm (H x W x D)

Optimized for Turnkey Solutions

Enable powerful design, training, and visualization with built-in software tools including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, NVIDIA CUDA Toolkit and NVIDIA DIGITS.
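
As a quick sanity check after a system like this is provisioned, a short TensorFlow snippet of the following sort (hypothetical, assuming TensorFlow is available from the bundled tools or an NGC container) confirms the GPUs are visible and exercises one of them:

    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print(f"Visible GPUs: {len(gpus)}")                    # expect 4 or 8 on the DGS-214A

    with tf.device("/GPU:0"):
        a = tf.random.normal((4096, 4096))
        print(tf.linalg.matmul(a, a).shape)                # runs the matmul on GPU 0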