2U Single AMD EPYC Processor 4x Double-Width / 8x Single-Width PCIe 4.0 GPU Server
- Supports up to 4 double-width or 8 single-width PCIe GPUs in a 2U chassis
- Supports a single AMD EPYC™ 7002 or 7003 series processor
- Designed for VDI, machine intelligence, deep learning, machine learning, artificial intelligence, neural networks, advanced rendering and compute
Reference # Q712614
The AceleMax DGS-214A is a 2U 4/8-slot PCIe 4.0 GPU server powered by a single-socket AMD EPYC™ 7002 or 7003 series processor, delivering up to 2X the performance and 4X the floating-point capability of the previous 7001 generation.
Designed for AI, HPC and virtual desktop infrastructure (VDI) applications in data center or enterprise environments requiring powerful CPU cores, multiple GPU support and faster transmission speeds, the DGS-214A delivers GPU-optimized performance with support for up to four high-performance double-slot or eight single-slot GPUs, including the latest NVIDIA A100 PCIe GPUs built on the NVIDIA Ampere architecture. This performance also benefits virtualization: consolidating GPU resources into a shared pool enables them to be used more efficiently.
DGS-214A also features up to 11 PCIe 4.0 slots for compute, graphics, storage and networking expansion. PCIe 4.0 provides transfer speeds of up to 16 GT/s — double the bandwidth of PCIe 3.0 — and delivers lower power consumption, better lane scalability and backwards compatibility.
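The quoted 16 GT/s figure translates into usable per-direction bandwidth once the PCIe 4.0 128b/130b line encoding is accounted for. A quick back-of-the-envelope sketch (the function name and structure here are illustrative, not from any vendor tool):

```python
# Back-of-the-envelope PCIe bandwidth: transfer rate (GT/s) times encoding
# efficiency gives usable payload bits per lane per second.

def pcie_bandwidth_gbps(gt_per_s: float, enc_num: int, enc_den: int, lanes: int) -> float:
    """Usable bandwidth in GB/s for one direction of a PCIe link."""
    bits_per_lane = gt_per_s * 1e9 * enc_num / enc_den  # payload bits/s per lane
    return bits_per_lane * lanes / 8 / 1e9              # convert to GB/s

gen3_x16 = pcie_bandwidth_gbps(8.0, 128, 130, 16)   # PCIe 3.0: 8 GT/s, 128b/130b
gen4_x16 = pcie_bandwidth_gbps(16.0, 128, 130, 16)  # PCIe 4.0: 16 GT/s, 128b/130b

print(f"PCIe 3.0 x16: {gen3_x16:.2f} GB/s")  # ~15.75 GB/s
print(f"PCIe 4.0 x16: {gen4_x16:.2f} GB/s")  # ~31.51 GB/s
```

Because PCIe 3.0 and 4.0 share the same 128b/130b encoding, doubling the transfer rate doubles the usable bandwidth exactly, matching the "double the bandwidth" claim above.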
For networking, the DGS-214A supports an OCP 3.0 network interface card with up to 200 Gigabit Ethernet to meet the demands of high-bandwidth applications. With a flexible chassis design, the DGS-214A accommodates up to eight hot-swappable 3.5-inch or 2.5-inch drives, four of which can be configured as NVMe SSDs.
NVIDIA A100 PCIe GPU
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration and flexibility to power the world’s highest-performing elastic data centers for AI, data analytics and HPC applications. As the engine of the NVIDIA data center platform, the A100 GPU provides up to 20X higher AI performance and 2.5X higher HPC performance than V100 GPUs, and can efficiently scale up to thousands of GPUs or be partitioned into seven isolated GPU instances with the new Multi-Instance GPU (MIG) capability to accelerate workloads of all sizes.
The NVIDIA A100 GPU features third-generation Tensor Core technology that supports a broad range of math precisions, providing a unified accelerator for data analytics, AI training, AI inference and HPC workloads. It also introduces Multi-Instance GPU (MIG), which right-sizes GPU resources for optimal utilization with up to seven simultaneous instances per GPU; sparsity acceleration, which harnesses sparsity in AI models for up to 2X AI performance; and third-generation NVLink and NVSwitch, which enable efficient scaling into a single "super GPU" with 2X more bandwidth than the V100 generation. Accelerating both scale-up and scale-out workloads on one platform enables elastic data centers that dynamically adjust to shifting application demands, simultaneously boosting throughput and driving down data center costs.
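The seven-way MIG split can be sketched with simple arithmetic. Assuming the 40GB A100 PCIe and its smallest profile (1g.5gb), the GPU is divided into 7 compute slices and 8 memory slices, so each instance pairs one compute slice with one ~5GB memory slice:

```python
# Illustrative sketch of A100 MIG partitioning (assumes the 40GB A100 PCIe).
# MIG divides the GPU into 7 compute slices and 8 memory slices; the smallest
# profile, 1g.5gb, pairs one compute slice with one ~5GB memory slice.

TOTAL_MEMORY_GB = 40
COMPUTE_SLICES = 7   # maximum number of isolated GPU instances
MEMORY_SLICES = 8

mem_per_slice_gb = TOTAL_MEMORY_GB / MEMORY_SLICES  # 5.0 GB per memory slice

# Seven 1g.5gb instances, e.g. one per inference tenant:
instances = [{"profile": "1g.5gb", "memory_gb": mem_per_slice_gb}
             for _ in range(COMPUTE_SLICES)]

print(f"{len(instances)} instances x {mem_per_slice_gb:.0f} GB each")  # 7 instances x 5 GB each
```

Each instance runs with its own isolated memory, cache and compute, which is what makes the shared-pool VDI and inference consolidation described above practical.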
Combined with the NVIDIA software stack, the A100 GPU accelerates all major deep learning and data analytics frameworks and over 700 HPC applications. NVIDIA NGC, a hub for GPU-optimized software containers for AI and HPC, simplifies application deployments so researchers and developers can focus on building their solutions.
AI, HPC, VDI, machine intelligence, deep learning, machine learning, artificial intelligence, neural networks, advanced rendering and compute.
- Single AMD EPYC™ 7002 or 7003 series processor, 7nm, Socket SP3, up to 64 cores and 128 threads, up to 256MB L3 cache, up to 280W TDP
- 8x DDR4 DIMM slots
- 8-Channel memory architecture
Graphics Processing Unit (GPU):
- Up to 4 double-width or 8 single-width PCIe GPU cards in a 2U chassis, including NVIDIA A100 PCIe GPUs built on the NVIDIA Ampere architecture, as well as RTX 6000, RTX 8000 (passive) and A40 GPUs
- With four A100 PCIe GPUs in a 2U chassis: up to 27,648 FP32 CUDA Cores, 13,824 FP64 CUDA Cores, 1,728 Tensor Cores and 160GB of GPU memory, with peak performance of 38.8 TF FP64, 78 TF FP64 Tensor Core, 78 TF FP32, 1,248 TF BF16 Tensor Core, 1,248 TF FP16 Tensor Core, and 4,992 TOPS INT8 Tensor Core inference
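The aggregate figures above are simply four times NVIDIA's per-GPU specifications for the A100 PCIe 40GB. A quick sketch of the multiplication (the dictionary layout is illustrative):

```python
# Per-GPU specs for the NVIDIA A100 PCIe 40GB (per NVIDIA's datasheet),
# multiplied by the four double-width GPUs the 2U chassis can host.

A100_PCIE = {
    "fp32_cuda_cores": 6912,
    "fp64_cuda_cores": 3456,
    "tensor_cores": 432,
    "fp64_tf": 9.7,            # peak FP64, TFLOPS
    "fp64_tensor_tf": 19.5,    # peak FP64 Tensor Core, TFLOPS
    "fp32_tf": 19.5,           # peak FP32, TFLOPS
    "bf16_tensor_tf": 312,     # peak BF16 Tensor Core, TFLOPS
    "fp16_tensor_tf": 312,     # peak FP16 Tensor Core, TFLOPS
    "int8_tensor_tops": 1248,  # peak INT8 Tensor Core, TOPS
    "memory_gb": 40,
}

NUM_GPUS = 4
system_totals = {k: v * NUM_GPUS for k, v in A100_PCIE.items()}

print(system_totals["fp32_cuda_cores"])  # 27648
print(system_totals["memory_gb"])        # 160
```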
- On-board Aspeed AST2500 graphics controller
- 4x PCI-E Gen4 x16 slots (x16 link) or 8x PCI-E Gen4 x16 slots (x16 link)
- 2x PCI-E Gen4 x16 slots (x16 link) for butterfly riser
- 1x PCI-E Gen4 x8 slot (x8 link) for front riser, OCP 3.0 slot (x8 link) or Hyper M.2 card
- 8x 2.5”/3.5” hot-swap storage bays (2x NVMe supported by default)
- 1x on-board M.2 (up to type 22110) by default; optional Hyper M.2 card supports up to 4x M.2 (up to type 22110)
- 2x 1GbE LAN ports
- 1x GbE management LAN
1+1 redundant 2,200W PSUs, 80 PLUS Platinum
3.46″ x 17.22″ x 31.5″ / 88mm x 440mm x 800mm (H x W x D)
Optimized for Turnkey Solutions
Enable powerful design, training and visualization with built-in software tools including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, the NVIDIA CUDA Toolkit and NVIDIA DIGITS.