AceleMax DGS-508

Disaggregated GPU Solution

AMAX’s AceleMax™ DGS-508 disaggregated GPU solution consists of the AE-2484 2U 4-node server and the DGS-308 HPC PCIe expansion chassis. The GPUs reside in shared resource pools (GPU chassis), disaggregated from the physical servers via a PCIe fabric. Each compute node gains an additional four PCIe Gen4 x16 slots for GPUs, FPGAs, SSDs, and NICs.


The AceleMax™ DGS-508 pairs the AE-2484 2U 4-node server with the DGS-308 HPC PCIe expansion chassis; the GPUs reside in shared resource pools (GPU chassis) and are disaggregated from the physical servers over a PCIe fabric. Each compute node gains an additional four PCIe Gen4 x16 slots for GPUs, FPGAs, accelerators, SSDs, and NICs. The expansion chassis is purpose-built for high-performance PCIe cards whose thermal and power requirements are critical. GPUs can be assigned or reassigned to any connected server to rapidly meet the needs of changing AI applications or different AI development stages, and users can provision GPUs from the shared resource pool through the management GUI or API, as sketched below.
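The same provisioning workflow can be scripted. The snippet below is a minimal sketch against a hypothetical REST management API; the endpoint paths, payload fields, hostname, and credentials are placeholders for illustration, not the documented DGS-508 interface.

    # Minimal sketch: list free GPUs in the shared pool, then assign two of
    # them to a compute node. All endpoints and field names are hypothetical.
    import requests

    MGMT_URL = "https://mgmt.example.local/api/v1"     # placeholder management endpoint
    HEADERS = {"Authorization": "Bearer <api-token>"}   # placeholder credential

    # Query the pool for unassigned GPUs (hypothetical route).
    pool = requests.get(f"{MGMT_URL}/gpu-pool", params={"state": "free"},
                        headers=HEADERS, timeout=10)
    pool.raise_for_status()
    free_gpus = pool.json()["gpus"]

    # Assign two free GPUs to a compute node (hypothetical route).
    assignment = {"node": "ae2484-node3", "gpus": [g["id"] for g in free_gpus[:2]]}
    resp = requests.post(f"{MGMT_URL}/assignments", json=assignment,
                         headers=HEADERS, timeout=10)
    resp.raise_for_status()
    print("Assigned GPUs:", assignment["gpus"])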


This solution increases resource utilization in mixed-use data centers where artificial intelligence, machine learning, deep learning, inference, data science, simulation, and image processing applications may run on the same hardware.

2U4N DP AMD EPYC Processor Server

Designed to drive simplicity in manufacturing, the ServMax™ delivers dual-socket AMD performance in a dense 2U design. Power your mission-critical data center workloads with a choice of 24x 2.5” NVMe drives or 12x 3.5” SATA SSDs.

High Density Design for 2U4N NVMe Enclosure 

  • Half-width, dual-socket high performance computing node
  • AMD EPYC processors with up to 512 cores
  • Up to 4TB of memory
  • Compatible with Milan processors

Simplified Design for Field Service

  • Cable-less design for system assembly
  • Tool-less design for node and fan module assembly
  • Industry-standard connectors
  • Simple mechanical structure

Low Cost Consideration

  • Competitive TCO for high-performance computing
  • No PCH required (AMD Rome/Milan platform)

Simplified Design for Manufacturing

  • No PTH (plated through-hole) PCBA process
  • Easy maintenance

Processor

  • One or two AMD EPYC™ 7003 Series (Milan) processors per node, 180W TDP

Memory

  • Up to 16x DDR4 RDIMM/LRDIMM per node (8x per socket)
  • Supports 1x NVDIMM (CPU0) per node

Storage:

Front

  • SKU1: 24x 2.5” hot-swap NVMe drives per chassis (6x drives per node)
  • SKU2: 12x 3.5” hot-swap SATA drives per chassis (3x drives per node)

Internal

  • 1x SATA/NVMe M.2 (2280/22110)
  • 1x SATA M.2 (2280/22110)

Rear

  • 1x 2.5” hot-swap 15mm NVMe U.2 per node (optional; occupies one PCIe LP slot)
  • The rear 2.5” NVMe drive is supported only in single-CPU configurations; high-power CPUs might not be supported

Expansion Slots

  • Up to 2x PCIe Gen4 x16 LP slots per node (varies by SKU)
  • 1x OCP 3.0 PCIe Gen4 x16 NIC slot per node

Rear Panel

  • 1x RJ45 for BMC dedicated management
  • 1x VGA
  • 2x USB 3.0
  • 1x UID Button, LED

Front Panel

  • 4x Power Button / LED (Green/OFF)
  • 4x UID Button / LED (Blue/OFF)
  • 4x System Health LED (OFF/Amber)

Management

  • 1x ASPEED AST2500 BMC per node
  • Supports Intelligent Platform Management Interface (IPMI) 2.0; see the sensor-query sketch below
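Because each node exposes a standards-based IPMI 2.0 BMC on a dedicated management port, out-of-band health data can be pulled with any IPMI client. Below is a minimal sketch using the common ipmitool utility from Python; the BMC address and credentials are placeholders for your environment.

    # Minimal sketch: read the sensor data repository from a node's BMC over IPMI 2.0.
    # Assumes ipmitool is installed; host, user, and password are placeholders.
    import subprocess

    BMC_HOST = "10.0.0.101"   # placeholder BMC IP (dedicated RJ45 management port)
    BMC_USER = "admin"        # placeholder credentials
    BMC_PASS = "password"

    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS, "sdr", "list"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # one line per sensor: name, reading, status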

TPM

  • 1x TPM 2.0 Module

Certification

  • FCC, CE, CCC, UL, CB

Fans

  • 3x 40x56mm fans per node (12x per chassis)

PSUs

  • 2x 3000W (2200W) redundant, Platinum-level certified PSUs

Optimized for Turnkey Solutions

Enable powerful design, training, and visualization with built-in software tools including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, the NVIDIA CUDA Toolkit, and NVIDIA DIGITS.
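As a quick check after GPUs are provisioned to a node, any of the bundled frameworks can confirm that the devices are visible. A minimal sketch, assuming a CUDA-enabled TensorFlow build on the compute node:

    # List the GPUs visible to TensorFlow on this node.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print(f"Visible GPUs: {len(gpus)}")
    for gpu in gpus:
        print(" ", gpu.name)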