FREMONT, CA, October 5, 2020 – AMAX's HPC and AI Solutions Group announced its series of next-generation NVIDIA A100-powered server systems that bring AI training, inference, and analytics into a consolidated yet scalable platform. The flexible system design accommodates standard full-length, full-height 250W PCIe 4.0 cards or 400W SXM A100 modules for high-capacity, multi-instance burstable workloads, making these systems ideal building blocks for modern AI data centers.
AMAX's AceleMax reference-design data center servers offer single- and dual-socket AMD EPYC 7002 CPU options and four or eight NVIDIA A100 GPUs, delivering up to 10 PetaOPS of AI performance via direct-attach PCIe 4.0 x16 CPU-to-GPU lanes for the lowest latency and highest bandwidth. These systems also support up to two additional high-performance PCIe 4.0 expansion slots for a variety of uses, including SAS interface cards and NVIDIA Mellanox 200 Gb/s InfiniBand or Ethernet adapters, to meet the demands of high-bandwidth applications.
"Enterprise value improves when data science teams and IT teams align to improve overall productivity and results without having to worry about the infrastructure," said Dr. Rene Meyer, VP of Technology and Product Development at AMAX. "NVIDIA is taking AI computing to new levels through the power of collaborative partner ecosystems that work. We have a long partner history together, and our rack integration hubs have the capacity to build and integrate the next generation of A100-based solutions into data centers of all sizes."
“The AceleMax and its upgraded features help to simplify and improve AI computing productivity in enterprise AI environments,” said Paresh Kharya, senior director of product management for accelerated computing at NVIDIA. “Adding in the power and flexibility of the NVIDIA A100 GPU and NVIDIA InfiniBand and Ethernet networking enables AMAX customers to optimize their enterprises for high utilization and lower cost.”
AMAX's AceleMax series of NVIDIA A100 GPU systems, powered by AMD EPYC 7002 series processors, includes:
- AceleMax DGS-214A: 2U single-socket server with 8x 3.5"/2.5" hot-swap SSD/HDD drive bays and 4x NVIDIA A100 PCIe GPUs, with up to 4 PetaOPS of performance
- AceleMax DGS-224A: 2U dual-socket server with 8x 3.5"/2.5" hot-swap SSD/HDD drive bays, 2x SATA-DOM, and 4x NVIDIA A100 PCIe GPUs, with up to 4 PetaOPS of performance
- AceleMax DGS-224AS: 2U dual-socket server with 4x 2.5" hot-swap SATA/NVMe hybrid drive bays and 4x NVIDIA A100 SXM GPUs, with up to 4 PetaOPS of performance
- AceleMax DGS-428A: 4U dual-socket server with up to 24x 2.5" hot-swap SAS/SATA drive bays, 4x 2.5" NVMe drive bays, and 8x NVIDIA A100 PCIe GPUs, with up to 10 PetaOPS of performance
- AceleMax DGS-428AS: 4U dual-socket server with up to 6x U.2 NVMe and 2x M.2 NVMe drive bays and 8x NVIDIA A100 SXM GPUs, with up to 10 PetaOPS of performance
As an NVIDIA Elite Partner, AMAX offers a comprehensive line of GPU-integrated solutions optimized for deep learning at any scale. To schedule a technical consultation, please contact AMAX at firstname.lastname@example.org.