FREMONT, CA, November 17, 2020 – AMAX’s HPC and AI Solutions Group announced it is showcasing an all-new GPU POD Reference Architecture at SC20 from November 17 to 19. AMAX’s GPU POD Reference Architecture incorporates best-of-breed compute, networking, storage, power, and cooling to deliver the fastest application performance and meet the demands of evolving AI workloads at scale.
As the compute block of the AMAX GPU POD Reference Architecture, the AceleMax GPU platforms provide single- and dual-socket AMD EPYC™ 7002 CPU options and four or eight NVIDIA® A100 GPUs for up to 10 PetaOPS of AI performance, with direct-attach PCI-E 4.0 x16 CPU-to-GPU lanes for the lowest latency and highest bandwidth. These systems support up to two additional high-performance PCI-E 4.0 expansion slots for options such as SAS interface cards and NVIDIA® Mellanox® 200 Gb/s InfiniBand or Ethernet adapters, meeting the demands of AI workloads with the highest bandwidth, lowest latency, and maximum concurrency for full GPU resource utilization.
AMAX’s StorMax all-flash storage solutions feature Excelero NVMesh, an intelligent storage management layer that abstracts the underlying hardware with CPU offload, and 200 Gb/s NVMe over Fabrics on InfiniBand with NVIDIA Mellanox ConnectX-6 adapters. The StorMax storage blocks in the GPU POD Reference Architecture are the highest-performance, most secure, and most scalable in their class, maximizing the utilization of NVIDIA A100 GPUs and the low-latency, high-IOPS/bandwidth benefits of NVMe in a distributed and linearly scalable architecture.
“We’re thrilled AMAX selected Excelero for their GPU POD architecture,” said Sven Breuner, Field CTO at Excelero. “There is no easier, faster, or more flexible way to deploy a turnkey GPU computing solution that solves the toughest storage problems in AI: small files, random and concurrent access, and near-zero latency requirements. Our joint solution makes it easy for customers of any size to quickly take advantage of the latest GPU, networking, and storage technologies.”
The AMAX GPU POD delivers a validated, turnkey parallel compute solution and provides scalable, high-performance shared file access that is ideal for AI workloads of all kinds. View AMAX’s GPU POD Reference Architecture to see how fully integrated, ready-to-deploy offerings can simplify and accelerate your data center AI deployments. Learn more at our SC20 virtual booth and contact us at email@example.com for a technical consultation.