Aug 18, 2025 4 min read

AI / LLM Solutions for Enterprise

Explore AI / LLM RackScale Solutions

Liquid- and air-cooled rack designs built for large-scale training and inference.

Solution Brief

Purpose-built for AI / LLM Workloads

Large Memory Pool

Up to 18.4 TB HBM3e per rack supports long context windows and large models, with NVLink and NVSwitch ensuring GPUs stay fully utilized during training.

High Speed Networking

800 Gbps InfiniBand per rack delivers low-latency communication and rapid synchronization across nodes, ideal for distributed training and inference workloads.

Scalable GPU Density

Air-cooled racks scale up to 32 GPUs with 9.2 TB of HBM3e. Liquid-cooled racks double capacity to 64 GPUs with 18.4 TB of HBM3e in the same compact rack footprint.

Case Study

Training advanced voice models at scale

A leading Gen AI developer partnered with AMAX to deploy a DGX SuperPOD built on DGX B200 systems. The cluster delivers 4.6 exaflops for training and 9.2 exaflops for inference, providing a scalable foundation for fast iteration on voice synthesis and multimodal workloads.

Read Full Story
Speak to an AMAX representative now.
Contact Us
Don't see the right solution for you here?
Tell us more