The Foundation for Enterprise AI
High-density, rack-scale solution engineered for large-scale enterprise AI workloads, scaling to 32 GPUs across NVIDIA HGX™ B300 platforms
Tensor/Transformer Cores
Up to 576 specialized cores that accelerate transformer operations (e.g., attention layers) critical to LLM performance.
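For orientation, the operation these cores accelerate is scaled-dot-product attention, which reduces to a few dense matrix products. A minimal PyTorch sketch (shapes and dtypes are illustrative assumptions, not vendor code):

```python
import torch
import torch.nn.functional as F

# Minimal scaled-dot-product attention, the core operation that
# Tensor Cores accelerate inside every transformer layer.
# Shapes are illustrative: batch=1, heads=8, seq_len=1024, head_dim=128.
q = torch.randn(1, 8, 1024, 128, device="cuda", dtype=torch.bfloat16)
k = torch.randn(1, 8, 1024, 128, device="cuda", dtype=torch.bfloat16)
v = torch.randn(1, 8, 1024, 128, device="cuda", dtype=torch.bfloat16)

# F.scaled_dot_product_attention dispatches to fused kernels
# (e.g., FlashAttention) that run on Tensor Cores when available.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 1024, 128])
```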
High Memory Capacity
Up to 8.4 TB of HBM3e memory, enabling efficient handling of massive models and long context windows without bottlenecks.
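A rough sizing sketch shows why this capacity matters; all parameter counts and byte widths below are illustrative assumptions, not measured figures:

```python
# Rough memory sizing for serving a large model, illustrating why
# multi-terabyte HBM capacity matters. All figures are assumptions.
params_b = 405            # model size in billions of parameters (e.g., Llama 3.1 405B)
bytes_per_param = 1       # FP8 weights: 1 byte per parameter

weights_gb = params_b * 1e9 * bytes_per_param / 1e9

# KV cache grows with context length: 2 (K and V) * layers * kv_heads
# * head_dim * bytes, per token. Values below approximate Llama 3.1 405B.
layers, kv_heads, head_dim, kv_bytes = 126, 8, 128, 2   # FP16 KV cache
context_tokens = 128_000
kv_gb = 2 * layers * kv_heads * head_dim * kv_bytes * context_tokens / 1e9

print(f"weights: ~{weights_gb:.0f} GB, KV cache per sequence: ~{kv_gb:.0f} GB")
# weights: ~405 GB, KV cache per sequence: ~66 GB
```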
In-Node Bandwidth
NVIDIA NVLink/NVSwitch interconnect keeps GPUs fully utilized across large-scale training workloads.
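In training, this bandwidth is consumed mostly by collectives such as all-reduce. A minimal sketch using PyTorch's NCCL backend, which routes intra-node traffic over NVLink/NVSwitch automatically (assumes a single node with 8 GPUs and a launch via torchrun):

```python
import os
import torch
import torch.distributed as dist

def main():
    # Launched via `torchrun --nproc_per_node=8 allreduce_demo.py`;
    # torchrun sets RANK/WORLD_SIZE/LOCAL_RANK in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # 1 GiB of gradients per GPU; NCCL moves this over NVLink/NVSwitch.
    grads = torch.randn(256 * 1024 * 1024, device="cuda")
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)  # sum across all GPUs
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print("all-reduce complete across", dist.get_world_size(), "GPUs")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```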
High-Speed Networking
800 Gbps NVIDIA InfiniBand for rapid multi-node synchronization across distributed AI clusters.
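For a back-of-the-envelope sense of what 800 Gbps means for multi-node training, the sketch below estimates one gradient synchronization under a simplified ring all-reduce model; the model size, precision, and single-port assumption are illustrative, and real systems overlap communication with compute:

```python
# Back-of-the-envelope gradient synchronization time over 800 Gbps links.
# Simplified ring all-reduce model; real systems overlap this with compute.
model_params = 70e9            # assumed 70B-parameter model
bytes_per_grad = 2             # BF16 gradients
payload_gb = model_params * bytes_per_grad / 1e9   # 140 GB of gradients

link_gbps = 800                # per-port InfiniBand line rate
link_gBps = link_gbps / 8      # 100 GB/s

# Ring all-reduce moves ~2x the payload over the slowest link.
sync_seconds = 2 * payload_gb / link_gBps
print(f"~{sync_seconds:.1f} s per full gradient sync on one 800 Gbps port")
# ~2.8 s
```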
Built for Modern AI Workloads
AI workloads now span everything from large language model training in cloud environments to scientific computing and production-scale inference across enterprise, research, financial, and automotive applications. The AceleMax® AXG-828U with NVIDIA HGX B300 provides the compute foundation for these demands, supporting sustained performance and predictable operation as AI systems move into production.
Solution Architecture
AMAX's solution architects optimize IT configurations for performance, scalability, and industry-specific reliability.
Networking
AMAX designs custom networking topologies to enhance connectivity and performance in AI and HPC environments.
Thermal Management
AMAX implements innovative cooling technologies that boost performance and efficiency in dense computing setups.
Compute Optimization
AMAX ensures maximum performance through benchmarking and testing, aligning hardware and software for AI workloads.
Next Level Training Performance
The second-generation Transformer Engine with FP8 precision enables up to 4x faster training for large models such as Llama 3.1 405B. Combined with NVLink at 1.8 TB/s of GPU-to-GPU bandwidth, InfiniBand networking, and NVIDIA Magnum IO software, it scales efficiently across enterprise clusters.
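A minimal training sketch, assuming NVIDIA's Transformer Engine Python package (transformer_engine.pytorch) is installed; the layer size, recipe settings, and loss are illustrative placeholders:

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe: HYBRID uses E4M3 forward, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# Transformer Engine drop-in layers replace nn.Linear and friends.
model = te.Linear(4096, 4096, bias=True).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 4096, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = model(x)          # GEMM runs in FP8 on Tensor Cores
    loss = y.float().pow(2).mean()

loss.backward()           # backward GEMMs also use FP8 where safe
optimizer.step()
```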
Real Time Inference
HGX B300 delivers up to 11x higher inference performance than the Hopper generation. Blackwell Tensor Cores, combined with TensorRT-LLM innovations, accelerate inference for Llama 3.1 405B and other large models.
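A minimal serving sketch, assuming the high-level LLM API available in recent TensorRT-LLM releases; the model name, parallelism degree, and sampling settings are placeholder assumptions:

```python
# Illustrative sketch of the TensorRT-LLM high-level Python API;
# model name and settings are placeholder assumptions.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-405B-Instruct",
          tensor_parallel_size=8)        # shard across the HGX GPUs

params = SamplingParams(max_tokens=256, temperature=0.7)
outputs = llm.generate(["Summarize the benefits of FP8 inference."], params)

for out in outputs:
    print(out.outputs[0].text)
```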
Activate Your AI Infrastructure Instantly with HostMax™
HostMax™ is AMAX’s in-house deployment service that lets you power on and operate your liquid-cooled AI systems as soon as they’re built. Instead of waiting for colocation space, HostMax™ provides immediate hosting at AMAX’s facility, enabling a direct transition from assembly to deployment for testing, validation, and early production.
Fully Managed AI Deployment
AMAX's approach to AI solutions begins with intelligent design, emphasizing the creation of high-performance computing and network infrastructures tailored to AI applications. We guide each project from concept to deployment, ensuring systems are optimized for both efficiency and future scalability.
AceleMax® AXG-828U
The AceleMax® AXG-828U is an 8U dual-socket Intel® Xeon® 6 platform built around NVIDIA HGX B300 GPU acceleration for modern AI training and inference at scale.
- 8U rackmount system with dual-socket Intel® Xeon® 6700E/6700P series processors
- NVIDIA HGX B300 GPU platform with NVSwitch and 2.3 TB total GPU memory
- DDR5 memory support with 32 DIMM slots (16 per socket), up to 6400 MT/s
- 8x OSFP 800 Gbps InfiniBand and advanced system management