AMAX and WEKA Partnership

The AceleMax™ POD with WEKA® Data Platform

Powered by the NVIDIA HGX™ H200 and designed to supercharge AI and HPC workloads in a rack-scale solution.

Request a Quote
WEKA Partner

AMAX with WEKA

AMAX’s integration with the WEKA Data Platform significantly reduces AI model training time by eliminating data-processing and storage performance bottlenecks, ensuring a continuous flow of data reaches the GPUs. The result? Expedited training cycles that let AI models be developed and refined at a much faster pace.
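
To make the idea of a continuous data flow concrete, here is a minimal sketch, assuming the WEKA filesystem is exposed as an ordinary POSIX mount (the path /mnt/weka and the shard layout are hypothetical, not an AMAX or WEKA published configuration). It uses a standard PyTorch DataLoader with multiple worker processes and pinned memory so that reads from the shared filesystem overlap with GPU compute:

```python
# Minimal sketch: keep a GPU fed from a shared filesystem.
# Assumes a POSIX mount such as /mnt/weka (hypothetical path) and PyTorch.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset


class ShardDataset(Dataset):
    """Loads pre-serialized tensor shards from a shared filesystem."""

    def __init__(self, root="/mnt/weka/train"):        # hypothetical mount point
        self.files = sorted(Path(root).glob("*.pt"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        return torch.load(self.files[idx])              # one shard per item


loader = DataLoader(
    ShardDataset(),
    batch_size=None,        # shards are already batched on disk
    num_workers=8,          # parallel readers keep the filesystem busy
    pin_memory=True,        # enables async host-to-device copies
    prefetch_factor=4,      # queue shards ahead of the GPU
)

device = torch.device("cuda")
for batch in loader:
    batch = batch.to(device, non_blocking=True)  # copy overlaps with the next reads
    # ... forward/backward pass on `batch` ...
```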

Specifications

NVIDIA HGX H200 Server

The NVIDIA HGX™ H200, based on the Hopper architecture, is designed for enterprise HPC and AI workloads. Our rack-scale solutions, engineered around the NVIDIA H200 Tensor Core GPU, increase memory capacity to 141 gigabytes per GPU, nearly double that of the H100. This larger memory, coupled with enhanced GPU-to-GPU interconnectivity through NVIDIA NVLink technology, optimizes parallel processing and boosts overall system performance.
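
As a quick sanity check on a delivered node, a few lines of PyTorch can report the per-GPU memory capacity and whether each GPU has a direct peer path to the others. This is a rough sketch that simply prints whatever CUDA exposes; the 141GB figure applies per H200 GPU, so an 8-GPU HGX baseboard totals roughly 1,128GB:

```python
# Minimal sketch: report per-GPU memory and peer-to-peer reachability.
import torch

assert torch.cuda.is_available(), "No CUDA devices visible"

count = torch.cuda.device_count()
for i in range(count):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")

# Peer access indicates a direct GPU-to-GPU path (NVLink/NVSwitch on HGX boards).
for i in range(count):
    peers = [j for j in range(count)
             if j != i and torch.cuda.can_device_access_peer(i, j)]
    print(f"GPU {i} can directly access: {peers}")
```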

Processor: Dual 5th Gen Intel Xeon or AMD EPYC 9004 Series CPUs
Memory: 2TB DDR5
GPU: NVIDIA HGX H200 (1,128GB total HBM3e GPU memory); 900GB/s NVLink GPU-to-GPU interconnect with NVSwitch
Networking: 8x NVIDIA ConnectX®-7 single-port 400Gbps NDR OSFP NICs; 2x NVIDIA ConnectX®-7 dual-port 200Gbps NDR200 QSFP112 NICs; 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and GPUDirect Storage
Storage: Configurable, up to 10x NVMe U.3 SSDs, with optional M.2 support
Onboard Networking: Dual 10GBase-T RJ45 LAN, 1x management LAN
Power Supply: 6x 3000W Titanium redundant power supplies
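
The 1:1 NIC-to-GPU layout exists so that collective operations such as gradient all-reduce can move data over NVLink inside the node and over the RDMA-capable fabric between nodes without staging through host memory. Below is a minimal sketch of such a collective, launched with torchrun; whether NCCL actually engages GPUDirect RDMA depends on driver and fabric configuration outside this snippet:

```python
# Minimal sketch: NCCL all-reduce across GPUs, launched with
#   torchrun --nproc_per_node=8 allreduce_sketch.py
# NCCL selects NVLink/NVSwitch inside the node and RDMA-capable NICs across
# nodes when the system is configured for it.
import os

import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))  # set by torchrun
    torch.cuda.set_device(local_rank)

    # One gradient-sized buffer per rank (1 GiB of fp16 here, chosen arbitrarily).
    buf = torch.ones(512 * 1024 * 1024, dtype=torch.float16, device="cuda")
    dist.all_reduce(buf, op=dist.ReduceOp.SUM)
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print(f"all_reduce done across {dist.get_world_size()} ranks")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```
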
Network Topology

Scalable Infrastructure

AMAX’s modular design allows for easy scaling of compute and storage resources, ensuring that the infrastructure can grow in tandem with the increasing demands of AI workloads.

Why AMAX

Engineering Expertise

Our team of thermal, electrical, mechanical, and networking engineers is skilled in designing solutions tailored to your specific requirements.

Solution Architects

AMAX's solution architects optimize IT configurations for performance, scalability, and industry-specific reliability.

Networking

AMAX designs custom networking topologies to enhance connectivity and performance in AI and HPC environments.

Thermal Management

AMAX implements innovative cooling technologies that boost performance and efficiency in dense computing setups.

Compute Optimization

AMAX ensures maximum performance through benchmarking and testing, aligning hardware and software for AI workloads.
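
In practice, benchmarking means measuring sustained throughput on the delivered hardware rather than relying on datasheet numbers. The sketch below is a deliberately simple example of that kind of microbenchmark, timing a large half-precision matmul on one GPU with CUDA events; real acceptance testing also covers multi-GPU collectives, storage, and full training runs:

```python
# Minimal sketch: time a large matmul with CUDA events to estimate TFLOPS.
import torch

device = torch.device("cuda")
n = 8192
a = torch.randn(n, n, device=device, dtype=torch.float16)
b = torch.randn(n, n, device=device, dtype=torch.float16)

for _ in range(3):           # warm up kernels and caches
    a @ b
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
iters = 20
for _ in range(iters):
    a @ b
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1e3 / iters       # elapsed_time is in ms
tflops = 2 * n ** 3 / seconds / 1e12                  # 2*n^3 FLOPs per matmul
print(f"{seconds * 1e3:.2f} ms per matmul, ~{tflops:.1f} TFLOPS")
```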

AI Architects

From Design to Deployment

AMAX's approach to AI solutions begins with intelligent design, emphasizing the creation of high-performance computing and network infrastructures tailored to AI applications. We guide each project from concept to deployment, ensuring systems are optimized for both efficiency and future scalability.

NVIDIA HGX H200
ORDER NOW

AceleMax™ POD with NVIDIA HGX H200

Customized Scalable Compute Unit Built For Large Language Models.

  • Powerful and efficient data management with the WEKA® Data Platform
  • Up to 4.5TB of HBM3e GPU Memory per rack
  • Direct GPU-to-GPU interconnect via NVLink delivers 900GB/s of bandwidth (see the measurement sketch below)
  • A dedicated one-GPU-to-one-NIC topology
  • Modular design with reduced cable usage
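
One way to sanity-check the NVLink bandwidth figure above on a delivered system is to time a large device-to-device copy, as in the rough sketch below. Measured throughput depends on copy engines, topology, and driver settings, so treat it as a probe rather than a definitive NVLink test:

```python
# Minimal sketch: rough GPU-to-GPU copy bandwidth between device 0 and device 1.
import time

import torch

assert torch.cuda.device_count() >= 2, "Needs at least two GPUs"

size = 2 * 1024 ** 3                      # 2 GiB payload of raw bytes
src = torch.empty(size, dtype=torch.uint8, device="cuda:0")
dst = torch.empty(size, dtype=torch.uint8, device="cuda:1")

dst.copy_(src)                            # warm-up copy
torch.cuda.synchronize(0)
torch.cuda.synchronize(1)

iters = 10
t0 = time.perf_counter()
for _ in range(iters):
    dst.copy_(src)
torch.cuda.synchronize(0)
torch.cuda.synchronize(1)
elapsed = time.perf_counter() - t0

print(f"~{size * iters / elapsed / 1024 ** 3:.0f} GiB/s device-to-device")
```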