GPU Workstations Powered by Intel® Xeon® 6 Processors
Designed to maximize energy efficiency and security for AI workloads.
AMAX Solutions powered by Intel® Xeon® 6 Processors
AMAX leverages Intel® Xeon® 6 Processors to optimize performance for AI workloads. This technology enhances our ability to manage demanding data pipelines in large-scale HPC deployments, delivering powerful, energy-efficient solutions for data centers.
LiquidMax® Pro
Built for demanding AI training and data center applications, this tower server/workstation delivers top-tier performance with advanced liquid cooling.
- Closed loop liquid cooling design
- AI Training and Inferencing
- Supports liquid cooled CPU + GPU
- Rich I/O scalability with IPMI
- 12.1 PFLOPS
| Component | LiquidMax® LX-5a Pro |
|---|---|
| Processor | Dual Intel® Xeon® Scalable series processors |
| GPU | 4 x liquid-cooled GPU (H100/L40S) |
| Chipset | Intel C741 Chipset |
| Memory Capacity | 16 x DDR5 DIMM (8-channel per CPU, 8 DIMM per CPU), up to 2TB per CPU socket |
| Expansion Slots | 6 x PCIe 5.0 x16 slots, 1 x PCIe 5.0 x8 slot |
| Network Connectivity | 2 x 10GbE LAN ports (RJ45) |
| I/O Ports | 2 x USB 3.2, 1 x Serial Port, 1 x VGA |
| Storage | 8 x 3.5"/2.5" SATA/SAS hot-swappable drives (including 4 x NVMe), 1 x PCIe 4.0 NVMe M.2 (2280/22110) |
| Chassis | Tower |
| Power Supply | 2 x 1600W Power Supply |
| System Dimensions (H x W x D) | 660 mm x 380 mm x 611 mm |
Confidential Computing
AMAX systems with Intel® Xeon® 6 processors provide a security-first approach, ensuring protection for confidential data during use, at rest, and in transit. By minimizing potential attack surfaces, these systems offer peace of mind for organizations managing AI and HPC workloads.
- Intel Trust Domain Extensions (TDX): Ensures each virtual machine operates with distinct encryption keys, shielding tenant data from unauthorized access.
- Intel Software Guard Extensions (SGX): Isolates application data from the broader system, adding an extra layer of protection for critical operations.
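On Linux, you can verify whether a host CPU advertises these features by inspecting the feature flags in `/proc/cpuinfo`. A minimal sketch is below; note that `sgx` is the standard flag name, while TDX flag names vary by kernel version and by whether you are on the host or inside a trust domain (`tdx_guest` is assumed here for illustration):

```python
# Minimal sketch: parse /proc/cpuinfo-style text and check CPU feature flags.
# Flag names other than "sgx" are assumptions; consult your kernel docs.

def parse_cpu_flags(cpuinfo_text: str) -> set[str]:
    """Extract the feature-flag set from the first 'flags' line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# On a real system: flags = parse_cpu_flags(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme sse avx512f sgx"
flags = parse_cpu_flags(sample)
print("sgx supported:", "sgx" in flags)          # -> True for this sample
print("tdx_guest supported:", "tdx_guest" in flags)  # -> False for this sample
```

In practice, enabling SGX/TDX also requires BIOS support and matching kernel/hypervisor configuration, so a present flag is necessary but not sufficient.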
Securing AI Data On-Prem
As companies handle increasingly sensitive data, especially for AI infrastructure, moving off the cloud and keeping information on-premises offers critical security benefits.
- Full Data Control: On-prem keeps your data under your control, reducing reliance on third-party providers and minimizing the risk of unauthorized access.
- Protection from External AI Training: Keeping data on-prem ensures that your information is not used to train external AI models, safeguarding proprietary knowledge and ensuring privacy.
LiquidMax® Workstations
The LiquidMax® series offers sleek, liquid-cooled, ultra-quiet GPU workstation/tower servers designed for high-performance AI and deep learning applications.
Energy Savings
Our advanced cooling system maximizes efficiency and minimizes energy consumption during demanding workloads.
Liquid Cooling
Liquid cooling technology keeps vibration and noise levels low (55 dB to 59 dB at full load).
Custom Design
The custom chassis and thermal design allow for flexible deployment across any office environment.
Intelligent LCD Panel
Continuously monitors critical temperatures to ensure peak operation under heavy workloads.
Engineering Expertise
Our team of thermal, electrical, mechanical, and networking engineers designs solutions tailored to your specific requirements.
Solution Architects
AMAX's solution architects optimize IT configurations for performance, scalability, and industry-specific reliability.
Networking
AMAX designs custom networking topologies to enhance connectivity and performance in AI and HPC environments.
Thermal Management
AMAX implements innovative cooling technologies that boost performance and efficiency in dense computing setups.
Compute Optimization
AMAX ensures maximum performance through benchmarking and testing, aligning hardware and software for AI workloads.
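As an illustration of the benchmarking pattern described above (not AMAX's actual test suite), a common approach is to time a workload several times and keep the best run, which reduces noise when comparing hardware and software configurations:

```python
# Illustrative micro-benchmark harness: best-of-N wall-clock timing.
# The workload here is a pure-Python stand-in for a real AI kernel
# (e.g., a GEMM that would normally run on the GPU).
import time


def benchmark(fn, repeats: int = 5) -> float:
    """Return the minimum wall-clock time over several runs, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best


def matmul_workload(n: int = 64):
    """Naive n x n matrix multiply as a placeholder compute kernel."""
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]


if __name__ == "__main__":
    print(f"best of 5: {benchmark(matmul_workload):.4f} s")
```

Taking the minimum rather than the mean is a deliberate choice: it approximates the workload's achievable performance on the configuration under test, with OS scheduling jitter filtered out.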
From Design to Deployment
AMAX's approach to AI solutions begins with intelligent design, emphasizing the creation of high-performance computing and network infrastructures tailored to AI applications. We guide each project from concept to deployment, ensuring systems are optimized for both efficiency and future scalability.