AMAX ClusterMax® Apex CPU-Intensive, High-Performance HPC Cluster

ClusterMax® Apex

HPC Cluster Solution for CPU-Intensive, Large-Scale Deployments


  • High-density, balanced compute solution for large-scale, power-conscious users
  • Based on the 2nd Generation Intel® Xeon® Processor Scalable Family for best-in-class performance
  • Integrated web-based cluster management for turnkey provisioning and hassle-free management
  • Optimized MPI implementations and pre-tuned networking components to get parallel applications up and running quickly
  • Highly scalable, modular architecture with FDR InfiniBand and PCIe Gen 3.0 support

Request a Quote

Future-Proof System With Massive Scalability For Large-Scale Deployments

The ClusterMax® Apex Supercomputer Cluster is ideal for large-scale deployments that integrate the latest CPU-based computing technologies and require extreme performance and high-density computing. Based on the 2nd Generation Intel® Xeon® Processor Scalable Family, the ClusterMax® Apex features up to 4,032 Intel Xeon® processor cores per 42U standard rack cabinet, doubling the density compared with traditional rack-mounted servers.


The ClusterMax® Apex Supercomputer Cluster offers customers performance scalability, industry-leading density and maximum efficiency at scale for applications ranging from in-memory databases to a diverse set of data- and compute-intensive HPC applications.


  • Delivers up to 92,160 Tensor cores, 737,280 CUDA Cores, 1,180+ Teraflops DP, 2,361+ Teraflops SP, and 18,720+ Teraflops Tensor performance per 42U cluster
  • Up to 4,608GB dedicated HBM2 GPU memory
  • Supports EDR/HDR InfiniBand fabric & real-time InfiniBand diagnostics
  • Cluster management and GPU monitoring software, including GPU temperature monitoring, fan speed, and power, providing exclusive access to GPUs in a cluster


Target Applications

  • Artificial intelligence / deep learning / machine learning
  • Bio-chemical / Biotechnology / Life Sciences
  • Cloud computing
  • Computational grid endpoint
  • Computer-aided engineering (CAE)
  • Computational fluid dynamics (CFD)
  • Data mining and stream processing
  • Electronic design automation (EDA)
  • Financial market modeling
  • HPC applications (e.g., Nastran, Ansys, LS-DYNA)
  • Petro-clusters / oil & gas
  • Server consolidation
  • Scientific research
  • Simulations
  • Web hosting

Cluster Specifications

  • Up to 72 nodes with 28-core 2nd Generation Intel® Xeon® Processor Scalable Family processors, for 4,032 processor cores per cluster
  • Up to 108TB DDR4 2933/2666/2400/2133 MHz system memory
  • Up to 2,592TB hot swap storage capacity per 42U cluster
  • Great serviceability with hot-pluggable processor nodes and hot-pluggable PSU
  • High-speed interconnect options including FDR/EDR InfiniBand, Intel® Omni-Path Architecture, Fibre Channel, and Ethernet (Gigabit, 10GbE, 25GbE, 40GbE, 100GbE)
  • Dedicated on-board management port, providing a flexible and secure management environment
  • IPMI 2.0 with KVM over LAN and Virtual Media over LAN
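
The rack-level totals above follow directly from the per-node configuration. A quick arithmetic sanity check in Python; note that the 1.5TB-per-node memory figure is inferred from the 108TB rack total, not quoted in the spec table:

```python
# Sanity check of the per-rack totals quoted above, derived from the
# per-node configuration. The per-node memory figure (1.5TB) is inferred
# from the 108TB rack total; individual nodes support larger DIMM builds.

NODES_PER_RACK = 72
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 28          # 2nd Gen Intel Xeon Scalable, 28-core SKU
THREADS_PER_CORE = 2           # Hyper-Threading

cores = NODES_PER_RACK * SOCKETS_PER_NODE * CORES_PER_SOCKET
threads = cores * THREADS_PER_CORE
memory_tb = NODES_PER_RACK * 1.5   # 1.5TB per node -> 108TB per rack

print(cores, threads, memory_tb)   # 4032 8064 108.0
```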

Cluster Features

  • Delivers high availability, scalability, flexibility and power efficiency in a dense cluster architecture
  • Improves RAS with hot-swappable and redundant fans and hard disk drives
  • Supports the natural business growth of mid- to large-sized HPC and high-density computing deployments
  • Low total cost of ownership (TCO) for support, maintenance and upgrades
  • Ideal for mid to large data centers scaling up to 1,000 nodes

Complete Cluster Assembly and Set Up Services

  • Fully integrated and pre-packaged turnkey HPC solution, including HPC professional services and support, expert installation and setup of rack-optimized cluster nodes, cabling, rails, and other peripherals
  • Configuration of cluster nodes and the network
  • Installation of applications and client computers to offer a comprehensive solution for your IT needs
  • Rapid deployment
  • Server management options include Standards-based IPMI or AMAX remote server management
  • Seamless standard and custom application integration and cluster installation
  • Cluster management options include a choice of open source software solutions
  • Firmware upgrades & BIOS modification
  • Supports a variety of UPS and PDU configurations and interconnect options, including InfiniBand (FDR, EDR), Fibre Channel, and Ethernet (Gigabit, 10GbE, 25GbE, 40GbE, 100GbE)

Clustered File Storage (from Terabyte to Petabyte)

  • Hardware design and software stack
  • Lustre / Open source file system (Redundancy across system nodes)

Rack Level Verification

  • Performance and Benchmark Testing (HPL)
  • ATA rack level stress test
  • Rack Level Serviceability
  • Ease of Deployment Review
  • MPI jobs over IB for HPC
  • GPU stress test using CUDA
  • Cluster management
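
HPL results from the benchmark testing above are normally judged against the cluster's theoretical peak (Rpeak). A back-of-envelope Rpeak estimate for the full rack; the 2.5 GHz clock and 32 DP FLOPs/cycle/core (AVX-512) are illustrative assumptions, not quoted specs:

```python
# Rough theoretical-peak (Rpeak) estimate for HPL on the full 42U rack.
# Clock speed and FLOPs/cycle are assumptions for illustration; actual
# values depend on the chosen SKU and its AVX-512 turbo behaviour.

cores = 4032                 # 72 nodes x 2 sockets x 28 cores
clock_ghz = 2.5              # assumed base clock
flops_per_cycle = 32         # 2 AVX-512 FMA units: 2 * 8 DP lanes * 2

rpeak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(f"Rpeak ~ {rpeak_tflops:.0f} TFLOPS DP")  # Rpeak ~ 323 TFLOPS DP
```

Measured HPL (Rmax) typically lands well below this figure; the gap is a useful acceptance-test signal at rack-level verification.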

Large Scale Rack Deployment Review

  • Scalability Process
  • Rack to Rack Connectivity
  • Multi-Cluster Testing
  • Software/Application Load
  • Cluster management

Optional Cluster System Software Installed

  • Microsoft Windows Server 2019
  • Bright Computing Cluster Manager
  • SuSE / Red Hat Enterprise Linux
  • C-based software development tools, CUDA 10.x Toolkit and SDK, and various libraries for CPU/GPU clusters

ClusterMax® Apex Intel Platform Standard Configuration

Total Node Count

Up to 72 nodes per 42U cluster

Processors (per node)

2x 28-core 2nd Generation Intel® Xeon® Processor Scalable Family processors per node

Memory (per node)

16x DDR4 DIMM slots, up to 3TB DDR4 2933/2666/2400/2133 MHz ECC registered DIMMs per node

Max Cores/Threads

4,032 cores/8,064 threads

Networking

  • Dual 10GbE ports per node
  • 4X EDR InfiniBand (1 per node)
  • Supports Intel Xeon processor Scalable Family
  • Integrated fabric connectors: one 100Gb/s port per processor

Network Port Latency


Remote Management

Intel RMM4 with media redirection

Enclosure Specifications

Form Factor


Systems per enclosure


Enclosures per rack


Airflow

Front to back

Power Supplies

Dual Redundant

Hard Disk Drives

12x 3.5″ or 24x 2.5″ SAS/SATA/SSD Drives

Max Disk Capacity

Up to 16TB per drive, up to 3,456TB hot swap storage capacity per 42U cluster
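
The per-rack capacity figure is consistent with 12x 3.5″ bays per enclosure at 16TB per drive across 18 enclosures; a short check (the enclosure count is inferred from the totals, since the enclosures-per-rack field above is unspecified):

```python
# Per-rack storage capacity check. The enclosure count (18) is inferred
# from the quoted totals, not taken from the spec table.

drives_per_enclosure = 12    # 12x 3.5" bays (or 24x 2.5")
tb_per_drive = 16
enclosures_per_rack = 18     # inferred: 3456 / (12 * 16)

per_enclosure_tb = drives_per_enclosure * tb_per_drive
print(per_enclosure_tb, per_enclosure_tb * enclosures_per_rack)  # 192 3456
```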

Network Configuration

Switch Ports

36 EDR/HDR InfiniBand ports

Form Factor


Switching Capacity


Interconnect Topology

Fat Tree
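
With 36-port switches, the fat-tree topology above scales in the usual way: each leaf switch splits its ports between nodes and uplinks. A minimal sizing sketch, assuming a two-level, non-blocking configuration (an illustrative assumption, not a quoted spec):

```python
# Maximum node count for a two-level non-blocking fat tree built from
# 36-port switches: each leaf uses half its ports for nodes and half
# for uplinks to the spine layer, so the fabric tops out at 18 * 36 ports.

switch_ports = 36
down_ports = switch_ports // 2        # 18 node-facing ports per leaf
max_leaves = switch_ports             # each spine switch reaches every leaf
max_nodes = down_ports * max_leaves
print(max_nodes)  # 648
```

A 72-node rack therefore fits comfortably within one such fabric, leaving headroom for the multi-rack deployments described below.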

Management Ports

  • RS232 over DB9
  • Dual 10/100/1000Mb Ethernet ports

Port to Port Latency


Power Supplies

Dual Redundant

Software Options

System Management

Bright Computing Bright Cluster Manager

Operating Systems

  • Red Hat Enterprise Linux (RHEL)
  • SuSE Enterprise Linux
  • CentOS
  • Ubuntu
  • Microsoft Windows Server 2019

Virtualization

  • Microsoft Windows Hyper-V
  • Citrix XenServer
  • OpenStack
  • VMware ESXi/vSphere

Optimized for Turnkey Solutions

Enable powerful design, training, and visualization with built-in software tools including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, the NVIDIA CUDA Toolkit, and NVIDIA DIGITS.