JUMP START YOUR AI

The MATRIX GPU Cloud™ solution is the platform for fast-tracking AI development and deployment. Powered by NVIDIA GPUs, the MATRIX is a revolutionary DL-in-a-Box solution featuring complete AI environments with the latest Deep Learning frameworks, GPU virtualization technology for sharing and scaling resources, and an intuitive UI for full control over the workflow.

Deploy the MATRIX platforms as standalone dev boxes or as building blocks for a highly elastic, self-service On-Premise GPU Cloud.

"AMAX was instrumental in helping us move off a costly AWS monthly spend to a performance-optimized, on-premise GPU infrastructure featuring their MATRIX products for our Deep Learning workloads. They designed everything from the high-compute density platforms, to optimizing the networking and cooling within our racks to avoid various potential performance bottlenecks."


10x Faster Deep Learning Development with MATRIX vs. Do-It-Yourself Approach

[Comparison graphic: Deep Learning with MATRIX for Unmatched Efficiency vs. Deep Learning with a Do-It-Yourself Approach]

MATRIX Powered by Bitfusion FlexDirect

The MATRIX is a fully integrated software/hardware platform built for efficiency, providing the resource flexibility and utilization needed to accelerate all phases of AI and Deep Learning projects, including model development and testing, training, and inference at scale. Its GPU over Fabrics technology enables the sharing and scaling of large numbers of GPUs across systems, supporting multi-tenancy and highly customizable self-service features, and is available for bare-metal, VM, and container applications.

Key Benefits

  • GPU over Fabrics technology enables the sharing and scaling of large numbers of GPUs across systems for multi-tenancy and highly customizable self-service features
  • Dynamically allocates GPUs across multiple jobs and users for optimal resource utilization and efficiency
  • Connects any compute server remotely to pools of GPU servers over any Ethernet, InfiniBand, or RoCE network
  • Attaches and detaches GPUs to and from workloads in real time, offering unprecedented GPU utilization
  • Runs in user space and is proven to work in public clouds, private clouds, on-premise hardware, any hypervisor, and containers
  • Supports FPGAs and ASICs (any OpenCL-compliant hardware)
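To make the attach/detach model concrete, the benefits above can be pictured as a short client-side session. This is an illustrative sketch only: the `flexdirect` command names, the `-n` (GPU count) and `-p` (partial-GPU fraction) flags, and the `train.py`/`serve.py` scripts are assumptions for illustration, not taken from this document; consult the Bitfusion FlexDirect documentation for the exact syntax.

```shell
# Hypothetical FlexDirect session from a client node that has no local GPUs.
# Command names and flags are illustrative assumptions.

# Discover the GPUs visible in the shared pool over the network
flexdirect list_gpus

# Attach two remote GPUs to a training job for its duration only;
# the GPUs return to the pool as soon as the process exits
flexdirect run -n 2 python train.py

# Request a fraction of one GPU (e.g. half its memory) for a
# lightweight inference or development workload
flexdirect run -n 1 -p 0.5 python serve.py
```

Because attachment lasts only as long as the wrapped process, GPUs are never pinned to idle machines, which is how the dynamic allocation and utilization claims above are realized in practice.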

Deep Learning Acceleration with MATRIX

DOWNLOAD TECHNICAL WHITE PAPER
