DEEP LEARNING PLATFORMS

The Most Powerful GPU Servers & Workstations Optimized for DL Workloads

DL-E48A

Remotely Configurable PCIe Root Complex Deep Learning Server

High-performance 8x GPU server platform with first-in-industry remotely configurable dual and single PCIe root complex architectures.

DL-E48X

Configurable PCIe Root Complex Deep Learning Server

The first-in-industry accelerated 8x GPU computing solution to feature a reconfigurable single and dual root complex PCIe architecture, allowing the hardware to be optimized on the fly for AI and DL training, inference, HPC compute, rendering and virtualization workloads.

DL-E280

Ultra-Compact High-Density Deep Learning Server

8x GPU Machine Learning server with Intel Xeon Scalable Processors delivering up to 112 TFLOPS of single-precision performance when populated with NVIDIA Tesla V100 GPU accelerators.

ServMax™ HGX-1

Hyperscale GPU Accelerated AI Cloud Solution

Powered by NVIDIA Tesla GPUs and NVLink™ high-speed interconnect technology, the HGX-1 is purpose-built for AI/HPC cloud computing. Hosting 8x Tesla V100 SXM2 GPUs in a 4U chassis, the HGX-1 features 5,120 Tensor Cores and 40,960 CUDA Cores for 1 PFLOP of Tensor Core operations, and 125 TFLOPS of SP performance.
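The aggregate figures above follow directly from the per-GPU Tesla V100 SXM2 specifications. A minimal sketch of that arithmetic, assuming the published per-GPU numbers (640 Tensor Cores, 5,120 CUDA Cores, 125 TFLOPS Tensor, ~15.7 TFLOPS FP32):

```python
# Sanity check of the HGX-1 aggregate figures, assuming the published
# per-GPU Tesla V100 SXM2 specs.
GPUS = 8
TENSOR_CORES_PER_GPU = 640
CUDA_CORES_PER_GPU = 5_120
TENSOR_TFLOPS_PER_GPU = 125
FP32_TFLOPS_PER_GPU = 15.7

tensor_cores = GPUS * TENSOR_CORES_PER_GPU           # 5,120 Tensor Cores
cuda_cores = GPUS * CUDA_CORES_PER_GPU               # 40,960 CUDA Cores
tensor_pflops = GPUS * TENSOR_TFLOPS_PER_GPU / 1000  # 1.0 PFLOPS Tensor
fp32_tflops = round(GPUS * FP32_TFLOPS_PER_GPU, 1)   # ~125.6 TFLOPS FP32

print(tensor_cores, cuda_cores, tensor_pflops, fp32_tflops)
```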

DL-E140

Ultra-Compact 1U 4x GPU Deep Learning Server

High-density, ultra-compact building block for GPU-powered Deep Learning and HPC clusters, accelerated by Intel Xeon Scalable Processors with NVIDIA® GPUs, including Tesla V100, P100, P40, Quadro P5000 and P6000.

ServMax™ P47

AMD EPYC GPU-Accelerated Compute Platform

Featuring the latest AMD EPYC™ CPUs, ServMax™ P47 supports up to 4x NVIDIA Tesla GPUs or Vega-based Radeon Instinct™ GPUs. The single-socket platform easily manages machine learning, advanced rendering and HPC workloads previously only possible on high-end dual-socket systems, making it a superior alternative to existing GPU solutions in the market.

DL-E200

Compact Deep Learning Workstation

Ultra-compact high-end Deep Learning development workstation, perfect for AI startups and labs. This ultra-quiet micro-ATX workstation features 2x NVIDIA Titan V, GeForce GTX 1080 Ti, Quadro GV100, GP100, P5000 or P6000 GPUs, on-board dual 10G Ethernet and an enterprise-grade motherboard.

DL-E400

High-Performance Deep Learning DevBox

Our best-selling workstation for Deep Learning development! This ultra-quiet compact workstation features 4x NVIDIA Titan V, GeForce GTX 1080 Ti, Quadro GV100, GP100, P5000 or P6000 GPUs, on-board dual 1G/10G Ethernet and an enterprise-grade motherboard.

[SMART]Rack AI

Turnkey High Performance Machine Learning Cluster

Rackscale solution featuring up to 96x NVIDIA® Tesla GPU accelerator cards for up to 1.34 PFLOPS per rack. All-Flash storage for an ultra-fast in-rack data repository, dual 25GbE RoCE or 100G EDR InfiniBand high-speed networking, [SMART]DC Data Center Manager and an optional in-rack battery for power-loss protection.
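The per-rack figure is consistent with the single-precision rating of a PCIe Tesla V100. A quick check, assuming 14 TFLOPS FP32 per card (the PCIe rating; SXM2 parts rate higher):

```python
# Rough check of the [SMART]Rack AI per-rack throughput claim, assuming
# the 14 TFLOPS FP32 rating of a PCIe Tesla V100 per accelerator card.
CARDS_PER_RACK = 96
FP32_TFLOPS_PER_CARD = 14.0

rack_pflops = CARDS_PER_RACK * FP32_TFLOPS_PER_CARD / 1000
print(rack_pflops)  # → 1.344, matching the quoted ~1.34 PFLOPS per rack
```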
