Powered by 8x NVIDIA Tesla V100 PCIe GPU accelerators and 2x Intel® Xeon® Scalable family processors, the DL-E48X is the industry's first accelerated GPU computing solution to feature a re-configurable single/dual root complex PCIe architecture, allowing the hardware to be optimized on the fly for AI and deep learning (DL) training, inference, HPC compute, rendering, and virtualization applications.

Features

Industry-first re-configurable PCIe root complex architecture: supports both single and dual root complex modes, allowing the hardware to be optimized on the fly for AI and DL training, inference, HPC compute, rendering, and virtualization applications.
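
The practical effect of the two modes can be sketched in plain Python. This is a conceptual model only, not vendor tooling; the GPU groupings and the peer-to-peer (P2P) rule below are assumptions for illustration: in single-root mode all eight GPUs sit under one root complex and can exchange data via PCIe P2P, while dual-root mode splits them four-and-four across the two CPUs' root complexes.

```python
# Conceptual sketch of single vs. dual PCIe root complex topologies.
# Assumption for illustration: GPU peer-to-peer (P2P) transfers are
# possible only between GPUs that share a root complex.

def topology(mode: str) -> list[list[int]]:
    """Return GPU groups per root complex for the given mode."""
    if mode == "single":
        return [[0, 1, 2, 3, 4, 5, 6, 7]]      # all 8 GPUs under one root
    if mode == "dual":
        return [[0, 1, 2, 3], [4, 5, 6, 7]]    # 4 GPUs per CPU root complex
    raise ValueError(f"unknown mode: {mode!r}")

def can_p2p(mode: str, gpu_a: int, gpu_b: int) -> bool:
    """True if gpu_a and gpu_b share a root complex in this mode."""
    return any(gpu_a in group and gpu_b in group for group in topology(mode))

print(can_p2p("single", 0, 7))  # True  - one root complex spans all GPUs
print(can_p2p("dual", 0, 7))    # False - GPUs sit under different CPUs
```

Under these assumptions, single-root mode favors all-to-all GPU communication (e.g. DL training), while dual-root mode spreads GPU traffic across both CPUs' PCIe lanes (e.g. inference or virtualization).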

Supports up to 8x NVIDIA Tesla V100/P100/P40 PCIe GPU accelerator cards in a 4U chassis.

Utilizes Intel’s latest Xeon Scalable processors (Skylake) for a 56% increase in memory bandwidth and a 54% increase in CPU-to-CPU (UPI) bandwidth compared to the previous Intel processor generation.

Highly flexible and expandable through additional PCIe 3.0 x16 resources.

Supports up to 4x U.2 NVMe drives and NVMe RAID.

Supports up to 2x EDR InfiniBand (100Gb/s) network adapters or up to 2x 100/50/40/25 GbE NIC adapters.

Specifications

GPU

8x Tesla V100/P100/P40 PCIe. Up to 128GB of dedicated HBM2 GPU memory per node (with Tesla V100).
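
The 128GB figure is simple arithmetic over the per-card memory. The 16GB-per-card capacity is the Tesla V100 PCIe HBM2 size (a known V100 spec, not stated above):

```python
# Aggregate HBM2 memory across the 8 Tesla V100 PCIe cards in one node.
GPUS_PER_NODE = 8
HBM2_GB_PER_V100 = 16   # 16GB HBM2 per Tesla V100 PCIe card

total_gb = GPUS_PER_NODE * HBM2_GB_PER_V100
print(total_gb)  # 128
```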

CPU

2x Intel® Xeon® Scalable family processors. Up to 165W TDP.

NETWORK

On-board dual 10GbE.
Up to 2x EDR InfiniBand cards (optional). Up to 2x 100/50/40/25 GbE NIC cards (optional).

STORAGE

8x 2.5” SSDs (4x SATA + 4x U.2 NVMe, or 8x SATA); 2x SATADOM.

MEMORY

24x DDR4 DIMM slots, up to 2666/2400/2133 MHz. Supports up to 3TB of ECC RDIMM memory.

EXPANSION SLOTS

1x PCIe 3.0 x16, low profile.
2x PCIe 3.0 x16, full height, full length.
1x OCP mezzanine slot.

Key Features of Tesla V100

Volta Architecture & Tensor Cores - By pairing CUDA cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and deep learning workloads.
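
The Tensor Core primitive on Volta is a fused 4x4 matrix multiply-accumulate, D = A x B + C (the 4x4x4 shape is a documented Volta characteristic). The pure-Python model below is for illustration only; real Tensor Cores operate on FP16 inputs with FP32 accumulation in hardware:

```python
# Pure-Python model of the Volta Tensor Core primitive: D = A @ B + C,
# where A, B, C, D are 4x4 matrices. Real Tensor Cores take FP16 inputs
# and accumulate in FP32; here we just show the fused multiply-accumulate.

def tensor_core_mma(a, b, c):
    """4x4 matrix multiply-accumulate: returns a @ b + c."""
    n = 4
    return [
        [sum(a[i][k] * b[k][j] for k in range(n)) + c[i][j] for j in range(n)]
        for i in range(n)
    ]

I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]  # identity
C = [[2] * 4 for _ in range(4)]
D = tensor_core_mma(I, I, C)   # I @ I + C = I + C
print(D[0])  # [3, 2, 2, 2]
```

Because the multiply and accumulate are fused into one operation, a V100 can issue many of these 4x4 MMAs per clock, which is where its deep learning throughput comes from.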

Maximum Efficiency Mode - Allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget by delivering up to 80% of peak performance at half the power consumption.
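
Those two numbers imply the efficiency gain directly: 80% of peak performance at 50% of the power is a 1.6x improvement in performance per watt.

```python
# Performance-per-watt gain implied by Maximum Efficiency Mode:
# up to 80% of peak performance at half the power.
perf_fraction = 0.80
power_fraction = 0.50

perf_per_watt_gain = perf_fraction / power_fraction
print(perf_per_watt_gain)  # 1.6
```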

HBM2 - Combining improved raw bandwidth of 900 GB/s with a higher DRAM utilization efficiency of 95%, Tesla V100 delivers 1.5x higher memory bandwidth than Pascal GPUs as measured on STREAM.
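
The effective figure behind the 1.5x claim can be checked with the two numbers given: 95% utilization of 900 GB/s is roughly 855 GB/s of STREAM-measured bandwidth. The Pascal baseline of ~570 GB/s below is an assumption implied by the stated 1.5x ratio, not a figure from this document:

```python
# Effective (STREAM-measured) bandwidth from the stated raw bandwidth
# and DRAM utilization efficiency.
raw_bw_gbs = 900      # V100 raw HBM2 bandwidth, GB/s
utilization = 0.95    # DRAM utilization efficiency on STREAM

effective_bw = raw_bw_gbs * utilization
print(round(effective_bw))  # 855

# Assumed Pascal STREAM baseline implied by the stated 1.5x ratio:
pascal_stream_bw = effective_bw / 1.5
print(round(pascal_stream_bw))  # 570
```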
