

Powered by 8x NVIDIA Tesla V100S PCIe GPU accelerators and 2x Intel® Xeon® Scalable family processors, the DL-E48A is an industry-first accelerated GPU computing solution featuring a reconfigurable single/dual root complex PCIe architecture, allowing the hardware to be optimized on the fly for AI and deep learning (DL) training, inference, HPC, rendering, and virtualization workloads.

Single Root Complex vs Dual Root Complex Use Cases

Single Root Complex

For GPU-intensive DL workloads: ideal for reducing GPU-to-GPU memory-copy latency and increasing bandwidth.

Dual Root Complex

For CPU-intensive and parallel-computing applications: optimized for CPU/memory-to-GPU communication.
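On a Linux system, `nvidia-smi topo -m` prints a GPU interconnect matrix whose labels show how many hops separate each GPU pair, which is a practical way to see which root-complex mode is in effect. The sketch below summarizes nvidia-smi's own label legend; the mapping to single vs. dual root complex mode in the comments is an assumption based on how each mode routes GPU-to-GPU traffic.

```python
# Sketch: interpreting the link labels printed by `nvidia-smi topo -m`.
# The labels come from nvidia-smi's legend; descriptions are paraphrased.
NVSMI_TOPO_LABELS = {
    "PIX": "at most a single PCIe bridge (same PLX switch)",
    "PXB": "multiple PCIe bridges, no host bridge (same root complex)",
    "PHB": "a PCIe host bridge (traffic goes through the CPU)",
    "NODE": "between PCIe host bridges within one NUMA node",
    "SYS": "across NUMA nodes via QPI/UPI (dual root complex path)",
}

def describe(label: str) -> str:
    """Return a human-readable description of a topo-matrix label."""
    return NVSMI_TOPO_LABELS.get(label, "unknown label")

# Assumption: in single-root-complex mode all 8 GPUs hang off one CPU's
# PCIe tree, so GPU pairs typically report PIX/PXB; in dual-root-complex
# mode, GPU pairs split across the two CPUs report SYS.
```

In other words, if pairs of GPUs that need fast peer-to-peer copies show SYS in the matrix, the system is likely in dual root complex mode and may benefit from being switched to single root complex for that workload.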


  • Configurable PCIe root-complex architecture supporting both single and dual root complex, allowing hardware optimization for multiple AI applications.
  • Supports up to 8x full-height, full-length, active or passive GPUs in a 4U chassis, with maximum GPU utilization.
  • Flexible SKU support through a software-configurable PCIe lane topology across the PLX PCIe switches.
  • Smart Thermal Radar design for energy efficiency.
  • Supports dual embedded Intel Omni-Path 100Gbps fabric and OCuLink for NVMe for high-speed transmission.



GPU: 8x Tesla V100S PCIe.
Up to 256GB of dedicated HBM2 GPU memory per system.
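The 256GB system total is simply the aggregate of the per-card HBM2 (the Tesla V100S ships with 32GB per card). A quick sanity check:

```python
# Aggregate HBM2 across the system: 8 cards x 32GB per Tesla V100S.
GPUS = 8
HBM2_PER_GPU_GB = 32  # Tesla V100S carries 32GB of HBM2 per card
total_gb = GPUS * HBM2_PER_GPU_GB
print(total_gb)  # 256
```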


CPU: 2x Intel® Xeon® Scalable family processors.
Up to 205W TDP.


LAN: 2x Intel I350 LAN + 1x management LAN; 2x Intel Omni-Path fabric; 2x InfiniBand EDR/10G kit.


Storage: On-board SATA controller; Intel® VROC and software RAID 0/1; optional SAS controller.


Memory: 24 DDR4 DIMM slots, up to 2666/2400/2133 MHz; supports up to 3TB of ECC RDIMM.
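The 3TB ceiling across 24 slots implies 128GB RDIMMs at full population, a quick check of the arithmetic:

```python
# 3TB spread across 24 DIMM slots implies 128GB modules when fully populated.
DIMM_SLOTS = 24
MAX_CAPACITY_TB = 3
per_dimm_gb = MAX_CAPACITY_TB * 1024 // DIMM_SLOTS
print(per_dimm_gb)  # 128
```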


Expansion slots:
Rear: 2x PCIe 3.0 x16 LP HL; 1x PCIe 3.0 x16 LP HL.
Front: 1x PCIe 3.0 x8 LP.

Key Features of the Tesla V100S


Volta Architecture & Tensor Cores - By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100S GPUs can replace hundreds of commodity CPU-only servers for traditional HPC and deep learning.

Maximum Efficiency Mode - Allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget, providing up to 80% of peak performance at half the power consumption.
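Taking those figures at face value, 80% of peak performance at 50% of the power works out to a 1.6x improvement in performance per watt:

```python
# Performance-per-watt ratio implied by Maximum Efficiency Mode.
perf_fraction = 0.80   # up to 80% of peak performance
power_fraction = 0.50  # at half the power consumption
perf_per_watt_gain = perf_fraction / power_fraction
print(perf_per_watt_gain)  # 1.6
```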

HBM2 - Combining improved raw bandwidth of 900 GB/s with higher DRAM utilization efficiency of 95%, the Tesla V100S delivers 1.5X higher memory bandwidth than Pascal GPUs as measured on STREAM.
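A back-of-the-envelope check on those numbers: 900 GB/s of raw bandwidth at 95% utilization gives an effective STREAM bandwidth of 855 GB/s.

```python
# Effective STREAM bandwidth implied by 900 GB/s raw at 95% utilization.
raw_gb_s = 900
utilization = 0.95
effective_gb_s = raw_gb_s * utilization
print(effective_gb_s)  # 855.0
```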