Powered by 8x NVIDIA Tesla V100 PCIe GPU accelerators and 2x Intel® Xeon® Scalable family processors, the DL-E48A is an industry-first accelerated GPU computing solution featuring a re-configurable single/dual root complex PCIe architecture, allowing hardware to be optimized on the fly for AI and deep learning (DL) training, inference, HPC, rendering, and virtualization applications.

Features

Single Root Complex

For GPU-intensive DL workloads: ideal for reducing GPU-to-GPU memory copy latency and increasing bandwidth.

Dual Root Complex

For CPU-intensive and parallel-computing applications: optimized for CPU/memory-to-GPU communication.

  • Configurable PCIe root complex architecture supporting both single and dual root complex modes, allowing hardware optimization for a range of AI applications.
  • Supports up to 8x full-height, full-length, active or passive GPUs in a 4U chassis for maximum GPU density and utilization.
  • Flexible SKU support through software-configurable PCIe lane topology across the PLX PCIe switches.
  • Smart Thermal Radar design for energy efficiency.
  • Supports dual embedded Intel Omni-Path 100Gbps fabric and OCuLink for NVMe for high-speed transmission.
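The single- vs dual-root trade-off above can be illustrated with a toy model (the GPU-to-root mappings below are hypothetical examples for illustration, not the DL-E48A's actual configuration interface): under a single root complex, any GPU pair can exchange data over the PCIe switch fabric, while a dual root complex splits the GPUs across the two CPUs, so some pairs must cross the inter-CPU link.

```python
# Toy illustration of single vs dual root complex PCIe topologies.
# Mapping: GPU index -> root complex (CPU) index. These tables are
# hypothetical, not read from real hardware.

SINGLE_ROOT = {gpu: 0 for gpu in range(8)}       # all 8 GPUs under CPU0
DUAL_ROOT = {gpu: gpu // 4 for gpu in range(8)}  # GPUs 0-3 -> CPU0, 4-7 -> CPU1

def p2p_stays_on_switch(topology, gpu_a, gpu_b):
    """True if a GPU-to-GPU copy can stay on the PCIe switch fabric
    instead of crossing the inter-CPU link."""
    return topology[gpu_a] == topology[gpu_b]

# Under a single root complex every pair qualifies; under a dual root
# complex, pairs that span the two CPUs must cross the inter-CPU link.
print(p2p_stays_on_switch(SINGLE_ROOT, 0, 7))  # True
print(p2p_stays_on_switch(DUAL_ROOT, 0, 7))    # False
```

This is why single root suits GPU-to-GPU-heavy DL training, while dual root gives each CPU a direct path to its local GPUs for CPU/memory-to-GPU traffic.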

    Specifications

    GPU

    8x Tesla V100 PCIe.
    Up to 256GB dedicated HBM2 GPU memory per system.
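The 256GB figure follows from the 32GB-per-card HBM2 capacity of the Tesla V100 PCIe (32GB variant); a quick check:

```python
# Aggregate HBM2 across the system: 8 cards x 32GB per V100 PCIe card.
num_gpus = 8
hbm2_per_gpu_gb = 32
print(num_gpus * hbm2_per_gpu_gb)  # 256 (GB)
```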

    CPU

2x Intel® Xeon® Scalable family processors.
    Up to 205W TDP.

    NETWORK

    2x I350 LAN + 1x Management LAN. 2x Intel Omni-Path Fabric; 2x InfiniBand EDR/10G kit.

    STORAGE

    On-board SATA Controller; Intel® VROC and Software RAID 0/1; Optional SAS Controller.

    MEMORY

24x DDR4 DIMM slots at up to 2666/2400/2133 MHz; supports up to 3TB of ECC RDIMM.
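The 3TB ceiling implies 128GB modules in all 24 slots (an inference, since the per-DIMM capacity is not stated in the spec above):

```python
# Back out the per-DIMM capacity implied by the 3TB maximum.
dimm_slots = 24
total_gb = 3 * 1024              # 3TB expressed in GB
gb_per_dimm = total_gb // dimm_slots
print(gb_per_dimm)               # 128 (GB per DIMM)
print(dimm_slots * gb_per_dimm)  # 3072 GB = 3TB
```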

    EXPANSION SLOTS

Rear: 2x PCIe 3.0 x16 LP HL;
1x PCIe 3.0 x16 LP HL.
Front: 1x PCIe 3.0 x8 LP.

    Key Features Of Tesla V100

    Volta Architecture & Tensor Core - By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and Deep Learning.

Maximum Efficiency Mode - Allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget by delivering up to 80% of peak performance at half the power consumption.
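The per-rack gain is consistent with the performance-per-watt arithmetic: 80% of peak performance at 50% power is a 1.6x efficiency improvement, so a fixed rack power budget accommodates proportionally more compute.

```python
# Performance-per-watt implied by Maximum Efficiency Mode.
relative_performance = 0.80  # fraction of peak performance retained
relative_power = 0.50        # fraction of full power consumed
efficiency_gain = relative_performance / relative_power
print(efficiency_gain)       # 1.6 (x performance per watt)
```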

    HBM2 - With a combination of improved raw bandwidth of 900 GB/s and higher DRAM utilization efficiency at 95%, Tesla V100 delivers 1.5X higher memory bandwidth over Pascal GPUs as measured on STREAM.
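The bandwidth claim can be sanity-checked from the stated figures. At 95% DRAM utilization, the 900 GB/s peak yields about 855 GB/s effective; the claimed 1.5x gain then implies the Pascal baseline delivered roughly 570 GB/s on STREAM (an inference, assuming the comparison is against a Tesla P100, whose peak HBM2 bandwidth is 720 GB/s):

```python
# Effective STREAM bandwidth implied by the V100 figures above.
v100_peak_gbs = 900
v100_utilization = 0.95
v100_effective = v100_peak_gbs * v100_utilization
print(v100_effective)            # 855.0 (GB/s)

# The 1.5x claim implies the Pascal baseline's effective bandwidth.
pascal_effective = v100_effective / 1.5
print(round(pascal_effective))   # 570 (GB/s)
```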
