BrainMax™ DL-E410T

4U 10x Tesla T4 GPU Inference Server

 

Applications

  • Deep Learning Inference

Industries

  • Hyperscale Datacenters, Supercomputing Centers, Consumer Internet Companies, Higher Ed Research, Government, Healthcare, Financial Services, Retail, Manufacturing

Processor

  • Dual Socket P (LGA 3647)
  • 2nd Gen. Intel® Xeon® Scalable processors (Cascade Lake) and Intel® Xeon® Scalable processors (Skylake); 3 UPI links at up to 10.4 GT/s
  • Supports CPU TDP up to 205W

Chipset

  • Intel® C621 Express Chipset

GPU Support & Quantity

  • 10 x Tesla T4 PCIe
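
A quick post-deployment check is to confirm that the driver enumerates all ten T4 cards. Below is a minimal sketch using NVML, assuming the NVIDIA driver and the pynvml Python bindings are installed (neither is part of this spec):

    # Minimal sketch: enumerate GPUs via NVML and confirm ten Tesla T4 devices.
    # Assumes the NVIDIA driver and the pynvml package are installed (not part of the spec).
    import pynvml

    pynvml.nvmlInit()
    count = pynvml.nvmlDeviceGetCount()
    print(f"GPUs detected: {count}")

    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"  GPU {i}: {name}, {mem.total / 1024**3:.1f} GiB")

    pynvml.nvmlShutdown()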

System Memory (Maximum)

  • 24 DDR4 DIMM slots up to 2933/2666/2400/2133 MHz
  • Supports up to 6TB ECC RDIMM

Expansion Slots

  • 10 x PCIe x16 slots (Gen3 x16 bus) for GPUs
  • 1 x PCIe x16 slot (Gen3 x8 bus), half-length, low-profile, at front
  • 1 x PCIe x16 slot (Gen3 x16 bus), half-length, low-profile, at rear

Connectivity

  • Rear Side: 2 x 10Gb/s BASE-T LAN ports, 1 x 10/100/1000 management LAN port
  • Front Side:
    2 x 1Gb/s BASE-T LAN ports
    Optional 4 x QSFP28 LAN ports with Intel® Omni-Path Host Fabric Interface, providing 25Gb/s of bandwidth per port (100Gb/s total)

VGA

  • Integrated in Aspeed AST2500 BMC

Management

  • IPMI 2.0 + KVM with dedicated LAN
  • GPU health monitoring with fan speed control
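
GPU health data is typically read host-side, while the BMC handles chassis sensors and fan control over IPMI. A rough host-side illustration, assuming nvidia-smi is installed and on PATH (the query flags are standard nvidia-smi options, not specific to this server):

    # Rough sketch: poll per-GPU temperature and utilization with nvidia-smi,
    # the kind of data the platform's fan-speed control acts on.
    # Assumes nvidia-smi is installed and on PATH (not stated in the spec).
    import subprocess

    def gpu_health():
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=index,name,temperature.gpu,utilization.gpu,memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.strip().splitlines():
            idx, name, temp, util, mem = [f.strip() for f in line.split(",")]
            print(f"GPU {idx} ({name}): {temp} C, {util}% util, {mem} MiB used")

    if __name__ == "__main__":
        gpu_health()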

Drive Bays

  • Pre-installed Broadcom SAS3008 storage adapter (IR mode) with expander board
  • 12 x 3.5″ hot-swap SATA/SAS HDD/SSD bays
  • 10 x 2.5″ hot-swap HDD/SSD bays
  • 8 x NVMe or SATA/SAS HDD trays
  • 2 x SATA/SAS HDD trays

Power Supply

  • 3 x 2200W redundant 80 PLUS Platinum PSUs

System Dimensions

  • 7.0″ x 17.6″ x 34.6″ / 176mm x 448mm x 880mm (H x W x D)

Optimized for Turnkey Solutions

Enables powerful design, training, and visualization with built-in software tools including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, the NVIDIA CUDA Toolkit, and NVIDIA DIGITS.
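
As a simple illustration of that stack, the sketch below checks GPU visibility from TensorFlow and times a small batched matrix multiply on the first T4; it assumes a GPU-enabled TensorFlow build, which this page does not itself specify.

    # Minimal sketch: verify that TensorFlow sees the T4 GPUs and run a small
    # batched matmul on the first device. Assumes a GPU-enabled TensorFlow build
    # (an assumption; the spec only lists TensorFlow among the bundled tools).
    import time
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print(f"TensorFlow sees {len(gpus)} GPU(s)")

    if gpus:
        with tf.device("/GPU:0"):
            a = tf.random.normal([64, 1024, 1024])
            b = tf.random.normal([64, 1024, 1024])
            start = time.time()
            c = tf.matmul(a, b)
            _ = c.numpy()  # force execution
            print(f"Batched matmul finished in {time.time() - start:.3f} s")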