BrainMax® DL-E14T

1U 4x Tesla T4 GPU Inference Server


  • Deep Learning Inference


  • Supports up to four Tesla T4 GPUs in a 1U chassis, with dual 2nd Gen Intel® Xeon® Scalable processors.


Processor

  • Dual Socket P (LGA 3647)
  • 1st/2nd Gen. Intel® Xeon® Scalable Processors (Skylake/Cascade Lake), 3 UPI links at up to 10.4 GT/s
  • Supports CPU TDP up to 205W


Chipset

  • Intel® C621 Express Chipset

GPU Support & Quantity

  • 4 x Tesla T4 PCIe

System Memory (Maximum)

  • 24 DIMM slots
  • Up to 6TB 3DS ECC DDR4-2933/2666/2400 MHz RDIMM/LRDIMM
  • Supports Intel® Optane™ DCPMM
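
The 6TB ceiling can be sanity-checked with simple arithmetic. A minimal sketch, assuming 256GB 3DS LRDIMMs as the largest module (the datasheet lists only the slot count and the ceiling, not supported module sizes):

```python
# Sanity-check the maximum memory figure.
# 256GB per 3DS LRDIMM is an assumption; the datasheet states
# only the 24-slot count and the 6TB total.
DIMM_SLOTS = 24
GB_PER_LRDIMM = 256  # hypothetical largest supported module


def max_memory_gb(slots: int, module_gb: int) -> int:
    """Total capacity with every slot populated."""
    return slots * module_gb


total = max_memory_gb(DIMM_SLOTS, GB_PER_LRDIMM)
print(f"{total} GB = {total // 1024} TB")  # 6144 GB = 6 TB
```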

Expansion Slots

  • 4x PCI-E 3.0 x16 slots for 4x GPU cards
  • 2x PCI-E 3.0 x8 (in x16) low-profile slots
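
The slot layout fits within the platform's lane budget. A quick check, assuming the published 48 PCIe 3.0 lanes per Xeon Scalable CPU (not stated in this datasheet):

```python
# PCIe 3.0 lane budget across the two sockets.
# 48 lanes per CPU is Intel's published figure for Xeon Scalable
# (an assumption here); slot counts come from the spec above.
SLOT_LANES = 4 * 16 + 2 * 8  # four x16 GPU slots + two x8 low-profile
CPU_LANES = 2 * 48           # dual-socket total

print(f"slots consume {SLOT_LANES} of {CPU_LANES} CPU lanes")
```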


Networking

  • 2 x GbE LAN ports (Intel® I350-AM2)
  • 1 x 10/100/1000 dedicated management LAN


System Management

  • ASPEED AST2500 BMC


Remote Management

  • IPMI 2.0 with KVM over dedicated LAN

Drive Bays

  • 2 x 2.5″ hot-swappable HDD/SSD bays
  • 2 x 2.5″ internal fixed HDD/SSD bays

Power Supply

  • 2 x 2000W redundant 80 PLUS Platinum PSUs
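
A rough power budget shows why a single 2000W supply covers a fully loaded system, letting the pair run in 1+1 redundancy. The CPU TDP (205W, from the spec above) and Tesla T4 TDP (70W, NVIDIA's published figure) are known; per-DIMM, drive, and platform draws below are rough assumptions for illustration:

```python
# Rough peak-power estimate for a fully loaded configuration.
# CPU and GPU TDPs are published figures; the per-DIMM, drive,
# and platform overheads are rough assumptions.
LOADS_W = {
    "cpu": 2 * 205,   # dual Xeon Scalable, 205W TDP each
    "gpu": 4 * 70,    # four Tesla T4 cards, 70W TDP each
    "dimms": 24 * 6,  # ~6W per RDIMM (assumption)
    "drives": 4 * 8,  # four 2.5" drives (assumption)
    "platform": 150,  # fans, BMC, NICs, conversion losses (assumption)
}

total_w = sum(LOADS_W.values())
print(f"estimated peak draw: {total_w} W")
print(f"headroom on one 2000W PSU: {2000 - total_w} W")
```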

System Dimensions

  • 1.7″ x 17″ x 35.4″ / 43.5 x 430 x 900 mm (H x W x D)

Optimized for Turnkey Solutions

Enable powerful design, training, and visualization with built-in software tools, including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, the NVIDIA CUDA Toolkit, and NVIDIA DIGITS.