BrainMax™ DL-E24T

2U 4x Tesla T4 GPU Inference Server


Applications

  • Deep Learning Inference


Target Markets

  • Hyperscale Datacenters, Supercomputing Centers, Consumer Internet Companies, Higher Ed Research, Government, Healthcare, Financial Services, Retail, Manufacturing

Processor

  • Dual Socket P (LGA 3647)
  • 1st/2nd Gen Intel® Xeon® Scalable Processors (Skylake/Cascade Lake), 3 UPI links up to 10.4 GT/s
  • Supports CPU TDP up to 150W

Chipset

  • Intel® C621 Express Chipset

GPU Support & Quantity

  • 4 x NVIDIA Tesla T4 (PCIe)

System Memory (Maximum)

  • 16 DDR4 DIMM slots up to 2933/2666/2400/2133 MHz
  • Up to 4TB ECC RDIMM
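A quick sanity check on the memory spec: reaching the 4TB maximum across 16 DIMM slots implies 256GB modules (the per-module size is inferred here, not stated in the datasheet):

```python
# Sanity-check the maximum memory spec: 4 TB spread across 16 DIMM slots.
# NOTE: the 256 GB-per-module figure is an inference, not a stated spec.
DIMM_SLOTS = 16
MAX_CAPACITY_GB = 4 * 1024  # 4 TB expressed in GB

per_dimm_gb = MAX_CAPACITY_GB // DIMM_SLOTS
print(per_dimm_gb)  # 256 -> the 4 TB maximum requires 256 GB RDIMMs
```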

Expansion Slots

  • 8 x PCI-E 3.0 x16 slots (supports up to 8 double-width GPU cards), plus
  • Rear: 1 x PCI-E 3.0 x24 via riser card (1 x PCI-E x16 Gen 3 link and 1 x PCI-E x8 Gen 3 link)
  • Front: 1 x PCI-E 3.0 x8 (for internal HBA/RAID card)

Connectivity

  • 1 x Dual Port Intel I350-AM2 Gigabit LAN controller
  • 1 x RJ45 Dedicated IPMI LAN port

VGA

  • On-board VGA via the ASPEED AST2500 BMC

Management

  • IPMI 2.0 + KVM with dedicated LAN
  • GPU health monitoring with fan speed control

Drive Bays

  • 8 x Hot-swap 3.5″ HDD Bays

Power Supply

  • 1+1 Redundant 1600W 80 PLUS Platinum Power Supply

System Dimensions

  • 3.5″ x 17.2″ x 31″ / 89mm x 437mm x 787mm (H x W x D)
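The imperial and metric dimensions above can be cross-checked with a simple unit conversion (1 in = 25.4 mm):

```python
# Cross-check the chassis dimensions: inches -> millimetres (1 in = 25.4 mm).
MM_PER_INCH = 25.4

dims_in = {"H": 3.5, "W": 17.2, "D": 31.0}  # inches, from the spec
dims_mm = {k: round(v * MM_PER_INCH) for k, v in dims_in.items()}
print(dims_mm)  # {'H': 89, 'W': 437, 'D': 787} -> matches the metric figures
```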

Optimized for Turnkey Solutions

Enable powerful design, training, and visualization with built-in software tools, including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, the NVIDIA CUDA Toolkit, and NVIDIA DIGITS.