Powered by the NVIDIA Tesla V100, the fastest and most advanced data center GPU ever built, this system delivers the performance of up to 100 CPUs in a single Tesla V100 GPU, enabling data scientists, researchers, and engineers to tackle challenges once thought impossible.

Key Features of Tesla V100

Volta Architecture - By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and Deep Learning.

Maximum Efficiency Mode - The new maximum efficiency mode allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget. In this mode, Tesla V100 runs at peak processing efficiency, providing up to 80% of the performance at half the power consumption.

Tensor Core - Equipped with 640 Tensor Cores, Tesla V100 delivers 112 Teraflops of deep learning performance. That’s 12X Tensor FLOPS for DL Training, and 6X Tensor FLOPS for DL Inference when compared to NVIDIA Pascal™ GPUs.

Next Generation NVLink - NVIDIA NVLink in Tesla V100 delivers 2X higher throughput compared to the previous generation. Up to eight Tesla V100 accelerators can be interconnected at up to 300 GB/s to unleash the highest application performance possible on a single server.
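The 300 GB/s figure can be sanity-checked with a quick back-of-envelope calculation, assuming the commonly cited Tesla V100 configuration of six NVLink 2.0 links per GPU, each carrying 25 GB/s in each direction:

```python
# Back-of-envelope check of the Tesla V100 NVLink aggregate bandwidth.
# Assumes the commonly cited configuration: 6 NVLink 2.0 links per GPU,
# each moving 25 GB/s per direction (50 GB/s bidirectional per link).
LINKS_PER_GPU = 6
GB_S_PER_DIRECTION = 25

bidirectional_per_link = 2 * GB_S_PER_DIRECTION       # 50 GB/s
total_gb_s = LINKS_PER_GPU * bidirectional_per_link   # 300 GB/s

print(total_gb_s)  # 300
```

This also shows where the "2X higher throughput" claim comes from: the previous-generation NVLink carried 20 GB/s per direction over four links, for 160 GB/s aggregate.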

HBM2 - With a combination of improved raw bandwidth of 900 GB/s and higher DRAM utilization efficiency at 95%, Tesla V100 delivers 1.5X higher memory bandwidth over Pascal GPUs as measured on STREAM.

Programmability - Tesla V100 is architected from the ground up to simplify programmability. Its new independent thread scheduling enables finer-grain synchronization and improves GPU utilization by sharing resources among small jobs.



4X Tesla V100 PCIe.
Delivers up to 2,560 Tensor Cores, 20,480 CUDA Cores, 28 Teraflops double precision, 56 Teraflops single precision, and 448 Teraflops Tensor performance.
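The aggregate figures above follow directly from the per-card specifications of the Tesla V100 PCIe (640 Tensor Cores, 5,120 CUDA Cores, 7 TFLOPS double precision, 14 TFLOPS single precision, 112 TFLOPS Tensor performance), multiplied across four cards. A minimal sketch of that arithmetic:

```python
# Aggregate specs for 4x Tesla V100 PCIe, derived from per-card figures.
GPUS = 4

tensor_cores = GPUS * 640      # 2,560 Tensor Cores
cuda_cores   = GPUS * 5_120    # 20,480 CUDA Cores
fp64_tflops  = GPUS * 7        # 28 Teraflops double precision
fp32_tflops  = GPUS * 14       # 56 Teraflops single precision
dl_tflops    = GPUS * 112      # 448 Teraflops Tensor performance

print(tensor_cores, cuda_cores, fp64_tflops, fp32_tflops, dl_tflops)
```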


2X Intel® Xeon® Scalable processors.
LGA 3647 Socket P0


14x DDR4 DIMM slots up to 2933/2666/2400/2133 MHz.


4x PCI-E 3.0 x16 slots for 4x GPU cards.
2x PCI-E 3.0 x8 (in x16) low-profile slots


1U rack-optimized chassis.
2x 2.5" hot-swap drive bays and 2x 2.5" internal drive bays, with optional NVMe SSD support.


Cloud
Artificial Intelligence
Deep Learning
Data Analytics
Satellite Imaging
Oil and Gas
Climate Modeling
Computational Physics