
NVIDIA DGX Station™
Maximize Your Data Science Productivity

- 72x the performance for deep learning training, compared with CPU-based servers
- 100x speedup on large data set analysis, compared with a 20-node Spark server cluster
- 5x the bandwidth of PCIe, enabled by NVLink technology
- Maximized versatility, with deep learning training and inference at over 30,000 images per second
Your data science team depends on computing performance to gain insights and innovate faster through the power of AI and deep learning. Until now, AI supercomputing was confined to the data center, limiting the experimentation needed to develop and test deep neural networks prior to training at scale. Designed for your data science team, NVIDIA® DGX Station™ is the world’s fastest workstation for leading-edge AI development. This fully-integrated and optimized system enables your team to get started faster and effortlessly experiment with the power of a data center in your office.
World-Class Computing Performance in the Hands of Your Team
Your real work is innovation and discovery. DGX Station is the only workstation with four NVIDIA® Tesla® V100 Tensor Core GPUs, integrated with a fully connected four-way NVIDIA NVLink™ architecture. With 500 TFLOPS of supercomputing performance, your entire data science team can experience over 2X the training performance of today’s fastest workstations.
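As a quick illustration (a minimal sketch, assuming a PyTorch environment such as an NGC container; none of these specifics are prescribed by this datasheet), the snippet below enumerates the GPUs and checks that every pair can access each other's memory directly, peer traffic that DGX Station carries over its fully connected NVLink fabric.

```python
# Sketch only: assumes PyTorch is installed (not part of this datasheet's spec).
# Lists the GPUs and verifies pairwise peer access, which NVLink provides on DGX Station.
import torch

num_gpus = torch.cuda.device_count()              # expected: 4 on DGX Station
for i in range(num_gpus):
    print(f"GPU {i}: {torch.cuda.get_device_name(i)}")

for i in range(num_gpus):
    for j in range(num_gpus):
        if i != j and torch.cuda.can_device_access_peer(i, j):
            print(f"GPU {i} <-> GPU {j}: direct peer access available")
```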
Get the Fastest Start in Data Science and AI Research
Spend less time and money on configuration, and more time on data science. DGX Station can save you hundreds of thousands of dollars in engineering hours and lost productivity waiting for stable versions of open source code. Powered by the NVIDIA DGX Software Stack, DGX Station lets you start innovating within one hour.
This integrated hardware and software solution allows your data science team to easily access a comprehensive catalog of NVIDIA-optimized, GPU-accelerated containers that offer the fastest possible performance for AI and data science workloads. It also includes access to NVIDIA DIGITS™, deep learning frameworks, HPC containers, third-party accelerated solutions, the NVIDIA Deep Learning SDK (e.g., cuDNN, cuBLAS, NCCL), the NVIDIA CUDA® Toolkit, RAPIDS open-source libraries, and NVIDIA drivers.
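To make this concrete, here is a minimal sketch of the kind of workload those containers accelerate, using the RAPIDS cuDF library named above (the container image and library versions are assumptions, not specified by this datasheet):

```python
# Sketch only: assumes the RAPIDS cuDF library is available (e.g., from an NGC
# RAPIDS container). A pandas-style groupby aggregation that executes on the GPU.
import cudf

df = cudf.DataFrame({
    "sensor":  ["a", "b", "a", "b", "a"],
    "reading": [0.5, 1.2, 0.7, 1.9, 0.6],
})
print(df.groupby("sensor")["reading"].mean())
```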
Built on container technology and powered by NVIDIA Container Runtime for Docker, this unified deep learning software stack simplifies your workflow, saving you days of re-compilation time when you need to scale your work and deploy your models in the data center or cloud. The same workload running on DGX Station can be effortlessly migrated to an NVIDIA DGX-1™, NVIDIA DGX-2™, or the cloud without modification.
With GPU-aware Kubernetes from NVIDIA, your data science team can benefit from industry-leading orchestration tools to better schedule AI resources and workloads. Data scientists can run compute workloads by scheduling and queuing jobs, running multiple jobs simultaneously, and easily monitoring GPU health. Eliminate idle GPU time, drive down the cost per training run, and maximize the productivity and return on investment of your data science team. Enjoy productive experimentation and spend more time focused on insight.
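As one hedged example of such a portable workload (a sketch assuming TensorFlow 2.x inside an NGC container; the image, versions, and model are illustrative, not taken from this datasheet), tf.distribute.MirroredStrategy replicates training across however many GPUs the node exposes, so the same script runs on the four GPUs of a DGX Station, on a DGX-1 or DGX-2, or on a cloud instance:

```python
# Sketch only: assumes TensorFlow 2.x (e.g., from an NGC container, image not
# specified here). MirroredStrategy mirrors the model across all visible GPUs.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("GPUs in sync:", strategy.num_replicas_in_sync)   # 4 on DGX Station

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Synthetic data stands in for a real training set.
x = np.random.rand(2048, 784).astype("float32")
y = np.random.randint(0, 10, size=(2048,)).astype("int64")
model.fit(x, y, batch_size=512, epochs=1)
```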
Access to AI Expertise
With DGX Station, you benefit from NVIDIA’s AI expertise, enterprise-grade support, extensive training, and field-proven capabilities that can jump-start your work for faster insights. Our dedicated team is ready to get you started with prescriptive guidance, design expertise, and access to our fully optimized DGX Software Stack. You get an IT-proven solution backed by enterprise-grade support and a team of experts who can help ensure your mission-critical AI applications stay up and running.
System Specifications
GPUs: 4x Tesla V100
GPU Memory: 128 GB total system
TFLOPS (Mixed Precision): 500
NVIDIA Tensor Cores: 2,560
NVIDIA CUDA® Cores: 20,480
CPU: Intel Xeon E5-2698 v4 2.2 GHz (20-core)
System Memory: 256 GB RDIMM DDR4
Storage (Data): 3x 1.92 TB SSD RAID 0
Storage (OS): 1x 1.92 TB SSD
Network: Dual 10GBASE-T (RJ45)
Display: 3x DisplayPort, 4K resolution
Additional Ports: 2x eSATA, 2x USB 3.1, 4x USB 3.0
Acoustics: < 35 dB
System Weight: 88 lbs / 40 kg
System Dimensions: 518 mm (D) x 256 mm (W) x 639 mm (H)
Maximum Power Requirements: 1,500 W
Operating Temperature Range: 10–30 °C
Software: Ubuntu Desktop Linux OS, Red Hat Enterprise Linux OS, DGX Recommended GPU Driver, CUDA Toolkit
Optimized for Turnkey Solutions
Enable powerful design, training, and visualization with built-in software tools, including TensorFlow, Caffe, Torch, Theano, BIDMach, cuDNN, the NVIDIA CUDA Toolkit, and NVIDIA DIGITS.
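As a quick sanity check (a sketch assuming TensorFlow 2.x from the pre-installed stack or an NGC container; versions are not specified in this datasheet), you can confirm that a framework sees all four GPUs before starting a training run:

```python
# Sketch only: confirm the framework sees the system's GPUs (assumes TensorFlow 2.x).
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {len(gpus)}")    # expected: 4 on DGX Station
for gpu in gpus:
    print(" ", gpu.name)
```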




