Ten years have passed since GPU-accelerated computing was first introduced. This year at GPU Technology Conference 2017, advances in GPU computing and methods culminated in the most ambitious and far-reaching technical endeavor yet—Artificial Intelligence. As researchers, global enterprises, and startups alike converged at GTC, the hottest topic was clearly AI and Machine Learning, with NVIDIA doubling down on its position as an AI company, “Powering the AI Revolution.”
In his keynote, NVIDIA founder and CEO Jensen Huang discussed how the AI boom is fueling a post-Moore’s Law (or Moore’s Law Squared) demand for GPU compute power. In response, NVIDIA has invested over $3 billion to develop the new Tesla V100 accelerator, based on the Volta architecture. Built with 21 billion transistors, the V100 delivers deep learning performance that NVIDIA equates to 100 CPUs, and it will be supported by new releases of deep learning frameworks such as Caffe2, TensorFlow, Microsoft Cognitive Toolkit, and MXNet. The DGX-1 also gets an upgrade with V100 GPUs, selling at $149,000 (want one? Inquire here).
Huang also introduced the DGX Station, a workstation featuring four V100 GPUs for 480 teraflops of Tensor Core computing power, with a selling price of $69,000.
Other announcements included a collaboration with Toyota on autonomous driving and Project Holodeck, a shared VR working environment. More than anything, though, the keynote signaled that NVIDIA has every intention of powering the AI boom, particularly when it comes to accelerating Machine Learning.
Along those same lines, AMAX showcased its dedication to providing advanced tools that fast-track Deep Learning development while reducing the barrier to entry. With the launch of “The MATRIX” product line, AMAX combined its award-winning Deep Learning platforms with end-to-end Deep Learning tools as well as GPU virtualization technology. The MATRIX increases GPU utilization, fast-tracks AI development and training, streamlines task management, and minimizes infrastructure costs.
While the MATRIX is deployed as a turnkey appliance in workstation, server, and rackscale clusters, it is especially beneficial to AI startups and incubators that need a deep learning platform to scale with them. The ultra-quiet MATRIX workstations come in a mini 2-GPU form factor and a 4-GPU form factor, and through the MATRIX software, GPU resources can be aggregated and presented to users as an on-premise GPU cloud for dynamic sharing. What this means is that AI companies can build virtual GPU clusters on demand, using hardware that sits quietly under a desk.
Our Presenter Series featured topics around GPU Virtualization and Cloud Computing for Machine Learning, including how the MATRIX enables AI startups to accelerate time-to-market, how to upgrade non-GPU infrastructures to include GPU resources, and how to break through current performance limitations in GPU computing, among other topics.
We were also honored to be interviewed by insideHPC.com to talk about the use cases of the MATRIX.
insideHPC: What are you showcasing at the booth today?
Dr. Rene Meyer (VP of Technology, AMAX): What we are showcasing here is a very interesting solution—a hardware/software solution. We not only present the hardware, but we put a software layer on top, which allows you to virtualize GPUs in those machines.
insideHPC: Can you tell me about some use cases and what problems it solves?
Dr. Rene Meyer: One of the use cases involves enterprise customers who purchased a few racks of hardware and have since learned that the software they run supports GPUs and benefits from acceleration. What they would usually do is rip the old hardware out and replace it with new GPU-equipped hardware, which can be expensive. Rather than do that, they can add a few blocks of our high-density MATRIX servers, then use the MATRIX software to virtualize the GPUs and attach them to the existing cluster. So with the MATRIX, you can turn your existing non-GPU cluster into a GPU cluster, with minimal additional hardware and without performing a complete refresh.
insideHPC: Ok Rene. This MATRIX box has been described as groundbreaking. Can you tell me more about it?
Dr. Rene Meyer: The MATRIX offering is an end-to-end solution. It’s not just a very powerful deep learning box; it also has an integrated software layer for a plug-and-play experience. The software layer allows you to spin up instances—containers—that are pre-configured with Deep Learning frameworks like TensorFlow, Caffe, Torch, and so on. So you don’t have to worry about having IT configure, set up, or install things to make sure you have the latest version and everything is working. You can literally, at the click of a button, spin up instances and be ready to go.
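For context, the kind of setup work being automated here resembles manually launching a framework-ready GPU container. The sketch below is an illustration only—it uses NVIDIA’s public Docker tooling of the era (nvidia-docker) and TensorFlow’s public image, not the MATRIX software itself, whose interface is not described in this interview:

```shell
# Illustration only: manually launching a pre-configured TensorFlow container,
# the kind of provisioning the MATRIX is said to reduce to a single click.
# Assumes Docker plus the NVIDIA container runtime (nvidia-docker, circa 2017).

# Pull an image that ships with TensorFlow and GPU libraries pre-installed
nvidia-docker pull tensorflow/tensorflow:latest-gpu

# Start an instance and confirm the framework can see the GPUs
nvidia-docker run --rm tensorflow/tensorflow:latest-gpu \
    python -c "from tensorflow.python.client import device_lib; \
               print(device_lib.list_local_devices())"
```

Even this manual path skips driver and framework installation on the host, which is the main pain point a turnkey container workflow removes.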
insideHPC: So for developers this would be pretty powerful—getting the setup out of the way so they can focus on their work. Is that the idea?
Dr. Rene Meyer: That’s exactly the idea. You can start development with one of these boxes. Once you need to upgrade or scale out for more power, there are various ways to do this. One way is to buy multiple MATRIX boxes: through virtualization, the MATRIX software lets you attach GPUs from one box to another or combine compute resources dynamically, so you can build more powerful servers or workstations for your workloads on demand. As you continue to grow, you can purchase more servers or workstations, which can be seamlessly integrated into your growing virtual GPU pool. What’s good for startups is that you can grow your computation power significantly without needing to build a data center or rent from a colo, reducing both the time and the cost of a traditional infrastructure.