AMAX Gives Customers an Inside Look into NVIDIA’s new H100 GPUs and DGX H100 Systems

As data center workloads evolve, GPU performance needs to scale with them as enterprises adopt the latest technologies for AI and other emerging workloads. Processing ever-larger volumes of data and computing at exascale (a quintillion operations per second) are becoming the norm, driving demand for powerful equipment and IT infrastructure.

NVIDIA has announced its latest breakthrough in accelerated computing with the release of the NVIDIA H100 Tensor Core GPU, the successor to the NVIDIA A100 Tensor Core GPU. H100 is built on the NVIDIA Hopper architecture to deliver industry-leading conversational AI and speed up large language models by 30X over the previous generation.

Flexibility: The H100 accelerates a wide range of workloads, from enterprise tasks to exascale HPC and trillion-parameter AI models. H100 is the world’s most advanced chip, built on TSMC’s 4N process (a 4nm process customized for NVIDIA) with 80 billion transistors and numerous architectural advances.

Comparison: As NVIDIA’s ninth-generation data center GPU, the H100 is designed to provide an order-of-magnitude performance boost for large-scale AI and HPC compared to the previous-generation NVIDIA A100 Tensor Core GPU. The H100 continues the A100 GPU’s major design focus of improving strong scaling for AI and HPC workloads, with significant architectural efficiency improvements. For today’s mainstream AI and HPC models, the H100 with NDR InfiniBand interconnect outperforms A100 by up to 30X.

Scale to NVIDIA DGX SuperPOD: A single NVIDIA DGX H100 system contains eight NVIDIA H100 GPUs and provides unrivaled FP8 performance of 32 petaFLOPS. This performance can be easily scaled up by deploying multiple DGX H100 systems in an NVIDIA DGX BasePOD or NVIDIA DGX SuperPOD architecture. DGX SuperPOD starts at 32 DGX H100 systems, integrating 256 H100 GPUs into a “scalable unit” connected by high-speed NVIDIA InfiniBand networking. DGX SuperPOD with DGX H100 systems can grow from one to multiple scalable units, allowing customers to deploy clusters sized to their largest AI and deep learning challenges.
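The scale-out arithmetic above is simple to work through. The short Python sketch below uses only the figures quoted in this article (8 GPUs and 32 FP8 petaFLOPS per DGX H100 system, 32 systems per scalable unit) to show how GPU count and aggregate throughput grow with the number of scalable units; the function name is illustrative, not an NVIDIA API.

```python
# Scale-out arithmetic for a DGX SuperPOD built from DGX H100 systems,
# using the figures quoted in the text above.
GPUS_PER_SYSTEM = 8             # H100 GPUs per DGX H100 system
FP8_PFLOPS_PER_SYSTEM = 32      # FP8 petaFLOPS per DGX H100 system
SYSTEMS_PER_SCALABLE_UNIT = 32  # DGX H100 systems per SuperPOD scalable unit

def superpod_totals(scalable_units: int) -> tuple[int, int, int]:
    """Return (systems, GPUs, aggregate FP8 petaFLOPS) for a SuperPOD."""
    systems = scalable_units * SYSTEMS_PER_SCALABLE_UNIT
    gpus = systems * GPUS_PER_SYSTEM
    pflops = systems * FP8_PFLOPS_PER_SYSTEM
    return systems, gpus, pflops

# One scalable unit: 32 systems, 256 GPUs, 1,024 FP8 petaFLOPS
# (i.e., roughly an exaFLOP of FP8 compute per scalable unit).
print(superpod_totals(1))  # (32, 256, 1024)
```

This makes the headline numbers easy to verify: a single scalable unit already aggregates over an exaFLOP of FP8 compute, and each additional unit adds the same again.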

Ready to scale AI workloads through HPC? With the new NVIDIA H100 GPU architecture, AMAX can help enterprises adopt AI-ready infrastructure that accelerates industries into a new generation of streamlined workloads. From deep learning training and inference to large-scale data analysis, AMAX’s AceleMax line of AI solutions with the latest NVIDIA H100 acceleration lets organizations solve the world’s most complex problems with our best-in-class engineering.

Contact us about AMAX AI GPU solutions
