Nvidia announces new AI chip - the H200

NVIDIA has announced a new GPU, the H200, based on its Hopper architecture. It promises a major performance boost for AI workloads such as large language models.

What’s going on here?

The H200 delivers far more memory bandwidth and capacity than previous GPUs, letting generative AI models process much more data, much faster.

What does this mean?

The H200 packs 141GB of HBM3e memory, nearly double the capacity of the A100, along with 2.4x its memory bandwidth. That headroom is ideal for the massive data needs of large language models and other generative AI applications.
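To see why 141GB matters, here's a rough back-of-envelope sketch (ours, not from the announcement): a model's weights alone need roughly parameters × bytes-per-parameter of memory.

```python
# Back-of-envelope sketch: raw memory needed just to hold model weights.
# Bytes-per-parameter values are the standard sizes for each numeric format;
# the model sizes are illustrative, not taken from NVIDIA's announcement.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def weights_gb(params_billion: float, dtype: str) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

for model, size in [("7B", 7), ("70B", 70)]:
    for dtype in ("fp16", "int8"):
        print(f"{model} @ {dtype}: ~{weights_gb(size, dtype):.0f} GB")

# 70B @ fp16: ~140 GB -- just fits in a single H200's 141 GB, whereas an
# 80 GB A100/H100 must split the model across two GPUs (and in practice
# you also need extra room for the KV cache and activations).
```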

The new GPU is expected to nearly double inference speed on 70-billion-parameter models compared to the H100. Further software optimizations should unlock more performance over time.
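One reason memory bandwidth dominates inference speed: generating each token requires streaming essentially every weight through the GPU's memory. A simple roofline estimate sketches the ceiling; the bandwidth numbers below are the commonly reported specs and the 70B model is assumed to run in FP16 on a single GPU.

```python
# Rough roofline: batch-1 decoding is memory-bandwidth-bound, since every
# weight must be read from HBM once per generated token. Upper bound:
#   tokens/sec <= memory_bandwidth / bytes_of_weights
# Bandwidth figures are commonly reported specs, used only for illustration;
# real throughput is lower (KV-cache reads, kernel overheads, etc.).

MODEL_BYTES = 70e9 * 2  # 70B parameters in FP16

for gpu, bw_tb_s in [("H100 SXM", 3.35), ("H200", 4.8)]:
    tokens_per_s = bw_tb_s * 1e12 / MODEL_BYTES
    print(f"{gpu}: <= {tokens_per_s:.0f} tokens/sec (batch size 1)")
```

The H200's ~1.4x bandwidth raises this ceiling proportionally; the larger ~2x speedups cited also reflect fitting the model on fewer GPUs and software improvements, not bandwidth alone.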

The H200 will be available in NVIDIA's HGX server boards and in the Grace Hopper Superchip module. It is also compatible with existing H100 systems, so upgrading is straightforward.

Why should I care?

This new GPU will accelerate the development and deployment of large language models, powering more capable AI assistants, creative tools, and information systems.

Faster training and inference directly enable companies to build and ship more advanced AI sooner, translating into better services, insights, and automation.

As AI becomes an increasingly crucial part of business and research, access to more powerful hardware like the H200 will be key to staying competitive. The acceleration it offers will help drive generative AI forward across industries.
