Nvidia’s Hopper H100 pictured, features 80GB of HBM3 memory and impressive VRM

Briefly: Nvidia took the wraps off its Hopper architecture at GTC 2022 and announced the H100 server accelerator, but only showed renders of it. Now we finally have some hands-on shots of the SXM variant of the card, which has a mind-boggling 700W TDP.

It’s been a little over a month since Nvidia unveiled its H100 server accelerator based on the Hopper architecture, and so far we’ve only seen renders of it. That changes today, as ServeTheHome just shared photos of the card in its SXM5 form factor.

The GH100 compute GPU is fabricated on TSMC’s N4 process node and has a die size of 814 mm². The SXM variant has 16,896 FP32 CUDA cores, 528 Tensor cores, and 80 GB of HBM3 memory connected via a 5120-bit bus. As can be seen from the images, there are six 16GB memory stacks around the GPU, but one of them is disabled.
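The memory figures above check out with a bit of arithmetic. A quick sketch, assuming the standard 1024-bit interface per HBM stack (the per-stack width is our assumption, not stated in the article):

```python
# Back-of-the-envelope check of the H100 SXM memory configuration:
# six physical HBM3 stacks with one disabled, 16 GB per stack, and an
# assumed 1024-bit interface per stack (typical for HBM).
total_stacks = 6
active_stacks = total_stacks - 1          # one stack is disabled
capacity_gb = active_stacks * 16          # 16 GB per active stack
bus_width_bits = active_stacks * 1024     # 1024 bits per active stack

print(capacity_gb, bus_width_bits)        # → 80 5120
```

Five active stacks thus account for both the 80 GB capacity and the 5120-bit bus, with the sixth stack presumably disabled for yield reasons.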

Nvidia also quoted a staggering 700W TDP, 75% higher than its predecessor’s, so it’s no surprise that the card comes with an extremely impressive VRM solution. It features 29 inductors, each fed by two power stages, plus an additional three inductors with one power stage each. Cooling all these tightly packed components will likely be a challenge.
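Tallying the power stages described above gives a sense of the scale of that VRM (a simple count, nothing more):

```python
# Counting the VRM power stages visible on the H100 SXM board:
# 29 inductors with two power stages each, plus three inductors
# with a single power stage each.
dual_stage_inductors = 29
single_stage_inductors = 3
total_power_stages = dual_stage_inductors * 2 + single_stage_inductors * 1

print(total_power_stages)  # → 61
```

Sixty-one power stages feeding a single 700W package goes some way toward explaining why cooling the board will be a challenge.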

Another notable change is the connector layout for SXM5. There is now a short and a long mezzanine connector, while previous generations had two identical longer ones.

Nvidia will begin shipping H100-equipped systems in the third quarter of this year. It’s worth noting that the PCIe version of the H100 is currently listed in Japan for 4,745,950 yen ($36,300) after taxes and shipping, although it has fewer CUDA cores, downgraded HBM2e memory, and half the TDP of the SXM variant.
