HPE Adds Support for Qualcomm Cloud AI 100 Inference Accelerator

HPE’s qualification of the Qualcomm Technologies Cloud AI 100 is a big win for one of the most efficient and powerful AI inference engines on the market today.

While working at AMD to add the first-generation EPYC server SoC to HPE servers, I learned that the process the company used to qualify a CPU or GPU was rightly called “The Meat Grinder.” Read that in a low, menacing voice and you get the idea. After HPE management determined that you had a valuable product that was likely in high demand, the “fun” part began. HPE engineers spent months testing millions of corner cases of power, temperature, load, performance and logic before telling their customers that this CPU or card met their high quality standards and was fully supported.

That’s the hurdle Qualcomm Technologies (QTI) had to clear to get its AI accelerator onto HPE Edgeline servers. So this is a big deal, and probably not the last we’ll see from QTI and HPE. The Edgeline family is, as the name implies, built for edge deployments such as factory automation, often configured in a ruggedized chassis. As AI becomes a ubiquitous tool for edge processing applications, HPE sees an exciting opportunity to partner with QTI for state-of-the-art AI processing in imaging and NLP applications.

The greater opportunity for both companies could be for HPE to qualify the QTI card for the HPE ProLiant DL server line, which is popular with enterprises and second-tier cloud service providers. AI at scale is a large market and growing rapidly. For example, Meta processes hundreds of trillions (yes, trillions with a “T”) of inferences on the Facebook platform every day. While Intel has done a great job of keeping those cycles on Xeon processors with built-in AI features, dedicated AI accelerators offer TCO benefits and can ensure consistent performance and latency.

While the image above shows a Gigabyte server, an HPE server would offer similar value and open a lot of doors for QTI. When we spoke with HPE management, they pointed to the Cloud AI 100’s performance leadership over alternative technologies, citing the ~6X performance advantage indicated by MLPerf benchmarks at the same (75W) power consumption as something their customers have asked for.

Conclusions

Adoption by HPE, the industry’s largest server vendor, should give QTI the respect and product demand it deserves in AI inference processing at the edge and in the cloud. According to our calculations, a large data center could save tens of millions of dollars in annual energy costs by using the Cloud AI 100 to process neural networks for image and language models.
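To make that claim concrete, here is a minimal back-of-envelope sketch of the math. Only the 75W power envelope and the ~6X MLPerf advantage come from the article above; every other input (fleet size, PUE, electricity price) is a hypothetical assumption for illustration, not a figure from HPE, Qualcomm Technologies, or MLPerf.

```python
# Back-of-envelope estimate of annual energy savings from a ~6X
# performance-per-watt advantage on a fixed inference workload.
# All inputs marked "hypothetical" are illustrative assumptions.

WATTS_PER_CARD = 75        # power envelope cited above
PERF_ADVANTAGE = 6.0       # ~6X throughput at equal power (MLPerf claim above)
BASELINE_CARDS = 100_000   # hypothetical fleet sized for today's workload
PUE = 1.5                  # hypothetical data-center power usage effectiveness
USD_PER_KWH = 0.10         # hypothetical electricity price
HOURS_PER_YEAR = 24 * 365

# At equal per-card power, 6X throughput means the same aggregate
# workload needs roughly 1/6 as many cards, and thus 1/6 the power.
baseline_kw = BASELINE_CARDS * WATTS_PER_CARD / 1000
efficient_kw = baseline_kw / PERF_ADVANTAGE
saved_kwh_per_year = (baseline_kw - efficient_kw) * PUE * HOURS_PER_YEAR

print(f"Power saved at the plug: {baseline_kw - efficient_kw:,.0f} kW")
print(f"Estimated annual savings: ${saved_kwh_per_year * USD_PER_KWH:,.0f}")
```

Under these illustrative assumptions the sketch yields roughly $8 million per year; the figure scales linearly with fleet size and electricity price, so a hyperscale deployment or a higher-power baseline lands comfortably in the tens of millions.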

This announcement has several important implications:

  1. HPE should see growing customer interest in dedicated AI inference processing at the edge.
  2. HPE concluded that the QTI platform offers significant performance and energy efficiency benefits.
  3. While the extra performance is welcome on Edgeline servers, the power efficiency (that 6X again!) will be critical for larger-scale deployments should HPE add QTI support to the ProLiant server family. We would be surprised if HPE doesn’t support the QTI Cloud AI 100 on that server line later this year.
  4. Where HPE goes, others will follow.

Disclosures: This article represents the opinion of the author and should not be construed as advice to buy from or invest in the companies mentioned. Cambrian AI Research is fortunate to have many, if not most, semiconductor companies as our clients, including Blaize, Cerebras, D-Matrix, Esperanto, Graphcore, GML, IBM, Intel, Mythic, NVIDIA, Qualcomm Technologies, SiFive, Synopsys and Tenstorrent. We have no investment positions in any of the companies mentioned in this article and do not plan to initiate any in the near future. For more information, visit our website at https://cambrian-AI.com.
