Nvidia’s flagship AI chip reportedly 4.5x faster than previous champion

Press photo of Nvidia's H100 Tensor Core GPU.

Nvidia announced yesterday that its upcoming H100 "Hopper" Tensor Core GPU set new performance records in its debut on the industry-standard MLPerf benchmark, delivering results up to 4.5 times faster than the A100, currently Nvidia's fastest production AI chip.

The MLPerf benchmark (technically "MLPerf™ Inference 2.1") measures "inference" workloads: how well a chip can apply a previously trained machine learning model to new data. MLCommons, an industry consortium, developed the MLPerf benchmarks in 2018 to provide a standardized metric for conveying machine learning performance to potential customers.
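The training-versus-inference split the benchmark captures can be sketched in a few lines of Python. This is a toy 1-D linear model for illustration only, not anything MLPerf actually runs: training fits parameters once, while inference simply applies the already-fitted parameters to fresh inputs.

```python
def train(samples):
    """Training: fit a trivial linear model y = w * x by least squares."""
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return num / den  # the learned weight w

def infer(w, x):
    """Inference: apply the already-trained weight to a new input."""
    return w * x

# Training happens once, offline, on labeled data.
w = train([(1, 2.0), (2, 4.1), (3, 5.9)])

# Inference then runs repeatedly on new inputs; MLPerf Inference
# measures how fast hardware can do this step at scale.
prediction = infer(w, 10)
```

In a real deployment the "model" is something like BERT or a GPT-style network with billions of parameters, but the shape of the workload is the same: fixed weights, a stream of new inputs, and throughput/latency as the figures of merit.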

Nvidia's H100 benchmark results against the A100, in fancy bar chart form.


In particular, the H100 did well on the BERT-Large benchmark, which measures natural language processing performance using the BERT model developed by Google. Nvidia credits this particular result to the Hopper architecture's Transformer Engine, which specifically accelerates the training of transformer models. This means the H100 could accelerate future natural language models similar to OpenAI's GPT-3, which can compose written works in many different styles and hold conversations.

Nvidia positions the H100 as a high-end data center GPU designed for AI and supercomputing applications such as image recognition, large language models, image synthesis, and more. Analysts expect it to replace the A100 as Nvidia's flagship data center GPU, but it is still a work in progress. US government restrictions imposed last week on chip exports to China raised concerns that Nvidia might not be able to deliver the H100 by the end of 2022, since part of its development is taking place there.

Nvidia clarified in a second filing with the Securities and Exchange Commission last week that the US government will allow continued development of the H100 in China, so the project appears to be back on track. According to Nvidia, the H100 will be available "later this year." If the success of the previous-generation A100 chip is any indication, the H100 could power many groundbreaking AI applications in the years to come.


