Nvidia, Intel and ARM Publish New FP8 AI Interchange Format
September 19, 2022
Nvidia, Intel and ARM have published a draft specification for a common AI interchange format aimed at faster and more efficient system development. The proposed “8-bit floating point” standard, known as FP8, could accelerate both training and inference by reducing memory usage and conserving interconnect bandwidth. The lower-precision number format is a key driver of that efficiency. Transformer networks in particular benefit from 8-bit floating-point precision, and a common interchange format should ease interoperability across hardware and software platforms.
This FP8 specification has two variants, E5M2 (five exponent bits, two mantissa bits) and E4M3 (four exponent bits, three mantissa bits), explains Nvidia director of AI training product marketing Shar Narasimhan in a blog entry that notes the new spec is natively implemented in the Nvidia Hopper H100 GPU architecture. It is also native to Intel’s Gaudi2 AI training chipset.
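To make the two layouts concrete, here is a minimal decoding sketch based on the bit layouts described in the white paper: E5M2 uses an exponent bias of 15 and keeps IEEE-754-style infinities and NaNs, while E4M3 uses a bias of 7 and gives up infinities so that only the all-ones bit pattern is NaN, extending the maximum representable magnitude to 448. The helper names here are illustrative, not from the spec.

```python
# Illustrative decoders for the two proposed FP8 variants.
# Layouts: E5M2 = 1 sign / 5 exponent / 2 mantissa bits (bias 15),
#          E4M3 = 1 sign / 4 exponent / 3 mantissa bits (bias 7).

def decode_fp8(byte: int, exp_bits: int, man_bits: int, bias: int) -> float:
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    if exp == 0:
        # Subnormal: no implicit leading 1, fixed exponent of (1 - bias).
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    # Normal: implicit leading 1.
    return sign * (1.0 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

def decode_e5m2(byte: int) -> float:
    # E5M2 follows IEEE-754 conventions: an all-ones exponent encodes inf/NaN.
    if ((byte >> 2) & 0b11111) == 0b11111:
        if byte & 0b11:
            return float("nan")
        return float("-inf") if byte >> 7 else float("inf")
    return decode_fp8(byte, exp_bits=5, man_bits=2, bias=15)

def decode_e4m3(byte: int) -> float:
    # E4M3 trades infinities for range: only S.1111.111 is NaN,
    # so S.1111.110 becomes the largest normal value.
    if (byte & 0x7F) == 0x7F:
        return float("nan")
    return decode_fp8(byte, exp_bits=4, man_bits=3, bias=7)

print(decode_e4m3(0b0_1111_110))  # 448.0, the largest E4M3 magnitude
print(decode_e5m2(0b0_11110_11))  # 57344.0, the largest E5M2 magnitude
```

The two variants trade range for precision: E5M2 covers a much wider dynamic range, which suits gradients, while E4M3’s extra mantissa bit gives finer precision for weights and activations.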
“FP8 is a natural progression for accelerating deep learning training and inference beyond the 16-bit formats common in modern processors,” according to a white paper detailing the specifications for the new FP8 format, which the group will submit to the IEEE. The partners say they will make the specification available “in an open, license-free format to encourage broad industry adoption.”
While the industry has moved from 32-bit precision to 16-bit, and now to 8-bit formats, TechCrunch points out that the more efficient lower-precision formats “sacrifice some accuracy to achieve those gains; after all, 16 bits is less to work with than 32,” but notes that “many in the industry — including Intel, ARM and Nvidia — are coalescing around FP8 (8 bits) as the sweet spot.”
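That accuracy cost shows up directly in the coarser spacing of representable values. The following sketch (the author’s illustration, not code from any of the vendors) rounds a value to the nearest E4M3 number, ignoring NaN handling for brevity:

```python
import math

def round_to_e4m3(x: float) -> float:
    # Hypothetical helper: round to the nearest E4M3 value (bias 7,
    # 3 mantissa bits), saturating at the 448 maximum.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(abs(x))   # abs(x) = m * 2**e with 0.5 <= m < 1
    e -= 1                      # rewrite as 1.xxx * 2**e
    e = max(e, -6)              # clamp into the subnormal range
    step = 2.0 ** (e - 3)       # spacing of representable values here
    y = round(abs(x) / step) * step
    return math.copysign(min(y, 448.0), x)

x = 0.1234567
print(round_to_e4m3(x))  # 0.125 -> absolute error ~1.5e-3
# FP16, with 10 mantissa bits, represents the same value with an
# error of roughly 2e-5; FP32 is closer still.
```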
Narasimhan says FP8 shows “comparable accuracy to 16-bit precisions” across a wide variety of use cases, architectures and networks. Results on transformers, computer vision models and GANs — generative adversarial networks, in which two neural networks compete with each other to become more accurate in their predictions — “all show that FP8 training accuracy is similar to 16-bit precisions while delivering significant speedups,” Narasimhan writes.
While FP8 is native to chipsets from Nvidia and Intel, the common format “would also benefit rivals like SambaNova, AMD, Groq, IBM, Graphcore and Cerebras — all of which have experimented with or adopted some form of FP8 for system development,” TechCrunch reports.