Big Tech Forms a Group to Develop AI Connectivity Standard

Big Tech players have joined forces to develop a new industry standard to advance high-speed, low-latency communication among data centers by coordinating component development. AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta Platforms and Microsoft are backing the Ultra Accelerator Link (UALink) promoter group. The group plans to define and establish an open industry standard that will enable AI accelerators to communicate more effectively. UALink aims to create a pathway for system OEMs, IT professionals and system integrators to connect and scale their AI-connected data centers.

The group has formed the UALink Consortium and expects it to be incorporated in Q3. “A higher-bandwidth updated spec, UALink 1.1, is set to arrive in Q4,” reports TechCrunch.

The group “is proposing a new industry standard to connect the AI accelerator chips found within a growing number of servers,” writes TechCrunch, noting that “broadly defined, AI accelerators are chips ranging from GPUs to custom-designed solutions to speed up the training, fine-tuning and running of AI models.”

“The industry needs an open standard that can be moved forward very quickly, in an open [format] that allows multiple companies to add value to the overall ecosystem,” Forrest Norrod, EVP and GM of AMD’s Data Center Solutions Business Group, said at a press briefing last week, adding that the standard must allow “innovation to proceed at a rapid clip unfettered by any single company.”

“The 1.0 specification will enable the connection of up to 1,024 accelerators within an AI computing pod and allow for direct loads and stores between the memory attached to accelerators, such as GPUs, in the pod,” the group explained in a news announcement. The 1.0 specification is expected to be made available in Q3 to companies that join the UALink Consortium.

“As the demand for AI compute grows, it is critical to have a robust, low-latency and efficient scale-up network that can easily add computing resources to a single instance,” VentureBeat writes, adding that, according to the group, “creating an open, industry standard specification for scale-up capabilities will help to establish an open and high-performance environment for AI workloads, providing the highest performance possible.”

That the world’s largest AI chipmaker, Nvidia, is not a founding participant did not escape notice. It could be the UALink group’s attempt to rein in the runaway giant, or perhaps Nvidia is secure in the fact that the many members using its chips will necessarily represent its interests.
