VMware and Nvidia Are Bringing Generative AI to Enterprises

VMware and Nvidia have joined forces on “VMware Private AI Foundation with Nvidia,” a fully integrated solution designed to bring generative AI training and deployment to enterprise clients running on VMware’s hybrid cloud infrastructure. The full-stack product will provide the software, compute power and tooling needed to fine-tune large language models using proprietary data. “Together with Nvidia, we’ll empower enterprises to run their generative AI workloads adjacent to their data with confidence while addressing their corporate data privacy, security and control concerns,” said VMware CEO Raghu Raghuram.

“Customer data is everywhere — in their data centers, at the edge, and in their clouds,” Raghuram said in a joint press release with Nvidia. “Generative AI and multi-cloud are the perfect match.”

The collaboration aims to ready “the hundreds of thousands of enterprises that run on VMware’s cloud infrastructure for the era of generative AI,” helping them run applications such as intelligent chatbots and AI assistants, as well as search and summarization tools.

Generative AI is estimated to contribute as much as $4.4 trillion in annual economic value globally across 63 use cases, according to a June McKinsey Digital report on the potential of generative AI.

“However, in this race, many teams are working in fragmented environments and struggling to maintain the best possible standards for the security of their data and the performance of the gen AI applications they power,” reports VentureBeat.

The fully integrated VMware and Nvidia suite is “tackling this challenge by giving enterprises running VMware’s cloud infrastructure a one-stop shop to take any open model of their choice,” notes VB. Those models can then be accelerated by the combination of the Nvidia NeMo framework and VMware’s virtualized platform.

The support ecosystem for the VMware Private AI Foundation with Nvidia includes Dell Technologies, Hewlett Packard Enterprise and Lenovo — three of the first to provide hardware optimized for enterprise LLM customization and inference workloads with Nvidia L40S GPUs, Nvidia BlueField-3 DPUs and Nvidia ConnectX-7 SmartNICs.

Related:
Nvidia’s Q2 Earnings Prove It’s the Big Winner in the Generative AI Boom, TechCrunch, 8/23/23
Nvidia to Triple Production of $40,000 Chips as It Races to Meet AI Demand, Business Insider, 8/23/23
How Nvidia Built a Competitive Moat Around AI Chips, The New York Times, 8/21/23
Nvidia’s New DLSS 3.5 Works on All RTX GPUs to Improve the Quality of Ray Tracing, The Verge, 8/22/23
