Nvidia Debuts New Products to Accelerate Adoption of GenAI

After 50 years of SIGGRAPH, the conference has come full circle, from high-tech for PhDs to AI for everyone. That was Nvidia founder and CEO Jensen Huang’s message in back-to-back keynote sessions, including a Q&A with Meta CEO Mark Zuckerberg. Huang touted Universal Scene Description (OpenUSD), discussing developments aiming to speed adoption of the universal 3D data interchange framework for use in everything from robotics to the creation of “highly accurate virtual worlds for the next evolution of AI.” As Zuckerberg’s interlocutor, he prompted the Facebook founder to share a vision of AI’s personalization of social media.

“Everybody will have an AI assistant,” Huang said from the stage at Denver’s Colorado Convention Center in the first session. “Every single company, every single job within the company, will have AI assistance.”

To illustrate the potential of generative AI as a collaborative partner, Huang presented a powerful WPP-produced clip in which a child’s voice commands, “build me a tree in an empty field … build me hundreds of them in all directions,” while AI visually executes the spoken prompts.

“Coca-Cola and marketing giant WPP are among the earliest adopters” of Nvidia’s “AI art tools,” according to Axios.

Another video showed how the Nvidia Omniverse platform, used in entertainment visualization and industrial design, will help build the next wave of AI, which involves “physical AI” (robots).

USD connectors to robotics and developer tools can allow users to stream massive, Nvidia RTX ray-traced datasets to Apple Vision Pro headsets, Huang said, touting the company’s Omniverse systems built using the OpenUSD standard. Huang predicts Omniverse will be a foundational element of using AI to create virtual worlds and “assets that the world’s largest brands can use.”

Huang announced new NIM microservices that will enable AI models to generate OpenUSD language “to answer user queries, generate OpenUSD Python code, apply materials to 3D objects, and understand 3D space and physics to help accelerate digital twin development.”
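For context, OpenUSD scenes can be expressed in a human-readable text format (.usda) in addition to binary encodings; a minimal sketch of the kind of scene description such code-generation tools might emit could look like the following (the prim names and material values here are illustrative, not taken from Nvidia’s demos):

```
#usda 1.0
(
    defaultPrim = "World"
    metersPerUnit = 1
    upAxis = "Y"
)

def Xform "World"
{
    # A simple sphere prim with a material bound to it
    def Sphere "Ball"
    {
        double radius = 0.5
        rel material:binding = </World/Materials/RedMat>
    }

    def Scope "Materials"
    {
        def Material "RedMat"
        {
            token outputs:surface.connect = </World/Materials/RedMat/Shader.outputs:surface>

            def Shader "Shader"
            {
                uniform token info:id = "UsdPreviewSurface"
                color3f inputs:diffuseColor = (0.8, 0.1, 0.1)
                token outputs:surface
            }
        }
    }
}
```

Applying a material to a 3D object, as the NIM announcement describes, amounts to emitting declarations like the `material:binding` relationship above.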

“Nvidia NIM is a comprehensive solution for deploying generative AI simplified for developers, but built for applications at scale,” Kari Briski, Nvidia’s VP of product management for AI and HPC software, told Bloomberg.

Discussing Nvidia’s role in computer graphics at some length, Huang seemed most proud of how the company has branched out from its gaming roots to develop tools like the Metropolis reference workflows for building AI visual agents used to create “intelligent, immersive environments” for everything from manufacturing to medicine — but with implications for immersive entertainment worlds, too.

Huang also discussed a new cloud-based inference-as-a-service offering, delivered through Hugging Face, that will make popular AI models accessible to millions of developers. Available from Hugging Face, the inference service “will enable developers to rapidly deploy leading large language models such as the Llama 3 family and Mistral AI models,” optimized with NIM running on the DGX Cloud, the company announced.

That Huang managed to pack day one of SIGGRAPH with fresh information from Nvidia was no small feat, coming on the heels of the company’s GTC conference for developers, held in March. For those who missed that event, GTC 2024 Highlights are available in a 37-page PDF and video replays of key sessions.
