Nvidia Unveils New Tools for AI, the Metaverse at SIGGRAPH
August 15, 2022
Nvidia founder and CEO Jensen Huang shared his vision for a computer graphics industry transformed by AI, the metaverse and digital humans. “The combination of AI and computer graphics will power the metaverse, the next evolution of the Internet,” Huang told attendees at SIGGRAPH 2022 in Vancouver. To support this transformation, Nvidia unveiled the Avatar Cloud Engine (ACE) and discussed plans to build out the Universal Scene Description (USD) industry standard, which Huang called “the language of the metaverse.” New extensions for Omniverse and graphics workflow optimizations using machine learning were also part of the mix.
Huang described the metaverse as a series of connected virtual worlds and digital twins where people will work and play using “one of the most widely used kinds of robots,” digital human avatars. He predicted “there will be billions of avatars” designed, trained and operated in Omniverse.
“With Omniverse ACE, developers can build, configure and deploy their avatar application across any engine in any public or private cloud,” Nvidia senior director of graphics and AI Simon Yuen explained, noting “we want to democratize building interactive avatars for every platform.” ACE will be available in early 2023.
Huang showcased the latest iteration of Audio2Face, an AI model that can create facial animation directly from voices, and teased a future version that “will create avatars from a single photo, applying textures automatically and generating animation-ready 3D meshes,” with “high-fidelity simulations of muscle movements an AI can learn from watching a video — even lifelike hair that responds as expected to virtual grooming,” according to Nvidia’s SIGGRAPH blog.
At SIGGRAPH, Nvidia also announced a new pair of software development kits (SDKs) “that apply the power of neural graphics to the creation and presentation of animation and 3D objects,” VentureBeat writes, explaining “Kaolin WISP is an extension to an existing PyTorch machine learning library designed to enable fast 3D deep learning” using neural fields, “a subset of neural graphics that uses neural techniques in 3D image representation and content creation.”
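The idea behind a neural field is compact: a small neural network maps a spatial coordinate directly to a quantity such as density or color, so the network's weights *are* the 3D representation. A minimal sketch of that concept, using plain NumPy with random untrained weights purely for illustration (the function and layer names below are hypothetical, not Kaolin WISP's actual API):

```python
import numpy as np

# Toy "neural field": an MLP mapping a 3D coordinate to a scalar value
# (e.g. density). In practice, libraries such as Kaolin WISP train these
# weights against scene data; here they are random, only to show the
# coordinate-in, value-out representation.

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a fully connected net with the given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def neural_field(points, params):
    """Evaluate the field at an (N, 3) array of coordinates -> (N,) values."""
    h = points
    for i, (w, b) in enumerate(params):
        h = h @ w + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers only
    return h[:, 0]

params = init_mlp([3, 32, 32, 1])            # 3D coordinate in, scalar out
query = rng.uniform(-1.0, 1.0, size=(4, 3))  # four random query points
values = neural_field(query, params)
print(values.shape)  # one scalar per queried point
```

Training such a network to reproduce a captured scene is what makes neural fields a content-creation tool rather than just a compression trick.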
“While Kaolin WISP is about speed, NeuralVDB is a project designed to help compact 3D images,” VentureBeat reports, detailing Nvidia’s NeuralVDB library, “a next generation of the OpenVDB open-source library” for sparse volume data.
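The compaction opportunity comes from sparsity: in a typical smoke, cloud or fire volume, only a thin shell of voxels is actually occupied. OpenVDB exploits this with a hierarchical tree structure (and NeuralVDB adds neural compression on top); the dictionary below is a deliberately naive stand-in for that idea, not the actual library API:

```python
import numpy as np

# Dense vs. sparse storage of a volume: a 64^3 grid has 262,144 voxels,
# but a thin spherical shell occupies only a small fraction of them.
res = 64
x, y, z = np.meshgrid(*([np.arange(res)] * 3), indexing="ij")
r = np.sqrt((x - res / 2) ** 2 + (y - res / 2) ** 2 + (z - res / 2) ** 2)
shell = np.abs(r - res / 4) < 1.5  # active voxels: a thin spherical shell

# Naive sparse representation: store only the active voxels.
sparse = {tuple(ijk): 1.0 for ijk in np.argwhere(shell)}

print(f"active voxels: {len(sparse)} of {res**3} "
      f"({100 * len(sparse) / res**3:.1f}%)")
```

Storing only the active voxels here cuts the footprint by well over an order of magnitude; production formats like OpenVDB achieve this with far better locality and query speed than a hash map.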
Nvidia VP of Omniverse and simulation technology Rev Lebaredian emphasized the company’s interest in further developing the open-source Universal Scene Description technology, the framework for Nvidia’s Omniverse platform.
The company has been extending USD “and making it viable as the core pillar and foundation of the metaverse, so that it will be analogous to the metaverse just like HTML is to the web,” Lebaredian said, according to VentureBeat.
Nvidia will release a compatibility testing and certification suite for USD. “Our next milestones aim to make USD performant for real-time, large-scale virtual worlds and industrial digital twins,” Lebaredian said on the company blog, noting Nvidia plans to help build out support in USD for international character sets, geospatial coordinates and real-time streaming of IoT data.
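The HTML analogy is apt because USD scenes can be authored as human-readable text. A minimal hand-written sketch of USD’s `.usda` syntax, describing a single sphere in a scene (prim and attribute names here follow USD’s documented schema; the scene itself is invented for illustration):

```usda
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2.0
        double3 xformOp:translate = (0, 2, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Like HTML documents, USD files compose: layers authored by different tools and teams can be referenced and overridden non-destructively, which is central to its appeal as a shared scene-interchange standard.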