Algorithms Use NeRFs to Instantly Convert 2D Images to 3D
February 9, 2022
A new machine learning technique that turns 2D images into 3D is stirring a lot of excitement in the worlds of games, computer graphics and AI. The approach relies on “neural rendering,” which uses a neural network to generate 3D imagery from multiple 2D snapshots. A merger of concepts from computer graphics and artificial intelligence, neural rendering gained steam in 2020, when researchers at Google and UC Berkeley demonstrated that a neural network could photorealistically capture a scene in 3D after ingesting several 2D snapshots.
Using the 2D images, the algorithm “exploits the way light travels through the air and performs computations that calculate the density and color of points in 3D space,” making it possible to view objects from any angle. The learned volumetric representation that results is called a Neural Radiance Field, or NeRF. Rather than a fixed grid of voxels, it is a neural network that can be queried for the density and color at any point in the scene.
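The core computation is easy to sketch. A trained network maps a 3D point (and, in the full method, a viewing direction) to a density and a color; a pixel’s color is then the density-weighted blend of samples taken along the camera ray through that pixel. Below is a minimal NumPy illustration of that volume-rendering step. The `field` function is a hypothetical stand-in for the trained network (a hand-coded soft sphere here), not the actual model from the research:

```python
import numpy as np

def field(points):
    """Stand-in for the trained network (an MLP in the original NeRF paper).
    Given an (N, 3) array of points, return density sigma (N,) and color
    rgb (N, 3). Here: a hypothetical soft sphere of radius 0.5 at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    sigma = 20.0 * np.clip(0.5 - dist, 0.0, None)     # dense inside the sphere
    rgb = np.tile([0.8, 0.3, 0.2], (len(points), 1))  # constant reddish color
    return sigma, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=128):
    """Volume rendering: sample points along a camera ray, query the field
    for density and color, and alpha-composite the samples front to back."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    sigma, rgb = field(points)
    delta = (far - near) / (n_samples - 1)            # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)              # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # visibility
    weights = alpha * trans
    color = (weights[:, None] * rgb).sum(axis=0)      # expected pixel color
    return color, weights

# One ray looking down the z-axis at the sphere:
color, _ = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(color)
```

Repeating this for one ray per pixel, from any camera position, is what lets a trained NeRF render the scene from angles never photographed.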
Nvidia drew on the approach to create its Omniverse plugin GANverse3D, which uses a single 2D image to output a manipulable 3D mesh model that can then “be used with a 3D neural renderer that gives developers control to customize objects and swap out backgrounds.”
Meta Platforms “has developed an approach similar to NeRF that could be used to flesh out scenes in Mark Zuckerberg’s much-vaunted Metaverse,” writes Wired. Georgia Tech professor Frank Dellaert tells Wired the technique turns what would have required hours of painstaking labor in the past into a task achievable in minutes.
“It is ultra-hot, there is a huge buzz,” UC Berkeley roboticist Ken Goldberg told Wired, which says he is “using the technology to improve the ability of AI-enhanced robots to grasp unfamiliar shapes.”
Goldberg and associates used NeRF to help their robots’ learning systems handle transparency, which is “normally a challenge because of the way these objects reflect light, by letting them infer the shape of an object based on a video image,” Wired reports. “Andrej Karpathy, director of AI at Tesla, said the company was using the technology to generate 3D scenes needed to train its self-driving algorithms to recognize and react to more on-road scenarios,” like reflective wet surfaces.
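The robotics and driving examples hint at why a learned radiance field is useful beyond imagery: once trained, the same field yields geometry. As a rough, hypothetical illustration (not the cited teams’ actual method), the compositing weights from the sketch above can be reused to estimate expected depth along each ray; a grid of such rays gives a depth map describing the surfaces a gripper planner or driving stack could reason about:

```python
def render_depth(origin, direction, near=0.0, far=4.0, n_samples=128):
    """Expected depth along a ray: reuse the compositing weights from
    render_ray above, but average sample distances instead of colors."""
    t = np.linspace(near, far, n_samples)
    _, weights = render_ray(origin, direction, near, far, n_samples)
    return (weights * t).sum() / max(weights.sum(), 1e-8)
```

Because the depth comes from the learned density rather than from how a surface reflects light, a representation like this can recover shape even for transparent objects that defeat conventional depth cameras.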
The technology, Goldberg adds, has “hundreds of applications” in fields ranging from entertainment to architecture, Wired writes. Some experts feel the approach may ultimately help machines understand and analyze the world in a more human way.