Researchers from Meta Platforms’ Reality Labs and the University of Texas at Austin have developed audio tools that Meta CEO Mark Zuckerberg says will deliver a more realistic metaverse experience. Among the new tools is a visual acoustic matching model called AViTAR, which adapts any audio clip to a chosen environment using only a photograph, rather than detailed geometry, to create the simulation. Also in the pipeline is the Visually-Informed Dereverberation model (VIDA), which will “remove reverberation,” isolating source sounds for a fully immersive effect.