Meta’s AudioCraft Turns Words into Music with Generative AI

Meta Platforms is releasing AudioCraft, a generative AI framework that creates “high-quality,” “realistic” audio and music from text prompts. AudioCraft consists of three models, all of which Meta is open-sourcing: MusicGen, AudioGen and EnCodec. Released in June, MusicGen was trained on Meta-owned and licensed music and generates music from text prompts, while AudioGen, trained on public-domain samples, generates sound effects (like honking horns and barking dogs) from text prompts. The EnCodec decoder allows “higher quality music generation with fewer artifacts,” according to Meta.
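
Since the models are open-sourced, readers can try text-to-music generation directly. The sketch below is a minimal example assuming the pip-installable audiocraft package and its documented MusicGen interface; the checkpoint name, clip duration and prompts are illustrative choices, not Meta's recommendations.

```python
# Minimal text-to-music sketch, assuming the open-source `audiocraft`
# package (pip install audiocraft) and its documented MusicGen interface.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained checkpoint; 'facebook/musicgen-small' is one of the
# published model sizes.
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # seconds of audio per clip

# Each text prompt yields one generated waveform (prompts are illustrative).
descriptions = ['upbeat acoustic folk', 'slow ambient synth pad']
wav = model.generate(descriptions)

for idx, clip in enumerate(wav):
    # Write <idx>.wav at the model's sample rate with loudness normalization.
    audio_write(f'{idx}', clip.cpu(), model.sample_rate, strategy="loudness")
```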

Meta’s Open-Source ImageBind Works Across Six Modalities

Meta Platforms has built and is open-sourcing ImageBind, an artificial intelligence model that combines six modalities: audio, visual, text, thermal, movement and depth data. Currently a research project, it suggests a future in which AI models generate multisensory content. “ImageBind equips machines with a holistic understanding that connects objects in a photo with how they will sound, their 3D shape, how warm or cold they are, and how they move,” Meta says. In other words, ImageBind’s approach more closely approximates human thinking by training on the relationships between things rather than ingesting massive datasets so as to absorb every possibility.
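
To make “binding” concrete: ImageBind embeds every modality into one shared vector space, so similarity between, say, an image and a text description reduces to a dot product. The sketch below follows the usage documented in the open-source imagebind repo; the file paths and text labels are placeholders.

```python
# Cross-modal embedding sketch, assuming the open-source `imagebind`
# package and its documented interface; file paths are placeholders.
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the pretrained ImageBind model (downloads weights on first use).
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

# Two of the six supported modalities, preprocessed into model inputs.
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog", "a car"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg", "car.jpg"], device),
}

with torch.no_grad():
    embeddings = model(inputs)

# Because all modalities share one embedding space, a plain dot product
# scores how well each image matches each text description.
scores = torch.softmax(
    embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1
)
print(scores)
```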