Lifelike AI Avatars to Get New Features with Synthesia Update

Synthesia, which uses AI to create business avatars for use in content such as training, presentation and customer service videos, has announced a major platform update. “Coming soon” with Synthesia 2.0 are full-body avatars with hands capable of a wide range of motions. Users can animate motion with skeletal sequences, onto which a persona selected from the catalog is then automatically mapped. Starting next month, the Nvidia-backed UK company will offer the ability to incorporate brand identity — including typography, colors and logos — into templated videos. A new translation tool automatically applies updates to all languages.

ElevenLabs Launches an AI Tool for Generating Sound Effects

ElevenLabs has launched its text-to-sound generator Sound Effects for all users, available now at the company’s website. The new AI tool can create audio effects, short instrumental tracks, soundscapes and even character voices. Sound Effects “has been designed to help creators — including film and television studios, video game developers, and social media content creators — generate rich and immersive soundscapes quickly, affordably and at scale,” according to the startup, which developed the tool in partnership with Shutterstock, using its library of licensed audio tracks.
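The announcement covers the web tool; for scripted, at-scale use, ElevenLabs also exposes sound generation through its API. Below is a minimal sketch assuming the ElevenLabs Python SDK and its text_to_sound_effects endpoint; the prompt text, duration and output filename are illustrative placeholders, not details from the announcement.

    # Minimal sketch: generate a sound effect from a text prompt.
    # Assumes the ElevenLabs Python SDK (pip install elevenlabs) and a valid API key;
    # the prompt, duration and filename below are illustrative, not from the article.
    from elevenlabs.client import ElevenLabs

    client = ElevenLabs(api_key="YOUR_API_KEY")

    audio = client.text_to_sound_effects.convert(
        text="Rain on a tin roof with distant thunder",  # description of the sound
        duration_seconds=10,   # optional: target clip length
        prompt_influence=0.3,  # optional: how literally to follow the prompt (0-1)
    )

    # The SDK returns the audio as an iterator of byte chunks; write it to disk.
    with open("soundscape.mp3", "wb") as f:
        for chunk in audio:
            f.write(chunk)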

Synthesia Express-1 Model Gives ‘Expressive Avatars’ Emotion

London-based AI startup Synthesia, which creates avatars for enterprise-level generative video presentations, has added “Expressive Avatars” to its feature kit. Powered by Synthesia’s new Express-1 model, these fourth-generation avatars have achieved a new benchmark in realism by using contextual expressions that approximate human emotion, the company says. Express-1 has been trained “to understand the intricate relationship between what we say and how we say it,” allowing Expressive Avatars to perform a script with the correct vocal tone, body language and lip movement, “like a real actor,” according to Synthesia.

Microsoft’s VASA-1 Can Generate Talking Faces in Real Time

Microsoft has developed VASA, a framework for generating lifelike virtual characters with vocal capabilities including speaking and singing. The debut model, VASA-1, can perform the feat in real time from a single static image and a vocalization clip. The research demo showcases realistic audio-driven faces that can be fine-tuned to look in different directions or change expression in video clips of up to one minute at 512 x 512 pixels and up to 40fps “with negligible starting latency,” according to Microsoft, which says “it paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.”

Midjourney Creates a Feature to Advance Image Consistency

Artificial intelligence imaging service Midjourney has been embraced by storytellers, who have been clamoring for a feature that regenerates characters consistently across new requests. Now Midjourney is delivering that functionality with the addition of the new “--cref” tag (short for Character Reference), available to those using Midjourney v6 on the Discord server. Users can achieve the effect by adding the tag to the end of a text prompt, followed by the URL of a reference image that subsequent generations should match. Midjourney will then attempt to replicate the particulars of the character’s face, body and clothing.
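For example, a v6 prompt on Discord might look like the line below, where the URL is a hypothetical placeholder standing in for an image of the character to be reused:

    /imagine prompt: a detective walking through a rainy city at night --cref https://example.com/my-character.png --v 6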

Alibaba’s EMO Can Generate Performance Video from Images

Alibaba is touting a new artificial intelligence system that can animate portraits, making people sing and talk in realistic fashion. Researchers at the Alibaba Group’s Institute for Intelligent Computing developed the generative video framework, calling it EMO, short for Emote Portrait Alive. Input a single reference image along with “vocal audio,” as in talking or singing, and “our method can generate vocal avatar videos with expressive facial expressions and various head poses,” the researchers say, adding that EMO can generate videos of any duration, depending on the length of the input audio.

Pika Taps ElevenLabs Audio App to Add Lip Sync to AI Video

Pika Labs is releasing Lip Sync, a feature that lets its paid subscribers use the ElevenLabs app to add AI-generated voices and dialogue to Pika-generated videos, with the characters’ lips moving in sync with the speech. The launch comes on the heels of ElevenLabs’ demo of a text-to-sound app unveiled using clips generated by OpenAI’s text-to-video artificial intelligence platform Sora. Pika Lip Sync supports both uploaded audio files and text-to-audio AI, allowing users to type or record dialogue, or use pre-existing sound files, then apply AI to change the voicing style.

CES: Creators Talk About Integrating AI into Their Media Work

Nuconomi CTO Greg Carron, tech and business journalist Molly Wood, and CBS Sports Radio host JR Jackson spoke with Consumer Technology Association Senior VP of Government Affairs Michael Petricone about how they’ve integrated artificial intelligence into their artistic expression. “The synergy of technology and creativity is creating a profound transformation,” explained Petricone. Reporter and climate-change investor Wood noted that this synergy led to the launch of Molly Wood Media, where she uses AI to streamline her process. “I used AI to make myself a cyborg and do everything I want to do as a human being,” she said. “Turns out I don’t need a producer.”

Captions Debuts AI Lipdub with Translation and Gen Z Slang

Captions, which leverages AI to help its customers produce “studio quality videos directly from their mobile devices,” has launched a new app called Lipdub that automatically translates and dubs content into 28 languages. The free download lets users dub anyone “and experience familiar voices and faces in a suite of new languages.” Lipdub’s translations not only duplicate what the company says is “the subject’s exact voice,” but also sync lip movements to match. The app also incorporates dialects and idioms, with options like Gen Z and Texas slang.