Synthesia Express-1 Model Gives ‘Expressive Avatars’ Emotion

London-based AI startup Synthesia, which creates avatars for enterprise-level generative video presentations, has added “Expressive Avatars” to its feature kit. Powered by Synthesia’s new Express-1 model, these fourth-generation avatars have achieved a new benchmark in realism by using contextual expressions that approximate human emotion, the company says. Express-1 has been trained “to understand the intricate relationship between what we say and how we say it,” allowing Expressive Avatars to perform a script with the correct vocal tone, body language and lip movement, “like a real actor,” according to Synthesia.

Microsoft’s VASA-1 Can Generate Talking Faces in Real Time

Microsoft has developed VASA, a framework for generating lifelike virtual characters with vocal capabilities including speaking and singing. The debut model, VASA-1, can perform the feat in real time from a single static image and a vocalization clip. The research demo showcases realistic audio-enhanced faces that can be fine-tuned to look in different directions or change expression in video clips of up to one minute at 512 x 512 pixels and up to 40 fps “with negligible starting latency,” according to Microsoft, which says “it paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.”