London-based AI startup Synthesia, which creates avatars for enterprise-level generative video presentations, has added “Expressive Avatars” to its feature set. Powered by Synthesia’s new Express-1 model, these fourth-generation avatars have achieved a new benchmark in realism by using contextual expressions that approximate human emotion, the company says. Express-1 has been trained “to understand the intricate relationship between what we say and how we say it,” allowing Expressive Avatars to perform a script with the correct vocal tone, body language and lip movement, “like a real actor,” according to Synthesia. Continue reading Synthesia Express-1 Model Gives ‘Expressive Avatars’ Emotion
By ETCentric Staff, April 22, 2024
Microsoft has developed VASA, a framework for generating lifelike virtual characters with vocal capabilities including speaking and singing. The debut model, VASA-1, can perform the feat in real time from a single static image and an audio clip of speech or singing. The research demo showcases realistic audio-driven faces that can be fine-tuned to look in different directions or change expression in video clips of up to one minute at 512 x 512 pixels and up to 40fps “with negligible starting latency,” according to Microsoft, which says “it paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.” Continue reading Microsoft’s VASA-1 Can Generate Talking Faces in Real Time
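Microsoft has not released VASA-1, so the sketch below is only a hypothetical Python interface mirroring the inputs and outputs the demo describes: one portrait image plus one audio clip in, a short 512 x 512 talking-head clip out, with optional gaze and expression controls. Every name, parameter and default here is illustrative and is not part of any Microsoft API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical request/response types mirroring the capabilities described
# for VASA-1: one still portrait plus one audio clip in, a short talking-head
# video out, with optional control over gaze and expression. None of these
# names come from Microsoft; they are illustrative only.

@dataclass
class TalkingFaceRequest:
    portrait_path: str               # single static image of the subject
    audio_path: str                  # speech or singing clip to lip-sync
    resolution: int = 512            # demo output is 512 x 512 pixels
    fps: int = 40                    # demo reaches up to 40 frames per second
    max_seconds: int = 60            # clips of up to one minute
    gaze_direction: str = "camera"   # e.g. "camera", "left", "right"
    expression_offset: float = 0.0   # nudge delivery happier or sadder

@dataclass
class TalkingFaceResult:
    frames: List[bytes]   # encoded video frames
    latency_ms: float     # "negligible starting latency," per Microsoft


def generate_talking_face(req: TalkingFaceRequest) -> TalkingFaceResult:
    """Placeholder for a real-time audio-driven face animation model.

    A production system would stream frames as audio arrives; here we only
    validate the request so the interface is concrete and runnable.
    """
    if req.max_seconds > 60:
        raise ValueError("demo clips are limited to one minute")
    if req.resolution != 512:
        raise ValueError("demo output is 512 x 512")
    # Model inference would go here.
    raise NotImplementedError("VASA-1 is a research demo with no public model")
```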
By Rob Scott, November 12, 2015
Following announcements that Google is releasing its TensorFlow machine learning platform so developers can create their own artificial intelligence programs, and that Nvidia has made a significant update to its Jetson TX1 supercomputer-on-a-chip, Microsoft is the latest with major AI news. The company has updated its Project Oxford suite of AI tools with powerful new features and programs, such as those designed to identify human emotions and voices, that could make their way into the apps we use on a daily basis. Continue reading Microsoft Project Oxford Updates Could Bring AI to More Apps
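For readers curious what "identifying human emotions" looked like in practice, the minimal sketch below posts an image URL to an emotion-recognition endpoint in the style of Project Oxford's Emotion API as it was documented around 2015. Treat the endpoint path, header name and response fields as illustrative assumptions, since the service was later folded into Azure Cognitive Services and the specifics may differ.

```python
# Minimal sketch of calling an emotion-recognition endpoint in the style of
# Project Oxford's Emotion API circa 2015. The URL, header name and response
# shape follow how the service was documented at the time, but are assumed
# here and may not match the current Azure Cognitive Services offering.
import requests

ENDPOINT = "https://api.projectoxford.ai/emotion/v1.0/recognize"  # assumed
SUBSCRIPTION_KEY = "your-key-here"  # issued with a Project Oxford account


def recognize_emotions(image_url: str) -> list:
    """Return per-face emotion scores for an image at a public URL."""
    response = requests.post(
        ENDPOINT,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    # Each entry pairs a face rectangle with scores for emotions such as
    # anger, happiness, sadness and surprise.
    return response.json()


if __name__ == "__main__":
    for face in recognize_emotions("https://example.com/photo.jpg"):
        scores = face.get("scores", {})
        if scores:
            top = max(scores, key=scores.get)
            print(face.get("faceRectangle"), "->", top)
```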
By Lisette Leonard, April 15, 2014
Stanford engineers have created the next step in interactive gaming — a video game controller that can sense a player’s emotions. The handheld game controller can monitor a player’s brain activity to decipher when a user is extremely engaged or bored, which could trigger the game to throw zombies or other elements at the player to recapture their attention. Gregory Kovacs, a professor of electrical engineering at Stanford, created a prototype controller in his lab in collaboration with Texas Instruments. Continue reading Engineers Developing Emotion-Based Video Game Controller
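A game engine only needs a scalar engagement estimate from such a controller in order to react. The toy Python sketch below simulates that estimate and uses simple hysteresis thresholds to decide when to spawn extra enemies; the thresholds, function names and simulated readings are all illustrative and do not reflect Stanford's actual signal processing.

```python
import random

# Toy loop showing how a game might consume an engagement estimate from a
# biometric controller. The readings are simulated; Stanford's prototype
# derives them from sensors in the controller itself, and the thresholds
# below are arbitrary illustrative choices.

BORED_THRESHOLD = 0.35     # below this, the player is drifting off
ENGAGED_THRESHOLD = 0.55   # above this, back off and let the pace settle


def read_engagement() -> float:
    """Stand-in for a sensor reading in [0, 1]; real hardware I/O would go here."""
    return random.random()


def game_loop(ticks: int = 20) -> None:
    spawning_extra_enemies = False
    for tick in range(ticks):
        engagement = read_engagement()
        # Hysteresis: require crossing different thresholds to toggle state,
        # so noisy readings do not make the game flip-flop every tick.
        if not spawning_extra_enemies and engagement < BORED_THRESHOLD:
            spawning_extra_enemies = True
            print(f"tick {tick}: engagement {engagement:.2f} -> spawn zombies")
        elif spawning_extra_enemies and engagement > ENGAGED_THRESHOLD:
            spawning_extra_enemies = False
            print(f"tick {tick}: engagement {engagement:.2f} -> ease off")


if __name__ == "__main__":
    game_loop()
```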
By Meghan Coyle, April 11, 2014
Access to millions of songs on Spotify, Pandora and other online music streaming services has left music fans feeling overwhelmed. Some fans are now turning to professional music curators to help them identify the best songs for their specific mood. Professional playlist makers typically do not compile mixes based on broad genres or decades; instead, they create mixes for specific occasions or emotional states, such as a family road trip or a sad breakup. Continue reading Pro Music Curators Create Specialized Playlists for Listeners
By Tim Miller, April 8, 2014
During the NAB Show, thousands of companies descend on Las Vegas. Perhaps one of the smallest is a three-man startup called Eyeris, which aims to change the way we gather data about consumer preferences. Featured by the SPROCKIT program, a new venue co-sponsored by NAB that aims to highlight nascent tech companies that may have a big impact on the entertainment industry, Eyeris tracks viewer response to motion picture content using clever software and the cameras already embedded in the devices most of us carry. Continue reading Tech Startup Offers Compelling New Way to Watch Audiences
By Phil Lelyveld, April 8, 2014
In partnership with the National Association of Broadcasters and the World Series of Startups, SPROCKIT is a program that shines a spotlight on interesting startups through the NAB Show and SPROCKIT Sync, the exclusive community of entertainment and media decision-makers that meets three times a year. The July 2014 meeting will take place at ETC@USC. The other meetings are October 2014 in conjunction with NY TV week, and January 2015 in Silicon Valley. Continue reading Startups From SPROCKIT Program Deliver Pitches During NAB
By Rob Scott, March 7, 2014
When mapping out product placement strategies, marketers often avoid scary movies so that consumers will not associate their brands with fear. However, a recent study from the University of British Columbia’s Sauder School of Business suggests that viewers, especially when alone, are actually more likely to remember products and think of them favorably when they see them in a scary movie. When subjects of the study experienced fear, they also experienced an emotional attachment to familiar brands. Continue reading Product Placement Most Effective When Viewers Are Scared?
By Dennis Kuba, January 1, 2014
With the annual Consumer Electronics Show just around the corner, we’ve compiled a first-pass list of products and services we’re looking forward to seeing in Las Vegas next week. We believe these should be of particular interest to those who work in entertainment media. While we anticipate seeing products that directly compete or overlap with those on this list — and we hope there will be plenty of additional surprises — we wanted to share some of the expected highlights in advance. Continue reading CES 2014: Compelling Products Generating Early Buzz (Part 1)
By Cassie Paton, December 3, 2013
New technology allows computers to be programmed to recognize facial expressions — even the most subtle, fleeting expressions. Using frame-by-frame video analysis, computer software can read the muscular changes within people’s faces that indicate a range of emotions. Many predict such software will be used via computer webcams to rate how users respond to certain content — like games or videos — and cater to those users’ perceived needs or desires accordingly. Continue reading Myriad Applications Envisioned for Facial Recognition Tech
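A bare-bones version of that frame-by-frame pipeline can be sketched with OpenCV: detect a face in each frame, crop it, and hand the crop to an expression classifier. Only the face detection below uses real OpenCV calls; the classifier is a stub standing in for whichever trained model a real system would plug in.

```python
# Frame-by-frame expression reading, sketched with OpenCV. Face detection
# uses OpenCV's bundled Haar cascade; the expression classifier is a stub
# standing in for whatever model a production system would use.
import cv2


def score_expression(face_crop) -> dict:
    """Hypothetical classifier: map a face crop to emotion scores.

    A real implementation would run a trained model here; this stub returns
    a neutral placeholder so the pipeline is runnable end to end.
    """
    return {"neutral": 1.0}


def analyze_video(path: str) -> list:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    capture = cv2.VideoCapture(path)
    results = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            results.append(score_expression(gray[y:y + h, x:x + w]))
    capture.release()
    return results


if __name__ == "__main__":
    per_face_scores = analyze_video("sample_clip.mp4")
    print(f"scored {len(per_face_scores)} face detections")
```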
By Valerie Savran, August 28, 2013
Intel is developing depth sensing 3D cameras and software that are able to detect an individual’s emotional state. While refined motion detection technologies are not entirely new, Intel’s product goes beyond tracking the physical movements of objects to determining what the movement actually means. Intel’s depth sensing technology will first be available in webcams and may eventually become available in laptops, smartphones and tablets. Continue reading Intel Develops Depth Sensing 3D Cameras to Track Emotion
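As a rough illustration of the jump from tracking movement to interpreting it, the toy sketch below measures how much successive depth frames change and maps that motion energy to a coarse label. The synthetic frames, thresholds and labels are all invented for illustration and do not represent Intel's method, which draws on much richer face, gesture and posture cues.

```python
import numpy as np

# Toy illustration of the step from tracking motion to interpreting it:
# given successive depth frames (2D arrays of distances in millimeters),
# measure how much the scene changed and map that motion energy to a
# coarse label. Frames, thresholds and labels are invented for this sketch.


def motion_energy(prev_frame: np.ndarray, next_frame: np.ndarray) -> float:
    """Mean absolute per-pixel depth change between two frames."""
    return float(np.mean(np.abs(next_frame.astype(float) - prev_frame.astype(float))))


def interpret(energy: float) -> str:
    if energy < 1.0:
        return "still / calm"
    if energy < 5.0:
        return "moderate movement / engaged"
    return "rapid movement / agitated or excited"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(500, 4000, size=(240, 320)).astype(float)
    still = base + rng.normal(0, 0.3, base.shape)    # barely any change
    waving = base + rng.normal(0, 8.0, base.shape)   # lots of change
    for label, frame in [("still", still), ("waving", waving)]:
        energy = motion_energy(base, frame)
        print(f"{label}: motion energy {energy:.1f} -> {interpret(energy)}")
```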