By Paula Parisi, November 20, 2024
Hangout is a new service that wants to make enjoying music more of a social experience. The platform, from Turntable Labs, is available on iOS, Android and the Web. At launch, Hangout offers more than 100 million songs, available to stream globally as a result of deals with Universal Music Group, Sony Music Entertainment and Warner Music Group as well as indie rights group Merlin. Users can select an avatar and invite friends to their personal ‘hangout’ space, taking turns playing songs from their favorite artists in the virtual DJ booth. Continue reading Hangout Is a New Social Platform with Music Listening Rooms
By Paula Parisi, November 19, 2024
A digital avatar may soon join the talent lineup on ESPN’s college football show “SEC Nation.” Called FACTS, the AI-generated character was developed at the ESPN Edge Innovation Center as “a way to help foster engagement and educate fans on complex sports analytics,” according to ESPN. The avatar was unveiled last week at the 4th Annual ESPN Edge Conference. Built on Nvidia’s Omniverse platform using the company’s ACE microservices, FACTS integrates with Azure OpenAI for natural language processing and ElevenLabs for text-to-speech. Continue reading ESPN Readies a Data-Filled Sports Talk Host Generated by AI
By Paula Parisi, November 5, 2024
D-ID has launched two new types of AI-powered avatars: Premium+ and Express. The company’s video-to-video avatar tools aim to provide personal look-alikes that can sub for their creators in uses ranging from instructional videos to business presentations, offloading on-camera duties in areas including sales, marketing and customer support. According to D-ID, “Premium+ Avatars can generate hyper-realistic digital humans that are indistinguishable from real people and will serve as the foundation for fully interactive digital agents revolutionizing how brands communicate,” while Express Avatars can rapidly produce serviceable avatars “from just one minute of source footage.” Continue reading D-ID’s New Business-Use Avatars Can Converse in Real Time
By Paula Parisi, October 11, 2024
The theme of Zoomtopia 2024 was “an AI-first work platform for human connection,” marked by the release of the custom Zoom AI Companion 2.0 for Zoom Workplace, a $12-per-month AI assistant that will gain the ability to create custom AI avatars starting next year. Initially, the avatars will be available for short video presentations to share with internal teams; the eventual goal is a “digital twin” that can participate in meetings and calls. Zoom says the avatars will “save time and production costs,” working with Zoom Clips to circulate brief video messages to colleagues. Continue reading Zoom Updates Its AI Assistant and Previews Custom Avatars
By Paula Parisi, September 26, 2024
Meta has secured rights to the voices of actors Judi Dench, Kristen Bell, John Cena and others for its Meta AI chatbot, a ChatGPT-like digital assistant central to the company’s plans for conversational AI built on the multimodal Llama 3.2. Also revealed at Meta Connect this week was Orion, “the most advanced glasses the world has ever seen,” queued up to become Meta’s “first consumer full holographic AR glasses,” though they won’t be available anytime soon. A low-priced Quest 3 mixed reality headset, the $299 Quest 3S, will arrive in time for the holidays, however. Continue reading Meta Reveals Orion Concept Glasses, Celeb Voices, Quest 3S
By Paula Parisi, August 23, 2024
D-ID, a platform that uses AI to generate digital humans, has announced the general availability of D-ID Video Translate. The tool lets businesses and content creators automatically re-voice videos in multiple languages, “cloning the speaker’s voice and adapting their lip movements from a single upload.” D-ID is making Video Translate, which accommodates 30 languages, free to D-ID subscribers for a limited time, available through the D-ID Studio or the company’s API. Languages include Arabic, Mandarin, Japanese, Hindi and Ukrainian, in addition to Spanish, German, French and Italian. A bulk translation option lets users translate content into multiple languages simultaneously. Continue reading D-ID Employs AI to Translate Videos into Multiple Languages
By Paula Parisi, August 13, 2024
YouTube is testing an integration with parent company Google’s Gemini AI. Called Brainstorm with Gemini, it invites creators to ideate video concepts, titles and thumbnails. The limited test makes the feature available to a handful of creators whose feedback will be used in strategizing how and whether to introduce the feature more broadly. In May, YouTube began testing another AI tool, renaming its “Research” tab “Inspiration.” The Inspiration tool suggests topics its algorithm detects a creator’s audience might find interesting, supplying an outline and talking points. Brainstorm is similar but carries Google’s Gemini branding. Continue reading YouTube Invites Content Creators to ‘Brainstorm with Gemini’
By Paula Parisi, July 11, 2024
Generative video creation and editing platform Captions has raised $60 million in Series C funding. Founded in 2021 by former Microsoft engineer Gaurav Misra and Goldman Sachs alum Dwight Churchill, the company says its technologies — Lipdub, AI Edit and the 3D avatar app AI Creator — have amassed more than 10 million mobile downloads. The Series C brings its total funding to $100 million at a stated valuation of $500 million. With the new funding, Captions plans to expand its presence in New York City, which is “emerging as the epicenter for AI research,” according to Misra. Continue reading Captions: Generative Video Startup Raises $60 Million in NYC
By Paula Parisi, July 9, 2024
Meta’s popular instant messaging service WhatsApp is reportedly beta testing a feature that would allow the already integrated Meta AI chatbot to edit and reply to images. The capability was spotted in the WhatsApp beta for Android 2.24.14.20, with AI powered by Llama 3, the company’s newest large language model released in April. The beta version works via a camera button added to the text box for Meta AI chat in WhatsApp. When pressed, the button triggers a pop-up that indicates Meta AI can analyze and edit photos, though it’s currently unclear to what extent. Continue reading Meta AI Image Analysis and Editing Beta Tested for WhatsApp
Meta Platforms CEO Mark Zuckerberg recently announced that the company will test a feature to create AI characters through the AI Studio on Instagram that can engage with fans and respond to messages. “Rolling out an early test in the U.S. of our AI Studio so you might start seeing AIs from your favorite creators and interest-based AIs in the coming weeks on Instagram,” he wrote. “These will primarily show up in messaging for now, and will be clearly labeled as AI.” Zuckerberg noted the beta test will help the company improve AI characters and will be made “available to more people soon.” Meta launched AI Studio last year to help businesses build custom chatbots. Continue reading Meta Testing AI Chatbots for Instagram Created by Its Users
By Paula Parisi, June 27, 2024
Synthesia, which uses AI to create business avatars for content such as training, presentation and customer service videos, has announced a major platform update. “Coming soon” with Synthesia 2.0 are full-body avatars with hands capable of a wide range of motions. Users can animate motion using skeletal sequences, onto which a persona selected from the catalog is then automatically mapped. Starting next month, the Nvidia-backed UK company will offer the ability to incorporate brand identity — including typography, colors and logos — into templated videos. A new translation tool automatically applies updates across all languages. Continue reading Lifelike AI Avatars to Get New Features with Synthesia Update
London-based AI startup Synthesia, which creates avatars for enterprise-level generative video presentations, has added “Expressive Avatars” to its feature kit. Powered by Synthesia’s new Express-1 model, these fourth-generation avatars achieve a new benchmark in realism by using contextual expressions that approximate human emotion, the company says. Express-1 has been trained “to understand the intricate relationship between what we say and how we say it,” allowing Expressive Avatars to perform a script with the correct vocal tone, body language and lip movement, “like a real actor,” according to Synthesia. Continue reading Synthesia Express-1 Model Gives ‘Expressive Avatars’ Emotion
By ETCentric Staff, April 22, 2024
Microsoft has developed VASA, a framework for generating lifelike virtual characters with vocal capabilities including speaking and singing. The premier model, VASA-1, can perform the feat in real time from a single static image and a vocalization clip. The research demo showcases realistic audio-enhanced faces that can be fine-tuned to look in different directions or change expression in video clips of up to one minute at 512 x 512 pixels and up to 40 fps “with negligible starting latency,” according to Microsoft, which says “it paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.” Continue reading Microsoft’s VASA-1 Can Generate Talking Faces in Real Time
By ETCentric Staff, March 12, 2024
Soul Machines debuted a synthetic Marilyn Monroe last week at SXSW. The New Zealand-based company teamed on the Digital Marilyn project with Authentic Brands Group, a New York management firm that represents a host of fashion labels as well as personalities such as Elvis Presley, David Beckham and Muhammad Ali. The result is a sophisticated chatbot that Soul Machines describes as an “interactive experience.” Drawing on biological AI, Soul Machines is packaging a “personalized engagement opportunity” for fans and brands, which could lead to new approaches in advertising and promotions. Continue reading Soul Machines Aims for Photorealistic Marilyn Monroe Chatbot
By ETCentric Staff, March 11, 2024
Alibaba is touting a new artificial intelligence system that can animate portraits, making people sing and talk in realistic fashion. Researchers at the Alibaba Group’s Institute for Intelligent Computing developed the generative video framework, calling it EMO, short for Emote Portrait Alive. Input a single reference image along with “vocal audio,” as in talking or singing, and “our method can generate vocal avatar videos with expressive facial expressions and various head poses,” the researchers say, adding that EMO can generate videos of any duration, “depending on the length of input audio.” Continue reading Alibaba’s EMO Can Generate Performance Video from Images