By Rob Scott, November 11, 2024
Google announced it is rolling out Vids, its AI-powered app that enables users to easily create video presentations. The productivity app is featured in the company’s suite of Google Workspace products. Vids uses the Gemini AI model to automatically insert royalty-free stock video footage, create storyboards and scripts, and generate music and voiceovers. It allows users to add documents, slides, visuals, audio and transitions to the presentation’s timeline. “Personalize your content with Vids recording studio to deliver employee training, share company-wide announcements, meeting updates, and more,” suggests Google. Continue reading Google Offers New AI-Powered Vids App to Workspace Users
New York-based AI startup Runway has made its latest frontier model — which creates realistic AI videos from text, image or video prompts — generally available to users willing to upgrade to a paid plan starting at $12 per month for each editor. Introduced several weeks ago, Gen-3 Alpha reportedly offers significant improvements over Gen-1 and Gen-2 in areas such as speed, motion, fidelity and consistency. Runway explains it worked with a “team of research scientists, engineers and artists” to develop the upgrades but did not specify where it collected its training data. As the AI video field ramps up, current rivals include Stability AI, OpenAI, Pika and Luma Labs. Continue reading Runway Making Gen-3 Alpha AI Video Model Available to All
London-based AI startup Synthesia, which creates avatars for enterprise-level generative video presentations, has added “Expressive Avatars” to its feature set. Powered by Synthesia’s new Express-1 model, these fourth-generation avatars have achieved a new benchmark in realism by using contextual expressions that approximate human emotion, the company says. Express-1 has been trained “to understand the intricate relationship between what we say and how we say it,” allowing Expressive Avatars to perform a script with the correct vocal tone, body language and lip movement, “like a real actor,” according to Synthesia. Continue reading Synthesia Express-1 Model Gives ‘Expressive Avatars’ Emotion
By ETCentric Staff, March 27, 2024
OpenAI’s Sora text- and image-to-video tool isn’t publicly available yet, but the company is showing what it’s capable of by putting it in the hands of seven artists. The results — from a short film about a balloon man to a flamingo-giraffe hybrid — are stirring excitement and priming the pump for what OpenAI CTO Mira Murati says will be a 2024 general release. Challenges include making it cheaper to run and enhancing guardrails. Since introducing Sora last month, OpenAI says it’s “been working with visual artists, designers, creative directors and filmmakers to learn how Sora might aid in their creative process.” Continue reading OpenAI Releases Early Demos of Sora Video Generation Tool