Adobe Considers Sora, Pika and Runway AI for Premiere Pro

Adobe plans to add generative AI capabilities to its Premiere Pro editing platform and is exploring the update with third-party AI technologies, including OpenAI’s Sora and models from Runway and Pika Labs, making it easier “to draw on the strengths of different models” within everyday workflows, according to Adobe. Editors will gain the ability to generate and add objects into scenes or shots, remove unwanted elements with a click, and even extend frames and footage length. Adobe is also developing a video model for its own Firefly AI to handle video and audio work in Premiere Pro.

Microsoft’s VASA-1 Can Generate Talking Faces in Real Time

Microsoft has developed VASA, a framework for generating lifelike virtual characters with vocal capabilities including speaking and singing. The premier model, VASA-1, can perform the feat in real time from a single static image and a vocalization clip. The research demo showcases realistic audio-enhanced faces that can be fine-tuned to look in different directions or change expression in video clips of up to one minute at 512×512 pixels and up to 40 fps “with negligible starting latency,” according to Microsoft, which says “it paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.”

Google Imagen 2 Now Generates 4-Second Clips on Vertex AI

During Google Cloud Next 2024 in Las Vegas, Google announced an updated version of its text-to-image generator Imagen 2 on Vertex AI that can generate video clips of up to four seconds. Google calls the feature “text-to-live images,” and it essentially delivers animated GIFs at 24 fps and 360×640 pixel resolution, though Google promises “continuous enhancements.” Imagen 2 can also generate text, emblems and logos in different languages, and can overlay those elements on existing images such as business cards, apparel and products.

AI Video Startup Haiper Announces Funding and Plans for AGI

London-based AI video startup Haiper has emerged from stealth mode with $13.8 million in seed funding and a platform that generates up to two seconds of HD video from text prompts or images. Founded by alumni from Google DeepMind, TikTok and various academic research labs, Haiper is built around a bespoke foundation model that aims to serve the needs of the creative community while the company pursues a path to artificial general intelligence (AGI). Haiper is offering a free trial of what is currently a web-based user interface similar to offerings from Runway and Pika.

Lightricks LTX Studio Is a Text-to-Video Filmmaking Platform

Lightricks, the company behind apps including Facetune, Photoleap and Videoleap, has come up with a text-to-video tool called LTX Studio that it is positioning as a turnkey AI tool for filmmakers and other creators. “From concept to creation,” the new app aims to enable “the transformation of a single idea into a cohesive, AI-generated video.” The tool is currently waitlisted, but Lightricks says it will make the web-based version available to the public for free, at least initially, beginning in April, allowing users to “direct each scene down to specific camera angles with specialized AI.”

Pika Taps ElevenLabs Audio App to Add Lip Sync to AI Video

On the heels of ElevenLabs’ demo of a text-to-sound app, unveiled using clips generated by OpenAI’s text-to-video platform Sora, Pika Labs is releasing a feature called Lip Sync that lets paid subscribers use the ElevenLabs app to add AI-generated voices and dialogue to Pika-generated videos, with characters’ lips moving in sync with the speech. Pika Lip Sync supports both uploaded audio files and text-to-audio AI, allowing users to type or record dialogue, or use pre-existing sound files, then apply AI to change the voicing style.

Google Takes New Approach to Create Video with Lumiere AI

Google has come up with a new approach to high-resolution AI video generation with Lumiere. Most GenAI video models output individual high-resolution frames at various points in the sequence (called “distant keyframes”), fill in the missing frames with low-res images to create motion (known as “temporal super-resolution,” or TSR), then up-res that connective tissue of non-overlapping frames (“spatial super-resolution,” or SSR). Lumiere instead uses what Google calls a “Space-Time U-Net architecture,” which processes all frames at once, “without a cascade of TSR models, allowing us to learn globally coherent motion.”
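The cascaded-versus-joint distinction can be made concrete with a toy sketch. The code below is purely illustrative (not Google’s implementation, and the function names are invented): it labels which frames a cascaded pipeline would generate directly as keyframes versus fill in with a separate TSR stage, contrasted with a single-pass approach that treats every frame jointly.

```python
# Toy illustration of the pipelines described above, using frame labels
# in place of real images. Not Lumiere's actual code.

def cascaded_pipeline(num_frames: int, keyframe_stride: int) -> list[str]:
    """Typical cascade: generate distant keyframes directly, then a
    separate temporal super-resolution (TSR) stage fills the gaps."""
    keyframes = set(range(0, num_frames, keyframe_stride))
    return [
        f"keyframe[{i}]" if i in keyframes else f"tsr_interp[{i}]"
        for i in range(num_frames)
    ]

def space_time_pipeline(num_frames: int) -> list[str]:
    """Lumiere-style idea: one model processes all frames at once,
    so motion is modeled globally rather than stitched between keyframes."""
    return [f"joint[{i}]" for i in range(num_frames)]
```

The point of the contrast: in the cascade, motion between keyframes is inferred by a model that never sees the whole clip, which is where temporal inconsistencies creep in; the joint pass sidesteps that by construction.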

VideoPoet: Google Launches a Multimodal AI Video Generator

Google has unveiled a new large language model designed to advance video generation. VideoPoet is capable of text-to-video, image-to-video, video stylization, video inpainting and outpainting, and video-to-audio. “The leading video generation models are almost exclusively diffusion-based,” Google says, citing Imagen Video as an example. Google finds this counterintuitive, since “LLMs are widely recognized as the de facto standard due to their exceptional learning capabilities across various modalities.” VideoPoet eschews the diffusion approach of relying on separately trained tasks in favor of integrating many video generation capabilities in a single LLM.

Runway Teams with Getty on AI Video for Hollywood and Ads

The Google- and Nvidia-backed AI video startup Runway is partnering with Getty Images to develop the Runway Getty Images Model (RGM), which it is positioning as a new type of generative AI model capable of “providing a new way to bring ideas and stories to life through video” for enterprise customers using copyright-compliant means. Targeting Hollywood studios, advertising, media and broadcast clients, RGM will “provide a baseline model upon which companies can build their own custom models for the generation of video content,” Runway explains.

Stability Introduces GenAI Video Model: Stable Video Diffusion

Stability AI has opened a research preview of its first foundation model for generative video, Stable Video Diffusion, offering text-to-video and image-to-video. Based on the company’s Stable Diffusion text-to-image model, the new open-source model generates video by animating existing still frames, including “multi-view synthesis.” While the company plans to enhance and extend the model’s capabilities, it currently comes in two versions: SVD, which transforms stills into 576×1024 videos of 14 frames, and SVD-XT, which generates up to 24 frames; both run at between three and 30 frames per second.

Startup Kaiber Launches Mobile GenAI App for Music Videos

Kaiber, the AI-powered creative studio whose credits include music video collaborations with artists such as Kid Cudi and Linkin Park, has launched a mobile version of its creator tools, giving musicians and graphic artists on-the-go access to its suite of GenAI tools for text-to-video, image-to-video and video-to-video, “now with curated music to reimagine the music video creation process.” Users can select artist tracks to accompany visuals and build a music video “with as much or little AI collaboration as they wish.” They can also upload their own music or audio and tap Kaiber for visuals.

Magic Studio from Canva Offers AI Design for All Skill Levels

Web-based design app Canva has raised the curtain on its AI-powered Magic Studio as part of the company’s 10-year anniversary outreach. Canva is positioning Magic Studio as a collection of diverse AI tools that provides a “comprehensive AI-design platform” for business and home users who want to automate labor-intensive tasks like creating and editing images and outputting to different formats using generative artificial intelligence. Created for “the 99 percent of the world without complex design skills,” Magic Studio offers many of the features now being built into smartphones and software suites, but easier and “all in one place.”

Getty GenAI Tool for Images and Video Is Powered by Nvidia

Nvidia’s Picasso continues to gain traction among visual companies looking for an AI foundry to train models for generative use. Getty Images has partnered with Nvidia to create custom foundation models for still images and video. Generative AI by Getty Images lets customers create visuals using Getty’s library of licensed photos. The tool is trained on Getty’s own creative library and carries the company’s guarantee of “full indemnification for commercial use.” Getty joins Shutterstock and Adobe among enterprise clients using Picasso. Runway and Cuebric are using it too, even though Picasso is still in development.

Runway Makes Next Advance in Consumer Text-to-Video AI

Google-backed AI startup Runway has released Gen-2, an early entry among commercially available text-to-video models. Previously waitlisted in limited release, the model’s commercial availability matters because text-to-video is widely predicted to be the next big leap in artificial intelligence, following the explosion of AI-generated text and images. While Runway’s offering may not yet be ready to serve as a professional video tool, it is the next step in technology expected to reshape media and entertainment. Filmmaker Joe Russo recently predicted that within the next two years, AI may be able to create feature films.

Runway Opens Waitlist for Its Gen 2 Text-to-Video AI System

New York-based Runway is releasing its Gen 2 system, which generates video clips of up to a few seconds from text or image-based user prompts. The company, which specializes in artificial intelligence-enhanced film and editing tools, has opened a waitlist for the new product, which will be accessed through a private Discord channel by an audience that grows over time. Last year, Meta Platforms and Google both previewed text-to-video software in the research stage, but neither detailed plans to make their platforms public. Bloomberg called Runway’s limited launch “the most high-profile instance of such text-to-video generation outside of a lab.”