By Paula Parisi, October 11, 2024
Hailuo, the free text-to-video generator released last month by the Alibaba-backed company MiniMax, has delivered its promised image-to-video feature. Founded by AI researcher Yan Junjie, the Shanghai-based MiniMax also has backing from Tencent. The model earned high marks for what has been called “ultra realistic” video, and MiniMax says the new image-to-video feature will improve output across the board as a result of “text-and-image joint instruction following,” which means Hailuo now “seamlessly integrates both text and image command inputs, enhancing your visuals while precisely adhering to your prompts.” Continue reading MiniMax’s Hailuo AI Rolls Out New Image-to-Video Capability
By Paula Parisi, September 16, 2024
Backed by Alibaba and Tencent, Chinese startup MiniMax has launched a new text-to-video model called Hailuo AI that is quickly gaining traction on social media thanks to its impressive capabilities, with comments ranging from “fantastical” to “hyper-realistic.” The free, web-based tool has already produced videos that have gone viral, despite a current limit of 6-second clips. However, an image-to-video model is reportedly coming soon, along with a version 2 that promises longer videos and improved motion. Unlike the Jimeng AI text-to-video model released by ByteDance last month, the MiniMax technology is available outside of China. Continue reading Hailuo AI: China’s MiniMax Releases Free Text-to-Video App
By Paula Parisi, September 13, 2024
Adobe is showcasing upcoming generative AI video tools that build on the Firefly video model the software giant announced in April. The offerings include a text-to-video feature and one that generates video from still images; each outputs clips of up to five seconds. Adobe has developed Firefly as the generative component of the AI integration it is rolling out across its Creative Cloud applications, which previously focused on editing and now, thanks to gen AI, incorporate creation. Adobe wasn’t a first mover in the space, but its long-gestating effort has been received enthusiastically. Continue reading Adobe Publicly Demos Firefly Text- and Image-to-Video Tools
By Paula Parisi, August 29, 2024
Canadian generative video startup Viggle AI, which specializes in character motion, has raised $19 million in Series A funding. Viggle was founded in 2022 on the premise of providing a simplified process “to create lifelike animations using simple text-to-video or image-to-video prompts.” The result has been robust adoption among meme creators, with many viral videos circulating on social media platforms powered by Viggle, including one featuring Joaquin Phoenix as the Joker mimicking the movements of rapper Lil Yachty. Viggle’s Discord community has four million members, including “both novice and experienced animators,” according to the company. Continue reading Viggle AI Raises $19 Million on the Power of Memes and More
By Paula Parisi, August 5, 2024
AI media firm Runway has launched Gen-3 Alpha, extending its text-to-video model to use images as prompts for realistic videos generated in seconds. Navigate to Runway’s web-based interface and click “Try Gen-3 Alpha” and you’ll land on a screen with an image uploader, as well as a text box for those who prefer that approach or want to use natural language to tweak results. Runway lets users generate up to 10 seconds of contiguous video using a credit system. “Image to Video is a major update that greatly improves the artistic control,” Runway said in an announcement. Continue reading Runway’s Gen-3 Alpha Creates Realistic Video from Still Image
By Paula Parisi, June 14, 2024
Northern California startup Luma AI has released Dream Machine, a model that generates realistic videos from text prompts and images. Built on a scalable and multimodal transformer architecture and “trained directly on videos,” Dream Machine can create “action-packed scenes” that are physically accurate and consistent, says Luma, which has a free version of the model in public beta. Dream Machine is what Luma calls the first step toward “a universal imagination engine,” while others are calling it “powerful” and “slammed with traffic.” Though Luma has shared scant details, each posted sequence looks to be about 5 seconds long. Continue reading Luma AI Dream Machine Video Generator in Free Public Beta
By ETCentric Staff, March 27, 2024
OpenAI’s Sora text- and image-to-video tool isn’t publicly available yet, but the company is showing what it’s capable of by putting it in the hands of seven artists. The results — from a short film about a balloon man to a hybrid flamingo giraffe — are stirring excitement and priming the pump for what OpenAI CTO Mira Murati says will be a 2024 general release. Challenges include making it cheaper to run and enhancing guardrails. Since introducing Sora last month, OpenAI says it’s “been working with visual artists, designers, creative directors and filmmakers to learn how Sora might aid in their creative process.” Continue reading OpenAI Releases Early Demos of Sora Video Generation Tool
By ETCentric Staff, March 25, 2024
Stability AI has released Stable Video 3D, a generative video model based on the company’s foundation model Stable Video Diffusion. SV3D, as it’s called, comes in two versions. Both can generate and animate multi-view 3D meshes from a single image. The more advanced version also lets users set “specified camera paths” for a “filmed” look to the video generation. “By adapting our Stable Video Diffusion image-to-video diffusion model with the addition of camera path conditioning, Stable Video 3D is able to generate multi-view videos of an object,” the company explains. Continue reading Stable Video 3D Generates Orbital Animation from One Image
By ETCentric Staff, March 8, 2024
London-based AI video startup Haiper has emerged from stealth mode with $13.8 million in seed funding and a platform that generates up to two seconds of HD video from text prompts or images. Founded by alumni from Google DeepMind, TikTok and various academic research labs, Haiper is built around a bespoke foundation model that aims to serve the needs of the creative community while the company pursues a path to artificial general intelligence (AGI). Haiper is offering a free trial of what is currently a web-based user interface similar to offerings from Runway and Pika. Continue reading AI Video Startup Haiper Announces Funding and Plans for AGI
By Paula Parisi, January 26, 2024
Google has come up with a new approach to high-resolution AI video generation with Lumiere. Most GenAI video models output individual high-resolution frames at various points in the sequence (called “distant keyframes”), fill in the missing frames with low-res images to create motion (“temporal super-resolution,” or TSR), then up-res that connective tissue of non-overlapping frames (“spatial super-resolution,” or SSR). Lumiere instead uses what Google calls a “Space-Time U-Net architecture,” which processes all frames at once, “without a cascade of TSR models, allowing us to learn globally coherent motion.” Continue reading Google Takes New Approach to Create Video with Lumiere AI
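The keyframe cascade that Lumiere dispenses with can be illustrated with a toy sketch. Here each “frame” is a single number standing in for an image, and the function names, interpolation scheme, and values are illustrative only, not drawn from Google’s paper:

```python
# Toy sketch of the keyframe cascade (TSR then SSR) that Lumiere's
# Space-Time U-Net replaces with joint processing of all frames.

def temporal_super_resolution(keyframes, n_between):
    """TSR: fill the gaps between distant keyframes by linear interpolation."""
    frames = []
    for a, b in zip(keyframes, keyframes[1:]):
        frames.append(a)
        for i in range(1, n_between + 1):
            t = i / (n_between + 1)
            frames.append(a + (b - a) * t)
    frames.append(keyframes[-1])
    return frames

def spatial_super_resolution(frame, scale):
    """SSR stand-in: tag each low-res in-between frame with its up-res factor."""
    return (frame, scale)

keyframes = [0.0, 1.0, 2.0]                      # "distant keyframes"
video = temporal_super_resolution(keyframes, 3)  # motion via interpolation
hires = [spatial_super_resolution(f, 4) for f in video]
print(len(video))  # 3 keyframes + 2 gaps x 3 in-between frames = 9
```

Because each stage here only sees a pair of neighboring keyframes, the cascade cannot reason about motion across the whole clip, which is the limitation Google says its all-frames-at-once architecture avoids.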
By Paula Parisi, December 22, 2023
Google has unveiled a new large language model designed to advance video generation. VideoPoet is capable of text-to-video, image-to-video, video stylization, video inpainting and outpainting, and video-to-audio. “The leading video generation models are almost exclusively diffusion-based,” Google says, citing Imagen Video as an example. Google finds this counterintuitive, since “LLMs are widely recognized as the de facto standard due to their exceptional learning capabilities across various modalities.” VideoPoet eschews the diffusion approach of relying on separately trained tasks in favor of integrating many video generation capabilities in a single LLM. Continue reading VideoPoet: Google Launches a Multimodal AI Video Generator
By Paula Parisi, November 27, 2023
Stability AI has opened a research preview of its first foundation model for generative video, Stable Video Diffusion, offering text-to-video and image-to-video. Based on the company’s Stable Diffusion text-to-image model, the new open-source model generates video by animating existing still frames, including “multi-view synthesis.” While the company plans to enhance and extend the model’s capabilities, it currently comes in two versions: SVD, which transforms stills into 576×1024 videos of 14 frames, and SVD-XT, which generates up to 24 frames, each at between three and 30 frames per second. Continue reading Stability Introduces GenAI Video Model: Stable Video Diffusion
By Paula Parisi, November 7, 2023
Kaiber, the AI-powered creative studio whose credits include music video collaborations with artists such as Kid Cudi and Linkin Park, has launched a mobile version of its creator tools designed to give musicians and graphic artists on-the-go access to its suite of GenAI tools offering text-to-video, image-to-video and video-to-video, “now with curated music to reimagine the music video creation process.” Users can select artist tracks to accompany visuals to build a music video “with as much or as little AI collaboration as they wish.” Users can also upload their own music or audio and tap Kaiber for visuals. Continue reading Startup Kaiber Launches Mobile GenAI App for Music Videos