By Paula Parisi, November 6, 2024
New York-based AI firm Runway has added 3D video camera controls to Gen-3 Alpha Turbo, giving users the ability to manipulate granular aspects of the scene they are generating, whether it originates from text prompts, uploaded images or self-created video. Users can zoom in and out on a subject or scene, moving around an AI-generated character or form in 3D as if on a real set or actual location. The new feature, available now, lets creators “choose both the direction and intensity of how you move through your scenes for even more intention in every shot,” Runway explains. Continue reading Runway Adds 3D Video Cam Controls to Gen-3 Alpha Turbo
By Paula Parisi, October 25, 2024
Runway is launching Act-One, a motion capture system that uses video and voice recordings to map human facial expressions onto characters using the company’s latest model, Gen-3 Alpha. Runway calls it “a significant step forward in using generative models for expressive live action and animated content.” Compared to past facial capture techniques — which typically require complex rigging — Act-One is driven directly and solely by the performance of an actor, requiring “no extra equipment,” making it more likely to capture and preserve an authentic, nuanced performance, according to the company. Continue reading Runway’s Act-One Facial Capture Could Be a ‘Game Changer’
By Paula Parisi, October 16, 2024
Adobe has launched a public beta of its Generate Video app, part of the Firefly Video model, which users can try for free on a dedicated website. Login is required, and there is still a waitlist for unfettered access, but the Web app facilitates up to five seconds of video generation using text and image prompts. It can turn 2D pictures into 3D animation and is also capable of producing video with dynamic text. The company has also added an AI feature called “Extend Video” to Premiere Pro to lengthen existing footage by two seconds. The news has the media lauding Adobe for beating OpenAI’s Sora and Google’s Veo to market. Continue reading Adobe Promos AI in Premiere Pro, ‘Generate Video’ and More
By Paula Parisi, October 14, 2024
Generative video models seem to be debuting daily. Pyramid Flow, among the latest, aims for realism, producing dynamic video sequences that have temporal consistency and rich detail while being open source and free. The model can create clips of up to 10 seconds using both text and image prompts. It offers a cinematic look, supporting 1280×768 pixel resolution clips at 24 fps. Developed by a consortium of researchers from Peking University, Beijing University and Kuaishou Technology, Pyramid Flow harnesses a new technique that starts with low-resolution video, outputting at full-res only at the end of the process. Continue reading Pyramid Flow Introduces a New Approach to Generative Video
By Paula Parisi, October 11, 2024
Hailuo, the free text-to-video generator released last month by the Alibaba-backed company MiniMax, has delivered its promised image-to-video feature. Founded by AI researcher Yan Junjie, the Shanghai-based MiniMax also has backing from Tencent. The model earned high marks for what has been called “ultra realistic” video, and MiniMax says the new image-to-video feature will improve output across the board as a result of “text-and-image joint instruction following,” which means Hailuo now “seamlessly integrates both text and image command inputs, enhancing your visuals while precisely adhering to your prompts.” Continue reading MiniMax’s Hailuo AI Rolls Out New Image-to-Video Capability
By Paula Parisi, September 30, 2024
Artificial intelligence platform Runway has launched The Hundred Film Fund to help finance 100 projects that use its AI to tell stories. Created by the company through its Runway Studios, the Fund is starting with $5 million, “with the potential to grow to $10 million.” Runway is presenting the Fund as “an open call to all creative professionals who have AI-augmented film projects in the pre- or post-production phases and are in need of funding.” Directors, producers and screenwriters are among those invited to apply. The program will consider all formats, from features to shorts, documentaries, experimental projects, music videos and more. Continue reading Runway Launches $5M AI Film Fund as Open Call to Creators
By Paula Parisi, August 5, 2024
AI media firm Runway has launched image prompting for Gen-3 Alpha, building on the text-to-video model by using images to generate realistic videos in seconds. Navigate to Runway’s web-based interface and click “Try Gen-3 Alpha” and you’ll land on a screen with an image uploader, as well as a text box for those who either prefer that approach or want to use natural language to tweak results. Runway lets users generate up to 10 seconds of contiguous video using a credit system. “Image to Video is a major update that greatly improves the artistic control,” Runway said in an announcement. Continue reading Runway’s Gen-3 Alpha Creates Realistic Video from Still Image
By Paula Parisi, July 29, 2024
Stability AI has unveiled an experimental new model, Stable Video 4D, which generates photorealistic 3D video. Building on what it created with Stable Video Diffusion, released in November, this latest model can take moving image data of an object and iterate it from multiple angles — generating up to eight different perspectives. Stable Video 4D can generate five frames across eight views in about 40 seconds using a single inference, according to the company, which says the model has “future applications in game development, video editing, and virtual reality.” Users begin by uploading a single video and specifying desired 3D camera poses. Continue reading Stable Video 4D Adds Time Dimension to Generative Imagery
By Paula Parisi, July 11, 2024
DreamFlare has emerged from stealth to launch what is being billed as the first streaming platform for GenAI video. In addition to the consumer-facing subscription platform, the business model includes a sort of AI studio where creators can tap the expertise of professional storytellers to produce AI video using third-party tools like Runway, Sora and Midjourney. The company will feature two types of content: Flips, which are animated narratives with audio that viewers can also examine frame-by-frame, as with comic books, and Spins, described as “short movies” featuring branched narratives that provide interactive plot choices. Continue reading DreamFlare Launches AI Video Studio and Streaming Service
New York-based AI startup Runway has made its latest frontier model — which creates realistic AI videos from text, image or video prompts — generally available to users willing to upgrade to a paid plan starting at $12 per month for each editor. Introduced several weeks ago, Gen-3 Alpha reportedly offers significant improvements over Gen-1 and Gen-2 in areas such as speed, motion, fidelity and consistency. Runway explains it worked with a “team of research scientists, engineers and artists” to develop the upgrades but did not specify where it collected its training data. As the AI video field ramps up, current rivals include Stability AI, OpenAI, Pika and Luma Labs. Continue reading Runway Making Gen-3 Alpha AI Video Model Available to All
By Paula Parisi, July 1, 2024
The world’s first AI-powered movie camera has surfaced. Still in development, it aims to enable filmmakers to turn footage into AI imagery in real time while shooting. Called the CMR-M1, for camera model 1, it is the product of creative tech agency SpecialGuestX and media firm 1stAveMachine, with the goal of providing creatives with a familiar interface for AI imagemaking. It was inspired by the Cine-Kodak device, the first portable 16mm camera. “We designed a camera that serves as a physical interface to AI models,” said Miguel Espada, co-founder and executive creative technologist at SpecialGuestX, a company that does not think directors will use AI sitting at a keyboard. Continue reading New Prototype Is the World’s First AI-Powered Movie Camera
By Paula Parisi, June 27, 2024
Toys R Us is the first company to use OpenAI’s generative video platform Sora to produce a commercial, or what is being described as a “brand film.” With a running time of 1:06, the spot depicts company founder Charles Lazarus as a young boy, “envisioning his dreams” for the toy store and mascot Geoffrey the Giraffe. It was co-produced and directed by Los Angeles creative agency Native Foreign co-founder Nik Kleverov, who has alpha access to the pre-release Sora. Toys R Us says that from concept to completed video, the project came together in just a few weeks to premiere at the 2024 Cannes Lions International Festival of Creativity. Continue reading Toys R Us and Native Foreign Create Ad Using OpenAI’s Sora
By Paula Parisi, June 19, 2024
Runway ML has introduced a new foundation model, Gen-3 Alpha, which the company says can generate high-quality, realistic scenes up to 10 seconds long from text prompts, still images or a video sample. Offering a variety of camera movements, Gen-3 Alpha will initially roll out to Runway’s paid subscribers, but the company plans to add a free version in the future. Runway says Gen-3 Alpha is the first of a new series of models trained on the company’s new large-scale multimodal infrastructure, which offers improvements “in fidelity, consistency, and motion over Gen-2,” released last year. Continue reading Runway’s Gen-3 Alpha Creates AI Videos Up to 10-Seconds
By Paula Parisi, June 19, 2024
Google DeepMind has unveiled new research on AI tech it calls V2A (“video-to-audio”) that can generate soundtracks for videos. The initiative complements the wave of AI video generators from companies ranging from biggies like OpenAI and Alibaba to startups such as Luma and Runway, all of which require a separate app to add sound. V2A technology “makes synchronized audiovisual generation possible” by combining video pixels with natural language text prompts “to generate rich soundscapes for the on-screen action,” DeepMind writes, explaining that it can “create shots with a dramatic score, realistic sound effects or dialogue.” Continue reading DeepMind’s V2A Generates Music, Sound Effects, Dialogue
By Paula Parisi, June 14, 2024
Northern California startup Luma AI has released Dream Machine, a model that generates realistic videos from text prompts and images. Built on a scalable and multimodal transformer architecture and “trained directly on videos,” Dream Machine can create “action-packed scenes” that are physically accurate and consistent, says Luma, which has a free version of the model in public beta. Dream Machine is what Luma calls the first step toward “a universal imagination engine,” while others are calling it “powerful” and “slammed with traffic.” Though Luma has shared scant details, each posted sequence looks to be about 5 seconds long. Continue reading Luma AI Dream Machine Video Generator in Free Public Beta