Adobe Promos AI in Premiere Pro, ‘Generate Video’ and More

Adobe has launched a public beta of its Generate Video app, part of the Firefly Video model, which users can try for free on a dedicated website. Login is required, and there is still a waitlist for unfettered access, but the web app facilitates up to five seconds of video generation using text and image prompts. It can turn 2D pictures into 3D animation and is also capable of producing video with dynamic text. The company has also added an AI feature called “Extend Video” to Premiere Pro that lengthens existing footage by two seconds. The news has the media lauding Adobe for beating OpenAI’s Sora and Google’s Veo to market.

Pyramid Flow Introduces a New Approach to Generative Video

Generative video models seem to be debuting daily. Pyramid Flow, among the latest, aims for realism, producing dynamic video sequences with temporal consistency and rich detail while being open source and free. The model can create clips of up to 10 seconds using both text and image prompts. It offers a cinematic look, supporting 1280×768 pixel resolution clips at 24 fps. Developed by a consortium of researchers from Peking University, Beijing University of Posts and Telecommunications and Kuaishou Technology, Pyramid Flow harnesses a new technique that starts with low-resolution video, outputting at full resolution only at the end of the process.

MiniMax’s Hailuo AI Rolls Out New Image-to-Video Capability

Hailuo, the free text-to-video generator released last month by the Alibaba-backed company MiniMax, has delivered its promised image-to-video feature. Founded by AI researcher Yan Junjie, the Shanghai-based MiniMax also has backing from Tencent. The model earned high marks for what has been called “ultra realistic” video, and MiniMax says the new image-to-video feature will improve output across the board as a result of “text-and-image joint instruction following,” which means Hailuo now “seamlessly integrates both text and image command inputs, enhancing your visuals while precisely adhering to your prompts.”

Meta’s Movie Gen Model is a Powerful Content Creation Tool

Meta Platforms has unveiled Movie Gen, a new family of AI models that generates video and audio content. Coming to Instagram next year, Movie Gen also allows a high degree of editing and effects customization using text prompts. Meta CEO Mark Zuckerberg demonstrated its abilities last week in an example shared on his Instagram account, in which he transforms a leg press machine at the gym into a steampunk contraption and then one made of molten gold. The models have been trained on a combination of licensed and publicly available datasets.

Adobe Publicly Demos Firefly Text- and Image-to-Video Tools

Adobe is showcasing upcoming generative AI video tools that build on the Firefly video model the software giant announced in April. The offerings include a text-to-video feature and one that generates video from pictures. Each outputs clips of up to five seconds. Adobe has developed Firefly as the generative component of the AI integration it is rolling out across its Creative Cloud applications, which previously focused on editing and now, thanks to gen AI, incorporate creation. Adobe wasn’t a first-mover in the space, but its percolating effort has been received enthusiastically.

ByteDance Intros Jimeng AI Text-to-Video Generator in China

ByteDance has debuted a text-to-video mobile app in its native China that is available on the company’s TikTok equivalent there, Douyin. Called Jimeng AI, the app is the subject of speculation that it will come to North America and Europe soon via TikTok or ByteDance’s CapCut editing tool, possibly beating competing U.S. technologies like OpenAI’s Sora to market. Jimeng (translation: “dream”) uses text prompts to generate short videos. For now, it responds only to prompts written in Chinese. In addition to entertainment, the app is described as applicable to education, marketing and other purposes.

Runway’s Gen-3 Alpha Creates Realistic Video from Still Image

AI media firm Runway has launched Gen-3 Alpha, building on its text-to-video model by using images to prompt realistic videos generated in seconds. Navigate to Runway’s web-based interface and click “try Gen-3 Alpha” and you’ll land on a screen with an image uploader, as well as a text box for those who either prefer that approach or want to use natural language to tweak results. Runway lets users generate up to 10 seconds of contiguous video using a credit system. “Image to Video is a major update that greatly improves the artistic control,” Runway said in an announcement.

Stable Video 4D Adds Time Dimension to Generative Imagery

Stability AI has unveiled an experimental new model, Stable Video 4D, which generates photorealistic 3D video. Building on what it created with Stable Video Diffusion, released in November, this latest model can take moving image data of an object and iterate it from multiple angles — generating up to eight different perspectives. Stable Video 4D can generate five frames across eight views in about 40 seconds using a single inference, according to the company, which says the model has “future applications in game development, video editing, and virtual reality.” Users begin by uploading a single video and specifying desired 3D camera poses.

Gemini Powering Google Vids Multimedia Presentation Builder

Google has launched the beta version of its Gemini-powered Google Vids productivity app, which lets users create work-related video presentations that embed documents, slides, audio recordings and even additional videos into a timeline. Vids is incorporated into Workspace Labs, Google’s AI preview space, and Google says invited participants can use it to “build a narrative with high quality templates” or “get to a first draft faster.” Access to Google’s royalty-free stock content library and the Vids recording studio means a project can be completed “without ever leaving Workspace,” according to the company.

DreamFlare Launches AI Video Studio and Streaming Service

DreamFlare has emerged from stealth to launch what is being billed as the first streaming platform for GenAI video. In addition to the consumer-facing subscription platform, the business model includes a sort of AI studio where creators can tap the expertise of professional storytellers to produce AI video using third-party tools like Runway, Sora and Midjourney. The company will feature two types of content: Flips, which are animated narratives with audio that viewers can also examine frame-by-frame, as with comic books, and Spins, described as “short movies” featuring branched narratives that provide interactive plot choices.

Runway Making Gen-3 Alpha AI Video Model Available to All

New York-based AI startup Runway has made its latest frontier model — which creates realistic AI videos from text, image or video prompts — generally available to users willing to upgrade to a paid plan starting at $12 per month for each editor. Introduced several weeks ago, Gen-3 Alpha reportedly offers significant improvements over Gen-1 and Gen-2 in areas such as speed, motion, fidelity and consistency. Runway explains it worked with a “team of research scientists, engineers and artists” to develop the upgrades but did not specify where it collected its training data. As the AI video field ramps up, current rivals include Stability AI, OpenAI, Pika and Luma Labs.

Drexel Claims Its AI Has 98 Percent Rate Detecting Deepfakes

Deepfake videos are becoming increasingly problematic, not only in spreading disinformation on social media but also in enterprise attacks. Now researchers at Drexel University College of Engineering say they have developed an advanced algorithm with a 98 percent accuracy rate in detecting deepfake videos. Called the MISLnet algorithm, for the school’s Multimedia and Information Security Lab where it was invented, the platform uses machine learning to recognize and extract the “digital fingerprints” of video generators including Stable Video Diffusion, VideoCrafter and CogVideo.

New Prototype Is the World’s First AI-Powered Movie Camera

The world’s first AI-powered movie camera has surfaced. Still in development, it aims to enable filmmakers to turn footage into AI imagery in real time while shooting. Called the CMR-M1, for camera model 1, it is the product of creative tech agency SpecialGuestX and media firm 1stAveMachine, with the goal of providing creatives with a familiar interface for AI imagemaking. It was inspired by the Cine-Kodak device, the first portable 16mm camera. “We designed a camera that serves as a physical interface to AI models,” said Miguel Espada, co-founder and executive creative technologist at SpecialGuestX, a company that does not think directors will use AI sitting at a keyboard.

Toys R Us and Native Foreign Create Ad Using OpenAI’s Sora

Toys R Us is the first company to use OpenAI’s generative video platform Sora to produce a commercial, or what is being described as a “brand film.” With a running time of 1:06, the spot depicts company founder Charles Lazarus as a young boy, “envisioning his dreams” for the toy store and mascot Geoffrey the Giraffe. It was co-produced and directed by Nik Kleverov, co-founder of Los Angeles creative agency Native Foreign, who has alpha access to the pre-release Sora. Toys R Us says that from concept to completed video, the project came together in just a few weeks to premiere at the 2024 Cannes Lions International Festival of Creativity.

Luma AI Dream Machine Video Generator in Free Public Beta

Northern California startup Luma AI has released Dream Machine, a model that generates realistic videos from text prompts and images. Built on a scalable and multimodal transformer architecture and “trained directly on videos,” Dream Machine can create “action-packed scenes” that are physically accurate and consistent, says Luma, which has a free version of the model in public beta. Dream Machine is what Luma calls the first step toward “a universal imagination engine,” while others are calling it “powerful” and “slammed with traffic.” Though Luma has shared scant details, each posted sequence looks to be about 5 seconds long.