By Paula Parisi, December 12, 2024
World Labs, the AI startup co-founded by Stanford AI pioneer Fei-Fei Li, has debuted a “spatial intelligence” system that can generate 3D worlds from a single image. Although the output is not photorealistic, the tech could be a breakthrough for animation companies and video game developers. Deploying what it calls Large World Models (LWMs), World Labs is focused on transforming 2D images into turnkey 3D environments with which users can interact. Observers say that interactivity is what sets World Labs’ technology apart from offerings by other AI companies that transform 2D to 3D. Continue reading: World Labs AI Lets Users Create 3D Worlds from Single Photo
By Paula Parisi, November 8, 2024
Wonder Animation is the latest tool from Wonder Dynamics, the AI startup founded in 2017 by actor Tye Sheridan and VFX artist Nikola Todorovic and purchased by Autodesk in May. Now in beta, Wonder Animation can automatically transpose live-action footage into stylized 3D animation. Creators can shoot with any camera system and lenses, on any set or location, and easily convert the footage to 3D CGI. Matching the camera position and movement to the characters and environment, Wonder Animation lets users edit those shots in Maya, Blender or Unreal, then uses AI to reconstruct the result as 3D animation. Continue reading: Autodesk’s AI Tool Turns Live-Action Video into 3D Animation
By Paula Parisi, October 25, 2024
Runway is launching Act-One, a motion capture system that uses video and voice recordings to map human facial expressions onto characters via the company’s latest model, Gen-3 Alpha. Runway calls it “a significant step forward in using generative models for expressive live action and animated content.” Compared to past facial capture techniques, which typically require complex rigging, Act-One is driven directly and solely by an actor’s performance, requiring “no extra equipment,” making it more likely to capture and preserve an authentic, nuanced performance, according to the company. Continue reading: Runway’s Act-One Facial Capture Could Be a ‘Game Changer’
By Paula Parisi, October 16, 2024
Adobe has launched a public beta of its Generate Video app, part of the Firefly Video model, which users can try for free on a dedicated website. Login is required, and there is still a waitlist for unfettered access, but the web app can generate up to five seconds of video from text and image prompts. It can turn 2D pictures into 3D animation and can also produce video with dynamic text. The company has also added an AI feature called “Extend Video” to Premiere Pro that lengthens existing footage by two seconds. The news has the media lauding Adobe for beating OpenAI’s Sora and Google’s Veo to market. Continue reading: Adobe Promos AI in Premiere Pro, ‘Generate Video’ and More
By Paula Parisi, October 15, 2024
Meta is rolling out new generative AI advertising tools for video creation on Facebook and Instagram. The expansion of the Advantage+ creative ad suite will become widely available to advertisers in early 2025. The announcement, made at Advertising Week in New York last week, was positioned as a way for marketers to improve campaign performance on Meta’s social platforms. The new tools will allow brands to convert static images into video ads. The company also announced a new full-screen video tab for Facebook that feeds short-form Reels along with long-form and live-stream content. Continue reading: Meta Announces New GenAI Video Tools at Advertising Week
By Paula Parisi, September 20, 2024
A newly redesigned Snapchat experience is built around a three-tab user interface called Simple Snapchat. As part of that effort, the social platform is launching more generative video features, including text-to-video in the app’s Lens Studio AR authoring tool. Easy Lens allows the quick generation of Lenses from typed text prompts, making it possible to experiment with Halloween costumes or explore back-to-school looks. Launching in beta for select creators, the new features are designed for all skill levels, Snap says. The company is also updating its GenAI Suite and adding an Animation Library of “hundreds of high-quality movements.” Continue reading: Snapchat Is Getting a Redesign and Generative Text-to-Video
By Paula Parisi, August 29, 2024
Canadian generative video startup Viggle AI, which specializes in character motion, has raised $19 million in Series A funding. Viggle was founded in 2022 on the premise of providing a simplified process “to create lifelike animations using simple text-to-video or image-to-video prompts.” The result has been robust adoption among meme creators, with many Viggle-powered viral videos circulating on social media platforms, including one featuring Joaquin Phoenix as the Joker mimicking the movements of rapper Lil Yachty. Viggle’s Discord community has four million members, including “both novice and experienced animators,” according to the company. Continue reading: Viggle AI Raises $19 Million on the Power of Memes and More
By Paula Parisi, July 1, 2024
The world’s first AI-powered movie camera has surfaced. Still in development, it aims to let filmmakers turn footage into AI imagery in real time while shooting. Called the CMR-M1 (for “camera model 1”), it is the product of creative tech agency SpecialGuestX and media firm 1stAveMachine, with the goal of giving creatives a familiar interface for AI imagemaking. It was inspired by the Cine-Kodak, the first portable 16mm camera. “We designed a camera that serves as a physical interface to AI models,” said Miguel Espada, co-founder and executive creative technologist at SpecialGuestX, a company betting that directors will not want to make AI imagery while sitting at a keyboard. Continue reading: New Prototype Is the World’s First AI-Powered Movie Camera
By Paula Parisi, June 4, 2024
A year after its announcement, Fable is launching Showrunner, a platform that lets anyone make TV-style animated content by writing prompts that generative AI turns into shows. The San Francisco company, run by CEO Edward Saatchi with recruits from Oculus, Pixar and various AI startups, is launching 10 shows that let users make their own episodes “from their couch,” waiting only minutes to see the finished result. According to Saatchi, a 15-word prompt is enough to generate a 10- to 20-minute episode, and he hopes Fable’s shows can garner an audience by self-publishing on Amazon Prime. Continue reading: Fable Launches Showrunner Animated Episodic TV Generator
By ETCentric Staff, April 8, 2024
Mobile entertainment platform Storiaverse is connecting writers and animators around the world to create content for what it claims is a unique “read-watch” format. Available on iOS and Android, Storiaverse combines animated video, audio and text into a narrative that “enhances the reading experience for digital native adults.” Created by Agnes Kozera and David Kierzkowski, co-founders of the Podcorn podcast sponsorship marketplace, Storiaverse caters to graphic novel fans interested in discovering original, short-form animated stories that run 5-10 minutes in length. At launch there will be 25 original titles. Continue reading: Short-Form Video App Storiaverse Touts ‘Read-Watch’ Format
By ETCentric Staff, March 25, 2024
Stability AI has released Stable Video 3D, a generative video model based on the company’s foundation model Stable Video Diffusion. SV3D, as it’s called, comes in two versions, both of which can generate and animate multi-view 3D meshes from a single image. The more advanced version also lets users set “specified camera paths” for a “filmed” look to the video generation. “By adapting our Stable Video Diffusion image-to-video diffusion model with the addition of camera path conditioning, Stable Video 3D is able to generate multi-view videos of an object,” the company explains. Continue reading: Stable Video 3D Generates Orbital Animation from One Image
By ETCentric Staff, March 15, 2024
Artificial intelligence imaging service Midjourney has been embraced by storytellers, who have long been clamoring for a feature that regenerates characters consistently across new requests. Now Midjourney is delivering that functionality with the new “--cref” tag (short for Character Reference), available to those using Midjourney v6 on the Discord server. Users invoke it by adding the tag to the end of a text prompt, followed by the URL of the master image that subsequent generations should match. Midjourney will then attempt to reproduce the particulars of the character’s face, body and clothing. Continue reading: Midjourney Creates a Feature to Advance Image Consistency
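As a concrete illustration, a v6 prompt with a character reference might look like the following. The scene description and image URL are hypothetical, and the optional `--cw` (character weight) parameter, which controls how much of the reference beyond the face is matched, is an assumption drawn from Midjourney’s documented usage rather than from the article above:

```
/imagine prompt: the same knight trekking across a desert at sunset --cref https://example.com/my-knight.png --cw 80 --v 6
```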
By ETCentric Staff, March 11, 2024
Alibaba is touting a new artificial intelligence system that can animate portraits, making people sing and talk in realistic fashion. Researchers at the Alibaba Group’s Institute for Intelligent Computing developed the generative video framework, calling it EMO, short for Emote Portrait Alive. Given a single reference image along with “vocal audio,” as in talking or singing, “our method can generate vocal avatar videos with expressive facial expressions and various head poses,” the researchers say, adding that EMO can generate videos of any duration, depending on the length of the audio input. Continue reading: Alibaba’s EMO Can Generate Performance Video from Images
By ETCentric Staff, March 6, 2024
Filmmaker Gary Hustwit and artist Brendan Dawes aspire to change the way audiences experience film. Their startup, Anamorph, has launched with an app that can reassemble different versions of the same film. The app debuted with “Eno,” a Hustwit-directed documentary about music iconoclast Brian Eno that premiered in January at the Sundance Film Festival, where every “Eno” screening presented the audience with a unique viewing experience. Drawing scenes from a repository of over 500 hours of “Eno” material, the Anamorph app can generate what the company says are billions of different configurations. Continue reading: Generative Tech Enables Multiple Versions of the Same Movie
By ETCentric Staff, February 16, 2024
Apple has taken a novel approach to animation with Keyframer, using large language models to add motion to static images through natural language prompts. “The application of LLMs to animation is underexplored,” Apple researchers say in a paper that describes Keyframer as an “animation prototyping tool.” Informed by input from animators and engineers, Keyframer lets users refine their work through “a combination of prompting and direct editing,” the paper explains. The LLM generates CSS animation code, and users can also request design variations in natural language. Continue reading: Apple’s Keyframer AI Tool Uses LLMs to Prototype Animation
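To make that output concrete, the CSS animation code such a tool emits might resemble the snippet below. This is a hypothetical sketch, not an example from Apple’s paper; the `#sun` selector assumes a static image (such as an SVG) containing an element with that id.

```css
/* Loop a three-second "rise" animation on the #sun element,
   fading it in while drifting it upward. */
#sun {
  animation: rise 3s ease-in-out infinite alternate;
}

@keyframes rise {
  from { opacity: 0.3; transform: translateY(20px); }
  to   { opacity: 1;   transform: translateY(0); }
}
```

A prompt like “make the sun rise gently” could plausibly yield code of this shape, which the user could then refine by editing values directly or by issuing follow-up prompts.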