Veo 2 Is Unveiled Weeks After Google Debuted Veo in Preview

Attempting to stay ahead of OpenAI in the generative video race, Google announced Veo 2, which it says can output clips of more than two minutes at 4K resolution (4096 x 2160 pixels). Competitor Sora can generate video of up to 20 seconds at 1080p. However, TechCrunch says Veo 2’s supremacy is “theoretical” since the model is currently available only through Google Labs’ experimental VideoFX platform, which caps output at 8 seconds and 720p. VideoFX is also waitlisted, though Google says it will expand access this week (with no comment on raising the cap).

World Labs AI Lets Users Create 3D Worlds from Single Photo

World Labs, the AI startup co-founded by Stanford AI pioneer Fei-Fei Li, has debuted a “spatial intelligence” system that can generate 3D worlds from a single image. Although the output is not photorealistic, the tech could be a breakthrough for animation companies and video game developers. Deploying what it calls Large World Models (LWMs), World Labs is focused on transforming 2D images into turnkey 3D environments that users can explore and interact with. Observers say that interactivity is what sets World Labs’ technology apart from offerings by other AI companies that convert 2D to 3D.

Runway Adds 3D Video Cam Controls to Gen-3 Alpha Turbo

New York-based AI firm Runway has added 3D video camera controls to Gen-3 Alpha Turbo, giving users granular control over the scenes they generate, whether those scenes originate from text prompts, uploaded images or their own video. Users can zoom in and out on a subject or scene, or move around an AI-generated character or object in 3D as if on a real set or actual location. The new feature, available now, lets creators “choose both the direction and intensity of how you move through your scenes for even more intention in every shot,” Runway explains.
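
The article does not detail how Runway implements these controls, but the “direction and intensity” framing maps onto basic camera math: a unit direction vector scaled by an intensity value moves a virtual camera through the scene at each step. The sketch below is a purely hypothetical Python illustration of that idea, not Runway’s API; all names are invented for the example.

```python
import numpy as np

def move_camera(position, direction, intensity):
    """Advance a virtual camera along a chosen direction.

    position  -- current camera position, (x, y, z)
    direction -- desired movement direction (need not be normalized)
    intensity -- distance to travel this step, in scene units
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)  # keep only the direction component
    return np.asarray(position, dtype=float) + intensity * d

# Example: push in toward the subject, then truck slightly to the right.
cam = np.array([0.0, 1.5, 5.0])
cam = move_camera(cam, direction=[0, 0, -1], intensity=2.0)  # dolly in
cam = move_camera(cam, direction=[1, 0, 0], intensity=0.5)   # truck right
print(cam)  # [0.5 1.5 3. ]
```

In a generative pipeline such parameters would presumably condition the video model rather than drive an explicit camera rig, but the framing of a camera move as direction plus intensity is the same.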

OpenAI’s Generative Video Tech Is Described as ‘Eye-Popping’

OpenAI has debuted a generative video model called Sora that could be a game changer. In OpenAI’s demonstration clips, Sora depicts both fantasy and natural scenes with a photorealistic fidelity that makes the footage appear to have been photographed. Although Sora is said to be limited to one-minute clips for now, it is only a matter of time until that cap expands, which suggests the technology could have a significant impact on all aspects of production, from entertainment to advertising to education. Concerned about Sora’s disinformation potential, OpenAI is proceeding cautiously, initially making the model available only to a select group of testers to help it troubleshoot.

Runway Makes Next Advance in Consumer Text-to-Video AI

Google-backed AI startup Runway has released Gen-2, an early entry among commercially available text-to-video models. Previously offered only through a waitlist, Gen-2’s commercial availability matters because text-to-video is widely predicted to be the next major leap in artificial intelligence, following the explosion of AI-generated text and images. While Runway’s model may not yet be ready to serve as a professional video tool, it marks the next step in the development of technology expected to reshape media and entertainment. Filmmaker Joe Russo recently predicted that within the next two years AI may be able to create feature films.

Unity Deepens Storytelling Workbench with CineCast Feature

At the Unite 2018 developers conference last week, Unity Technologies’ head of cinematics, Adam Myhill, unveiled CineCast, a synthetic co-director for filming video games that has implications for narrative storytelling and sports broadcasts of all kinds. With the help of four players and a stand-in director, professional gamer Stephanie Harvey, Myhill demoed the CineCast mode for “GTFO,” a first-person shooter and the first title to use CineCast. Under Harvey’s watchful eye, CineCast automatically chose, in real time, the best and highest-quality shot to move the action forward, with Harvey making only a few on-the-fly adjustments.

Unity’s Cinemachine Designed for Animation, Games, Movies

At the Unite Europe conference in Amsterdam, more than 1,400 game developers examined tools and innovations from game engine company Unity. Among those was the virtual camera system Cinemachine, which makes it easier for even neophyte content creators to get creative with animation, games, eSports, cinematics and movie pre-visualization. Unity’s Asset Store offers free 3D models and environments, including the Adam character from last year’s impressive tech demo. The engine also offers generic animations that can be applied to characters.

ABCs of Light Field Capture, Key to Photorealistic Virtual Reality

A technique called light field capture will become the foundation for photoreal virtual actors in virtual reality, says Paul Debevec, chief visual officer at the University of Southern California’s Institute for Creative Technologies (ICT). At the recent VRLA Expo, Debevec gave a talk that explored two decades of research and development in light field capture technology and described the basics of what makes the technique so compelling for creating photorealistic virtual reality.
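
As general background not drawn from the article: in the graphics literature, a light field records the radiance traveling along every ray in a region of space, most commonly indexed by where each ray crosses two parallel reference planes. Capturing a dense sampling of those rays is what allows new photorealistic viewpoints to be rendered by resampling, rather than simulating, the light in a scene.

```latex
% Standard two-plane light field parameterization (general background,
% not a formula attributed to Debevec's VRLA talk): each ray is
% identified by its intersections (u, v) and (s, t) with two parallel
% reference planes, and the light field assigns it a radiance value.
L : (u, v, s, t) \;\longmapsto\; \text{radiance along the ray through } (u, v) \text{ and } (s, t)
```

Rendering a novel view then amounts to gathering a new bundle of (u, v, s, t) samples from the captured data, which is why dense capture rigs can reproduce view-dependent detail that conventional 3D models struggle with.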