Runway Adds 3D Video Cam Controls to Gen-3 Alpha Turbo

New York-based AI firm Runway has added 3D video camera controls to Gen-3 Alpha Turbo, giving users granular control over the scenes they generate, whether those scenes originate from text prompts, uploaded images or their own video. Users can zoom in and out on a subject or scene, and move around an AI-generated character or form in 3D as if on a real set or at an actual location. The new feature, available now, lets creators “choose both the direction and intensity of how you move through your scenes for even more intention in every shot,” Runway explains.
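Runway’s announcement doesn’t show how those controls are expressed in practice, so the following is only an illustrative sketch: direction-plus-intensity control amounts to a handful of signed parameters, one per camera axis, where the sign sets the direction and the magnitude sets the intensity. The field names below are hypothetical, not Runway’s actual interface or API.

```python
# Hypothetical direction-plus-intensity camera controls.
# Field names and value ranges are illustrative, not Runway's actual API.
camera_move = {
    "pan": 2.0,    # rotate left/right; sign picks direction, magnitude is intensity
    "tilt": -1.0,  # rotate up/down
    "zoom": 3.5,   # push in (+) or pull back (-) on the subject
    "dolly": 0.0,  # move the camera forward/backward through the scene
}

def describe(move):
    """Render the control dict as a human-readable shot note."""
    active = {k: v for k, v in move.items() if v != 0.0}
    return ", ".join(f"{k} {'+' if v > 0 else ''}{v}" for k, v in active.items())

print(describe(camera_move))  # pan +2.0, tilt -1.0, zoom +3.5
```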

OpenAI’s Generative Video Tech Is Described as ‘Eye-Popping’

OpenAI has debuted a generative video model called Sora that could be a game changer. In OpenAI’s demonstration clips, Sora depicts both fantasy and natural scenes with a photorealism that makes the footage appear to have been photographed. Although Sora is said to be limited to one-minute clips for now, it is only a matter of time until that expands, which suggests the technology could have a significant impact on all aspects of production, from entertainment to advertising to education. Concerned about Sora’s disinformation potential, OpenAI is proceeding cautiously, initially making the model available only to a select group to help troubleshoot it.

Runway Makes Next Advance in Consumer Text-to-Video AI

Google-backed AI startup Runway has released Gen-2, an early entry among commercially available text-to-video models. Previously offered only through a waitlist in limited release, its commercial availability matters because text-to-video is widely predicted to be the next big leap in artificial intelligence, following the explosion of AI-generated text and images. While Runway’s model may not yet be ready to serve as a professional video tool, it is the next step in a technology expected to reshape media and entertainment. Filmmaker Joe Russo recently predicted that within the next two years, AI may have the ability to create feature films.

Unity Deepens Storytelling Workbench with CineCast Feature

At the Unite 2018 developers conference last week, Unity Technologies’ head of cinematics, Adam Myhill, unveiled CineCast, a synthetic co-director for filming video games that has implications for narrative storytelling and sports broadcasts of all kinds. Myhill, assisted by four players and a stand-in director, professional gamer Stephanie Harvey, demoed the CineCast mode for “GTFO,” a first-person shooter and the first title to use CineCast. Under Harvey’s watchful eye, CineCast automatically chose, in real time, the highest quality shot to move the action forward, with Harvey making only a few on-the-fly adjustments.
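The article doesn’t detail how CineCast picks its shots, but conceptually an automated director like this can be modeled as scoring every candidate camera on each tick and cutting to the highest-scoring one, with a small bonus for the shot already on screen so it doesn’t cut on every minor fluctuation. A minimal sketch of that idea in Python (not Unity’s implementation; the weights and scoring criteria are assumptions):

```python
import random
from dataclasses import dataclass

@dataclass
class Shot:
    name: str
    subject_visibility: float  # 0..1, how well the action is framed
    composition: float         # 0..1, framing quality (headroom, rule of thirds)

def score(shot: Shot, is_current: bool) -> float:
    # Weight visibility of the action highest; the bonus for the current
    # shot adds hysteresis so the director doesn't cut away constantly.
    return 0.7 * shot.subject_visibility + 0.3 * shot.composition + (0.1 if is_current else 0.0)

def pick_shot(candidates: list[Shot], current: Shot | None) -> Shot:
    return max(candidates, key=lambda s: score(s, s is current))

cams = [Shot("over-shoulder", 0.8, 0.6), Shot("wide", 0.5, 0.9), Shot("close-up", 0.9, 0.7)]
current = None
for tick in range(3):
    for c in cams:                       # simulate the action moving between cameras
        c.subject_visibility = random.random()
    current = pick_shot(cams, current)
    print(f"tick {tick}: cut to {current.name}")
```

That division of labor matches the demo: the system cuts automatically while the human director steps in with occasional on-the-fly adjustments.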

Unity’s Cinemachine Designed for Animation, Games, Movies

At the Unite Europe conference in Amsterdam, more than 1,400 game developers examined tools and innovations from game engine company Unity. Among those was the virtual camera system Cinemachine, which makes it easier for even neophyte content creators to get creative with animation, games, eSports, cinematics and movie pre-visualization. Unity’s Asset Store offers free 3D models and environments, including the Adam character from last year’s impressive tech demo. The engine also offers generic animations that can be applied to characters.
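Cinemachine itself is configured in the Unity editor and scripted in C#, but the core behavior it automates for newcomers, easing a camera toward a follow target with damping, is simple to sketch. A language-agnostic illustration in Python (the general idea only, not Cinemachine’s API):

```python
import math

def damped_follow(cam_pos, target_pos, offset, damping, dt):
    """Ease the camera toward target_pos + offset using exponential
    smoothing; higher damping means a slower, smoother camera."""
    desired = [t + o for t, o in zip(target_pos, offset)]
    alpha = 1.0 - math.exp(-dt / max(damping, 1e-6))  # fraction of the gap closed this frame
    return [c + (d - c) * alpha for c, d in zip(cam_pos, desired)]

cam = [0.0, 2.0, -5.0]      # camera position (x, y, z)
target = [10.0, 0.0, 0.0]   # character the virtual camera should follow
for _ in range(5):          # simulate five frames at 30 fps
    cam = damped_follow(cam, target, offset=[0.0, 2.0, -5.0], damping=0.5, dt=1 / 30)
    print([round(c, 2) for c in cam])
```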

ABCs of Light Field Capture, Key to Photorealistic Virtual Reality

A technique called light field capture will become the foundation for photoreal virtual actors in virtual reality, says Paul Debevec, chief visual officer at the University of Southern California’s Institute for Creative Technologies (ICT). At the recent VRLA Expo, Debevec gave a talk on the topic that explored two decades of research and development in light field capture technology, and described the basics of what makes the technique so compelling for creating photorealistic virtual reality.
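The talk itself isn’t reproduced in this summary, but the basic idea behind light field capture is well established: record the scene from a dense grid of nearby viewpoints, then synthesize any in-between viewpoint by blending the surrounding captured views. A toy sketch of that interpolation in Python, using the classic two-plane parameterization and ignoring the depth-dependent ray reprojection real systems add:

```python
import numpy as np

# Toy light field: a 4x4 grid of captured views, each an 8x8 grayscale image.
# Axes (grid_u, grid_v, pixel_s, pixel_t) follow the two-plane parameterization.
rng = np.random.default_rng(0)
views = rng.random((4, 4, 8, 8))

def synthesize_view(views, u, v):
    """Bilinearly blend the four captured views surrounding the fractional
    grid position (u, v) to approximate a new virtual viewpoint."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, views.shape[0] - 1), min(v0 + 1, views.shape[1] - 1)
    fu, fv = u - u0, v - v0
    return ((1 - fu) * (1 - fv) * views[u0, v0]
            + fu * (1 - fv) * views[u1, v0]
            + (1 - fu) * fv * views[u0, v1]
            + fu * fv * views[u1, v1])

new_view = synthesize_view(views, u=1.3, v=2.6)  # a viewpoint between captured cameras
print(new_view.shape)  # (8, 8)
```

Because every new viewpoint is reconstructed from real captured imagery rather than a 3D model, the results keep the photographic realism Debevec describes.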