Top Stories

URSA Cine Immersive for Apple Vision Pro Set for Q1 at $30K

Blackmagic Design is live with URSA Cine Immersive pre-orders. If it meets its late Q1 2025 ship date, the $29,995 camera will be the first on the market optimized for Apple Immersive Video (AIV), the format used by the Apple Vision Pro mixed-reality headset. Currently, there isn’t much content that takes advantage of the Vision Pro’s immersive features. The Cine Immersive captures 3D footage at a resolution of 8160 x 7200 per eye at 90 fps. The package includes a fixed-distance lens and 8TB of onboard network storage. Also in Q1, DaVinci Resolve Studio will be updated to support AIV editing. Read more
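For a sense of scale, here is a back-of-envelope estimate of the data rate those capture specs imply and how long 8TB might last. The bit depth and compression ratio are illustrative assumptions, not published Blackmagic specs.

```python
# Back-of-envelope data-rate estimate from the published capture specs:
# 8160 x 7200 per eye, two eyes, 90 fps. Bit depth and compression
# ratio below are illustrative assumptions, not Blackmagic figures.
WIDTH, HEIGHT, EYES, FPS = 8160, 7200, 2, 90
BITS_PER_PIXEL = 12      # assumed Bayer sensor bit depth
COMPRESSION_RATIO = 8    # assumed codec compression ratio

raw_gb_per_sec = WIDTH * HEIGHT * EYES * FPS * BITS_PER_PIXEL / 8 / 1e9
compressed_gb_per_min = raw_gb_per_sec / COMPRESSION_RATIO * 60

print(f"Uncompressed: {raw_gb_per_sec:.1f} GB/s")
print(f"At {COMPRESSION_RATIO}:1: {compressed_gb_per_min:.0f} GB/min")
print(f"8 TB holds roughly {8000 / compressed_gb_per_min:.0f} minutes")
```

Under those assumptions the sensor produces nearly 16 GB/s uncompressed, which is why terabytes of onboard storage are table stakes for this format.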

Veo 2 Is Unveiled Weeks After Google Debuted Veo in Preview

Attempting to stay ahead of OpenAI in the generative video race, Google announced Veo 2, which it says can output 4K clips of two minutes or more at 4096 x 2160 pixels. Competitor Sora can generate video of up to 20 seconds at 1080p. However, TechCrunch says Veo 2’s supremacy is “theoretical” since the model is currently available only through Google Labs’ experimental VideoFX platform, which is limited to videos of up to eight seconds at 720p. VideoFX is also waitlisted, but Google says it will expand access this week (with no comment on raising the caps). Read more
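To make the claimed gap concrete, a quick pixel-throughput comparison (resolution × frames × duration) contrasts Veo 2’s stated ceiling with Sora’s cap and with what VideoFX exposes today. The 24 fps frame rate is an assumption for illustration; the resolutions and durations are the figures reported above.

```python
# Rough comparison of total pixels generated per clip under each cap.
# 24 fps is an assumed frame rate; resolutions/durations are as reported.
def total_pixels(width: int, height: int, seconds: int, fps: int = 24) -> int:
    """Total pixels rendered across a clip of the given length."""
    return width * height * fps * seconds

veo2_claim  = total_pixels(4096, 2160, 120)  # two-minute 4K claim
sora_cap    = total_pixels(1920, 1080, 20)   # 20 s at 1080p
videofx_now = total_pixels(1280, 720, 8)     # 8 s at 720p (current cap)

print(f"Veo 2 claim vs. Sora cap:      {veo2_claim / sora_cap:.0f}x")
print(f"Veo 2 claim vs. VideoFX today: {veo2_claim / videofx_now:.0f}x")
```

By that rough measure, Veo 2’s claimed ceiling is about 26x Sora’s cap and over 140x what VideoFX currently delivers, which is the gap TechCrunch calls “theoretical.”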

Ray-Ban Meta Gets Live AI, RT Language Translation, Shazam

Meta has added new features to Ray-Ban Metas in time for the holidays via a firmware update that makes the smart glasses “the gift that keeps on giving,” per Meta marketing. “Live AI” adds computer vision, letting Meta AI see and record what you see “and converse with you more naturally than ever before.” Along with Live AI, Live Translation is available to Meta Early Access members: spoken Spanish, French or Italian is translated into English (or vice versa) in real time and delivered as audio through the glasses’ open-ear speakers. Shazam support has also been added for users who want an easy way to identify songs. Read more

Twelve Labs Creating AI That Can Search and Analyze Video

Twelve Labs has raised $30 million in funding for its efforts to train video-analyzing models. The San Francisco-based company has received strategic investments from Databricks and SK Telecom, as well as Snowflake Ventures and HubSpot Ventures. Twelve Labs targets customers who use video across a variety of fields, including media and entertainment, professional sports leagues, content creation and general business. The funding coincides with the release of Twelve Labs’ new video foundation model, Marengo 2.7, which applies a multi-vector approach to video understanding. Read more
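Marengo 2.7’s internals aren’t public, but “multi-vector” retrieval generally means representing one video as many embeddings (per segment, scene or modality) and scoring a query against all of them rather than against a single pooled vector. Here is a minimal sketch of that general idea, with random stand-in embeddings and a max-similarity scoring rule chosen purely for illustration:

```python
import numpy as np

# Illustrative multi-vector video retrieval: each video is a set of
# per-segment embeddings rather than one pooled vector. Embeddings here
# are random stand-ins for the outputs of a real video/text encoder.
rng = np.random.default_rng(0)
DIM = 64

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Three "videos", five segment embeddings apiece.
videos = {f"video_{i}": normalize(rng.normal(size=(5, DIM))) for i in range(3)}
query = normalize(rng.normal(size=DIM))  # stand-in for an encoded text query

def score(query_vec: np.ndarray, segments: np.ndarray) -> float:
    # Max similarity over segments: the query only needs to match one
    # moment in the video, not its average content.
    return float(np.max(segments @ query_vec))

for name in sorted(videos, key=lambda n: score(query, videos[n]), reverse=True):
    print(name, round(score(query, videos[name]), 3))
```

The payoff of the multi-vector formulation is that a query like “the goal in the final minute” can match a single relevant segment even when the rest of the video is unrelated.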

Pika 2.0 Video Generator Adds Character Integration, Objects

Pika Labs has updated its generative video model to Pika 2.0, adding more user control and customizability, the company says. Improvements include better “text alignment,” making it easier to get the AI to follow intricate prompts. Enhanced motion rendering is said to deliver more “naturalistic movement” and better physics, including greater believability in transformations that tend toward the surreal, which have typically been a challenge for genAI tools. The biggest change may be “Scene Ingredients,” which lets users add their own images when building Pika-generated videos. Read more

Also Noted