By Paula Parisi, December 18, 2024
Attempting to stay ahead of OpenAI in the generative video race, Google announced Veo 2, which it says can output 4K clips of more than two minutes at 4096 x 2160 pixels. Competitor Sora can generate video of up to 20 seconds at 1080p. However, TechCrunch says Veo 2’s supremacy is “theoretical” since it is currently available only through Google Labs’ experimental VideoFX platform, which is limited to videos of up to eight seconds at 720p. VideoFX is also waitlisted, but Google says it will expand access this week (with no comment on raising the cap). Continue reading Veo 2 Is Unveiled Weeks After Google Debuted Veo in Preview
By Paula Parisi, December 18, 2024
Pika Labs has updated its generative video model to Pika 2.0, adding more user control and customizability, the company says. Improvements include better “text alignment,” making it easier for the AI to follow through on intricate prompts. Enhanced motion rendering is said to deliver more “naturalistic movement” and better physics, including greater believability in transformations that tend toward the surreal, which has typically been a challenge for genAI tools. The biggest change may be “Scene Ingredients,” which lets users add their own images when building Pika-generated videos. Continue reading Pika 2.0 Video Generator Adds Character Integration, Objects
By Paula Parisi, December 18, 2024
Elon Musk’s xAI has been rolling out an updated Grok-2 model that is now available free to all users of the X social platform. Prior to last week, the “unfiltered” chatbot — which debuted in November 2023 — was available only by paid subscription. Now Grok is coming to X’s masses, but those on the free tier can ask the chatbot only 10 questions every two hours, while Premium and Premium+ users will “get higher usage limits and will be the first to access any new capabilities.” There is also now a Grok button on X that aims to encourage exploration. Continue reading Grok-2 Chatbot Is Now Available Free to All Users of X Social
By Paula Parisi, December 12, 2024
Ten months after its preview, OpenAI has officially released a Sora video model called Sora Turbo. Described as “hyperrealistic,” Sora Turbo generates clips of 10 to 20 seconds from text or image inputs. It outputs video in widescreen, vertical or square aspect ratios at resolutions from 480p to 1080p. The new product is being made available to ChatGPT Plus and Pro subscribers ($20 and $200 per month, respectively) but is not yet included with ChatGPT Team, Enterprise, or Edu plans, or available to minors. The company explains that Sora videos contain C2PA metadata indicating that they were generated by AI. Continue reading OpenAI Releases Sora, Adding It to ChatGPT Plus, Pro Plans
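One way to check the C2PA provenance data mentioned above is the Content Authenticity Initiative’s open-source c2patool command-line utility, which reports a file’s C2PA manifest as JSON. The sketch below is only an illustration, not OpenAI documentation: the file name “sora_clip.mp4” is a placeholder, and the exact report structure may vary between c2patool versions.

```python
import json
import subprocess

# Ask c2patool to report the C2PA manifest store embedded in a downloaded clip.
# "sora_clip.mp4" is a placeholder path; c2patool must be installed separately.
result = subprocess.run(
    ["c2patool", "sora_clip.mp4"],
    capture_output=True, text=True, check=False,
)

if result.returncode == 0 and result.stdout.strip():
    report = json.loads(result.stdout)
    # The active manifest names the claim generator (e.g., an AI tool)
    # that produced or last edited the asset.
    print("Active manifest:", report.get("active_manifest"))
else:
    print("No C2PA manifest found or c2patool error:", result.stderr.strip())
```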
By Paula Parisi, December 12, 2024
World Labs, the AI startup co-founded by Stanford AI pioneer Fei-Fei Li, has debuted a “spatial intelligence” system that can generate 3D worlds from a single image. Although the output is not photorealistic, the tech could be a breakthrough for animation companies and video game developers. Deploying what it calls Large World Models (LWMs), World Labs is focused on transforming 2D images into turnkey 3D environments with which users can interact. Observers say that interactivity is what sets World Labs’ technology apart from offerings by other AI companies that transform 2D to 3D. Continue reading World Labs AI Lets Users Create 3D Worlds from Single Photo
By Paula Parisi, November 6, 2024
Nvidia’s growing AI arsenal now includes an AI Blueprint for video search and summarization, which helps developers build visual AI agents that analyze video and image content. The agents can answer user questions, generate summaries and even enable alerts for specific scenarios. The new offering is part of Metropolis, Nvidia’s developer toolkit for building computer vision applications using generative AI. Globally, enterprises and public organizations increasingly rely on visual information. Cameras, IoT sensors and autonomous vehicles are ingesting visual data at high rates, and visual agents can help monitor and make sense of that flow. Continue reading Nvidia’s AI Blueprint Develops Agents to Analyze Visual Data
By Paula Parisi, October 31, 2024
Yahoo News has signed up to use San Jose-based cybersecurity company McAfee’s deepfake image detection technology. The scalable McAfee system can “quickly identify images that may have been produced or modified using AI, including deepfake images,” flagging them for human review by the Yahoo News editorial standards team. The standards team then “determines whether the flagged images meet the platform’s editorial guidelines.” The partnership gives news aggregator Yahoo an extra layer of protection as it deals with a large network of global publishers in addition to policing its original content. Continue reading Yahoo Using McAfee’s Modified Image Detector to Flag Fakes
By Paula Parisi, October 28, 2024
Midjourney is turning heads with its new image editor, which lets users upload images and then make adjustments. The company’s models — most recently Midjourney 6.1 — accept uploaded images as a reference for generative results. Now the Midjourney image editor allows precise adjustments to aspects of the frame. An “image retexturing mode” is also being introduced, as is v2 of its “AI moderator.” The new features are available only to users with yearly memberships, those who have held monthly memberships for the past 12 months, or those who have generated at least 10,000 Midjourney images. Continue reading Midjourney Makes Powerful AI Image Editor Available in Alpha
By Paula Parisi, October 28, 2024
OpenAI is taking a new approach to generating media that it says is 50 times faster than the models commonly used today. Called sCM, the approach is a “consistency model,” a variation on the diffusion method used by many leading systems. OpenAI claims its new model is well suited to training on large-scale datasets and generates video, audio and images of “comparable sample quality to leading diffusion models.” Diffusion models often require dozens or even hundreds of sampling steps to produce a result, creating challenges for real-time applications. OpenAI aims to change this with a faster system that requires less computing power. Continue reading OpenAI: sCM Generates Media 50x Faster Than Other Models
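To make the step-count difference concrete, here is a toy sketch contrasting an iterative sampler that calls its network once per step with a consistency-style sampler that needs only a couple of calls. The `denoise` and `consistency_fn` functions are placeholders, not OpenAI’s sCM; the only point is the number of network evaluations.

```python
import numpy as np

def denoise(x, t, total):
    """Placeholder for a trained denoising network; nudges x toward a clean sample."""
    return x * (1.0 - 1.0 / (total + 1))

def consistency_fn(x):
    """Placeholder for a trained consistency model that maps noise straight to a sample."""
    return x * 0.01

def diffusion_sample(shape, steps=100):
    # One network evaluation per step: often dozens or hundreds of calls.
    x = np.random.randn(*shape)
    for t in range(steps, 0, -1):
        x = denoise(x, t, steps)
    return x, steps

def consistency_sample(shape, steps=2):
    # One or two network evaluations in total.
    x = np.random.randn(*shape)
    for _ in range(steps):
        x = consistency_fn(x)
    return x, steps

_, diffusion_calls = diffusion_sample((64, 64), steps=100)
_, consistency_calls = consistency_sample((64, 64), steps=2)
print(f"diffusion sampler: {diffusion_calls} network calls")
print(f"consistency sampler: {consistency_calls} network calls")
```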
By Paula Parisi, October 25, 2024
Runway is launching Act-One, a motion capture system that uses video and voice recordings to map human facial expressions onto characters using the company’s latest model, Gen-3 Alpha. Runway calls it “a significant step forward in using generative models for expressive live action and animated content.” Compared to past facial capture techniques — which typically require complex rigging — Act-One is driven directly and only by an actor’s performance, requiring “no extra equipment,” which the company says makes it more likely to capture and preserve an authentic, nuanced performance. Continue reading Runway’s Act-One Facial Capture Could Be a ‘Game Changer’
By Paula Parisi, October 15, 2024
Meta is rolling out new generative AI advertising tools for video creation on Facebook and Instagram. The expansion of the Advantage+ creative ad suite will become widely available to advertisers in early 2025. The announcement, made at Advertising Week in New York last week, was positioned as a way for marketers to improve campaign performance on Meta’s social platforms. The new tools will allow brands to convert static images into video ads. The company also announced a new full-screen video tab for Facebook that blends short-form Reels with long-form and live-stream content. Continue reading Meta Announces New GenAI Video Tools at Advertising Week
By Paula Parisi, October 14, 2024
Generative video models seem to be debuting daily. Pyramid Flow, among the latest, aims for realism, producing dynamic video sequences with temporal consistency and rich detail while being open source and free. The model can create clips of up to 10 seconds using both text and image prompts. It offers a cinematic look, supporting clips at 1280×768 resolution and 24 fps. Developed by a consortium of researchers from Peking University, Beijing University of Posts and Telecommunications and Kuaishou Technology, Pyramid Flow harnesses a new technique that generates at low resolution through most of the process, outputting at full resolution only in the final stage. Continue reading Pyramid Flow Introduces a New Approach to Generative Video
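A rough way to picture that coarse-to-fine strategy (this is not Pyramid Flow’s actual code; `refine` and `upsample` are placeholders and the resolutions are illustrative) is a generator that does most of its work at low resolution and only reaches the target resolution in the last stage:

```python
import numpy as np

def refine(frames, step):
    """Placeholder for a learned refinement/denoising pass at the current resolution."""
    return frames * (1.0 - 0.1 / (step + 1))

def upsample(frames, factor=2):
    """Nearest-neighbor spatial upsampling of a (T, H, W, C) clip."""
    return frames.repeat(factor, axis=1).repeat(factor, axis=2)

def pyramid_generate(num_frames=24, base_hw=(96, 160), levels=3, steps_per_level=10):
    h, w = base_hw
    frames = np.random.randn(num_frames, h, w, 3)  # start from noise at the coarsest level
    for level in range(levels):
        for step in range(steps_per_level):
            frames = refine(frames, step)  # most compute is spent at the cheaper resolutions
        if level < levels - 1:
            frames = upsample(frames)  # the full target resolution is reached only at the end
    return frames

clip = pyramid_generate()
print(clip.shape)  # (24, 384, 640, 3) with the defaults above
```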
By Paula Parisi, October 11, 2024
Hailuo, the free text-to-video generator released last month by the Alibaba-backed company MiniMax, has delivered its promised image-to-video feature. Founded by AI researcher Yan Junjie, the Shanghai-based MiniMax also has backing from Tencent. The model earned high marks for what has been called “ultra realistic” video, and MiniMax says the new image-to-video feature will improve output across the board as a result of “text-and-image joint instruction following,” which means Hailuo now “seamlessly integrates both text and image command inputs, enhancing your visuals while precisely adhering to your prompts.” Continue reading MiniMax’s Hailuo AI Rolls Out New Image-to-Video Capability
By Paula Parisi, October 8, 2024
Meta Platforms has unveiled Movie Gen, a new family of AI models that generates video and audio content. Coming to Instagram next year, Movie Gen also allows a high degree of editing and effects customization using text prompts. Meta CEO Mark Zuckerberg demonstrated its abilities last week in an example shared on his Instagram account, in which he transforms a leg press machine at the gym into a steampunk contraption and then into one made of molten gold. The models have been trained on a combination of licensed and publicly available datasets. Continue reading Meta’s Movie Gen Model is a Powerful Content Creation Tool
By Paula Parisi, October 8, 2024
Apple has released a new AI model called Depth Pro that can create a 3D depth map from a 2D image in under a second. The system is being hailed as a breakthrough that could revolutionize how machines perceive depth, with transformative impact on industries from augmented reality to self-driving vehicles. “The predictions are metric, with absolute scale,” and do not rely on the camera metadata typically required for such mapping, according to Apple. Using a consumer-grade GPU, the model can produce a 2.25-megapixel depth map from a single image in only 0.3 seconds. Continue reading Apple Advances Computer Vision with Its Depth Pro AI Model
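Apple has also open-sourced the model in its ml-depth-pro GitHub repository. The sketch below follows the usage shown in that repo’s README; treat the exact function names and the “example.jpg” path as assumptions to verify against the current release.

```python
import depth_pro  # installed from the apple/ml-depth-pro repository

# Load the pretrained model and its preprocessing transform.
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load an image; f_px is the focal length in pixels when EXIF provides one,
# otherwise the model estimates it from the image itself.
image, _, f_px = depth_pro.load_rgb("example.jpg")
image = transform(image)

# A single forward pass returns a metric depth map (in meters).
prediction = model.infer(image, f_px=f_px)
depth_m = prediction["depth"]
focal_px = prediction["focallength_px"]
print(depth_m.shape, float(focal_px))
```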