YouTube Invites Content Creators to ‘Brainstorm with Gemini’

YouTube is testing an integration with parent company Google’s Gemini AI. Called Brainstorm with Gemini, it invites creators to brainstorm video ideas, titles and thumbnails. The limited test makes the feature available to a handful of creators, whose feedback will inform how, and whether, YouTube introduces the feature more broadly. In May, YouTube began testing another AI tool, renaming its “Research” tab “Inspiration.” The Inspiration tool suggests topics its algorithm detects a creator’s audience might find interesting, supplying an outline and talking points. Brainstorm is similar but carries Google’s Gemini AI branding.

Lightricks LTX Studio Is a Text-to-Video Filmmaking Platform

Lightricks, the company behind apps including Facetune, Photoleap and Videoleap, has come up with a text-to-video tool called LTX Studio that is being positioned as a turnkey AI tool for filmmakers and other creators. “From concept to creation,” the new app aims to enable “the transformation of a single idea into a cohesive, AI-generated video.” The web-based tool is currently waitlisted, but Lightricks says it will open it to the public for free, at least initially, beginning in April, allowing users to “direct each scene down to specific camera angles with specialized AI.”

Meta Touts Its Emu Foundational Model for Video and Editing

Having made the leap from image generation to video generation over the course of a few months in 2022, Meta Platforms has introduced Emu, its first visual foundational model, along with Emu Video and Emu Edit, positioned as milestones in the trek to AI moviemaking. Emu Video uses just two diffusion models to generate four-second 512×512 videos at 16 frames per second, Meta said, comparing that to 2022’s Make-A-Video, which required a “cascade” of five models. In internal research, Emu Video generations were “strongly preferred” over Make-A-Video output on both quality (96 percent) and prompt fidelity (85 percent).

Social Startup Plai Labs Debuts Free Text-to-Video Generator

The entrepreneurs behind the Myspace social network and gaming company Jam City have shifted their focus to generative AI and web3 with a new venture, Plai Labs, a social platform that provides AI tools for collaboration and connectivity. Plai Labs has released a free text-to-video generator, PlaiDay, which will compete with generative tools from the likes of OpenAI (DALL-E 2), Google (Imagen), Meta Platforms (Make-A-Video) and Stability AI (Stable Diffusion). PlaiDay hopes to set itself apart by letting users personalize videos with their own selfie likenesses.

Startup Kaiber Launches Mobile GenAI App for Music Videos

Kaiber, the AI-powered creative studio whose credits include music video collaborations with artists such as Kid Cudi and Linkin Park, has launched a mobile version of its creator tools, giving musicians and graphic artists on-the-go access to its suite of GenAI tools for text-to-video, image-to-video and video-to-video, “now with curated music to reimagine the music video creation process.” Users can select artist tracks to accompany visuals and build a music video “with as much or little AI collaboration as they wish,” or upload their own music or audio and tap Kaiber for visuals.

Meta’s Open-Source ImageBind Works Across Six Modalities

Meta Platforms has built and is open-sourcing ImageBind, an AI model that combines six modalities: audio, visual, text, thermal, movement and depth data. Currently a research project, it suggests a future in which AI models generate multisensory content. “ImageBind equips machines with a holistic understanding that connects objects in a photo with how they will sound, their 3D shape, how warm or cold they are, and how they move,” Meta says. In other words, ImageBind’s approach more closely approximates human thinking by training on the relationships between things rather than ingesting massive datasets so as to absorb every possibility.
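Because ImageBind is open source (github.com/facebookresearch/ImageBind), the cross-modal matching Meta describes can be tried directly. The sketch below is adapted from the repository’s published usage example: it embeds text, images and audio into ImageBind’s single shared space, then scores how well each image matches each caption and each sound. The media file paths are placeholders, and the imports assume the repo’s packaged layout.

```python
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

# Placeholder inputs: three concepts described in text, pictured, and heard.
text_list = ["A dog.", "A car.", "A bird."]
image_paths = ["dog_image.jpg", "car_image.jpg", "bird_image.jpg"]  # placeholder files
audio_paths = ["dog_audio.wav", "car_audio.wav", "bird_audio.wav"]  # placeholder files

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the pretrained ImageBind (huge) checkpoint.
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

# Preprocess each modality, then embed everything into the shared space.
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(text_list, device),
    ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, device),
}
with torch.no_grad():
    embeddings = model(inputs)

# Because all modalities land in one embedding space, a simple dot product
# compares images against captions, or images against sounds, with no
# extra training. Each softmax row shows the closest match for one image.
print(torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1))
print(torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.AUDIO].T, dim=-1))
```

That dot-product comparison, working across any pair of the six modalities, is the “holistic understanding” Meta describes: the dog photo, the word “dog” and a bark all land near one another in the same space.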

Google and Meta Are Developing AI Text-to-Video Generators

AI image generators like OpenAI’s DALL-E 2 and Google’s Imagen have been generating a lot of attention recently. Now AI text-to-video generators are edging into the spotlight, with Google debuting Imagen Video on the heels of Meta AI’s Make-A-Video rollout last month. Imagen Video has been used to generate short clips of up to 128 frames (about 5.3 seconds) at 24 fps and 1280×768 resolution. Imagen Video was trained “on a combination of an internal dataset consisting of 14 million video-text pairs and 60 million image-text pairs,” resulting in some unusual capabilities, such as rendering text in a variety of artistic styles, according to Google Research.