Highly Realistic Alibaba GenVid Models Are Available for Free

Alibaba has open-sourced its Wan 2.1 video- and image-generating AI models, heating up an already competitive space. The Wan 2.1 family, which comprises four models, is said to produce “highly realistic” images and videos from text and images. Since December, the company has also been previewing a new reasoning model, QwQ-Max, which it has indicated will be open-sourced when fully released. The move comes after another Chinese AI company, DeepSeek, released its R1 reasoning model for free download and use, triggering demand for more open-source artificial intelligence.

ByteDance’s Goku Video Model Is Latest in Chinese AI Streak

Barely two weeks after the launch of its OmniHuman-1 AI model, ByteDance has released Goku, a new artificial intelligence model designed to create photorealistic video featuring humanoid actors. Goku uses text prompts to create, among other things, realistic product videos without the need for human actors, a boon for ByteDance’s social media unit TikTok. Goku is open source and was trained on a large dataset of roughly 36 million video-text pairs and 160 million image-text pairs. Its debut has been received as more bad news for OpenAI, in the form of added competition, but as a positive step for global enterprise.

YouTube Shorts Updates Dream Screen with Google Veo 2 AI

YouTube Shorts has upgraded its Dream Screen AI background generator to incorporate Google DeepMind’s latest video model, Veo 2, which will also generate standalone video clips that users can post to Shorts. “Need a specific scene but don’t have the right footage? Want to turn your imagination into reality and tell a unique story? Simply use a text prompt to generate a video clip that fits perfectly into your narrative, or create a whole new world,” coaxes YouTube, which seems to be trying out “Dream Screen” branding as an umbrella for its genAI efforts.

Adobe Firefly Video Now in Public Beta Starting at $10 a Month

Adobe’s Firefly video model is now in public beta as part of the Firefly AI app, which is now multimodal with video, image and vector generation. Available for $10 a month for Firefly Standard or $30 a month for Firefly Pro, the Firefly app offers additional tiers for premium video and audio features, with a degree of customization based on project needs. Adobe continues to position Firefly as “the only generative AI model that is IP-friendly and commercially safe,” offering the option of contractual IP indemnification to protect against infringement lawsuits “in the unlikely event of a claim involving a Firefly output.”

Luma AI Upgrades Its Video Generator and Adds Image Model

Anticipating what one outlet calls “the likely imminent release of OpenAI’s Sora,” generative AI video competitors are compelled to step up their game. Luma AI has released a major upgrade to its Dream Machine, speeding its already quick video generation and adding a chat function for natural language prompts, so users can talk to it much as they would with OpenAI’s ChatGPT. In addition to the new interface, Dream Machine is going mobile and adding a new foundation image model, Luma AI Photon, which “has been purpose built to advance the power and capabilities of Dream Machine,” according to the company.

MiniMax’s Hailuo AI Rolls Out New Image-to-Video Capability

Hailuo, the free text-to-video generator released last month by the Alibaba-backed company MiniMax, has delivered its promised image-to-video feature. Founded by AI researcher Yan Junjie, the Shanghai-based MiniMax also has backing from Tencent. The model earned high marks for what has been called “ultra realistic” video, and MiniMax says the new image-to-video feature will improve output across the board as a result of “text-and-image joint instruction following,” which means Hailuo now “seamlessly integrates both text and image command inputs, enhancing your visuals while precisely adhering to your prompts.”

Meta’s Movie Gen Model Is a Powerful Content Creation Tool

Meta Platforms has unveiled Movie Gen, a new family of AI models that generates video and audio content. Coming to Instagram next year, Movie Gen also allows a high degree of editing and effects customization using text prompts. Meta CEO Mark Zuckerberg demonstrated its abilities last week in an example shared on his Instagram account, in which a leg press machine at the gym is transformed into a steampunk machine and then into one made of molten gold. The models have been trained on a combination of licensed and publicly available datasets.

Alibaba Cloud Ups Its AI Game with 100 Open-Source Models

Alibaba Cloud last week released more than 100 new open-source variants of its large language foundation model, Qwen 2.5, to the global open-source community. The company has also revamped its proprietary offering as a full-stack AI-computing infrastructure across cloud products, networking and data center architecture, all aimed at supporting the growing demands of AI computing. Alibaba Cloud’s significant contribution was revealed at the Apsara Conference, the annual flagship event held by the cloud division of China’s e-retail giant, often referred to as the Chinese Amazon.

YouTube Unveils New AI-Powered Features at Creator Event

YouTube is going all in on generative AI with nine new features announced at the Made on YouTube creator event in New York. Google DeepMind’s AI video generation model, Veo, is coming to YouTube Shorts later this year, enabling “even more incredible video backgrounds, breathing life into concepts that were once impossible to visualize,” as well as six-second standalone AI segments that can be incorporated into short videos. “Imagine a BookTuber stepping into the pages of the classic novel ‘The Secret Garden,’” suggests YouTube Chief Product Officer Johanna Voolich in describing the new AI-powered features.

Hailuo AI: China’s MiniMax Releases Free Text-to-Video App

Backed by Alibaba and Tencent, Chinese startup MiniMax has launched a new text-to-video model called Hailuo AI that is quickly gaining traction on social media thanks to its impressive capabilities, with comments ranging from “fantastical” to “hyper-realistic.” The free, web-based tool has already produced videos that have gone viral, despite the current limitation of 6-second clips. However, an image-to-video model is reportedly coming soon, in addition to a version 2 that promises longer video duration and improved motion. Unlike the Jimeng AI text-to-video model released by ByteDance last month, the MiniMax technology is available outside of China.

Adobe Publicly Demos Firefly Text- and Image-to-Video Tools

Adobe is showcasing upcoming generative AI video tools that build on the Firefly video model the software giant announced in April. The offerings include a text-to-video feature and one that generates video from still images; each outputs clips of up to five seconds. Adobe has developed Firefly as the generative component of the AI integration it is rolling out across its Creative Cloud applications, which previously focused on editing and now, thanks to generative AI, incorporate creation. Adobe wasn’t a first mover in the space, but its percolating effort has been received enthusiastically.

Viggle AI Raises $19 Million on the Power of Memes and More

Canadian generative video startup Viggle AI, which specializes in character motion, has raised $19 million in Series A funding. Viggle was founded in 2022 on the premise of providing a simplified process “to create lifelike animations using simple text-to-video or image-to-video prompts.” The result has been robust adoption among meme creators, with many Viggle-powered viral videos circulating on social media platforms, including one featuring Joaquin Phoenix as the Joker mimicking the movements of rapper Lil Yachty. Viggle’s Discord community has four million members, including “both novice and experienced animators,” according to the company.

ByteDance Intros Jimeng AI Text-to-Video Generator in China

ByteDance has debuted a text-to-video mobile app, Jimeng AI, in its native China, where it is available through the company’s TikTok equivalent, Douyin. There is speculation that it will come to North America and Europe soon via TikTok or ByteDance’s CapCut editing tool, possibly beating competing U.S. technologies like OpenAI’s Sora to market. Jimeng (translation: “dream”) uses text prompts to generate short videos; for now, it responds only to prompts written in Chinese. In addition to entertainment, the app is described as applicable to education, marketing and other purposes.

Runway’s Gen-3 Alpha Creates Realistic Video from Still Image

AI media firm Runway has launched image-to-video generation in Gen-3 Alpha, building on the text-to-video model by using still images to prompt realistic videos generated in seconds. Navigate to Runway’s web-based interface, click “try Gen-3 Alpha,” and you’ll land on a screen with an image uploader, as well as a text box for those who either prefer that approach or want to use natural language to tweak results. Runway lets users generate up to 10 seconds of contiguous video using a credit system. “Image to Video is a major update that greatly improves the artistic control,” Runway said in an announcement.

Toys R Us and Native Foreign Create Ad Using OpenAI’s Sora

Toys R Us is the first company to use OpenAI’s generative video platform Sora to produce a commercial, or what is being described as a “brand film.” With a running time of 1:06, the spot depicts company founder Charles Lazarus as a young boy, “envisioning his dreams” for the toy store and mascot Geoffrey the Giraffe. It was co-produced and directed by Nik Kleverov, co-founder of Los Angeles creative agency Native Foreign, who has alpha access to the pre-release Sora. Toys R Us says that from concept to completed video, the project came together in just a few weeks to premiere at the 2024 Cannes Lions International Festival of Creativity.