By Paula Parisi, March 19, 2025
San Mateo, California-based game developer Roblox has released a 3D object generator called Cube 3D, the first of several models the company plans to make available. Cube currently generates 3D models and environments from text, and in the future the company plans to add image inputs. Roblox says it is open-sourcing the tool, making it available to users on and off the platform. Cube will serve as the core generative AI system for Roblox’s 3D and 4D plans, the latter referring to interactive responsiveness. The launch coincides with the Game Developers Conference, running through Friday in San Francisco. Continue reading Roblox Reveals Its Generative AI System Cube for 3D and 4D
By Paula Parisi, March 18, 2025
Baidu has launched two new AI systems: the native multimodal foundation model Ernie 4.5 and the deep-thinking reasoning model Ernie X1. The latter supports features like generative imaging, advanced search and webpage content comprehension. Baidu touts Ernie X1 as comparable in performance to another Chinese model, DeepSeek-R1, at half the price. Both Baidu models are available to the public, including individual users, through the Ernie website. Baidu, the dominant search engine in China, says its new models mark a milestone in both reasoning and multimodal AI, “offering advanced capabilities at a more accessible price point.” Continue reading Baidu Releases New LLMs that Undercut Competition’s Price
By Paula Parisi, March 10, 2025
Google has added Gemini Embedding to its Gemini developer API. This new experimental model for text translates words, phrases and other text inputs into numerical representations, otherwise known as embeddings, which capture their semantic meaning. Embeddings are used in a wide range of applications including document retrieval and classification, potentially reducing costs and improving latency. Google is also testing an expansion of its AI Overviews search feature as part of a Gemini 2.0 update. Called AI Mode, it helps explain complex topics by generating search results that use advanced reasoning and thinking capabilities. Continue reading Google Updates AI Search and Intros Gemini Text Embedding
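As a toy illustration of how embeddings enable document retrieval, the sketch below matches a query vector against stored document vectors by cosine similarity. The hand-made 3-D vectors are stand-ins for illustration only; a real application would obtain high-dimensional vectors from an embedding model such as Gemini Embedding.

```python
import math

# Toy document "embeddings" (hypothetical 3-D vectors; a real system
# would request these from an embedding model via the developer API).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: dot product of the vectors over the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embedding of the query "how do I get my money back?"
query_vec = [0.85, 0.15, 0.05]

# Retrieval = pick the document whose embedding is most similar to the query's.
best = max(docs, key=lambda d: cosine(query_vec, docs[d]))
print(best)  # "refund policy" scores highest
```

The same nearest-vector matching underlies classification as well: a label can be assigned by comparing an input's embedding against labeled exemplar embeddings.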
By Paula Parisi, March 5, 2025
Flora is a new software interface built by AI creatives for creative AI applications. Much like Apple reinvented the personal computer UI to make it feel natural for people who were not IT specialists, Flora aims to reframe the way designers and artists interact with generative AI. “AI tools make it easy to create, but lack creative control,” the startup’s founder Weber Wong says, opining that such tools have proven “great for making AI slop, but not for doing great creative work.” Wong’s goal is to make an AI interface everyone will find comfortable and intuitive, simplifying use and curating “the best text, image, and video models.” Continue reading Flora Is a New AI Interface Geared Toward Helping Creatives
By Paula Parisi, February 7, 2025
Snap has created a lightweight AI text-to-image model that will run on-device, expected to power some Snapchat mobile features in the months ahead. Running on an iPhone 16 Pro Max, the model can produce high-resolution images in approximately 1.4 seconds while keeping processing on the phone, which reduces computational costs. Snap says the research model “is the continuation of our long-term investment in cutting edge AI and ML technologies that enable some of today’s most advanced interactive developer and consumer experiences.” Among the Snapchat AI features the new model will enhance are AI Snaps and AI Bitmoji Backgrounds. Continue reading Snap Develops a Lightweight Text-to-Image AI Model In-House
By Paula Parisi, February 6, 2025
ByteDance has developed a generative model that can use a single photo to generate photorealistic video of humans in motion. Called OmniHuman-1, the multimodal system supports various visual and audio styles and can generate people singing, dancing, speaking and moving in a natural fashion. ByteDance says its new technology clears hurdles that hinder existing human-video generators, such as short play times and over-reliance on high-quality training data. The diffusion transformer-based OmniHuman addresses those challenges by mixing motion-related conditions into the training phase, a solution ByteDance researchers claim is new. Continue reading ByteDance’s AI Model Can Generate Video from Single Image
By Paula Parisi, December 18, 2024
Attempting to stay ahead of OpenAI in the generative video race, Google announced Veo 2, which it says can output 4K clips of more than two minutes at 4096 x 2160 pixels. Competitor Sora can generate video of up to 20 seconds at 1080p. However, TechCrunch says Veo 2’s supremacy is “theoretical,” since it is currently available only through Google Labs’ experimental VideoFX platform, which is limited to videos of up to eight seconds at 720p. VideoFX is also waitlisted, but Google says it will expand access this week (with no comment on expanding the cap). Continue reading Veo 2 Is Unveiled Weeks After Google Debuted Veo in Preview
By Paula Parisi, December 12, 2024
Ten months after its preview, OpenAI has officially released a Sora video model called Sora Turbo. Described as “hyperrealistic,” Sora Turbo generates clips of 10 to 20 seconds from text or image inputs. It outputs video in widescreen, vertical or square aspect ratios at resolutions from 480p to 1080p. The new product is being made available to ChatGPT Plus and Pro subscribers ($20 and $200 per month, respectively) but is not yet included with ChatGPT Team, Enterprise, or Edu plans, or available to minors. The company explains that Sora videos contain C2PA metadata indicating that they were generated by AI. Continue reading OpenAI Releases Sora, Adding It to ChatGPT Plus, Pro Plans
By Paula Parisi, November 27, 2024
Nvidia has unveiled an AI sound model research project called Fugatto that “can create any combination of music, voices and sounds” based on text and audio inputs. Nvidia describes it as “the world’s most flexible sound machine,” and many appear to agree that the new model represents an audio breakthrough, with the potential to generate a wide array of sounds that have not previously existed. While popular sound models from companies including Suno and ElevenLabs “can compose a song or modify a voice, none have the dexterity of the new offering,” Nvidia claims. Continue reading Nvidia AI Model Fugatto a Breakthrough in Generative Sound
By Paula Parisi, November 15, 2024
DeepL, a German company that rose to prominence with online text translation, has released DeepL Voice, a B2B tool that translates speech into captions in real time. DeepL Voice debuts in two iterations: DeepL Voice for Meetings, which allows participants to speak in their preferred language while providing colleagues with translated captions, and DeepL Voice for Conversations, which works on mobile devices, facilitating in-person, one-on-one conversations “with customers, colleagues or anyone else, in the language that works best for them,” the company explains, noting that real-time voice translation poses particular challenges. Continue reading DeepL Voice Translates 33 Languages to Captions in Real Time
By Paula Parisi, November 15, 2024
Reports indicate that Meta Platforms is preparing to introduce advertising to Threads, perhaps as soon as January. Threads is the social platform Meta launched in July 2023 to compete with Twitter, the same month Elon Musk rebranded that platform as X. Meta is looking to begin Threads’ transition to ad support by initially allowing only a small group of advertisers to create and publish ads before opening the platform to the ad industry at large later in the year. Head of Instagram Adam Mosseri, who also runs Threads, has said Meta is “definitely” planning to open ad inventory on Threads. Continue reading Meta Readies Year-Old Threads for Advertising in Early 2025
By Paula Parisi, October 25, 2024
Runway is launching Act-One, a motion capture system that uses video and voice recordings to map human facial expressions onto characters using the company’s latest model, Gen-3 Alpha. Runway calls it “a significant step forward in using generative models for expressive live action and animated content.” Unlike past facial capture techniques, which typically require complex rigging, Act-One is driven directly and solely by an actor’s performance, requiring “no extra equipment,” making it more likely to capture and preserve an authentic, nuanced performance, according to the company. Continue reading Runway’s Act-One Facial Capture Could Be a ‘Game Changer’
By Paula Parisi, October 15, 2024
Meta is rolling out new generative AI advertising tools for video creation on Facebook and Instagram. The expansion of the Advantage+ creative ad suite will become widely available to advertisers in early 2025. The announcement, made at Advertising Week in New York last week, was positioned as a way for marketers to improve campaign performance on Meta’s social platforms. The new tools will allow brands to convert static images into video ads. The company also announced a new full-screen video tab for Facebook that blends short-form Reels with long-form and live-stream content. Continue reading Meta Announces New GenAI Video Tools at Advertising Week
By Paula Parisi, October 14, 2024
Generative video models seem to be debuting daily. Pyramid Flow, among the latest, aims for realism, producing dynamic video sequences with temporal consistency and rich detail while being open source and free. The model can create clips of up to 10 seconds using both text and image prompts. It offers a cinematic look, supporting 1280×768 pixel resolution clips at 24 fps. Developed by a consortium of researchers from Peking University, Beijing University of Posts and Telecommunications and Kuaishou Technology, Pyramid Flow harnesses a new technique that starts with low-resolution video, outputting at full resolution only at the end of the process. Continue reading Pyramid Flow Introduces a New Approach to Generative Video
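The appeal of such a coarse-to-fine approach can be illustrated with a toy cost calculation; the stage resolutions and step counts below are hypothetical and are not Pyramid Flow's actual schedule. Running most generation steps at reduced resolution processes far fewer pixels than working at the full 1280×768 target throughout.

```python
# Hypothetical coarse-to-fine schedule: (width, height, generation steps).
# Only the final stage runs at the full 1280x768 target resolution.
stages = [
    (320, 192, 20),
    (640, 384, 8),
    (1280, 768, 4),
]

total_steps = sum(steps for _, _, steps in stages)

# Crude cost proxy: pixels processed, summed over all steps.
full_cost = 1280 * 768 * total_steps                  # every step at full resolution
pyramid_cost = sum(w * h * s for w, h, s in stages)   # most steps at lower resolution

print(f"pyramid cost vs. full-res everywhere: {pyramid_cost / full_cost:.1%}")
```

Under these made-up numbers, the pyramid schedule touches well under a quarter of the pixels that a constant full-resolution schedule would, which is the intuition behind deferring full-resolution output to the end.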
By Paula Parisi, October 11, 2024
Hailuo, the free text-to-video generator released last month by the Alibaba-backed company MiniMax, has delivered its promised image-to-video feature. Founded by AI researcher Yan Junjie, the Shanghai-based MiniMax also has backing from Tencent. The model earned high marks for what has been called “ultra realistic” video, and MiniMax says the new image-to-video feature will improve output across the board as a result of “text-and-image joint instruction following,” which means Hailuo now “seamlessly integrates both text and image command inputs, enhancing your visuals while precisely adhering to your prompts.” Continue reading MiniMax’s Hailuo AI Rolls Out New Image-to-Video Capability