By Paula Parisi, February 28, 2025
Alibaba has open-sourced its Wan 2.1 video- and image-generating AI models, heating up an already competitive space. The Wan 2.1 family, which comprises four models, is said to produce “highly realistic” images and videos from text and image prompts. Since December the company has also been previewing a new reasoning model, QwQ-Max, which it says will be open-sourced when fully released. The move comes after another Chinese AI company, DeepSeek, released its R1 reasoning model for free download and use, triggering demand for more open-source artificial intelligence. Continue reading Highly Realistic Alibaba GenVid Models Are Available for Free
By Paula Parisi, February 24, 2025
Barely two weeks after the launch of its OmniHuman-1 AI model, ByteDance has released Goku, a new artificial intelligence designed to create photorealistic video featuring humanoid actors. Goku uses text prompts to create, among other things, realistic product videos without the need for human actors, a boon for ByteDance’s social media unit TikTok. Goku is open source and was trained on a large dataset of roughly 36 million video-text pairs and 160 million image-text pairs. Goku’s debut is seen as more bad news for OpenAI, in the form of added competition, but a positive step for enterprises worldwide. Continue reading ByteDance’s Goku Video Model Is Latest in Chinese AI Streak
By Paula Parisi, February 7, 2025
Snap has created a lightweight AI text-to-image model that runs on-device and is expected to power some Snapchat mobile features in the months ahead. On an iPhone 16 Pro Max, the model can produce high-resolution images in approximately 1.4 seconds entirely on the phone, which reduces computational costs. Snap says the research model “is the continuation of our long-term investment in cutting edge AI and ML technologies that enable some of today’s most advanced interactive developer and consumer experiences.” Among the Snapchat AI features the new model will enhance are AI Snaps and AI Bitmoji Backgrounds. Continue reading Snap Develops a Lightweight Text-to-Image AI Model In-House
By Paula Parisi, December 6, 2024
Google DeepMind’s new Genie 2 is a large foundation world model that generates interactive 3D worlds that are being likened to video games. “Games play a key role in the world of artificial intelligence research,” says Google DeepMind, noting “their engaging nature, challenges and measurable progress make them ideal environments to safely test and advance AI capabilities.” Based on a simple prompt image, Genie 2 is capable of producing “an endless variety of action-controllable, playable 3D environments” — suitable for training and evaluating embodied agents — that can be played by a human or AI agent using keyboard and mouse inputs. Continue reading DeepMind Genie 2 Creates Worlds That Emulate Video Games
By Paula Parisi, August 22, 2024
Google DeepMind has made its latest AI image generator, Imagen 3, free for use in the U.S. via the company’s ImageFX platform. Imagen 3 will be available in multiple versions, “each optimized for different types of tasks, from generating quick sketches to high-resolution images.” Google announced Imagen 3 at Google I/O in May, and in June made it available to enterprise users through Vertex AI. Using simplified natural language text input rather than “complex prompt engineering,” Google says Imagen 3 generates high-quality images in a range of styles, from photorealistic, painterly and textured to whimsically cartoony. Continue reading Google DeepMind Releases Imagen 3 for Free to U.S. Users
By Paula Parisi, August 20, 2024
ByteDance has debuted a text-to-video mobile app, called Jimeng AI, in its native China, where it is available on Douyin, the company’s TikTok equivalent. There is speculation that it will come to North America and Europe soon via TikTok or ByteDance’s CapCut editing tool, possibly beating competing U.S. technologies like OpenAI’s Sora to market. Jimeng (translation: “dream”) uses text prompts to generate short videos; for now, it responds only to prompts written in Chinese. In addition to entertainment, the app is described as applicable to education, marketing and other purposes. Continue reading ByteDance Intros Jimeng AI Text-to-Video Generator in China
By Paula Parisi, August 19, 2024
Grok-2 and Grok-2 mini, the latest generative chatbots from Elon Musk’s xAI, create images with seemingly few guardrails. Early pictures of notable personalities such as Bill Gates, Donald Trump and Kamala Harris in questionable or compromising settings may not appear photorealistic to a trained eye, but they are still described in many cases as quite realistic. With image generation powered by the FLUX.1 AI model from Black Forest Labs, Grok-2 and Grok-2 mini are available in beta on X for Premium and Premium+ subscribers and will be coming to xAI’s enterprise API later this month, according to the company. Continue reading xAI’s Grok-2 Generates Realistic Images with Few Guardrails
By Paula Parisi, August 6, 2024
A new generative AI startup called Black Forest Labs has hit the scene, debuting with a suite of text-to-image models branded FLUX.1. Based in Germany, Black Forest was founded by some of the researchers involved in developing Stable Diffusion and has raised $31 million in funding from principal investor Andreessen Horowitz and angels including CAA founder and former talent agent Michael Ovitz. The FLUX.1 suite focuses on “image detail, prompt adherence, style diversity and scene complexity,” the company says of its three initial variants: FLUX.1 [pro], FLUX.1 [dev] and FLUX.1 [schnell]. Continue reading Black Forest Labs Announces Suite of Text-to-Image Models
By Paula Parisi, July 10, 2024
Meta Platforms has introduced an AI model it says can generate 3D images from text prompts in under one minute. The new model, called 3D Gen, is billed as a “state-of-the-art, fast pipeline” for turning text input into high-resolution 3D images. The model also adds textures to AI output or existing images through text prompts, and “supports physically-based rendering (PBR), necessary for 3D asset relighting in real-world applications,” Meta explains, adding that in internal tests, 3D Gen outperforms industry baselines on “prompt fidelity and visual quality” as well as speed. Continue reading Meta’s 3D Gen Bridges Gap from AI to Production Workflow
By Paula Parisi, May 16, 2024
Google is launching two new AI models: the video generator Veo and Imagen 3, billed as the company’s “highest quality text-to-image model yet.” The products were introduced at Google I/O this week, where new demo recordings created using the Music AI Sandbox were also showcased. The 1080p Veo videos can be generated in “a wide range of cinematic and visual styles” and run “over a minute” in length, Google says. Veo is available in private preview in VideoFX by joining a waitlist. At a future date, the company plans to bring some Veo capabilities to YouTube Shorts and other products. Continue reading Veo AI Video Generator and Imagen 3 Unveiled at Google I/O
By ETCentric Staff, April 17, 2024
Meta is testing a new chatbot built on a large language model, Meta AI, on social platforms in parts of India and Africa. The chatbot was introduced in late 2023 and began testing with U.S. WhatsApp users in March. The test is expanding to include more territories and the addition of Instagram and Facebook Messenger. India is reported to be Meta’s largest social market, with more than 500 million Facebook and WhatsApp users, and the test has big implications as the company scales up its AI plans to compete against OpenAI and others. The Meta AI chatbot answers questions and generates photorealistic images. Continue reading Meta Tests Image-Generating Social Chatbot on Its Platforms
By ETCentric Staff, April 12, 2024
During Google Cloud Next 2024 in Las Vegas, Google announced an updated version of its text-to-image generator Imagen 2 on Vertex AI that has the ability to generate video clips of up to four seconds. Google calls this feature “text-to-live images,” and it essentially delivers animated GIFs at 24 fps and 360×640 pixel resolution, though Google says there will be “continuous enhancements.” Imagen 2 can also generate text, emblems and logos in different languages, and has the ability to overlay those elements on existing images like business cards, apparel and products. Continue reading Google Imagen 2 Now Generates 4-Second Clips on Vertex AI
By ETCentric Staff, March 28, 2024
Researchers from the Massachusetts Institute of Technology and Adobe have unveiled a new AI acceleration tool that makes generative apps like DALL-E 3 and Stable Diffusion up to 30x faster by reducing the process to a single step. The new approach, called distribution matching distillation, or DMD, maintains or enhances image quality while greatly streamlining the process. Theoretically, the technique “marries the principles of generative adversarial networks (GANs) with those of diffusion models,” consolidating “the hundred steps of iterative refinement required by current diffusion models” into one step, MIT PhD student and project lead Tianwei Yin says. Continue reading New Tech from MIT, Adobe Advances Generative AI Imaging
By ETCentric Staff, March 15, 2024
Artificial intelligence imaging service Midjourney has been embraced by storytellers, who have been clamoring for a feature that enables characters to regenerate consistently across new requests. Now Midjourney is delivering that functionality with the addition of the new “--cref” tag (short for Character Reference), available to those using Midjourney v6 on the Discord server. Users achieve the effect by adding the tag to the end of a text prompt, followed by the URL of the master image subsequent generations should match. Midjourney will then attempt to repeat the particulars of a character’s face, body and clothing. Continue reading Midjourney Creates a Feature to Advance Image Consistency
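The prompt syntax described above can be sketched as a small helper. This is a minimal illustration only: the “--cref” tag and its placement come from the article, while the function name and example URL are hypothetical.

```python
def with_character_reference(prompt: str, image_url: str) -> str:
    """Append Midjourney's --cref tag so new generations try to
    match the character shown in the referenced master image."""
    return f"{prompt} --cref {image_url}"

# Hypothetical example; in practice the URL would point to a
# previously generated Midjourney image of the character.
print(with_character_reference(
    "the knight rides through a rain-soaked market",
    "https://example.com/knight.png",
))
# → the knight rides through a rain-soaked market --cref https://example.com/knight.png
```

The resulting string is what a user would paste into the Discord prompt box; Midjourney parses everything after the tag as the reference image location.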
By ETCentric Staff, February 16, 2024
Stability AI, purveyor of the popular Stable Diffusion image generator, has introduced a completely new model called Stable Cascade. Now in preview, Stable Cascade uses a different architecture than Stable Diffusion’s SDXL that the UK company’s researchers say is more efficient. Cascade builds on a compression architecture called Würstchen (German for “sausage”) that Stability began sharing in research papers early last year. Würstchen is a three-stage process that includes two-step encoding. It uses fewer parameters, meaning less data to train on, greater speed and reduced costs. Continue reading Stability AI Advances Image Generation with Stable Cascade