By Paula Parisi, April 8, 2025
Meta Platforms has released its first Llama 4 models, a multimodal trio that ranges from the foundational Behemoth to the tiny Scout, with Maverick in between. With 16 experts and only 17B active parameters (the number actually engaged for any given task), Llama 4 Scout is “more powerful than all previous generation Llama models, while fitting in a single Nvidia H100 GPU,” according to Meta. Maverick, with 17B active parameters and 128 experts, is touted as beating GPT-4o and Gemini 2.0 Flash across various benchmarks, “while achieving comparable results to the new DeepSeek v3 on reasoning and coding with less than half the active parameters.” Continue reading Meta Unveils Multimodal Llama 4 Models, Previews Behemoth
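To illustrate the mixture-of-experts idea behind those “active parameter” counts, the toy sketch below routes each token to a single expert so that only that expert’s weights participate in the forward pass. It is a simplified, hypothetical example (tiny dimensions, top-1 routing), not Meta’s Llama 4 implementation.

```python
# Toy top-1 mixture-of-experts layer: all experts are held in memory,
# but each token only activates the parameters of the expert it is
# routed to, which is why a 16-expert model can report a much smaller
# "active" parameter count per token. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS = 8, 4                        # hidden size, number of experts

router = rng.standard_normal((D, N_EXPERTS))             # routing weights
experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token vector to its best-scoring expert and apply it."""
    scores = token @ router                # one score per expert
    chosen = int(np.argmax(scores))        # top-1 routing decision
    return experts[chosen] @ token         # only one expert's weights run

x = rng.standard_normal(D)
y = moe_forward(x)
print(f"used 1 of {N_EXPERTS} experts; output shape: {y.shape}")
```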
By Paula Parisi, March 28, 2025
Alibaba Cloud has released Qwen2.5-Omni-7B, a new AI model the company claims is efficient enough to run on edge devices like mobile phones and laptops. Boasting a relatively light 7-billion parameter footprint, Qwen2.5-Omni-7B understands text, images, audio and video and generates real-time responses in text and natural speech. Alibaba says its combination of compact size and multimodal capabilities is “unique,” offering “the perfect foundation for developing agile, cost-effective AI agents that deliver tangible value, especially intelligent voice applications.” One example would be using a phone’s camera to help a vision-impaired person navigate their environment. Continue reading Alibaba’s Powerful Multimodal Qwen Model Is Built for Mobile
By Paula Parisi, March 25, 2025
Google has added a Canvas feature to its Gemini AI chatbot, giving users a real-time collaborative space where writing and coding projects can be refined, iterated and shared. “Canvas is designed for seamless collaboration with Gemini,” according to Gemini Product Director Dave Citron, who notes that Canvas makes it “an even more effective collaborator” in helping bring ideas to life. The move marks a trend whereby AI companies are trying to turn chatbot platforms into turnkey productivity suites. Google is also launching a limited release of Gemini Live Video and bringing NotebookLM’s Audio Overview feature to Gemini. Continue reading Canvas and Live Video Add Productivity Features to Gemini AI
By Paula Parisi, March 18, 2025
Baidu has launched two new AI systems, the native multimodal foundation model Ernie 4.5 and the deep-thinking reasoning model Ernie X1. The latter supports features like generative imaging, advanced search and webpage content comprehension. Baidu touts Ernie X1 as comparable in performance to another Chinese model, DeepSeek-R1, at half the price. Both Baidu models are available to the public, including individual users, through the Ernie website. Baidu, the dominant search engine in China, says its new models mark a milestone in both reasoning and multimodal AI, “offering advanced capabilities at a more accessible price point.” Continue reading Baidu Releases New LLMs that Undercut Competition’s Price
By Paula Parisi, February 19, 2025
YouTube Shorts has upgraded its Dream Screen AI background generator to incorporate Google DeepMind’s latest video model, Veo 2, which will also generate standalone video clips that users can post to Shorts. “Need a specific scene but don’t have the right footage? Want to turn your imagination into reality and tell a unique story? Simply use a text prompt to generate a video clip that fits perfectly into your narrative, or create a whole new world,” coaxes YouTube, which seems to be trying out “Dream Screen” branding as an umbrella for its genAI efforts. Continue reading YouTube Shorts Updates Dream Screen with Google Veo 2 AI
By Paula Parisi, January 30, 2025
Less than a week after sending tremors through Silicon Valley and across the media landscape with an affordable large language model called DeepSeek-R1, the Chinese AI startup behind that technology has debuted another new product — the multimodal Janus-Pro-7B with an aptitude for image generation. Further mining the vein of efficiency that made R1 impressive to many, Janus-Pro-7B utilizes “a single, unified transformer architecture for processing.” Emphasizing “simplicity, high flexibility and effectiveness,” DeepSeek says Janus Pro is positioned to be a frontrunner among next-generation unified multimodal models. Continue reading DeepSeek Follows Its R1 LLM Debut with Multimodal Janus-Pro
By Paula Parisi, January 27, 2025
Perplexity joins the list of AI companies launching agents, debuting the Perplexity Assistant for Android. The tool uses reasoning, search, browsers and apps to help mobile users with daily tasks. Concurrently, Perplexity — independently founded in 2022 as a conversational AI search engine — has launched an API called Sonar, intended for enterprises and developers who want real-time intelligent search, taking on heavyweights like Google, OpenAI and Anthropic. While AI search has to date largely been limited to answers drawn from training data, which freezes a model’s knowledge in time, next-generation tools can pull from the Internet in real time. Continue reading Perplexity Bows Real-Time AI Search Tool, Android Assistant
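Sonar is exposed as an OpenAI-style chat-completions service, so a web-grounded query can be sketched in a few lines. The endpoint path, model name and response shape below are assumptions based on Perplexity’s launch materials; check the current API documentation before relying on any of them.

```python
# Hedged sketch of a real-time, web-grounded query against Perplexity's
# Sonar API. The endpoint path, "sonar" model id, and response shape
# follow the OpenAI-compatible interface Perplexity describes and are
# assumptions here, not verified integration code.
import os
import requests

api_key = os.environ["PERPLEXITY_API_KEY"]   # assumed env variable name

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "sonar",                    # assumed model id
        "messages": [
            {"role": "user",
             "content": "Summarize today's top AI industry news."},
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```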
By Paula Parisi, January 13, 2025
The Xreal One Pro AR glasses have raised the stakes for those competing in the wearable augmented reality space, according to some CES 2025 attendees. The eyewear, which debuted at the show, updates the Xreal One, released in the U.S. last month. The Pro’s cinematic virtual display (of up to 447 inches) comes with a 57-degree field of view, an improvement over the Xreal One’s 50-degree FOV. Xreal says the Pro model offers “professional-grade color accuracy.” An optional detachable 12MP camera, Xreal Eye, captures photos and video. The new model will sell for $500 starting in March. Continue reading CES: Xreal One Pro AR Glasses Are Thinner, with Greater FOV
By Paula Parisi, December 18, 2024
Twelve Labs has raised $30 million in funding for its efforts to train video-analyzing models. The San Francisco-based company has received strategic investments from notable enterprise infrastructure providers Databricks and SK Telecom as well as Snowflake Ventures and HubSpot Ventures. Twelve Labs targets customers using video across a variety of fields including media and entertainment, professional sports leagues, content creators and business users. The funding coincides with the release of Twelve Labs’ new video foundation model, Marengo 2.7, which applies a multi-vector approach to video understanding. Continue reading Twelve Labs Creating AI That Can Search and Analyze Video
By Paula Parisi, November 22, 2024
Microsoft’s expansion of AI agents within the Copilot Studio ecosystem was a central focus of the company’s Ignite conference. Since the launch of Copilot Studio, more than 100,000 enterprise organizations have created or edited AI agents using the platform. Copilot Studio is getting new features to increase productivity, including multimodal capabilities that take agents beyond text and Retrieval Augmented Generation (RAG) enhancements that give agents real-time knowledge from multiple third-party sources, such as Salesforce, ServiceNow and Zendesk. Azure integration is also expanding, with the 1,800 large language models in the Azure catalog being made available. Continue reading Microsoft Pushes Copilot Studio Agents, Adds Azure Models
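Retrieval Augmented Generation, the technique behind those enhancements, is straightforward to outline: fetch relevant records from an external system at question time and hand them to the model alongside the prompt. The snippet below is a generic, hypothetical sketch in which search_records and ask_llm are stand-ins for a real connector (Salesforce, ServiceNow, Zendesk) and a chat model; it is not Microsoft’s implementation.

```python
# Generic Retrieval Augmented Generation loop: ground the model's answer
# in records fetched at query time instead of relying only on what it
# memorized during training. search_records and ask_llm are hypothetical
# stand-ins for a third-party connector and a chat model.
from typing import Callable, List

def rag_answer(question: str,
               search_records: Callable[[str], List[str]],
               ask_llm: Callable[[str], str],
               top_k: int = 3) -> str:
    # 1. Retrieve: pull the most relevant records from the external source.
    records = search_records(question)[:top_k]
    # 2. Augment: splice those records into the prompt as context.
    context = "\n".join(f"- {r}" for r in records)
    prompt = (f"Answer using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    # 3. Generate: the model answers with fresh, grounded knowledge.
    return ask_llm(prompt)
```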
By Paula Parisi, October 4, 2024
Nvidia has unveiled the NVLM 1.0 family of multimodal LLMs, a powerful open-source AI that the company says performs comparably to proprietary systems from OpenAI and Google. Led by NVLM-D-72B, with 72 billion parameters, Nvidia’s new entry in the AI race achieved what the company describes as “state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models.” Nvidia has made the model weights publicly available and says it will also be releasing the training code, a break from the closed approach of OpenAI, Anthropic and Google. Continue reading Nvidia Releases Open-Source Frontier-Class Multimodal LLMs
By Paula Parisi, October 2, 2024
Snap Inc. is leveraging its relationship with Google Cloud to use Gemini for powering generative AI experiences within Snapchat’s My AI chatbot. The multimodal capabilities of Gemini on Vertex AI will greatly increase the My AI chatbot’s ability to understand and operate across different types of information such as text, audio, image, video and code. Snapchatters can use My AI to take advantage of Google Lens-like features, including asking the chatbot “to translate a photo of a street sign while traveling abroad, or take a video of different snack offerings to ask which one is the healthiest option.” Continue reading Snapchat: My AI Goes Multimodal with Google Cloud, Gemini
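A Lens-style request like that maps onto a single multimodal generate_content call in the Vertex AI SDK. The sketch below is illustrative only; the project id, image URI and Gemini model version are placeholder assumptions, not details of Snap’s integration.

```python
# Hedged sketch of a multimodal Gemini request on Vertex AI: an image
# plus a text question in a single call. Project, location, model
# version, and image URI are placeholder assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")   # assumed model version
response = model.generate_content([
    Part.from_uri("gs://my-bucket/street-sign.jpg", mime_type="image/jpeg"),
    "Translate the sign in this photo into English.",
])
print(response.text)
```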
By Paula Parisi, August 27, 2024
OpenAI announced that its newest model, GPT-4o, can now be customized. The company said the ability to fine-tune the multimodal GPT-4o has been “one of the most requested features from developers.” Customization can steer the model toward a more specific structure and tone of response, or allow it to follow instruction sets geared toward individual use cases. Developers can now train it on custom datasets, aiming for better performance at a lower cost. The ChatGPT maker is rolling out the welcome mat by offering 1 million training tokens per day “for free for every organization” through September 23. Continue reading OpenAI Pushes GPT-4o Customization with Free Token Offer
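For context, GPT-4o fine-tuning runs through OpenAI’s standard fine-tuning workflow: upload a JSONL file of example chats, then start a job against a GPT-4o snapshot. The sketch below uses the current Python SDK; the file name and snapshot id are assumptions.

```python
# Hedged sketch of starting a GPT-4o fine-tuning job with the OpenAI
# Python SDK. The JSONL file name and the model snapshot id are
# assumptions; each line of the file holds one example conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the custom dataset (chat-formatted JSONL).
train_file = client.files.create(
    file=open("tone_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start the fine-tuning job against a GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-4o-2024-08-06",   # assumed snapshot id
)
print("fine-tuning job started:", job.id)
```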
By Paula Parisi, August 20, 2024
Google has released its AI assistant, Gemini Live, and is positioning it to replace Google Assistant on mobile. Gemini Live is rolling out on Android to subscribers of Gemini Advanced, which is part of the $20 monthly Google One AI Premium plan. Those consumers who purchase the new Pixel 9 Pro — which begins shipping this week — will get the assistant as part of a year of free access to Gemini Advanced, a $240 value, according to the company. Google claims that Gemini Live technology enables natural, flowing conversations with the AI assistant, putting “a sidekick in your pocket.” Continue reading Google Rolls Out Its Gemini Live, Challenging ChatGPT Voice
By Paula Parisi, August 5, 2024
OpenAI has released its new Advanced Voice Mode in a limited alpha rollout for select ChatGPT Plus users. The feature, which is being implemented for the ChatGPT mobile app on Android and iOS, aims for more natural dialogue with the AI chatbot. Powered by the multimodal GPT-4o, Advanced Voice Mode is said to be able to sense vocal and emotional inflections, including excitement, sadness or singing. According to an OpenAI post on X, the company plans to “continue to add more people on a rolling basis” so that everyone using ChatGPT Plus will have access to the new feature in the fall. Continue reading OpenAI Brings Advanced Voice Mode Feature to ChatGPT Plus