By Paula Parisi, January 14, 2025
Nvidia Cosmos, a platform of generative world foundation models (WFMs) and related tools to advance the development of physical AI systems like autonomous vehicles and robots, was introduced at CES 2025. Cosmos WFMs are designed to give developers a way to generate massive amounts of photo-real, physics-based synthetic data to train and evaluate their existing models. The goal is to reduce costs by streamlining real-world testing with a ready data pipeline. Developers can also build custom models by fine-tuning Cosmos WFMs. Cosmos integrates with Nvidia Omniverse, a physics simulation tool used for entertainment world-building. Continue reading CES: Nvidia’s Cosmos Models Teach AI About Physical World
By Paula Parisi, January 7, 2025
Samsung Electronics has teamed with Google on a new spatial sound standard, Eclipsa Audio, that could emerge as a free alternative to Dolby Atmos. On display at CES 2025 in Las Vegas this week, the format is rolling out across Samsung’s line of 2025 TVs and soundbars, and Google will support it on the content side by enabling Eclipsa 3D audio on some YouTube videos this year. Samsung has been a notable holdout on Dolby Vision HDR, embracing instead the competing HDR10+. Now the South Korean electronics giant seems to be staking out its own turf in 3D audio, advocating for open source. Continue reading CES: Samsung and Google Team on Spatial Audio Standard
By Paula Parisi, December 17, 2024
Meta’s FAIR (Fundamental AI Research) team has unveiled recent work in areas ranging from transparency and safety to agents and machine learning architectures. The projects include Meta Motivo, a foundation model for controlling the behavior of virtual embodied agents, and Video Seal, an open-source model for video watermarking. All were developed in the unit’s pursuit of advanced machine intelligence, helping “models to learn new information more effectively and scale beyond current limits.” Meta announced it is sharing the new FAIR research, code, models and datasets so the research community can build upon its work. Continue reading Meta Rolls Out Watermarking, Behavioral and Concept Models
By Paula Parisi, December 10, 2024
Meta Platforms has packed more artificial intelligence into a smaller package with Llama 3.3, which the company released last week. The open-source large language model (LLM) “improves core performance at a significantly lower cost, making it even more accessible to the entire open-source community,” Meta VP of Generative AI Ahmad Al-Dahle wrote on X. The 70-billion-parameter, text-only Llama 3.3 is said to perform on par with the 405-billion-parameter model that was part of Meta’s Llama 3.1 release in July, while requiring less computing power and significantly lowering operational costs. Continue reading Meta’s Llama 3.3 Delivers More Processing for Less Compute
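As a rough sketch of how a developer might try the release, the following assumes the instruct weights are published on Hugging Face under an ID like "meta-llama/Llama-3.3-70B-Instruct" and that a 70B model can be sharded across available GPUs or quantized; it is an illustration, not Meta's documented workflow.

```python
# Minimal sketch (not Meta's official instructions): load Llama 3.3 70B
# Instruct through the Hugging Face transformers pipeline. The model ID and
# hardware assumptions (multi-GPU sharding for a 70B model) are ours.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",  # assumed Hugging Face ID
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard across available GPUs
)

messages = [{"role": "user", "content": "Summarize the Llama 3.3 release in one sentence."}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```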
By Paula Parisi, December 4, 2024
Artificial voice startup Hume AI has had a busy Q4, introducing Voice Control, a no-code artificial speech interface that gives users control over 10 voice dimensions ranging from “assertiveness” to “buoyancy” and “nasality.” The company also debuted an interface that “creates emotionally intelligent voice interactions” with Anthropic’s foundation model Claude, prompting one observer to ponder whether keyboards will become a thing of the past when it comes to controlling computers. Both advances expand on Hume’s work with its own foundation model, Empathic Voice Interface 2 (EVI 2), which adds emotional timbre to AI voices. Continue reading Hume AI Introduces Voice Control and Claude Interoperability
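To make the idea of adjustable voice dimensions concrete, here is a purely hypothetical sketch; the field names, value ranges and structure are invented for illustration and are not Hume's actual API.

```python
# Hypothetical illustration only, not Hume's actual API: a base voice plus
# continuous adjustments along named dimensions such as assertiveness,
# buoyancy and nasality.
voice_spec = {
    "base_voice": "default",
    "dimensions": {          # illustrative values, e.g. in the range -1.0 to 1.0
        "assertiveness": 0.6,
        "buoyancy": 0.3,
        "nasality": -0.4,
    },
}

def describe(spec: dict) -> str:
    """Render the spec as a human-readable summary."""
    dims = ", ".join(f"{k}={v:+.1f}" for k, v in spec["dimensions"].items())
    return f"voice '{spec['base_voice']}' with {dims}"

print(describe(voice_spec))
```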
By Paula Parisi, December 4, 2024
Alibaba Cloud has released the latest entry in its growing Qwen family of large language models. The new Qwen with Questions (QwQ) is an open-source competitor to OpenAI’s o1 reasoning model. As with competing large reasoning models (LRMs), QwQ can correct its own mistakes, relying on extra compute cycles during inference to assess its responses, making it well suited for reasoning tasks like math and coding. Described as an “experimental research model,” this preview version of QwQ has 32 billion parameters and a 32,000-token context window, leading to speculation that a more powerful iteration is in the offing. Continue reading Qwen with Questions: Alibaba Previews New Reasoning Model
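A minimal sketch of querying the preview, assuming it is published on Hugging Face as "Qwen/QwQ-32B-Preview" and that the 32B weights fit on the available hardware; the generous token budget reflects how reasoning models spend extra output tokens on intermediate steps.

```python
# Minimal sketch, assuming the preview is published as "Qwen/QwQ-32B-Preview"
# and that the 32B model can be sharded or quantized to fit your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "How many positive integers below 100 are divisible by 3 or 5?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models "think out loud", so allow a large generation budget.
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```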
By Paula Parisi, December 2, 2024
Lightricks has released an AI model called LTX Video (LTXV) it says generates five seconds of 768 x 512 resolution video (121 frames) in just four seconds, outputting in less time than it takes to watch. The model can run on consumer-grade hardware and is open source, positioning Lightricks as a mass market challenger to firms like Adobe, OpenAI, Google and their proprietary systems. “It’s time for an open-sourced video model that the global academic and developer community can build on and help shape the future of AI video,” Lightricks co-founder and CEO Zeev Farbman said. Continue reading Lightricks LTX Video Model Impresses with Speed and Motion
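A rough sketch of what text-to-video generation with the open weights could look like, assuming the model is published as "Lightricks/LTX-Video" and exposed through a Hugging Face diffusers pipeline; check the project's own documentation for the supported integration.

```python
# Sketch only: assumes the open weights are available as "Lightricks/LTX-Video"
# and usable via a diffusers text-to-video pipeline on a CUDA GPU.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

video = pipe(
    prompt="A slow dolly shot through a neon-lit city street at night, light rain",
    width=768,
    height=512,
    num_frames=121,  # roughly five seconds at 24 fps, as described above
).frames[0]

export_to_video(video, "ltx_clip.mp4", fps=24)
```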
By Paula Parisi, November 27, 2024
Anthropic is releasing what it hopes will be a new standard in data integration for AI. Called the Model Context Protocol (MCP), its goal is to eliminate the need to write custom integration code each time a company’s data is connected to a model. The open-source MCP tool could become a universal way to link data sources to AI. The aim is to have models querying databases directly. MCP is “a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments,” according to Anthropic. Continue reading Anthropic Protocol Intends to Standardize AI Data Integration
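As a purely illustrative sketch (not Anthropic's actual SDK or wire format), the core idea is that a server advertises a catalog of tools over a standard interface, and a model-side client can call any of them without bespoke glue code for each data source.

```python
# Hypothetical illustration of the idea behind MCP, not Anthropic's SDK:
# a "server" advertises tools in a uniform catalog; a model-side client can
# invoke any advertised tool the same way, whatever data source backs it.
import json
import sqlite3

TOOLS = {
    "query_orders": {
        "description": "Run a read-only SQL query against the orders database.",
        # "orders.db" is a placeholder database for illustration.
        "handler": lambda sql: sqlite3.connect("orders.db").execute(sql).fetchall(),
    },
}

def list_tools() -> str:
    """What a client would fetch first: the tool catalog."""
    return json.dumps({name: t["description"] for name, t in TOOLS.items()}, indent=2)

def call_tool(name: str, argument: str):
    """Uniform entry point for tool calls, regardless of the backing system."""
    return TOOLS[name]["handler"](argument)

print(list_tools())
# Example call: call_tool("query_orders", "SELECT COUNT(*) FROM orders")
```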
By Paula Parisi, November 26, 2024
The GitHub Secure Open Source Fund will award financing to select applicants in a program designed to fuel security and sustainability for open-source projects. Applications are open now and close on January 7. During that time, 125 projects will be selected for a piece of the $1.25 million investment fund, made possible through the participation of American Express, the Alfred P. Sloan Foundation, Chainguard, HeroDevs, Kraken, Mayfield Fund, Microsoft, Shopify, Stripe and others. In addition to monetary support, recipients will be invited to take part in a three-week educational program. Continue reading GitHub Promotes Open-Source Security with Funding Initiative
By Paula Parisi, October 21, 2024
Nvidia has debuted a new AI model, Llama-3.1-Nemotron-70B-Instruct, that it claims outperforms competitors GPT-4o from OpenAI and Anthropic’s Claude 3.5 Sonnet. The impressive showing has prompted speculation of an AI shakeup and a significant shift in Nvidia’s AI strategy, which has thus far been focused primarily on chipmaking. The model was quietly released on Hugging Face, and Nvidia says as of October 1 it ranked first on three top automatic alignment benchmarks, “edging out strong frontier models” and vaulting Nvidia to the forefront of the LLM field in areas like comprehension, context and generation. Continue reading Nvidia’s Impressive AI Model Could Compete with Top Brands
By Paula Parisi, October 16, 2024
OpenAI has announced Swarm, an experimental framework that coordinates networks of AI agents, and true to its name the news has stirred up a hornet’s nest of contentious debate about the ethics of artificial intelligence and the future of enterprise automation. OpenAI emphasizes that Swarm is not an official product and says that although it has shared the code publicly, it has no intention of maintaining it. “Think of it more like a cookbook,” OpenAI engineer Shyamal Anadkat said in a social media post, calling it “code for building simple agents.” Continue reading OpenAI Tests Open-Source Framework for Autonomous Agents
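The repository's own examples show the pattern: lightweight agents defined by instructions and functions, with one agent able to hand a conversation off to another. The sketch below follows that pattern and assumes an OpenAI API key is configured; since OpenAI says it won't maintain the code, treat it as illustrative.

```python
# Sketch following the pattern in the openai/swarm repository examples.
# Requires the swarm package and an OPENAI_API_KEY in the environment.
from swarm import Swarm, Agent

client = Swarm()

def transfer_to_agent_b():
    """Handoff function: returning another Agent transfers the conversation."""
    return agent_b

agent_a = Agent(
    name="Agent A",
    instructions="You are a helpful agent.",
    functions=[transfer_to_agent_b],
)

agent_b = Agent(
    name="Agent B",
    instructions="Only speak in haikus.",
)

response = client.run(
    agent=agent_a,
    messages=[{"role": "user", "content": "I want to talk to agent B."}],
)
print(response.messages[-1]["content"])
```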
By Paula Parisi, October 8, 2024
Apple has released a new AI model called Depth Pro that can create a 3D depth map from a 2D image in under a second. The system is being hailed as a breakthrough that could potentially revolutionize how machines perceive depth, with transformative impact on industries from augmented reality to self-driving vehicles. “The predictions are metric, with absolute scale,” according to Apple, without relying on the camera metadata typically required for such mapping. Running on a consumer-grade GPU, the model can produce a 2.25-megapixel depth map from a single image in only 0.3 seconds. Continue reading Apple Advances Computer Vision with Its Depth Pro AI Model
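A sketch of single-image inference following the usage pattern in Apple's ml-depth-pro repository; exact function names and the checkpoint download step may vary between versions, so treat it as illustrative rather than definitive.

```python
# Sketch of single-image metric depth estimation, modeled on the usage shown
# in Apple's ml-depth-pro repository; assumes the package and pretrained
# weights are installed. The input filename is a placeholder.
import depth_pro

model, transform = depth_pro.create_model_and_transforms()
model.eval()

image, _, f_px = depth_pro.load_rgb("street_scene.jpg")  # hypothetical input
prediction = model.infer(transform(image), f_px=f_px)

depth_m = prediction["depth"]              # per-pixel depth in meters (metric scale)
focal_px = prediction["focallength_px"]    # focal length estimated from the image
print(depth_m.shape, float(focal_px))
```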
By Paula Parisi, October 4, 2024
Nvidia has unveiled the NVLM 1.0 family of multimodal LLMs, a powerful open-source AI that the company says performs comparably to proprietary systems from OpenAI and Google. Led by NVLM-D-72B, with 72 billion parameters, Nvidia’s new entry in the AI race achieved what the company describes as “state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models.” Nvidia has made the model weights publicly available and says it will also be releasing the training code, a break from the closed approach of OpenAI, Anthropic and Google. Continue reading Nvidia Releases Open-Source Frontier-Class Multimodal LLMs
By Paula Parisi, October 1, 2024
The Allen Institute for AI (also known as Ai2, founded by Paul Allen and led by Ali Farhadi) has launched Molmo, a family of four open-source multimodal models. While advanced models “can perceive the world and communicate with us, Molmo goes beyond that to enable one to act in their worlds, unlocking a whole new generation of capabilities, everything from sophisticated web agents to robotics,” according to Ai2. On some third-party benchmark tests, Molmo’s 72 billion parameter model outperforms other open AI offerings and “performs favorably” against proprietary rivals like OpenAI’s GPT-4o, Google’s Gemini 1.5 and Anthropic’s Claude 3.5 Sonnet, Ai2 says. Continue reading Allen Institute Announces Vision-Optimized Molmo AI Models
By Paula Parisi, September 27, 2024
Meta’s Llama 3.2 release includes two new multimodal LLMs, one with 11 billion parameters and one with 90 billion — considered small- and medium-sized — and two lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices. Included are pre-trained and instruction-tuned versions. In addition to text, the multimodal models can interpret images, supporting apps that require visual understanding. Meta says the models are free and open source. Alongside them, the company is releasing “the first official Llama Stack distributions,” enabling “turnkey deployment” with integrated safety. Continue reading Meta Unveils New Open-Source Multimodal Model Llama 3.2
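A minimal sketch of prompting the 11B vision model with an image, assuming the weights are available as "meta-llama/Llama-3.2-11B-Vision-Instruct" and that a recent transformers release with the Mllama classes is installed; the image URL is a placeholder.

```python
# Sketch only: prompt the Llama 3.2 11B vision-instruct model with one image.
# Model ID, hardware assumptions and the image URL are ours, not Meta's.
import requests
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed Hugging Face ID
model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)  # placeholder URL
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe what this image shows."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```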