Codename Goose: Block Unveils Open-Source AI Agent Builder

Jack Dorsey’s financial tech and media firm Block (formerly Square) has released a platform for building AI agents: Codename Goose. Previously available in beta, Goose is primarily designed to build agents for coding and software development, but Block has built in many basic features that can be applied to general-purpose tasks. Because it is open source and offered under the Apache License 2.0, the hope is that developers will apply it to varied use cases. A leading feature of Codename Goose is its flexibility: it can integrate a wide range of large language models, letting developers use it with their preferred model.

Facebook, Instagram, WhatsApp Get Meta AI Memory Boost

Meta is rolling out personalization updates to its Meta AI personal assistant. At the end of last year, the company introduced a feature that lets Meta AI remember what you’ve shared with it in one-on-one chats on WhatsApp and Messenger so it can produce more relevant responses. That feature will now be available to Meta AI on Facebook, Messenger and WhatsApp for iOS and Android in the U.S. and Canada. “Meta AI will only remember certain things you tell it in 1:1 conversations (not group chats), and you can delete its memories at any time,” explains the company.

Chinese AI Startup DeepSeek Disrupting the U.S. Tech Sector

Hangzhou-based AI firm DeepSeek is roiling the U.S. tech sector and upending financial markets. The startup has managed to become competitive with Silicon Valley’s deep learning firms despite U.S. sanctions that prevent Chinese technology companies from buying premium chips. DeepSeek has made it into the global top 10 in terms of model performance, and as of this week had the top-ranked free AI assistant on the Apple App Store. DeepSeek’s new R1 model has drawn attention for using less computing power than competing systems while performing comparably, despite having been developed using older Nvidia chips.

CES: Nvidia’s Cosmos Models Teach AI About Physical World

Nvidia Cosmos, a platform of generative world foundation models (WFMs) and related tools to advance the development of physical AI systems like autonomous vehicles and robots, was introduced at CES 2025. Cosmos WFMs are designed to give developers a way to generate massive amounts of photo-real, physics-based synthetic data for training and evaluating their existing models. The goal is to reduce costs by streamlining real-world testing with a ready data pipeline. Developers can also build custom models by fine-tuning Cosmos WFMs. Cosmos integrates with Nvidia Omniverse, a physics simulation tool used for entertainment world-building.

CES: Samsung and Google Team on Spatial Audio Standard

Samsung Electronics has teamed with Google on a new spatial sound standard, Eclipsa Audio, that could emerge as a free alternative to Dolby Atmos. On display at CES 2025 in Las Vegas this week, the format is rolling out across Samsung’s line of 2025 TVs and soundbars, and Google will support it on the content side by enabling Eclipsa 3D audio on some YouTube videos this year. Samsung has been a notable holdout on Dolby Vision HDR, instead embracing the competing HDR10+. Now the South Korean electronics giant seems to be staking out its own turf in 3D audio, advocating for an open-source approach.

Meta Rolls Out Watermarking, Behavioral and Concept Models

Meta’s FAIR (Fundamental AI Research) team has unveiled recent work in areas ranging from transparency and safety to agents and machine learning architectures. The projects include Meta Motivo, a foundation model for controlling the behavior of virtual embodied agents, and Video Seal, an open-source model for video watermarking. All were developed in the unit’s pursuit of advanced machine intelligence, helping “models to learn new information more effectively and scale beyond current limits.” Meta announced it is sharing the new FAIR research, code, models and datasets so the research community can build upon its work.

Meta’s Llama 3.3 Delivers More Processing for Less Compute

Meta Platforms has packed more artificial intelligence into a smaller package with Llama 3.3, which the company released last week. The open-source large language model (LLM) “improves core performance at a significantly lower cost, making it even more accessible to the entire open-source community,” Meta VP of Generative AI Ahmad Al-Dahle wrote on X. The 70-billion-parameter, text-only Llama 3.3 is said to perform on par with the 405-billion-parameter model that was part of Meta’s Llama 3.1 release in July while requiring less computing power, significantly lowering its operational costs.

Hume AI Introduces Voice Control and Claude Interoperability

Artificial voice startup Hume AI has had a busy Q4, introducing Voice Control, a no-code artificial speech interface that gives users control over 10 voice dimensions ranging from “assertiveness” to “buoyancy” and “nasality.” The company also debuted an interface that “creates emotionally intelligent voice interactions” with Anthropic’s foundation model Claude, prompting one observer to ponder whether keyboards will become a thing of the past when it comes to controlling computers. Both advances expand on Hume’s work with its own foundation model, Empathic Voice Interface 2 (EVI 2), which adds emotional timbre to AI voices.

Qwen with Questions: Alibaba Previews New Reasoning Model

Alibaba Cloud has released the latest entry in its growing Qwen family of large language models. The new Qwen with Questions (QwQ) is an open-source competitor to OpenAI’s o1 reasoning model. As with competing large reasoning models (LRMs), QwQ can correct its own mistakes, relying on extra compute cycles during inference to assess its responses, making it well suited for reasoning tasks like math and coding. Described as an “experimental research model,” this preview version of QwQ has 32 billion parameters and a 32,000-token context window, leading to speculation that a more powerful iteration is in the offing.
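
For developers who want to experiment with the preview, a minimal sketch of querying it via Hugging Face Transformers follows; the repository ID Qwen/QwQ-32B-Preview and the generation settings are assumptions based on the public release, not an official recipe.

```python
# Minimal sketch: querying the QwQ-32B-Preview checkpoint with Hugging Face Transformers.
# The model ID and settings are assumptions based on the public preview release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"  # assumed Hugging Face repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many positive integers n satisfy n^2 < 50?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models spend extra inference-time tokens checking their own work,
# so allow a generous output budget.
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```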

Lightricks LTX Video Model Impresses with Speed and Motion

Lightricks has released an AI model called LTX Video (LTXV) that it says generates five seconds of 768 x 512 resolution video (121 frames) in just four seconds, producing a clip in less time than it takes to watch. The model can run on consumer-grade hardware and is open source, positioning Lightricks as a mass-market challenger to firms like Adobe, OpenAI, Google and their proprietary systems. “It’s time for an open-sourced video model that the global academic and developer community can build on and help shape the future of AI video,” Lightricks co-founder and CEO Zeev Farbman said.
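
As a rough illustration of those specs, below is a sketch of generating a 121-frame, 768 x 512 clip with the LTX-Video pipeline in Hugging Face Diffusers; the pipeline class, checkpoint name and parameters are assumptions drawn from the open-source release rather than Lightricks’ reference code.

```python
# Sketch: generating a ~5-second, 768 x 512, 121-frame clip with the LTX-Video
# pipeline in Hugging Face Diffusers. Class and checkpoint names are assumptions
# based on the open-source release; exact arguments may differ.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

frames = pipe(
    prompt="A slow dolly shot through a rain-soaked neon city street at night",
    width=768,
    height=512,
    num_frames=121,          # roughly five seconds at ~24 fps
    num_inference_steps=50,
).frames[0]

export_to_video(frames, "ltxv_clip.mp4", fps=24)
```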

Anthropic Protocol Intends to Standardize AI Data Integration

Anthropic is releasing what it hopes will be a new standard in data integration for AI. Called the Model Context Protocol (MCP), it is intended to eliminate the need to write custom integration code each time a company’s data is connected to a model. The open-source MCP tool could become a universal way to link data sources to AI, with the aim of having models query databases directly. MCP is “a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments,” according to Anthropic.
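
To make the idea concrete, here is a minimal sketch of an MCP server that exposes a single database-style tool to an AI assistant, assuming Anthropic’s Python SDK (the mcp package) and its FastMCP helper; the tool name and query are illustrative stand-ins, not part of the protocol specification.

```python
# Minimal sketch of an MCP server that exposes one query tool to an AI assistant.
# Assumes the `mcp` Python SDK and its FastMCP helper; the tool itself is a
# stand-in for a real data source such as a content repository or database.
import sqlite3

from mcp.server.fastmcp import FastMCP

server = FastMCP("orders-demo")  # name shown to connecting AI clients

@server.tool()
def count_orders(customer: str) -> int:
    """Return the number of orders recorded for a customer (illustrative only)."""
    with sqlite3.connect("orders.db") as conn:
        row = conn.execute(
            "SELECT COUNT(*) FROM orders WHERE customer = ?", (customer,)
        ).fetchone()
    return row[0]

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable assistant can connect and call the tool.
    server.run()
```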

GitHub Promotes Open-Source Security with Funding Initiative

The GitHub Secure Open Source Fund will award financing to select applicants in a program designed to fuel security and sustainability for open-source projects. Applications are open now and close on January 7. During that time, 125 projects will be selected for a piece of the $1.25 million investment fund, made possible through the participation of American Express, the Alfred P. Sloan Foundation, Chainguard, HeroDevs, Kraken, Mayfield Fund, Microsoft, Shopify, Stripe and others. In addition to monetary support, recipients will be invited to take part in a three-week educational program.

Nvidia’s Impressive AI Model Could Compete with Top Brands

Nvidia has debuted a new AI model, Llama-3.1-Nemotron-70B-Instruct, that it claims outperforms competitors GPT-4o from OpenAI and Anthropic’s Claude 3.5 Sonnet. The impressive showing has prompted speculation about an AI shakeup and a significant shift in Nvidia’s AI strategy, which has thus far been focused primarily on chipmaking. The model was quietly released on Hugging Face, and Nvidia says that as of October 1 it ranked first on three top automatic alignment benchmarks, “edging out strong frontier models” and vaulting Nvidia to the forefront of the LLM field in areas like comprehension, context and generation.

OpenAI Tests Open-Source Framework for Autonomous Agents

OpenAI has announced Swarm, an experimental framework that coordinates networks of AI agents, and true to its name the news has kicked over a hornet’s nest of contentious debate about the ethics of artificial intelligence and the future of enterprise automation. OpenAI emphasizes that Swarm is not an official product and says that, although it has shared the code publicly, it has no intention of maintaining it. “Think of it more like a cookbook,” OpenAI engineer Shyamal Anadkat said in a social media post, calling it “code for building simple agents.”
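
The cookbook framing is apt; a condensed sketch of the two-agent handoff pattern shown in the openai/swarm repository follows, with agent names and instructions as illustrative assumptions.

```python
# Condensed sketch of Swarm's agent-handoff pattern, based on the example in
# the openai/swarm repository; agent names and instructions are illustrative,
# and the framework is experimental and unmaintained.
from swarm import Swarm, Agent

client = Swarm()  # uses the OpenAI API under the hood

spanish_agent = Agent(
    name="Spanish Agent",
    instructions="You only speak Spanish.",
)

def transfer_to_spanish_agent():
    """Hand the conversation off to the Spanish-speaking agent."""
    return spanish_agent

english_agent = Agent(
    name="English Agent",
    instructions="You only speak English.",
    functions=[transfer_to_spanish_agent],
)

response = client.run(
    agent=english_agent,
    messages=[{"role": "user", "content": "Hola. ¿Cómo estás?"}],
)
print(response.messages[-1]["content"])
```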

Apple Advances Computer Vision with Its Depth Pro AI Model

Apple has released a new AI model called Depth Pro that can create a 3D depth map from a 2D image in under a second. The system is being hailed as a breakthrough that could potentially revolutionize how machines perceive depth, with transformative impact on industries from augmented reality to self-driving vehicles. “The predictions are metric, with absolute scale” without relying on the camera metadata typically required for such mapping, according to Apple. Using a consumer-grade GPU, the model can produce a 2.25-megapixel depth map from a single image in only 0.3 seconds.
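
For developers, a sketch of single-image inference along the lines of the example in the apple/ml-depth-pro repository is shown below; the function names are as recalled from that repository and may differ, and the input path is a placeholder.

```python
# Sketch of single-image metric depth estimation with Apple's Depth Pro,
# loosely following the usage shown in the apple/ml-depth-pro repository;
# exact function names may differ, and "photo.jpg" is a placeholder input.
import depth_pro

model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load an RGB image; f_px is the focal length in pixels if found in EXIF metadata.
image, _, f_px = depth_pro.load_rgb("photo.jpg")

prediction = model.infer(transform(image), f_px=f_px)
depth_m = prediction["depth"]              # metric depth map, in meters
focal_px = prediction["focallength_px"]    # estimated focal length, in pixels
print(depth_m.shape, float(focal_px))
```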