Codename Goose: Block Unveils Open-Source AI Agent Builder

Jack Dorsey’s financial tech and media firm Block (formerly Square) has released a platform for building AI agents: Codename Goose. Previously available in beta, Goose is primarily designed to build agents for coding and software development, but Block has built in many basic features that can be applied to general-purpose pursuits. Because it is open source and offered under the Apache License 2.0, the hope is that developers will apply it to varied use cases. A leading feature of Codename Goose is its flexibility: it can integrate a wide range of large language models, letting developers use it with their preferred model. Continue reading Codename Goose: Block Unveils Open-Source AI Agent Builder

Perplexity Bows Real-Time AI Search Tool, Android Assistant

Perplexity joins the list of AI companies launching agents, debuting the Perplexity Assistant for Android. The tool uses reasoning, search, browsers and apps to help mobile users with daily tasks. Concurrently, Perplexity, an independent company founded in 2022 as a conversational AI search engine, has launched an API called Sonar intended for enterprises and developers who want real-time intelligent search, taking on heavyweights like Google, OpenAI and Anthropic. While AI search has to date largely been limited to answers informed by training data, which freezes a model’s knowledge in time, next-gen tools can pull from the Internet in real time. Continue reading Perplexity Bows Real-Time AI Search Tool, Android Assistant
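
As an illustrative aside (not drawn from the article), here is a minimal sketch of how a developer might call a real-time search API such as Sonar, assuming Perplexity’s documented OpenAI-compatible chat-completions endpoint; the endpoint, model name and environment variable are assumptions to verify against current documentation:

```python
# Hypothetical example: querying a real-time search API through an
# OpenAI-compatible client. The base_url, the "sonar" model name and the
# PERPLEXITY_API_KEY variable are assumptions, not details from the article.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="sonar",
    messages=[{"role": "user", "content": "Summarize today's top AI hardware news."}],
)
print(response.choices[0].message.content)
```

Because the interface mirrors standard chat-completions calls, existing tooling can be pointed at a search-backed model with little more than a changed base URL, which is presumably part of the appeal for enterprise developers.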

CES: Nvidia Will Launch a $3,000 Personal AI Supercomputer

Just weeks after Nvidia announced the availability of its $249 “compact AI supercomputer,” the Jetson Orin Nano Super Developer Kit for startups and hobbyists, CEO Jensen Huang revealed the company is planning to launch a personal AI supercomputer called Project Digits with a starting price of $3,000. The desktop-sized system features the GB10 Grace Blackwell Superchip, which enables it to handle AI models with up to 200 billion parameters. Nvidia claims the system delivers enough processing power to run high-end AI models, performing up to one quadrillion AI calculations per second, while remaining compact enough to run from a standard power outlet. Continue reading CES: Nvidia Will Launch a $3,000 Personal AI Supercomputer

CES: Nvidia’s Cosmos Models Teach AI About Physical World

Nvidia Cosmos, a platform of generative world foundation models (WFMs) and related tools to advance the development of physical AI systems like autonomous vehicles and robots, was introduced at CES 2025. Cosmos WFMs are designed to give developers a way to generate massive amounts of photo-real, physics-based synthetic data to train and evaluate their existing models. The goal is to reduce costs by streamlining real-world testing with a ready data pipeline. Developers can also build custom models by fine-tuning Cosmos WFMs. Cosmos integrates with Nvidia Omniverse, a physics simulation tool used for entertainment world-building. Continue reading CES: Nvidia’s Cosmos Models Teach AI About Physical World

CES: Thoughts on the Benefits and Limitations of AI in Gaming

During the “Speed, Customization, Innovation: AI in Gaming” panel at CES this week, game publishers and developers shared their latest insights into how they use generative AI tools. A prevailing question involved the impact of AI’s ability to generate pixels and video frames efficiently, especially in light of Nvidia’s keynote the prior evening announcing its new Blackwell RTX 50 Series GPUs and their vastly expanded capacity to do just that. Panelists also weighed in on whether AI is overhyped for gaming and offered wish lists for addressing the limitations of current AI tools. Continue reading CES: Thoughts on the Benefits and Limitations of AI in Gaming

CES: Standards Are Increasingly Vital for Fostering Innovation

In an era of tremendous innovation and an explosion of new product lines, the creation of standards has never been so important. UL Standards & Engagement (ULSE) created its first standard in 1903 and now boasts a portfolio of 1,700 standards; other standards-setting bodies include the Consumer Technology Association (CTA) and the Connectivity Standards Alliance (CSA). Moderated by ULSE Director of Insights Sayon Deb, a CES panel of experts underscored the critical importance of such standards for developing and marketing innovative products. According to Deb, 60 percent of consumers express greater confidence in certified products. Continue reading CES: Standards Are Increasingly Vital for Fostering Innovation

World Labs AI Lets Users Create 3D Worlds from Single Photo

World Labs, the AI startup co-founded by Stanford AI pioneer Fei-Fei Li, has debuted a “spatial intelligence” system that can generate 3D worlds from a single image. Although the output is not photorealistic, the tech could be a breakthrough for animation companies and video game developers. Deploying what it calls Large World Models (LWMs), World Labs is focused on transforming 2D images into turnkey 3D environments with which users can interact. Observers say that interactivity is what sets World Labs’ technology apart from offerings by other AI companies that transform 2D images into 3D. Continue reading World Labs AI Lets Users Create 3D Worlds from Single Photo

DeepMind Genie 2 Creates Worlds That Emulate Video Games

Google DeepMind’s new Genie 2 is a large foundation world model that generates interactive 3D worlds that are being likened to video games. “Games play a key role in the world of artificial intelligence research,” says Google DeepMind, noting “their engaging nature, challenges and measurable progress make them ideal environments to safely test and advance AI capabilities.” Based on a simple prompt image, Genie 2 is capable of producing “an endless variety of action-controllable, playable 3D environments” — suitable for training and evaluating embodied agents — that can be played by a human or AI agent using keyboard and mouse inputs. Continue reading DeepMind Genie 2 Creates Worlds That Emulate Video Games

Hume AI Introduces Voice Control and Claude Interoperability

Artificial voice startup Hume AI has had a busy Q4, introducing Voice Control, a no-code artificial speech interface that gives users control over 10 voice dimensions ranging from “assertiveness” to “buoyancy” and “nasality.” The company also debuted an interface that “creates emotionally intelligent voice interactions” with Anthropic’s foundation model Claude, prompting one observer to ponder whether keyboards will become a thing of the past when it comes to controlling computers. Both advances expand on Hume’s work with its own foundation model, Empathic Voice Interface 2 (EVI 2), which adds emotional timbre to AI voices. Continue reading Hume AI Introduces Voice Control and Claude Interoperability

Couchbase Capella AI Helps Deploy Agents, Models, Services

Couchbase, the publicly traded developer data platform company, has launched Capella AI Services, aiming to simplify the development and deployment of agentic AI apps for enterprise clients. Capella AI joins the company’s flagship Couchbase Capella cloud data platform. AI offerings include model hosting, automated vectorization, unstructured data preprocessing and AI agent catalog services. Couchbase’s goal is to “allow organizations to prototype, build, test and deploy AI agents” while giving developers control over data across the development lifecycle, including secure data mitigation for large language models running outside the organization. Continue reading Couchbase Capella AI Helps Deploy Agents, Models, Services

Lightricks LTX Video Model Impresses with Speed and Motion

Lightricks has released an AI model called LTX Video (LTXV) that it says generates five seconds of 768 x 512 resolution video (121 frames) in just four seconds, outputting the clip in less time than it takes to watch it. The model can run on consumer-grade hardware and is open source, positioning Lightricks as a mass-market challenger to firms like Adobe, OpenAI and Google and their proprietary systems. “It’s time for an open-sourced video model that the global academic and developer community can build on and help shape the future of AI video,” Lightricks co-founder and CEO Zeev Farbman said. Continue reading Lightricks LTX Video Model Impresses with Speed and Motion

Anthropic Protocol Intends to Standardize AI Data Integration

Anthropic is releasing what it hopes will be a new standard in data integration for AI. Called the Model Context Protocol (MCP), it aims to eliminate the need to write custom integration code each time a company’s data is connected to a model. The open-source MCP tooling could become a universal way to link data sources to AI, with the aim of having models query databases directly. MCP is “a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments,” according to Anthropic. Continue reading Anthropic Protocol Intends to Standardize AI Data Integration
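
For illustration only, here is a minimal sketch of what the server side of such an integration might look like, assuming the open-source Python MCP SDK and its FastMCP helper; the database file and tool name are hypothetical, and the exact SDK surface should be checked against Anthropic’s current documentation:

```python
# Hypothetical sketch: an MCP server exposing a single read-only SQLite
# query tool. Assumes the open-source Python MCP SDK (FastMCP helper);
# "demo.db" and run_query are invented for this example.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sqlite-demo")

@mcp.tool()
def run_query(sql: str) -> str:
    """Run a read-only SQL query against demo.db and return the rows as text."""
    conn = sqlite3.connect("file:demo.db?mode=ro", uri=True)
    try:
        rows = conn.execute(sql).fetchall()
    finally:
        conn.close()
    return "\n".join(str(row) for row in rows)

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an MCP-capable assistant can discover and call the tool
```

Once a server like this is registered with an MCP-capable assistant, the model can discover and call the tool directly rather than relying on bespoke glue code for each data source, which is the standardization the protocol is after.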

Roblox Plans an Ad Revamp as ‘Clip It’ Passes 1 Billion Views

Clip It, Roblox’s social media contender in the TikTok space, has crossed the one-billion-view threshold, according to the company. Roblox reportedly plans to leverage that achievement by launching an ad-supported product for Clip It that merges aspects of its custom-branded Roblox game spaces with programmatic advertising. Clip It offers an endless feed of short-form video, with video ads peppered amid the content, drawing comparisons to TikTok. Clip It currently serves the same programmatic ads as the Roblox mothership, but that is about to change, thanks to its growing popularity. Continue reading Roblox Plans an Ad Revamp as ‘Clip It’ Passes 1 Billion Views

Nvidia’s AI Blueprint Develops Agents to Analyze Visual Data

Nvidia’s growing AI arsenal now includes an AI Blueprint for video search and summarization, which helps developers build visual AI agents that analyze video and image content. The agents can answer user questions, generate summaries and even enable alerts for specific scenarios. The new feature is part of Metropolis, Nvidia’s developer toolkit for building computer vision applications using generative AI. Globally, enterprises and public organizations increasingly rely on visual information. Cameras, IoT sensors and autonomous vehicles are ingesting visual data at high rates, and visual agents can help monitor and make sense of that flow of data. Continue reading Nvidia’s AI Blueprint Develops Agents to Analyze Visual Data

Microsoft, Amazon Jockey for Lead Among AI Code Assistants

Microsoft is previewing GitHub Copilot for Azure in an ambitious expansion of its AI app development toolkit that some say could fundamentally change how developers build software for the AI era. The premise is that switching from one tool to another, as developers often do, should be seamless rather than disruptive, with the assistant acting as a sort of real-time language translation and integration system for code. To fend off the move by Microsoft, AWS announced it is making its Q Developer AI code assistant available as an inline chat add-on accessible from JetBrains IDEs and Microsoft’s own Visual Studio. Continue reading Microsoft, Amazon Jockey for Lead Among AI Code Assistants