By Paula Parisi, January 27, 2025
Samsung’s new Galaxy S25 line — the Galaxy S25 Ultra, Galaxy S25+ and Galaxy S25 — will more tightly integrate AI, including AI agents, becoming “true AI companions” at a level previously unknown to mobile devices. That leap is credited largely to a “first-of-its-kind” Snapdragon 8 Elite customization for the Galaxy chipset that “delivers greater on-device processing power for Galaxy AI and superior camera range and control with Galaxy’s next-gen ProVisual Engine,” according to Samsung. In addition, the top-of-the-line Galaxy S25 Ultra has been redesigned with a slightly larger 6.9-inch screen and rounded bezel. Continue reading Galaxy Unpacked: More AI for S25 and a Peek at AR Glasses
By Rob Scott, January 24, 2025
Just weeks after Nvidia announced the availability of its $249 “compact AI supercomputer,” the Jetson Orin Nano Super Developer Kit for startups and hobbyists, CEO Jensen Huang revealed the company is planning to launch a personal AI supercomputer called Project Digits with a starting price of $3,000. The desktop-sized system features the GB10 Grace Blackwell Superchip, which enables it to handle AI models with up to 200 billion parameters. Nvidia claims the compact system has enough processing power to run high-end AI models, performing up to one quadrillion AI calculations per second, while running from a standard power outlet. Continue reading CES: Nvidia Will Launch a $3,000 Personal AI Supercomputer
By Paula Parisi, January 14, 2025
Nvidia Cosmos, a platform of generative world foundation models (WFMs) and related tools to advance the development of physical AI systems like autonomous vehicles and robots, was introduced at CES 2025. Cosmos WFMs are designed to provide developers with a way to generate massive amounts of photo-real, physics-based synthetic data to train and evaluate their existing models. The goal is to reduce costs by streamlining real-world testing with a ready data pipeline. Developers can also build custom models by fine-tuning Cosmos WFMs. Cosmos integrates with Nvidia Omniverse, a physics simulation tool used for entertainment world-building. Continue reading CES: Nvidia’s Cosmos Models Teach AI About Physical World
By Yves Bergquist, January 9, 2025
In the never-ending smorgasbord of AI hype, “agents” represent practical and worthwhile potential. AI agents are autonomous AI programs that can understand some context and take action in that context. Agents can autonomously perform a task that involves mapping a goal to its context and parameters (even if they’re not explicitly laid out), process data across multiple formats and ontologies to understand the goal and work through the task, call multiple functions across multiple apps, and take some action to achieve the goal. Unfortunately, while many are talking about AI agents, few are promoting actual products at CES. Continue reading CES: Show Features a Surprisingly Small Number of AI Agents
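To make that goal-to-action pattern concrete, here is a minimal, self-contained sketch of an agent loop in Python. The planner stub stands in for an LLM, and the tool names and data are hypothetical, invented purely for illustration; they do not correspond to any product shown at CES.

```python
# Minimal illustration of the agent pattern: map a goal and its running
# context to a sequence of tool calls, then act on the results.
from typing import Callable

def search_flights(destination: str) -> str:
    return f"3 flights found to {destination}"          # placeholder data

def book_flight(flight_id: str) -> str:
    return f"Booked flight {flight_id}"                 # placeholder action

TOOLS: dict[str, Callable[[str], str]] = {
    "search_flights": search_flights,
    "book_flight": book_flight,
}

def plan(goal: str, history: list[str]) -> tuple[str, str] | None:
    """Stand-in for an LLM that maps the goal and context to the next tool call."""
    if not history:
        return ("search_flights", "Las Vegas")
    if len(history) == 1:
        return ("book_flight", "LV-101")
    return None  # goal satisfied, stop acting

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (step := plan(goal, history)) is not None:
        tool, arg = step
        history.append(TOOLS[tool](arg))                 # take action in context
    return history

print(run_agent("Get me to CES in Las Vegas"))
```

In a real agent the plan() step is an LLM call with tool definitions, and the loop feeds each tool result back into the model until it decides the goal has been met.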
By Debra Kaufman, January 8, 2025
During CES 2025 in Las Vegas this week, Meta Vice President and Chief AI Scientist Yann LeCun had a compelling conversation with Wing Venture Capital Head of Research Rajeev Chand on the latest hot-button topics in the rapidly evolving field of artificial intelligence. Among the conclusions: AI agents will become ubiquitous, but not for another 10 to 15 years; human intelligence means different things to different AI experts; and nuclear power remains the best and safest source for powering AI. And, for those looking for more of LeCun’s tweets, he said he no longer posts on X. Continue reading CES: AI Pioneer Yann LeCun on AI Agents, Human Intelligence
By Paula Parisi, December 6, 2024
Google DeepMind’s new Genie 2 is a large foundation world model that generates interactive 3D worlds that are being likened to video games. “Games play a key role in the world of artificial intelligence research,” says Google DeepMind, noting “their engaging nature, challenges and measurable progress make them ideal environments to safely test and advance AI capabilities.” Based on a simple prompt image, Genie 2 is capable of producing “an endless variety of action-controllable, playable 3D environments” — suitable for training and evaluating embodied agents — that can be played by a human or AI agent using keyboard and mouse inputs. Continue reading DeepMind Genie 2 Creates Worlds That Emulate Video Games
By Paula Parisi, December 3, 2024
Couchbase, the publicly traded data platform for developers, has launched Capella AI Services with the aim of simplifying the process of developing and deploying agentic AI apps for enterprise clients. Capella AI joins the company’s flagship Couchbase Capella cloud data platform. AI offerings include model hosting, automated vectorization, unstructured data preprocessing and AI agent catalog services. Couchbase’s goal is to “allow organizations to prototype, build, test and deploy AI agents” while giving developers control over data across the development lifecycle, including secure data mitigation for large language models running outside the organization. Continue reading Couchbase Capella AI Helps Deploy Agents, Models, Services
By Paula Parisi, November 22, 2024
Microsoft’s expansion of AI agents within the Copilot Studio ecosystem was a central focus of the company’s Ignite conference. Since the launch of Copilot Studio, more than 100,000 enterprise organizations have created or edited AI agents using the platform. Copilot Studio is getting new features to increase productivity, including multimodal capabilities that take agents beyond text and Retrieval Augmented Generation (RAG) enhancements that give agents real-time knowledge from multiple third-party sources, such as Salesforce, ServiceNow and Zendesk. Azure integration is also expanding, with 1,800 large language models from the Azure catalog made available. Continue reading Microsoft Pushes Copilot Studio Agents, Adds Azure Models
By Paula Parisi, November 6, 2024
Nvidia’s growing AI arsenal now includes an AI Blueprint for video search and summarization, which helps developers build visual AI agents that analyze video and image content. The agents can answer user questions, generate summaries and even enable alerts for specific scenarios. The new blueprint is part of Metropolis, Nvidia’s developer toolkit for building computer vision applications using generative AI. Globally, enterprises and public organizations increasingly rely on visual information. Cameras, IoT sensors and autonomous vehicles are ingesting visual data at high rates, and visual agents can help monitor and make sense of that flow of data. Continue reading Nvidia’s AI Blueprint Develops Agents to Analyze Visual Data
By Paula Parisi, October 29, 2024
In its first week of public beta, Anthropic’s “Computer Use” feature is gaining immediate traction, helping people do research and complete coding tasks. Claude works autonomously in Computer Use mode, suggesting broad implications for future productivity and workforce goals. Coming on the heels of OpenAI’s Swarm framework, these early forays into independent AI assistants seem to indicate that implementing such systems will be an area of focus for businesses in 2025. Claude can “see” what’s onscreen and use its “judgment” to adapt to different tasks, segueing across workflows and software. Continue reading Anthropic’s AI Agents for Claude Sonnet Increase Productivity
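For developers who want to experiment, Anthropic exposes Computer Use through its standard Messages API as a beta tool. The sketch below assumes the anthropic Python SDK; the model string, tool type and beta flag follow the public beta documentation at launch and may have changed since, so treat them as assumptions.

```python
# A minimal sketch of requesting Computer Use actions via the anthropic SDK.
# Executing the returned actions (screenshots, clicks, typing) in a sandboxed
# environment and feeding results back is left to the caller.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",       # virtual screen, mouse and keyboard tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the spreadsheet and sum column B."}],
    betas=["computer-use-2024-10-22"],
)

# Claude responds with tool_use blocks describing the actions it wants to take.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```

In practice the caller runs this in a loop: perform each requested action, return a screenshot as a tool result, and let the model decide its next step.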
By Paula Parisi, October 23, 2024
Microsoft next month moves to public preview with a Copilot Studio feature that lets users create autonomous AI agents. The agents had been in private preview since the spring, and the tech giant’s move to take them public comes after Salesforce launched its own agentic program in September. Microsoft also has plans to add 10 autonomous agents to Dynamics 365, an enterprise suite geared toward resource planning and customer relationship management. Microsoft announced the news this week at its “AI Tour” event in London. Copilot is Microsoft’s branded AI assistant, while Copilot Studio lets people customize their Copilot assistants. Continue reading Microsoft Widens Copilot AI Agent Preview, Adds Templates
By Paula Parisi, October 16, 2024
OpenAI has announced Swarm, an experimental framework that coordinates networks of AI agents, and true to its name the news has kicked over a hornet’s nest of contentious debate about the ethics of artificial intelligence and the future of enterprise automation. OpenAI emphasizes that Swarm is not an official product and says that, though it has shared the code publicly, it has no intention of maintaining it. “Think of it more like a cookbook,” OpenAI engineer Shyamal Anadkat said in a social media post, calling it “code for building simple agents.” Continue reading OpenAI Tests Open-Source Framework for Autonomous Agents
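The repository’s own examples are indeed cookbook-simple. Below is a sketch along the lines of its basic handoff example, assuming the experimental swarm package from the openai/swarm repository is installed and an OpenAI API key is configured; the agent names and instructions are illustrative.

```python
# Two-agent handoff: one agent transfers control to another by returning it
# from a function, which is the core of Swarm's coordination model.
from swarm import Swarm, Agent

spanish_agent = Agent(
    name="Spanish Agent",
    instructions="You only speak Spanish.",
)

def transfer_to_spanish_agent():
    """Hand the conversation off to the Spanish-speaking agent."""
    return spanish_agent

english_agent = Agent(
    name="English Agent",
    instructions="You only speak English.",
    functions=[transfer_to_spanish_agent],
)

client = Swarm()
response = client.run(
    agent=english_agent,
    messages=[{"role": "user", "content": "Hola, ¿cómo estás?"}],
)
print(response.messages[-1]["content"])  # reply comes from the Spanish agent
```

Because a handoff is just a function that returns an Agent, routing logic stays in plain Python rather than in a separate orchestration layer.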
By Paula Parisi, October 3, 2024
Accenture is forming an internal Nvidia Business Group staffed with 30,000 global employees trained to help clients “reinvent processes and scale enterprise AI adoption with AI agents,” the consulting firm announced. Accenture will also use its AI Refinery platform to help companies customize AI models and agents using the full Nvidia AI stack including AI Foundry, AI Enterprise and Omniverse. “With generative AI demand driving $3 billion in Accenture bookings in its recently closed fiscal year, the new group will help clients lay the foundation for agentic AI functionality,” Accenture said. Continue reading Accenture Has Plans for Scaling Enterprise AI with Nvidia Unit
By Paula Parisi, July 31, 2024
Meta Platforms CEO Mark Zuckerberg unveiled the latest version of computer vision platform SAM 2, an update on the company’s Segment Anything Model that automates for video what the original SAM did for still images — identifying the edges of an object and isolating it in the frame. Zuckerberg demonstrated SAM 2 as part of a SIGGRAPH 2024 keynote session in which he was interviewed by Nvidia CEO Jensen Huang. “Being able to do this in video and have it be zero shot and tell it what you want, it’s pretty cool,” Zuckerberg said. Meta is sharing the code and model weights for SAM 2 with a permissive Apache 2.0 license. Continue reading Mark Zuckerberg Unveils SAM 2 AI Tech for Segmenting Video
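Since the code and weights are public, developers can try promptable video segmentation directly. The sketch below follows the usage shown in the facebookresearch/sam2 repository; the config name, checkpoint path and frames directory are placeholders, and method names may differ between releases, so treat the details as assumptions.

```python
# A minimal sketch of zero-shot video segmentation with the released SAM 2 code:
# click once on an object in the first frame, then propagate its mask forward.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "sam2_hiera_l.yaml",                 # model config shipped with the repo
    "./checkpoints/sam2_hiera_large.pt", # downloaded model weights
)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(video_path="./video_frames")  # directory of JPEG frames

    # One foreground click on the object in frame 0 is enough to start tracking it.
    predictor.add_new_points(
        state, frame_idx=0, obj_id=1,
        points=np.array([[300, 200]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),   # 1 = foreground click
    )

    # Propagate the object mask through the rest of the video.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()
```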
By ETCentric Staff, March 21, 2024
Deepgram’s new Aura software turns text into generative audio with a “human-like voice.” The 9-year-old voice recognition company has raised nearly $86 million to date on the strength of its Voice AI platform. Aura is an extremely low-latency text-to-speech voice AI that can be used for voice AI agents, the company says. Paired with Deepgram’s Nova-2 speech-to-text API, developers can use it to “easily (and quickly) exchange real-time information between humans and LLMs to build responsive, high-throughput AI agents and conversational AI applications,” according to Deepgram. Continue reading Deepgram’s Speech Portfolio Now Includes Human-Like Aura
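Aura is exposed as a simple REST endpoint, so a minimal text-to-speech call needs only an API key. The sketch below assumes the /v1/speak endpoint and the aura-asteria-en voice name from Deepgram’s public documentation; check current docs before relying on either.

```python
# A minimal sketch of synthesizing speech with Aura and saving the audio file.
import os
import requests

response = requests.post(
    "https://api.deepgram.com/v1/speak",
    params={"model": "aura-asteria-en"},            # one of the Aura voices
    headers={
        "Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={"text": "Your order has shipped and should arrive Friday."},
    timeout=30,
)
response.raise_for_status()

with open("aura_reply.mp3", "wb") as f:             # Aura returns MP3 audio by default
    f.write(response.content)
```

Pairing a call like this with Nova-2 transcription on the input side is the low-latency speech-in, speech-out loop Deepgram is pitching for voice agents.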