By Paula Parisi, April 28, 2025
Nvidia has released NeMo microservices into general availability with version 25.4, repositioning the product from a modular toolkit for building custom generative AI models to a platform for building AI agents at scale. With demand for AI agents surging, Nvidia is pitching NeMo's capabilities as purpose-built to help those agents improve and scale. Built around the open-source Kubernetes container orchestration system, NeMo microservices are offered as "an end-to-end developer platform for creating state-of-the-art agentic AI systems," according to Nvidia. Continue reading Nvidia Positions Its NeMo Microservices for AI Agent-Building
By Paula Parisi, May 20, 2024
Google is offering developers a toolkit for incorporating generative AI features into mobile and web applications. Firebase Genkit, an open-source framework, is available now in beta. By blending models, cloud services, agents, data sources and more in the "code-centric approach" developers are used to, Genkit makes building and debugging AI features easier, according to Google. The first release targets JavaScript and TypeScript, bringing AI-powered app development to professionals who build server-side applications on the Node.js runtime. Continue reading Firebase Genkit: Developer Framework for AI-Powered Apps
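To make the "code-centric approach" concrete, here is a minimal sketch of a Genkit flow in TypeScript. It is based on the post-beta `genkit` package and the `@genkit-ai/googleai` plugin; the package names, model identifier, and flow shape are assumptions and may differ from the beta release covered in this article.

```ts
// Minimal Genkit flow sketch (assumes the post-beta `genkit` package and
// the `@genkit-ai/googleai` plugin; APIs may differ in the beta release).
import { genkit, z } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';

// Configure Genkit with the Google AI plugin and a default model.
const ai = genkit({
  plugins: [googleAI()], // typically reads GEMINI_API_KEY from the environment
  model: googleAI.model('gemini-1.5-flash'), // illustrative model choice
});

// A flow is a typed function that Genkit's developer tooling can run,
// trace, and debug.
export const suggestionFlow = ai.defineFlow(
  {
    name: 'suggestionFlow',
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (topic) => {
    const { text } = await ai.generate(
      `Suggest one ${topic} feature for a mobile app, in one sentence.`
    );
    return text;
  }
);
```

A flow like this can be invoked from a Node.js server or run and traced in Genkit's local developer UI, which is where the framework's debugging support comes in.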
By ETCentric Staff, April 9, 2024
Opera has become the first major browser to add built-in support for local large language models (LLMs). At this point the feature is experimental and available only in the Opera One Developer browser as part of the AI Feature Drops program. The update offers about 150 LLMs from more than 50 model families, including Meta's LLaMA, Google's Gemma, Mixtral and Vicuna. Opera had previously offered built-in AI only through its own Aria, a competitor to Microsoft Copilot and OpenAI's ChatGPT. The local LLMs are being offered for testing as a complimentary addition to Opera's online Aria service. Continue reading Opera Browser Is Experimenting with Local Support for LLMs