Nvidia Releases Open-Source Frontier-Class Multimodal LLMs

Nvidia has unveiled NVLM 1.0, a family of open-source multimodal LLMs that the company says performs comparably to proprietary systems from OpenAI and Google. Led by the 72-billion-parameter NVLM-D-72B, Nvidia’s new entry in the AI race achieved what the company describes as “state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models.” Nvidia has made the model weights publicly available and says it will also release the training code, a break from the closed approach of OpenAI, Anthropic and Google.

Accenture Has Plans for Scaling Enterprise AI with Nvidia Unit

Accenture is forming an internal Nvidia Business Group staffed with 30,000 global employees trained to help clients “reinvent processes and scale enterprise AI adoption with AI agents,” the consulting firm announced. Accenture will also use its AI Refinery platform to help companies customize AI models and agents using the full Nvidia AI stack, including AI Foundry, AI Enterprise and Omniverse. “With generative AI demand driving $3 billion in Accenture bookings in its recently closed fiscal year, the new group will help clients lay the foundation for agentic AI functionality,” Accenture said.

Meta Unveils New Open-Source Multimodal Model Llama 3.2

Meta’s Llama 3.2 release includes two new multimodal LLMs, one with 11 billion parameters and one with 90 billion — considered small- and medium-sized — and two lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices. Included are pre-trained and instruction-tuned versions. In addition to text, the multimodal models can interpret images, supporting apps that require visual understanding. Meta says the models are free and open source. Alongside them, the company is releasing “the first official Llama Stack distributions,” enabling “turnkey deployment” with integrated safety.

Cloudflare Tool Can Prevent AI Bots from Scraping Websites

Cloudflare has released AI Audit, a free set of tools designed to help website owners analyze and control how their content is used by artificial intelligence models. The suite offers what Cloudflare describes as “one-click blocking” of unauthorized AI scraping, and the company says it will also make it easier to identify the content bots scan most, so site owners can wall it off and negotiate payment in exchange for access. Cloudflare is also creating a marketplace where sites can negotiate fees with AI companies based on audits of bot activity on their servers.
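The core idea behind blocking AI scrapers is matching incoming requests against the user-agent tokens that major AI crawlers publicly identify themselves with (GPTBot, ClaudeBot, CCBot, Google-Extended). The sketch below is illustrative only and is not Cloudflare's implementation; the function names and the 403-versus-200 handling are assumptions for the example.

```python
# Publicly documented user-agent tokens used by major AI crawlers.
AI_BOT_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent header matches a known AI crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_BOT_TOKENS)

def handle_request(user_agent: str) -> int:
    """Return an HTTP status: 403 (blocked) for AI crawlers, 200 otherwise."""
    return 403 if is_ai_crawler(user_agent) else 200
```

Note that user-agent matching only stops crawlers that identify themselves honestly; a service like Cloudflare can additionally use network-level signals that a simple string match cannot.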

Alibaba Cloud Ups Its AI Game with 100 Open-Source Models

Alibaba Cloud last week released more than 100 new open-source variants of its large language foundation model, Qwen 2.5, to the global open-source community. The company has also revamped its proprietary offering into a full-stack AI-computing infrastructure spanning cloud products, networking and data center architecture, all aimed at supporting the growing demands of AI computing. The release was announced at the Apsara Conference, the annual flagship event held by the cloud division of China’s e-retail giant, often referred to as the Chinese Amazon.

OpenAI Previews New LLMs Capable of Complex Reasoning

OpenAI is previewing a new series of AI models that can reason through and correct complex coding mistakes, providing a more efficient solution for developers. The new models, part of the OpenAI o1 series, are “designed to spend more time thinking before they respond, much like a person would,” and as a result can “solve harder problems than previous models in science, coding, and math,” OpenAI claims, noting that “through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.” The first model in the series is being released in preview in OpenAI’s popular ChatGPT and in the company’s API.

Alibaba’s Latest Vision Model Has Advanced Video Capability

China’s largest cloud computing company, Alibaba Cloud, has released a new computer vision model, Qwen2-VL, which the company says improves on its predecessor in visual understanding, including video comprehension and processing of text in images in languages including English, Japanese, French, Spanish and Chinese. The company says the model can analyze videos more than 20 minutes long and respond appropriately to questions about their content. Third-party benchmark tests compare Qwen2-VL favorably to leading competitors, and the company is releasing two open-source versions, with a larger private model to come.

New AI Coding App Cursor Gains Following and $60M in Funds

An AI-powered coding app called Cursor is building a fanbase, with everyone from hobbyists to engineers subscribing to the service. The platform reportedly has 30,000 paying customers, among them employees at OpenAI, Midjourney and Perplexity. Referred to as “the ChatGPT of coding,” Cursor uses popular models including GPT-4o and Claude 3.5 Sonnet to automate building apps and other coding tasks. Cursor was launched by two-year-old startup Anysphere, which has raised more than $60 million in Series A funding led by Andreessen Horowitz and Thrive Capital.

Anthropic Publishes Claude Prompts, Sharing How AI ‘Thinks’

In a move toward increased transparency, San Francisco-based AI startup Anthropic has published the system prompts for three of its most recent large language models: Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku. The information is now available on the web and in the Claude iOS and Android apps. The prompts are instruction sets that reveal what the models can and cannot do. Anthropic says it will regularly update the information, emphasizing that evolving system prompts do not affect the API. Examples of Claude’s prompts include “Claude cannot open URLs, links, or videos” and, when dealing with images, “avoid identifying or naming any humans.”

OpenAI Pushes GPT-4o Customization with Free Token Offer

OpenAI announced that its newest model, GPT-4o, can now be customized. The company said the ability to fine-tune the multimodal GPT-4o has been “one of the most requested features from developers.” Fine-tuning can steer the model toward a more specific response structure and tone, or teach it to follow instruction sets geared toward individual use cases. Developers can now train on custom datasets, aiming for better performance at a lower cost. The ChatGPT maker is rolling out the welcome mat by offering 1 million training tokens per day “for free for every organization” through September 23.
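Fine-tuning datasets for OpenAI's chat models are submitted as JSONL, one training example per line, each example a list of chat messages with roles. The sketch below shows that format; the example conversations themselves are invented for illustration.

```python
import json

# Invented training examples in the JSONL chat format used for fine-tuning:
# each record holds a "messages" list of system/user/assistant turns.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer in a formal, concise tone."},
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
    ]},
    {"messages": [
        {"role": "system", "content": "You answer in a formal, concise tone."},
        {"role": "user", "content": "Name the largest planet."},
        {"role": "assistant", "content": "The largest planet is Jupiter."},
    ]},
]

def to_jsonl(records) -> str:
    """Serialize training examples as JSONL: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
```

The resulting file is what a developer would upload before creating a fine-tuning job; steering tone, as here, is exactly the kind of customization the announcement describes.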

Meta, Spotify Issue Statement Criticizing EU’s AI Regulations

Meta Platforms CEO Mark Zuckerberg and Spotify CEO Daniel Ek have joined forces to express displeasure with the European Union’s regulations on artificial intelligence, claiming they suppress innovation, the opposite of EU lawmakers’ stated goal in passing them. In a joint statement first published in The Economist and then on the Meta and Spotify websites Friday, the duo took aim at alleged EU obstruction of open-source AI development, arguing that Europe’s “fragmented regulatory structure, riddled with inconsistent implementation, is hampering innovation and holding back developers.”

YouTube Invites Content Creators to ‘Brainstorm with Gemini’

YouTube is testing an integration with parent company Google’s Gemini AI. Called Brainstorm with Gemini, the feature invites creators to generate ideas for videos, titles and thumbnails. The limited test makes it available to a handful of creators, whose feedback will be used in deciding how and whether to introduce the feature more broadly. In May, YouTube began testing another AI tool, renaming its “Research” tab “Inspiration.” The Inspiration tool surfaces topics its algorithm predicts a creator’s audience might find interesting, supplying an outline and talking points. Brainstorm is similar but carries Google’s AI branding.

Latest Gemma 2 Models Emphasize Security and Performance

Google has unveiled three additions to its Gemma 2 family of compact yet powerful open-source AI models, emphasizing safety and transparency. The company’s Gemma 2 2B is a 2.6 billion parameter update to the lightweight 2B parameter Gemma 2, with built-in improvements in safety and performance. Built on Gemma 2, ShieldGemma is a suite of safety content classifier models that “filter the input and outputs of AI models and keep the user safe.” Model interpretability tool Gemma Scope offers what Google calls “unparalleled insight into our models’ inner workings.”

OpenAI Brings Advanced Voice Mode Feature to ChatGPT Plus

OpenAI has released its new Advanced Voice Mode in a limited alpha rollout for select ChatGPT Plus users. The feature, implemented in the ChatGPT mobile app on Android and iOS, aims for more natural dialogue with the AI chatbot. Powered by the multimodal GPT-4o, Advanced Voice Mode is said to be able to sense emotional inflections, including excitement and sadness, and can detect singing. According to an OpenAI post on X, the company plans to “continue to add more people on a rolling basis” so that everyone using ChatGPT Plus will have access to the new feature in the fall.

Microsoft Testing Bing Generative Search for User Feedback

Microsoft has begun the release of Bing generative search, making it available for “a small percentage of user queries.” The company says it will solicit user feedback and undertake further testing prior to a broader rollout. Google began dabbling in what it called the Search Generative Experience last summer, then upped the ante by adding a search-optimized version of its Gemini model this spring. The journey was not without controversy, something Microsoft will surely try to avoid. Microsoft says its new AI-driven search functionality “combines the foundation of Bing’s search results with the power of large and small language models (LLMs and SLMs).”