By Paula Parisi, January 27, 2025
Perplexity joins the list of AI companies launching agents, debuting the Perplexity Assistant for Android. The tool uses reasoning, search, browsers and apps to help mobile users with daily tasks. Concurrently, Perplexity, independently founded in 2022 as a conversational AI search engine, has launched an API called Sonar intended for enterprises and developers who want real-time intelligent search, taking on heavyweights like Google, OpenAI and Anthropic. While AI search has to date largely been limited to answers informed by training data, which freezes a model’s knowledge in time, next-generation tools can pull from the Internet in real time. Continue reading Perplexity Bows Real-Time AI Search Tool, Android Assistant
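A minimal sketch of what a Sonar query could look like, assuming the API follows the OpenAI-compatible chat-completions convention at api.perplexity.ai and exposes a model named "sonar"; both details should be verified against Perplexity’s documentation:

```python
# Hypothetical sketch: requesting a real-time answer from Perplexity's Sonar
# API. Endpoint path and model name are assumptions based on the
# OpenAI-compatible convention; check the official docs before use.
import os
import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={
        "model": "sonar",  # assumed model identifier
        "messages": [
            {"role": "user", "content": "What happened in AI news today?"}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the interface mirrors the chat-completions pattern, existing OpenAI-style client code should need little more than a base-URL and model-name swap.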
By Paula Parisi, January 27, 2025
OpenAI has launched Operator, a semi-autonomous AI agent that uses a proprietary web browser to execute tasks like planning a vacation using Tripadvisor or booking restaurant reservations through OpenTable. “It can look at a webpage and interact with it by typing, clicking and scrolling,” explains OpenAI. Operator is powered by a new model called Computer-Using Agent (CUA), and is available in research preview to ChatGPT Pro subscribers in the U.S. Combining GPT-4o’s computer vision capabilities with advanced reasoning, CUA is trained to interact with graphical user interfaces (GUIs) — parsing menus, clicking buttons and reading screen text. Continue reading OpenAI Operator Agent Available to ChatGPT Pro Subscribers
By Paula Parisi, December 10, 2024
Meta Platforms has packed more artificial intelligence into a smaller package with Llama 3.3, which the company released last week. The open-source large language model (LLM) “improves core performance at a significantly lower cost, making it even more accessible to the entire open-source community,” Meta VP of Generative AI Ahmad Al-Dahle wrote on X. The 70-billion-parameter, text-only Llama 3.3 is said to perform on par with the 405-billion-parameter model from Meta’s Llama 3.1 release in July while requiring less computing power, significantly lowering its operational costs. Continue reading Meta’s Llama 3.3 Delivers More Processing for Less Compute
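A hedged sketch of trying the open weights, assuming the Hugging Face id "meta-llama/Llama-3.3-70B-Instruct" and that Meta’s license has been accepted on the Hub; even with the lower compute claims, a 70B model still needs multiple GPUs or aggressive quantization:

```python
# Hedged sketch: chatting with Llama 3.3 70B Instruct via the Transformers
# text-generation pipeline. Model id is an assumption to verify on the Hub.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",
    device_map="auto",    # requires accelerate; shards weights across GPUs
    torch_dtype="auto",
)

messages = [
    {"role": "user", "content": "Summarize the Llama 3.3 release in one sentence."}
]
# With chat-format input, the pipeline returns the full conversation;
# the last message is the model's reply.
result = chat(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```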
By Paula Parisi, December 9, 2024
OpenAI has launched ChatGPT Pro, a $200 per month subscription plan that provides unlimited access to the full version of o1, its new large reasoning model, and all other OpenAI models. The toolkit includes o1-mini, GPT-4o and Advanced Voice. It also includes the new o1 pro mode, “a version of o1 that uses more compute to think harder and provide even better answers to the hardest problems,” OpenAI explains, describing the high-end subscription plan as a path to “research-grade intelligence” for scientists, engineers, enterprises, academics and others who use AI to accelerate productivity. Continue reading OpenAI Announces $200 Monthly Subscription for ChatGPT Pro
By Paula Parisi, December 4, 2024
Alibaba Cloud has released the latest entry in its growing Qwen family of large language models. The new Qwen with Questions (QwQ) is an open-source competitor to OpenAI’s o1 reasoning model. As with competing large reasoning models (LRMs), QwQ can correct its own mistakes, relying on extra compute cycles during inference to assess its responses, making it well suited for reasoning tasks like math and coding. Described as an “experimental research model,” this preview version of QwQ has 32 billion parameters and a 32,000-token context window, leading to speculation that a more powerful iteration is in the offing. Continue reading Qwen with Questions: Alibaba Previews New Reasoning Model
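Because the weights are open, the preview can be run locally. A hedged sketch, assuming the Hugging Face id "Qwen/QwQ-32B-Preview"; note that a large reasoning model spends many output tokens thinking before it answers, so the generation budget is set high:

```python
# Hedged sketch: sampling QwQ's step-by-step reasoning locally with
# Transformers. Checkpoint id is an assumption to verify on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{
    "role": "user",
    "content": "How many positive integers below 100 are divisible by 3 but not by 5?",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# LRMs "think out loud" before the final answer, so leave generous headroom.
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```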
By Paula Parisi, October 21, 2024
Nvidia has debuted a new AI model, Llama-3.1-Nemotron-70B-Instruct, that it claims is outperforming competitors GPT-4o from OpenAI and Anthropic’s Claude 3.5 Sonnet. The impressive showing has prompted speculation of an AI shakeup and a significant shift in Nvidia’s AI strategy, which has thus far been focused primarily on chipmaking. The model was quietly released on Hugging Face, and Nvidia says as of October 1 it ranked first on three top automatic alignment benchmarks, “edging out strong frontier models” and vaulting Nvidia to the forefront of the LLM field in areas like comprehension, context and generation. Continue reading Nvidia’s Impressive AI Model Could Compete with Top Brands
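Beyond the Hugging Face weights, Nvidia also serves its catalog models behind an OpenAI-compatible hosted endpoint. A hedged sketch of that route, where the base URL and the model id "nvidia/llama-3.1-nemotron-70b-instruct" reflect Nvidia’s API catalog as remembered and should be treated as assumptions:

```python
# Hedged sketch: querying Nemotron through NVIDIA's hosted OpenAI-compatible
# endpoint. Base URL and model id are assumptions; verify in NVIDIA's catalog.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

resp = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)
print(resp.choices[0].message.content)
```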
By Paula Parisi, October 8, 2024
On the heels of announcing a $6.6 billion funding round, OpenAI is getting busy with new products, including the latest iteration of ChatGPT. The chatbot will now extend beyond simple questions and answers with Canvas, a new interface that opens in a separate window, allowing collaborative engagement with ChatGPT on writing and coding projects. Launching in beta, Canvas was built with GPT-4o and can be manually selected in the model picker. Canvas is being made available first to global ChatGPT Plus and Team users, with Enterprise and Edu users to follow. The company says it will be available to all free ChatGPT users when it is out of beta. Continue reading ChatGPT Enhances Collaborative Ability with Canvas Interface
By Paula Parisi, October 4, 2024
Nvidia has unveiled the NVLM 1.0 family of multimodal LLMs, a powerful open-source AI that the company says performs comparably to proprietary systems from OpenAI and Google. Led by NVLM-D-72B, with 72 billion parameters, Nvidia’s new entry in the AI race achieved what the company describes as “state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models.” Nvidia has made the model weights publicly available and says it will also be releasing the training code, a break from the closed approach of OpenAI, Anthropic and Google. Continue reading Nvidia Releases Open-Source Frontier-Class Multimodal LLMs
By Paula Parisi, October 3, 2024
OpenAI unveiled major updates at its DevDay conference with the focus largely on making AI more accessible, efficient and affordable. Included were four innovations: Vision Fine-Tuning in the API, Model Distillation, Prompt Caching and the public beta of Realtime API. The approach underscores OpenAI’s effort to empower its developer ecosystem even as it continues to compete for end-users in the enterprise space. The Realtime API gives developers the option of building “nearly real-time” speech-to-speech app experiences, selecting from among six OpenAI voices. Vision Fine-Tuning for GPT-4o enables customization of the model’s visual understanding of images and text. Continue reading OpenAI Showcases Latest Updates for Voice, Picture and More
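Of the four features, Vision Fine-Tuning is the easiest to make concrete: training data stays in the chat-format JSONL used for text fine-tuning, with images attached as image_url content parts alongside the text. A hedged sketch of a single training example (the schema should be checked against OpenAI’s fine-tuning guide; the image URL is hypothetical):

```python
# Hedged sketch: one vision fine-tuning example in chat-format JSONL.
# The image rides along as an "image_url" content part next to the text part.
import json

example = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What traffic sign is shown?"},
                {
                    "type": "image_url",
                    # hypothetical URL standing in for a real hosted image
                    "image_url": {"url": "https://example.com/sign.jpg"},
                },
            ],
        },
        {"role": "assistant", "content": "A yield sign."},
    ]
}

# Append one example per line to build up the training file.
with open("vision_train.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```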
By Paula Parisi, October 1, 2024
The Allen Institute for AI (also known as Ai2, founded by Paul Allen and led by Ali Farhadi) has launched Molmo, a family of four open-source multimodal models. While advanced models “can perceive the world and communicate with us, Molmo goes beyond that to enable one to act in their worlds, unlocking a whole new generation of capabilities, everything from sophisticated web agents to robotics,” according to Ai2. On some third-party benchmark tests, Molmo’s 72 billion parameter model outperforms other open AI offerings and “performs favorably” against proprietary rivals like OpenAI’s GPT-4o, Google’s Gemini 1.5 and Anthropic’s Claude 3.5 Sonnet, Ai2 says. Continue reading Allen Institute Announces Vision-Optimized Molmo AI Models
By Paula Parisi, September 16, 2024
OpenAI is previewing a new series of AI models that can reason and correct complex coding mistakes, providing a more efficient solution for developers. Powered by OpenAI o1, the new models are “designed to spend more time thinking before they respond, much like a person would,” and as a result can “solve harder problems than previous models in science, coding, and math,” OpenAI claims, noting that “through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.” The first model in the series is being released in preview in OpenAI’s popular ChatGPT and in the company’s API. Continue reading OpenAI Previews New LLMs Capable of Complex Reasoning
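On the API side, the o1 series behaves like a chat-completions model with a trimmed parameter surface. A hedged sketch using the openai Python client, assuming the "o1-preview" model id from the preview launch; at that launch, "max_completion_tokens" reportedly replaced "max_tokens" for this series, and knobs like temperature were not adjustable:

```python
# Hedged sketch: calling the o1 preview model via the chat-completions API.
# Keep the request plain; the preview rejected many standard parameters.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": "Find the bug: def rev(s): return s.sort()",
    }],
    # Reasoning tokens count against this budget, so set it generously.
    max_completion_tokens=4096,
)
print(resp.choices[0].message.content)
```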
By Paula Parisi, September 5, 2024
China’s largest cloud computing company, Alibaba Cloud, has released a new computer vision model, Qwen2-VL, which the company says improves on its predecessor in visual understanding, including video comprehension and the handling of text within images in languages including English, Japanese, French, Spanish, Chinese and others. The company says it can analyze videos of more than 20 minutes in length and respond appropriately to questions about their content. Third-party benchmark tests compare Qwen2-VL favorably to leading competitors, and the company is releasing two open-source versions, with a larger private model to come. Continue reading Alibaba’s Latest Vision Model Has Advanced Video Capability
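A hedged sketch of image question-answering with the open weights, assuming the "Qwen/Qwen2-VL-7B-Instruct" checkpoint id on Hugging Face and the Qwen2-VL classes that ship with recent versions of Transformers (both worth verifying locally):

```python
# Hedged sketch: image Q&A with Qwen2-VL open weights via Transformers.
# Checkpoint id and class names are assumptions; the image URL is hypothetical.
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open(
    requests.get("https://example.com/frame.jpg", stream=True).raw
)

# The chat template inserts an image placeholder; the processor pairs it
# with the actual pixels below.
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What is happening in this picture?"},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```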
By Paula Parisi, September 4, 2024
An AI-powered coding app called Cursor is building a fanbase, with everyone from hobbyists to engineers subscribing to the service. The platform reportedly has 30,000 paying customers, among them employees at OpenAI, Midjourney and Perplexity. Referred to as “the ChatGPT of coding,” Cursor uses popular models including GPT-4o and Claude 3.5 Sonnet to automate building apps and other coding tasks. Cursor was launched by two-year-old startup Anysphere, which has raised more than $60 million in Series A funding led by Andreessen Horowitz and Thrive Capital. Continue reading New AI Coding App Cursor Gains Following and $60M in Funds
By Paula Parisi, August 27, 2024
OpenAI announced that its newest model, GPT-4o, can now be customized. The company said the ability to fine-tune the multimodal GPT-4o has been “one of the most requested features from developers.” Customization can steer the model toward a more specific structure and tone in its responses, or have it follow instruction sets geared toward individual use cases. Developers can now train on custom datasets, aiming for better performance at a lower cost. The ChatGPT maker is rolling out the welcome mat by offering 1 million training tokens per day “for free for every organization” through September 23. Continue reading OpenAI Pushes GPT-4o Customization with Free Token Offer
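A hedged sketch of what starting such a job looks like with the openai Python client, assuming a prepared chat-format JSONL file and the "gpt-4o-2024-08-06" snapshot OpenAI listed as fine-tunable at launch (verify the current snapshot name in the docs):

```python
# Hedged sketch: launching a GPT-4o fine-tuning job with the openai client.
# train.jsonl holds {"messages": [...]} examples in standard chat format.
from openai import OpenAI

client = OpenAI()

# Upload the training data first; fine-tuning jobs reference it by file id.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed fine-tunable snapshot
)
print(job.id, job.status)
```

Once the job finishes, the resulting fine-tuned model id can be used in chat-completions calls like any other model.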
By Paula Parisi, August 5, 2024
OpenAI has released its new Advanced Voice Mode in a limited alpha rollout for select ChatGPT Plus users. The feature, which is being implemented for the ChatGPT mobile app on Android and iOS, aims for more natural dialogue with the AI chatbot. Powered by GPT-4o, which is multimodal, Advanced Voice Mode is said to be able to sense emotional inflections, including excitement, sadness or singing. According to an OpenAI post on X, the company plans to “continue to add more people on a rolling basis” so that everyone using ChatGPT Plus will have access to the new feature in the fall. Continue reading OpenAI Brings Advanced Voice Mode Feature to ChatGPT Plus