By Paula Parisi, March 27, 2025
Google has released what it calls its most intelligent AI model yet, Gemini 2.5. The first 2.5 model release, an experimental version of Gemini 2.5 Pro, is a next-gen reasoning model that Google says outperformed OpenAI o3-mini and Claude 3.7 Sonnet from Anthropic on common benchmarks “by meaningful margins.” Gemini 2.5 models “are thinking models, capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy,” according to Google. The new model comes just three months after Google released Gemini 2.0 with reasoning and agentic capabilities. Continue reading Google Debuts Next-Gen Reasoning Models with Gemini 2.5
By Paula Parisi, March 25, 2025
Anthropic’s Claude can now search the Internet in real time, allowing it to provide timely and relevant responses that are also more accurate than what the chatbot previously offered, according to the company. Claude incorporates direct citations for its Web-retrieved material, so users can fact-check its sources. “Instead of finding search results yourself, Claude processes and delivers relevant sources in a conversational format,” Anthropic explains. While this is not exactly groundbreaking — ChatGPT, Grok 3, Copilot, Perplexity and Gemini all have real-time Web retrieval and most include citations — Claude takes a slightly different approach. Continue reading Real-Time Web Access Informs Claude 3.7 Sonnet Responses
By Paula Parisi, March 18, 2025
Baidu has launched two new AI systems, the native multimodal foundation model Ernie 4.5 and the deep-thinking reasoning model Ernie X1. The latter supports features like generative imaging, advanced search and webpage content comprehension. Baidu is touting Ernie X1 as comparable in performance to another Chinese model, DeepSeek-R1, at half the price. Both Baidu models are available to the public, including individual users, through the Ernie website. Baidu, the dominant search engine in China, says its new models mark a milestone in both reasoning and multimodal AI, “offering advanced capabilities at a more accessible price point.” Continue reading Baidu Releases New LLMs that Undercut Competition’s Price
By Paula Parisi, March 12, 2025
Taiwan’s Foxconn, the contract manufacturer that assembles Apple’s iPhones, has built its own AI. The large language model, called FoxBrain, was trained in just four weeks with help from Nvidia, the company says, using 120 of Nvidia’s H100 chips. FoxBrain has reasoning and mathematical skills and can analyze data and generate code. Initially built for in-house use, FoxBrain will be open-sourced, Foxconn says; the company hopes it will become a collaborative tool for its partners and enable advances in manufacturing techniques and supply-chain management. Continue reading Foxconn AI Trained in Four Weeks, Suggesting Industry Shift
By Paula Parisi, March 10, 2025
Alibaba is making AI news again, releasing another Qwen reasoning model, QwQ-32B, which was trained and scaled using reinforcement learning (RL). The Qwen team says it “has the potential to enhance model performance beyond conventional pretraining and post-training methods.” QwQ-32B, a 32-billion-parameter model, “achieves performance comparable to DeepSeek-R1, which boasts 671 billion parameters (with 37 billion activated),” Alibaba claims. While parameters refer to the full set of adjustable weights and biases in a model’s neural network, “activated” parameters are the subset actually used for a specific inference task, such as generating a response. Continue reading Alibaba Says Qwen Reasoning Model on Par with DeepSeek
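The total-versus-activated distinction comes from mixture-of-experts (MoE) architectures, where only a routed subset of expert weights runs per token. A toy tally makes the arithmetic concrete (the layer sizes below are invented for illustration, not DeepSeek-R1’s actual architecture):

```python
# Toy illustration: total vs. "activated" parameters in one
# mixture-of-experts (MoE) layer. Sizes are hypothetical.

def moe_param_counts(num_experts, params_per_expert,
                     experts_per_token, shared_params):
    """Return (total, activated) parameter counts for one MoE layer."""
    # All experts must be stored, so they all count toward total size.
    total = shared_params + num_experts * params_per_expert
    # But only the routed experts run for a given token.
    activated = shared_params + experts_per_token * params_per_expert
    return total, activated

total, activated = moe_param_counts(
    num_experts=16,           # experts stored in the layer
    params_per_expert=4_000_000,
    experts_per_token=2,      # experts the router selects per token
    shared_params=1_000_000,  # attention/router weights, always active
)
print(f"total={total:,}  activated={activated:,}")
# → total=65,000,000  activated=9,000,000
```

A dense model of the same total size would activate all 65 million weights on every token, which is why an MoE model’s activated count, not its headline parameter count, better predicts per-token inference cost.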
By Paula Parisi, March 6, 2025
Amazon is ramping up its AI activity, reportedly planning to release its own advanced reasoning model as part of the company’s Nova family. The Nova line was introduced in December at re:Invent and the new addition could debut as early as June. Its reasoning prowess is being compared to the abilities of OpenAI’s o3-mini and DeepSeek-R1. But reports say Amazon is taking the hybrid reasoning approach embraced by Anthropic’s Claude 3.7 Sonnet (Amazon has a 10 percent stake in Anthropic). The e-retail giant is also preparing for an agentic AI push, having established a dedicated unit, reports say. Continue reading Amazon Plans an AI Push with Nova Reasoning Model, Agents
By Paula Parisi, March 4, 2025
OpenAI is releasing a research preview of what it calls its “largest and best” chat model to date, GPT‑4.5, which scales unsupervised learning in pre-training and post-training. As a result, the new chat model can recognize patterns, draw connections and generate creative insights without drawing on time- and energy-consuming “reasoning.” GPT‑4.5 is currently available to ChatGPT Pro subscribers ($200 per month) and developers subscribing to OpenAI’s API tier. ChatGPT Plus and ChatGPT Team customers are expected to gain access this week. Continue reading OpenAI’s GPT-4.5 Model Sees Patterns and Thinks Creatively
By Paula Parisi, February 28, 2025
Nvidia delivered stellar earnings again, with profit up 80 percent to $22.09 billion for fiscal Q4, the period that ended January 26, 2025. Record quarterly revenue hit $39.3 billion, a 12 percent uptick from Q3 and a 78 percent increase year-over-year, driven in part by sales of the company’s Blackwell AI chips. The results rebut predictions that the leading-edge chipmaker would suffer due to a recent wave of Chinese AI models created using fewer and largely older chips. That trend rocked Nvidia stock over the past quarter, but the Silicon Valley-based company managed to maintain momentum. Continue reading New Blackwell AI Chip Helps Boost Nvidia to Record Quarter
By Paula Parisi, February 26, 2025
Anthropic has released a new frontier model, Claude 3.7 Sonnet, described as the industry’s first “hybrid AI reasoning model.” The new Claude is different in that it can either respond to questions in real time or “think” about a problem for a prolonged period, essentially for as long as a user would like. Users can choose between “near-instant responses or extended, step-by-step thinking that is made visible to the user” by selecting the appropriate “reasoning” capability for Claude, Anthropic says. Along with the new model, Anthropic is also debuting a command line tool for agentic coding, Claude Code. Continue reading Anthropic Introduces a New Claude Hybrid Reasoning Model
By Paula Parisi, February 14, 2025
OpenAI has decided to simplify its product offerings. A month after announcing the in-development o3 as its next frontier model, the company has canceled it as a standalone release, explaining that it will be integrated into the upcoming GPT-5 instead. “A top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks,” OpenAI co-founder and CEO Sam Altman wrote in a social media post this week. Expected to ship later this year, the GPT-5 models will incorporate voice, canvas, search, deep research and more, OpenAI says. Continue reading Sam Altman Reveals Plans to Simplify OpenAI’s Product Line
By Paula Parisi, January 10, 2025
OpenAI has unveiled a new frontier model, OpenAI o3, which it claims can “reason” through challenges involving math, science and computer programming. Available to safety and research testers, it is expected to be available to individuals and businesses this year. OpenAI o3 is said to be over 20 percent more efficient at common programming tasks than its predecessor OpenAI o1 and beat a company scientist on a programming test. Model o3 is part of a broader effort to create AI systems that can reason through complex problems. In late December Google debuted a similar platform, the experimental Gemini 2.0 Flash Thinking Mode. Continue reading OpenAI Previews Two New Reasoning Models: o3 and o3-Mini
By Paula Parisi, December 4, 2024
Alibaba Cloud has released the latest entry in its growing Qwen family of large language models. The new Qwen with Questions (QwQ) is an open-source competitor to OpenAI’s o1 reasoning model. As with competing large reasoning models (LRMs), QwQ can correct its own mistakes, relying on extra compute cycles during inference to assess its responses, making it well suited for reasoning tasks like math and coding. Described as an “experimental research model,” this preview version of QwQ has 32 billion parameters and a 32,000-token context window, leading to speculation that a more powerful iteration is in the offing. Continue reading Qwen with Questions: Alibaba Previews New Reasoning Model
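The draft-verify-revise loop these models run at inference time can be sketched in miniature (a conceptual toy, not Alibaba’s actual method; the solver, checker and reviser below are deliberately simple stand-ins for model passes):

```python
# Toy sketch of inference-time self-correction: spend extra compute
# checking a draft answer and revising it before responding.
# The "model" here is a deliberately buggy arithmetic solver.

def draft_answer(a, b):
    # First pass: a flawed draft (off by one, to force a revision).
    return a + b + 1

def verify(a, b, answer):
    # Extra inference-time compute: check the draft against the task.
    return answer == a + b

def revise(a, b, answer):
    # Nudge the draft toward a solution that passes the check.
    return answer - 1 if answer > a + b else answer + 1

def answer_with_self_check(a, b, max_passes=3):
    ans = draft_answer(a, b)
    for _ in range(max_passes):  # each extra pass costs more compute
        if verify(a, b, ans):
            return ans
        ans = revise(a, b, ans)
    return ans

print(answer_with_self_check(2, 3))  # the flawed draft 6 is corrected to 5
```

The design point is the trade-off the article describes: accuracy on checkable tasks like math and coding improves because wrong drafts get caught, at the cost of additional compute spent on the verification passes.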