Google Adds Gemini Flash Thinking to Search, Maps and More

Google has initiated a flurry of AI activity following the recent wave of AI releases from Chinese companies. The Alphabet company has launched an experimental version of a new flagship AI model, Gemini 2.0 Pro. Its premier model for coding and complex prompts is now available in Google AI Studio, Vertex AI and the Gemini Advanced app. The company has also made its general-purpose “workhorse” model, Gemini 2.0 Flash, generally available via the Gemini API in Google AI Studio and Vertex AI. This follows last week’s announcement that Gemini 2.0 Flash is powering the Gemini app for desktop and mobile.
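For developers curious what general availability via the Gemini API looks like in practice, here is a minimal sketch using Google’s google-genai Python SDK. The model name “gemini-2.0-flash” follows Google’s published naming, but the prompt and the GEMINI_API_KEY environment variable are placeholder assumptions; an actual key comes from Google AI Studio.

```python
# Minimal sketch: calling Gemini 2.0 Flash through the Gemini API.
# Assumes the google-genai SDK (pip install google-genai) and an API key
# created in Google AI Studio, exposed via the GEMINI_API_KEY env variable.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",  # the generally available "workhorse" model
    contents="Summarize the difference between a reasoning model and an LLM.",
)
print(response.text)
```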

The company’s “most cost-efficient model yet,” Gemini 2.0 Flash-Lite, has moved into public preview and is available for testing in Google AI Studio and Vertex AI. Finally, the Gemini 2.0 Flash Thinking experimental model is coming to Gemini app users on desktop, Android and iOS.

“All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months,” Google said in a blog post, referring interested parties to the Google for Developers blog for information about pricing.

TechCrunch reports that “Google is releasing these AI models as the tech world remains fixated on cheaper AI reasoning models offered by the Chinese AI startup DeepSeek,” which offers open-source and accessible tech “through the company’s API for a relative steal.”

On Alphabet’s Q4 earnings call, Google and Alphabet CEO Sundar Pichai predicted “2025 is going to be one of the biggest years for search innovation” as “Gemini 2.0’s advances in multimodality and native tool use enable us to build new AI agents that bring us closer to our vision of a universal assistant.”

The new AI-powered Search “browses the Internet for you, looks at web pages, and returns an answer,” writes TechCrunch, adding that it’s a long way from returning “10 blue links.”

With its slew of new announcements, Google is playing to its multimodal strengths, VentureBeat points out, noting that neither DeepSeek-R1 nor the new OpenAI o3-mini “can accept multimodal inputs — that is, images and file uploads or attachments.”
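That multimodal capability is exposed directly through the same API. The sketch below, again a hedged example built on the google-genai SDK, sends an image alongside a text prompt in a single request; the file path “cat.jpg” stands in for any local image.

```python
# Sketch of a multimodal request: an image plus a text prompt in one call.
# Assumes the google-genai SDK and GEMINI_API_KEY as in the earlier example;
# "cat.jpg" is a placeholder path for any local image file.
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

with open("cat.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Describe what is in this image.",
    ],
)
print(response.text)
```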

For images, R1 relies on optical character recognition (OCR), a technology more than 60 years old, to extract text while “not actually understanding or analyzing any of the other features contained therein,” VentureBeat says, adding that both R1 and o3-mini are reasoning models, as opposed to “LLMs like the Gemini 2.0 Pro series.”

Google’s Gemini 2.0 Flash Thinking reasoning model “can be connected to Google Maps, YouTube and Google Search, allowing for a whole new range of AI-powered research and interactions that simply can’t be matched” by DeepSeek and OpenAI, which lack such ancillary businesses.
