Google has infused search with more Gemini AI, adding expanded AI Overviews and more planning and research capabilities. “Ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork” culling from “a knowledge base of billions of facts about people, places and things,” explained Google and Alphabet CEO Sundar Pichai at the Google I/O developer conference. AI Overviews will roll out to all U.S. users this week. Coming soon are customizable AI Overview options that can simplify language or add more detail.
Google Head of Search Liz Reid explained that the company has trained a Gemini model specifically for search, combining real-time information, Google rankings and multimodal features.
“Google has been testing AI-powered overviews through its Search Generative Experience (SGE) since last year,” and Google intends to expand it beyond its “hundreds of millions” of U.S. users to over a billion global users by the end of the year, TechCrunch reports.
Google’s AI Overviews are AI-generated summaries that U.S. users will begin to see atop search results pages. The Alphabet company says the links included in AI Overviews “get more clicks than if the page had appeared as a traditional web listing for that query,” emphasizing their value to publishers and creators.
“As always, ads will continue to appear in dedicated slots throughout the page, with clear labeling to distinguish between organic and sponsored results,” a blog post by Reid explains.
Pichai touted the abilities of Gemini, “a frontier model built to be natively multimodal from the beginning” and capable of reasoning “across text, images, video, code, and more,” turning any input into any output. Pichai calls it “an ‘I/O’ for a new generation.”
More than 1.5 million developers are now using Gemini, and not just in Search, but across Photos, Workspace, Android and other apps, Pichai said in his keynote.
The Verge calls the Gemini upgrade “nothing short of a full-stack AI-ification of search,” explaining “Google is using its Gemini AI to figure out what you’re asking about, whether you’re typing, speaking, taking a picture, or shooting a video.”
TechCrunch referenced “criticism about AI-powered search changing the way the web and websites work and how it will affect businesses and journalism,” a topic The New York Times delves into.
Reid says AI Overviews won’t appear “when traditional search is sufficient to serve results,” according to TechCrunch, which writes that “the feature is more useful for queries that are more complex and information is scattered.”
Related:
Everything Google Announced at I/O 2024, Wired, 5/14/24
Google Takes the Next Step in Its AI Evolution, The New York Times, 5/14/24
Google Experiments with Using Video to Search, Thanks to Gemini AI, TechCrunch, 5/14/24
Google’s Generative AI Can Now Analyze Hours of Video, TechCrunch, 5/14/24
Google Expands Digital Watermarks to AI-Made Video and Text, Engadget, 5/14/24
Introducing VideoFX, Plus New Features for ImageFX and MusicFX, Google Blog, 5/14/24
Project IDX, Google’s Next-Gen IDE, Is Now in Open Beta, TechCrunch, 5/14/24
Google’s New Private Space Feature Is Like Incognito Mode for Android, TechCrunch, 5/15/24
Google Will Use Gemini to Detect Scams During Calls, TechCrunch, 5/14/24
Google’s Call-Scanning AI Could Dial Up Censorship by Default, TechCrunch, 5/15/24