Google Releases Gemini 2.0 in Shift Toward Agentic Era of AI

Google has introduced Gemini 2.0, the latest version of its multimodal AI model, signaling a shift toward what the company is calling “the agentic era.” The upgraded model promises not only to outperform previous iterations on standard benchmarks but also to introduce more proactive, or agentic, functions. The company announced that “Project Astra,” its experimental assistant, would receive updates that allow it to use Google Search, Lens, and Maps, and that “Project Mariner,” a Chrome extension, would enable Gemini 2.0 to navigate a user’s web browser to complete tasks autonomously.

Google Shopping Redesigned with Gemini Feed, Infinite Scroll

Just in time for the holiday season, Google Shopping is launching an AI-powered personalized feed that recommends items customers might like. The redesign is coming to desktop and mobile devices in the U.S. in the coming weeks. Suggested items are based on search and YouTube histories as well as AI inference. Shoppers will get “an AI-generated brief with top things to consider” when looking for the right item, plus a curated feed of products. For now, the brief will be labeled “experimental,” and Google is encouraging feedback for the times the AI doesn’t get it 100 percent right.

Google Serving Ads in AI Overviews and Lens Search Results

Having demonstrated how advertisements in its AI Overviews would work back in May at its Google Marketing Live event, the search giant is now adding the feature for U.S. mobile users and plans to include Google Lens shopping ads “above and alongside visual search results by the end of the year.” “The ways people ask questions today have expanded beyond the search box,” notes Google, framing the move as a response to that evolution as artificial intelligence helps consumers use their voices and cameras “to explore the world around them.”

Snapchat: My AI Goes Multimodal with Google Cloud, Gemini

Snap Inc. is leveraging its relationship with Google Cloud, using Gemini to power generative AI experiences within Snapchat’s My AI chatbot. The multimodal capabilities of Gemini on Vertex AI will greatly increase My AI’s ability to understand and operate across different types of information such as text, audio, image, video and code. Snapchatters can use My AI to take advantage of Google Lens-like features, including asking the chatbot “to translate a photo of a street sign while traveling abroad, or take a video of different snack offerings to ask which one is the healthiest option.”
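
For developers, Snap’s setup corresponds to Gemini’s multimodal API on Vertex AI. Below is a minimal sketch of a combined image-and-text request using the Vertex AI Python SDK; the project ID, bucket URI and model name are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch of a multimodal Gemini call on Vertex AI (Python SDK).
# Project ID, region, model name and image URI are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.0-pro-vision")  # model name is an assumption

# Combine an image with a text prompt in one request, mirroring the
# "translate a photo of a street sign" use case described above.
sign_photo = Part.from_uri("gs://my-bucket/street_sign.jpg", mime_type="image/jpeg")
response = model.generate_content(
    [sign_photo, "Translate the text on this sign into English."]
)
print(response.text)
```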

Figma Redesigns Its User Interface and Adds New AI Features

Figma is rolling out its third redesigned user interface, UI3, aimed at making the company even more competitive with Adobe. New native AI features accelerate workflows, helping teams build high-quality software. Available in limited beta, Figma AI adds the ability to generate design drafts from a single prompt, enabling rapid experimentation and prototyping. The move advances Figma’s goal of evolving from a design tool into a full-blown product development platform, making the service intuitive and friendly enough for novices while maintaining the full feature set demanded by Figma’s professional users.

Google Adds Open-Source Gameface for Android Developers

In a move aimed at launching more accessible Android apps, Google has open-sourced code for Project Gameface, a hands-free game control feature released last year that allows users to operate a computer with facial and head gestures. Developers will now have more Gameface resources with which to build Android applications for physically challenged users, “to make every Android device more accessible.” Project Gameface evolved from a collaboration with quadriplegic video game streamer Lance Carr, who has muscular dystrophy. The technology uses a smartphone’s front camera to track movement.
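
The open-sourced Gameface code builds on MediaPipe’s face-tracking tasks. As a rough illustration of the underlying approach, the sketch below reads blendshape scores from MediaPipe’s Face Landmarker and maps one gesture to an action; the model file, input frame and threshold are assumptions for illustration, not values from Google’s release.

```python
# Sketch: detecting a facial gesture with MediaPipe's Face Landmarker,
# the face-tracking task Project Gameface builds on.
import mediapipe as mp
from mediapipe.tasks.python import BaseOptions
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # blendshapes score gestures like smiles
)
landmarker = vision.FaceLandmarker.create_from_options(options)

frame = mp.Image.create_from_file("webcam_frame.png")  # placeholder frame
result = landmarker.detect(frame)

if result.face_blendshapes:
    scores = {b.category_name: b.score for b in result.face_blendshapes[0]}
    # A raised-brow gesture above an (assumed) threshold could map to a click.
    if scores.get("browInnerUp", 0.0) > 0.5:
        print("gesture detected: trigger click")
```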

Google Taps AI for Tools to Help Authenticate Search Results

Google is rolling out three new tools to help verify images and search results. “About this image,” Fact Check Explorer and Search Generative Experience (SGE) all add context to Google Search results. “About this image” is rolling out globally to English-language users as part of the Google Search UI. Available in beta since summer, Fact Check Explorer will let journalists and professional fact-checkers delve into an image or topic more deeply via an API. Search Generative Experience uses generative AI to investigate websites and share what it finds, populating source descriptions that will appear for some sites in “more about this page.”
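
The Fact Check Explorer API mentioned above is in beta, but Google’s long-standing Fact Check Tools API gives a sense of what programmatic claim lookups look like. A minimal sketch, assuming a valid API key and an illustrative query:

```python
# Sketch: querying Google's Fact Check Tools claim-search endpoint.
# The API key and query string are placeholders; the response fields
# follow the documented schema.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
resp = requests.get(
    "https://factchecktools.googleapis.com/v1alpha1/claims:search",
    params={"query": "moon landing", "key": API_KEY},
    timeout=10,
)
resp.raise_for_status()

# Each claim carries one or more reviews with a publisher rating.
for claim in resp.json().get("claims", []):
    review = claim["claimReview"][0]
    print(claim.get("text"), "->", review.get("textualRating"), review.get("url"))
```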

OpenAI’s ChatGPT Upgraded with ‘Talk’ Tech, Image Search

OpenAI is experimenting with new voice and image capabilities in ChatGPT. According to the company, users can now “speak with ChatGPT and have it talk back,” thanks to an intuitive new interface that, in addition to facilitating voice conversations, will allow users to show ChatGPT an image to discuss. “Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it,” OpenAI explains, alternatively suggesting you “snap pictures of your fridge and pantry to figure out what’s for dinner” or have it help with homework based on pictures of a math problem.
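
The voice and image features described here are consumer ChatGPT capabilities, but developers can approximate the image-discussion flow through OpenAI’s chat API with a vision-capable model. A minimal sketch; the model name and image URL are placeholder assumptions:

```python
# Sketch: discussing an image programmatically via OpenAI's chat API,
# approximating ChatGPT's "show it a picture" flow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model; name is an assumption
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What landmark is this, and what makes it interesting?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/landmark.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```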

Google Touts Search Plans During Its ‘Live from Paris’ Event

Google unveiled new search features during its “Live from Paris” event, streamed on YouTube. The emphasis was on multisearch, which the company says is going live globally on mobile platforms in the more than 70 languages where Google Lens is available. Introduced last year, the multisearch feature lets users search with images and text combined, driven by an AI technology the company developed called MUM, for Multitask Unified Model. There were no new announcements regarding Bard, Google’s new conversational AI search tool, although media outlets reported that Bard responded incorrectly in a Twitter promo the same day.

Google Intros New Features for Search, Maps and Shopping

Google is starting to publicly roll out many of the new features introduced at its Search On event in September. Spanning Google Search, Shopping and Maps, the tools let consumers do things like search for a favorite restaurant dish by name, as in “truffle mac and cheese near me.” A visual search experience for Maps called Live View lets users glimpse street scenes in cities including London, Los Angeles, New York, Paris, San Francisco and Tokyo. And an AR shopping feature invites people to try on everything from makeup to accessories using a library of 148 models.

Google Search Reinvention Focuses on Visuals and Discovery

Google is the latest tech giant to be swayed by the influence of TikTok and Instagram as it reimagines a more visual, discovery-centric type of search. That was major media’s takeaway from the third annual Google Search On event, which continued the trend of trying to find more intuitive ways to search, namely visually and vocally, by snapping a photo or asking your phone a question. Thanks to advances in artificial intelligence, the Alphabet company says it is “going far beyond the search box to create search experiences that work more like our minds.”

Google Search Will Use MUM AI to Combine Text and Images

Google Lens visual search will be updated to incorporate the company’s new AI technology, the Multitask Unified Model (MUM), which understands context and draws from various formats, including text, images and videos. With MUM, users will be able to add text to refine visual search queries. For instance, you could use your phone to snap a photo of a favorite shirt using the Google Lens feature — or find a shirt you like through Google Search — then tap the Lens icon on the open image and type “socks with this pattern” to search with specificity.

Google I/O: Android 12, Remote Working Tools, Wear Update

At this week’s Google I/O developer conference, the company unveiled its Android 12 mobile operating system with numerous visual changes and new privacy features. The company also showcased Project Starline, a prototype virtual meeting booth that could replace Google Meet. In addition, Google is tweaking its Wear OS smartwatch platform and has improved discovery in Photos and Chrome’s built-in password manager. Google also debuted more remote working tools and new natural language capabilities, and tweaked Maps and its shopping tools.

Google Upgrades Shopping Portal, Extends Lens Capability

Google has streamlined its Shopping desktop and mobile portals in anticipation of the holiday season and unveiled a fashion recommendation engine for Google Lens, its AI-enabled computer vision search tool. According to Google Shopping vice president Surojit Chatterjee, the redesign is aimed at making it easier for users to “research and buy” what they are looking for. A personalized homepage offers product suggestions, and new sections allow re-ordering. Also more prominent are links to “nearby and online” stores.

Latest Google Feature Provides Shortcut to Video Highlights

Google introduced Key Moments, a feature that lets users jump straight to highlights within videos. A search for a how-to video, for example, will bring up links that creators have time-stamped. According to Google, the feature will also make video easier to find for people using screen-reading software to navigate the Internet. Key Moments will first appear in English for YouTube videos time-stamped by their creators. It is limited to a small number of creators for now, but those interested can sign up for early access.
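
Key Moments initially draws on the timestamps YouTube creators add themselves, but Google’s search documentation also describes Clip structured data for marking key moments on video pages. A hypothetical sketch of that markup, with placeholder names, URLs and offsets:

```python
# Sketch: building the VideoObject "Clip" structured data Google documents
# for marking key moments on a video page. All values are placeholders.
import json

video_markup = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "How to repot a houseplant",
    "hasPart": [
        {
            "@type": "Clip",
            "name": "Choosing a pot",
            "startOffset": 30,   # seconds into the video
            "endOffset": 110,
            "url": "https://example.com/video?t=30",
        },
    ],
}
# Emit the JSON-LD block a page would embed in a <script> tag.
print(json.dumps(video_markup, indent=2))
```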