By Paula Parisi, March 25, 2025
Google has added a Canvas feature to its Gemini AI chatbot, giving users a real-time collaborative space where they can refine writing and coding projects and iterate on and share other ideas. “Canvas is designed for seamless collaboration with Gemini,” according to Gemini Product Director Dave Citron, who notes that Canvas makes it “an even more effective collaborator” in helping bring ideas to life. The move continues a trend of AI companies turning chatbot platforms into turnkey productivity suites. Google is also launching a limited release of Gemini Live Video and bringing NotebookLM’s Audio Overview feature to Gemini. Continue reading Canvas and Live Video Add Productivity Features to Gemini AI
By Paula Parisi, March 6, 2025
Google plans to launch video- and screen-sharing capabilities for Gemini Live by the end of the month as part of the Gemini app on Android, according to discussions coming out of Mobile World Congress in Barcelona this week. Previewed a year ago as Project Astra, the new functionality will let Gemini Live accept a video stream captured in real time by the phone’s camera and answer questions about the feed conversationally, based on voice input. Screen sharing will bring live video interaction with Gemini to mobile, a capability Gemini 2.0 currently offers desktop users. Continue reading Google Live Gets Computer Vision Screen Sharing This Month
By Rob Scott, February 18, 2025
Google announced last week that its Gemini AI chatbot now offers the ability to provide responses based on earlier conversations. It can also summarize a previous chat and recall information the user has shared in other threads. “Whether you’re asking a question about something you’ve already discussed, or asking Gemini to summarize a previous conversation, Gemini now uses information from relevant chats to craft a response,” according to Google. The new feature is rolling out via Google’s $20-per-month One AI Premium Plan to start and will be available to Google Workspace Business and Enterprise customers in the coming weeks. Continue reading Gemini Recalls Previous Chats to Provide Helpful Responses
By Paula Parisi, January 27, 2025
Samsung’s new Galaxy S25 line — the Galaxy S25 Ultra, Galaxy S25+ and Galaxy S25 — will more tightly integrate AI, including AI agents, becoming “true AI companions” at a level previously unknown to mobile devices. That leap is credited largely to a “first-of-its-kind” Snapdragon 8 Elite customization for the Galaxy chipset that “delivers greater on-device processing power for Galaxy AI and superior camera range and control with Galaxy’s next-gen ProVisual Engine,” according to Samsung. In addition, the top-of-the-line Galaxy S25 Ultra has been redesigned with a slightly larger 6.9-inch screen and rounded bezel. Continue reading Galaxy Unpacked: More AI for S25 and a Peek at AR Glasses
By Paula Parisi, December 18, 2024
Meta has added new features to its Ray-Ban Meta smart glasses in time for the holidays via a firmware update that makes them “the gift that keeps on giving,” per Meta marketing. “Live AI” adds computer vision, letting Meta AI see and record what you see “and converse with you more naturally than ever before.” Along with Live AI, Live Translation is available for Meta Early Access members. Spanish, French or Italian will be translated into English (or vice versa) in real time as audio through the glasses’ open-ear speakers. In addition, Shazam support has been added for users who want an easy way to identify songs. Continue reading Ray-Ban Meta Gets Live AI, RT Language Translation, Shazam
By Hank Gerba, December 16, 2024
Google has introduced Gemini 2.0, the latest version of its multimodal AI model, signaling a shift toward what the company is calling “the agentic era.” The upgraded model not only promises to outperform previous iterations on standard benchmarks but also introduces more proactive, or agentic, functions. The company announced that “Project Astra,” its experimental assistant, would receive updates that allow it to use Google Search, Lens, and Maps, and that “Project Mariner,” a Chrome extension, would enable Gemini 2.0 to navigate a user’s web browser to complete tasks autonomously. Continue reading Google Releases Gemini 2.0 in Shift Toward Agentic Era of AI
By Paula Parisi, May 17, 2024
Google is showing off a developmental chatbot it says represents the future of AI assistants. Called Project Astra, it can “see” and “hear,” remember the information it ingests, and then answer questions about it, from simple queries such as “Where did I leave my glasses?” to unpacking and explaining computer code. Demonstrated at the Google I/O conference this week, Astra understands the world “just like people do” and is able to converse naturally, in real time. The company says some Project Astra features may come to Gemini late this year. Continue reading Google Teases Astra AI Assistant and Debuts Gemini 1.5 Pro