By Paula Parisi, June 4, 2025
Google has quietly released the AI Edge Gallery app, which lets users download AI models and run them locally, with no Internet connection required. Available now for Android and eventually on iOS, the experimental app is hosted on GitHub, where it can be downloaded for free. Users can find compatible on-device models, such as Google’s Gemma 3n, and run them offline to generate images, answer questions, and write and edit code on the processors of supported smartphones. While locally run models aren’t as powerful as their cloud counterparts, they offer more privacy and can sometimes be faster. Continue reading Google AI Edge Gallery App Runs Models Locally on Android
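The Gallery itself ships as an Android app built on Google's on-device stack, but the core idea, loading a compact model file from local storage and running inference with no network calls, can be sketched on a desktop too. Below is a minimal analogue using the llama-cpp-python library with a quantized Gemma checkpoint; the package choice, model filename, and prompt are illustrative assumptions, not part of Google's app.

```python
# Minimal sketch of fully offline, local LLM inference.
# Assumes: pip install llama-cpp-python, plus a quantized Gemma
# model file downloaded once in advance (filename is illustrative).
from llama_cpp import Llama

# Load the model from local disk; no network access is needed.
llm = Llama(model_path="gemma-3n.Q4_K_M.gguf", n_ctx=2048)

# Run a prompt entirely on local hardware.
out = llm("Write a Python function that reverses a string.",
          max_tokens=256)
print(out["choices"][0]["text"])
```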
By Paula Parisi, May 17, 2024
In a move aimed at fostering more accessible Android apps, Google has open-sourced code for Project Gameface, a hands-free control feature released last year that lets users operate a computer with facial gestures and head movements. Developers will now have more Gameface resources with which to build Android applications for physically challenged users, “to make every Android device more accessible.” Project Gameface evolved from a collaboration with quadriplegic video game streamer Lance Carr, who has muscular dystrophy. The technology uses a smartphone’s front camera to track movement. Continue reading Google Adds Open-Source Gameface for Android Developers
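Gameface's own pipeline is on GitHub; the sketch below only illustrates the underlying idea, using MediaPipe's Python FaceMesh solution to track a face landmark from the camera feed and steer the OS cursor with pyautogui. The landmark index and the cursor library are my assumptions, not Gameface's implementation.

```python
# Illustrative sketch: steer the cursor with head movement.
# Not Gameface's code; assumes mediapipe, opencv-python, pyautogui.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)  # front/web camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # Landmark 1 sits near the nose tip (index is an assumption);
        # map its normalized position to screen coordinates.
        nose = results.multi_face_landmarks[0].landmark[1]
        pyautogui.moveTo(int(nose.x * screen_w), int(nose.y * screen_h))
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
```

A real assistive tool would add calibration, adjustable gain, and gesture-to-click mapping, which is what the open-sourced Gameface resources provide hooks for.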
By Debra Kaufman, February 18, 2020
Google has unveiled AutoFlip, an open-source, AI-enabled tool that offers smarter, automated video reframing. A lot of video is captured in landscape aspect ratios such as 16:9 and 4:3, which are not optimized for other (read: vertical) displays. The traditional approach has been to statically crop the material to fit the destination display, which usually yields unsatisfactory results. AutoFlip instead relies on AI object detection and tracking to intelligently understand the video content. Continue reading Google’s AutoFlip for Automated AI-Enabled Video Reframing
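AutoFlip itself is implemented as a MediaPipe C++ graph; the toy sketch below just illustrates the reframing idea it describes: track the salient subject per frame, then slide a fixed-aspect crop window to follow it, with smoothing so the virtual camera doesn't jitter. The function, its parameters, and the assumption that frames arrive as NumPy arrays with subject boxes already detected are all placeholders, not AutoFlip's API.

```python
# Toy illustration of saliency-driven reframing (not AutoFlip's API).
# Assumes frames are NumPy arrays (e.g. from OpenCV) and a detector
# has already produced a subject bounding box per frame.

def reframe(frames_with_boxes, src_w, src_h, target_ratio=9 / 16,
            smoothing=0.9):
    """frames_with_boxes: iterable of (frame, (x, y, w, h)) where the
    box bounds the detected subject. Yields (cropped frame, left edge)."""
    crop_w = int(src_h * target_ratio)   # vertical crop from landscape
    cx = src_w / 2                       # virtual camera center
    for frame, (x, y, w, h) in frames_with_boxes:
        subject_cx = x + w / 2
        # Exponential smoothing keeps the virtual camera motion stable.
        cx = smoothing * cx + (1 - smoothing) * subject_cx
        left = int(min(max(cx - crop_w / 2, 0), src_w - crop_w))
        yield frame[:, left:left + crop_w], left
```

The smoothing factor is the key design choice: closer to 1 gives slow, cinematic pans; closer to 0 tracks the subject tightly but shakes with every detection.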
By Debra Kaufman, August 21, 2019
Google relied on computer vision and machine learning to research a better way to perceive hand shapes and motions in real time, for use in gesture-control systems, sign-language recognition and augmented reality. The result is the ability to infer up to 21 3D keypoints of a hand (or hands) on a mobile phone from a single frame. Google, which demonstrated the technique at the 2019 Conference on Computer Vision and Pattern Recognition, also published the source code and an end-to-end use-case scenario on GitHub. Continue reading Google Open-Sources Real-Time Gesture Recognition Tech
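The released code targets mobile, but the same hand-landmark model is exposed through MediaPipe's Python package. The sketch below runs it on a single image and prints the 21 landmarks; the image path is illustrative.

```python
# Infer 21 3D hand landmarks from a single frame with MediaPipe Hands.
# Assumes: pip install mediapipe opencv-python; image path is illustrative.
import cv2
import mediapipe as mp

image = cv2.imread("hand.jpg")
with mp.solutions.hands.Hands(static_image_mode=True,
                              max_num_hands=2) as hands:
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            for i, lm in enumerate(hand.landmark):
                # x, y are normalized image coords; z is relative depth.
                print(f"landmark {i}: ({lm.x:.3f}, {lm.y:.3f}, {lm.z:.3f})")
```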