Google Adds Open-Source Gameface for Android Developers

In a move aimed at fostering more accessible Android apps, Google has open-sourced code for Project Gameface, a hands-free game control feature introduced last year that allows users to control a computer cursor with facial and head gestures. Developers now have more Gameface resources with which to build Android applications for physically challenged users and, as Google puts it, “to make every Android device more accessible.” Project Gameface grew out of a collaboration with quadriplegic video game streamer Lance Carr, who has muscular dystrophy. The technology uses a smartphone’s front camera to track movement.

“A user could smile to ‘select’ items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone,” Engadget writes, noting that “users can set thresholds or gesture sizes for each expression, so that they can control how prominent their expressions should be to trigger a specific mouse action.”
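As a rough illustration of how such per-expression thresholds might work, the minimal Python sketch below maps gesture scores to mouse actions. The gesture names, threshold values, and action labels are hypothetical stand-ins, not Gameface’s actual configuration schema.

```python
# Illustrative sketch only: maps normalized expression scores (0.0-1.0)
# to mouse actions once they cross a user-configurable threshold.
# Gesture names, thresholds, and action labels here are hypothetical.

# Per-gesture thresholds the user can tune, controlling how pronounced
# an expression must be before it triggers an action.
GESTURE_THRESHOLDS = {
    "mouthSmileLeft": 0.60,   # smile -> select item onscreen
    "browOuterUpLeft": 0.50,  # raise left eyebrow -> go to home screen
}

GESTURE_ACTIONS = {
    "mouthSmileLeft": "select",
    "browOuterUpLeft": "go_home",
}

def actions_for_frame(scores: dict[str, float]) -> list[str]:
    """Return the actions whose gesture scores exceed their thresholds."""
    return [
        GESTURE_ACTIONS[name]
        for name, threshold in GESTURE_THRESHOLDS.items()
        if scores.get(name, 0.0) >= threshold
    ]

# Example frame: the user is smiling broadly while the brows are at rest.
print(actions_for_frame({"mouthSmileLeft": 0.82, "browOuterUpLeft": 0.10}))
# -> ['select']
```

Raising a threshold makes a gesture harder to trigger accidentally; lowering it suits users with a smaller range of facial motion.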

While the version of Project Gameface unveiled at the 2023 Google I/O focused on gaming, relying on webcams to track facial activity, the presentation at the 2024 event is more ambitious and enhances use of the camera feed through “deeper comprehension of gestures,” Google explains in a blog post. The API has “52 face blendshape values which represent the expressiveness of 52 facial gestures such as raising left eyebrow or mouth opening.”
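Those blendshape values come from MediaPipe’s Face Landmarker task. A minimal Python sketch for reading the 52 scores from a single image might look like the following, assuming the mediapipe package is installed, the `face_landmarker.task` model bundle has been downloaded from MediaPipe’s model page, and a local test image `face.jpg` exists.

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Load the Face Landmarker with blendshape output enabled. The
# face_landmarker.task model bundle must be downloaded separately.
options = vision.FaceLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,
    num_faces=1,
)
detector = vision.FaceLandmarker.create_from_options(options)

# Run detection on a single still frame (a live app would feed camera frames).
image = mp.Image.create_from_file("face.jpg")
result = detector.detect(image)

# Each detected face yields 52 blendshape categories, e.g. "browOuterUpLeft"
# or "jawOpen", each scored 0.0-1.0 by how strongly it is expressed.
if result.face_blendshapes:
    for category in result.face_blendshapes[0]:
        print(f"{category.category_name}: {category.score:.2f}")
```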

“Project Gameface uses the device’s camera and a database of facial expressions from MediaPipe’s Face Landmarks Detection API to manipulate the cursor,” The Verge reports.
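As a simplified sketch of that idea, and not Gameface’s actual implementation, one could map a normalized face landmark to screen coordinates using the third-party pyautogui library. The landmark index used below is the point commonly treated as the nose tip in MediaPipe’s face mesh, which is an assumption for illustration.

```python
import pyautogui  # third-party; install with: pip install pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()
NOSE_TIP = 1  # index commonly used for the nose tip in MediaPipe's face mesh

def move_cursor(face_landmarks) -> None:
    """Map the normalized (0.0-1.0) nose-tip position to screen pixels."""
    nose = face_landmarks[NOSE_TIP]
    pyautogui.moveTo(int(nose.x * SCREEN_W), int(nose.y * SCREEN_H))

# With the detector from the previous sketch:
# result = detector.detect(image)
# if result.face_landmarks:
#     move_cursor(result.face_landmarks[0])
```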

For this release, Google worked with India’s Incluzza, a social enterprise that supports people with disabilities, “to learn how Project Gameface can be expanded to educational, work and other settings, like being able to type messages to family or searching for new jobs,” Google says.

The Gameface code is available on GitHub, and Google for Developers has posted a video about the project.

The Google I/O event also included the announcement of “updates to the machine learning suite that powers Google Lens and Google Meet features like object tracking and recognition, gesture control, and of course, facial detection,” reports The Verge, noting that these updates enable app developers to do things like “create Snapchat-like face filters and hand tracking.”

Related:
Everything Google Announced at I/O 2024, Wired, 5/14/24
Google Takes the Next Step in Its AI Evolution, The New York Times, 5/14/24
Google Experiments with Using Video to Search, Thanks to Gemini AI, TechCrunch, 5/14/24
Google’s Generative AI Can Now Analyze Hours of Video, TechCrunch, 5/14/24
Google Expands Digital Watermarks to AI-Made Video and Text, Engadget, 5/14/24
Introducing VideoFX, Plus New Features for ImageFX and MusicFX, Google Blog, 5/14/24
Project IDX, Google’s Next-Gen IDE, Is Now in Open Beta, TechCrunch, 5/14/24
Google’s New Private Space Feature Is Like Incognito Mode for Android, TechCrunch, 5/15/24
Google Will Use Gemini to Detect Scams During Calls, TechCrunch, 5/14/24
Google’s Call-Scanning AI Could Dial Up Censorship by Default, TechCrunch, 5/15/24
