By Debra Kaufman, August 21, 2019
Google relied on computer vision and machine learning to research a better way to perceive hand shapes and motions in real time, for use in gesture control systems, sign language recognition and augmented reality. The result is the ability to infer up to 21 3D points of a hand (or hands) on a mobile phone from a single frame. Google, which demonstrated the technique at the 2019 Conference on Computer Vision and Pattern Recognition, also released the source code and a complete use-case scenario on GitHub. Continue reading Google Open-Sources Real-Time Gesture Recognition Tech
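As a rough illustration of how an application might consume those 21 3D keypoints, here is a minimal sketch of a pinch-gesture check. The landmark indices follow the commonly published hand-landmark layout (0 = wrist, 4 = thumb tip, 8 = index fingertip); the distance threshold is a hypothetical value, not a figure from Google's release.

```python
import math

# Assumed input: 21 (x, y, z) keypoints per hand, in normalized image
# coordinates, as produced by a hand-tracking pipeline each frame.
THUMB_TIP, INDEX_TIP = 4, 8  # indices in the standard hand-landmark layout

def is_pinch(landmarks, threshold=0.05):
    """Return True when the thumb tip and index fingertip are close together.

    landmarks: list of 21 (x, y, z) tuples.
    threshold: illustrative cutoff distance, not a value from the source.
    """
    tx, ty, tz = landmarks[THUMB_TIP]
    ix, iy, iz = landmarks[INDEX_TIP]
    dist = math.sqrt((tx - ix) ** 2 + (ty - iy) ** 2 + (tz - iz) ** 2)
    return dist < threshold

# Toy frame: every point at the origin except the two fingertips.
frame = [(0.0, 0.0, 0.0)] * 21
frame[THUMB_TIP] = (0.50, 0.50, 0.0)
frame[INDEX_TIP] = (0.52, 0.51, 0.0)
print(is_pinch(frame))  # the tips are ~0.022 apart, under the threshold
```

A real gesture-control system would layer smoothing and per-frame hysteresis on top of a per-frame test like this to avoid flicker between detected states.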
By Debra Kaufman, July 20, 2017
San Francisco-based startup Meta makes augmented reality headsets that its founder/chief executive Meron Gribetz wants to use to remake the traditional office. With the headset, users can manipulate 3D models, browse web pages, write code and send emails on a virtual floating screen. Gribetz — who studied neuroscience and computer science at Columbia University — is now using his own employees to test the headset and its software to figure out how to improve it, an experiment described in a Bloomberg News “Decrypted” podcast. Continue reading Meta AR Headset May Help Reimagine the Traditional Office
By Meghan Coyle, August 11, 2014
Belgian 3D vision company SoftKinetic believes the future will include using hand and finger gestures to operate some of your car’s controls, such as the navigation system, radio volume, and air conditioning. SoftKinetic’s system works by mounting a camera with radar-like technology that can recognize the slightest hand gestures, even in complete darkness. With the help of Delphi Automotive, SoftKinetic is hoping to get its product into a production vehicle later this year. Continue reading Operating a Car’s Air Conditioning With the Wave of a Finger
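Once a camera like SoftKinetic's classifies a gesture, a dispatch layer still has to map it onto a vehicle control. The sketch below shows one minimal way to structure that mapping; the gesture names, controls, and step sizes are all illustrative assumptions, not SoftKinetic's or Delphi's actual interface.

```python
# Hypothetical gesture-to-control dispatch table. None of these names come
# from the source; they illustrate the mapping step such a system needs.
def apply_gesture(gesture, state):
    """Adjust a simple dict of car settings based on a recognized gesture."""
    actions = {
        "swipe_up":   lambda s: s.update(radio_volume=min(s["radio_volume"] + 1, 10)),
        "swipe_down": lambda s: s.update(radio_volume=max(s["radio_volume"] - 1, 0)),
        "circle_cw":  lambda s: s.update(ac_temp_c=s["ac_temp_c"] + 0.5),
        "circle_ccw": lambda s: s.update(ac_temp_c=s["ac_temp_c"] - 0.5),
    }
    action = actions.get(gesture)
    if action:  # unknown gestures are ignored rather than raising
        action(state)
    return state

car = {"radio_volume": 5, "ac_temp_c": 21.0}
apply_gesture("swipe_up", car)
apply_gesture("circle_cw", car)
print(car)  # {'radio_volume': 6, 'ac_temp_c': 21.5}
```

Keeping the mapping in a table like this makes it easy to rebind gestures per driver, which matters when controls such as climate and navigation share one small gesture vocabulary.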