Experts Suggest Gesture Recognition is the First Step Toward 3D UIs

  • EE Times provides an interesting overview of the technologies and uses of 3D gestural user interfaces in this article by Dong-Ik Ko and Gaurav Agarwal of Texas Instruments.
  • “Gesture recognition is the first step to fully 3D interaction with computing devices,” the article begins, and the authors go on to outline the challenges involved and the techniques for overcoming them in embedded systems.
  • Featured sections include: 1) “Limitations of (x,y) coordinate-based 2D vision;” 2) “z (depth) innovation” (such as stereo vision, structured light patterns and time of flight sensors); 3) “3D vision technologies;” 4) “z & human/machine interface;” 5) “Technology processing steps;” 6) “Challenges for 3D-vision embedded systems” (such as two different processor architectures and lack of standard middleware); and 7) “Anything cool after z? (new ways to see beyond, through, and inside people and objects).”
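
  • For readers curious about the “z (depth) innovation” approaches named above, here is a minimal illustrative sketch (not from the article) of the two basic depth-recovery formulas behind stereo vision and time-of-flight sensing; the function names and example values are our own assumptions:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo vision: depth Z = f * B / d, where f is the focal length in
    pixels, B the baseline between the two cameras in meters, and d the
    disparity (pixel offset of the same point between the two images)."""
    return focal_px * baseline_m / disparity_px


def tof_depth(round_trip_s: float, c: float = 299_792_458.0) -> float:
    """Time of flight: depth = c * t / 2, where t is the measured
    round-trip time of the emitted light pulse (halved because the
    light travels out and back)."""
    return c * round_trip_s / 2.0


# Hypothetical sensor readings, chosen so both methods report ~2 m:
print(stereo_depth(700.0, 0.12, 42.0))  # 2.0 (meters)
print(tof_depth(13.34e-9))              # ~2.0 (meters)
```

  Note how the two techniques trade off differently: stereo depth resolution degrades with distance (disparity shrinks), while time of flight depends on precisely timing a nanosecond-scale round trip.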
  • “Gesture recognition takes human interaction with machines even further. It’s long been researched with 2D vision, but the advent of 3D sensor technology means gesture recognition will be used more widely and in more diverse applications,” predict the authors. “Soon a person sitting on the couch will be able to control the lights and TV with a wave of the hand, and a car will automatically detect if a pedestrian is close by.”
  • Ko and Agarwal suggest that gesture recognition is only the beginning: “Transparency research will yield systems that are able to see through objects and materials. And with emotion detection systems, applications will be able to see inside the human mind to detect whether the person is lying.”
