Someday soon, gesticulating wildly in front of one’s office PC may hardly warrant a raised eyebrow.
Microsoft is making big strides in creating gesture-based computer interfaces that can detect the slightest hand movements, potentially changing how users interact with objects in virtual reality or with their favorite operating system in the coming years. The Redmond, Wash., software giant plans to unveil some of its latest research this summer at SIGGRAPH and the Conference on Computer Vision and Pattern Recognition (CVPR), the company said in a June 26 announcement.
The research builds on Handpose, a project first revealed by Microsoft Research a year ago. Instead of using a Kinect or other motion-tracking sensor to track an entire body, Handpose devotes its resources exclusively to tracking hand movements, enabling users to manipulate on-screen objects with a higher degree of accuracy.
Now, the company’s researchers are developing real-time hand-tracking software that can potentially work on mobile devices.
As described in an abstract from a Microsoft Research paper to be presented at SIGGRAPH, the “system runs in real-time on CPU only, which frees up the commonly over-burdened GPU for experience designers. The hand tracker is efficient enough to run on low-power devices such as tablets.”
The system can track hand movements several meters away from the camera, according to the company. An accompanying video shows how Microsoft’s technology uses 3D point data collected from a depth sensor to generate a surface model of the user’s hands, and their corresponding movements, in real time. Despite tracking movements with relatively high accuracy, the technology still struggles with problems that affect most hand-tracking systems, like glitches that appear when users form a fist or the system encounters “noisy data.”
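The announcement doesn’t spell out how the surface model is fit to the depth sensor’s point cloud, but many depth-based trackers rely on some form of iterative model fitting. The Python sketch below illustrates the simplest version of that idea, a rigid ICP (iterative closest point) alignment step; it is offered purely as an illustration, not as Microsoft’s algorithm, and it ignores the articulated finger joints a real hand tracker must handle.

```python
# Illustrative sketch only: this is a generic rigid ICP step, not
# Microsoft's Handpose algorithm, which has not been detailed here.
import numpy as np

def icp_step(model_pts, cloud_pts):
    """One rigid-alignment step: match each model point to its nearest
    cloud point, then solve for the best-fit rotation and translation."""
    # Nearest-neighbor correspondences (brute force, for clarity).
    d = np.linalg.norm(model_pts[:, None, :] - cloud_pts[None, :, :], axis=2)
    matched = cloud_pts[d.argmin(axis=1)]

    # Kabsch/Procrustes: optimal rotation between the centered point sets.
    mc, cc = model_pts.mean(axis=0), matched.mean(axis=0)
    H = (model_pts - mc).T @ (matched - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ mc
    return model_pts @ R.T + t        # model nudged toward the observed cloud

def track(model_pts, cloud_pts, iters=20):
    """Repeat alignment steps until the model settles onto the observed hand."""
    for _ in range(iters):
        model_pts = icp_step(model_pts, cloud_pts)
    return model_pts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(200, 3))           # stand-in for depth-sensor points
    model = cloud[:50] + np.array([0.3, 0, 0])  # displaced crude "hand model"
    fitted = track(model, cloud)
```

A real articulated tracker would optimize joint angles as well as pose, which is where the fist-forming and noisy-data glitches the researchers mention tend to arise.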
Another technical paper describes a fast method for creating detailed, personalized models of a user’s hands. The technology helps improve the reliability and accuracy of hand tracking, the company’s researchers claim.
Meanwhile, Microsoft’s Advanced Technologies Lab is exploring how to integrate hand gestures into everyday computing.
Based in Israel, the facility is working on Project Prague, an effort to open up gesture-based interactivity to developers, enabling them to incorporate basic gestures captured by off-the-shelf 3D cameras into their apps. After training the system on millions of hand images, the team has created technology that interprets each gesture and the user’s intent.
For example, Microsoft envisions one day being able to lock a Windows PC by simply reaching out and mimicking the motion of turning a physical key in a lock.
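Microsoft has not published a developer API for Project Prague, so the sketch below is hypothetical: the names GestureRecognizer, on_gesture and "key_turn" are invented solely to illustrate the kind of callback-style integration the announcement describes, using the lock-screen scenario above.

```python
# Hypothetical sketch: Project Prague's actual API is not public here.
# Every name below (GestureRecognizer, on_gesture, "key_turn") is invented
# to illustrate mapping camera-detected gestures to app actions.
class GestureRecognizer:
    """Maps gestures detected by an off-the-shelf 3D camera to app callbacks."""
    def __init__(self):
        self._handlers = {}

    def on_gesture(self, name, handler):
        """Register a callback to run when the named gesture is detected."""
        self._handlers.setdefault(name, []).append(handler)

    def dispatch(self, name):
        # In a real system this would be driven by the camera's frame loop.
        for handler in self._handlers.get(name, []):
            handler()

recognizer = GestureRecognizer()
recognizer.on_gesture("key_turn", lambda: print("Locking workstation..."))
recognizer.dispatch("key_turn")   # simulate the camera detecting the gesture
```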
“Adi Diamant, who directs the Advanced Technologies Lab, said that when people think about hand and gesture recognition, they often think about ways it can be used for gaming or entertainment. But he also sees great potential for using gesture for everyday work tasks, like designing and giving presentations, flipping through spreadsheets, editing e-mails and browsing the web,” wrote Microsoft senior content manager Allison Linn in a blog post.