Kinect’s heyday may have come and gone, but Microsoft still stands behind the low-cost sensor. In the upcoming Anniversary Update for Windows 10, the company plans to help developers who have jumped aboard its Universal Windows Platform (UWP) application model exploit Kinect’s capabilities and integrate them into their software.
UWP apps are code-once, run-anywhere affairs, allowing developers to target multiple Windows device classes without having to rewrite their software for each. In practice, developers can write a Windows 10 UWP app for the PC and it will run on Windows 10 smartphones, tablets and even the Xbox One video game console with little to no modification to the underlying code.
The trouble with UWP, at least as it pertains to Kinect, is that apps written for the platform cannot access most of the sensor data generated by the hardware. “Today, there are several sensors that can provide rich correlated data, such as RGB, IR or depth information,” wrote Microsoft’s Kinect for Windows team in a blog post.
Microsoft plans to remedy this with the Windows 10 Anniversary Update due this summer, an updated driver and a new software development kit.
“To make such data available to app developers in a device-independent manner, we introduced the Windows.Media.Capture.Frames APIs, a set of extensions to Media Capture that add frame-by-frame access to RGB, IR [infrared] and depth data, as well as sensor correlation to the traditional image and video-capture features, all with a consistent, familiar programming model,” continued the company’s staffers. Media Capture is a Windows Runtime API that lets apps capture audio, video and photos from capture devices.
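In rough terms, the pattern looks like the C++/WinRT sketch below. It is illustrative only, not taken from Microsoft’s post, and the function name StartDepthReaderAsync is a placeholder: enumerate the frame source groups, pick one that exposes a depth source, initialize Media Capture against it and read frames as they arrive.

```cpp
// Minimal sketch (illustrative, error handling omitted): open a depth frame
// source through the Windows.Media.Capture.Frames APIs and read its frames.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Media.Capture.h>
#include <winrt/Windows.Media.Capture.Frames.h>

using namespace winrt;
using namespace winrt::Windows::Media::Capture;
using namespace winrt::Windows::Media::Capture::Frames;

Windows::Foundation::IAsyncAction StartDepthReaderAsync()
{
    // Find a source group that exposes a depth stream (Kinect surfaces its
    // color, infrared and depth sources through this same model).
    MediaFrameSourceGroup chosenGroup{ nullptr };
    MediaFrameSourceInfo depthInfo{ nullptr };

    auto groups = co_await MediaFrameSourceGroup::FindAllAsync();
    for (auto const& group : groups)
    {
        for (auto const& info : group.SourceInfos())
        {
            if (info.SourceKind() == MediaFrameSourceKind::Depth)
            {
                chosenGroup = group;
                depthInfo = info;
                break;
            }
        }
        if (chosenGroup) break;
    }
    if (!chosenGroup) co_return; // no depth-capable sensor present

    // Initialize Media Capture against that group, video only, CPU memory.
    MediaCapture capture;
    MediaCaptureInitializationSettings settings;
    settings.SourceGroup(chosenGroup);
    settings.StreamingCaptureMode(StreamingCaptureMode::Video);
    settings.MemoryPreference(MediaCaptureMemoryPreference::Cpu);
    co_await capture.InitializeAsync(settings);

    // Create a frame reader for the depth source and handle arriving frames.
    MediaFrameSource depthSource = capture.FrameSources().Lookup(depthInfo.Id());
    MediaFrameReader reader = co_await capture.CreateFrameReaderAsync(depthSource);
    reader.FrameArrived([](MediaFrameReader const& sender, MediaFrameArrivedEventArgs const&)
    {
        if (MediaFrameReference frame = sender.TryAcquireLatestFrame())
        {
            // frame.VideoMediaFrame().SoftwareBitmap() holds the depth image.
        }
    });
    co_await reader.StartAsync();

    // A real app would keep 'capture' and 'reader' alive (for example, as
    // class members); here they go out of scope when the coroutine completes.
}
```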
The expanded programming model also allows access to custom data streams from Kinect, a capability the company is relying on to enable skeleton tracking for UWP apps.
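The same enumeration pattern surfaces those vendor-specific streams, which show up with a source kind of Custom, as in the brief, hypothetical sketch below; interpreting the payload itself, such as Kinect’s skeleton data, is what the forthcoming supplemental SDK is meant to handle.

```cpp
// Hypothetical companion to the sketch above: list any Custom frame sources.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Media.Capture.Frames.h>

using namespace winrt;
using namespace winrt::Windows::Media::Capture::Frames;

Windows::Foundation::IAsyncAction ListCustomSourcesAsync()
{
    auto groups = co_await MediaFrameSourceGroup::FindAllAsync();
    for (auto const& group : groups)
    {
        for (auto const& info : group.SourceInfos())
        {
            if (info.SourceKind() == MediaFrameSourceKind::Custom)
            {
                // Such a source is opened with the same MediaFrameReader
                // pattern shown earlier; decoding its vendor-specific payload
                // is left to the supplemental SDK.
            }
        }
    }
}
```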
Kinect’s skeleton-tracking features let developers detect the position of a user’s body parts, so users can interact with software by gesturing or simply moving around. In the second half of 2016, Microsoft plans to release a supplemental SDK for Kinect that can decode these custom streams. Microsoft noted that the Anniversary Update will also enable developers to use the company’s face analysis APIs (Windows.Media.FaceAnalysis) with Kinect hardware to detect and track faces.
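As a rough illustration, again not taken from Microsoft’s post, the sketch below hands a frame obtained through the new frame APIs to a FaceTracker from Windows.Media.FaceAnalysis. It assumes the incoming frame is a color frame whose pixel format can be converted to Gray8, one of the formats the face-analysis classes accept, and DetectFacesAsync is an illustrative name.

```cpp
// Hypothetical sketch: run a captured frame through Windows.Media.FaceAnalysis.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Graphics.Imaging.h>
#include <winrt/Windows.Media.h>
#include <winrt/Windows.Media.Capture.Frames.h>
#include <winrt/Windows.Media.FaceAnalysis.h>

using namespace winrt;
using namespace winrt::Windows::Graphics::Imaging;
using namespace winrt::Windows::Media;
using namespace winrt::Windows::Media::Capture::Frames;
using namespace winrt::Windows::Media::FaceAnalysis;

Windows::Foundation::IAsyncAction DetectFacesAsync(MediaFrameReference const& frame)
{
    // FaceTracker is the video-oriented class; FaceDetector handles stills.
    // A real app would create the tracker once and reuse it for every frame.
    FaceTracker tracker = co_await FaceTracker::CreateAsync();

    VideoMediaFrame video = frame.VideoMediaFrame();
    if (!video) co_return;
    SoftwareBitmap bitmap = video.SoftwareBitmap();
    if (!bitmap) co_return;

    // The face-analysis classes accept only a few pixel formats (such as
    // Gray8 and Nv12); convert when the incoming frame uses something else.
    if (!FaceTracker::IsBitmapPixelFormatSupported(bitmap.BitmapPixelFormat()))
    {
        bitmap = SoftwareBitmap::Convert(bitmap, BitmapPixelFormat::Gray8);
    }

    VideoFrame videoFrame = VideoFrame::CreateWithSoftwareBitmap(bitmap);
    auto faces = co_await tracker.ProcessNextFrameAsync(videoFrame);
    for (DetectedFace const& face : faces)
    {
        // FaceBox() is the pixel rectangle around each detected face.
        BitmapBounds box = face.FaceBox();
        (void)box; // placeholder: draw or report the rectangle here
    }
}
```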
Other tech companies are interested in bridging the virtual and physical worlds.
Earlier this year, it was revealed that Apple had acquired Emotient, an artificial-intelligence startup. Emotient’s cloud-based technology analyzes people’s facial expressions and detects their feelings, allowing organizations to gauge viewers’ emotional reactions to ads and other content.
Last year, Apple acquired Faceshift, a technology startup from Switzerland. Faceshift’s motion-capture technology captures a user’s facial expressions and replicates them with animated avatars. Recently, Apple was awarded a patent for a technology that may lead to iPhone and iPad screens that register user input without being touched. Using proximity sensors that detect fingers, palms and other objects, Apple devices may one day allow users to launch and interact with apps by gesturing just above the display.