Google has introduced Project Tango, an initiative to pack the current state of robotics and computer vision research into a mobile phone.
“Mobile devices today assume that the physical world ends at the boundaries of the screen. Our goal is to give mobile devices a human-scale understanding of space and motion,” Johnny Chung Lee, project lead at Google’s Advanced Technology and Projects (ATAP) group, said in a video released Feb. 20.
In a briefing with eWEEK, Remi El-Ouazzane, CEO of Movidius, a Google partner on Project Tango and the creator of the vision processor platform that the phone will be based on, explained the idea of putting “human vision” into a device.
“When you look at things, you are capturing, tracking things in motion and you are [without thinking about it] taking millions of 3D measurements,” said El-Ouazzane. “What a camera in a mobile device can do today is only the capture piece. What we’re talking about is adding all of that extra intelligence.”
The ATAP group has created a phone with “highly customized hardware and software designed to allow the phone to track its motion in full 3D, in real time as you hold it,” Lee said.
The phone has a 5-inch display, a 4-megapixel camera near the top of its back, two computer vision processors, integrated depth sensing and a motion-tracking camera near the bottom of its back.
“These sensors,” Lee said in the video, “make over a quarter-million 3D measurements every single second, updating the position and rotation of the phone and fusing this information into a single 3D model of the environment.”
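The fusion Lee describes, folding each frame's depth measurements and the phone's estimated position and rotation into one running model, can be illustrated with a minimal sketch. The pose class, transform math and point-cloud accumulation below are illustrative assumptions, not Project Tango's actual pipeline:

```python
# Minimal sketch (not Google's pipeline): fuse per-frame depth
# measurements into one world-frame point cloud using the phone's
# estimated pose. All names here are illustrative assumptions.
import numpy as np

class Pose:
    """Phone pose: a 3x3 rotation matrix plus a 3-vector translation."""
    def __init__(self, rotation, translation):
        self.rotation = np.asarray(rotation, dtype=float)
        self.translation = np.asarray(translation, dtype=float)

    def transform(self, points):
        """Map Nx3 points from the camera frame into the world frame."""
        return points @ self.rotation.T + self.translation

world_model = []  # accumulated world-frame 3D measurements

def fuse_frame(pose, depth_points):
    """Add one frame of camera-frame depth points to the world model."""
    world_model.append(pose.transform(depth_points))

# Example: a frame of three depth points, with the phone rotated
# 90 degrees about the vertical axis and moved 1 m along X.
theta = np.pi / 2
pose = Pose([[np.cos(theta), -np.sin(theta), 0],
             [np.sin(theta),  np.cos(theta), 0],
             [0,              0,             1]],
            [1.0, 0.0, 0.0])
fuse_frame(pose, np.array([[0.5, 0.0, 2.0],
                           [0.0, 0.3, 1.5],
                           [0.2, 0.1, 2.5]]))
print(np.vstack(world_model))
```

At the quarter-million-measurements-per-second rate Lee cites, a real system would batch this work on dedicated vision hardware rather than loop in software, but the geometry is the same.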
Indoor navigation remains an unsolved problem, Chris Anderson, CEO of 3D Robotics, another Project Tango partner, said in the video. “This is a solution to that problem.”
What happens when such a solution is put in a phone? Project Tango imagines apps that could use the camera to offer guidance to the visually impaired; apps that, via the camera, might show shoppers how the furniture in a catalog would literally fit in their living rooms; apps that can scan a room in your home and create a virtual game world within it; or apps that offer directions that don’t stop at the front door of an office building.
Project Tango is going to “allow people to interact with their environments in just a fundamentally different way,” Eitan Marder-Eppstein, president of hiDOF, another ATAP partner, said in the video.
In a blog post on the hiDOF site coinciding with Google’s announcement, Marder-Eppstein called Project Tango the next step, after GPS, in the evolution of mapping.
Using cameras, coupled with accelerometers, gyroscopes and depth sensors, to create maps is nothing new, the post states, and hiDOF specializes in simultaneous localization and mapping (SLAM) software. When ATAP approached hiDOF, the challenge was to sift through the “vast” research in the space and select the technologies that would be appropriate and feasible inside a mobile phone.
“The ultimate goal is to generate realistic, dense maps of the world. Our area of focus, however, has been to provide reliable estimates of the pose of a phone (position and orientation) relative to its environment,” wrote Marder-Eppstein. “Specifically, we have worked with ATAP to build a system capable of providing drift-free, real-time, 6-degree-of-freedom localization in typical indoor environments.”
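To make “pose” concrete: a 6-degree-of-freedom estimate bundles 3D position with 3D orientation. The sketch below (an illustration only, not hiDOF’s system) stores orientation as a unit quaternion and shows how relative pose estimates are chained together, the step where small per-frame errors accumulate into the drift Marder-Eppstein says the system avoids:

```python
# Illustrative only (not hiDOF's code): a 6-DoF pose is three numbers
# of position plus three of orientation, held here as a unit quaternion.
import numpy as np

def quat_multiply(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(q, v):
    """Rotate a 3-vector v by the unit quaternion q."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, qv), q_conj)[1:]

def compose(pose_a, pose_b):
    """Chain relative pose_b onto pose_a. Small errors in each relative
    estimate compound here, which is the source of drift."""
    pos_a, quat_a = pose_a
    pos_b, quat_b = pose_b
    return (pos_a + quat_rotate(quat_a, pos_b),
            quat_multiply(quat_a, quat_b))

# Example: walk 1 m forward, turn 90 degrees left, walk 1 m forward.
identity = (np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]))
step = (np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0, 0.0]))
turn = (np.zeros(3),
        np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)]))
pose = compose(compose(compose(identity, step), turn), step)
print(pose[0])  # ~[1, 1, 0]: one meter forward, one meter to the left
```

A drift-free system like the one described corrects this running estimate against the environment itself, rather than trusting the chain of relative motions alone.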
ATAP has built 200 thick, white prototypes. Some have been allocated to projects in indoor mapping and others to gaming, but the group has set aside units for “applications we haven’t thought of yet,” said the Project Tango site, which invites developers to apply to receive a device.
“Tell us what you would build. Be creative. Be specific. Be bold.”
The plan is for all units to be distributed by March 14.
“While we may believe we know where this technology will take us, history suggests [that we] should be humble in our predictions,” the site added. “We are excited to see the effort take shape with each step forward. The future is awesome. We can build it faster together.”