Soon, Android users will be able to search for relevant information on objects around them—and take related actions—simply by pointing their mobile device at the things that they are interested in.
For instance, a user looking for reviews on a specific restaurant or for ticket prices to a show could get that information by pointing their Android smartphone or tablet’s camera at the restaurant sign or event marquee.
Similarly, by pointing the camera at the sticker on a WiFi router, an Android user could automatically connect to the network without having to manually enter the network password.
The new capabilities will become available sometime this year and are courtesy of Google Lens, a vision-based computing technology that uses artificial intelligence and machine learning to help people gain a more actionable understanding of things around them.
Google CEO Sundar Pichai announced Google Lens at the company’s I/O Conference in Mountain View, CA on May 17. In his keynote address, Pichai described Google Lens as the latest manifestation of the company’s ongoing efforts to harness AI and machine learning technologies to make its products smarter and easier to use.
“Google Lens is a set of vision-based computing capabilities that can understand what you are looking at and help you to take action on that information,” Pichai said. Google will ship Lens initially with its digital assistant technology Google Assistant and with Google Photos. But eventually, it will be integrated into other Google products as well, Pichai said.
“All of Google was built because we started understanding text and web pages,” Pichai said. “The fact that computers can understand images and videos has profound implications for our core mission,” he noted.
In the same way that Google had to rethink its computational architecture and build data centers from the ground up to deal with the mobile revolution, the company is now in the midst of re-architecting its infrastructure for AI and machine learning, Google’s CEO said. “In an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems,” he said.
Google Assistant, which Pichai has previously touted as exemplifying the company’s efforts to make it easier for people to interact with its technologies, received updates this week as well.
Starting this week, users of the Google Home digital assistant, for instance, will be able to schedule appointments and set up reminders by speaking to the device using normal conversation. “Since it’s the same Google Assistant across devices, you’ll be able to get a reminder at home or on the go,” Google vice presidents Scott Huffman and Rishi Chandra wrote in a blog post.
Similarly, users of Google Home will now be able to use Assistant to interact with and control smart home technologies from a broader range of vendors, including LG, Honeywell, TP-Link, Logitech and August locks, using natural conversation.
Over the next few months, Google Home users will be able to ask Assistant to connect them to mobile phones and landlines in the U.S. and Canada for free, Huffman and Chandra wrote. Later this year, Google will also integrate features that let Assistant respond visually to questions. For instance, users who ask Google Home for their schedule for the day will be able to see their calendar displayed on their home TV.
Huffman and Chandra noted that Google is also working to help third-party developers add support for Google Assistant to their products, so users can interact with those products using natural conversation.
Some 100 million Android devices currently have Assistant on them. Starting this week, the technology will also be available to iOS users who want to use it with their iPhones and iPads, the Google vice presidents said.