The rate at which science and technology are improving the ability to extract data from photos and videos is truly astounding.
A few years ago, search engines, like Google Image Search, gained the ability to match color sets. You can upload or link to a picture and have Google show you a bazillion other images with the same colors in them.
More recently, Google demonstrated the ability to recognize the content of some photos. If you upload a picture into Google+, Google will automatically add a hashtag to it based on the content. It works much of the time, but not always.
Upload pictures of people, and Google can tell whether the people in the images are adults or children and whether they are smiling. If you upload two group shots of the same group and not everyone is smiling in each, Google will combine the images into a near-perfect group shot where everyone is smiling. It's pretty amazing.
Facebook is no slouch, either, when it comes to processing photos—especially recognizing faces. Sometime earlier this year, Facebook's DeepFace facial recognition research project crossed a threshold where it can now recognize human faces in photos as well as people can, more or less. (As of March, Facebook was 97.25 percent accurate while people are generally 97.53 percent accurate.)
Note that DeepFace is still in the lab, specifically at Facebook's AI research group in Menlo Park, Calif. Facebook hasn't publicly deployed it yet; for everyday use it relies on a lesser facial recognition technology.
Such ability isn't exclusive to Facebook: numerous companies and multiple university research labs are developing similar capabilities, as are major law enforcement agencies like the FBI, and many of them take different approaches.
Facebook's is especially interesting and usable because it mimics the human brain, using some 120 million parameters. Show the system two photos of the same person, and the software constructs a 3D model of the face, letting it recognize that face from any angle.
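DeepFace's internals aren't public, but the general idea behind this kind of face matching is easy to sketch: a neural network turns an aligned face image into a vector of numbers (an "embedding"), and two photos are judged to show the same person when their vectors point in nearly the same direction. The sketch below uses made-up three-number toy vectors in place of a real network's output; the names and the 0.8 threshold are illustrative assumptions, not anything from Facebook's system.

```python
import math

def cosine_similarity(a, b):
    # Angle-based closeness of two feature vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(embedding_1, embedding_2, threshold=0.8):
    # Hypothetical decision rule: declare a match above the threshold.
    return cosine_similarity(embedding_1, embedding_2) >= threshold

# Toy "embeddings" standing in for a real network's output vectors.
alice_photo_1 = [0.9, 0.1, 0.4]
alice_photo_2 = [0.8, 0.15, 0.5]
bob_photo = [0.1, 0.9, 0.2]

print(same_person(alice_photo_1, alice_photo_2))  # True
print(same_person(alice_photo_1, bob_photo))      # False
```

Real systems differ mainly in how they compute the embedding (that's where the 120 million parameters live) and how they align the face first; the comparison step stays this simple.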
This technology is not widely applied, but it could be and almost certainly will be. Any image—a publicly posted smartphone picture, bank camera image, security camera, store camera, etc.—could be fed into the algorithm to discover your identity.
It's safe to assume that this is pretty much going to happen: any business, law enforcement agency, or schmuck with a smartphone or Google Glass-like smart glasses will be able to instantly know who you are wherever you go.
Now, this week we've learned about some really incredible technology developed independently at Google and at Stanford (a university less than six miles away; must be something in the water), as well as at Baidu (China's Google), the University of California, Los Angeles, the University of Toronto, and the University of California, Berkeley.
All of these research groups are using neural-net artificial intelligence to enable computers to understand what's happening in a picture, although their methods vary.