Why You Will Fall in Love With a Robot
All these personal assistants and other home robots beyond Pepper will become sophisticated, responsive, helpful and "empathetic" enough to trigger feelings of affection. Computer scientists, science fiction fans and writers, and AI enthusiasts obsess over the Turing test, a process proposed by Alan Turing in the 1950s to judge whether a computer can interact in a way that's indistinguishable from a human. The test requires a judge to blindly interrogate both a computer and a person by posing questions to each. If the judge can't tell which set of responses comes from the computer and which from the human, the computer "passes" the Turing test. There's no question that Alexa, Siri, Cortana and Google Now, their competitors and descendants will some day pass the Turing test. It's just a matter of time, and that time is nearly upon us.

Meanwhile, it has to be said that the Turing test isn't as important as we once thought, with no disrespect to Turing and his work. Turing's so-called imitation game, in which computers get really, really good at imitating human verbal responses and interaction, isn't going to be the ground-breaking, culture-shifting singularity we thought it would be, for three reasons.

The first is that, in hindsight, the Turing test has a major flaw: it boils "humanness" down to mere words. The test involves passing text back and forth, and if the AI can trick judges into believing the text came from a person, it's a pass. But people are much more than words. And so are virtual assistants and robots. Alexa and Siri, for example, have vocal intonations, agency, and practical applications and functions. They actually do things for us. Google Now and, soon, Siri will be able to act proactively and appear to make decisions about how to help you. Pepper has hand gestures and movements. All these qualities and many more, not just words on a page, will help convince human minds that computers are virtual people.
Second, last century it was simply assumed that artificial intelligence would be designed to imitate a human, but now we realize that AI is also being applied to fake animals and other non-human creatures. Imagine a future version of the Aibo, for example, aimed at a dog "imitation game" rather than a human one.

The third and most important reason, however, is that we will choose to accept robots and virtual assistants as artificial or substitute people long before they can get an "A" on a "test" of imitation, or we'll be convinced on levels that are not logical. In other words, being "tricked" isn't as significant as our knowing acceptance of AI as "human enough" or "sentient enough." And robots will trick our hearts long before they trick our heads. As we do with pet lizards, old cars and other objects that don't actually care at all about us, we'll feel affection for our virtual assistants and home robots, but much more so, because they'll be designed to engender such affection. The point is that those gadget fans in Japan who are crying at Aibo funerals aren't strange or weird or misguided. They're just ahead of the curve.
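For readers who like to see the mechanics, the blind-interrogation protocol described above can be sketched in a few lines of Python. This is purely an illustration, not a real test harness: the responder functions and the judge are hypothetical stand-ins for a live human, the AI under evaluation, and the interrogator.

```python
import random

def human_responder(question: str) -> str:
    # Placeholder: in a real test, a live person types this answer.
    return "I'd say it depends on the weather, honestly."

def machine_responder(question: str) -> str:
    # Placeholder: in a real test, the AI under evaluation answers.
    return "It depends on the weather, honestly."

def run_imitation_game(questions, judge, rng=random):
    """Blindly present both transcripts; the judge must name the machine."""
    # Randomize which label hides the machine so the judge can't cheat.
    machine_label = rng.choice(["A", "B"])
    human_label = "B" if machine_label == "A" else "A"
    responders = {machine_label: machine_responder,
                  human_label: human_responder}
    # Build anonymized question/answer transcripts for each label.
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in responders.items()
    }
    guess = judge(transcripts)  # judge returns the label it thinks is the machine
    # The machine "passes" only if the judge misidentifies it.
    return guess != machine_label
```

The point of the sketch is how little the protocol actually measures: everything the judge sees is text, which is exactly the flaw discussed above.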
It was previously reported that an AI program named "Eugene" passed the Turing test last year, but that isn't true. The testers set up "Eugene" to imitate a 13-year-old Ukrainian boy precisely so that its linguistic errors and goofy answers could be excused.