In Japan, they love their robots. No, really. They love them.
A recent series of stories out of Japan has uncovered a mini-trend among fans of the Sony Aibo robot dog, which launched in 1999. Sony sold about 150,000 of them before the line was discontinued in 2006.
The Aibo robots are “dying” now, their batteries and electronics reaching the end of their lives. According to interviews with owners, some in Japan are holding funerals for their robots because they formed real emotional attachments to them.
Now, there’s a new, even more lovable robot hitting Japan. It’s called Pepper, and it’s being sold by Aldebaran Robotics and SoftBank Mobile. Unlike the Aibo, which was designed to physically move and respond like a dog, the $1,600 Pepper robot is optimized for reading and expressing emotions.
About 1,000 Pepper robots went on sale in Japan earlier this month, and the full inventory sold out in less than a minute, according to SoftBank.
The robot makes humans feel like they’re “bonding” with a sentient, caring being, even though it’s made of plastic, wires and electronics. It’s about four feet tall, with four microphones, two HD cameras and a range of sensors that let it perceive what people are doing. Pepper gets around on wheels and has arms that it uses mostly for gesturing.
Interestingly, it’s got a computer inside that’s optimized to mimic human gestures and emotions. You talk to Pepper and Pepper talks back. People are going to love it—literally.
People love their dogs and cats, which reciprocate emotions in part because we have selectively bred them, consciously and unconsciously, over millennia to do so. That’s especially true of dogs.
But people also form attachments to other, far less responsive living creatures: lizards, spiders, sea monkeys, you name it.
Some people form emotional attachments to their cars, giving them names and even speaking to them. In fact, any sort of complex machinery can make us feel, at some level, that it’s either on our side or against us, or even that it’s sentient, despite our knowing it’s not.
The reason for all this is that the human brain is hard-wired to function in a social, human context. Our brains process any kind of response from any kind of animal or object as if it were another person, at least to some extent.
Not surprisingly, as robots and intelligent virtual assistants become more sophisticated and helpful, and gain increasingly natural language abilities, we grow attached to them.
But some of this can be attributed to clever marketing.
For example, you might place the four major virtual assistants on a naming spectrum from most human-sounding name to least: Alexa, Siri, Cortana, Google Now.
By simply naming its virtual assistant “Alexa,” Amazon made it easier for users of its Echo product to feel that Alexa is more sentient than, say, Google Now. (There’s no question that Google Now is “better” and more sophisticated, but it doesn’t feel like a person the way Alexa does.)
A second reason Alexa “feels” more sentient or human than the rest of the field is that she has a physical body. All the virtual assistants are cloud services, Alexa included, but you interact with Alexa through the single-purpose Echo hardware, which is mostly just a speaker, a microphone and a Wi-Fi antenna. That helps create the vague feeling that it’s an artificial being.
Why You Will Fall in Love With a Robot
All these personal assistants, along with Pepper and other home robots, will become sophisticated, responsive, helpful and “empathetic” enough to trigger feelings of affection.
Computer scientists, science fiction writers and fans, and AI enthusiasts obsess over something called the Turing test, a procedure proposed by Alan Turing in 1950 to judge whether a computer can interact in a way that’s indistinguishable from a human.
The test requires a judge to interact blindly with both a computer and a human by posing questions to each. If the judge can’t tell which set of responses comes from the computer and which from the human, the computer “passes” the Turing test.
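To make that protocol concrete, here’s a minimal sketch in Python of the blind question-and-answer setup. Everything in it is a hypothetical stand-in (the run_imitation_game function and the human_reply, machine_reply and judge_guess callables are invented for illustration, not any real chatbot API):

```python
import random

def run_imitation_game(questions, human_reply, machine_reply, judge_guess):
    """A bare-bones version of Turing's imitation game."""
    # Randomly hide the two respondents behind anonymous labels.
    assignment = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        assignment = {"A": machine_reply, "B": human_reply}

    # The judge sees only labeled transcripts, never the authors.
    transcripts = {
        label: [(q, reply(q)) for q in questions]
        for label, reply in assignment.items()
    }

    guess = judge_guess(transcripts)  # the label the judge thinks is the machine
    machine_label = "A" if assignment["A"] is machine_reply else "B"

    # The machine "passes" a round when the judge picks the wrong label.
    return guess != machine_label

# Toy usage: a judge who guesses at random is right only half the time.
passed = run_imitation_game(
    questions=["What do you do on weekends?"],
    human_reply=lambda q: "I take my dog to the park.",
    machine_reply=lambda q: "I enjoy long walks and small talk.",
    judge_guess=lambda transcripts: random.choice(list(transcripts)),
)
print("machine passed this round:", passed)
```

The key design point, as the article notes next, is that the judge’s only evidence is text: nothing about voice, body or behavior enters the game.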
There’s no question that Alexa, Siri, Cortana and Google Now, along with their competitors and descendants, will someday pass the Turing test. It’s just a matter of time, and that time is nearly upon us.
It was previously reported that an AI program named “Eugene” passed the Turing test last year, but that isn’t true. The testers set up “Eugene” to imitate a 13-year-old Ukrainian boy, a persona chosen to excuse its linguistic errors and goofy answers.
Meanwhile, it has to be said that the Turing test isn’t as important as we once thought (no disrespect to Turing and his work). Turing’s so-called imitation game, in which computers get really, really good at imitating human verbal responses and interaction, isn’t going to be the ground-breaking, culture-shifting singularity we thought it would be, for three reasons.
The first is that, in hindsight, the Turing test has a major flaw: it boils “humanness” down to mere words. The test involves passing text back and forth, and if the AI can trick judges into believing the text came from a person, it passes.
But people are much more than words. And so are virtual assistants and robots. Alexa and Siri, for example, have vocal intonations, agency, practical applications and functions. They actually do things for us.
Google Now and, soon, Siri will be able to act proactively and appear to make decisions about how to help you. Pepper has hand gestures and movements. All these qualities, and many more (not just words on a page), will help convince human minds that computers are virtual people.
Second, in the previous century it was assumed that artificial intelligence would naturally be designed to imitate a human. Now we realize that AI is also being applied to artificial animals and other non-human creatures. Imagine a future version of the Aibo, for example, aimed at a dog “imitation game” rather than a human one.
The third and most important reason, however, is that we will choose to accept robots and virtual assistants as artificial or substitute people long before they can get an “A” on a “test” of imitation, or we’ll be convinced on levels that aren’t logical. In other words, being “tricked” isn’t as significant as our knowing acceptance of AI as “human enough” or “sentient enough.” And robots will trick our hearts long before they trick our heads.
As we do with pet lizards, old cars and other objects that don’t actually care about us at all, we’ll feel affection for our virtual assistants and home robots, only much more so, because they’ll be designed to engender such affection.
The point is that those gadget fans in Japan who are crying at Aibo funerals aren’t strange or weird or misguided. They’re just ahead of the curve.