If you saw the first reports about Facebook’s artificial intelligence chatbots, you might believe that the robot revolution was about to overthrow human civilization.
The reports said that the bots were talking among themselves using a language that humans could not understand. The word was that Facebook’s bots had slipped their leashes and were taking over.
Well, not exactly. While it is true that some chatbots created for AI experiments on automated negotiation had developed their own language, this wasn’t a surprise. In fact, it wasn’t even the first time that such a thing had happened. The possibility that it might happen had already been explained in a blog entry on the Facebook Code pages.
The blog discussed how researchers were teaching an AI program to negotiate by having two AI agents, one named Bob and the other Alice, negotiate with each other to divide a set of objects consisting of hats, books and balls. Each agent was assigned a value for each item, with that value not known to the other ‘bot. Then the chatbots were allowed to talk to each other to divide up the objects.
The goal of the negotiation was for each chatbot to accumulate the most points. While the ‘bots started out talking to each other in English, that quickly changed to strings of words that carried meaning for the bots, but not for the humans doing the research. Here’s a typical exchange between the ‘bots, using English words but with a different meaning:
Bob: “I can i i everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
The conversation continues with variations in the number of times Bob says “i” and the number of times Alice says “to me.”
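To make the setup concrete, here’s a minimal Python sketch of that kind of negotiation game. The item pool, the random private values and the final split shown here are hypothetical placeholders, not the actual scenarios FAIR generated:

```python
import random

ITEMS = ["hat", "book", "ball"]

def make_scenario():
    # A small pool of items to divide, e.g. {"hat": 1, "book": 2, "ball": 3}.
    pool = {item: random.randint(1, 3) for item in ITEMS}
    # Each agent gets its own private value per item; the other agent never sees it.
    values = {
        "Bob":   {item: random.randint(0, 5) for item in ITEMS},
        "Alice": {item: random.randint(0, 5) for item in ITEMS},
    }
    return pool, values

def score(agent, allocation, values):
    # An agent's score is the total private value of the items it walks away with.
    return sum(values[agent][item] * count for item, count in allocation[agent].items())

pool, values = make_scenario()

# One hypothetical deal the dialogue might end with: Bob keeps the hats and balls,
# Alice takes the books.
allocation = {
    "Bob":   {"hat": pool["hat"], "book": 0,            "ball": pool["ball"]},
    "Alice": {"hat": 0,           "book": pool["book"], "ball": 0},
}
print("Bob scores:",   score("Bob", allocation, values))
print("Alice scores:", score("Alice", allocation, values))
```

The detail that matters is that each agent can see only its own values; the only way to find out what a good deal looks like is to talk to the other side.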
The AI language emerged during a part of Facebook’s research in which the AI agents practiced their negotiation skills by chatting with each other. The researchers initially worked to have the agents imitate humans, specifically to avoid problems such as language creation.
“During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent. While the other agent could be a human, FAIR (Facebook AI Research) used a fixed supervised model that was trained to imitate humans,” the researchers explained in their blog entry.
“The second model is fixed, because the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating.”
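A rough, PyTorch-flavored sketch of that training loop appears below. The names (learner, frozen_partner, run_dialogue) and the log_prob interface are placeholders for illustration, not FAIR’s actual code; the point is simply that only one agent’s parameters ever get updated:

```python
def train_negotiator(learner, frozen_partner, run_dialogue, optimizer, episodes=10_000):
    """Reinforcement learning against a fixed partner (illustrative sketch).

    Only `learner` is updated. `frozen_partner` stands in for the supervised
    model trained to imitate humans; its parameters never change, which is
    what keeps the learner's negotiation language anchored to English.
    """
    for _ in range(episodes):
        # Play out one full negotiation between the two agents and collect
        # the points the learner earned from whatever agreement was reached.
        dialogue, reward = run_dialogue(learner, frozen_partner)

        # Policy-gradient-style update: make the learner's own utterances
        # more likely in proportion to the reward they led to.
        loss = -reward * learner.log_prob(dialogue)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Note what is missing: no update to frozen_partner. Updating both
        # agents is what allowed them to drift away from human language.
```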
It turns out that such ad hoc language development has happened with some regularity at Facebook, as well as in other research efforts. For example, Google’s Translate AI is reported to have quietly created an entire language to help it translate between different human languages.
The reason for this language development isn’t that the AI software is taking over, but rather that its priorities are set so that it performs with maximum efficiency. The ‘bots received points for negotiating efficiently, but the researchers assigned no points for sticking with English, so they didn’t. The researchers published a paper that details how this works, and it’s clear that they could have added points for English had they so chosen.
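If staying in English had mattered, it could have been folded into the scoring. Purely as an illustration (the weighting and the english_score function are invented for this sketch, not taken from the paper), a reward that pays for both the deal and the language might look like this:

```python
def negotiation_reward(points_earned, utterances, english_score, weight=0.5):
    """Task reward plus a bonus for staying in readable English.

    points_earned -- value of the items the agent won in the final deal
    utterances    -- the messages the agent sent during the negotiation
    english_score -- callable rating a message from 0.0 (gibberish) to 1.0 (fluent)
    weight        -- how much fluency counts relative to the deal itself
    """
    if not utterances:
        return points_earned
    fluency = sum(english_score(u) for u in utterances) / len(utterances)
    return points_earned + weight * fluency

# Example: a 7-point deal reached in fluent English earns a small bonus.
print(negotiation_reward(7, ["i want the balls", "deal"], lambda msg: 1.0))  # 7.5
```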
“The researchers had the systems stop creating their own language because it wasn’t what they set out to investigate and it was affecting the parameters of their study,” a Facebook spokesperson explained to eWEEK. The spokesperson stressed that the AI process that was shut down was an experimental system, not production software.
But the study did turn up some interesting and potentially useful information, perhaps the most important being that when the agents were communicating with humans in actual negotiation sessions, the humans couldn’t tell that they were talking to a bot. That matters because it demonstrates that these chatbots can determine a desired outcome and work to realize it.
But there’s also an important lesson for IT managers now that machine learning is becoming prevalent. As machine learning and other AI capabilities become part of your critical systems, the single most important activity as you integrate them is to test them thoroughly.
That means testing with more than the expected parameters. You must test the response of your AI systems to wildly divergent data, and you must test them with information that’s simply wrong. After all, if you’re expecting input from humans, at some point they’re going to make a mistake.
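In practice, that can be as simple as a regression suite of hostile and malformed inputs run against whatever interface your AI system exposes. A minimal pytest sketch, assuming a hypothetical respond() entry point:

```python
import pytest

from myapp.chatbot import respond  # hypothetical: your system's inference entry point

# Inputs a real user might plausibly produce by accident or on purpose.
BAD_INPUTS = [
    "",                                # empty message
    "   \n\t ",                        # whitespace only
    "i can i i everything else",       # degenerate bot-speak echoed back at the bot
    "give me -3 hats",                 # nonsensical quantity
    "x" * 100_000,                     # absurdly long input
    "]]}{{;; DROP TABLE deals;",       # hostile, malformed input
]

@pytest.mark.parametrize("message", BAD_INPUTS)
def test_system_survives_bad_input(message):
    reply = respond(message)
    # Whatever happens, the system should return a bounded, readable string.
    # It should never crash, hang or answer in its own private shorthand.
    assert isinstance(reply, str)
    assert 0 < len(reply) < 1_000
```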
In addition, you must develop a means of monitoring what’s happening when your AI system is receiving input from or providing output to other systems. It’s not so much that having your machines create their own language is a problem; it’s that you need to be able to audit the results, and to audit the results, you need to understand what the machines are up to.
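One lightweight way to get that visibility is to put an audit layer between the model and everything that talks to it, logging each exchange as it happens. A minimal sketch, assuming the model is reachable through a simple respond() callable:

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")

def audited(respond):
    """Wrap a model's respond() callable so every exchange is written to an audit log."""
    def wrapper(message):
        reply = respond(message)
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input": message,
            "output": reply,
        }))
        return reply
    return wrapper

# Usage with a hypothetical model entry point:
# respond = audited(negotiation_model.respond)
```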
Finally, deep down inside, AI agents need to be instructed to speak English all the time, not just when they think the humans are listening.