Facebook AI Experiment Shutdown Holds Lessons for IT Industry

It turns out that such ad hoc language development has happened with some regularity at Facebook, as well as in other research efforts. For example, Google’s Translate AI is reported to have quietly created an entire language to help it translate between different human languages.

The reason for this language development isn’t that the AI software is taking over, but that its priorities were set for maximum efficiency. The bots received points for efficiency, but the researchers assigned no points for sticking with English, so the bots didn’t. The researchers published a paper detailing how this works, and it’s clear they could have awarded points for English had they so chosen.
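The incentive at work here can be sketched in a few lines. This is a hypothetical reward function, not Facebook’s actual scoring: the `deal_value`, `turns_used` and `english_score` inputs, and the `english_weight` knob, are all illustrative assumptions. The point is simply that with no weight on English, a garbled-but-fast negotiation outscores a slower one in plain English.

```python
# Hypothetical reward function illustrating the incentive described above:
# agents score points for negotiation efficiency, and unless the researchers
# add a term for staying in English, nothing stops the dialogue from drifting.

def reward(deal_value: float, turns_used: int, english_score: float,
           english_weight: float = 0.0) -> float:
    """Score an agent's negotiation episode.

    deal_value     -- points for the outcome the agent secured
    turns_used     -- dialogue turns spent reaching the deal
    english_score  -- 0.0-1.0 measure of how English-like the utterances were
    english_weight -- 0 by default: deviating from English costs nothing
    """
    efficiency = deal_value / max(turns_used, 1)
    return efficiency + english_weight * english_score

# Default weight of 0: a fast, garbled negotiation beats a slower English one.
garbled = reward(deal_value=10, turns_used=2, english_score=0.1)  # 5.0
plain = reward(deal_value=10, turns_used=4, english_score=1.0)    # 2.5
assert garbled > plain

# Weighting English adherence flips the incentive.
garbled_w = reward(10, 2, 0.1, english_weight=3.0)  # 5.3
plain_w = reward(10, 4, 1.0, english_weight=3.0)    # 5.5
assert plain_w > garbled_w
```

Nothing in the scoring forbids the drift; the only lever is what the researchers choose to reward.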

“The researchers had the systems stop creating their own language because it wasn't what they set out to investigate and it was affecting the parameters of their study,” a Facebook spokesperson explained to eWEEK. The spokesperson stressed that the AI process that was shut down was an experimental system, not production software.

But the study did turn up some interesting and potentially useful information, perhaps the most important being that when the agents communicated with humans in an actual negotiation session, the humans couldn’t tell they were talking to a bot. This is important because it demonstrates that these chatbots can determine a desired outcome and work to realize it.

But there’s also an important lesson for IT managers now that machine learning is becoming prevalent. As machine learning and other AI capabilities become part of your critical systems, the single most important activity as you integrate them is to test them thoroughly.

That means testing with more than the expected parameters. You must test the response of your AI systems with wildly divergent data, and you must test them with information that’s simply wrong. After all, if you’re expecting input from humans, at some point they’re going to make a mistake.
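A minimal sketch of what that kind of testing looks like in practice. The `parse_offer` function here is a hypothetical pre-processing step for a negotiation bot, invented for illustration; the point is that the tests deliberately feed it divergent and outright wrong data, not just the happy path.

```python
# Minimal sketch of testing an AI system's input path beyond the happy case.
# `parse_offer` is a hypothetical pre-processing step for a negotiation bot.

def parse_offer(text: str) -> int:
    """Extract an integer offer amount, rejecting junk instead of crashing."""
    if not isinstance(text, str) or not text.strip():
        raise ValueError("empty or non-string offer")
    cleaned = text.strip().lstrip("$")
    if not cleaned.isdigit():
        raise ValueError(f"not a numeric offer: {text!r}")
    amount = int(cleaned)
    if amount <= 0 or amount > 1_000_000:
        raise ValueError(f"offer out of plausible range: {amount}")
    return amount

# Happy path: the expected parameters.
assert parse_offer("$250") == 250

# Wildly divergent and simply wrong inputs must fail loudly, not silently.
for bad in ["", "   ", "banana", "-50", "999999999999"]:
    try:
        parse_offer(bad)
    except ValueError:
        pass  # rejected as intended
    else:
        raise AssertionError(f"accepted bad input: {bad!r}")
```

The same pattern scales up: whatever sits in front of your model, your test suite should contain more bad inputs than good ones.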

In addition, you must develop a means of monitoring what’s happening when your AI system is receiving input from or providing output to other systems. The problem isn’t so much that your machines might create their own language; it’s that you need to be able to audit the results. And to audit the results, you need to understand what they’re up to.
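One way to make that auditing possible is to record every machine-to-machine exchange verbatim. The sketch below assumes a hypothetical `agent_respond` callable standing in for the real system; the names and structure are illustrative, not any vendor’s API.

```python
# Sketch of an audit trail for machine-to-machine AI traffic: every exchange
# is recorded verbatim so an operator can later reconstruct what the systems
# actually said to each other.

import json
import time

def audited_exchange(agent_respond, message: str, log: list) -> str:
    """Pass a message to the agent and append the full exchange to `log`."""
    reply = agent_respond(message)
    log.append({
        "timestamp": time.time(),
        "input": message,
        "output": reply,
    })
    return reply

# Toy agent standing in for the real system.
def echo_agent(msg: str) -> str:
    return msg.upper()

audit_log: list = []
audited_exchange(echo_agent, "i can i i everything else", audit_log)

# The log is plain JSON-serializable data an auditor can inspect later.
record = json.loads(json.dumps(audit_log[0]))
assert record["output"] == "I CAN I I EVERYTHING ELSE"
```

Whether the agents speak English or something stranger, a log like this is what lets you find out.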

Finally, deep down inside, AI agents need to be instructed to speak English all the time, not just when they think the humans are listening.

Wayne Rash

Wayne Rash is a freelance writer and editor with a 35-year history covering technology. He’s a frequent speaker on business, technology issues and enterprise computing. He covers Washington and...