Microsoft's Tay Chatbot Debacle Reveals Immaturity of AI, Web Trolls

By Wayne Rash  |  Posted 2016-03-28

NEWS ANALYSIS: The sadly embarrassing end to Microsoft's public experiment with a chatbot that would mirror millennial attitudes provides some critical lessons about designing machine learning systems.

Imagine if you will that Edgar Rice Burroughs had taught his famous character Tarzan how to type and then dropped him into a room with nothing but a computer attached to Twitter.

That computer would be the young Tarzan’s only window to the outside world. You’ll remember that Burroughs' fictional Tarzan (the character in the book, not the movie Tarzan who yodeled among jungle greenery) was a very fast learner who had limited context with which to judge humanity.

If you think of Microsoft’s Tay machine learning project as being roughly equivalent to Tarzan, it makes it easier to understand what happened when Microsoft had to take its teenaged chatbot off the Internet after the Web's creepier denizens taught it to spew anti-Semitic rants.

Microsoft has now apologized for its chatbot's actions, and we are left to wonder how this was allowed to happen and whether anyone else will be brave enough to release an artificially intelligent adolescent into the wild.

While it's easy to have a good laugh at Microsoft's expense, and even easier to pontificate about what the company should have done, the fact is that the only thing Microsoft failed to foresee was the depth of depravity that exists on the open Internet. That depravity is especially evident on social media sites, where real users have taken their own lives in response to such mistreatment.

Fortunately, Tay had no feelings to hurt and no fragile self-esteem that could be shattered. It simply listened to the only input it had available, which came from a group of morons who have nothing better to do with their time than express hate. While it's certainly no secret that such individuals exist, they aren't the kind of people one normally encounters on social networks.

Tay’s biggest failing, besides being a soft target, was that it had no context and no way to learn beyond Twitter. Like the young Tarzan who knew only the apes who raised him and the few books he found, Tay was launched without the social tools or defenses to confront a hateful world.

Because Tay had such limited access to information, it had to believe what it was told, so when a group of wannabe Nazis taught it to echo hateful statements, that's what it did. It knew no better.

But in the process of running and then terminating Tay, machine learning researchers have learned important lessons. Chief among them is that such a system needs some form of executive function in its processes, a layer through which it can vet the alleged facts it is being taught before repeating them. The open question is whether a machine can ever have the context to distinguish an insulting or hateful comment from a socially acceptable one.
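To make that idea concrete, the missing "executive function" can be pictured as a vetting layer that sits between what a bot hears and what it is allowed to learn. The sketch below is purely illustrative and is not Microsoft's Tay design: the blocklist, the corroboration threshold and the class and function names are all hypothetical stand-ins, and a production system would rely on a far more capable classifier than a keyword screen.

```python
# Illustrative sketch only: a minimal "executive function" that vets
# candidate phrases before a chatbot adds them to its learned responses.
# The blocklist, corroboration threshold and class names are hypothetical,
# not Microsoft's actual Tay architecture.

BLOCKED_TERMS = {"nazi", "hitler", "holocaust"}   # tiny stand-in blocklist
CORROBORATION_THRESHOLD = 5                       # distinct users required

class VettedLearner:
    def __init__(self):
        self.pending = {}        # phrase -> set of user IDs that said it
        self.learned = set()     # phrases approved for reuse

    def looks_hateful(self, phrase: str) -> bool:
        """Crude keyword screen; a real system would use a trained classifier."""
        words = phrase.lower().split()
        return any(term in words for term in BLOCKED_TERMS)

    def hear(self, phrase: str, user_id: str) -> None:
        """Record a phrase, but only learn it if it passes the screen
        and is corroborated by enough distinct users."""
        if self.looks_hateful(phrase):
            return  # refuse to learn it at all
        self.pending.setdefault(phrase, set()).add(user_id)
        if len(self.pending[phrase]) >= CORROBORATION_THRESHOLD:
            self.learned.add(phrase)

if __name__ == "__main__":
    bot = VettedLearner()
    for uid in ("u1", "u2", "u3", "u4", "u5"):
        bot.hear("cats are great", uid)
        bot.hear("hitler was right", uid)   # screened out, never learned
    print(bot.learned)  # {'cats are great'}
```

Even this toy gate illustrates the trade-off the Tay episode raises: a filter strict enough to block abuse also needs enough context to avoid rejecting legitimate speech.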

