Microsoft's Tay Chatbot Debacle Reveals Immaturity of AI, Web Trolls
NEWS ANALYSIS: The sadly embarrassing end to Microsoft's public experiment with a chatbot that would mirror millennial attitudes provides some critical lessons about designing machine learning systems.

Imagine, if you will, that Edgar Rice Burroughs had taught his famous character Tarzan how to type and then dropped him into a room with nothing but a computer attached to Twitter. That computer would be the young Tarzan's only window to the outside world. You'll remember that Burroughs' fictional Tarzan (the character in the books, not the movie Tarzan who yodeled among the jungle greenery) was a very fast learner who had limited context with which to judge humanity.

If you think of Microsoft's Tay machine learning project as being roughly equivalent to Tarzan, it becomes easier to understand what happened when Microsoft had to take its teenaged chatbot off the Internet after the Web's creepier denizens taught it to spew anti-Semitic rants. Microsoft has since apologized for its chatbot's actions, and we are left to wonder how this was allowed to happen and whether anyone else will be brave enough to release an artificially intelligent adolescent into the wild.
While it's easy to have a good laugh at Microsoft's expense, and even easier to pontificate about what the company should have done, the fact is that the only thing Microsoft failed to foresee was the depth of depravity that exists on the open Internet. This is especially true on social media sites, where harassment has been severe enough that some users have taken their own lives.