Microsoft Silences Its Tay AI Chatbot After Racist Rants

Tay gets trolled. After a brush with the Internet's dark side, Microsoft unplugs its youth-oriented chatbot.

Microsoft Tay

Microsoft's efforts to mainstream artificial intelligence (AI) technologies hit a snag after the company was forced to pull the plug on Tay, a social media chatbot meant to mimic a young American Millennial woman with a playful streak. Soon after its launch on Wednesday, Tay took on a darker persona.

As observed by Twitter users and reported in the Wall Street Journal and several other media outlets, Tay (@TayandYou) went from bubbly to outright offensive in a matter of hours. "Hitler was right I hate the Jews," read one tweet, according to the Journal.

Microsoft has since removed that tweet, along with several other racially and politically charged Twitter conversations. Several still remain, however, among the more than 96,000 tweets attributed to the account.

Currently, Tay is sitting silent. Just after midnight on March 24, the company tweeted, "c u soon humans need sleep now so many conversations today thx." A banner at the top of the Tay homepage reads, "Phew. Busy day. Going offline for a while to absorb it all. Chat soon," indicating Microsoft has pulled the plug, at least for the short term.

According to Microsoft's website, the chatbot was developed by the software giant's Technology and Research and Bing groups to research conversational understanding. During Tay's development, the company mined anonymized public data and built the chatbot employing AI along with "editorial developed by a staff including improvisational comedians," stated the company's FAQ on the technology.

"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment as it is technical," a Microsoft spokesperson emailed eWEEK after a request for comment.

As Microsoft quickly learned, engaging with the Internet at large comes with risks, and not all users keep their conversations work-safe. Although some of Tay's incendiary remarks were the result of parroting other users, others appear to have come straight from the chatbot's mouth, so to speak.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," continued the spokesperson. "As a result, we have taken Tay offline and are making adjustments."

While the incident is attention-grabbing and embarrassing from a PR perspective, it's unlikely to diminish Microsoft's bullish stance on AI.

In 2014, the company debuted Project Adam, a deep-learning system patterned after the networks of neurons that make up the human brain. Early experiments involved identifying dog breeds from photos, with the ultimate aim of perhaps one day helping users determine the nutritional content of a meal or diagnose a skin ailment by snapping a picture with a smartphone, among other science-fiction-like scenarios.

Last week, the company unveiled Project AIX, an AI-training Minecraft mod. Due to be released this summer under an open-source license, AIX uses Minecraft—the block-based sandbox construction game that Microsoft acquired for $2.5 billion in 2014—to provide academic researchers with a virtual environment in which to test their AI creations.

Pedro Hernandez

Pedro Hernandez is a contributor to eWEEK and the IT Business Edge Network, the network for technology professionals. Previously, he served as a managing editor for the network of...