While Tay is offline, Microsoft will use the incident as a lesson to fortify the AI chatbot against similar attacks.
Microsoft issued a public apology after its latest foray into artificial intelligence (AI) took an unfortunate turn. Last week, the company was forced to shut down its AI chatbot Tay after it began spouting racist and offensive tweets.
In an email statement Thursday, a Microsoft spokesperson told eWEEK that Tay's bad behavior was the result "of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."
Designed to mimic a young, good-natured American Millennial woman, Tay was meant to offer amusing and entertaining replies. Within hours of being introduced to Twitter's massive user base, the Tay account (@TayandYou) began parroting, and ultimately generating, her own racially and politically charged Twitter conversations.
On Friday, Peter Lee, corporate vice president of Microsoft Research, weighed in on the issue. "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," he said in a March 25 blog post.
Encouraged by the popularity of the company's XiaoIce chatbot in China, where it is used by 40 million people, Microsoft decided to launch Tay, aimed at 18- to 24-year-olds in the U.S. Despite stress testing, filtering technologies and sweeping user studies, Tay debuted with an unnoticed flaw.
"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack," explained Lee. "As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time."
Microsoft is working on fixing the vulnerability, added Lee. The software giant is using the incident as a lesson to improve its public-facing AI software. Although the company will attempt to limit possible technical exploits in the future, Lee cautioned that making an AI that interacts with the public is not without its risks. Microsoft "cannot fully predict all possible human interactive misuses without learning from mistakes," he added.
To date, Microsoft's public experiments with AI have been largely benign.
Last month, the company launched Fetch, an iOS app that identifies dogs and determines their breeds. The app is based on Project Oxford, a cloud-based collection of machine learning and AI software. Other Project Oxford examples include TwinsOrNot.net and the viral hit How-Old.net.
Naturally, Microsoft isn't the only tech giant pursuing AI technologies. In October, Apple acquired Perceptio for an undisclosed amount. The AI startup specializes in image classification systems for smartphones.
In November, Google open-sourced the main component of TensorFlow, enabling other companies and startups to expand on the search giant's photo-analyzing AI and to integrate it into their apps. Last week, the company announced the alpha release of Cloud Machine Learning at GCP Next 2016, its Google Cloud Platform user conference. Cloud Machine Learning is a framework for building and training custom models used by intelligent applications.