Microsoft's Tay Chatbot Debacle Reveals Immaturity of AI, Web Trolls

 
 
By Wayne Rash  |  Posted 2016-03-28
A normal teenager confronted with social media will often check items presented as facts by Googling them (or, in Microsoft's case, using Bing). While not every teen or adult bothers to verify things learned on the Internet, they still have the world of knowledge they acquired before the day they found Twitter. That's an important difference.

In the Microsoft blog entry that included the company's apology, vice president Peter Lee made it clear that while Tay's tenure on Twitter was brief, it still presented a big research opportunity. Microsoft now has the raw data necessary to build a chatbot or other machine-learning project that won't make the same mistakes.

What might those changes look like? Perhaps a list of keywords the chatbot won't use without checking with the home office, or perhaps more hands-on adult supervision so that corrections can be made before the wheels really come off.
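Neither Lee's post nor anything else Microsoft has published spells out how such a safeguard would work, but a minimal sketch of the two ideas just mentioned might look like the Python below: a placeholder keyword blocklist, with any flagged reply held in a queue for a human moderator instead of being posted. The blocklist terms, function names and queue are hypothetical illustrations, not Microsoft's design.

```python
# Hypothetical sketch of the safeguards described above: a keyword
# blocklist plus a human-review ("adult supervision") queue.
# The terms and names here are placeholders, not Microsoft's design.

BLOCKLIST = {"badword1", "badword2", "conspiracy"}  # placeholder terms

review_queue = []  # replies held for a human moderator


def screen_reply(reply: str) -> bool:
    """Return True if the reply is safe to post immediately.

    A reply containing a blocklisted keyword is held for human
    review ("checking with the home office") rather than posted.
    """
    # Normalize each word: strip common punctuation, lowercase.
    words = {w.strip(".,!?\"'").lower() for w in reply.split()}
    if words & BLOCKLIST:
        review_queue.append(reply)  # hold for a moderator's sign-off
        return False
    return True


if __name__ == "__main__":
    for reply in ["Hello there!", "Here is a conspiracy theory."]:
        if screen_reply(reply):
            print("POST:", reply)
        else:
            print("HELD FOR REVIEW:", reply)
```

A real filter would of course need far more than literal keyword matching, but the sketch captures the essential step: anything suspect is held until a human signs off, rather than posted automatically.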

Microsoft is already thinking about the next stages of machine learning, as Lee points out in his blog entry, “Looking ahead, we face some difficult–and yet exciting–research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical.”

Lee also makes it clear that he realizes designing artificial intelligence and machine-learning systems is very hard to do. It will require many attempts, some of which will prove to be false starts and some of which will fail in ways both large and small.

“To do AI right, one needs to iterate with many people and often in public forums,” Lee explained in his blog. “We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”

While I suspect that Lee's hope of never offending anyone is unrealistic (after all, no matter what you do or say on the Internet, it will always offend someone), the overall goal is achievable, if perhaps more distant than Microsoft expects. Finding a way to use machine learning to interact with people in a positive and supportive manner seems like a good idea.

But as good as the idea seems, someone at Microsoft must know that the next AI project will also be attacked by trolls trying to hijack it. Microsoft should also keep in mind that such projects will suffer malware attacks, denial-of-service attacks and any other abuse the people who assaulted Tay are capable of thinking up.

Perhaps, in the process, Microsoft will do more than find a way to create a useful form of AI; it may also help uncover the root causes of such behavior on the Internet.