IBM's Watson Supercomputer Beats Humans in Jeopardy Practice Match

By Fahmida Y. Rashid  |  Posted 2011-01-13
IBM showed off its Watson supercomputer in an exhibition human-versus-machine game of Jeopardy and discussed the technology's potential practical uses in the IT industry, especially in health care and tech support.

Watson, IBM's latest DeepQA supercomputer, defeated its two human challengers during a demonstration round of Jeopardy on Jan. 13. The supercomputer will face former Jeopardy champions Ken Jennings and Brad Rutter in a two-game, men-versus-machine tournament to be aired in February.

However, the Jeopardy match-up was not the "culmination" of four years of work by the IBM Research scientists on the Watson project, but rather "just the beginning of a journey," Katharine Frase, vice president of industry solutions and emerging business at IBM Research, told eWEEK.

Supercomputers that can understand natural human language, complete with puns, plays on words and slang, to answer complex questions will have applications in areas such as health care, tech support and business analytics, David Ferrucci, the lead researcher and principal investigator on the Watson project, said at the media event showcasing Watson at IBM's Yorktown Heights Research Lab.

Watson analyzes "real language," or natural spoken language, as opposed to simple keyword-based queries, to understand a question, and then searches the millions of pieces of information it has stored to find a specific answer, Ferrucci said. "The hard part for Watson is finding and justifying the correct answer, computing confidence that it's right and doing it fast enough," he said.
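
As a rough illustration of the loop Ferrucci describes (generate candidate answers, weigh evidence, compute confidence, decide whether to respond), the following is a minimal, purely hypothetical Python sketch. The corpus, the overlap-based scoring and the confidence threshold are inventions for this example, not IBM's DeepQA implementation.

```python
# Hypothetical sketch of a DeepQA-style loop: score candidate answers against
# stored evidence and "buzz in" only when confidence clears a threshold.
# The corpus, scoring and threshold are illustrative, not IBM's actual code.
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    confidence: float  # 0.0 to 1.0

def generate_candidates(clue: str, corpus: dict[str, list[str]]) -> list[Candidate]:
    """Score every stored answer by how much of its evidence overlaps the clue."""
    clue_terms = set(clue.lower().split())
    candidates = []
    for answer, evidence_passages in corpus.items():
        evidence_terms = set(" ".join(evidence_passages).lower().split())
        overlap = len(clue_terms & evidence_terms)
        candidates.append(Candidate(answer, overlap / max(len(clue_terms), 1)))
    return sorted(candidates, key=lambda c: c.confidence, reverse=True)

def respond(clue: str, corpus: dict[str, list[str]], threshold: float = 0.5) -> str:
    """Answer in Jeopardy style only if the best candidate is confident enough."""
    best = generate_candidates(clue, corpus)[0]
    if best.confidence >= threshold:
        return f"What is {best.answer}? (confidence {best.confidence:.2f})"
    return "-- stays silent: not confident enough to buzz in --"

corpus = {
    "Toronto": ["largest city in Canada on Lake Ontario"],
    "Chicago": ["US city with O'Hare and Midway airports"],
}
print(respond("This Canadian city on Lake Ontario is the country's largest", corpus))
```

The point of the sketch is the shape of the pipeline: many candidates are scored against evidence, and the system answers only when its best candidate clears a confidence bar.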

This is where Jeopardy comes in. The quiz show covers a broad range of topics, and its questions can be phrased in a variety of ways, whether quirky, straightforward or downright strange. Creating a machine that can take on human challengers on Jeopardy became a "rally cry" for researchers to think about question-and-answer processing in a "more open and different way," Frase said.

"Grand challenges are a big deal to IBM," said John Kelly III, IBM's senior vice president and director of IBM Research. IBM's last major challenge was Deep Blue, the supercomputer that took on Chess Grandmaster Garry Kasparov in 1987. Many of the supercomputers used by the Department of Defense are the "sons and grandsons" of Deep Blue, Frase said.

Jeopardy is significantly more complicated than chess, said Ferrucci. Chess can be broken down mathematically into a finite set of combinations, he said, while Jeopardy has "infinite ways" to extract data. Watson needs to understand the clues, pick which categories to choose, decide how confident it is that an answer is correct, and decide how much to wager on "Daily Double" clues and in the final round.
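
To make the confidence-to-wager step concrete, here is a small, hypothetical sketch of a confidence-weighted Daily Double bet. The expected-value rule, the half-score cap and the minimum wager are invented for illustration and are not Watson's actual strategy.

```python
# Hypothetical illustration of a confidence-weighted Daily Double wager.
# The rule below (bet big only when the expected value is positive) is a toy
# stand-in for Watson's far more sophisticated wagering model.
def daily_double_wager(score: int, confidence: float, min_wager: int = 5) -> int:
    """Bet more when confidence is high; never risk more than half the current score."""
    cap = max(score // 2, min_wager)
    # Expected value of a wager w is roughly w * (2 * confidence - 1);
    # it is only positive when confidence > 0.5, so bet the minimum otherwise.
    if confidence <= 0.5:
        return min_wager
    return int(min_wager + (cap - min_wager) * (2 * confidence - 1))

print(daily_double_wager(score=6000, confidence=0.9))  # large, confident bet
print(daily_double_wager(score=6000, confidence=0.4))  # minimal, hedged bet
```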

The technology has to process natural language to understand "what did they mean" versus "what did they say," which has big implications for the health care sector, said Frase. Patients don't describe their ailments in the terms doctors learned in medical school; they are more likely to use the terms they picked up from their parents growing up, she said.

A Watson-like system could take that information, correlate it against all the medical journals and other relevant information, and say, "Here's what I think and why," while showing the evidence for how it reached its conclusion, according to Frase. The machine won't make diagnosis or treatment decisions (a doctor would), but it can present information that helps the doctor make those decisions much faster and more efficiently, said Frase. A similar situation exists in tech support, where the system would be able to figure out what the problem is.
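
A minimal, hypothetical sketch of the kind of matching Frase describes might map a patient's everyday wording onto clinical terms, then rank candidate conditions while showing the supporting evidence. The synonym table, conditions and scoring below are toy examples, not a real medical knowledge base or anything from IBM's system.

```python
# Hypothetical sketch: translate a patient's colloquial wording into clinical
# terms, then rank candidate conditions by evidence overlap and show that
# evidence. All data and scoring here are invented for illustration.
LAY_TO_CLINICAL = {
    "tummy ache": "abdominal pain",
    "throwing up": "vomiting",
    "the runs": "diarrhea",
}

CONDITIONS = {
    "gastroenteritis": {"abdominal pain", "vomiting", "diarrhea"},
    "migraine": {"headache", "nausea", "photophobia"},
}

def normalize(complaint: str) -> set[str]:
    """Translate lay phrases into clinical terms before matching."""
    return {clinical for lay, clinical in LAY_TO_CLINICAL.items()
            if lay in complaint.lower()}

def rank_conditions(complaint: str) -> list[tuple[str, float, set[str]]]:
    """Return (condition, confidence, matched evidence), best match first."""
    symptoms = normalize(complaint)
    ranked = []
    for condition, known_symptoms in CONDITIONS.items():
        matched = symptoms & known_symptoms
        ranked.append((condition, len(matched) / len(known_symptoms), matched))
    return sorted(ranked, key=lambda r: r[1], reverse=True)

for condition, confidence, evidence in rank_conditions(
        "My kid has a tummy ache and keeps throwing up"):
    print(f"{condition}: {confidence:.0%} -- evidence: {sorted(evidence)}")
```

A real system would work over unstructured journal text and far richer evidence, but the answer-plus-evidence-plus-confidence shape is the same one described for Watson's Jeopardy clues.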
