Watson, IBM’s latest DeepQA supercomputer, defeated its two human challengers during a demonstration round of Jeopardy on Jan. 13. The supercomputer will face former Jeopardy champions Ken Jennings and Brad Rutter in a two-game, man-versus-machine tournament to be aired in February.
However, the Jeopardy match-up was not the “culmination” of four years of work by the IBM Research scientists who worked on the Watson project, but rather “just the beginning of a journey,” Katharine Frase, vice president of industry solutions and emerging business at IBM Research, told eWEEK.
Supercomputers that can understand natural human language, complete with puns, plays on words and slang, to answer complex questions will have applications in areas such as health care, tech support and business analytics, David Ferrucci, the lead researcher and principal investigator on the Watson project, said at the media event showcasing Watson at IBM’s Yorktown Heights Research Lab.
Watson analyzes “real language,” or spoken language, as opposed to simple, keyword-based questions: it has to understand the question and then sift through the millions of pieces of information it has stored to find a specific answer, Ferrucci said. “The hard part for Watson is finding and justifying the correct answer, computing confidence that it’s right and doing it fast enough,” he said.
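To give a sense of what computing that confidence might involve, here is a minimal, hypothetical sketch in Python of a question-answering loop that generates candidate answers, scores the supporting evidence and answers only when its confidence clears a threshold. The corpus, function names and scoring rule are illustrative stand-ins, not IBM’s DeepQA implementation.

```python
# Minimal, hypothetical sketch of a confidence-driven QA loop.
# The corpus, candidate generation and scoring are toy stand-ins,
# not IBM's DeepQA implementation.

def generate_candidates(clue: str, corpus: dict[str, str]) -> list[str]:
    """Propose any corpus entry that shares at least one word with the clue."""
    clue_words = set(clue.lower().split())
    return [title for title, text in corpus.items()
            if clue_words & set(text.lower().split())]

def confidence(clue: str, candidate: str, corpus: dict[str, str]) -> float:
    """Toy evidence score: fraction of clue words supported by the candidate's text."""
    clue_words = set(clue.lower().split())
    text_words = set(corpus[candidate].lower().split())
    return len(clue_words & text_words) / max(len(clue_words), 1)

def answer(clue: str, corpus: dict[str, str], threshold: float = 0.6):
    """Return (answer, confidence), or (None, confidence) if the system should stay quiet."""
    ranked = sorted(generate_candidates(clue, corpus),
                    key=lambda c: confidence(clue, c, corpus), reverse=True)
    if not ranked:
        return None, 0.0
    best = ranked[0]
    conf = confidence(clue, best, corpus)
    return (best, conf) if conf >= threshold else (None, conf)

corpus = {"Toronto": "Toronto is the largest city in Canada",
          "Chicago": "Chicago is the largest city on Lake Michigan"}
print(answer("largest city on Lake Michigan", corpus))  # ('Chicago', 1.0)
```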
This is where Jeopardy comes in. The quiz show covers a broad range of topics, and its questions can be asked in a variety of ways: quirky, straightforward or downright strange. Creating a machine that can take on human challengers on Jeopardy became a “rally cry” for researchers to think about question-and-answer processing in a “more open and different way,” Frase said.
“Grand challenges are a big deal to IBM,” said John Kelly III, IBM’s senior vice president and director of IBM Research. IBM’s last major challenge was Deep Blue, the supercomputer that defeated chess grandmaster Garry Kasparov in 1997. Many of the supercomputers used by the Department of Defense are the “sons and grandsons” of Deep Blue, Frase said.
Jeopardy is significantly more complicated than chess, Ferrucci said. Chess can be broken down mathematically and has a finite number of combinations, he said, while Jeopardy offers “infinite ways” to extract data. Watson needs to understand the clues, pick which categories to play, gauge how confident it is that its answer is correct and decide how much to wager on “Daily Double” questions and in the final round.
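As a rough illustration of that wagering decision, the sketch below assumes a simple expected-value rule, not Watson’s actual betting strategy: the bet grows with how far the machine’s confidence exceeds a coin flip.

```python
# Toy Daily Double wager based on expected value, not Watson's actual strategy.
# Wagering w with confidence p has an expected gain of w * (2p - 1), so the bet
# should grow with the margin by which p exceeds 0.5.

def daily_double_wager(score: int, conf: float, max_fraction: float = 0.5) -> int:
    """Bet a slice of the current score, scaled by the confidence edge."""
    if conf <= 0.5:
        return 5  # Jeopardy's minimum Daily Double wager
    edge = 2 * conf - 1  # expected gain per dollar wagered
    return max(5, int(score * max_fraction * edge))

print(daily_double_wager(score=4400, conf=0.85))  # 1540
```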
The technology has to process natural language to understand “what did they mean” versus “what did they say,” which has a lot of implications for the health care sector, Frase said. Patients don’t describe their ailments using the terms doctors learned in medical school, but more likely the terms they picked up from their parents growing up, she said.
A Watson-like system can take that information, correlate it against all the medical journals and other relevant information, and say, “Here’s what I think and why,” while showing the evidence for how it came to its conclusion, according to Frase. The machine won’t make diagnosis or treatment decisions; a doctor will. But it can present information to help the doctor, making diagnoses and treatment decisions much faster and more efficient, Frase said. A similar situation exists in tech support, where the system would be able to figure out what the problem is.
There are potential applications for Watson-like systems in legal and security settings as well. The system, for example, could look up court cases. While that’s possible with a search engine, with keyword matching alone it is “not always clear” what a case “was really about,” Frase said. The system would be able to absorb the nuances and subtleties of a case and present a better set of results when researching case precedents.
Researchers trained Watson on 200 million pages of text, or about 1 million books, drawn from sources such as encyclopedias, movie scripts, newspapers and even children’s book abstracts. Watson uses the data to analyze contextual clues and figure out how the pieces of information relate to one another. “Watson is not just storing all that information,” said Bernie Spang, director of strategy for the software group at IBM Research.
Instead of just storing data, which would make it a glorified search engine, Watson’s algorithms evaluate context to understand and correlate the information with other things it knows. For example, Watson knew BusinessWeek’s quote about former General Electric CEO Jack Welch, “If leadership is an art, then surely Jack Welch has proved himself a master painter,” according to Ferrucci. Faced with a Jeopardy-style clue such as “Welch ran this,” Watson would have to know that Welch was not a painter at GE, Ferrucci said.
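A toy way to see the difference between matching words and understanding them: the hypothetical sketch below, which is not IBM’s algorithm, shows how a bag-of-words lookup surfaces “painter” and “art” from that quote, while a check against a tiny made-up relation table answers “Welch ran this” correctly.

```python
# Hypothetical contrast between keyword matching and relation checking.
# The quote is from the article; the tiny "fact table" is a toy stand-in.

QUOTE = ("If leadership is an art, then surely Jack Welch "
         "has proved himself a master painter")
RELATIONS = {("Jack Welch", "ran", "General Electric")}  # toy knowledge base

def keyword_guess(clue: str, text: str) -> set[str]:
    """Bag-of-words overlap: surfaces art-themed noise for an art-flavored clue."""
    return set(clue.lower().split()) & set(text.lower().split())

def ran_this(subject: str):
    """Answer 'X ran this' by requiring an explicit 'ran' relation, not word co-occurrence."""
    for who, rel, what in RELATIONS:
        if who == subject and rel == "ran":
            return what
    return None

print(keyword_guess("this master painter of modern art", QUOTE))  # noise such as {'art', 'master', 'painter'}
print(ran_this("Jack Welch"))  # 'General Electric'
```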
Jeopardy was a “great application” to prove, or disprove, whether the DeepQA algorithms used to build Watson work for this kind of information learning, understanding and retrieval, Spang said.
Watson is powered by 10 racks of Power 750 servers running Linux, containing 15 terabytes of RAM and 2,880 processor cores operating at 80 teraflops, IBM said. Each Power7 system can run thousands of simultaneous analytics algorithms to sift through more than 15 terabytes of information stored in Watson’s “brain.” The data is stored in a DB2 database.
Under the hood, Watson is all open source, using Eclipse as the tools platform along with Apache Hadoop and the Unstructured Information Management Architecture, or UIMA (an “IBM Research creation” that is now an Apache project), to analyze unstructured data, Spang said.
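To give a flavor of the annotator model UIMA popularized, here is a schematic Python sketch; the real UIMA is a Java framework, and these class and function names are illustrative only. A document flows through a chain of annotators that each add structured annotations to a shared analysis object, loosely analogous to UIMA’s Common Analysis Structure.

```python
# Schematic sketch of a UIMA-style annotator chain in Python. The real UIMA is a
# Java framework; these class and function names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Analysis:
    """Shared container (loosely analogous to UIMA's CAS) for text plus annotations."""
    text: str
    annotations: list[tuple[str, str]] = field(default_factory=list)

def tokenizer(doc: Analysis) -> Analysis:
    for word in doc.text.split():
        doc.annotations.append(("token", word))
    return doc

def year_detector(doc: Analysis) -> Analysis:
    for word in doc.text.split():
        if word.isdigit() and len(word) == 4:
            doc.annotations.append(("year", word))
    return doc

def run_pipeline(text: str, annotators) -> Analysis:
    doc = Analysis(text)
    for annotate in annotators:  # each stage adds annotations; none removes earlier ones
        doc = annotate(doc)
    return doc

result = run_pipeline("Deep Blue beat Kasparov in 1997", [tokenizer, year_detector])
print(result.annotations)
```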
The advances in processing and computing technology built into the Power7 chips were a result of the work that IBM did while building Deep Blue, Spang said.
At the media event, Jennings and Rutter played a practice round against Watson, answering 15 questions across categories such as “Chicks Dig Me” and “MC.” While Watson came out strong, Jennings and Rutter held their own; the final score was Watson $4,400, Jennings $3,400 and Rutter $1,200. As the game’s host, Alex Trebek, often says on the real show, “It’s still anyone’s game.”
Taping of the two-game tournament begins Jan. 14, and the shows will be broadcast Feb. 14, 15 and 16. Instead of shipping Watson to Los Angeles, where the show is usually taped, IBM spent $1 million to build a replica set at its Yorktown Heights Research Center. The game board is much smaller than the original, but the host’s and players’ podiums are exactly the same, said Harry Friedman, executive producer of Jeopardy.
The first-place winner will receive $1 million, second place gets $300,000 and third place nets $200,000. IBM will donate 100 percent of Watson’s winnings to the charity World Vision, while Rutter and Jennings said they would each donate half of their prizes to their respective charities.