Google Helping Refine How Computers 'Read' Language

Google is providing tools to help researchers teach computers to distinguish between words that sound alike, such as "ladder" and "latter," based on their context.

Helping computers better learn the context of words, so that they can more accurately interpret what is being sought or said, is the goal of a project undertaken by Google's research team.

To support improved ways for machines to figure out the context of searches and inquiries, Google is contributing data sets that researchers can use to refine how computers "read" and "hear" words.

The effort was announced by Dave Orr, Google Research product manager, and Ciprian Chelba, a Google research scientist, in an April 30 post on the Google Research Blog.

"Language is chock-full of ambiguity, and it can turn up in surprising places," wrote Orr and Chelba. "Many words are hard to tell apart without context: most Americans pronounce 'ladder' and 'latter' identically, for instance."

Keyboard inputs on mobile devices have a similar problem, especially for IME keyboards, they wrote. For example, the input patterns for "Yankees" and "takes" look similar as users slide their fingers across the keypad, making it more difficult for devices to predict exactly what the user intends, they wrote.

That's where language models come in: they give computers a statistical sense of which words are likely in a given context. "These are used for predictive keyboards, but also speech recognition, machine translation, spelling correction, query suggestions, and so on," wrote Orr and Chelba. "Often, those are specialized: word order for queries versus Web pages can be very different. Either way, having an accurate language model with wide coverage drives the quality of all these applications."
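As a rough illustration only (this is a toy sketch, not Google's implementation, and the tiny corpus is invented for the example), a bigram language model shows how word context can pick between homophones like "ladder" and "latter": the model counts word pairs in training text and scores each candidate transcription by how probable its word sequence is.

```python
from collections import defaultdict

# Toy training corpus (invented for illustration).
corpus = (
    "he climbed the ladder to the roof "
    "she leaned the ladder against the wall "
    "the painter moved the ladder "
    "of the two the latter is better "
    "the former failed but the latter succeeded"
).split()

# Count adjacent word pairs (bigrams) and how often each word appears
# as the first element of a pair.
bigram_counts = defaultdict(int)
prev_counts = defaultdict(int)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[(prev, word)] += 1
    prev_counts[prev] += 1

def bigram_prob(prev, word, vocab_size=1000):
    # Add-one (Laplace) smoothing so unseen pairs still get a small,
    # nonzero probability.
    return (bigram_counts[(prev, word)] + 1) / (prev_counts[prev] + vocab_size)

def score(sentence):
    # Probability of a sentence as the product of its bigram probabilities.
    words = sentence.split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigram_prob(prev, word)
    return p

# Both candidates sound identical when spoken; the model prefers the one
# whose word sequence was more common in the training text.
print(score("he climbed the ladder") > score("he climbed the latter"))  # True
```

Real systems use far larger models and corpora, but the principle is the same: more context statistics mean better disambiguation.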

One way Google sees to advance language modeling is a large, standard set of words with benchmarks, which would make it easy to compare results and experiment with new modeling techniques, wrote Orr and Chelba.

"To that end, we are releasing scripts that convert a set of public data into a language model consisting of over a billion words, with standardized training and test splits, described in an arXiv paper," Orr and Chelba wrote. "Along with the scripts, we're releasing the processed data in one convenient location, along with the training and test data. This will make it much easier for the research community to quickly reproduce results, and we hope will speed up progress on these tasks."
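The standardized training and test splits the researchers describe matter because any variation in preprocessing or partitioning makes results incomparable. A hypothetical sketch of one way to make a split deterministic, so every researcher derives the identical partition from the same data (the function names and hashing scheme here are illustrative assumptions, not the released scripts):

```python
import hashlib

def normalize(sentence):
    # A fixed, shared preprocessing step: lowercase and collapse whitespace.
    # Everyone applying the same normalization gets identical text.
    return " ".join(sentence.lower().split())

def assign_split(sentence, test_fraction=0.1):
    # Hash the normalized sentence so the train/test assignment depends
    # only on the content, not on file order or a random seed.
    digest = hashlib.sha1(normalize(sentence).encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return "test" if bucket < test_fraction * 100 else "train"

sentences = [
    "Language is chock-full of ambiguity .",
    "The ladder leaned against the wall .",
    "He preferred the latter option .",
]
splits = {s: assign_split(s) for s in sentences}
```

Because the assignment is a pure function of each sentence, two labs running the sketch on the same corpus reproduce exactly the same splits, which is the property a shared benchmark needs.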

The benchmark scripts and data are freely available to all researchers who want to work with the data set.

"The field needs a new and better standard benchmark," wrote Orr and Chelba. "Currently, researchers report from a set of their choice, and results are very hard to reproduce because of a lack of a standard in preprocessing. We hope that this will solve both those problems, and become the standard benchmark for language modeling experiments. As more researchers use the new benchmark, comparisons will be easier and more accurate, and progress will be faster."

This is not Google's first language-focused effort. In December 2013, Google's Translate services, which help people communicate across non-native languages, added nine more languages, including five used in Africa, bringing the total to 80 supported languages.