AT&T Innovations Showcase Offers Glimpse of a Mobile Future

AT&T's annual showcase of its ideas-in-the-works emphasized smarter browsers and better user recognition.

NEW YORK—AT&T officials at the carrier's Innovations Showcase April 4 stood surrounded by some of the ideas being worked on in the company's Foundry, at AT&T Labs and beyond.

Not all the ideas will make the transition to monetized products, but all of them, in their varied ways, seek to address how AT&T's platform might "reshape how people run their lives," said Abhi Ingle, AT&T's new vice president of ecosystem and innovation.

This year's showcase focused on customers—working parents, telecommuters and business travelers—who are savvy, have their devices connected, and are looking for ways to make those devices simplify and enhance their lives.

Several of the solutions relied on AT&T's Watson speech engine. The same engine is behind several enhancements, announced the same day, that AT&T has made to the Speech API (application programming interface) it offers to developers.

Watson's speech recognition, natural-language processing and text-to-speech capabilities were used in the AT&T Translator, which provides near-real-time translated captions for video.

Presenting it, a researcher described it as like having a "personal UN translator," which is about as apt a description as one could ask for. Imagine an online business presentation in which workers in France, Spain, Germany and the United States all dial in, view the video and read essentially instant subtitles in their own languages.

The translation takes place in the cloud.
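For illustration, the flow just described, speech recognized, translated in the cloud, and delivered as per-viewer subtitles, can be sketched as a toy pipeline. Every name and lookup table below is a hypothetical stand-in, not AT&T's actual Watson API:

```python
# Toy sketch of the translator pipeline described above: speech is
# transcribed, translated "in the cloud," and delivered as subtitles in
# each viewer's chosen language. The tiny lookup table stands in for real
# speech and translation models; nothing here is AT&T's actual API.

# Stand-in "translation model": phrase -> language -> translation.
TRANSLATIONS = {
    "welcome to the meeting": {
        "fr": "bienvenue à la réunion",
        "es": "bienvenidos a la reunión",
        "en": "welcome to the meeting",
    }
}

def transcribe(audio):
    # Real speech recognition would run here; the toy just unwraps text.
    return audio["spoken_text"]

def translate(text, target_lang):
    return TRANSLATIONS[text][target_lang]

def subtitle_stream(audio, viewers):
    """Return per-viewer subtitles, keyed by each viewer's language choice."""
    text = transcribe(audio)
    return {viewer: translate(text, lang) for viewer, lang in viewers.items()}

subs = subtitle_stream(
    {"spoken_text": "welcome to the meeting"},
    {"paris": "fr", "madrid": "es", "new_york": "en"},
)
print(subs["paris"])  # bienvenue à la réunion
```

The point of the structure is that the heavy lifting happens once, server-side, and each viewer only receives the subtitle stream matching their own language setting.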

Similarly, the same researchers devised a text-messaging application that could translate a message in the cloud and deliver it in the recipient's chosen language, as set in the phone's settings. It would require no sign-up and no special application, and it would be carrier-agnostic. Send a message in Spanish and receive the reply in Spanish, even if the person on the other end wrote back in French.
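The messaging idea can be reduced to a single routing decision: the network translates in transit, and each side reads in the language configured on their own phone. A minimal sketch, with a toy dictionary standing in for a real translation service and made-up phone numbers:

```python
# Minimal sketch of the cloud-translated messaging flow described above.
# The phone numbers, settings table and translations are all hypothetical;
# a real translation service would replace TOY_DICT.

PHONE_SETTINGS = {"+34600000001": "es", "+33600000002": "fr"}

TOY_DICT = {
    ("hola", "fr"): "salut",
    ("salut", "es"): "hola",
}

def translate(text, target_lang):
    # Fall back to the original text when no translation is known.
    return TOY_DICT.get((text, target_lang), text)

def deliver(message, recipient_number):
    """Translate a message into the recipient's configured language."""
    target = PHONE_SETTINGS[recipient_number]
    return translate(message, target)

# A Spanish speaker texts a French speaker; each reads their own language.
print(deliver("hola", "+33600000002"))   # salut
print(deliver("salut", "+34600000001"))  # hola
```

Because the translation hangs off the recipient's own settings rather than anything the sender does, no sign-up or special app is needed on either end.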

"Eventually, we hope that these technologies will not only have a significant commercial value, but also enable people to understand each other better, despite their linguistic barriers," said Srinivas Bangalore, a researcher at AT&T Labs who contributed more than 1 million hours to the Watson speech engine.

A voice biometrics project also used Watson, in a voice capture solution based on SAFE authentication technology. It offers heightened security through voice alone or through a combination of voice, device, knowledge and facial recognition.

In an example of how SAFE might be connected to a bank account, a user was asked to type in a user name and then read aloud a sentence generated by an algorithm to elicit certain tones.

In an instance where the system (rightly) doubted the user's identity, it fell back to a combination of voice and knowledge, asking the user to read aloud, and correctly complete, a fill-in-the-blank sentence: "My favorite restaurant is ___."
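The fallback behavior in the demo, voice alone when the match is convincing, voice plus a knowledge check when it is doubtful, can be sketched as follows. The scoring function is a placeholder; real voice biometrics (and AT&T's SAFE technology) work very differently:

```python
# Hedged sketch of the multi-factor fallback described above: accept voice
# alone when its score clears a threshold, otherwise escalate to voice plus
# a knowledge factor. voice_score() is a toy placeholder; real systems
# compare acoustic features, not strings.

VOICE_THRESHOLD = 0.8

def voice_score(sample, enrolled_print):
    # Placeholder scoring: exact match is confident, anything else doubtful.
    return 1.0 if sample == enrolled_print else 0.5

def knowledge_check(answer, expected):
    # "My favorite restaurant is ___." style fill-in-the-blank.
    return answer.strip().lower() == expected.strip().lower()

def authenticate(sample, enrolled_print, answer=None, expected=None):
    score = voice_score(sample, enrolled_print)
    if score >= VOICE_THRESHOLD:
        return True  # voice alone was convincing
    if answer is not None and expected is not None:
        # Doubtful voice match: require the knowledge factor as well.
        return score > 0.0 and knowledge_check(answer, expected)
    return False

print(authenticate("matching-print", "matching-print"))  # True
print(authenticate("noisy-print", "matching-print",
                   answer="Luigi's", expected="luigi's"))  # True
```

The design choice worth noticing is that the weaker factor is never accepted alone: a doubtful voice score can only be rescued by adding a second factor, never by lowering the bar.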

Some of the innovations on display were far less sophisticated. That isn't to say they won't be useful.

One app helps users become more environmentally conscious and make smarter, "greener" choices. Another could use the phone's sensors to measure how far a user has run or walked and how many calories they've burned, help create a visual diary of the foods a person has eaten, and offer health advice based on actual knowledge of the user. It knows better, for example, than to suggest that a person who has never run 3 miles get up and run 10. It will suggest a walk, however.