AT&T Innovations Include BYOD Apps, Watson Speech Recognition

AT&T Labs is offering a look at the future, showing off nearly-there technologies, from a more intuitive GPS system to more helpful televisions to better speech-recognition technology for travelers and drivers.

AT&T is quickly rolling out Long-Term Evolution (LTE) coverage alongside its Evolved High-Speed Packet Access (HSPA+) technology (what it likes to call its other 4G network). However, the carrier wants people to know that there's more to the company than just making phone calls.

"When you build a powerful network, people will want to do more with it than just make phone calls," Krish Prabhu, president and CTO of AT&T Labs, said at an April 19 "Living the Networked Life" event, where AT&T had gathered researchers, engineers and other smart folks to show off the big ideas they've been working on to take advantage of AT&T's network and the increasingly capable devices it powers.

Using props that ranged from a Porsche 911 to a hotel doorknob, these creators showed off everything from application programming interfaces (APIs) that developers can grab and integrate into apps, to technologies that various industries might want to adopt and make their own. While all the researchers were mum about time frames, Prabhu said all the projects are in the "fairly late stages of development" and "are here because they have great prospects."

These "art-of-the-possible" technologies, as AT&T calls them, included:

A New Kind of Steering Wheel

In-car navigation is great, but instructions like "turn left in 200 feet" aren't super-clear when one's driving, and then there's the aspect of taking one's eyes off the road to consult the little on-dash display. Prompted by AT&T's "It Can Wait" campaign, which encourages drivers not to text while driving, AT&T Labs Researcher Kevin Li started thinking more about driver safety and, with help from a few collaborators, came up with a haptics-enhanced steering wheel.

The wheel is fitted with 20 actuators, each about the size of a screw head and spaced about 2 inches apart around the wheel, so that no matter where a driver's hand or hands are, they'll feel several of them. When the instruction comes to turn, the actuators work together to vibrate either clockwise or counter-clockwise, suggesting a right or left turn. The suggestion of the turn is intuitively understood, and the vibrations speed up as the driver nears the turn, making the direction clearer without the driver taking their eyes off the road.
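The logic described above can be sketched in a few lines. This is purely illustrative: the actuator count (20) comes from the article, but the firing order, pulse timings and distance thresholds are assumptions, not AT&T's implementation.

```python
# Hypothetical sketch of the haptic steering-wheel behavior described above:
# firing 20 ring-mounted actuators in sequence suggests a turn direction,
# and the pulse interval shrinks as the driver nears the turn.

NUM_ACTUATORS = 20  # per the article; spaced ~2 inches apart around the wheel

def actuator_sequence(direction: str) -> list:
    """Firing order: clockwise sweep for a right turn,
    counter-clockwise for a left turn."""
    order = list(range(NUM_ACTUATORS))
    return order if direction == "right" else order[::-1]

def pulse_interval_ms(distance_ft: float) -> float:
    """Pulses speed up as the turn approaches (assumed linear ramp:
    500 ms between pulses at 200 ft, down to 100 ms at the turn)."""
    clamped = max(0.0, min(distance_ft, 200.0))
    return 100.0 + (500.0 - 100.0) * clamped / 200.0

# A right turn 50 ft ahead: clockwise sweep, noticeably faster pulses.
print(actuator_sequence("right")[:5])
print(pulse_interval_ms(50.0))
```

The key design point the researchers describe, captured here, is that direction is conveyed by the *order* of vibration around the ring, while urgency is conveyed by pulse rate.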

Shadow Interactions with Mobile Phone Projectors

Another project, from Li and AT&T Labs summer intern Lisa Cowan, looks to make better use of pico projectors in mobile phones. While the projectors are a neat tool, letting users project what's on their screens so the content is viewable to several people, Cowan and Li decided to address the matter of only the person holding the phone being able to manipulate the content. They came up with what they call ShadowPuppets, a system that lets users and the people around them manipulate the content on the screen (scroll through photos, zoom in or out on a map, or click on links) using the shadows of their fingers on the projection. It could be used in a boardroom, with multiple people manipulating the screen during a presentation, or among friends standing around trying to figure out where to eat, for example.
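At its core, a system like ShadowPuppets has to map where a finger's shadow falls on the projection back onto the phone's on-screen controls. The tiny sketch below shows that hit-testing step only; the region names, coordinates and the assumption of an upstream shadow-detection stage are all illustrative, not details from the AT&T project.

```python
# Illustrative hit-testing for a ShadowPuppets-style interface: given the
# detected tip of a finger's shadow on the projection (in projection
# coordinates), find which on-screen control it falls on. The upstream
# shadow detection (camera + image processing) is assumed, not shown.

from typing import Optional

# Hypothetical controls on a 640x480 projected photo viewer:
# name -> (x0, y0, x1, y1) bounding box
REGIONS = {
    "prev_photo": (0, 0, 100, 480),     # left edge strip
    "next_photo": (540, 0, 640, 480),   # right edge strip
    "zoom_in":    (280, 400, 360, 480), # bottom-center button
}

def hit_test(shadow_x: int, shadow_y: int) -> Optional[str]:
    """Return the name of the control whose region contains the
    shadow tip, or None if the shadow misses every control."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= shadow_x < x1 and y0 <= shadow_y < y1:
            return name
    return None

print(hit_test(580, 240))  # a shadow on the right edge strip
```

Because any bystander's hand casts a shadow on the projection, this mapping is what lets people other than the phone's holder drive the interface.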

A New Way to Watch TV

Software called Content Augmenting Media (CAM) wants to change the way people watch television. Instead of channel surfing, looking for something they like, CAM delivers to a viewer the content they like. Using a mobile device or tablet, and potentially through speech-to-text capabilities, users essentially input tags, or keywords. These can be names of people, popular topics or locations, and they can be saved and turned on or off, so they don't need to be input again. When a program that matches any of the selected keywords is found, a window at the bottom of the screen can pop up to tell the user what it's found. Potentially, CAM could also offer program suggestions, based on what it knows the user is interested in.
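The matching step described above is simple to sketch: saved tags that are toggled on are intersected with each program's metadata, and any overlap triggers the on-screen notification. The data model and names below are assumptions for illustration, not AT&T's design.

```python
# Minimal sketch of CAM-style keyword matching: the viewer saves tags
# (people, topics, places) that can be toggled on or off; programs whose
# metadata overlaps an active tag would trigger the pop-up window the
# article describes. All tags and program data here are made up.

saved_tags = {"yankees": True, "hawaii": False, "cooking": True}

programs = [
    {"title": "Island Getaways", "keywords": {"travel", "hawaii"}},
    {"title": "30-Minute Meals", "keywords": {"cooking", "food"}},
]

def matches(program: dict) -> set:
    """Return the active saved tags found in the program's metadata."""
    active = {tag for tag, on in saved_tags.items() if on}
    return active & program["keywords"]

for p in programs:
    hits = matches(p)
    if hits:
        # In CAM, this would surface as the pop-up at the bottom of the TV screen.
        print(f"Found: {p['title']} (matched {sorted(hits)})")
```

Note that "Island Getaways" is skipped even though it is tagged "hawaii", because that saved tag is toggled off, which is the save-and-toggle behavior the article highlights.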