At stake is the next generation of speech-enabled applications, which would combine speech with text and graphics and give users the choice of interface: point-and-click, typing or voice. Such applications would have the greatest appeal for sales-force and field-force automation, as well as telematics applications that can be accessed while driving a car, Osborne said.

"[Multimodal applications] are a way for humans to interact with technology," he said. "It's what we have to do to get machines to do what we want them to do rather than what they want us to do.

"You can't just do voice, you can't just do graphics. We have to move technology forward to the way we live today."

Wide deployment of multimodal applications would eventually herald a world of "transparent computing," Osborne said. "People would use computers and not know they're using computers."

Even if the W3C working group yields a common standard, challenges remain. Wireless networks remain slow, and telecommunications companies, along with most other technology companies, are laboring under financial constraints in the hobbling tech economy.

Marketing poses challenges as well. While multimodal applications offer an almost endless number of possibilities, most would add convenience rather than any earth-shifting technological breakthrough. "Everyone's looking for a killer app," said Osborne. "I don't think that there really is a killer app."
Multimodal applications could eventually gain acceptance in the consumer space as well, adding voice interfaces to televisions, refrigerators and microwaves. Users could select the way they interact with applications and devices, Osborne said.