Last week, I mentioned the Section 508 mandates for information accessibility in federal IT and noted the growing influence of these rules on software design. Coincidentally, I now find these requirements cited on the hardware side as well, in a technical paper I saw this past week from Adomo Inc.--whose AdomoMCS appliance adds voice navigation and speech input/output to Microsoft Exchange.
Speech recognition is pigeonholed in many people's minds as a stupid IT trick, something that works well enough to be interesting in a demonstration but badly enough to be a nuisance in real life. The problem, according to innovation theorist Clayton Christensen at the Harvard Business School, is that speech recognition has been mis-marketed toward the people who type the most, rather than being offered to the people who type the worst.
As anyone who's used the handwriting recognition on a late-model Pocket PC knows, current algorithms are surprisingly good at figuring out what word is coming next: Routine e-mail is sufficiently predictable that speech-to-text translation can be much more productive than thumb exercises on diminutive keyboards.
Adomo's approach maximizes the leverage of what's already installed and working well enough: It doesn't create a parallel e-mail and contact management infrastructure, but simply adds new pathways in and out of Exchange (along with a snap-in management module for the Microsoft Management Console). Although I'm no great fan of Exchange, this is a good demonstration of the positive role of a pervasive standard in spreading the cost of improvements across a large potential market.
Custom applications can also reach more users, in more situations, by using Adomo's developer-oriented Voice Gateway appliance as a front end to an application server. We're talking a standards-based product here, using VXML 2.0 semantics and HTTP interaction for something close to plug-and-play integration.
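To make that concrete, here's a minimal sketch of what such a standards-based dialog looks like in VoiceXML 2.0: a voice menu that collects a spoken choice and hands it off to an application server over plain HTTP. The server URL and grammar file here are hypothetical placeholders, not anything documented by Adomo.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="mainmenu">
    <field name="choice">
      <!-- Speech output: the prompt is rendered to the caller -->
      <prompt>Say inbox, calendar, or contacts.</prompt>
      <!-- Speech input: an SRGS grammar (hypothetical file) constrains recognition -->
      <grammar type="application/srgs+xml" src="menu.grxml"/>
    </field>
    <filled>
      <!-- Standard HTTP interaction: submit the recognized value
           to the application server (hypothetical URL) -->
      <submit next="http://appserver.example.com/voice"
              namelist="choice" method="get"/>
    </filled>
  </form>
</vxml>
```

The point is the plumbing: any VXML 2.0 gateway can render the prompts and grammars, and the application server sees nothing more exotic than an HTTP request--which is what makes the integration close to plug-and-play.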
Imagine the bridge of the Starship Enterprise without voice input/output between the crew and the computers. First, it would complicate the lives of the scriptwriters no end: Imagine people in tense situations, not snapping out orders to the ship's computer, but bending over keyboards and thinking out loud about what they're typing. No, it would never work.
But perhaps it's time for more enterprise (lowercase "e") applications to become equally oriented to speech for both receiving and giving information. People in cars are a growing market, by any measure, and cellular phones are the fastest-growing class of connected device. The standards and the technology are giving rise to the products that make this a practical proposition.
If I may indulge myself with a brief P.S.: A surprising number of readers criticized last week's letter for saying that companies should be legally required to make their Web sites accessible to the blind. I say "surprising" because the letter said nothing of the kind.
On the contrary, I narrated the court's reasoning in determining that existing law requires no such thing--although I did also argue that rational commercial self-interest should accomplish what the law, as yet, does not.
Thanks for your many letters.