The problem with putting more “I” in “IT” is the cost of paying people to do it. The initial cost of human data entry is high; the costs that result from data entry error are worse.
When data are captured infrequently, or with long delays, or with dubious accuracy, the systems that depend on that data can approach theoretical levels of performance in every other respect and still be useless–or worse than useless, if people are so impressed by the technology that they overlook the poverty of the data that drives it. But radical improvements in data capture, even though they have the greatest leverage on overall system effectiveness, bring with them controversial issues of privacy and control.
It's not controversial to suggest that the speed, capacity and connectedness of our systems have crossed most of the noticeable thresholds of “good enough”: Most systems today wait for people, much more than the other way around. This means that the most important remaining improvements that we can make are in the direction of giving our systems more information, in more direct and untouched-by-human-hands ways, about the real world around them. As Bill Gosper said at the MIT AI Lab, at least 30 years ago, “Why should we limit computers to the lies people tell them through keyboards?”
And when we look at the difference between how we live today and how we lived 30 years ago, it's clear that Gosper's challenge has been addressed with massive investment in automating or streamlining data entry. We buy gas at the pump by swiping a magnetic-striped card, not by waiting for a person to write down a number or run a mechanical roller over a piece of carbon paper; we get our groceries tallied by a bar-code scanner, not by someone trying to read a price tag.
What seems to many people a qualitative change, though, is the move toward wireless data collection: the transponder on the dashboard that pays your bridge or highway toll, or the wireless tags that may soon be embedded in all manner of products to monitor supply-chain activities. We have to hand someone our credit card, or be within line of sight of a bar-code scanner, to grant access to personal data or to take part in a transaction; wireless technologies make that access much less apparent, even if those involved in their development promise that tags aren't meant to be readable at distances of more than a meter.
More important than questions such as Java versus .Net, 64-bit versus 32-bit, and overseas versus U.S.-based software development effort is the question of how your enterprise applications can become more powerful and more valuable by giving them better access–more immediate, more detailed and more reliable–to information on what you have, where it is and what it's been doing. RFID tags can do more than say, “Here I am”; they can also provide, for example, vital data on storage and transportation conditions for valuable goods, as radio technologies are combined with increasingly cheap and rugged sensors and with other infrastructures like the Global Positioning System.
But if you want to introduce these technologies smoothly, with high levels of customer acceptance, be sure that you don't snag the tripwires of nervousness about the entry of these technologies into everyday life.
Make sure that when data about individual people are involved, the individual has adequate information about the benefits of the technology–and that individuals retain choice, whenever possible, about the degree to which they can be discovered by the machine.