If You're Going to Fail, Admit It
A system may fail without telling the user why, which is quite bad enough; worse still, it may fail invisibly, returning inaccurate results with no hint that anything has gone wrong.
Two trends in application development make failure warnings more important than ever. First, application developers are consuming a growing number of remote resources, created and supported by either contracted service partners or trusted public providers. It's therefore necessary for each new generation of applications to be ever more skeptical about what is known, and how it's known, while being upfront with users when the application depends on uncertain data or external logic.
The second trend, unfortunately, is in conflict with the advice that I just offered. Applications are increasingly migrating into handheld devices, automotive telematics systems, or other hardware environments that often lack general-purpose displays with rich facilities for communicating details to users. "The amount of code in most consumer products is doubling every two years," estimated Remi H. Bourgonjon, director of software technology at Philips Research Laboratory in Eindhoven, the Netherlands, as quoted in Scientific American in 1994; I have no reason to think that this rate of growth has slowed.
Retail environments, for example, are confronting this issue as they try to smooth the path toward self-service checkout. My own experience with self-checkout systems at my local Home Depot has been less than satisfactory, with many opportunities to discover that I was failing to read the minds of the system designers and behave as they had planned.
Say what you will about the costs and complexities of a full-screen display, it at least gives developers plenty of options for adding new elements to the user interface as new issues are identified during testing of application prototypes. This path can lead to excessive complexity, to be sure, but at least the opportunity is there to balance the downside of complexity against the benefits of completeness. The embedded-application trend demands that failure scenarios be considered as early as possible during system design, so that limited-function display hardware doesn't become an excuse for deciding not to give the user important information.
I thought about this last Friday afternoon, when I booted up my GPS (Global Positioning System) receiver during a 46-mile backpack trip with a Boy Scout group in the northern part of Yosemite National Park. To my surprise, the altitude readout showed a figure of almost 9000 feet, which I knew was incorrect: My legs, in particular, were perfectly certain that we'd spent the previous day hiking 13 miles to descend to 6600 feet.
After a moment, I realized what had happened: Being surrounded by trees, I didn't have enough separate satellite signals to enable three-dimensional navigation, so my receiver was updating my map coordinates but retaining the most recent previous altitude display. This is a perfectly reasonable default behavior, but it should have been accompanied by some warning that the altitude value was suspect: blinking, perhaps, or an italic font, or some other acknowledgment that the receiver knew that it had moved from the location where it had measured that 8980-foot value two days before.
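The pattern the receiver needed is simple enough to sketch: keep the last-known value when a fresh measurement isn't available, but carry a staleness flag alongside it so that even a minimal display can admit the doubt. The class and field names below are my own invention for illustration, not any real GPS firmware interface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AltitudeReading:
    feet: float
    stale: bool = False  # True when the value is a held-over estimate

class Receiver:
    MIN_SATELLITES_3D = 4  # a 3-D fix needs at least four satellite signals

    def __init__(self):
        self.altitude = AltitudeReading(feet=0.0, stale=True)

    def update(self, satellites_visible: int, measured_feet: Optional[float]):
        if satellites_visible >= self.MIN_SATELLITES_3D and measured_feet is not None:
            self.altitude = AltitudeReading(feet=measured_feet, stale=False)
        else:
            # Not enough signals for three-dimensional navigation:
            # keep the old value, but admit the failure by flagging it.
            self.altitude = AltitudeReading(feet=self.altitude.feet, stale=True)

def render(reading: AltitudeReading) -> str:
    # Even a character-cell display can hint at uncertainty with one symbol.
    suffix = " ft?" if reading.stale else " ft"
    return f"{reading.feet:.0f}{suffix}"
```

The point is not the trailing question mark in particular; blinking or italics would serve as well. What matters is that the staleness travels with the value, so the display layer never has to guess.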
My GPS confusion demonstrates that application designers need to think about the situations that their applications will face as sequences and combinations, rather than individual states. In a January 2000 study of software failure in medical devices, for example, some failures were found to occur when two or more boundary conditions were reached at the same time; either one might have been handled correctly, but the coincidence of two such states had not been adequately anticipated.
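Testing for such coincidences means exercising boundary values in combination, not just one at a time. The device, parameters, and limits below are invented for illustration; the technique is simply iterating over the cross product of each parameter's boundary values:

```python
from itertools import product

def infusion_rate_ok(rate_ml_per_hr: float, battery_pct: float) -> bool:
    # Hypothetical acceptance rule: either boundary alone is tolerable,
    # but maximum delivery rate on a nearly empty battery is not.
    if rate_ml_per_hr > 999.0:
        return False
    if battery_pct < 5.0:
        return False
    if rate_ml_per_hr > 900.0 and battery_pct < 10.0:
        return False  # the combined boundary case that one-at-a-time tests miss
    return True

rate_boundaries = [0.0, 999.0]
battery_boundaries = [5.0, 100.0]

# Check every combination of boundary values, not each boundary in isolation.
results = {(r, b): infusion_rate_ok(r, b)
           for r, b in product(rate_boundaries, battery_boundaries)}
```

A test plan that probed only one boundary at a time would pass the maximum rate and the low battery separately, and never discover that their coincidence is rejected.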
All of this assumes that a development team will think hard, in the first place, about the possibility of failure. If you don't admit that failures can occur, you probably won't do a very good job of detecting failure and limiting the damage that it does. You'll be more likely to build a system that works quite well when its assumptions are upheld, but that fails when conditions depart from those expectations.
Users' expectations won't be met, and that's failure by one of its most important definitions.
Tell me what's failing you at firstname.lastname@example.org.