I was reading through a Web page—the lengthy Frequently Asked Questions, in the somewhat resource-intensive Adobe PDF format, for a product that I was reviewing—when I noticed something alarming at the bottom of my screen. The indicator for free disk space, part of a customized cluster of sensors that I display with Symantec's Norton System Doctor utility, was dropping—precipitously.
My laptop's hard disk headroom dwindled, in a matter of a few tens of seconds, from the two free gigabytes with which I'd begun my day to a few hundred megabytes—and kept falling.
Since this particular machine runs Windows 98, I'm in the habit of keeping a weather eye on the sensors that I've configured to monitor its all-important GDI and User resource pools. When either of those 64-Kbyte data pockets gets below 10 percent free, a Win9x machine is headed for trouble—no matter how much of its other memory may still be uncommitted.
My personal approach to research, which often includes at least a dozen concurrent browser windows, must be among the worst-case scenarios for resource consumption; I have both a visible indicator and an audible alarm (an insistent violin riff) for this condition. But I don't normally think of disk space as something that needs to be watched on a second-to-second basis.
As my free space plummeted to only a few tens of megabytes, I felt like the Apollo 13 astronauts, trying to stop the leak before it could kill me. I closed one browser window after another; finally, the free space stabilized at 38 megabytes. That's practically nothing in the context of any modern operating system, although I'm painfully aware that I once thought of a 10MB hard disk as a major new hardware resource.
I flushed both the browser cache and the Windows temporary file directory: I don't remember which one contributed the bulk of the recovery, but I saw with relief that my disk space had returned to its accustomed level. I should have done a calmer, more forensic examination to see just what had threatened to consume my storage like the cockroach that ate Cincinnati, but I was too concerned with getting back to a working state.
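That kind of forensic examination need not be elaborate: a short script that walks the suspect directory and reports its largest files would have named the culprit in seconds. Here's a sketch in Python (the function name, and the choice of the system temporary directory as the target, are my own illustration):

```python
import os
import tempfile

def largest_files(root, top_n=5):
    """Walk a directory tree and return (size, path) pairs, largest first."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                continue  # file vanished or is inaccessible; skip it
    sizes.sort(reverse=True)
    return sizes[:top_n]

# Example: inspect the system temporary directory
for size, path in largest_files(tempfile.gettempdir()):
    print(f"{size:>12,}  {path}")
```

Running something like this against the browser cache and the temp directory would have shown exactly which one had been eating the disk.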
My purpose in telling this story is not to share the adrenaline rush that comes from an unexpected threat, or the self-satisfaction that comes from knowing that prior preparation—in this case, taking the trouble to set up a sensor cluster and getting in the habit of watching it while I work—prevented a nasty consequence.
My purpose, rather, is to urge that application developers bear three principles in mind.
(i) Never assume that an application can use whatever resources it sees fit. Every application should let users specify limits on what it may consume, and every application should be aware enough of its environment to offer to interrupt an operation before the rest of the system is forced into an untenable state—and to make sure it doesn't report a task as complete when it has in fact stalled on some outside resource constraint.
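In modern terms, the check is cheap enough that there's little excuse for skipping it. A minimal sketch in Python (the 200MB floor and the function names are my own hypothetical defaults; a real application would let the user set the limit):

```python
import shutil

MIN_FREE_BYTES = 200 * 1024 * 1024  # hypothetical user-configurable floor

def has_headroom(path, needed_bytes, floor=MIN_FREE_BYTES):
    """True only if the operation would still leave `floor` bytes free."""
    free = shutil.disk_usage(path).free
    return free - needed_bytes >= floor

def save_download(path, needed_bytes):
    """Refuse to start a write that would starve the rest of the system."""
    if not has_headroom(path, needed_bytes):
        raise RuntimeError(
            "Not enough disk space; interrupting before the system "
            "is forced into an untenable state.")
    # ... proceed with the write ...
```

The point is the order of operations: the application asks the environment for permission before committing, rather than discovering the shortage mid-write.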
(ii) Give users as many options as possible for tracking the progress of their tasks, and for interrupting tasks that are bogged down by any resource shortage—whether local disk space, remote server processing capacity, or bandwidth congestion in between. Few things are more aggravating than an application that was obviously conceived for use with abundant local resources, and that, once commanded to take some action, has no well-behaved way to say, "never mind."
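A well-behaved "never mind" usually means doing the work in small chunks, reporting after each one, and checking a cancellation flag between them. One way to sketch that in Python (the function and its parameters are my illustration, not any particular product's API):

```python
import threading

def copy_with_progress(chunks, write, cancel, report):
    """Process work in small chunks, reporting progress and honoring cancel.

    cancel: a threading.Event the user interface can set at any time.
    report: called as report(bytes_done, cancelled=bool) after each chunk.
    Returns True if the task completed, False if it was cancelled.
    """
    done = 0
    for chunk in chunks:
        if cancel.is_set():
            report(done, cancelled=True)   # a well-behaved "never mind"
            return False
        write(chunk)
        done += len(chunk)
        report(done, cancelled=False)
    return True
```

Because the loop yields control between chunks, the user always knows how far the task has gotten, and a cancel request takes effect at the next chunk boundary rather than never.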
(iii) Assume that you'll fail to execute the first two principles correctly, and provide redundant means of having one process monitor another. Like the electronic link that lets a backup power supply initiate an orderly system shutdown while there's still enough power to do so, our distributed, network-based applications have to anticipate imperfect environments and deal with them gracefully—running away, so to speak, that they may live to fight another day.
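The classic redundant monitor is a watchdog: the working task checks in periodically, and an independent thread reacts if the heartbeats stop. A minimal sketch in Python (the class and its timeout policy are my own illustration of the pattern):

```python
import threading
import time

class Watchdog:
    """Independent monitor: if the watched task stops checking in, react."""

    def __init__(self, timeout, on_stall):
        self._timeout = timeout        # seconds of silence before acting
        self._on_stall = on_stall      # callback: shut down, alert, retry...
        self._last_beat = time.monotonic()
        self._stop = threading.Event()

    def beat(self):
        """Called by the watched task each time it makes progress."""
        self._last_beat = time.monotonic()

    def _run(self):
        # Poll a few times per timeout window; exit cleanly on stop().
        while not self._stop.wait(self._timeout / 4):
            if time.monotonic() - self._last_beat > self._timeout:
                self._on_stall()
                return

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):
        self._stop.set()
```

Like the power-supply link, the watchdog lives outside the process it guards, so a stalled task can't silently take its own monitor down with it.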