Peter Coffee: A surprising free-space scare got me to thinking that developers should install resource-usage controls.
I was reading through a Web page (the lengthy Frequently Asked Questions, in the somewhat resource-intensive Adobe PDF format, for a product that I was reviewing) when I noticed something alarming at the bottom of my screen. The indicator for free disk space, part of a customized cluster of sensors that I display with Symantec's Norton System Doctor utility, was dropping, precipitously.
My laptop's hard disk headroom dwindled, in a matter of a few tens of seconds, from the two free gigabytes with which I'd begun my day to less than a few hundred megabytes, and it kept falling.
Since this particular machine runs Windows 98, I'm in the habit of keeping a weather eye on the sensors that I've configured to monitor its all-important GDI and User resource pools. When either of those 64KB data pockets gets down to less than 10 percent free, a Win9x machine is headed for trouble, no matter how much of its other memory may still be uncommitted.
My personal approach to research, which often includes at least a dozen concurrent browser windows, must be among the worst-case scenarios for resource consumption; I have both a visible indicator and an audible alarm (an insistent violin riff) for this condition. But I don't normally think of disk space as something that needs to be watched on a second-to-second basis.
As my free space plummeted to only a few tens of megabytes, I felt like the Apollo 13 astronauts trying to stop the leak before it killed me. I closed one browser window after another; finally, the free space stabilized at 38 megabytes. That's practically nothing in the context of any modern operating system, although I'm painfully aware that I used to think of a 10MB hard disk as a major new hardware resource.
I flushed both the browser cache and the Windows temporary file directory: I don't remember which one made the bulk of the contribution, but I saw, to my relief, that my disk space had returned to its accustomed level. I should have done a calmer, more forensic examination to see just what had threatened to consume my storage like the cockroach that ate Cincinnati, but I was too concerned about getting back to a stable operating state.
My purpose in telling this story is not to share the adrenaline rush that comes from an unexpected threat, or the self-satisfaction that comes from knowing that prior preparation (in this case, taking the trouble to set up a sensor cluster and getting into the habit of watching it while I work) had prevented a nasty consequence.
My purpose, rather, is to urge that application developers bear three principles in mind.
(i) Never assume that any application can use whatever resources it sees fit. All applications should let users specify limits on what an application can consume, and all applications should be aware of their environment to the point that they offer to interrupt an operation before the rest of the system is forced into an untenable state, and that they never assume a task is complete when in fact it has stalled on some outside resource constraint.
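The first principle can be sketched in a few lines. This is a minimal illustration, not anything from the column: the names, the 500MB floor, and the helper functions are all mine, standing in for whatever limit a user might configure.

```python
import shutil

# Assumed user-configured floor: never let free space fall below 500 MB.
MIN_FREE_BYTES = 500 * 1024 * 1024

def has_headroom(free_bytes: int, payload_bytes: int,
                 floor: int = MIN_FREE_BYTES) -> bool:
    """Pure check: would writing payload_bytes still leave 'floor' free?"""
    return free_bytes - payload_bytes >= floor

def can_write(path: str, payload_bytes: int) -> bool:
    """Consult the real filesystem before committing to a large write,
    so the application can ask the user instead of silently filling the disk."""
    return has_headroom(shutil.disk_usage(path).free, payload_bytes)
```

An application built this way checks `can_write()` before each cache or download operation and offers to interrupt, rather than discovering the shortage after the fact.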
(ii) Give users as many different options as possible for tracking the progress of their tasks, and for interrupting tasks that are being bogged down by any resource shortage, whether it be local disk space, remote server processing capacity, or bandwidth congestion in between. Few things are more aggravating than an application that was obviously conceived for use with abundant local resources and that, once commanded to take some action, has no well-behaved way to say, "never mind."
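A "never mind" option might look like the following sketch (class and method names are hypothetical): the task reports progress after every chunk of work and checks a cancellation flag before starting the next one, so it can stop cleanly at any point.

```python
import threading

class CancellableTask:
    """Sketch of a task that reports progress and honors cancellation."""

    def __init__(self):
        self._cancel = threading.Event()  # settable from another thread

    def cancel(self):
        """The user's 'never mind' button."""
        self._cancel.set()

    def run(self, chunks, on_progress):
        """Process chunks one at a time, checking for cancellation between them."""
        done = 0
        for chunk in chunks:
            if self._cancel.is_set():
                return "cancelled"   # stop cleanly, leaving no half-done mess
            done += 1                # (real work on 'chunk' would go here)
            on_progress(done, len(chunks))
        return "complete"
```

Because the check happens between chunks, cancellation never interrupts a chunk mid-write; the same hook is where a disk-space or bandwidth sensor could force a pause.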
(iii) Assume that you'll fail to execute the first two principles correctly, and provide redundant means of having one process monitor another. Like the electronic link that lets a backup power supply initiate an orderly system shutdown while there's still enough power to do so, our distributed, network-based applications have to anticipate imperfect environments and deal with them gracefully: to run away, so to speak, that they may fight another day.
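One common way to have one process monitor another is a heartbeat watchdog; the sketch below (all names mine, not anything the column specifies) declares a worker stalled when its last heartbeat is too old, giving a supervisor time to shut things down in an orderly way, much like the UPS link described above.

```python
import time
from typing import Optional

class Watchdog:
    """Minimal heartbeat monitor: a worker beats; a supervisor asks stalled()."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def beat(self):
        """Called periodically by the monitored worker."""
        self.last_beat = time.monotonic()

    def stalled(self, now: Optional[float] = None) -> bool:
        """True when the worker has been silent longer than the timeout."""
        now = time.monotonic() if now is None else now
        return now - self.last_beat > self.timeout_s
```

The supervisor polls `stalled()` on its own schedule; because it runs as a separate process (or thread), it still works when the worker itself has hung.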
E-mail eWEEK Technology Editor Peter Coffee
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.