Personal storage capacity growth is outpacing the speeds of both our processors and our network connections. This means that many ideas about how to manage future desktop and mobile storage are swimming against the tide, while recent developments are ignoring users' most critical priorities.
Offloading personal data to managed servers is at best a mixed blessing if it's actually cheaper to store, and takes less time to retrieve, on our own hard disks.
Compressing data is likewise fraught: It's often cheaper and faster today to make the space bigger than to squeeze the data smaller. And people show every inclination to keep on accumulating larger and richer collections of data, which they want to access in more immediate and more intuitive ways.
Processor speed, system main memory, on-board mass storage and network connection bandwidth are my measures of PC performance. Processor speed, often oversimplified to clock rate—the megahertz myth—has usually gotten most of the hype in marketing new machines. People quote Moore's Law to assure one another that there will always be as much speed as their programmers can find a way to use.
Of the four performance parameters that I've mentioned, however, mass storage size has been, by far, the fastest-growing value, with a compound growth rate of almost 80 percent per year since the IBM PC was introduced in 1981.
I came up with this figure for growth rate of storage by comparing an 80GB hard disk, typical of today's new machines, against the single-sided 160KB floppy that was offered when the PC made its debut. If we instead compare one of today's midsize hard disks—say, a 120GB unit—against a 10MB IBM PC XT, vintage 1984, the growth rate is still on the order of 55 percent per year.
Either way, the growth rate of storage outstrips the compound annual growth rates of processor speed (30 percent per year, based on 2.8GHz now versus 4.77MHz then); connection speed (40 percent per year, based on my DSL connection's current measured 686K-bps download speed versus 300 bps then); and system memory capacity (less than 50 percent per year, taking 1GB as today's figure versus 128KB in 1981).
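These figures all come from the standard compound-annual-growth formula: rate = (end/start)^(1/years) - 1. A quick sketch reproducing them, under the assumption of roughly a 23-year span (1981 to the time of writing) and 20 years for the XT comparison; the exact percentages shift a point or two with the span chosen:

```python
# Compound annual growth rate from the column's own figures.
# The 23- and 20-year spans are assumptions (1981 or 1984 baseline,
# circa-2004 vantage point), not stated in the text.

def cagr(start, end, years):
    """Compound annual growth rate as a fraction (0.80 means 80%/year)."""
    return (end / start) ** (1 / years) - 1

KB, MB, GB = 1e3, 1e6, 1e9

print(f"storage (80GB vs 160KB floppy):  {cagr(160 * KB, 80 * GB, 23):.0%}")
print(f"storage (120GB vs 10MB XT):      {cagr(10 * MB, 120 * GB, 20):.0%}")
print(f"processor (2.8GHz vs 4.77MHz):   {cagr(4.77e6, 2.8e9, 23):.0%}")
print(f"connection (686Kbps vs 300bps):  {cagr(300, 686e3, 23):.0%}")
print(f"memory (1GB vs 128KB):           {cagr(128 * KB, 1 * GB, 23):.0%}")
```

Run it and the ordering the column describes falls out: storage grows fastest by a wide margin, memory and connection speed cluster in the middle, and processor clock rate trails.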
What we keep—our mass storage—grows most quickly. How much we use at any one time—our system memory—comes in second. Our rate of acquiring more stuff over the wire comes in a close third. How hard we work on it—processor speed, I mean—comes in last. There's probably a book on character and values to be squeezed from that observation, but not right now.
Instead, I want to talk about the problem—and it is a problem—that our typical desktop storage has grown by an overall factor of more than 12,000 while our processing speed has grown by less than a factor of 600. Even if you argue that a Pentium clock cycle does at least four times as much work as a clock cycle on an 8088 or 8086, the ratios are still out of balance by at least a factor of five—in the wrong direction, if you have a need for speed.
If you feel as if your computing experience was somewhat more responsive back when Ronald Reagan was in his first term, the numbers appear to agree—even before we factor in the greater processing burdens of rendering fonts, for example, instead of merely displaying terminal-style text and line graphics as we did in the days of DOS. Gosh, those DOS machines were fast.
Microsoft's announcement last month pushing the release of WinFS beyond the already-distant "Longhorn" time frame is therefore a discouraging development. Today's users need the next level of abstraction, moving beyond files to represent sources and uses of data. Economics, as well as habit, will drive them to want that capability on a personal device.
Developers should not wait until post-Longhorn (2008 or later) to meet that demand. They should move now toward XML and other content-based representations and should give users task-centered rather than location-centered ways of storing and retrieving work—if not with the help of WinFS, then without it.
Technology Editor Peter Coffee can be reached at email@example.com.