Keeping Pace with Desktop Storage Demands

 
 
By Peter Coffee  |  Posted 2004-09-06
 
 
 
 
 
 
 

Nothing has kept up with the runaway growth in personal archives.

Personal storage capacity is growing faster than either our processor speeds or our network connections. This means that many ideas about how to manage future desktop and mobile storage are swimming against the tide, while recent developments are ignoring users' most critical priorities.

Offloading personal data to managed servers is at best a mixed blessing if it's actually cheaper to store, and takes less time to retrieve, on our own hard disks.

Compressing data is likewise fraught: It's often cheaper and faster today to make the space bigger than to squeeze the data smaller. And people show every inclination to keep on accumulating larger and richer collections of data, which they want to access in more immediate and more intuitive ways.

Processor speed, system main memory, on-board mass storage and network connection bandwidth are my measures of PC performance. Processor speed, often oversimplified to clock rate—the megahertz myth—has usually gotten most of the hype in marketing new machines. People quote Moore's Law to assure one another that there will always be as much speed as their programmers can find a way to use.

Of the four performance parameters that I've mentioned, however, mass storage size has been, by far, the fastest-growing value, with a compound growth rate of almost 80 percent per year since the IBM PC was introduced in 1981.

I came up with this figure for growth rate of storage by comparing an 80GB hard disk, typical of today's new machines, against the single-sided 160KB floppy that was offered when the PC made its debut. If we instead compare one of today's midsize hard disks—say, a 120GB unit—against a 10MB IBM PC XT, vintage 1984, the growth rate is still on the order of 55 percent per year.

Either way, the growth rate of storage outstrips the compound annual growth rates of processor speed (30 percent per year, based on 2.8GHz now versus 4.77MHz then), connection speed (40 percent per year, based on my DSL connection's current measured 686K-bps download speed versus 300 bps then) and system memory capacity (less than 50 percent per year, taking 1GB as today's figure versus 128KB in 1981).
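These rates all fall out of the standard compound-annual-growth formula. A minimal sketch, using the capacities and dates cited above (the figures are the column's; the helper function is mine), reproduces them:

```python
def cagr(new, old, years):
    """Compound annual growth rate: the constant yearly multiplier
    that turns `old` into `new` over `years` years, minus 1."""
    return (new / old) ** (1 / years) - 1

# 1981 -> 2004 is 23 years; each pair is a figure quoted in the column.
storage = cagr(80e9, 160e3, 23)    # 80GB disk vs. 160KB floppy -> ~77%
cpu     = cagr(2.8e9, 4.77e6, 23)  # 2.8GHz vs. 4.77MHz         -> ~32%
network = cagr(686e3, 300, 23)     # 686K-bps DSL vs. 300 bps   -> ~40%
memory  = cagr(1e9, 128e3, 23)     # 1GB RAM vs. 128KB          -> ~48%

print(f"storage {storage:.0%}, cpu {cpu:.0%}, "
      f"net {network:.0%}, mem {memory:.0%}")
```

Run against the quoted endpoints, the formula lands within rounding distance of every rate in the paragraph above.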

eWEEK Labs' Henry Baltazar wonders if it's time for personal hierarchical storage management. What we keep—our mass storage—grows most quickly. How much we use at any one time—our system memory—comes in second. Our rate of acquiring more stuff over the wire comes in a close third. How hard we work on it—processor speed, I mean—comes in last. There's probably a book on character and values to be squeezed from that observation, but not right now.

Instead, I want to talk about the problem—and it is a problem—that our typical desktop storage has grown by an overall factor of more than 12,000 while our processing speed has grown by less than a factor of 600. Even if you argue that a Pentium clock cycle does at least four times as much work as a clock cycle on an 8088 or 8086, the ratios are still out of balance by at least a factor of five—in the wrong direction, if you have a need for speed.
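The imbalance is simple arithmetic to check. A quick sketch using the column's own figures, with the four-times-per-cycle credit granted above as the generous assumption:

```python
# The column's imbalance argument in arithmetic form. Figures are the
# ones cited: 120GB vs. 10MB for storage, 2.8GHz vs. 4.77MHz for clock rate.
storage_factor = 120e9 / 10e6      # ~12,000x growth in mass storage
clock_factor   = 2.8e9 / 4.77e6    # ~587x, "less than a factor of 600"

# Grant a modern core four times the per-cycle work of an 8088:
effective_cpu = clock_factor * 4   # ~2,348x effective processing growth

imbalance = storage_factor / effective_cpu
print(f"storage still outgrew effective CPU by ~{imbalance:.1f}x")
```

Even after the per-cycle concession, the ratio comes out just over five—the "factor of five" in the wrong direction cited above.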

If you feel as if your computing experience was somewhat more responsive back when Ronald Reagan was in his first term, the numbers appear to agree—even before we factor in the greater processing burdens of rendering fonts, for example, instead of merely displaying terminal-style text and line graphics as we did in the days of DOS. Gosh, those DOS machines were fast.

Microsoft's announcement last month pushing the release of WinFS beyond the already-distant "Longhorn" time frame is therefore a discouraging development. Today's users need the next level of abstraction, moving beyond files to represent sources and uses of data. Economics, as well as habit, will drive them to want that capability on a personal device.

Developers should not wait until post-Longhorn (2008 or later) to meet that demand. They should move now toward XML and other content-based representations and should give users task-centered rather than location-centered ways of storing and retrieving work—if not with the help of WinFS, then without it.
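As one illustration of what task-centered rather than location-centered retrieval might look like without waiting for WinFS, here is a minimal sketch: an XML index that records what each document is for rather than where it lives. The tag vocabulary and index format are invented for this example, not anything Microsoft or the WinFS effort has specified:

```python
import xml.etree.ElementTree as ET

# A hypothetical task-centered index: each entry records the task a
# document serves, alongside its location. The schema is illustrative only.
index_xml = """
<worklog>
  <item path="q3-budget.xls" task="budget-review" source="finance"/>
  <item path="launch-memo.doc" task="product-launch" source="marketing"/>
  <item path="budget-notes.txt" task="budget-review" source="meeting"/>
</worklog>
"""

def items_for_task(xml_text, task):
    """Return the stored paths associated with a given task."""
    root = ET.fromstring(xml_text)
    return [item.get("path") for item in root.findall("item")
            if item.get("task") == task]

print(items_for_task(index_xml, "budget-review"))
# -> ['q3-budget.xls', 'budget-notes.txt']
```

The point of the sketch is the query: the user asks for everything tied to "budget-review" and never types a folder path—the content-based representation, not the file system hierarchy, does the organizing.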

Technology Editor Peter Coffee can be reached at peter_coffee@ziffdavis.com.

To read more Peter Coffee, subscribe to eWEEK magazine. Check out eWEEK.com's Storage Center at http://storage.eweek.com for the latest news, reviews and analysis on enterprise and business storage hardware and software.


 
 
 
 
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA and Chapman College.
 
 
 
 
 
 
 
