Examining Disk Storage Reliability Specs

Disk storage, whether on a corporate desktop or shared on a company network, is arguably the most important element of any computer system.

Computers are all about information: creating it, manipulating it, retrieving it and, above all, storing it. That makes storage, and hard disks in particular, the most critical element of any computer system. It holds true whether the drives sit on the corporate desktop or serve as shared storage on a company network, alone or in an array. If part of your job is deciding on minimum requirements for hard disks, that's a strong argument for paying close attention to drive specifications. Here's a look at some key reliability specs to consider.

Life Expectancy

Drives, like people, have a life expectancy. For drives, it's called the service life or design life (because that's how long the drive was designed to remain in service). The service life is typically three to five years, but can be as high as 10 years. Knowing the service life is important, because failure rates rise rapidly at the end of service life. Assuming the drive lasts that long, you'll want to replace it at that point, before it fails. Knowing the service life is also important for understanding the MTBF (mean time between failures) spec.

What Mean Time Between Failures Isn't

MTBF is probably the single most widely misunderstood drive spec, even among people who are knowledgeable about computer hardware. It doesn't tell you anything about how long a drive will last, which is what most people think it means. MTBFs for the current generation of hard disks are typically anywhere from 500,000 to 1.2 million hours for desktop drives, and as much as 1.6 million hours for enterprise drives. That works out to roughly 57 to 180 years. Drives obviously don't last that long.
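The arithmetic behind that misconception is easy to check. Dividing MTBF hours by the hours in a year gives the impossibly long "lifetimes" quoted above; the more useful reading of MTBF is as a population statistic that implies an annualized failure rate during the service life. The sketch below illustrates both, assuming a constant failure rate (an exponential model); the function names are mine, not an industry standard.

```python
import math

HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation

def mtbf_to_years(mtbf_hours: float) -> float:
    """The naive conversion that fuels the misconception:
    treating MTBF as the lifetime of a single drive."""
    return mtbf_hours / HOURS_PER_YEAR

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Chance a given drive fails within one year of continuous use,
    assuming a constant failure rate during the service life."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# A 500,000-hour MTBF "converts" to about 57 years per drive,
# yet implies that under 2% of a large population fails each year.
for mtbf in (500_000, 1_200_000, 1_600_000):
    print(f"MTBF {mtbf:>9,} h -> {mtbf_to_years(mtbf):6.1f} 'years'; "
          f"AFR {annualized_failure_rate(mtbf):.2%}")
```

In other words, a 500,000-hour MTBF doesn't promise any one drive 57 years of service; it says that during the design life, you should expect a failure rate on the order of 1 to 2 percent of your drives per year.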