Green computing is a hot topic on data center managers' minds. Using green drives in a green data center can help improve operating efficiency and reduce overall energy costs. But how can you know that you truly have green drives? Knowledge Center contributor George Crump explains the key components of green drives and how to use them to create the lowest-cost, most energy-efficient green data center possible.
Greening the data center is a marathon that will take years to complete, if it is ever truly finished. A recent EPA report on data centers found that they consume 1.5 percent of the nation's energy. This is also a global problem: worldwide consumption is projected to increase by almost 57 percent between now and 2030. Demand will continue to outpace supply, driving costs through the roof. Power efficiency is no longer just an environmental issue; it is a capital issue.
One area of focus for improving power efficiency is the use of "green" drives: drives that can spin down and go into an idle or standby mode, cutting power draw during periods of no access. Slowing the drive down decreases both the power and the cooling requirements. The downside is that, while these greener drives are more power-efficient, they are also slower to respond to access requests (either read or write).
The plan is to count on the 80/20 rule. In most enterprises, more than 80 percent of the data is inactive and does not need to be stored on high-speed disk. We estimate that at least five percent of an organization's data would be better served and more power-efficient on Solid State Disk. That leaves about 15 percent of data that should be on power-hungry, normal operating drives. If you could move 85 percent of your data to a greener environment, it would make a dramatic impact on power consumption costs.
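The arithmetic behind that claim can be sketched in a few lines. This is a back-of-envelope model only: the 48-watt burdened figure comes from the IDC numbers cited later in this article, but the per-drive wattages for spun-down drives and Solid State Disk, and the simplifying assumption that the data split maps directly onto a drive split, are illustrative assumptions, not vendor specifications.

```python
# Hypothetical sketch of the 80/15/5 tiering model described above.
# ACTIVE_DRIVE_W follows the IDC burdened figure cited in this article;
# IDLE_DRIVE_W and SSD_W are illustrative assumptions only.

ACTIVE_DRIVE_W = 48.0  # burdened watts per always-on drive (power + cooling)
IDLE_DRIVE_W = 10.0    # assumed burdened watts per spun-down "green" drive
SSD_W = 5.0            # assumed burdened watts per Solid State Disk

def fleet_watts(total_drives, pct_archive=0.80, pct_ssd=0.05):
    """Estimate fleet power if data (and thus drives) follow the 80/15/5 split."""
    pct_primary = 1.0 - pct_archive - pct_ssd  # ~15% stays on active disk
    return (total_drives * pct_primary * ACTIVE_DRIVE_W
            + total_drives * pct_archive * IDLE_DRIVE_W
            + total_drives * pct_ssd * SSD_W)

baseline = 1000 * ACTIVE_DRIVE_W  # every drive always on, always spinning
tiered = fleet_watts(1000)
print(baseline, tiered, 1 - tiered / baseline)
```

Even with these rough numbers, tiering 85 percent of the drives away from always-on disk cuts the modeled fleet power by roughly two-thirds, which is the "dramatic impact" the 80/20 argument is pointing at.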
A Holistic Approach Is Needed
As a result, storage manufacturers are adding these green drive technologies to their systems. The problem is that the entire system needs to be redesigned. For this to work, a supplier must take a holistic approach, fine-tuning each component in the storage system to make sure it achieves peak operating efficiency. Otherwise, the other components of a traditional system will still draw too much power.
According to a 2008 IDC report, the total cost to power and cool a drive is 48 watts: 12 watts consumed by the hard drive itself, 12 watts consumed by its share of the storage shelf (HBA, fans, power supply), and an additional 24 watts to cool the drives and the shelf. That is 48 watts for one drive in a shelf (not your typical configuration), and it does not include the cost to power and cool the primary storage controller or head.
There is little that a traditional storage manufacturer can do to control the power consumption of the system because they have limited control over how data is written to the drives in the array shelves. They have to build their systems assuming that all drives will be on at full speed and full power.
Why a Green Drive Is Green
For green drives to achieve their power efficiencies, they need the ability to be idle long enough to actually spin down. Spinning a drive back up requires extra power (almost double the 12 watts cited earlier), so the drive must stay spun down long enough for the standby savings to offset that spin-up cost. A drive that spins down and then must immediately spin back up to service an access could actually use more power, not less. It also means the array must be designed so that not all the drives in the shelf need to spin up to fulfill a data request.
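The break-even point described above can be estimated with simple energy arithmetic. The 12-watt spinning figure and the "almost double" spin-up draw come from this article; the standby wattage and the spin-up duration are assumptions for illustration, and real drives also pay a latency penalty that this sketch ignores.

```python
# Back-of-envelope break-even for spinning a drive down.
# SPINNING_W and SPINUP_W follow the figures in this article;
# STANDBY_W and SPINUP_SECONDS are illustrative assumptions.

SPINNING_W = 12.0      # drive spinning but idle (IDC figure cited above)
STANDBY_W = 1.0        # assumed draw while spun down
SPINUP_W = 24.0        # roughly double normal draw during spin-up
SPINUP_SECONDS = 10.0  # assumed duration of the elevated spin-up draw

def breakeven_idle_seconds():
    """Seconds a drive must stay spun down before spin-down saves net energy.

    The extra energy burned during spin-up must be repaid by the
    power saved while in standby.
    """
    extra_spinup_joules = (SPINUP_W - SPINNING_W) * SPINUP_SECONDS
    savings_per_second = SPINNING_W - STANDBY_W
    return extra_spinup_joules / savings_per_second

print(breakeven_idle_seconds())
```

Under these toy numbers the break-even is short, but the point generalizes: a drive that is repeatedly woken shortly after spinning down never repays its spin-up cost, which is why the array must keep the right data on the right drives.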
For example, if a RAID array is striped across a 12-bay shelf and you access a Word document on that system, without intelligence every drive must spin up to full speed to fulfill the request. Manually creating array partitions with this intelligence would be very difficult to implement and even harder to maintain; the result would be small LUNs (logical unit numbers) everywhere.
These issues further justify the need for a holistic solution, one that includes control over how blocks are written to disk and to what disks those blocks are written.
Step one, before power-managing anything, is for the storage system to understand and control how data is written to, and read from, the system. This can be done at the block level or via the file system. It is absolutely critical that this data be understood by type and access pattern: if the wrong type of data lands on power-managed drives, power consumption can actually increase because the drives are spun back up too soon after going idle. The right candidate is data at rest, data that has achieved a state of near permanence but may still be needed in the future (so easy search and retrieval is vital). Merely adding a shelf of power-managed drives to an existing array will do little, if anything, to increase power efficiency.
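A placement policy of the kind described above can be reduced, at its simplest, to an age test on last access. This is a hypothetical sketch, not any vendor's implementation: the function name and the 90-day "at rest" threshold are assumptions chosen for illustration.

```python
# Minimal, hypothetical placement policy: route data to power-managed
# drives only once it has been untouched long enough to count as "at rest".
from datetime import datetime, timedelta
from typing import Optional

AT_REST_AGE = timedelta(days=90)  # assumed threshold, for illustration

def tier_for(last_access: datetime, now: Optional[datetime] = None) -> str:
    """Return the tier a block or file should live on, by access age."""
    now = now or datetime.now()
    return "power_managed" if now - last_access >= AT_REST_AGE else "active"

print(tier_for(datetime(2008, 1, 1), now=datetime(2008, 6, 1)))  # power_managed
```

A real system would weigh more than age (data type, retrieval expectations, compliance requirements), but the principle is the same: only data unlikely to be touched soon should ever land on a drive that intends to spin down.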
Some Vendors Are Positioned to Go Green
In primary storage, the vendors best positioned today are companies like 3PAR, Compellent and OnStor. Although none has yet announced the use of green drives, they are best placed to adopt them. For backup and archive, Copan Systems is delivering this capability now. These vendors can control where data is written and how it is accessed. They could move data to more idle drives as it ages, move it back to more active drives if it suddenly becomes active again, and spin up only the drives that hold the requested data, not entire shelves.
Once you can control where data is written, it makes sense to look at drives that can be powered off entirely, as opposed to just spun down. There is sometimes a fear of powering off a drive; the first question is usually, "Will it turn back on?" In a recent test, for example, Copan Systems power-cycled a drive 60,000 times with no impact on its MTBF (Mean Time Between Failures). The next question is, "Why risk powering off at all? Just use spin-down drives."
In a primary storage system, that is a logical assumption, especially if you follow the earlier model of moving 80 percent of your data to archive and five percent to Solid State Disk; the drive count in primary storage will simply be lower going forward. In a near-line archive, the results can be substantially different. An archive, especially one that could last for decades, may hold hundreds, if not thousands, of drives. A savings of even two watts per drive (by powering it off) amounts to 2,000 watts across a thousand drives, and that does not include the savings on the controller and other components. Eventually, you could power off entire shelves to save electricity.
More Reasons to Power Off Drives
The other benefit of powering drives off is increased MTBF rates. Especially in archive, the desire is to have these drives in service for years. The ability to power them off (with a test of the drives every 30 days) will increase the life expectancy of the drives while, at the same time, providing the assurance that the data on the drives is safe and secure.
Powering drives off is also a future-proof feature: nothing is more power-efficient than a powered-off drive, and no future technology will beat that. We have also been powering drives off for years in laptops and desktops, and now in archive systems; it is a known, safe technique. Green drives, by contrast, are still in their infancy and still need to be vetted through use in the data center. They will certainly get more power-efficient but, again, "off" is as far as you can go.
Turning the drives off also allows for denser packaging of the drives themselves, saving floor space in addition to power. Denser packaging requires unique mounting of the drives to reduce vibration; this technique, called counter mounting, significantly prolongs the life of the drive.
The Bottom Line
If you can control where data is written and read, and you can power drives off (as opposed to just spinning them down), you can design array shelves that are more power-efficient: lower-power shelves that are not built to run every drive at once, delivering power only to the drives needed to fulfill a given data request. The result is less heat generated and less power required.
The key elements in reducing power consumption of disk drives will require more than just new drive technology. It will require data layout intelligence, shelf and drive housing intelligence, and improved software to manage data integrity over time.
George Crump is the founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. An industry veteran of over 25 years, he has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest integrators. He can be reached at email@example.com.