From HVAC to rack density to hot/cool aisles, eWEEK Labs recommends the computing models and energy-saving practices to focus on for the biggest rewards.
A lot of attention these days is being devoted to going green: Save the
planet, buy a hybrid, recycle, put lights on timers, don't waste paper and so
on. All of these things will help the environment, but let's come right out and
say it: Going green makes sense when a business saves capital and resources by
doing so. A warm feeling at night is not a compelling business reason for
going green, but saving millions of dollars on power and HVAC sure is.
Indeed, many businesses have saved significantly by implementing
environmentally friendly practices and trimming power consumption.
In 2009, organizations including IBM,
Sun, the National Security Agency, Microsoft and Google announced that they
were building green data centers.
The most recent announcement comes from IBM
regarding what it claims is the world's greenest data center, a project jointly
funded by IBM, New York state and Syracuse University. Announced in May
2009 and constructed in just over six months, the $12.4 million,
12,000-square-foot facility (6,000 square feet of infrastructure space and
6,000 square feet of raised-floor data center space) uses an on-site power
generation system for electricity, heating and cooling, and incorporates IBM's
latest energy-efficient servers, computer-cooling technology and systems management software.
The press release is filled with all sorts of flowery language about saving
the planet and setting an example for others to follow, but about three-fourths
of the way through we get to the bottom line: "This is a smart investment ... that
will provide much needed resources for companies and organizations who are
looking to reduce both IT costs and their carbon footprint."
How can you separate the wheat from the chaff when it comes to designing a
green data center? Where does the greenwashing end and the true business case begin?
The first thing to do is to understand several key principles of data center
design. This ensures that you maintain a focus on building a facility that
serves your organization's needs today and tomorrow.
Build for today and for the future. Of course, you don't know exactly
which hardware and software you'll be running in your data center five years
from now. For this reason, you need a flexible, modular and scalable
design. Simply building a big room full of racks waiting to be populated
doesn't cut it anymore.
Types of equipment, such as storage or application servers, should be grouped
together for easier management. In addition, instead of cooling one huge area
that is only 25 percent full, divide the facility into isolated zones that get
populated and cooled one at a time.
Most data centers incorporate a hot aisle/cold aisle configuration, where
equipment racks are arranged in alternating rows of hot and cold aisles. This
practice allows cool air from the cold aisle to wash over the equipment; the warmed air is
then expelled into the hot aisle. At this point, an exhaust vent pulls the hot
air out of the data center.
It's important to measure energy consumption and HVAC performance. Not only will this
help you understand how efficient your data center is (and give you ideas for
improving efficiency), but it will also help control costs in an environment of
ever-increasing electricity prices and put you in a better position to meet the
increased reporting requirements of a carbon reduction policy.
There are currently two widely used metrics for measuring data center energy efficiency.
CADE (Corporate Average Datacenter Efficiency), developed by the Uptime
Institute (now 451 Group), multiplies IT efficiency (asset utilization times
energy efficiency of those assets) by physical efficiency (space used times
energy efficiency of the building). By this measure, larger numbers are better.
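As a rough sketch of that math, the Python snippet below computes CADE from hypothetical inputs; every value here is an illustrative assumption, not a published figure.

    # Hypothetical CADE calculation; all input values are illustrative assumptions.
    it_asset_utilization = 0.40        # share of installed IT capacity actually in use
    it_energy_efficiency = 0.65        # energy efficiency of the IT assets themselves
    facility_space_utilization = 0.50  # share of built-out facility space in use
    facility_energy_efficiency = 0.55  # energy efficiency of the building itself

    it_efficiency = it_asset_utilization * it_energy_efficiency
    physical_efficiency = facility_space_utilization * facility_energy_efficiency

    cade = it_efficiency * physical_efficiency
    print(f"CADE: {cade:.1%}")  # larger is better; roughly 7% with these inputs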
The measure I prefer to use is PUE (Power Usage Effectiveness), developed by
The Green Grid. PUE is calculated by dividing the total utility load by the
total IT equipment load. In this case, a lower number is better. Older data
centers typically have a PUE of about 3 or 4, while newer data centers can achieve
a PUE of 1.5 or less.
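To make the arithmetic concrete, here is a minimal Python sketch of the PUE calculation; the two meter readings are hypothetical.

    # Hypothetical PUE calculation; both readings are illustrative assumptions.
    total_facility_load_kw = 1800.0  # all power drawn from the utility: IT, cooling, lighting, losses
    it_equipment_load_kw = 1200.0    # power delivered to servers, storage and network gear

    pue = total_facility_load_kw / it_equipment_load_kw
    print(f"PUE: {pue:.2f}")  # 1.50 here; lower means less overhead per watt of IT load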
Rack density is a very important aspect of modern data center design.
Server consolidation and virtualization are leading us toward denser, and
fewer, racks. Blades and 1U to 3U servers are the norm. The denser the data
center, the more efficient it can be, especially in terms of construction cost
per square foot: With the average data center costing $200 to $400 per square
foot to build, cutting the size of your data center by 75 percent could save
significant construction costs, perhaps ranging into the millions of dollars.
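As a back-of-the-envelope illustration of that savings claim, the sketch below uses the midpoint of the $200-to-$400 range; the facility size is a hypothetical assumption.

    # Hypothetical construction-cost comparison; the square footage is an assumption.
    cost_per_sq_ft = 300                             # midpoint of the $200-$400 range cited above
    original_size_sq_ft = 20_000
    reduced_size_sq_ft = original_size_sq_ft * 0.25  # a 75 percent reduction in footprint

    savings = (original_size_sq_ft - reduced_size_sq_ft) * cost_per_sq_ft
    print(f"Estimated construction savings: ${savings:,.0f}")  # $4,500,000 in this example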
However, denser racks mean increased power requirements and the generation
of more heat.
In the past, a rack might consume 5 kW, whereas today's denser designs
consume 20 kW or more. Conventional HVAC solutions could be used to cool a 5-kW
rack, but a 20-kW (or even 30- or 40-kW) rack requires a high-density cooling
solution as well.
Look to implement rack-level cooling technologies using either water or
forced air. The IBM/Syracuse project
converts exhaust heat to chilled water that is then run through cooling doors
on each rack. A high-density cooling solution such as this removes heat much
more efficiently than a conventional system. A study conducted by Emerson in
2009 calculated that roughly 35 percent of the cost of cooling the data center
is eliminated by using such a solution.