Data center managers are on the hot seat lately. Not only do they have to cram in more servers per square inch than they ever wanted or thought they'd need; they also have to figure out how to do it without sending the electricity bill through the roof.
And they're not entirely sure how to do it.
Traditionally, they've had to worry only about getting as much power in as possible, not about making sure they used it efficiently.
“When it comes to data centers, cost isn't irrelevant, but it's not about cost. It's about uptime,” said Rick Oliver, data center operations senior engineer at the University of Phoenix, a for-profit online university based in Phoenix.
So, as companies have built new facilities, it's been more important to overbuild than to underbudget. That has meant adding as much air conditioning and as many other environmental controls as they practically could, and talking the local utility company into running in as many redundant power lines as they ever expected to need.
“You go in thinking about the future, about the systems we're going to have, and about heating and cooling them, in three years or five years,” said Seth Mitchell, infrastructure team manager for Slumberland, a furniture retailer based in Little Canada, Minn.
“You have to extrapolate where you're going to be because building a [data center] room is a fairly permanent thing. It's not easy to make changes to a permanent design.”
Not that Mitchell has much of a choice. Escalating energy costs, which seem to rise with every new conflict in the Mideast or with every Alaskan oil pipeline problem, are causing customers and technology vendors to rethink the data center.
On Aug. 16, engineers at the Lawrence Berkeley National Laboratory and about 20 technology vendors concluded a demonstration of DC power in a data center.
Hewlett-Packard is looking to nature to redesign the data center of the future, and suppliers ranging from Advanced Micro Devices to Intel to Sun Microsystems are trying to cut power costs.
“The people who spec and build the data centers are not the ones who pay the electric bill,” said Neil Rasmussen, chief technology officer and co-founder of American Power Conversion, in West Kingston, R.I. “Many of them didn't know what it was or even who paid it.”
As a result, data center managers are having to double as HVAC (heating, ventilating and air conditioning) experts on top of their jobs as certified IT administrators.
In their efforts to “green” the data center, they are having to unlearn a lot of data center design lore that has been handed down over the years.
Any data center, but especially one crammed with servers stacked in compact chassis, is “a radical consumption of power, and the exhaust of power is heat; there is no way you can consume one without the other,” Oliver said.
But as the typical server unit has shrunk from a stand-alone pedestal the size of a filing cabinet to 2U (3.5-inch) stackables, 1U (1.75-inch) pizza boxes and even blades, both power and heat cause problems.
“The whole industry has gotten hotter and more power-hungry. Within the last five years, servers went from using around 30 watts per processor to now more like 135 watts per processor,” Oliver said. “You used to be able to put in up to six servers per rack; now it's up to 42.”
Every kilowatt burned by those servers requires another 1 to 1.5 kW to cool and support them, according to Jon Koomey, a staff scientist at Berkeley National Laboratory, in Berkeley, Calif., and a consulting professor at Stanford University. Koomey has studied the cost and efficiency of data center designs.
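Those figures imply a steep jump in per-rack load. As a rough back-of-the-envelope sketch—assuming one processor per 1U server, which the article does not specify—a fully populated 42-server rack at 135 watts per processor draws roughly 5.7 kW of IT load, and Koomey's 1-to-1.5 kW overhead per kW roughly doubles that at the meter:

```python
# Back-of-the-envelope rack load using the figures quoted above.
# Assumption (not from the article): one processor per 1U server.

WATTS_PER_PROCESSOR = 135        # today's figure, per Oliver
SERVERS_PER_RACK = 42            # a fully populated 1U rack
COOLING_OVERHEAD = (1.0, 1.5)    # kW of cooling/support per kW of IT load, per Koomey

it_load_kw = WATTS_PER_PROCESSOR * SERVERS_PER_RACK / 1000.0
total_low = it_load_kw * (1 + COOLING_OVERHEAD[0])
total_high = it_load_kw * (1 + COOLING_OVERHEAD[1])

print(f"IT load per full rack:        {it_load_kw:.1f} kW")
print(f"Total draw including cooling: {total_low:.1f}-{total_high:.1f} kW")
```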
Much of that power is wasted before it ever reaches a chip, thanks to a circuitous power flow. “You bring the power in AC from the wall, convert it to DC through the battery backup, back to AC to the server, then to DC for the chip,” Koomey said. “There's an awful lot of power loss in that.”
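How much is lost depends on the efficiency of each conversion stage. The sketch below shows how even modest per-stage losses compound; the individual efficiency figures are assumptions chosen purely for illustration, not measurements cited in the article:

```python
# Illustrative compounding of losses along the AC -> DC -> AC -> DC chain
# Koomey describes. Per-stage efficiencies are assumed, not measured.

stages = {
    "wall AC to UPS DC (rectifier/battery)": 0.90,
    "UPS DC back to AC for the server":      0.90,
    "server AC to DC (power supply)":        0.75,
    "DC down to chip voltages (VRMs)":       0.85,
}

end_to_end = 1.0
for name, efficiency in stages.items():
    end_to_end *= efficiency

print(f"End-to-end efficiency:      {end_to_end:.0%}")   # roughly half
print(f"Power lost before the chip: {1 - end_to_end:.0%}")
```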
But even under ideal circumstances, most data centers are forced to buy more chassis than they really need and leave them partially empty to allow the heat to dissipate, Koomey said.
“If you put the most densely packed devices on the market now, whether they were blades or whatever, and you packed them [in full chassis] into data centers fully, you couldn't cool it,” Koomey said.
Inefficiency Experts
Efficiency, however, is not a word associated with data centers. Studies from The Uptime Institute indicate that 90 percent of corporate data centers have far more cooling capacity than they need.
Data centers examined by Uptime Institute analysts had an average of 2.6 times the amount of cooling equipment they needed but still had hot spots covering 10 percent of their total floor space.
One data center had 10 times more cooling capacity than it needed, considering its size and volume of equipment, and one-quarter of its floor space was still overheated, The Uptime Institute reported.
Server chassis designs aren't particularly heat-efficient. Many designs are based on bakery-bread racks or industrial shelves, which can block the flow of air despite the “muffin fans” evacuating hot air from the top, the report said.
Even with the fans, temperatures of 100 degrees Fahrenheit weren't unusual—heat that dramatically reduces the life span of the hardware and decreases its reliability.
Each 18-degree increment above 70 degrees reduces the reliability of an average server by 50 percent, the report said.
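Taken at face value, that figure implies a simple halving rule. The sketch below interpolates it continuously between 18-degree steps, which is an assumption beyond what the report states:

```python
# Relative server reliability implied by the Uptime Institute figure:
# each 18 degrees F above 70 F cuts reliability in half.
# Continuous interpolation between the 18-degree steps is an assumption.

def reliability_factor(temp_f: float) -> float:
    """Reliability relative to a server kept at 70 degrees F."""
    if temp_f <= 70:
        return 1.0
    return 0.5 ** ((temp_f - 70) / 18)

for temp in (70, 88, 100, 106):
    print(f"{temp} F -> {reliability_factor(temp):.2f}x baseline reliability")
```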
But the main cause of overheating is simply bad climate control. An average of 72 percent of the cooling capacity of major data centers bypassed the computing equipment entirely, the report said.
More than half of that cold air escaped through unsealed cable holes and conduits; an additional 14 percent was misrouted because the perforated floor plates that were supposed to direct the airflow were pointing in the wrong direction—sometimes out of the data center entirely under a raised floor or over a suspended ceiling.
Other plates routed cool air into vents that brought hot air back into the cooling system so that thermostats read the “hot” exhaust as much cooler than it actually was, causing the cooling system to misgauge the amount of cooling needed and slow down.
“It is remarkable that these facilities are running these mission-critical applications,” Koomey said.
“It's really more of an art than a science, and you end up with all these mistakes. And you have bad communication, typically, between the IT folks and the facilities folks. The IT folks will order a bunch of servers, and they will show up on the loading dock on a Friday, and the facilities folks didn't even know they were coming so they can't do anything.”
Planning the location of server chassis within the data center is sometimes comically simplistic, if not inept, said APC's Rasmussen.
“You'll see sometimes, when a customer is going to deploy a blade server [chassis], they have people walk around the data center looking for a cold spot, and that's where they put the server,” Rasmussen said.
The easiest way to improve cooling is to fix the obvious problems. The Uptime Institute report showed that an average of 10 percent of the cooling units in the data centers studied had already failed, but they weren't wired to an alarm and hadn't been manually checked for failure.
Other problems included subfloor cold-air streams that blew so hard the cold air surfaced 30 or 40 feet past the hot spots where it was needed.
In other common configurations, cold air rose in every aisle between racks and was sucked into the bottom of each rack, while hot air escaped from the top and circulated to the racks in the next aisle—leaving the bottom of each chassis consistently cool and the top hot.
Smart(er) Power
In the last five years, Slumberland has built up what Mitchell calls “a glorified network closet” into a full-fledged data center to support a national network of retail stores as well as a defined strategy of using IT to improve customer service. Improved retail reporting systems, delivery scheduling, inventory, accounting, warehouse management and distribution planning have made the company's operations more efficient and profitable.
But the technology itself is almost all centralized. Slumberland stores use diskless Wyse Technology workstations running Citrix Systems terminal emulation software. All data is stored on a back-end SAN (storage area network). The servers are small-footprint, high-density models, such as the dual-processor blade server with 12GB of RAM that runs the warehouse system.
“It's generally more efficient,” Mitchell said. “With the SAN, there's not a lot of wasted disk space, and we're not paying to power and cool any extra disk space.”
The 2,300-employee, 105-store company's IT department is made up of six infrastructure specialists and just six other IT people. “It's very efficiently managed,” Mitchell said. “At the moment, we have one very heavily centralized organization from an IS perspective.”
The design of the data center is similarly focused on efficiency—in this case, the efficiency of the airflow—“so we don't need any more air conditioning than we have to,” Mitchell said.
Slumberland uses UPSes (uninterruptible power supplies) and cooling equipment from APC that are designed modularly, making it easier to add power later, and that can be run at lower capacity to save energy when possible.
“The UPS systems … in that room started out with a 30 kW unit, [which] we have expanded into a 40 kW unit,” Mitchell said. The same unit can eventually scale to 80 kW, he said.
“The cooling equipment we use from APC isn't what they generally recommend, though, which is liquid, chilled water to the system,” Mitchell said. “There are a lot of efficiencies to that, but those of us who have had problems with roof leaks and other things are not eager to get running water near our running systems.
“We looked at it and decided we would be just fine with an air-cooled system,” he said.
A Growing Problem
The amount of electricity used by a typical data center rose 39 percent between 1999 and 2005, according to The Uptime Institute's study, which examined facilities with a combined total of more than 1 million square feet of data center space. It's not unusual for a large data center to draw a megawatt of electricity or more—enough, over the course of a month, to power 1,000 houses for that month.
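For a sense of what that means on the utility bill, the sketch below prices a sustained 1 MW draw over a year; the 10-cents-per-kWh rate is an assumed round number, not a figure from the study:

```python
# Annual energy and cost of a sustained 1 MW draw.
# The electricity rate is an assumption for illustration only.

DRAW_KW = 1_000           # 1 MW, sustained around the clock
RATE_PER_KWH = 0.10       # assumed rate, USD per kWh
HOURS_PER_YEAR = 24 * 365

annual_kwh = DRAW_KW * HOURS_PER_YEAR
annual_cost = annual_kwh * RATE_PER_KWH

print(f"Annual energy: {annual_kwh:,.0f} kWh")   # 8,760,000 kWh
print(f"Annual cost:   ${annual_cost:,.0f}")     # about $876,000
```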
Estimates of annual power costs for all U.S. data centers—which Koomey says are often unreliable because they tend to focus only on the power used by IT gear, not the air conditioning and other systems that support it—range as high as $3.3 billion.
A survey published by AFCOM, a society of data center managers, found that 90 percent of its members are concerned that electricity costs or restrictions on electricity supply could slow or stop the construction of new data centers and impede the operation of existing ones.
The rise in electricity demand is partly due to the increasing power of the processors themselves—but only partly—according to Vernon Turner, an analyst at market researcher IDC, of Framingham, Mass.
The greatest part of the rise in data center power consumption stems from a years-long trend toward centralization of corporate computing, which was itself driven by a need to reduce the cost of supporting servers scattered across dozens or hundreds of locations, Turner said.
The cost of supporting six small e-mail servers spread across three states, for example, is a lot higher than that of supporting one or two big servers tucked into a rack the e-mail administrator can touch by scooting over on a chair, rather than traveling to a different building or state to fix a local problem.
“Despite IT budgets being flat, we're still seeing strong double-digit deployment of new servers and storage devices into the data center,” Turner said. “Buying a bigger server is OK, but trying to buy a server that's stacked in the same chassis has pushed us into unnatural acts in the data center. You're trying to force things together that don't necessarily play well because they have different requirements for power and cooling.”
“Enterprise data centers, the Fortune 100, have been aware of this for a long time, but the medium-sized guys have never had to think about it,” Turner said.
“Now the heat is really affecting the performance of neighboring devices.”
European companies also have been more aware of data centers as power sinks, partly because of higher costs but also because of a more fervent and effective environmental movement, APC's Rasmussen said.
Many U.S. companies are aware of environmental issues, as are technology companies, which agreed to design more power-efficient PCs, laptops and other devices under the Environmental Protection Agency's Energy Star program.
High-density, high-power units such as those turning data centers into saunas, however, haven't been covered until now under the Energy Star program.
But the EPA has been working since January on a version of the program designed for servers.
The most significant part of the server-energy rating will be a consistent, objective measure of how much energy a piece of equipment actually uses, Koomey said.
Right now, manufacturers measure power consumption in so many subtly different ways that it's not easy for customers to compare one with another, Koomey said.
The EPA isn't simply extending the Energy Star labeling effort to servers, Koomey said. It's using its contacts and history with the Energy Star program to bring together vendors and technology experts to establish a new energy-usage measurement that's consistent across many types of servers.
“If you can't measure [a server's energy use], you can't manage it,” Koomey said. “It's kind of appalling that people who are buying thousands of servers can't measure it.”
Power Problems
Common causes of data center energy leaks:
* Ducts and coils that are dirty or blocked
* Thermostats and humidity meters installed where they can't monitor effectively
* Sensors that don't work or that deliver erroneous data
* Supply and return pipes that are reversed
* Valves that are unintentionally left partially closed
* Solenoid-operated valves that fail due to high system pressure
* Pumping systems that can't deliver the volume of cooling necessary