Inefficiency Experts

By Kevin Fogarty | Posted 2006-08-21

Efficiency, however, is rarely associated with data centers. Studies from The Uptime Institute indicate that 90 percent of corporate data centers have far more cooling capacity than they need.

Data centers examined by Uptime Institute analysts had an average of 2.6 times the amount of cooling equipment they needed but still had hot spots covering 10 percent of their total floor space.

One data center had 10 times more cooling capacity than it needed, considering its size and volume of equipment, and one-quarter of its floor space was still overheated, The Uptime Institute reported.

Server chassis designs aren't particularly heat-efficient. Many designs are based on bakery-bread racks or industrial shelves, which can block the flow of air despite the "muffin fans" evacuating hot air from the top, the report said.

Even with the fans, temperatures of 100 degrees Fahrenheit weren't unusual, and such temperatures dramatically reduce the life span of the hardware and decrease its reliability. Each 18-degree increment above 70 degrees reduces the reliability of an average server by 50 percent, the report said.
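To make that rule of thumb concrete, here is a minimal sketch of relative reliability versus intake temperature. It assumes the halving can be applied continuously rather than only in whole 18-degree steps, which the report does not specify:

```python
# Sketch of the Uptime Institute rule of thumb quoted above: relative server
# reliability halves for every 18 degrees F above a 70 F baseline.
# The continuous-exponent form is an assumption for illustration only.

def relative_reliability(intake_temp_f: float, baseline_f: float = 70.0,
                         step_f: float = 18.0) -> float:
    """Return reliability relative to a server running at the baseline temperature."""
    if intake_temp_f <= baseline_f:
        return 1.0
    return 0.5 ** ((intake_temp_f - baseline_f) / step_f)

if __name__ == "__main__":
    for temp in (70, 88, 100, 106):
        print(f"{temp} F -> {relative_reliability(temp):.0%} of baseline reliability")
```

At the 100-degree readings cited above, this model puts reliability at roughly 31 percent of the 70-degree baseline.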

But the main cause of overheating is simply bad climate control. An average of 72 percent of the cooling capacity of major data centers bypassed the computing equipment entirely, the report said. More than half of that cold air escaped through unsealed cable holes and conduits; an additional 14 percent was misrouted because the perforated floor plates that were supposed to direct the airflow were pointing in the wrong direction—sometimes out of the data center entirely under a raised floor or over a suspended ceiling.
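A back-of-the-envelope calculation shows how oversized plants can still leave hot spots. Combining the average 2.6x oversizing with the 72 percent bypass figure is an illustrative assumption; the report does not state that the two averages combine this way:

```python
# Rough estimate of how much cooling actually reaches the IT load, using the
# averages cited in the Uptime Institute findings above. Treating oversizing
# and bypass as multiplicative is an assumption, not a figure from the report.

installed_vs_required = 2.6   # average cooling capacity installed vs. what the load needs
bypass_fraction = 0.72        # share of cold air that never reaches the equipment

delivered_vs_required = installed_vs_required * (1 - bypass_fraction)
print(f"Cooling delivered to the equipment: {delivered_vs_required:.2f}x requirement")
# ~0.73x: even 2.6x oversizing can leave racks short of cold air once most of it
# escapes through unsealed cable holes and misdirected floor tiles.
```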

Other plates routed cool air into vents that brought hot air back into the cooling system so that thermostats read the "hot" exhaust as much cooler than it actually was, causing the cooling system to misgauge the amount of cooling needed and slow down.

"It is remarkable that these facilities are running these mission-critical applications," Koomey said. "Its really more of an art than a science, and you end up with all these mistakes. And you have bad communication, typically, between the IT folks and the facilities folks. The IT folks will order a bunch of servers, and they will show up on the loading dock on a Friday, and the facilities folks didnt even know they were coming so they cant do anything."

Planning the location of server chassis within the data center is sometimes comically simplistic, if not inept, said APC's Rasmussen.

"Youll see sometimes, when a customer is going to deploy a blade server [chassis], they have people walk around the data center looking for a cold spot and thats where they put the server," Rasmussen said.

The easiest way to improve cooling is to fix the obvious problems first. The Uptime Institute report showed that an average of 10 percent of cooling units in the data centers studied had already failed but weren't wired to an alarm and hadn't been manually checked for failure.

Other problems included subfloor cold-air streams that blew so hard the cold air shot 30 or 40 feet past the hot spots where it was needed.

Other common configurations let cold air rise in every aisle between racks of servers, where it was drawn into the bottom of each rack while hot air exhausted from the top and circulated to the rack in the next aisle, leaving the bottom of each chassis consistently cool and the top hot.

Smart(er) Power

In the last five years, Slumberland has built up what Mitchell calls "a glorified network closet" into a full-fledged data center to support a national network of retail stores as well as a defined strategy of using IT to improve customer service. Improved retail reporting systems, delivery scheduling, inventory, accounting, warehouse management and distribution planning have made the company's operations more efficient and profitable.

But the technology itself is almost all centralized. Slumberland stores use diskless Wyse Technology workstations running Citrix Systems terminal emulation software. All data is stored on a back-end SAN (storage area network). The servers are small-footprint, high-density models, such as the dual-processor blade server with 12GB of RAM that runs the warehouse system.

"Its generally more efficient," Mitchell said. "With the SAN, theres not a lot of wasted disk space, and were not paying to power and cool any extra disk space."

The 2,300-employee, 105-store company's IT department is made up of six infrastructure specialists and just six other IT people. "It's very efficiently managed," Mitchell said. "At the moment, we have one very heavily centralized organization from an IS perspective."

The design of the data center is similarly focused on efficiency—in this case, the efficiency of the airflow—"so we don't need any more air conditioning than we have to," Mitchell said.

Slumberland uses UPSes (uninterruptible power supplies) and cooling equipment from APC that are designed modularly, making it easier to add capacity later, and that can run at reduced capacity to save energy when possible.

"The UPS systems ... in that room started out with a 30 kW unit, [which] we have expanded into a 40 kW unit," Mitchell said. The same unit can eventually scale to 80 kW, he said.

"The cooling equipment we use from APC isnt what they generally recommend, though, which is liquid, chilled water to the system," Mitchell said. "There are a lot of efficiencies to that, but those of us who have had problems with roof leaks and other things are not eager to get running water near our running systems.

"We looked at it and decided we would be just fine with an air-cooled system," he said.
