RICHARDSON, Texas—Dealing with the twin issues of power and cooling in the data center will take a combined effort from the technology industry and the vendors that build and power those facilities.
The Digital Power Forum here Sept. 18-20 saw the continued merging of those disparate groups, with speakers ranging from IT giants like Advanced Micro Devices and Sun Microsystems to data center designers such as EYP Mission Critical Facilities to power supply and cooling companies, including American Power Conversion and Emerson Network Power's Liebert unit.
However, the message from all three groups was the same: The rapid increase in density in data centers—fueled in large part by such technologies as blade servers—is a trend that will only accelerate, putting more pressure on all of these parties to find solutions to issues that have become key concerns for corporations.
“What we see here is the potential for a perfect storm,” said Larry Vetal, senior strategist of worldwide commercial marketing for AMD's Microprocessor Solutions Segment.
Rack density is increasing, Vetal said, but because of the resulting heat load, rack space is going unused—as much as 18 percent in the average data center, according to an AMD survey. In addition, corporations continue to view building more data centers as an answer to heat issues, which can be an expensive proposition. In the 1990s, facilities were planned for a power draw of 40 watts per square foot, he said. By 2010, that number could be as high as 500 watts per square foot.
That could drive the cost to build an average 50,000-square-foot data center from $20 million to $250 million, he said. Data center managers, IT and facilities departments need to find a way to take advantage of the benefits of more dense technologies—of being able to do more work in the same amount of space—without being hobbled by power and cooling concerns, Vetal and others said.
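The arithmetic behind those figures is simple to check. Here is a back-of-the-envelope sketch using the numbers Vetal cited (the calculation itself is illustrative, not something presented at the forum):

```python
# Rough facility-load math using the figures cited above.
area_sqft = 50_000          # average data center size cited

watts_per_sqft_1990s = 40   # 1990s planning assumption
watts_per_sqft_2010 = 500   # projected 2010 figure

load_1990s_mw = area_sqft * watts_per_sqft_1990s / 1e6  # 2.0 MW
load_2010_mw = area_sqft * watts_per_sqft_2010 / 1e6    # 25.0 MW

print(f"1990s facility load: {load_1990s_mw} MW")  # 2.0 MW
print(f"2010 facility load: {load_2010_mw} MW")    # 25.0 MW

# A 12.5x jump in electrical and cooling capacity tracks the cited
# build-cost increase from $20 million to $250 million.
```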
Moves are being made in all areas to address the problem. On the IT side, chip makers like AMD, Intel and Sun Microsystems are increasing the number of cores on a single piece of silicon while working to keep power consumption down. Sunnyvale, Calif.-based AMD, like Intel, offers dual-core chips now; Vetal said that when AMD releases its quad-core processors next year, they will fit into the same power envelope—about 95 watts—as the current dual-core models. Other chip technologies include such features as on-board memory controllers and the ability to throttle down a core based on application demand.
IT vendors also are coming together to address the issues, most notably with the Green Grid Alliance, which was formed this spring to find ways to push the use of energy-efficient technologies. Vetal said the group—which includes AMD, Sun, Dell, IBM, Hewlett-Packard and VMware, among others—is formalizing its organizational structure, and will have announcements to make this fall.
Virtualization—the ability to run multiple applications and operating systems on a single server—and software-based management tools also are gaining ground. However, Vetal said a key hurdle with the software tools is that they currently tend to be vendor-specific. Users are looking for the ability to manage the power and cooling of heterogeneous environments with a single tool.
“The biggest issue around these vendor tools is it's OK if I'm 100 percent an HP shop, or 100 percent a Dell shop, or 100 percent a Sun shop,” he said.
In addition, memory makers need to get more involved, said Jack Pouchet, director of marquee accounts for Liebert, of Columbus, Ohio. Memory doesn't throttle down when sitting idle, he said.
“They consume as much power as when they're doing something,” Pouchet said. “The memory people still have to come to the table. … It's amazing how cheap memory is, but it's killing us in the data center.”
Pouchet and others also applauded the growing interest from governmental agencies. Both the U.S. Senate and House of Representatives passed legislation asking that the Environmental Protection Agency's Energy Star program look into the development and use of energy-efficient technology. In addition, the EPA in August announced its Server Energy Measurement Protocol specification for measuring the power consumption of servers.
The state of California in September also passed a bill calling for a 25 percent reduction in greenhouse gas emissions by 2020, a move that will impact data centers, said Brad Binning, cooling systems business development manager for APC, in West Kingston, R.I.
For data center managers and facilities departments, the discussions here focused on such issues as more efficient power supplies and better use of cooling devices. A number of people also made the pitch for greater use of DC distribution in data centers. Johnny Gonzales, director of sales for Pentadyne Power, a flywheel maker in Chatsworth, Calif., argued that DC power would mean 20 to 40 percent less heat generated and 30 percent less power consumed.
DC power proponents point to the multiple conversion points inside the data center—where AC is converted to DC, and vice versa—as key places where heat is generated. They also argue that the uninterruptible power supplies, or UPSes, used in AC distribution systems are inefficient.
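The logic is that conversion losses compound. A minimal sketch, assuming for illustration a 90 percent efficiency at each stage (a typical ballpark, not a figure cited at the show):

```python
# A common AC distribution path passes through several conversions:
# utility AC -> UPS rectifier (AC to DC) -> UPS inverter (DC to AC)
# -> server power supply (AC to DC).
STAGE_EFFICIENCY = 0.90  # assumed per-stage efficiency, illustrative

stages = 3
end_to_end = STAGE_EFFICIENCY ** stages

print(f"End-to-end efficiency: {end_to_end:.1%}")  # 72.9%
# Roughly 27% of the input power becomes heat along the way; every
# conversion stage eliminated recovers its loss and removes heat
# that the cooling plant would otherwise have to absorb.
```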
However, several people at the show said that UPSes—which traditionally have been about 75 to 80 percent efficient—are improving, reaching as much as 88 to 92 percent efficiency. Companies also are looking to put more intelligence into them, enabling them to throttle down when demand is low. Liebert's Pouchet also pointed out that UPSes and other parts of the electrical system consume only about 10 percent of the power that goes into the data center, compared with 50 percent for IT equipment and 25 percent for cooling devices.
“It's a smaller slice of the pie,” he said. “A 50 percent increase in UPS [efficiency] isn't a big change. A 20 percent improvement in IT equipment is a huge change.”
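Pouchet's comparison follows directly from the percentages he cited. A quick sketch (modeling an efficiency gain simply as a proportional cut in that slice's consumption, an assumption made here for illustration):

```python
# Shares of total data center power, per Pouchet.
ups_share = 0.10      # UPS and electrical distribution
it_share = 0.50       # IT equipment
cooling_share = 0.25  # cooling devices

# A 50 percent improvement in the UPS slice vs. a 20 percent
# improvement in the IT slice, as fractions of total power.
ups_savings = 0.50 * ups_share  # 0.05 -> 5% of total power
it_savings = 0.20 * it_share    # 0.10 -> 10% of total power

print(f"UPS gain saves {ups_savings:.0%} of total power")  # 5%
print(f"IT gain saves {it_savings:.0%} of total power")    # 10%
```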
Pouchet suggested that while the promise of DC power will entice some users, AC power is established and proven, and the prospects for improving it are better than those for widespread adoption of DC power.
Regarding cooling devices, Pouchet, Binning and others said that relying solely on traditional wall-mounted air conditioning units and raised floors will no longer work as density increases and energy consumption rises. Where six years ago a rack of 2U (3.5-inch) servers consumed 4 kilowatts, that number—with blade servers—is climbing to as much as 24 kilowatts and will reach 40 kilowatts by 2009, Pouchet said. Businesses need to look at bringing cooling devices into the rows between server racks, and next to the racks themselves, Binning said.
“We're getting more dense,” he said. “You've got to look at your cooling. We've gotten to the point where, after 4 kilowatts, the raised floor won't work anymore.”
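The airflow arithmetic shows why the raised floor hits that wall. A minimal sketch using the standard sensible-heat approximation for air cooling (the 20-degree Fahrenheit temperature rise across the rack is an illustrative assumption):

```python
# Airflow needed to carry away a rack's heat with air, from the
# sensible-heat relation Q(BTU/hr) = 1.08 * CFM * delta_T(F),
# which reduces to CFM ~= 3.16 * watts / delta_T.
def cfm_required(watts: float, delta_t_f: float = 20.0) -> float:
    return 3.16 * watts / delta_t_f

for kw in (4, 24, 40):
    print(f"{kw} kW rack: ~{cfm_required(kw * 1000):,.0f} CFM")

# 4 kW:  ~632 CFM   -- roughly what perforated floor tiles can deliver
# 24 kW: ~3,792 CFM -- far beyond a tile's capacity
# 40 kW: ~6,320 CFM -- hence in-row and rack-side cooling units
```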