STANFORD, Calif.—Panelists in a discussion on green data centers here Aug. 1 at the fourth AlwaysOn Stanford Summit were asked whether they thought the future might bring data centers that no longer need cooling equipment, thus cutting back substantially on power draw.
Somewhat surprisingly, the answer—across the board—was “yes.”
Now there's a concept: data centers so completely self-contained that no one has to worry about power intake or cooling system failures. It turns out that IBM, Hewlett-Packard, Sun Microsystems and undoubtedly other companies are already doing research and testing in this area and, in fact, are beginning to release no-cooling-necessary components, if not full data centers.
Sun might be the closest to having a self-sustained, no-outside-cooling-necessary data center.
“We've already got a version of this self-contained data center in our Blackbox,” said panelist Subodh Bapat, a Sun vice president and distinguished engineer. “All you need is a concrete floor, a chilled water source and a power draw, and you have a portable data center that can be dropped in just about anywhere.”
Last Oct. 17, Sun unveiled Project Blackbox, which combines storage, computing, and network infrastructure hardware and software—along with high-efficiency power and liquid cooling—into modular units based on standard 20-by-8-by-8-foot shipping containers.
Each Blackbox holds up to 250 Sun Fire blade servers (standard 19-inch-wide size) and provides up to 1.5 petabytes of disk storage, 2 petabytes of tape storage, and 7TB of RAM.
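Read literally, those maximums probably describe alternative configurations rather than a single fully loaded container, but dividing them by the blade count gives a rough, back-of-the-envelope sense of per-blade density (using 1PB = 1,024TB):

```python
# Back-of-the-envelope per-blade averages for the container figures quoted
# above. The maxima likely describe different configurations, so treat
# these numbers only as a rough sense of scale.
BLADES = 250
DISK_TB = 1.5 * 1024    # 1.5 PB of disk
TAPE_TB = 2.0 * 1024    # 2 PB of tape
RAM_GB = 7 * 1024       # 7 TB of RAM

print(f"Disk per blade: {DISK_TB / BLADES:.1f} TB")   # ~6.1 TB
print(f"Tape per blade: {TAPE_TB / BLADES:.1f} TB")   # ~8.2 TB
print(f"RAM per blade:  {RAM_GB / BLADES:.1f} GB")    # ~28.7 GB
```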
Blade servers, which offload components such as power supplies and fans to a shared chassis to save space and power, are the fastest-growing server category in the United States and Europe, according to industry analyst firm IDC. They are generally the coolest-running type of server available.
The Blackbox itself needs no air cooling.
We'll see “huge leaps forward” over the next few years when it comes to no-cooling-needed data centers, Bapat said. “We're already on that track now, and we're only going to continue to discover more ways to improve systems—through lower-power processors, better design and other components,” he said.
HP Senior Vice President for Technology Services Mike Rigodanzo pointed out that his company is leading the charge for better-tuned data centers—installations that use optimal designs for airflow and air-conditioning-unit location, for example.
“Big [data center] rooms are not homogeneous,” Rigodanzo said. “Each one has its own airflow and design challenges, so services are needed to set up the center right the first time. Designing the center properly in the first place is essential to an efficient operation.”
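Rigodanzo's point about room-by-room differences ultimately comes down to heat: every watt the IT equipment draws becomes heat the cooling design must remove. A rough sizing sketch, in which the rack count and per-rack draw are illustrative assumptions rather than figures from the panel:

```python
# Rough cooling-load sizing, illustrating why each room is designed
# individually. The rack count and per-rack draw are assumptions
# for the sake of example, not numbers cited by the panelists.
WATTS_PER_RACK = 8_000        # assumed IT load per rack
RACKS = 40                    # assumed racks in the room
BTU_PER_WATT_HR = 3.412       # standard conversion: 1 W = 3.412 BTU/hr
TONS_PER_BTU_HR = 1 / 12_000  # 1 ton of cooling = 12,000 BTU/hr

it_load_w = WATTS_PER_RACK * RACKS
heat_btu_hr = it_load_w * BTU_PER_WATT_HR
cooling_tons = heat_btu_hr * TONS_PER_BTU_HR

print(f"IT load: {it_load_w / 1000:.0f} kW")
print(f"Heat to remove: {heat_btu_hr:,.0f} BTU/hr")
print(f"Cooling required: {cooling_tons:.0f} tons (before airflow losses)")
```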
New software that monitors power draw across the data center and dynamically calibrates it against the workload at hand will soon become available, Bapat said. That will become a major power-saving factor, he said.
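No product details were given, but the basic shape of such software is a feedback loop: sample power draw and utilization, then adjust power budgets to match the workload. A minimal sketch, with simulated monitoring and management hooks standing in for whatever facility APIs a real implementation would call:

```python
# Minimal sketch of a dynamic power-calibration loop of the kind Bapat
# describes. The monitoring and management hooks are simulated here;
# a real system would call facility and server-management APIs instead.
import random
import time

POLL_INTERVAL_SEC = 30
IDLE_THRESHOLD = 0.20        # below 20% utilization, tighten the power cap
FULL_CAP_WATTS = 8_000       # assumed per-rack power budget
REDUCED_CAP_WATTS = 4_000    # assumed budget for lightly loaded racks

def read_power_draw(rack: str) -> float:
    """Simulated: return the rack's current draw in watts."""
    return random.uniform(2_000, 8_000)

def read_utilization(rack: str) -> float:
    """Simulated: return the rack's average CPU utilization (0.0 to 1.0)."""
    return random.random()

def set_power_cap(rack: str, watts: int) -> None:
    """Simulated: push a power cap to the rack's management controller."""
    print(f"  {rack}: cap set to {watts} W")

def calibrate(racks: list) -> None:
    """One pass: match each rack's power budget to its current workload."""
    for rack in racks:
        draw = read_power_draw(rack)
        load = read_utilization(rack)
        cap = REDUCED_CAP_WATTS if load < IDLE_THRESHOLD else FULL_CAP_WATTS
        print(f"  {rack}: drawing {draw:.0f} W at {load:.0%} load")
        set_power_cap(rack, cap)

if __name__ == "__main__":
    while True:
        print("calibration pass")
        calibrate(["rack-01", "rack-02", "rack-03"])
        time.sleep(POLL_INTERVAL_SEC)
```

A production system would presumably weigh service-level commitments and workload placement, not just utilization, before capping a rack's power.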
“We do have a number of data center components now available that are rugged enough to withstand constant 50-degree Centigrade [122-degree Fahrenheit] temperatures,” said Steve Sams, IBM vice president of global sites and facilities.
“It's not hard to imagine that we'll eventually get to full data centers that won't need cooling equipment. These will be hundreds of times more efficient. And what a savings in power draw that will be.”
People in general are “pretty abysmal at predicting improvements in IT,” Sams said.
“Some day we'll look back and see that we could have improved a lot of things far earlier than we actually did,” he said.