No-Cooling-Necessary Data Centers Coming?

 
 
By Chris Preimesberger  |  Posted 2007-08-03
 
 
 
 
 
 
 

Sun, HP and IBM all say that the self-sufficient data center isn't far away.

STANFORD, Calif.—Panelists in a discussion on green data centers here Aug. 1 at the fourth Always On Stanford Summit were asked whether they thought the future might bring data centers that no longer need cooling equipment, thus cutting back substantially on power draw. Somewhat surprisingly, the answer—across the board—was "yes."
Now there's a concept: data centers so completely self-contained that no one has to worry about power intake or cooling system failures. It turns out that IBM, Hewlett-Packard, Sun Microsystems and undoubtedly other companies are already doing research and testing in this area and, in fact, are beginning to release no-cooling-necessary components, if not full data centers.
Sun might be the closest to having a self-sustained, no-outside-cooling-necessary data center. "We've already got a version of this self-contained data center in our Blackbox," said panelist Subodh Bapat, a Sun vice president and distinguished engineer. "All you need is a concrete floor, a chilled water source and a power draw, and you have a portable data center that can be dropped in just about anywhere." Last Oct. 17, Sun unveiled Project Blackbox, which combines storage, computing, and network infrastructure hardware and software—along with high-efficiency power and liquid cooling—into modular units based on standard 20-by-8-by-8-foot shipping containers.
Click here to read how Rackable Systems' Concentro servers will compete against Sun Microsystems' Project Blackbox.

Each Blackbox holds up to 250 Sun Fire blade servers (standard 19-inch-wide size) and provides up to 1.5 petabytes of disk storage, 2 petabytes of tape storage, and 7TB of RAM. Blades, which are servers that have some components removed for space, power and other considerations, are the fastest-growing server category in the United States and Europe, according to industry analyst IDC. They are generally the coolest-running type of server available. The Blackbox itself needs no air cooling.

We'll see "huge leaps forward" over the next few years when it comes to no-cooling-needed data centers, Bapat said. "We're already on that track now, and we're only going to continue to discover more ways to improve systems—through lower-power processors, better design and other components," he said.

HP Senior Vice President for Technology Services Mike Rigodanzo pointed out that his company is leading the charge for better-tuned data centers—installations that use optimal designs for airflow and air-conditioning-unit placement, for example. "Big [data center] rooms are not homogeneous," Rigodanzo said. "Each one has its own airflow and design challenges, so services are needed to set up the center right the first time. Designing the center properly in the first place is essential to an efficient operation."

New software that monitors power draw across the data center and dynamically calibrates it against the workload at hand will soon become available, Bapat said, and it will become a major power-saving factor.

To read what EMC plans to do to cut cooling and energy costs in the data center, click here.

"We do have a number of data center components now available that are rugged enough to withstand constant 50-degree Centigrade [122-degree Fahrenheit] temperatures," said Steve Sams, IBM vice president of global sites and facilities. "It's not hard to imagine that we'll eventually get to full data centers that won't need cooling equipment. These will be hundreds of times more efficient. And what a savings in power draw that will be."

People in general are "pretty abysmal at predicting improvements in IT," Sams said. "Someday we'll look back and see that we could have improved a lot of things far earlier than we actually did," he said.

Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.
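The power-matching software Bapat describes amounts to a simple control loop: poll each machine's utilization, then adjust its power state to fit the current workload. The sketch below is only an illustration of that idea under assumptions of our own—every function name, threshold and data source here is hypothetical, not Sun's actual product.

```python
# Hypothetical sketch of "monitor power draw, calibrate to workload."
# All names, thresholds, and telemetry sources are illustrative assumptions.
import random
import time

IDLE_UTIL_THRESHOLD = 0.10   # below this, a server is a candidate for low power
POLL_INTERVAL_SEC = 5        # how often the loop re-checks the data center


def read_utilization(server):
    """Stand-in for a real telemetry call (e.g., polling a management agent)."""
    return random.random()


def set_power_state(server, state):
    """Stand-in for a real power-management API on each blade."""
    print(f"{server}: switching to {state}")


def calibrate(servers):
    """One pass of matching power draw to the current workload."""
    for server in servers:
        util = read_utilization(server)
        if util < IDLE_UTIL_THRESHOLD:
            set_power_state(server, "low-power")
        else:
            set_power_state(server, "full-power")


if __name__ == "__main__":
    rack = [f"blade-{i:02d}" for i in range(4)]
    for _ in range(3):               # a few polling cycles for demonstration
        calibrate(rack)
        time.sleep(POLL_INTERVAL_SEC)
```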
 
 
 
 
 
 
 
 
 
 
 
