Part 2: Moving from an outdated, inefficient data center cooling solution to one that can actually improve efficiency usually means taking advantage of the latest technologies. Julius Neudorfer, founder of North American Access Technologies, examines the latest in cooling.
(Editor's Note: This is Part 2 of a 3-Part Series on Cooling Solutions for Virtual Servers. Click here for Part 1 or Part 3.)
New Technology: Close-Coupled Cooling
Several alternatives and enhancements to this well-entrenched but aging raised-floor "standard" have emerged. Cooling manufacturers have developed systems that shorten the distance air must travel between the racks and the cooling unit; some units sit "inrow" with the racks, while others mount "overhead." Each significantly increases cooling capacity, to as much as 20 kW per rack. Moreover, if the supporting infrastructure is available, these new systems should lower cooling costs significantly, since they move air to and from the racks with far less fan power and minimize the mixing of hot and cold air.
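To get a sense of what "20 kW per rack" demands of the airflow path, the sensible-heat relation Q = ṁ·cp·ΔT gives a back-of-the-envelope answer. The sketch below is illustrative, not from the article: it assumes typical sea-level air properties and a roughly 20°F (11.1 K) temperature rise across the servers.

```python
# Back-of-the-envelope airflow needed to remove a rack's heat load,
# using the sensible-heat relation Q = m_dot * cp * delta_T.
# Constants are typical textbook values; the loads and ~11 K rise
# are illustrative assumptions, not figures from the article.

RHO_AIR = 1.2         # kg/m^3, air density at ~20 C
CP_AIR = 1005.0       # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88  # 1 m^3/s expressed in cubic feet per minute

def airflow_cfm(load_watts: float, delta_t_k: float) -> float:
    """Volumetric airflow (CFM) needed to carry away load_watts of
    heat with a delta_t_k temperature rise across the equipment."""
    mass_flow = load_watts / (CP_AIR * delta_t_k)  # kg/s of air
    return mass_flow / RHO_AIR * M3S_TO_CFM

if __name__ == "__main__":
    for kw in (5, 10, 20, 30):
        print(f"{kw:>2} kW rack needs about {airflow_cfm(kw * 1000, 11.1):,.0f} CFM")
```

A 20 kW rack needs on the order of 3,000+ CFM; the shorter the path those thousands of cubic feet travel, the less fan energy is wasted, which is where close-coupled designs get their efficiency advantage.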
Hot-aisle containment seals the hot aisle and pairs it with "inrow" cooling units, ensuring that all the heat is extracted directly into the cooling system over a short distance. This improves both the ability to cool high-power racks and the overall efficiency of the cooling system.
Fully Enclosed Rack
By placing cooling coils inside a fully sealed rack, these systems can cool up to 30 kW in a single rack, the highest cooling density available today. They are offered both by cooling vendors and by some major server manufacturers, which sell their own "fully enclosed," coil-equipped cabinets as part of complete blade server solutions. Because the airflow is totally contained within the cabinet, this approach supports standard air-cooled servers at up to 30 kW per rack, and it should also potentially offer the highest level of energy efficiency.
Today, virtually all servers use air to transfer heat out of the chassis. Several manufacturers are exploring building or modifying servers to use "fluid-based cooling": instead of fans pushing air through the server chassis, liquid is pumped directly across the heat-producing components (e.g., CPUs, power supplies). This technology is still in the testing and development stage, and because leaked liquid can damage electronic systems, acceptance may face a hurdle that is difficult to overcome. Note that this is different from using liquid (i.e., chilled water or glycol-based coolant) as the heat-removal medium for a CRAC unit.
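To see why manufacturers keep coming back to liquid despite the leak risk, compare the volumetric heat capacity of water and air. The sketch below uses standard room-temperature physical constants (not data from the article); real server coolants are often glycol mixes with somewhat different properties.

```python
# Volumetric heat capacity, rho * cp in J/(m^3 * K), measures how much
# heat a given volume of coolant carries per degree of temperature rise.
# Values are standard textbook figures at roughly room temperature.

AIR = {"rho": 1.2, "cp": 1005.0}      # kg/m^3, J/(kg*K)
WATER = {"rho": 998.0, "cp": 4186.0}  # kg/m^3, J/(kg*K)

def vol_heat_capacity(fluid: dict) -> float:
    """Heat carried per cubic metre per kelvin of temperature rise."""
    return fluid["rho"] * fluid["cp"]

ratio = vol_heat_capacity(WATER) / vol_heat_capacity(AIR)
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
```

The ratio works out to several thousand to one, which is why a thin tube of water can do the job of an entire hot aisle's worth of moving air, and why the idea remains attractive even while leak containment is unsolved.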
The good news is that some of these new cooling technologies (inrow, overhead and enclosed) can be retrofitted to existing data centers to improve overall cooling. They can also be deployed selectively, creating high-density "islands" within an otherwise conventional room.
Best Practice vs. Reality
One of the realities of any data center, large or small, is that computing equipment changes constantly. As a result, even the best-planned data center tends to have new equipment installed wherever there is space, and this ad hoc growth causes many of the cooling issues. We all want to strive for "best practices," but the need to keep old systems running while installing new ones means that expediency often rules the day; we usually cannot stop to fully reorganize the data center to optimize the cooling. If you can take an unbiased look at your data center (i.e., avoid saying, "that's just how it has always been done"), you may find that many of the upcoming recommendations (see Part 3 of this series) can make a significant improvement to your cooling efficiency, many without disrupting operations.
Julius Neudorfer is the Director of Network Services and a founder of North American Access Technologies, Inc. Since 1987, Julius has been involved with designing Data and Voice Networks and Data Center Infrastructure. He personally holds a patent for a network-based facsimile PBX system. Julius is also the primary designer of the NAAT Mobile Emergency Data Center. Over the last 20 years, Julius has designed and overseen the implementation of many advanced Integrated Network Solutions for clients. He can be reached at firstname.lastname@example.org.