(Editor’s Note: This is Part 1 of a 3-Part Series on Cooling Solutions for Virtual Servers. Click here for Part 2 or Part 3.)
When power consumption is discussed in terms of watts per rack, in the mid-to-late 1990s it ranged from 500-1000W, and perhaps occasionally 1-2KW. Once we all got past the dreaded Y2K frenzy and started concentrating on moving forward instead of on remediation, the servers got smaller and faster. And they started drawing more power. Today, a typical 1U server draws 250-500W and, when 40 of them are stacked in a standard 42U rack, they can draw 10-20KW and produce 35,000-70,000 BTUs of heat per hour. This requires 3-6 tons of cooling per rack. Less than five years ago, that was the amount of cooling typically specified for an entire 200-400 square foot room holding 10-15 racks.
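For readers who want to verify these figures, the arithmetic is a simple unit conversion: each watt of electrical load becomes roughly 3.412 BTUs of heat per hour, and one ton of cooling removes 12,000 BTUs per hour. A minimal sketch (in Python, using the per-rack wattages above as illustrative inputs):

    # Convert an electrical load in watts to heat output (BTU/hr) and the
    # cooling required to remove it (tons). 1 W ~= 3.412 BTU/hr; 1 ton of
    # cooling removes 12,000 BTU/hr.
    WATTS_TO_BTU_PER_HR = 3.412
    BTU_PER_HR_PER_TON = 12_000

    def cooling_required(rack_watts: float) -> tuple[float, float]:
        """Return (heat in BTU/hr, cooling in tons) for a given rack load."""
        btu_per_hr = rack_watts * WATTS_TO_BTU_PER_HR
        return btu_per_hr, btu_per_hr / BTU_PER_HR_PER_TON

    for watts in (10_000, 20_000):   # the 10-20KW racks described above
        btu, tons = cooling_required(watts)
        print(f"{watts / 1000:.0f}KW rack -> {btu:,.0f} BTU/hr -> {tons:.1f} tons")
    # 10KW rack -> 34,120 BTU/hr -> 2.8 tons
    # 20KW rack -> 68,240 BTU/hr -> 5.7 tons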
Blade servers provide even greater space savings but, as a result, have even higher power and cooling requirements. They can support dozens of multi-core processors yet are only 8-10U high. However, each can require 6-8KW, and since a standard rack can hold 4-5 of them, the total can reach 24-32KW per rack.
Cooling and Virtualization
Virtualization has taken hold, and it is rapidly becoming the latest de facto computing trend. It has proven to work effectively and does have many benefits, some real and some hypothetical. One of the many claims is that it is more energy efficient, since it can (and usually does) reduce the number of “real” servers. Of course, part and parcel of the “upgrade” to a virtualized environment is usually a move to new high-performance, high-density servers. In and of itself, it is true that the server hardware does use less energy, since there are usually fewer servers. However, in practice, the concentration of high-density servers into a much smaller space, while a benefit, has created many real deployment problems.
Where Is the Downside?
OK, so if virtualization uses less space, and the servers use less energy overall, where is the downside?
Power Requirements: Yes, virtualizing the environment will use less server power overall since, if done properly, fewer servers are used. However, many existing power distribution systems cannot deliver 20-30KW to a single rack.
Cooling Requirements: By implication, if properly implemented virtualization uses less space and power by running fewer, denser servers, it should follow that those servers need less cooling. Therefore virtualization should be more energy efficient overall and, presumably, you have made your data center greener, so to speak.
This is where the virtualization efficiency conundrum first manifests itself. As mentioned earlier, data centers built just five years ago were not designed for 10, 20, or even 30KW per rack. As such, their cooling systems are not capable of efficiently removing that much heat from such a compact area. If all the racks were configured at 20KW per rack, the average power and cooling density could exceed 500 watts per square foot. Even some recently built Tier IV data centers are still limited to a 100-150W-per-square-foot average. As a result, many high-density projects have had to spread the servers across half-empty racks to avoid overheating. This lowers the overall average power per square foot.
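To see how quickly the density climbs, divide the rack load by the floor area each rack effectively occupies once its share of aisles and clearances is included. The footprint figures in the sketch below are illustrative assumptions, not measurements from any particular facility:

    # Approximate power density in watts per square foot of raised floor.
    # Assumption (for illustration only): each rack effectively occupies
    # 30-40 sq ft once aisles, clearances, and CRAC space are apportioned.
    def watts_per_sq_ft(rack_watts: float, sq_ft_per_rack: float) -> float:
        return rack_watts / sq_ft_per_rack

    for area in (30.0, 40.0):
        print(f"20KW rack over {area:.0f} sq ft -> "
              f"{watts_per_sq_ft(20_000, area):.0f} W/sq ft")
    # 20KW rack over 30 sq ft -> 667 W/sq ft
    # 20KW rack over 40 sq ft -> 500 W/sq ft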
Traditional Raised-Floor Cooling
The “classic” data center harks back to the days of the mainframe. It had a raised floor that served several purposes: it made it easy to distribute cold air from the Computer Room Air Conditioner (CRAC) units, and it also contained the power and communications cabling. While mainframes were very large, they only averaged 25-50 watts per square foot. Originally, to make everything look neat and organized, the equipment was set up in rows all facing the same way. In many cases, the cold air entered the bottom of the equipment cabinets and the hot air exited the top. The floor generally had no perforated tiles.
This actually was a relatively efficient method of cooling, since all the cold air went directly into the equipment cabinets and did not mix with the warm air. With the introduction of rack-mounted servers, the average power levels began to rise to 35-75 watts per square foot. It also became a problem that the cabinets all faced the same way, since the hot air now exited the back of one row of racks into the front of the next row. Thus, the “hot aisle/cold aisle” arrangement came into being in the mid-to-late 1990s.
CRAC units were still located mainly at the perimeter of the data center, but the floor tiles now had vents (or were perforated) in the cold aisles. This worked better, and the cooling systems were able to keep up with the rising heat load by adding more and larger CRAC units that had higher-power blowers, and by increasing the size of the floor tile vent openings.
Still the Predominant Method of Cooling
This is still the predominant method of cooling in most data centers built in the last 10 years, and in many that are still in the design stage. Raised floors became deeper: two-, three- or four-foot depths are now somewhat common. This allows more and more cold air to be distributed using this “time-tested and proven” methodology. It is a cost-effective method only up to a certain power level, though. Past that point, it has multiple drawbacks. For one, it takes much more energy for the blower motors in the perimeter CRACs to push more air at higher velocities and pressures. As a result, they use much more energy trying to deliver enough cold air through a single 2’ x 2’ perforated tile to support a 30KW rack.
Floor Grates Replace Floor Tiles
These perforated floor tiles have now even been replaced by floor “grates,” in an effort to supply enough cold air to racks that need “tons” of cold air to remove the heat of high-density servers. As an aside, each 3.5KW of load produces roughly 12,000 BTUs of heat per hour, which requires 1 ton of cooling.
Unfortunately, 3.5KW per rack has been exceeded many times over with the advent of the 1U server and the blade server. Now, instead of specifying how many tons of cooling are needed for an entire data center, we may need 5-10 tons per rack!
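Applying that 3.5KW-per-ton rule of thumb to the rack loads discussed earlier shows where the 5-10 tons-per-rack figure comes from; a brief sketch, using the same illustrative wattages:

    # Tons of cooling per rack, using the ~3.5KW-per-ton rule of thumb above.
    KW_PER_TON = 3.5

    for rack_kw in (20, 32):   # high-density rack loads discussed earlier
        print(f"{rack_kw}KW rack -> {rack_kw / KW_PER_TON:.1f} tons of cooling")
    # 20KW rack -> 5.7 tons of cooling
    # 32KW rack -> 9.1 tons of cooling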
As a result of this poor cooling-path efficiency at such high heat loads, the amount of power used to cool high-density server “farms” has actually exceeded the power used by the servers themselves. In fact, in some cases, for every dollar spent to power the servers, two or more dollars are spent on cooling. This is primarily due to the path-efficiency problem. Ideally, cooling should use less than half the energy the servers do, not twice as much.
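One way to sanity-check a facility against that claim is to compare the cooling power draw directly with the server (IT) power draw. A minimal sketch, with hypothetical meter readings as inputs:

    # Ratio of cooling power to server (IT) power. Per the rule of thumb
    # above, a well-designed cooling path should come in under ~0.5; a ratio
    # of 2.0 means two watts of cooling for every watt of server load.
    def cooling_ratio(cooling_kw: float, it_kw: float) -> float:
        return cooling_kw / it_kw

    it_load_kw = 200.0    # hypothetical server (IT) load
    cooling_kw = 400.0    # hypothetical cooling power draw
    print(f"Cooling-to-IT ratio: {cooling_ratio(cooling_kw, it_load_kw):.1f} (ideal: under 0.5)")
    # Cooling-to-IT ratio: 2.0 (ideal: under 0.5)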
In some cases, the traditional raised-floor perimeter cooling system is causing an overall increase in energy usage for high-density applications rather than a decrease. In addition, this method is often simply unable to adequately cool a full rack of high-density servers.
Non-Raised Floor Cooling
Once, a raised floor was considered the only way to cool a “real” data center. Now some newer cooling systems do not require a raised floor at all. These place the cooling units in close proximity to the racks, which improves not only cooling performance but also cooling efficiency. These new systems can be used with existing raised floors or without them, and they can serve as a complete solution or as an adjunct to an overtaxed cooling system.