Like many groups, IT people and Facilities people do not see things the same way. Chances are that if you are reading this in eWEEK, you are in IT. Nonetheless, Facilities personnel are usually the ones you call for any cooling system project. They are primarily concerned with the overall cooling requirements of the room, expressed in BTUs or tons of cooling, and with the reliability of systems they have used in the past. They leave the racks to IT and simply want to provide enough raw cooling power to meet your entire heat load - usually without regard to the different levels of rack density.
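For IT readers, translating between the kilowatt figures on rack nameplates and the BTU/tons language Facilities speaks is simple arithmetic: 1 watt of IT load is about 3.412 BTU/hr of heat, and 1 ton of cooling is defined as 12,000 BTU/hr. A minimal sketch of the conversion (the 5 kW rack is just an example figure):

```python
# Convert an IT heat load in kilowatts to the units Facilities uses.
# Standard conversion factors: 1 W ~= 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr.

BTU_PER_WATT_HR = 3.412
BTU_PER_TON = 12_000

def cooling_required(load_kw: float) -> tuple[float, float]:
    """Return (BTU/hr, tons of cooling) for a given IT load in kW."""
    btu_hr = load_kw * 1000 * BTU_PER_WATT_HR
    tons = btu_hr / BTU_PER_TON
    return btu_hr, tons

# Example: a single 5 kW rack rejects about 17,060 BTU/hr,
# or roughly 1.4 tons of cooling, all by itself.
btu, tons = cooling_required(5.0)
```

Doing this math per rack, rather than only for the room as a whole, is exactly where the density conversation between IT and Facilities starts.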
The typical response from Facilities is to add more of the same type of CRAC that is already installed (if there is space). This may partially address the problem, but not very efficiently. Clearly, this "Facilities vs. IT" mentality can no longer work. There needs to be some mutual understanding of the underlying issues so that both sides can cooperate on a more efficient solution and optimize the cooling systems to meet the rising high-density heat load.
Simple Low-Cost Solutions for Optimizing Cooling in Existing Installations
Clearly, the raised floor is the present standard and is not going to suddenly disappear. Several techniques (some low-cost or no-cost) can be implemented to improve the cooling efficiency of data centers dealing with high-density servers.
Blanking panels are by far the simplest, most cost-effective and most misunderstood item that can improve cooling efficiency. By ensuring that warm air from the rear of the rack cannot be drawn back into the front of the rack through open rack spaces, they immediately improve efficiency. This may save racks and servers in borderline thermal situations (especially near the top of the rack) from overheating.
Racks: If the backs of your racks are cluttered with cables, chances are the clutter is impeding airflow and causing the servers to run hotter than necessary. Make sure the rear heat-exhaust areas of the servers are not blocked.
Under floor: Cabling under the floor causes a similar problem by blocking and disrupting the cold airflow. Many larger data centers have 1-2 feet of under-floor depth dedicated to cabling so that it does not impact the airflow. While you cannot rebuild your data center, you can inspect and improve your under-floor cabling. Cables should be run and tightly bundled so that they have minimal impact on the airflow.
Floor Tiles and Vents
The size, shape, position and direction of floor vents, and the flow rating of perforated tiles, have great impact on how much cool air is delivered to where it's needed most. A careful evaluation of the placement and the amount of airflow in relation to the highest power racks can pay off as one of the best ways to maximize the cooling system efficiency. This delivers the cool air where it is needed most and minimizes the waste of cool air. Use different tiles, vents and grates to match the airflow to the heat load of the area.
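Matching tile airflow to heat load can be estimated with the standard sea-level HVAC rule of thumb: BTU/hr = 1.08 x CFM x deltaT (degF). A hedged sketch (the 20 degF rise and the 5 kW rack are illustrative assumptions; actual tile delivery depends on under-floor static pressure):

```python
# Estimate the cold-air volume (CFM) a rack needs to absorb its heat load,
# using the common sea-level rule of thumb: BTU/hr = 1.08 x CFM x deltaT(degF).
# 3,412 BTU/hr per kW is the standard conversion factor.

def required_cfm(load_kw: float, delta_t_f: float = 20.0) -> float:
    """CFM of supply air needed to absorb load_kw at a given temperature rise."""
    btu_hr = load_kw * 3412
    return btu_hr / (1.08 * delta_t_f)

# A 5 kW rack with a 20 degF rise across the servers needs roughly 790 CFM -
# often more than a single standard perforated tile delivers, which is why
# high-density racks may need grates or multiple vented tiles.
```

Running this estimate for your highest-power racks tells you quickly whether the tiles in front of them can plausibly supply enough cold air.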
Cables normally enter the racks through holes cut into the floor tiles. This opening, found under almost every rack (usually at the rear), is a great source of cooling inefficiency, since it basically "wastes" the cold air by allowing it to enter the back of the rack, where it is useless. More significantly, it lowers the static air pressure under the floor, which reduces the cold airflow available to the vented tiles in front of the rack (where it is needed). Every floor-tile opening for cables should be fitted with an air-containment device - typically a "brush"-style grommet collar that allows cables to enter but blocks the airflow. This is an easy, low-cost fix that will have a large impact.
A recent development is the Cold-Aisle Containment system. It is best described as a system of panels that spans the top of the cold aisle from the top edge of the racks. It can also be fitted with side doors to contain the cold air even further. This blocks the warm air from the hot aisle from mixing with the cold air and concentrates the cold air in front of the racks, where it belongs.
It has always been the "rule" to use a 68-70°F set point to maintain the "correct" temperature in a data center. In reality, it is possible to raise this carefully by a few degrees. The most important temperature is measured at the intake of the highest server in the warmest rack. While each manufacturer is different, most servers will operate fine with 75°F at the intake, so long as there is adequate airflow. (Check with your server vendors to verify their acceptable range.)
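The decision of how far to raise the set point reduces to one number: the gap between the warmest measured intake and the vendor limit. A hypothetical sketch (sensor names and the 75 degF limit are illustrative; verify the real limit with your server vendors):

```python
# How much room-set-point headroom exists before the warmest server intake
# would exceed the vendor limit? Sensor locations and the 75 degF default
# limit are illustrative examples, not any specific vendor's specification.

def setpoint_headroom(intake_temps_f: dict[str, float],
                      vendor_limit_f: float = 75.0) -> float:
    """Degrees F the room set point could rise before the warmest
    measured intake reading would exceed the vendor limit."""
    return vendor_limit_f - max(intake_temps_f.values())

# If the warmest top-of-rack intake reads 71 degF, there is about 4 degF
# of headroom to raise the room set point.
```

Measuring at the intakes, rather than at the CRAC return, is what makes this calculation meaningful.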
Besides temperature, the CRAC also maintains humidity. The typical target set point is 50 percent relative humidity, with the high-low range set at 60 percent and 40 percent. To maintain humidity, most CRACs use a combination of adding moisture and "reheating" the air, which can take a significant amount of energy. Simply broadening your high-low set points to 75 percent-25 percent will save a substantial amount of energy. (Again, check with your server vendors to verify their acceptable range.)
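The saving comes from the deadband: while humidity sits between the low and high set points, the CRAC does nothing, so a wider band means fewer humidify and reheat cycles. A hypothetical control sketch (function name, mode strings and set points are illustrative, not any vendor's API):

```python
# Sketch of CRAC humidity-control logic with a configurable deadband.
# Mode strings and set points are illustrative, not a real product's API.

def humidity_action(rh: float, low: float = 25.0, high: float = 75.0) -> str:
    """Return the humidity action for a relative-humidity reading (percent).

    Inside the low-high deadband the unit does nothing, which is where
    the energy saving comes from.
    """
    if rh < low:
        return "humidify"
    if rh > high:
        return "dehumidify"  # often done by over-cooling plus energy-hungry reheat
    return "idle"
```

With the wider 25-75 band, a 35 percent reading is simply left alone; under 40-60 set points the same reading would trigger humidification.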
Synchronize Your CRACs
In many installations, each CRAC is not in communication with any other CRAC. Each unit simply bases its temperature and humidity actions on what it senses in the (warm) return air. Therefore, it is possible, and even common, for one CRAC to be cooling or humidifying the air while another CRAC is dehumidifying or reheating it.
By reviewing the settings, you can easily determine whether this is the case, and you can have your cooling system contractor add a master control system, or at least change the set points of the units, to avoid or minimize the conflict. In many cases, only one CRAC is needed to control humidity; the others can have much wider high-low set points and serve as backups should the primary unit fail. Resolving this can save significant energy over the course of a year. It can also reduce wear on the CRACs themselves, since they will run only when really needed.
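A quick way to spot "fighting" CRACs is simply to record each unit's current humidity action and flag opposing pairs. A minimal sketch, assuming you can read each unit's mode (unit names and mode strings are illustrative, not from any BMS product):

```python
# Flag CRAC units that are currently working against each other on humidity.
# Unit names and mode strings are illustrative, not from any real BMS.

OPPOSING = {("humidify", "dehumidify"), ("dehumidify", "humidify")}

def find_conflicts(unit_modes: dict[str, str]) -> list[tuple[str, str]]:
    """Return pairs of units whose current humidity actions oppose each other."""
    conflicts = []
    units = sorted(unit_modes)
    for i, a in enumerate(units):
        for b in units[i + 1:]:
            if (unit_modes[a], unit_modes[b]) in OPPOSING:
                conflicts.append((a, b))
    return conflicts

# One unit humidifying while another dehumidifies wastes energy twice over;
# any pair this returns is a candidate for set-point changes or a master control.
```

Even a manual walk-through of the units with a clipboard amounts to running this check by hand.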
A thermal survey may yield surprising results and, if properly interpreted, can provide many clues to improving efficiency using any or all of the above items and methods.
Using house fans does not really solve any problems, but I mention it as a warning about what not to do, having seen many such futile attempts to prevent equipment from overheating.
Geographic - Climatic and Other Factors
While we have discussed many of the issues and technologies within the data center, the location of the building and the surrounding climate can have a significant impact on cooling efficiency.
A modern, large, multi-megawatt, dedicated Tier IV data center is designed to be energy efficient. It typically uses large water-chiller systems with built-in economizer functions (see below). These allow the compressors to be shut down during the winter months, using only the low exterior ambient air temperature to provide chilled water to the internal CRACs. In fact, Google's super-sized data centers, like the one built in Oregon, are located there because the average temperature is low, water is plentiful and low-cost power is available.
Not everyone operates in the rarified atmosphere of a Tier IV world. The tens of thousands of small-to-medium-sized data centers located in high-rise office buildings or office parks may not have this option. They are usually limited to the building's cooling facilities and in their ability to use efficient, high-density cooling. Also, when the office floor plan is laid out, the data center is often given the space that no one else wants, and sometimes the IT department has no say in its design. As a result, the size and shape may not be ideal for rack and cooling layouts. When your organization is considering a new office location, the ability of the building to meet the requirements of the data center should also be considered - not just how nice the lobby looks.
The Economizer Coil - 'Free Cooling'
Many smaller and older installations used a single cooling technology in their CRACs: a cooling coil chilled by a compressor located in the unit. Whether it was hot or cold outside, the compressor needed to run year-round to cool the data center.
A significant improvement was added to this basic system: a second cooling coil connected by lines filled with water and antifreeze to an outside coil. When the outside temperature is low (i.e., 50°F or less), "free cooling" is achieved, since the compressor can be used less or stopped entirely (below 35°F). This simple and effective system was introduced many years ago but was not widely deployed because of its increased cost and the requirement for a second outside coil unit. In colder climates, it can be a significant source of energy savings. While it is usually not possible to retrofit this to existing systems, it is highly recommended for any new site or cooling system upgrade. The use of the economizer has risen sharply in the last several years, and in some states and cities it is even a requirement for new installations. It is primarily used in areas with colder climates.
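The economizer logic described above reduces to a simple mode selection based on outdoor temperature. A sketch using the approximate thresholds mentioned (roughly 50°F for partial free cooling, roughly 35°F for full compressor shutdown; real thresholds vary by system and climate):

```python
# Mode selection for an economizer ("free cooling") coil, using the
# approximate thresholds from the text: below ~50 degF the outside coil
# assists, and below ~35 degF the compressor can stop entirely.
# Thresholds are illustrative; actual values vary by system.

def economizer_mode(outside_temp_f: float) -> str:
    """Return how the cooling load is shared at a given outdoor temperature."""
    if outside_temp_f < 35.0:
        return "free cooling only"     # compressor fully stopped
    if outside_temp_f <= 50.0:
        return "partial free cooling"  # economizer coil assists the compressor
    return "compressor only"           # too warm outside; no free cooling
```

The energy saving is simply the fraction of the year the system spends in the two free-cooling modes, which is why the economics favor colder climates.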
The Bottom Line
There is no one best solution to the cooling and efficiency challenge. However, with a careful assessment of existing conditions, a variety of solutions and optimization techniques can substantially improve the cooling performance of your data center. Some cost literally nothing to implement, while others carry a nominal expense; all will produce a positive effect. Whether it is 500 or 5,000 square feet, if done correctly, your data center will improve its energy efficiency. It will also increase uptime, since the equipment will receive better cooling and the cooling systems will not be working as hard to provide it.
So, don't just think about "going green" because it is fashionable. In this case, it is necessary to meet the High-Density Cooling Challenge, while lowering energy operating costs.