Server fans: How to optimize the power used
Second only to the power supply, server fans have become a large consumer of power (apart from the actual computing-related components themselves). As servers have become smaller and smaller, and now commonly pack several multi-core CPUs into a 1U-high chassis, moving a sufficient amount of air through the server requires multiple small, high-velocity fans. These fans must push air through very restrictive airflow paths within the server and through the very small intake and exhaust areas at the front and rear of the chassis.
These fans can consume 10 to 15 percent or more of the total power drawn by the server. And since the fans run on DC, they draw their power from the power supply, which increases the server's input power, again multiplied by the power supply's inefficiency. In addition, in 1U servers, most or all of the airflow is routed through the power supply fans, since there is virtually no free area on the rear panel to exhaust the hot air.
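To see how fan power is amplified at the wall by power supply inefficiency, here is a minimal back-of-the-envelope sketch. The wattage and efficiency figures are assumptions for illustration, not measurements from any particular server:

```python
# Assumed figures for illustration only -- not from a specific server.
component_watts = 300.0   # CPUs, memory, drives, etc.
fan_watts = 45.0          # internal DC fans (roughly 10-15% of total draw)
psu_efficiency = 0.80     # assumed mid-range power supply efficiency

# The fans draw DC power from the supply, so their draw is added to the
# DC load before the supply's inefficiency is applied.
dc_load = component_watts + fan_watts
input_watts = dc_load / psu_efficiency    # AC power drawn at the wall

# Share of total input power attributable to the fans.
fan_share_of_input = (fan_watts / psu_efficiency) / input_watts

print(f"DC load: {dc_load:.0f} W")
print(f"AC input: {input_watts:.2f} W")
print(f"Fan share of input power: {fan_share_of_input:.1%}")
```

With these assumed numbers, 45 W of fan load becomes about 56 W at the wall, illustrating why every watt saved inside the server saves more than a watt of input power.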
To improve efficiency, many new servers have thermostatically controlled fans that raise fan speed as more airflow is needed to cool the server. This is an improvement over the old method of fixed-speed fans that run at maximum speed all the time. However, these variable-speed fans still require a lot of energy as internal heat loads rise and/or input air temperature rises.
For example, if the server's internal CPUs and other computing-related components draw 250 to 350 watts from the power supply, the fans may require 30 to 75 watts to keep enough air moving through the server. The result is an overall increase in server power draw as heat density and air temperature rise in the data center. In fact, studies that have measured and plotted fan energy usage against server power and inlet air temperature show some very steep, fan-related power curves for the temperature-controlled fans of small servers.
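Those steep power curves follow from the well-known fan affinity laws, under which fan power scales roughly with the cube of fan speed. A minimal sketch (the baseline speed and wattage are assumptions, and real fans only approximate the ideal cubic law):

```python
BASE_RPM = 6000.0    # assumed baseline fan speed
BASE_WATTS = 30.0    # assumed fan power at that baseline speed

def fan_power(rpm):
    """Ideal fan affinity law: power scales with the cube of speed."""
    return BASE_WATTS * (rpm / BASE_RPM) ** 3

# A modest 50% speed increase more than triples fan power.
for rpm in (6000, 7500, 9000):
    print(f"{rpm:.0f} rpm -> {fan_power(rpm):.1f} W")
```

This cubic relationship is why letting inlet temperatures creep up, and forcing thermostatic fans to spin faster, costs disproportionately more energy than the temperature change might suggest.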
The CPU is the heart of every server and the largest computing-related power draw. While both Intel and AMD offer many different families of CPUs, all with the goal of providing more computing power per watt, the overall power requirement of servers has continued to rise (since the demand for computing power has also risen).
For example, Intel CPU power requirements range from 40 to 80 watts for a Dual-Core Intel Xeon processor to 50 to 120 watts for a Quad-Core processor, depending on the version and clock speed. As mentioned previously, many servers are configured with two, four or even eight dual- or quad-core CPUs. And naturally, we all want the fastest servers we can buy today, in the hope of getting a three-year usable life before the next wave of software or applications overwhelms them.
It has been well documented that the average CPU sits idle over 90 percent of the time and hits peak demand only for very short periods, yet it continuously draws a substantial portion of its maximum power requirement 24 hours a day. Moreover, even when servers are equipped with power-saving features in hardware and software (as most servers are), these features are often disabled by server administrators.
One of the primary goals of virtualization is to decrease the number of servers that are mostly running at idle, and to consolidate their functions and applications onto fewer, more powerful servers that run at a higher average utilization rate.
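The energy case for consolidation can be sketched with simple arithmetic. All of the figures below (server counts, idle and loaded wattages) are assumptions chosen for illustration:

```python
# Assumed figures for illustration only.
idle_watts = 200.0    # draw of a mostly idle server (assumed)
busy_watts = 350.0    # draw of a well-utilized host (assumed)

n_before = 10         # mostly idle servers before consolidation
n_after = 2           # consolidated hosts after virtualization

power_before = n_before * idle_watts
power_after = n_after * busy_watts
saving = 1 - power_after / power_before

print(f"Before: {power_before:.0f} W")
print(f"After:  {power_after:.0f} W")
print(f"Saving: {saving:.0%}")
```

Even though each consolidated host draws more power individually, retiring the idle servers, which burn a large fraction of peak power while doing little work, yields a substantial net reduction.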
Ultimately, the performance requirements and the types of computing loads your applications face will be the determining factors in your choice of the number and type of CPUs. By matching the computing load to the performance and number of CPUs, you can optimize the efficiency of each server.