How to Optimize the Energy Efficiency of Your Server

By Julius Neudorfer  |  Posted 2009-03-05
Energy efficiency in the data center is the hot topic of the day. We all want the fastest, most powerful servers for our data center. We want to optimize, virtualize and consolidate in the name of making our data centers more efficient and green. Here, Knowledge Center contributor Julius Neudorfer examines several key components that impact the total energy a typical server utilizes, as well as what it really costs to operate a server and how to optimize the energy efficiency of your server.

Everyone is looking at their data center efficiency, trying to quantify it and improve it. Of course, there are actually two separate groups addressing it from different positions: the facilities team and the IT group.

The facilities team is responsible for the power and cooling of the overall enclosed space. The IT group is in charge of the servers, storage and networking hardware. Typically, the two sides speak to each other as little as possible, except when they have reached the limits of power or cooling (or both) in the server room or data center.

One group, The Green Grid, was created by both the IT equipment manufacturers and the power and cooling equipment manufacturers. It has created, and has been promoting, Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE) as methods of measuring data center efficiency.

While I won't delve into all the details of PUE calculations here, the basic premise is that it represents the total power consumed by the data center (including the uninterruptible power supply and cooling, as well as the IT load itself), divided by the IT load alone. A simple example: if the total load is 200 kilowatts (kW) and the IT load is 100 kW, the PUE is 2.0. While PUE can vary from 1.x to 3.x, a PUE of 2.0 is a fairly common operating ratio for many data centers.
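To make that arithmetic concrete, here is a minimal Python sketch (my own illustration, not something published by The Green Grid) that computes PUE and DCiE from a facility's total load and IT load, using the 200 kW / 100 kW example above.

def pue(total_kw, it_kw):
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_kw / it_kw

def dcie(total_kw, it_kw):
    """Data Center Infrastructure Efficiency: IT power as a fraction of total power."""
    return it_kw / total_kw

# The 200 kW / 100 kW example from the paragraph above.
total_load_kw = 200.0   # total facility load, including UPS and cooling, plus the IT load
it_load_kw = 100.0      # IT equipment load (servers, storage, networking)

print("PUE:  %.2f" % pue(total_load_kw, it_load_kw))               # 2.00
print("DCiE: %.0f%%" % (dcie(total_load_kw, it_load_kw) * 100))    # 50%

Note that DCiE is simply the reciprocal of PUE expressed as a percentage, so the two metrics carry the same information in different forms.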

However, even these new measurement standards, oddly enough, do not directly address IT equipment efficiency at all; they address only the efficiency of the power and cooling equipment. And, while this article is not focused on data center infrastructure efficiency, it is important for the IT department to understand and consider that for every watt consumed by IT equipment, the data center infrastructure requires additional energy to support it.

Even the United States government, after spending a significant amount of time and resources, has not been able to fully define and regulate the power efficiency of data centers, servers and other IT equipment (according to an EPA report to Congress in August 2007). The process is still under way and, according to the EPA ENERGY STAR Computer Server Stakeholder Meeting of July 9, 2008 in Redmond, WA, the first-tier rules are expected to be finalized in 2009.

In the rush to optimize, virtualize and consolidate in the name of making our computing-related operations more effective and efficient (and, of course, green), we have heard many server manufacturers profess that their products provide the most computing power for the least amount of energy. Only recently have the server manufacturers even begun to discuss or disclose the efficiency of their servers. Currently, there are no real standards for the overall energy efficiency of servers.

There are several key components that impact the total energy a typical server uses: power supplies, fans, CPUs, memory, hard drives, I/O cards and ports, and other motherboard components and supporting chip sets. These main components exist in both conventional servers and blade servers.

However, in the case of blade servers, some items (such as power supplies, fans and I/O ports) are shared on a common chassis, while the CPUs and other related motherboard items are located on the individual blades. Depending on the design of the blade server, the hard drives can be located on either the chassis or the blades. In addition to the components listed above, the operating system and virtualization software will also impact the overall usable computing throughput of the hardware platform.
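As a rough illustration of how those components add up, here is a back-of-the-envelope Python sketch. The wattage figures are placeholders for a hypothetical two-socket server, not data from any manufacturer; replace them with your vendor's spec-sheet values or, better, with measured numbers.

# Placeholder per-component figures (in watts) for a hypothetical 2U, two-socket server.
# Replace these with vendor spec-sheet values or, better, actual measurements.
component_watts = {
    "CPUs (2 sockets)": 190,
    "Memory (8 DIMMs)": 40,
    "Hard drives (4 drives)": 48,
    "I/O cards and ports": 25,
    "Motherboard, chip sets and other components": 50,
    "Fans": 30,
}

dc_load = sum(component_watts.values())    # load inside the chassis
psu_efficiency = 0.80                      # assumed power-supply efficiency
ac_draw = dc_load / psu_efficiency         # draw at the wall after supply losses

print("Estimated internal load: %d W" % dc_load)
print("Estimated AC draw at %.0f%% supply efficiency: %.0f W" % (psu_efficiency * 100, ac_draw))

The power supply sits outside the sum: whatever the internal components draw is divided by the supply's efficiency to get the draw at the wall, which is why power supply efficiency matters so much to the total.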

Every manufacturer likes to claim that its product or platform is the most energy-efficient. However, while each one may have a particular sweet spot (for example, a chip set may be more efficient with a particular operating system), overall they all use the same basic components and are in the same boat when it comes to the power those components consume.

Of course, we all want the fastest, most powerful servers for our data center. So, although energy efficiency (that is, being green) is the watchword, historically it would seem we only think about energy usage when our power and/or cooling systems are maxed out and may need to be upgraded. Normally, when we need to know how much power a server requires, we turn to the name plate. However, the name plate only represents the maximum amount of power the unit could draw, not the power it actually draws in operation. Let's now examine what it really costs to operate a server.
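As a preview of that cost question, the basic arithmetic is straightforward. The following Python sketch is my own illustration, using an assumed average draw, PUE and electricity rate rather than figures from the article; it estimates what a single server costs to run for a year once the supporting infrastructure is factored in.

HOURS_PER_YEAR = 8760

def annual_cost(avg_draw_watts, pue, rate_per_kwh):
    """Yearly electricity cost of one server, including the PUE overhead
    for the power and cooling infrastructure that supports it."""
    kwh_per_year = (avg_draw_watts / 1000.0) * HOURS_PER_YEAR
    return kwh_per_year * pue * rate_per_kwh

# Assumed values for illustration only; measure your own server and use your utility rate.
avg_draw_watts = 300     # measured average draw, not the name-plate rating
pue = 2.0                # the common operating ratio cited earlier
rate_per_kwh = 0.10      # dollars per kilowatt-hour

print("Estimated annual cost: $%.2f" % annual_cost(avg_draw_watts, pue, rate_per_kwh))

Even under these modest assumptions, the electricity bill comes to roughly $500 per year per server, which is why measuring the actual draw, rather than relying on the name plate, is worth the effort.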
