Such metrics already exist for desktop computers and laptops under the EPA's Energy Star program.
Measuring server performance is harder, however, because servers rarely operate at full capacity, and the measures each vendor uses to scale back power use under lighter loads vary widely, according to Jon Koomey, staff scientist at Lawrence Berkeley National Laboratory and consulting professor at Stanford.
Koomey, who researched and published one of the first real-world estimates of power usage and waste in corporate data centers, chaired the conference the EPA sponsored in January to bring vendors and analysts together to create a useful metric.
Power use and waste in corporate data centers have become a hot issue lately as the costs of oil and electricity have risen.
A survey conducted by Ziff Davis Media and sponsored by AMD found that 70 percent of large organizations track power consumption and cooling, but that only 30 percent have used either as a criterion for evaluating new equipment. Thirty percent are investigating data-center power consumption as a way to lower costs.
Both the House and Senate are already considering legislation to restrict data-center power use.
Data center managers gave Intel an earful on the subject at its 2006 user conference, even though both Intel and AMD have already begun shipping processors that draw less power.
AMD, Dell and other manufacturers have also created a group called the Green Grid Alliance to address the issue.
International Data Corp. estimates that electricity for data centers costs $3.3 billion annually nationwide, and that the number of servers in the United States will jump 50 percent over the next four years.
Much of that growth will come from the ongoing trend toward centralizing computing power in large servers in a data center or other single location, rather than on stand-alone servers in business units, according to Vernon Turner, an analyst at IDC.
That consolidation saves money by reducing the cost of maintaining servers in many locations, but it intensifies the data-center power issue by stacking much more of a company's computing resources in dense racks that are harder to cool than stand-alone servers.
The lack of a metric that can be used to compare one company's server to another, and to compare different form factors such as blades or rack-mounted servers, makes it difficult for data-center managers to predict either their power or cooling needs, Koomey said.
The EPA's Server Energy Measurement Protocol is designed to provide a standard reference by measuring power requirements for servers at different load levels.
The average corporate server runs at between 15 percent and 20 percent of capacity, according to a draft version of the specification, so measurements of power use at peak capacity alone are unrealistic.
The specification includes a specific protocol vendors can use to test Intel- and AMD-based servers at their peak capacity and at a range of other load levels as well.
That should allow customers to make decisions based on power use of several different servers, according to the capacity at which they expect those servers to run, Koomey said.
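To make the comparison concrete, here is a minimal sketch of how a data-center manager might use measurements like the ones the protocol would produce. The function, the server names, and all power figures are hypothetical illustrations, not data from the EPA specification; the sketch simply interpolates power draw between measured load levels to estimate consumption at an expected utilization.

```python
# Hypothetical illustration: comparing two servers' power draw at an
# expected utilization level, given (load fraction, watts) measurement
# pairs of the kind a standard test protocol would report.
# All server names and numbers below are made up.

def power_at_load(measurements, load):
    """Linearly interpolate power draw in watts at a given load
    fraction (0.0 to 1.0) from sorted (load, watts) measurement pairs."""
    points = sorted(measurements)
    if load <= points[0][0]:
        return points[0][1]
    for (l0, w0), (l1, w1) in zip(points, points[1:]):
        if load <= l1:
            frac = (load - l0) / (l1 - l0)
            return w0 + frac * (w1 - w0)
    return points[-1][1]

# Illustrative measurements at idle, 20%, 50%, and 100% load.
server_a = [(0.0, 180), (0.2, 220), (0.5, 280), (1.0, 350)]
server_b = [(0.0, 120), (0.2, 200), (0.5, 300), (1.0, 400)]

expected_load = 0.15  # typical corporate utilization per the draft spec
print(power_at_load(server_a, expected_load))
print(power_at_load(server_b, expected_load))
```

In this made-up example, server B draws more power at peak than server A but less at the 15 percent utilization typical of corporate servers, which is exactly the kind of load-dependent comparison the protocol is meant to enable.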
The specification is due to be finalized in September, and will go into effect by July 1, 2007, according to EPA documents.