Fujitsu Laboratories wants to make servers more efficient and reduce power consumption in data centers by enabling organizations to more precisely measure the energy required to run various workloads.
The company has developed a technology that builds upon a power-management feature already in Intel x86 Xeon processors that measures energy consumption at the CPU level. With Fujitsu’s technology, businesses will be able to measure how much energy is being used at the core level, tracking information such as clock cycles and cache-hit percentages, according to company officials.
The result is a more accurate assessment of the power needed to run individual software programs, which they said will mean more energy-efficient programming, lower server power use and improved software performance.
Power consumption continues to be a concern in data centers and supercomputer environments. Fujitsu officials noted that a high-end high-performance computing (HPC) system can consume as much as 18 megawatts. In addition, a recent report from the Department of Energy’s National Renewable Energy Laboratory found that data centers in the United States consumed 78 million megawatt-hours (MWh) of power in 2010, representing 2 percent of the country’s total electricity demand. That number grew to 91 million MWh three years later, or 2.4 percent of U.S. demand.
In Japan, the Ministry of Internal Affairs and Communications has said that data centers in that country consume an average of 7.72 billion kWh per year, Fujitsu said.
According to company officials, reducing the power required to run programs on servers is a key way to make data centers more efficient, and understanding the energy being consumed by existing software is important in understanding how to develop more efficient software.
Intel server chips include a feature called Running Average Power Limit (RAPL), which is designed to measure and limit power consumption in the processor. However, that measurement is done at the CPU level; Fujitsu’s technology extends the capability to the individual cores, where, company officials said, the software actually runs.
Gaining greater insight into the core level will offer a more detailed picture of the energy requirements of the software, they said.
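To make the distinction concrete, the sketch below shows what CPU-level RAPL measurement looks like today on Linux, via the kernel’s powercap interface; this is the package-wide counter that Fujitsu’s technology refines down to individual cores. The sysfs path, the counter maximum and the need for elevated read permissions all vary by system, so treat this as an illustration of the existing CPU-level capability, not of Fujitsu’s unreleased per-core technology.

```python
# Read the package-level RAPL energy counter exposed by Linux's
# intel_rapl powercap driver and derive average power between two
# samples. Assumes a Linux host with the driver loaded; the path
# below (package 0) and the 32-bit counter width are common defaults,
# but the true maximum is published in max_energy_range_uj.
RAPL_DIR = "/sys/class/powercap/intel-rapl:0"

def read_energy_uj(rapl_dir=RAPL_DIR):
    """Return the cumulative energy counter in microjoules."""
    with open(f"{rapl_dir}/energy_uj") as f:
        return int(f.read())

def average_power_watts(e0_uj, e1_uj, seconds, max_uj=2**32):
    """Average power (watts) between two counter samples,
    compensating for counter wraparound."""
    delta = e1_uj - e0_uj
    if delta < 0:  # counter wrapped past its maximum
        delta += max_uj
    return delta / 1e6 / seconds  # microjoules -> joules -> watts
```

Sampling the counter before and after a workload runs, then calling `average_power_watts` on the two readings, gives the kind of per-program energy figure the article describes, but only at whole-package granularity; attributing it to one core among many is precisely the gap Fujitsu says its technology closes.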
The company is releasing few details about the new technology, though officials said that, beyond the information gleaned on a per-core basis, the measurement itself imposes only about 1 percent overhead on software performance.
Fujitsu engineers plan to present details of the technology at the Summer United Workshops on Parallel, Distributed and Cooperative Processing 2015 in Japan on Aug. 4. The company is currently testing the technology, with plans to put it into practical use in 2016. Fujitsu officials said they are considering deploying it in the company’s own data centers.