The vendors will use Xeon processors and ProLiant servers to create a system that will offer 1-petaflop performance and low power use.
Intel and Hewlett-Packard are
working together to build a highly energy-efficient supercomputer that will
include the chip maker's upcoming Xeon Phi coprocessors and a design that uses
warm water to cool the servers.
The supercomputer, announced Sept.
5, will be used by the Department of Energy's National Renewable Energy
Laboratory (NREL) for research into numerous energy-related issues, including
renewable energy and energy-efficient technologies, according to Intel.
The $10 million system will include
about 3,200 Xeon processors, another 600 or so Xeon Phi coprocessors and
various ProLiant servers from HP. It will eventually offer total peak
performance of more than 1 petaflop (a thousand trillion floating-point
operations per second) while improving the energy-efficiency rating of NREL's
data center.
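As a rough sanity check on the petaflop claim, the component counts from the article can be combined with assumed per-device peak figures. The per-device numbers below are illustrative assumptions, not figures published by Intel or NREL:

```python
# Back-of-the-envelope peak-performance estimate for the NREL system.
# Component counts come from the article; per-device peak numbers are
# assumptions for illustration only.

XEON_COUNT = 3200         # Xeon processors (from the article)
PHI_COUNT = 600           # Xeon Phi coprocessors (approximate, from the article)

XEON_PEAK_TFLOPS = 0.166  # assumed: 8 cores x 2.6 GHz x 8 flops/cycle per E5-2670
PHI_PEAK_TFLOPS = 1.0     # assumed: roughly 1 teraflop per Xeon Phi coprocessor

total_pflops = (XEON_COUNT * XEON_PEAK_TFLOPS + PHI_COUNT * PHI_PEAK_TFLOPS) / 1000
print(f"Estimated peak: {total_pflops:.2f} petaflops")  # ~1.13 petaflops
```

Under these assumptions the aggregate lands just above 1 petaflop, consistent with the "more than 1 petaflop" figure in the article.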
Installation will begin in November,
with full compute capacity coming online in the summer of 2013, according to
the chip maker.
"The heart of NREL's new supercomputer is based
on a powerful combination of the Intel Xeon processor E5 product family, which
leads the data center industry in performance per watt, and Intel Xeon Phi coprocessors,
which are setting new records for energy efficiency," Raj Hazra, vice
president and general manager of Intel's Technical Computing Group, said in a
statement. "We are proud that the very best energy-efficient processing
technology in computing is the foundation for the supercomputer that will drive
the research for renewable energy and energy-efficient technologies."
The Xeon and Xeon Phi technologies
are key parts of Intel's high-performance computing (HPC) efforts. The company
has targeted HPC, along with cloud and networking, as a key growth area that
promises as much as 20 percent annual growth. Intel in March rolled out its
Xeon E5-2600 processors, which offer up to eight cores, 80 percent better
performance than the previous generation and 50 percent better energy
efficiency, with executives targeting the portfolio at HPC.
Xeon Phi is the brand that Intel has
wrapped around its Many Integrated Core (MIC) technology, which has been in
development for more than two years. Intel officials are fashioning the Xeon
Phi chips as coprocessors that will work with CPUs such as Xeons to bring
parallel-processing capabilities to particular applications, enabling them to
run faster than they would on traditional Xeons while consuming less power. The
Xeon Phi chips, built on Intel's 22-nanometer manufacturing process, will have
more than 50 cores when they are released later this year.
Accelerators like the Xeon Phi are
becoming increasingly popular in such areas as HPC and supercomputing as more
applications are designed to take advantage of their parallel processing
capabilities and their energy efficiency. Much of the attention around
accelerators has centered on graphics chips from vendors such as Nvidia and
Advanced Micro Devices, but Intel is instead basing its efforts on the x86 architecture.
Intel executives have argued that having x86-based coprocessors offers an
advantage over GPUs because of their ability to run more existing code.
"At NREL, we have taken a
holistic approach to sustainable computing," Steve Hammond, NREL
computational science director, said in a statement. "This new system will
allow NREL to increase our computational capabilities while being mindful of
energy and water used. We will take advantage of both the bytes of information
produced and the BTUs produced."
The supercomputer, which will be
housed at the Energy Systems Integration Facility being built in Colorado, will
leverage current Xeon E5-2670 processors, future 22nm chips built on Intel's
Ivy Bridge architecture and the Xeon Phi chips. It will include ProLiant SL230s
and SL250s servers from HP powered by the current Xeon E5-2670s, and
next-generation systems that will run on the upcoming Ivy Bridge and Xeon Phi
chips.
Between the computing technologies
from Intel and HP and the warm-water cooling capabilities, the new data center
will be among the most energy-efficient, according to Intel and NREL. Applying
the power usage effectiveness (PUE) standard, which divides a data center's
total facility power by the power delivered to its computing equipment, the
data center should earn a PUE rating of 1.06. The
ideal PUE rating is 1.0; an average data center PUE rating is about 1.92,
according to the Environmental Protection Agency.
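The PUE figures in the article can be read as a simple ratio. A minimal sketch, with kilowatt loads chosen purely for illustration (the article does not state NREL's actual loads):

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The kW figures below are illustrative assumptions, not NREL's actual loads.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of total facility power to power delivered to computing gear."""
    return total_facility_kw / it_equipment_kw

# At a PUE of 1.06, only 6 percent of power goes to overhead such as
# cooling, lighting and power delivery.
print(pue(1060.0, 1000.0))  # -> 1.06 (NREL's target)
print(pue(1920.0, 1000.0))  # -> 1.92 (the EPA's average data center)
```

The closer the ratio gets to 1.0, the less energy the facility spends on anything other than computation.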
The cooling will be done by sending
warm water into computing racks to absorb the heat coming from the system. The
water, which will run as high as 95 degrees Fahrenheit, will then be circulated to heat an
office and lab space next door, or to heat other parts of the NREL campus. The
design for the cooling system was created by people from Intel, HP and NREL.
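The amount of heat a warm-water loop like this can carry off follows from the standard relation Q = m · c · ΔT. A hedged sketch, where the flow rate and temperature rise are assumptions for illustration (the article states only how warm the water runs):

```python
# Rough estimate of heat removed by a water-cooling loop, using Q = m * c * dT.
# Flow rate and temperature rise are assumed values for illustration only.

FLOW_LPS = 10.0          # assumed coolant flow, liters per second
DELTA_T_C = 10.0         # assumed temperature rise across the racks, Celsius
SPECIFIC_HEAT = 4186.0   # specific heat of water, J/(kg*C)
DENSITY_KG_PER_L = 1.0   # density of water, kg per liter

heat_kw = FLOW_LPS * DENSITY_KG_PER_L * SPECIFIC_HEAT * DELTA_T_C / 1000
print(f"Heat carried away: {heat_kw:.0f} kW")  # ~419 kW under these assumptions
```

Because water has such a high specific heat, even a modest flow can move hundreds of kilowatts of waste heat, which is what makes reusing it to warm adjacent offices practical.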
System vendors have been using water
and other coolants for several years to help remove heat from data center
facilities. Water cooling tends to be more efficient than air cooling. Intel this week
announced it is conducting tests to determine whether servers can be submerged
in a mineral oil solution from Green Revolution Cooling.