NASA is planning to use IBM's iDataPlex server array to build a new supercomputer that will help the federal space agency collect data from satellites that are observing both the Earth and deep space. When the high-performance cluster is complete, this NASA supercomputer will offer a top performance of 42 teraflops. To build this supercomputer, IBM will use Intel's quad-core Xeon processors.
NASA is turning to IBM and its iDataPlex server array to build a new supercomputer that will execute 42 teraflops and help collect data from a series of satellites that observe both the Earth and the universe.
On Sept. 23, IBM is planning to announce that it will build the new supercomputer at the NASA Center for Computational Science (NCCS) in Maryland. When complete, this cluster-style
supercomputer based on the iDataPlex array will offer a top performance of 42
teraflops and use 1,024 quad-core Intel Xeon processors. NASA is then planning
to combine this new system with the existing Discovery supercomputer at NCCS,
which will then offer a combined performance of 67 teraflops.
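The cited figures imply a few back-of-the-envelope numbers. The sketch below simply assumes the teraflop ratings add linearly, as the article's combined figure suggests; it is arithmetic on the reported specs, not a benchmarked result.

```python
# Figures reported in the article (simple arithmetic, not benchmarks).
new_system_tf = 42.0     # peak performance of the new iDataPlex cluster, in teraflops
combined_tf = 67.0       # combined NCCS figure after merging with the existing system
cores = 1024 * 4         # 1,024 quad-core Intel Xeon processors

# Contribution implied for the existing Discovery system,
# assuming the ratings are simply additive.
existing_tf = combined_tf - new_system_tf

# Rough per-core throughput of the new cluster, in gigaflops.
gflops_per_core = new_system_tf * 1000 / cores

print(f"Implied existing-system performance: ~{existing_tf:.0f} teraflops")
print(f"Implied throughput per core: ~{gflops_per_core:.1f} gigaflops")
```

By this reckoning the existing system accounts for roughly 25 teraflops, and each Xeon core contributes on the order of 10 gigaflops of peak throughput.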
In addition to collecting a range of data from observational
satellites, the new supercomputer cluster is slated to help NASA with its
climate and weather modeling as well as creating simulations to explain cosmic
phenomena such as black holes.
IBM is known for its range of supercomputers, including the well-known Blue Gene systems and the Roadrunner system at Los Alamos National Laboratory in New Mexico, which broke the petaflop barrier earlier this year.
With clusters built around the iDataPlex system, IBM uses more industry-standard hardware, including Intel processors and InfiniBand interconnects.
When IBM first announced iDataPlex, Big Blue presented the array as a way for companies to use x86-based hardware to build data centers that can support a cloud computing environment. However, IBM has announced a series of
contracts with companies that will put iDataPlex to use as a supercomputer.
In addition to NASA, the University
of Toronto has contracted with IBM
to build Canada's
most powerful supercomputer using iDataPlex, which will create a system that
offers a performance of 360 teraflops. Microsoft is also using a system based on iDataPlex to test its HPC (high-performance computing) operating system, Windows HPC Server 2008.
In the case of NASA, Herb Schultz, a marketing manager with IBM's
Deep Computing division, said the NASA requirements for this supercomputer are similar
to the needs of those companies developing their own cloud infrastructure or
building out a business based on Web 2.0 technology.
"If you look at HPC
requirements and the emerging requirements of Web 2.0, whether it's social
networking or gaming, there are a lot of similarities and a lot of it has to do
with being able to scale out in a cost-effective manner," said Schultz. "They
also need to manage their power and cooling constraints and they need something
that is incrementally scalable and something that is standard and running Windows
or Linux on an x86 platform ... For the kind of work NASA needs, it's not too
different than what we first talked about when we introduced iDataPlex."
At the same time, IBM had
to take into account NASA's requirements for power and cooling as well as fitting
the computer into a constricted space.
"When it comes to supercomputers, it used to be performance,
price performance and can you run my codes," said Schultz. "Now, it's basically
customers are asking that the system works underneath a certain power
consumption threshold or they ask what are your flops per watt and can I put
this in here without rearranging my data center."
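The flops-per-watt question Schultz describes can be expressed as a simple ratio. The power figure below is hypothetical, chosen only to illustrate the calculation; the article does not give actual consumption numbers for the NASA cluster.

```python
def flops_per_watt(peak_teraflops, power_kw):
    """Rate a system by peak compute delivered per watt of power drawn."""
    return (peak_teraflops * 1e12) / (power_kw * 1e3)

# A 42-teraflop cluster drawing a hypothetical 200 kW would deliver
# 210 million floating-point operations per second per watt.
efficiency = flops_per_watt(42.0, 200.0)
print(f"{efficiency / 1e6:.0f} megaflops per watt")
```

Framing efficiency this way lets a buyer compare systems against a fixed power-consumption threshold rather than on raw performance alone.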
With the iDataPlex cluster for NASA, IBM
is offering a number of power and cooling technologies including the company's
Rear Door Heat Exchanger, a door-mounted array of sealed tubes filled with chilled water that cools the heated exhaust air as it leaves the servers. IBM
also rotated the racks within a typical iDataPlex array cabinet, which creates
an environment that is wide but not deep, and allows the company to squeeze
more servers into the system while leaving room for switches.
The new NASA supercomputer will also take advantage of IBM's
xCAT (Extreme Cluster Administration Toolkit) management software and the
company's General Parallel File System, which will allow the cluster to create
and maintain large file sets based on the amount of data the cluster can process.