IBM and Sun Microsystems are looking to bring supercomputing into the “petaflop” era.
The two IT giants will detail the specifics behind their new supercomputer systems to the audience attending the 2007 International Supercomputer Conference in Dresden, Germany, which kicks off June 26. Both systems promise to break the petaflop performance barrier; a petaflop equals 1 quadrillion calculations per second.
By contrast, IBM's Blue Gene/L system, which is installed at the Department of Energy's Lawrence Livermore National Laboratory in Livermore, Calif., offers 280.6 teraflops, or 280.6 trillion calculations per second, and sits atop the current Top 500 supercomputer list. Members of the ISC are expected to announce the updated Top 500 supercomputer list later this week, and Big Blue is expected to retain the top spot.
James Staten, principal analyst for IT infrastructure and operations research at Forrester Research, saw an early demonstration of Sun's new supercomputer, the Constellation, and called it “an impressive, very powerful system.”
However, Staten said, while companies such as Sun and IBM use events like the ISC to impress industry watchers and one another with their technical achievements, the market for such massive computing systems is small.
“It also represents a measure of one-upsmanship, to be sure,” Staten said. “All these companies (Sun, IBM, [Hewlett-Packard], Cray and others) are always trying to outdo themselves, and that's fine; that's capitalist business. Frankly, there aren't that many customers for computers like this anywhere around the world.”
These two supercomputers are not meant for companies and institutions with tight budgets. The Sun system will cost about $59 million, while the IBM supercomputer runs between $1.3 million and $1.7 million for each server in the system cluster.
For IBM, which is headquartered in Armonk, N.Y., this year's ISC will serve to introduce the company's next generation of supercomputers, the Blue Gene/P system, which will eventually replace the L system, said Herb Shultz, a product marketing manager for IBM's Deep Computing division.
“When we came out with the original Blue Gene system, IBM really gave the market something that there was no substitute for,” Shultz said. “With Blue Gene/P, we are giving entities like government labs, universities and large industrial customers a system that can run even longer simulations and do more to explore areas like nuclear physics, climate models and astronomical studies.”
The Blue Gene/P system promises three times the computing power of IBM's previous Blue Gene supercomputer, scaling from 1 petaflop up to 3.5 petaflops when fully configured with 256 server racks.
IBM will use its own Power Architecture with the Blue Gene/P system. Each Blue Gene chip will use four PowerPC 450 processing cores, offers a top clock speed of 850MHz and can perform 13.6 billion calculations per second. The current crop of Blue Gene chips is made up of dual-core parts with a clock speed of 700MHz.
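The 13.6 billion figure lines up with the quoted clock speed and core count if each core completes four floating-point operations per cycle; that per-cycle throughput is an assumption on our part, not a number from IBM, but it is the only rate that makes the arithmetic close. A rough sanity check:

```python
# Back-of-the-envelope check of the per-chip figure quoted above.
# Assumption (not stated in the article): each PowerPC 450 core
# can retire 4 floating-point operations per clock cycle.
CORES_PER_CHIP = 4
CLOCK_HZ = 850e6           # 850MHz top clock speed
FLOPS_PER_CYCLE = 4        # assumed per-core FPU throughput

chip_flops = CORES_PER_CHIP * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"{chip_flops / 1e9:.1f} billion calculations per second")  # prints 13.6
```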
The older and newer Blue Gene chips share the same thermal envelope, and the newer supercomputer offers greater performance while using about 20 percent more power, Shultz said.
The new Blue Gene chips also offer more memory and SMP (symmetric multiprocessing), which is designed to support multithreaded software applications. The new supercomputer also offers a new interface that will make writing applications for the system easier for developers. (The supercomputer's operating system is based on Linux.)
A typical Blue Gene/P system board will hold 32 microprocessors, and the average 6-foot server rack will hold 32 of these boards, giving the system more than 4,000 processing cores per rack.
A 72-rack Blue Gene/P system with 294,912 processing cores will achieve 1 petaflop of computing performance, Shultz said. A 216-rack cluster offers 3 petaflops of performance. At the ISC, IBM plans to share the benchmarks it achieved with a two-rack Blue Gene/P system, which should place the supercomputer at No. 30 on the Top 500 list.
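The rack and core counts IBM quotes are internally consistent. A short sketch, using only the figures given in the article (32 chips per board, 32 boards per rack, four cores per chip, 13.6 gigaflops per chip):

```python
# Reconstructing the Blue Gene/P configuration math from the article's figures.
CHIPS_PER_BOARD = 32
BOARDS_PER_RACK = 32
CORES_PER_CHIP = 4
CHIP_PEAK_FLOPS = 13.6e9   # IBM's per-chip peak, quoted earlier

cores_per_rack = CHIPS_PER_BOARD * BOARDS_PER_RACK * CORES_PER_CHIP
print(cores_per_rack)      # prints 4096 -- "more than 4,000" per rack

racks = 72
total_cores = racks * cores_per_rack
print(total_cores)         # prints 294912, matching the quoted core count

peak_pflops = racks * CHIPS_PER_BOARD * BOARDS_PER_RACK * CHIP_PEAK_FLOPS / 1e15
print(f"{peak_pflops:.2f} petaflops")  # prints 1.00 petaflops
```

The same math at 256 racks lands at roughly 3.5 petaflops, matching the fully configured figure given above.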
IBM also announced that it is working with four institutions to install the Blue Gene/P systems. The first of these supercomputers will be delivered to the U.S. Department of Energy's Argonne National Laboratory in Argonne, Ill.
Not to be outdone, Sun will debut its Constellation supercomputer in Dresden on June 26. While not as large as IBM's Blue Gene, it promises to deliver nearly 2 petaflops of performance.
The Constellation is the result of a collaboration between Sun and the Texas Advanced Computing Center at the University of Texas in Austin. It features 82 Sun Fire blade servers, two Sun Magnum ultra-dense switches, an InfiniBand host interface (with 288 ports), next-generation Mellanox HCAs (host channel adapters) and a Sun Fire X4500 storage cluster with 480TB per rack.
The core switch supports up to 3,456 nodes, and each custom rack supports 48 server modules, chief architect Andy Bechtolsheim said.
The Constellation also features Solaris, Linux, OpenMPI, Open InfiniBand interfaces and management, x64 Computing Architecture, and InfiniBand DDR interconnect. Its compute speed is estimated at 1.7 petaflops, and it will store up to 10 petabytes of data, Bechtolsheim said.
This will be the second Constellation system built by Sun, which constructed a similar system for Tokyo Tech last year.
The Constellation is expected to be one of, if not the, most powerful computing platforms in the world, Bechtolsheim, one of Sun's four co-founders in 1982, told a group of journalists and analysts in a preview session last week in Menlo Park, Calif.
“This will easily outperform any computer on the list right now,” Bechtolsheim said. “It's 20 times faster than any of them. But we have to make it a reality first.”
“We're hoping we can get this thing built and operational before the November Top 500 computer listing is made,” Bechtolsheim said. “But we're still waiting on the availability of the chips.”
The Constellation will run on Advanced Micro Devices' quad-core Opteron processors, dubbed “Barcelona,” which have not yet been released, Bechtolsheim said.
“Of course, we have our in-house chips, which we used to test capabilities, but we don't have the production-ready ones yet. [AMD is] supposed to be getting them to us very soon,” Bechtolsheim said.
“But what is good is that a lot of the technology and good ideas that go into these machines will eventually make it into our personal computers at some point. Sort of like NASA and all the technology it buys over the years.”