Alternative Supercomputer Metrics Sought

By Mark Hachman  |  Posted 2004-11-08

Two alternative methods of benchmarking supercomputers are nearing completion, even as the most popular performance metric is being applied to generate a new list of the world's most powerful systems.

The latest revision of the Top500 list of supercomputers is expected to be released late Monday, before the SC2004 supercomputing show kicks off in Pittsburgh. NASA's SGI-based "Columbia" supercomputer and the IBM BlueGene/L supercomputer at the University of California's Lawrence Livermore National Laboratory are likely candidates to sit atop the rankings, although the 65-teraflop performance of NEC's recently announced SX-6 could top both of them once NEC administers the Top500 benchmark, known as Linpack.

The trouble with the list, critics say, is that it tracks results from only a single test, which is generally considered an imprecise assessment of a system's performance. To address Linpack's limitations, two other initiatives have surfaced: the HPC Challenge benchmark, which has begun generating a significant sample of results, and ApexMap, whose code is now available for public download. The HPC Challenge data is currently in a "0.7 beta" stage; the final version should be released early in 2005, according to Jack Dongarra, the author of Linpack and one of the coordinators of the HPC Challenge effort.
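As a rough illustration of what the Linpack number captures, the sketch below times a dense linear solve and converts the elapsed time into a floating-point rate, using the 2/3·n³ + 2n² operation count that Linpack conventionally credits to the solver. This is a minimal sketch in Python with NumPy at an arbitrarily chosen problem size, not the tuned HPL code that Top500 submissions actually run.

    import time
    import numpy as np

    # Time the solution of a dense n-by-n system Ax = b, the core task
    # the Linpack benchmark measures. (Illustrative only; real Top500
    # runs use the HPL code on far larger, distributed problems.)
    n = 4000  # arbitrary problem size for this sketch
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    # Linpack credits the solver with 2/3*n^3 + 2*n^2 floating-point ops.
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    print(f"{flops / elapsed / 1e9:.2f} Gflop/s in {elapsed:.2f} s")

Because everything reduces to a single dense factorization, the resulting number rewards raw floating-point throughput over memory or network behavior, which is precisely the narrowness critics cite.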
Other, lesser-known metrics are also available. The Top Application Performers list at Purdue University uses the Standard Performance Evaluation Corporation's HPC2002 benchmark, which measures realistic application workloads, but the list appears not to have been updated in some time. The Defense Advanced Research Projects Agency's (DARPA) Information Processing Technology Office is also involved in the creation of performance metrics, although the agency is working toward developing measurements of supercomputer deployment and execution, according to Bob Graybill, a program manager there.
Above all, the Top500 list's most polarizing feature is its single benchmark, and such a limited measurement of performance has drawn criticism. On the other hand, the list has been circulating since 1986, providing a historical timeline for tracking performance. Moreover, the code is known and understood.

"It's invaluable," said David Bailey, chief technologist at Lawrence Berkeley National Laboratory's Computational Research Laboratory in Berkeley, Calif. "You have so much data with it. You can actually see trends, technology changes in vector computers."

The Linpack benchmark also serves as a stress test for a supercomputer, as it "exercises everything in the system," said David Barkai, a high-performance computing (HPC) architect at Intel Corp., of Santa Clara, Calif.

"The revelation for me was that running Linpack for us was a great diagnostic tool," said Walt Brooks, chief of the NASA Advanced Supercomputing (NAS) Division in Mountain View, Calif., who oversaw the "Columbia" cluster's development. Brooks said he had used the Linpack code to test the InfiniBand connections between the nodes, which his team had pushed "a little beyond" the limits of the specification.

Brooks said the agency plans to announce soon that it has tied together four of Columbia's SGI servers, each containing 512 Intel Itanium 2 "Madison 9M" processors, into a 2,048-processor cluster that is running fluid-dynamics code used to simulate the world's oceans.