Supercomputers for the Masses?

 
 
By John Taschek  |  Posted 2003-06-09

Labs On-Site at Stanford: PC cluster shows enterprise potential.

Ten years ago, supercomputers were multimillion-dollar systems typically used for massive projects, such as modeling Earth's climate or nuclear reactions. Today, they are called HPCCs, or high-performance computing clusters, and they can be built essentially for free from spare PCs. More important, they are quickly becoming suitable for mainstream enterprise computing.

HPCCs look completely different from traditional supercomputers: They are fan-cooled, not water-cooled, and they sit in racks and use off-the-shelf components. And while Cray Research Inc., the company whose name became synonymous with supercomputing, cranked out only two or three machines a year a decade ago, companies including Dell Computer Corp., Red Hat Inc. and Microsoft Corp. are now building hundreds of these postmodern supercomputers at a time.

The changes in supercomputing can be seen most clearly in academia, where the New Age supercomputers are commonly used.

eWEEK Labs recently visited Stanford University, in Stanford, Calif., which was setting up a 300-node cluster comprising Dell systems running Red Hat Linux. The goal is to use the cluster at Stanford's Bio-X—a massive, state-of-the-art facility funded predominantly by Jim Clark of Silicon Graphics Inc. fame. The role of Bio-X is to bring together the different sciences—including engineering, physics, medicine and biology—so researchers can better share resources, planning and data.
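
Programming such a machine differs from programming a traditional shared-memory Cray: on a Beowulf-style Linux cluster, work is typically divided among nodes with a message-passing library such as MPI. The following sketch is purely illustrative and is not drawn from Stanford's own codes; it assumes a standard MPI installation and shows only how each copy of a program learns its place in a multinode job.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);               /* join the cluster-wide job */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID within the job */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total copies running, e.g. 300 */
        printf("Process %d of %d checking in\n", rank, size);
        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }

Launched with a command such as mpirun -np 300 ./a.out, one copy runs per node, and a real workload would use the rank to decide which slice of the data each node computes.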

Stanford began building its cluster late last year, with help from Dell and Intel Corp. By last month, it was tuning the cluster to compete for a spot on the Top 500 Supercomputers list, a twice-yearly ranking of the world's most powerful supercomputers published by the University of Mannheim, in Germany; the University of Tennessee; and the National Energy Research Scientific Computing Center.

Ironically, just a few years ago, the Top 500 list comprised mostly SGI systems based on Cray technology. Now, there is just a sprinkling of Crays in the mix.

The fastest Cray on the list is at No. 39, clocking in at 1,166 gigaflops—nearly a thousand times faster than a Cray Y-MP circa 1988. Interestingly for the enterprise, the performance of the No. 39 Cray system, which is used by the government for unknown but probably defense-related modeling, is dwarfed by systems running Red Hat Linux that are far less expensive to build and operate. The fastest Linux cluster, run by the Lawrence Livermore National Laboratory, clocks in at nearly 6,000 gigaflops.
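
As a rough check on those numbers, the ratios the article implies work out as follows. The Cray Y-MP baseline of about 1.2 gigaflops is an assumption back-figured from the "nearly a thousand times" comparison, not a published benchmark result.

    #include <stdio.h>

    int main(void) {
        const double cray_no39_gflops  = 1166.0; /* No. 39 Cray on the Top 500 list */
        const double llnl_linux_gflops = 6000.0; /* "nearly 6,000" for the LLNL Linux cluster */
        const double cray_ymp_gflops   = 1.2;    /* assumed circa-1988 Y-MP figure */

        printf("No. 39 Cray vs. Y-MP: about %.0fx\n",
               cray_no39_gflops / cray_ymp_gflops);   /* prints roughly 970x */
        printf("LLNL Linux cluster vs. No. 39 Cray: about %.1fx\n",
               llnl_linux_gflops / cray_no39_gflops); /* prints roughly 5.1x */
        return 0;
    }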



 
 
 
 
As the director of eWEEK Labs, John manages a staff that tests and analyzes a wide range of corporate technology products. He has been instrumental in expanding eWEEK Labs' analyses into actual user environments and has continually engineered the Labs to accurately portray true enterprise infrastructures. John also writes eWEEK's 'Wide Angle' column, which challenges readers interested in enterprise products and strategies to reconsider old assumptions and think about existing IT problems in new ways. Prior to his tenure at eWEEK, which started in 1994, Taschek headed up the performance testing lab at PC/Computing magazine (now called Smart Business). Taschek got his start in IT in Washington, D.C., holding various technical positions at the National Alliance of Business and the Department of Housing and Urban Development. There, he and his colleagues helped the agency integrate the Windows desktop operating system with HUD's legacy mainframe and midrange servers.