Stanford's original goal was to place in the first 70 of the Top 500. However, after the system was built, Steve Jones, architect of the Stanford clustering project, said the best he hoped for was a spot in the first 200 in the benchmark. (The numbers are still being crunched, but eWEEK estimates that the Stanford system will come in at about 170.) Although low-cost computers can be used in a cluster, the network switching fabric has a significant impact on performance. Because of cost concerns, Jones was forced to use a 100BaseT (Fast Ethernet) network backbone instead of the far faster Gigabit Ethernet fabric. "The switching fabric has a huge impact on our placement for the Top 500 list," said Jones. "Due to costs, we sacrificed network speed in the beginning. Replacing the switching fabric will put us where we should be on the list."

So why did Jones and Stanford even want to participate in the benchmark test, knowing that the eventual upgrade to a new switching fabric would change its position so dramatically? Jones said running the benchmark helped tune the cluster, providing performance gains that are already benefiting Stanford scientists and researchers.

Clusters make only marginal sense in the enterprise right now, with specific instances in which they can be used. Reza Rooholamini, director of engineering for operating systems and clustering in Dell's product group and head of Dell's clustering group, said HPCCs are gradually moving out of academia and into the enterprise and that there are three main commercial areas of interest right now: oil exploration; bioinformatics; and the automotive industry, for use in crash test simulations. "The applications are typically technical applications," said Rooholamini, "but the organizations that use them are commercial, money-making businesses."
The fastest high-performance clustering interconnects are devices such as Myrinet, made by Myricom Inc. However, these interconnects are usually expensive: about $1,000 a pop (or $300,000 for a typical HPCC), plus $100,000 more for the Myrinet switch. This is too pricey for most academic concerns, but if Stanford had gone with Myrinet, it could easily have jumped up more than 100 places on the list. As it stands, Jones said he will most likely upgrade the switching fabric to Gigabit Ethernet by the fall and run the benchmark again in November.
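The article's figures imply a cluster of roughly 300 nodes ($300,000 at $1,000 per NIC). As a rough sketch, the total fabric cost scales linearly with node count plus a fixed switch cost; the node count and the function name below are illustrative, derived from the article's numbers rather than stated outright:

```python
def interconnect_cost(nodes: int, nic_price: int, switch_price: int) -> int:
    """Total fabric cost: one interconnect card per node plus the central switch.

    This is a back-of-envelope model based on the article's pricing,
    not a vendor quote.
    """
    return nodes * nic_price + switch_price


# ~300 nodes is implied by the article's $300,000 / $1,000-per-card figures.
NODES = 300

# Myrinet: ~$1,000 per card plus ~$100,000 for the switch (per the article).
myrinet_total = interconnect_cost(NODES, 1_000, 100_000)
print(f"Myrinet fabric for {NODES} nodes: ${myrinet_total:,}")  # $400,000
```

At roughly $400,000 for the fabric alone, it is easy to see why a university project would settle for commodity Fast Ethernet and plan an incremental upgrade to Gigabit Ethernet instead.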