    Supercomputers for the Masses?

    By John Taschek - June 9, 2003

      Ten years ago, supercomputers were multimillion-dollar systems usually used for massive projects, such as modeling Earth's climate or nuclear reactions. Today, they are called HPCCs, or high-performance computing clusters, and they are basically free when built from spare PCs. More important, they are quickly becoming suitable for mainstream enterprise computing.

      HPCCs look completely different from traditional supercomputers: They are fan-cooled, not water-cooled, and they sit in racks and use off-the-shelf components. And while the inventor of supercomputers—Cray Research Inc.—may have cranked out only two or three computers a year a decade ago, companies including Dell Computer Corp., Red Hat Inc. and Microsoft Corp. are now building hundreds of postmodern supercomputers at a time.

      The changes in supercomputing can be seen most clearly in academia, where the New Age supercomputers are commonly used.

      eWEEK Labs recently visited Stanford University, in Stanford, Calif., which was setting up a 300-node cluster comprising Dell systems running Red Hat Linux. The goal is to use the cluster at Stanford's Bio-X—a massive, state-of-the-art facility funded predominantly by Jim Clark of Silicon Graphics Inc. fame. The role of Bio-X is to bring together the different sciences—including engineering, physics, medicine and biology—so researchers can better share resources, planning and data.

      Stanford began building its cluster late last year, with help from Dell and Intel Corp. By last month, it was tuning the cluster to compete for a spot on the Top 500 Supercomputers list, a directory of the most powerful supercomputers in the world published biannually by the University of Mannheim, in Germany; the University of Tennessee; and the National Energy Research Scientific Computing Center.

      Ironically, just a few years ago, the Top 500 list comprised mostly SGI systems based on Cray technology. Now, there is just a sprinkling of Crays in the mix.

      The fastest Cray on the list is at No. 39, clocking in at 1,166 gigaflops—nearly a thousand times faster than a Cray Y-MP circa 1988. Interestingly for the enterprise, the performance of the No. 39 Cray system, which is used by the government for unknown but probably defense-related modeling, is dwarfed by systems running Red Hat Linux that are far less expensive to build and operate. The fastest Linux cluster, run by the Lawrence Livermore National Laboratory, clocks in at nearly 6,000 gigaflops.


      Stanford's original goal was to place in the top 70 of the Top 500. However, after the system was built, Steve Jones, architect of the Stanford clustering project, said the best he hoped for was a spot in the top 200. (The numbers are still being crunched, but eWEEK estimates that the Stanford system will come in at about 170.)

      Although low-cost computers can be used in a cluster, the network switching fabric has a significant impact on performance. Because of cost concerns, Jones was forced to use a 100BaseT (Fast Ethernet) network backbone instead of the far-faster Gigabit Ethernet fabric. “The switching fabric has a huge impact on our placement for the Top 500 list,” said Jones. “Due to costs, we sacrificed network speed in the beginning. Replacing the switching fabric will put us where we should be on the list.”

      The fastest high-performance clustering interconnects are devices such as Myrinet, made by Myricom Inc. However, these interconnects are usually expensive—about $1,000 a pop (or $300,000 for a typical HPCC) and $100,000 more for the Myrinet switch. This is too pricey for most academic concerns, but if Stanford had gone with Myrinet, it could easily have jumped up more than 100 places on the list. As it stands, Jones said he will most likely upgrade the switching fabric to Gigabit Ethernet by the fall and run the benchmark again in November.
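
      As a rough back-of-envelope check on those figures, here is a quick calculation using only the numbers quoted above and assuming a 300-node cluster like Stanford's (a sketch; the article does not define a "typical HPCC" precisely):

          # Back-of-envelope interconnect cost using only figures quoted above.
          # Assumes a 300-node cluster like Stanford's; prices are circa 2003.
          nodes = 300
          myrinet_nic_per_node = 1_000   # "about $1,000 a pop"
          myrinet_switch = 100_000       # "$100,000 more for the Myrinet switch"

          myrinet_total = nodes * myrinet_nic_per_node + myrinet_switch
          print(f"Myrinet fabric for {nodes} nodes: ${myrinet_total:,}")  # $400,000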

      So why did Jones and Stanford even want to participate in the benchmark test, knowing that the eventual upgrade to a new switching fabric would change its position so dramatically? Jones said running the benchmark helped tune the cluster, providing performance gains that are already benefiting Stanford scientists and researchers.

      Clusters make only marginal sense in the enterprise right now, and only for specific kinds of workloads. Reza Rooholamini, director of engineering for operating systems and clustering in Dell's product group and head of Dell's clustering group, said HPCCs are gradually moving out of academia and into the enterprise, and that there are three main commercial areas of interest right now: oil exploration; bioinformatics; and the automotive industry, for crash test simulations. “The applications are typically technical applications,” said Rooholamini, “but the organizations that use them are commercial, money-making businesses.”


      Rooholamini, based in Round Rock, Texas, pointed to some problems with running enterprise applications on clusters. For one thing, it is difficult to run database applications—the core of enterprise computing—on a cluster because of issues related to distributed queries. IBM and Oracle Corp. have cluster-capable databases (IBM's DB2 and Oracle's Oracle9i RAC), but it is not an easy task to set them up, said Rooholamini. And, generally speaking, it is nearly impossible to take existing applications and recompile them as message-passing applications that take advantage of computing clusters.
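
      To illustrate what rewriting an application for message passing actually involves, here is a minimal sketch in Python using the mpi4py bindings (an illustrative choice; neither the language nor the library is mentioned by Rooholamini). Each process works on its own slice of the data, and partial results must be combined through an explicit communication step; nothing is shared automatically between nodes, which is why existing applications cannot simply be recompiled for a cluster.

          # Minimal message-passing sketch (illustrative; mpi4py is assumed,
          # not a tool named in the article). Run with, e.g.:
          #   mpirun -n 4 python partial_sum.py
          from mpi4py import MPI

          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()   # this process's index within the job
          size = comm.Get_size()   # total number of cooperating processes

          N = 1_000_000
          # Each process sums only its own stride of the range...
          local_sum = sum(range(rank, N, size))

          # ...and the partial sums are combined with an explicit reduce;
          # the data never lives in one shared memory space.
          total = comm.reduce(local_sum, op=MPI.SUM, root=0)

          if rank == 0:
              print("total =", total)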

      It may be easier to use the so-called grid engines to distribute application workloads. Microsoft, for one, is working to combine the grid and clustering technologies. Greg Rankich, product manager, Windows Server Product Management Group, in Redmond, Wash., said Microsoft is focusing at least some of its efforts on distributing business solutions—in part because of customer demand for consolidation, but also to take advantage of spare CPU cycles.
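
      The contrast with a tightly coupled cluster is that a grid engine mostly farms out independent units of work to whatever spare machines are available. A minimal sketch of that idea (purely illustrative; it uses Python's standard concurrent.futures pool as a stand-in for a grid engine, not anything from Microsoft's product line):

          # Illustrative work-distribution sketch: independent jobs are farmed
          # out to spare workers and the results collected. A local process
          # pool stands in for grid nodes; no real grid engine is used here.
          from concurrent.futures import ProcessPoolExecutor

          def simulate(job_id):
              # Stand-in for an independent unit of work, such as one scenario
              # in a crash-test or reservoir simulation.
              return job_id * job_id

          if __name__ == "__main__":
              jobs = range(100)
              with ProcessPoolExecutor() as pool:   # spare CPUs stand in for grid nodes
                  results = list(pool.map(simulate, jobs))
              print(sum(results))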

      Rooholamini agreed that grid and HPCC technologies are merging. “We look at grid [computing] as an evolution of HPCCs,” said Rooholamini. “From a technology perspective, if I design my HPCC and distribute my compute nodes across a larger geography, then I am solving and incorporating things that are necessary for a grid. If I can eliminate latencies in the grid, then I have tackled some of the obstacles.”

      However, Rooholamini estimates it will be another three years before we see grid-aware applications.

      Labs Director John Taschek can be reached at john_taschek@ziffdavis.com.

      John Taschek
      As the director of eWEEK Labs, John manages a staff that tests and analyzes a wide range of corporate technology products. He has been instrumental in expanding eWEEK Labs' analyses into actual user environments, and has continually engineered the Labs for accurate portrayal of true enterprise infrastructures. John also writes eWEEK's 'Wide Angle' column, which challenges readers interested in enterprise products and strategies to reconsider old assumptions and think about existing IT problems in new ways. Prior to his tenure at eWEEK, which started in 1994, Taschek headed up the performance testing lab at PC/Computing magazine (now called Smart Business). Taschek got his start in IT in Washington D.C., holding various technical positions at the National Alliance of Business and the Department of Housing and Urban Development. There, he and his colleagues assisted the government office with integrating the Windows desktop operating system with HUD's legacy mainframe and mid-range servers.