Sun Microsystems is looking to shine a light on its high-performance computing ambitions.
While IBM and its Roadrunner supercomputer have gained the lion’s share of attention when it comes to HPC (high-performance computing) in the past month, Sun is slowly mapping out a strategy that aims to increase its presence within academic institutions and in the commercial marketplace.
It began in 2007, when the company introduced its Constellation supercomputer, an HPC cluster design that promised to break a petaflop, or 1 quadrillion calculations per second. Sun then used that design to build a new HPC system at the TACC (Texas Advanced Computing Center) in Austin, Texas, which would serve as the first real-world test of Constellation's capabilities.
At the International Supercomputer Conference in Germany June 17 to 20, Sun detailed plans to offer a scaled-down version of the original Constellation design providing up to 7 teraflops (7 trillion calculations per second) in a single rack with a smaller switch. The system is aimed at businesses that need HPC capabilities without the price tag and complexity of a large-scale installation such as TACC's.
“People are embracing high-performance computing in a whole number of ways in the commercial market for a number of reasons,” John Fowler, Sun’s executive vice president for systems, told eWEEK during an interview.
“One interesting reason is that when times are economically challenging people actually get more interested because they can use simulation and analysis to more efficiently find answers to problems than they would by doing it the old-fashioned manual way,” Fowler said.
Sun’s ambitions for its HPC business are driven by more than just the bragging rights associated with getting a system listed in the Top 500. In a report issued earlier in 2008, IDC found the HPC market grew 15.5 percent in 2007, to $11.6 billion. This market is expected to reach $15 billion by 2011.
Still, Sun has a long way to go to prove it is a viable player in a market that includes IBM, Hewlett-Packard, Dell and old stalwarts such as Cray and SGI. In the past four years, IBM has dominated the market, first with its Blue Gene systems and now with Roadrunner.
While Sun reported that its TACC "Ranger" supercomputer had a peak performance of more than 500 teraflops, the Top 500 list released June 18 calculated its maximum performance at 326 teraflops using the Linpack benchmark. While that was enough to place Ranger in the list's top five, Ranger was far behind Roadrunner, which reached more than a petaflop of performance, and also behind some of IBM's Blue Gene systems.
In the interview, Fowler said Sun’s Constellation design remains capable of breaking the petaflop barrier, but he said the company had no plans or customers willing to install a system that large by the year’s end. That could change by 2009, although Fowler declined to discuss specifics on any future Sun deals or contracts.
“By 2009, I think there will be more than one system breaking a petaflop,” Fowler said.
Where Sun Plans to Shine
Where Sun does see an edge is in the commercial space, where it can deliver a modified Constellation system that supports an entire business (such as an automobile manufacturer looking to test cars with computer simulations) or just one department within an enterprise, for applications such as CAD or other computing-intensive projects.
“What has changed is that you can get relatively large amounts of computing power rather inexpensively, and so departments in many cases can have their own high-performance computing, whereas 10 or 15 years ago that was not reasonably feasible,” Fowler said.
While there are commodity parts within Constellation, such as Advanced Micro Devices’ quad-core Opteron processors, Fowler said Sun can deliver the other parts that enterprises need for supercomputing, including storage, management software and services.
One HPC component that he indicated Sun believes it can firmly stamp with its own logo is storage.
The company is continuing to develop Lustre, a shared file system that is used with some of the larger supercomputers in the world, including some of IBM’s systems. Fowler said Sun is also working to improve the data streams within these types of cluster HPC systems by shortening the time it takes to bring data off of a storage device, through a fabric and then into the processor.
Fowler said Sun is working toward creating a file system that can support up to 500GB per second. Right now, the TACC supercomputer has a file system that can handle about 80GB per second, so there is room for significant improvement.
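The gap between those two file-system rates can be made concrete with some back-of-the-envelope arithmetic. The sketch below is illustrative only; the 1-petabyte dataset size is a hypothetical example, not a figure from the article, while the 80GB-per-second and 500GB-per-second rates are those quoted above.

```python
# Illustrative arithmetic: time to stream a large working set at the
# file-system rates discussed in the article. The 1 PB dataset size is
# a hypothetical example chosen for the sketch.

PETABYTE = 10**15  # bytes (decimal, as storage vendors typically count)

def stream_time_seconds(dataset_bytes: int, rate_gb_per_s: float) -> float:
    """Seconds needed to read dataset_bytes at rate_gb_per_s (1 GB = 10^9 bytes)."""
    return dataset_bytes / (rate_gb_per_s * 10**9)

dataset = 1 * PETABYTE
today = stream_time_seconds(dataset, 80)    # TACC's ~80 GB/s file system
target = stream_time_seconds(dataset, 500)  # Sun's stated 500 GB/s goal

print(f"At 80 GB/s:  {today / 60:.1f} minutes")   # ~208 minutes
print(f"At 500 GB/s: {target / 60:.1f} minutes")  # ~33 minutes
```

Under these assumptions, hitting the 500GB-per-second target would cut the time to feed such a dataset into the cluster by more than a factor of six.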
“If you look at clusters today, the performance has increased enormously because of multicore CPUs. But the storage subsystems have not, and so there is this huge mismatch between how fast can the computer run versus how fast you can get data in and out. And so for us that has been the No. 1 thing we have been working on,” Fowler said.