Nvidia is looking to bring the power of a supercomputer cluster to the desktop.
At the 2008 Supercomputing Conference in Austin, Texas, Nvidia will demonstrate a new HPC (high-performance computing) design that will allow OEMs to pack two to four Nvidia GP-GPUs (general-purpose graphics processing units) into a workstation form factor.
Nvidia executives are scheduled to discuss the new HPC design Nov. 18.
This new HPC design, which Nvidia is calling the “Personal Supercomputer,” is the latest effort by Nvidia to bring its graphics technology into the supercomputing and high-performance computing markets. While most HPC clusters and supercomputers are powered by conventional CPUs, Nvidia is betting that its GP-GPUs can offer the performance that scientists, researchers and other workers in the HPC market need to run massive workloads.
Unlike a traditional CPU, a GP-GPU contains hundreds of smaller stream processing cores, which allow an application’s threads to run in parallel. Once the data is broken down into small enough pieces, the GP-GPU delivers higher throughput and better performance without relying on higher clock speeds to make the application run faster.
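As a rough illustration of that execution model, a CUDA kernel hands each lightweight GPU thread one small piece of the data. The sketch below is hypothetical and is not code Nvidia has published; it simply shows how work is fanned out across the GPU’s cores rather than sped up by a faster clock.

```cuda
// Illustrative sketch: each GPU thread processes one array element, so the
// computation is spread across hundreds of stream processing cores rather
// than being accelerated by a higher clock speed.
__global__ void add_arrays(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global index of this thread
    if (i < n)
        out[i] = a[i] + b[i];                       // one small piece of the work
}
```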
So far, the market for GP-GPUs remains a niche part of the overall HPC market. Nvidia has taken the lead with its line of Tesla GP-GPUs, but Advanced Micro Devices has also entered this market with its line of FireStream GPUs. In the next 18 months, Intel is also expected to enter this market with a processor called “Larrabee,” although that product will be based on conventional processing cores.
In addition to its Tesla products, Nvidia has developed a compiler and a set of development tools called Compute Unified Device Architecture, or CUDA, which allow application developers to use a variant of the C programming language to program a GPU much like a CPU.
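In practice, the host side of a CUDA program reads much like ordinary C, compiled with Nvidia’s nvcc compiler. The following minimal sketch is an assumption-laden example rather than anything from Nvidia’s announcement: it allocates GPU memory with the standard CUDA runtime API, launches a trivial kernel and copies the result back, with invented names such as double_elements used purely for illustration.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// A trivial kernel: each GPU thread doubles one element of the array.
__global__ void double_elements(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    float *h_data = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        h_data[i] = 1.0f;

    // Allocate device memory and copy the input to the GPU.
    float *d_data;
    cudaMalloc(&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    double_elements<<<blocks, threads>>>(d_data, n);

    // Copy the result back to the host and check one value.
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
    printf("h_data[0] = %f\n", h_data[0]);  // expect 2.0

    cudaFree(d_data);
    free(h_data);
    return 0;
}
```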
Rob Enderle, an analyst with the Enderle Group, said that while the market for GP-GPUs in high-performance computing remains a small part of this space, Nvidia has made significant strides in this area and has shown that it can attract developers, and now OEMs, to its products.
“There are a lot of demands right now for supercomputers and this type of design won’t replace supercomputers as much as reduce the lines of people waiting to use one,” said Enderle. “Right now, Nvidia is ahead [of AMD and Intel] but it’s still an emerging market and we are still right at the cutting edge in regard to where this is going.”
While Nvidia and AMD are focused mostly on selling these types of GP-GPUs to research institutions and universities, Enderle said that both these companies are looking to expand into other markets that include aerospace design and medical imaging.
The Nvidia HPC design is based on the company’s Tesla C1060 Computing Processor, which is made up of 240 stream processing cores. Each Tesla C1060 offers 4GB of dedicated memory and 933 gigaflops of single-precision floating-point performance. OEMs building workstations on the Nvidia design can configure them with two, three or four of these Tesla C1060 GP-GPUs.
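For a sense of how software would see such a multi-GPU workstation, the hypothetical snippet below uses the CUDA runtime to enumerate the installed devices; an application would then typically assign a slice of its problem to each GPU, for example with one host thread per device calling cudaSetDevice. This is a sketch under those assumptions, not Nvidia’s reference code.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// List the CUDA-capable GPUs in the workstation and print basic properties.
int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA-capable devices found: %d\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s, %d multiprocessors, %.0f MB memory\n",
               dev, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}
```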
Several PC vendors have lined up to offer new workstations based on the Nvidia design, including Dell, Lenovo and Asus. Although Nvidia did not provide a specific price, Andy Keane, the general manager of GPU Computing at Nvidia, said these workstations would be priced at less than $10,000. Some large research facilities, such as the University of Illinois at Urbana-Champaign, have already begun experimenting with these types of individual workstations.
While the Nvidia supercomputer workstation design does offer the convenience of working at a desk, Keane said these types of desktops would not replace more traditional HPC clusters and supercomputers any time soon. Nvidia’s vision is to allow researchers and others to move back and forth between an HPC cluster and their desktops to reduce the amount of time it takes to work on problems and crunch data.
“Right now, they have to do their work on the cluster,” said Keane. “A lot of researchers have regular notebook computers and then they write the code on their notebook and then they have to go to the cluster and deploy the code on the cluster. What a lot of people now realize is that they can get the efficiency and performance with a workstation that has multiple GPUs. They are still going to have to use both, but a lot of the research can be done on the desktop.”