Stanford Builds Parallel Computing Lab

By Scott Ferguson  |  Posted 2008-04-30
Stanford University is planning to delve into the world of parallel computing.

On May 2, the university, along with some of the world's largest IT companies, plans to unveil the Pervasive Parallelism Lab, which looks to develop new ways to create applications that can take advantage of the ever-increasing number of multicore processors coming into the marketplace.

The lab, which will have a $6 million budget over the next three years, will look not only for ways to develop new programming languages that make it easier to create applications for parallel computing (breaking down work into smaller parts to take advantage of multiple processing cores), but also to design the hardware to house these new multicore processors.

The companies supporting the new center have a lot to gain from software that runs in parallel. Intel, Advanced Micro Devices and Nvidia, three companies that have each been bringing more multicore chips to market every year, have agreed to support the lab.

In addition, IBM, Hewlett-Packard and Sun Microsystems have agreed to contribute.

The announcement from Stanford comes after Intel and Microsoft said in March that they would jointly contribute $20 million to establish centers dedicated to creating new applications and easier methods for programming in parallel. These UPCRCs (Universal Parallel Computing Research Centers) are being established at the University of California at Berkeley and the University of Illinois at Urbana-Champaign.

Breaking down information

The entire IT industry is taking a serious look at how parallel computing can assist application developers working to take full advantage of the multicore x86 processors being developed by Intel and AMD.

Nvidia is developing GPUs (graphics processing units) that have multiple cores and require applications that work in parallel to harness the full power of the graphics chip.

Instead of increasing the clock speed of each new generation of chip, Intel, AMD, Nvidia and other chip makers have turned to multicore designs to increase performance. The problem now is moving developers from serial programming to parallel programming, which is much harder and an area where expertise is still scarce.
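To give a sense of the shift the article describes, the following sketch shows the same computation written serially and then divided among worker processes, one per core. The task function and inputs are illustrative, not drawn from the lab's work; this uses Python's standard `multiprocessing` module as one of many ways to express the idea.

```python
from multiprocessing import Pool


def work(n):
    """A CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))


def serial(tasks):
    # Serial programming: one core handles every task in turn.
    return [work(n) for n in tasks]


def parallel(tasks):
    # Parallel programming: the same tasks are broken down and
    # distributed across a pool of worker processes, one per core.
    with Pool() as pool:
        return pool.map(work, tasks)


if __name__ == "__main__":
    tasks = [200_000] * 8
    # Both versions produce identical results; on a multicore
    # machine, the parallel version can finish sooner.
    print(serial(tasks) == parallel(tasks))
```

The difficulty the article points to is that most real applications are not this cleanly divisible: tasks share state, depend on one another's results, and must be coordinated, which is where parallel programming gets hard.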

"Right now, there are tens of millions of multicore chips that are being underutilized and what they are creating here makes a lot of sense," said John Spooner, an analyst with Technology Business Research who follows the processor industry. "You need developers out there writing software that can suck up all the performance that multicore processors offer."

Universities such as Stanford and the University of California are where the next generation of developers can be trained to develop these new programming languages.

"Parallel programming is perhaps the largest problem in computer science today and is the major obstacle to the continued scaling of computing performance that has fueled the computing industry, and several related industries, for the last 40 years," Bill Dally, chair of the Computer Science Department at Stanford, said in a statement.

While companies like Intel are ready to move into a parallel world, it's not clear whether businesses are ready to leave the old ways behind. Many companies run systems and applications written as serial programs, and some IT observers believe these businesses may not want to make a switch that could render mission-critical applications obsolete.
