Intel and Microsoft are setting aside $20 million to establish a pair of academic centers that will study and develop new methods to increase the use of parallel computing, the two companies announced March 18.
The Universal Parallel Computing Research Centers (UPCRCs) are being established at the University of California at Berkeley and the University of Illinois at Urbana-Champaign. In addition to the companies' $20 million, the two universities are slated to contribute several million dollars of their own to the centers through academic grants.
The two centers will focus on accelerating the adoption of parallel computing in both the development of applications and the next generation of hardware that will be built using multicore microprocessors.
IT giants Intel and Microsoft are turning to university research departments at a time when the industry is taking a serious look at parallel computing, which breaks a workload into smaller pieces that run simultaneously on multiple processing cores. The approach could help application developers take full advantage of the multicore x86 processors being developed by Intel and its main rival, Advanced Micro Devices.
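As a rough illustration of what that decomposition looks like in code, consider the following minimal C++ sketch (ours, not anything released by Intel or Microsoft): one large array sum is broken into two halves that run on separate threads, and the partial results are combined at the end.

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);  // stand-in workload: a million values
    std::size_t mid = data.size() / 2;
    double lo = 0.0, hi = 0.0;

    // Break one big sum into two smaller sums and run them on separate
    // threads, which the operating system can schedule onto separate cores.
    std::thread t1([&] { lo = std::accumulate(data.begin(), data.begin() + mid, 0.0); });
    std::thread t2([&] { hi = std::accumulate(data.begin() + mid, data.end(), 0.0); });
    t1.join();
    t2.join();

    std::cout << "sum = " << lo + hi << '\n';  // combine the partial results
}
```

Real workloads are rarely this tidy, which is precisely the research problem the two centers are being funded to attack.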
On March 17, Intel announced that it was moving forward with its Nehalem chip, which will use a new microarchitecture. The company said Nehalem processors will scale from two to eight cores and run two instruction threads per core.
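An eight-core Nehalem running two threads per core would present 16 hardware threads to the operating system. Portable software typically discovers that count at run time rather than hard-coding it; a minimal sketch using the standard C++ facility (nothing Nehalem-specific is assumed):

```cpp
#include <iostream>
#include <thread>

int main() {
    // Reports the number of concurrent hardware threads the platform
    // supports (roughly cores x threads per core); may return 0 if unknown.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "hardware threads available: " << n << '\n';
}
```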
It takes more than fast hardware
Andrew Chien, director of Intel Research, said during a conference call that his company began to realize in the early 2000s that it could no longer rely on increasing the clock speed of its processors to deliver additional application performance. Now the chip maker is trying to move the software industry toward applications that run in parallel, so they can take advantage of multicore processors running at more modest frequencies.
The problem is moving developers away from serial programming and into parallel programming, which is much harder to get right and an area where expertise is still scarce.
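Part of what makes parallel code harder is that bugs can be invisible in the source and intermittent at run time. The sketch below (again illustrative, not from either company) shows the classic pitfall: two threads incrementing a shared counter without synchronization lose updates, while an atomic counter does not.

```cpp
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    long racy = 0;               // plain shared counter: a deliberate bug
    std::atomic<long> safe{0};   // atomic shared counter: the fix

    auto work = [&] {
        for (int i = 0; i < 1000000; ++i) {
            ++racy;   // data race: this read-modify-write is not atomic,
                      // so increments from the two threads can be lost
            ++safe;   // atomic increment: never loses an update
        }
    };

    std::thread a(work), b(work);
    a.join();
    b.join();

    // "racy" typically prints less than 2000000; "safe" always prints 2000000.
    std::cout << "racy: " << racy << "  safe: " << safe << '\n';
}
```

A serial program simply cannot exhibit this class of bug, which is why retraining developers is as large a task as redesigning the chips.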
“We have ridden an increase in processor performance and scalability that has been driven by Moore’s Law, of course, and by frequency scaling and gigahertz scaling, and over the last few years, the whole industry has shifted to emphasis on scaling processor performance by the use of parallelism,” Chien said.
“The use of parallelism provides the promise of delivering much more energy-efficient computing capability,” Chien added. “It also appears that parallelism is the path forward to the unprecedented levels of performance that we need to keep delivering in order that this engine of growth and progress [keep] going.”
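The energy claim rests on a standard first-order model of CMOS dynamic power (roughly capacitance × voltage² × frequency), not on figures Intel disclosed here: because lowering frequency also permits lowering voltage, two cores at half speed can match one fast core's throughput at a fraction of the power. A back-of-the-envelope sketch under those assumptions:

```cpp
#include <iostream>

// First-order CMOS dynamic-power model: P is proportional to C * V^2 * f.
// Simplifying assumption (ours): supply voltage scales linearly with frequency.
double relative_power(double cores, double freq_scale) {
    double voltage_scale = freq_scale;
    return cores * voltage_scale * voltage_scale * freq_scale;
}

int main() {
    // Compare one core at full frequency with two cores at half frequency;
    // both deliver the same aggregate throughput (cores x frequency).
    std::cout << "one full-speed core:  " << relative_power(1.0, 1.0) << '\n'   // 1.0
              << "two half-speed cores: " << relative_power(2.0, 0.5) << '\n';  // 0.25
}
```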
The investment in parallel computing also comes as Microsoft moves to develop much more complex operating systems and applications. In an interview with Reuters earlier in March, Craig Mundie, Microsoft’s chief research and strategy officer, said the company is increasingly looking at parallel computing as a way to get greater performance from the software it is developing and the hardware that software will run on.
Tony Hey, Microsoft’s corporate vice president of External Research, offered one example of what parallel computing could enable: personal health care assistants that could tell users what is wrong with their health or which medication they need on a particular day.
Still no timeline
What Intel and Microsoft did not announce is when, specifically, they expect the industry to make this shift. While Intel is moving forward with delivering more cores in each new generation of chips, it was not clear from the announcement whether Microsoft is developing the compilers or a next-generation operating system that would make parallel computing easier.
Roger Kay, an analyst with Endpoint Technologies Associates, said he believes Intel is facing competitive pressure from the likes of AMD and Nvidia, both of which are working to raise awareness of parallel computing and how well it suits their processors.
“Intel is on the inexorable path toward many more cores, and they have already laid out the problem of how do you give employment to all those cores,” Kay said.
“At one point, more than a year ago … Intel told me that they were going to slow-roll their road map to a degree and that they were not going to bring all these cores out as fast as they were technically able because they couldn’t figure out how to program for eight cores and that would be an inhibitor to adoption,” Kay added. “They didn’t want to have silicon lying around and no one using it.”
The other obstacle, Kay said, is convincing enterprise buyers that they need to upgrade to systems that take full advantage of parallel computing. Some companies may worry that their older software will not run on servers and operating systems that have been optimized for parallel computing.
Marc Snir, professor of computer science at the University of Illinois, said he believes that getting programmers accustomed to parallel computing at the college level first is one way to bridge this gap.
“Every computer will be a parallel computer … and we must bring democracy to parallel computing [and] make every programmer a parallel programmer,” Snir said.