Intel Opens Parallel Computing Centers for Exascale Push

The chip maker is not only looking to drive HPC hardware improvements, but also seeking programming partners for software development.

Intel officials for several years have been talking about getting the industry to the exascale level of computing by the end of the decade, and have been making moves to reach that goal, from introducing the company's many-core Xeon Phi coprocessors to buying QLogic's InfiniBand business.

Now the giant chip maker is looking to bring in more partners on the exascale initiative by opening Parallel Computing Centers around the world and sending out requests for other companies to collaborate on the effort.

The push to exascale computing will take not only hardware, from processors to servers, but also optimizing today's applications and creating new software that can leverage both individual computing nodes and entire systems, and that works at all levels, from workstations to high-end supercomputers, according to Raj Hazra, vice president in Intel's Datacenter and Connected Systems Group and general manager of the company's Technical Computing Group.

"Through these centers, Intel hopes to accelerate the creation of open standard, portable, scalable, parallel applications by combining computational science, hardware, programmer tools, compilers, and libraries, with domain knowledge and expertise," Hazra wrote in an Oct. 22 post on Intel's blog.

Intel and other tech vendors, as well as research institutions and other organizations, are pushing to get beyond petascale computing and to the exascale level, a thousand-fold increase over petascale and a level that they hope can be reached by 2018. The U.S. government also is throwing its weight behind the exascale push with a $126 million allocation. Exascale computing would have far-ranging impacts on such compute-intensive fields as engineering, biology, oil and gas, and national security.

Intel officials in November 2012 introduced the first Xeon Phi coprocessors, designed to ramp up the performance of high-performance computing (HPC) systems without driving up power consumption, much as some servers pair GPUs from Nvidia or Advanced Micro Devices with the main processor. A key role of such coprocessors and accelerators is enabling or speeding up parallel workloads, in which a large job is split into smaller ones whose calculations run simultaneously.
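The split-and-compute pattern behind parallel workloads can be sketched in a few lines. The example below is purely illustrative (not Intel code), using Python's standard concurrent.futures module to divide one large summation into chunks that are computed simultaneously:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Compute the sum of squares over one chunk of the range."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into equal chunks and sum them in parallel."""
    step = n // workers
    # Last chunk absorbs any remainder so the full range is covered.
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

The parallel result matches a plain serial loop; in real HPC codes the same decomposition is typically expressed with OpenMP, MPI or similar tools, and the hard work, which the new centers aim to tackle, is restructuring existing applications so they decompose this way at all.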

Intel's new Parallel Computing Centers will help get workloads ready for parallel computing, according to James Reinders, director of parallel programming evangelism at Intel.

"The centers represent investments that I think of as digging into code to help make real applications more prepared to use parallel computing," Reinders wrote in a post on Intel's blog. "Parallel computing challenges are about enabling the future of computing not just tuning for one hardware direction or another. That's the challenge that these centers are taking on."

For Intel, a foundation of its parallel computing efforts is what officials call "neo-heterogeneous computing," the idea that HPC environments will be heterogeneous, using both processors and coprocessors or accelerators. With Xeon Phi, Intel provides heterogeneity in hardware, but because the coprocessors are based on the x86 architecture, they offer a programming model and languages common to Intel's x86 processors.

"The need for neo-heterogeneous computing is enormous," Reinders wrote. "It combines the promise of heterogeneous computing to deliver better compute density, compute performance and lower power consumption, while including the benefits of neo-heterogeneous computing to maintain programming flexibility, performance and efficiency for developers."

This is an advantage over GPUs from Nvidia or AMD, which require rewriting some software to run on the accelerators, according to Intel. However, officials with both companies have said the amount of recoding for GPU accelerators is minimal and not a real obstacle to their use.

According to Intel's Hazra, the first five Parallel Computing Centers will be at Cineca in Italy, Purdue University, Texas Advanced Computing Center (TACC) at the University of Texas in Austin, the University of Tennessee, and Zuse Institut Berlin in Germany.