Intel Unveils Xeon Phi Roadmap for HPC
By Jeffrey Burt  |  Posted 2014-11-18
At the SC14 supercomputing show, the chip maker also gave more details about its Omni-Path interconnect, which will challenge InfiniBand.

Intel officials are pushing forward with the company's Xeon Phi co-processor timetable, preparing for the rollout next year of its second-generation Knights Landing product and beginning to talk about Knights Hill, which will follow.

At the SC14 supercomputing show in New Orleans, the chip maker also is opening up about its plans around Omni-Path, the high-speed interconnect optimized for high-performance computing (HPC) environments that officials say will challenge InfiniBand.

The Xeon Phi products—which offer more than 60 x86 computing cores—were designed in the first generation to be co-processors that run alongside Xeon chips in HPC systems. Like GPU accelerators from Nvidia and Advanced Micro Devices, Xeon Phi is meant to enable HPC organizations to run high-performance systems while keeping down power consumption and overall costs.

The first iteration, Knights Corner, was introduced in 2012, and its adoption in supercomputers is increasing. According to the organizers of the Top500 list of the world's fastest systems, 25 of those supercomputers use Intel's MIC (Many Integrated Cores) Xeon Phi co-processors, up from 17 systems on the June list. Fifty of the supercomputers leverage Nvidia's GPU accelerators.

Now, as Intel prepares to launch the second-generation Knights Landing—which system makers will be able to use as either a primary processor or a co-processor in HPC servers that will begin hitting the market in the second half of 2015—officials are already talking about the upcoming third generation, Knights Hill.

Intel officials are giving few details about Knights Hill—it will be built on the company's 10-nanometer manufacturing process, will be higher performing and more power-efficient than its predecessors, and will leverage the second generation of the Omni-Path interconnect. But Charlie Wuischpard, vice president and general manager of workstations and HPC for Intel's Data Center Group, said in a press briefing before the SC14 show that it is important for the company to show its continued investment in the portfolio.

"We've got to show that this isn't a one- or two-generation investment, but it's a multi-generation investment," Wuischpard said.

He noted that a number of Knights Hill projects are underway with customers, though most of the focus throughout 2015 will be on Knights Landing. Wuischpard said that more than 50 system makers are expected to build systems leveraging Knights Landing as processors—and more using them as co-processors—and that to date, customers have committed to deals totaling more than 100 petaflops of system compute using Knights Landing.

System OEMs are looking to offer HPC customers both GPU accelerators and Xeon Phi co-processors in new systems. At the show, Hewlett-Packard announced new server trays for its Apollo supercomputers that can hold either GPUs from Nvidia or Xeon Phis. In addition, Dell introduced its highly dense PowerEdge C4130, which can support either the GPU accelerators or Intel co-processors.

Knights Landing will include DDR4 memory and a new on-package stacked memory technology developed with Micron Technology designed to increase supercomputer performance.

Intel also is spending time outlining its plans for Omni-Path. Given the high numbers of processor cores that HPC systems run, having a fast interconnect fabric is crucial. Over the past several years, Intel has invested billions of dollars to build up its networking capabilities, including buying technologies from Cray and QLogic as well as companies like Fulcrum Microsystems.

At SC14, Intel officials gave more details around Omni-Path, which until now had been called Omni Scale. According to the company, Omni-Path, which will first appear with Knights Landing, will offer 100G-bps line speed and up to 56 percent lower switch fabric latency than is found in compute clusters running InfiniBand. It will offer better scaling (48 ports) than InfiniBand (36 ports), enabling HPC organizations to run clusters with higher port density—up to 1.3 times better than with InfiniBand—and up to 50 percent fewer switches.

In addition, the chip maker launched the Intel Fabric Builders Program with seven inaugural members and the goal of enabling third parties to build offerings on top of Omni-Path and to create an ecosystem around the technology.

Intel also is expanding the number of Parallel Computing Centers, which now number more than 40 in 13 countries. The centers are being used to modernize HPC community codes, according to the chip maker.