Intel’s upcoming version of its Xeon Phi technology will be a key step in the push toward more widespread adoption of high-performance computing and toward the eventual goal of exascale computing by the end of the decade, according to company officials.
Speaking at the SC ’13 supercomputing show in Denver Nov. 19, Intel officials said the “Knights Landing” version of the Xeon Phi chip will bring significant improvements in both performance and power consumption, not only through its 14-nanometer manufacturing process but also through its ability to be used either as a primary compute processor or as a coprocessor alongside another host processor, such as Intel’s Xeon server chips.
Knights Landing will be a key departure from the current 22nm Xeon Phi coprocessors and the GPU accelerators being offered by Nvidia and Advanced Micro Devices, all of which are designed to help ramp up the performance of high-performance computing (HPC) systems without increasing the amount of power they consume, officials said.
Knights Landing will have many cores (the current coprocessors offer as many as 60), will run both single-threaded and parallel applications, and will offer greater integrated on-package memory and an improved interconnect. As a host processor, it also will not have to wait for applications to pass through a primary processor, as coprocessors and GPU accelerators must today, according to Joe Curley, director of marketing for Intel’s Technical Computing Group.
“It will be a processor and will be able to be used as a coprocessor,” Curley told eWEEK in an interview before the start of the SC ’13 conference.
That ability will mean faster processing times, lower bandwidth demands and smaller systems, all important elements as the industry pushes toward its goal of exascale computing by 2020. Intel, other tech vendors, research institutions and governments—the U.S. government is putting $126 million behind the effort—are looking to get beyond petascale computing and reach the exascale level, a thousandfold increase over petascale.
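The difference between the two roles is easiest to see in code. Below is a minimal sketch of the offload model used with today’s coprocessors, in which a host Xeon ships a loop and its data across the PCIe bus to the Xeon Phi card; a self-hosted chip such as Knights Landing would simply run the same parallel loop in place, with no transfer step. The sketch assumes the Intel compiler’s offload pragmas and OpenMP; the kernel, array size and function names are illustrative, not taken from Intel’s materials.

/* Hypothetical example: offloading a reduction to a Xeon Phi coprocessor.
 * Assumes the Intel C compiler with offload support and an attached card. */
#include <stdio.h>

#define N 1000000

double sum_of_squares(const double *x, int n)
{
    double sum = 0.0;
    /* With the Intel compiler, this pragma copies x across PCIe and runs the
     * enclosed block on the coprocessor; scalars such as sum are copied back.
     * On a self-hosted processor the loop would run in place, unchanged. */
    #pragma offload target(mic) in(x:length(n)) inout(sum)
    {
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += x[i] * x[i];
    }
    return sum;
}

int main(void)
{
    static double x[N];
    for (int i = 0; i < N; i++)
        x[i] = (double)i / N;
    printf("sum = %f\n", sum_of_squares(x, N));
    return 0;
}

The point of the pragma is precisely the waiting Curley describes: the host owns the application, and the coprocessor only sees the work it is handed.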
Intel last month announced it was opening parallel computing centers worldwide to help prepare key workloads for parallel processing, company officials said.
Nvidia and AMD for the past several years have been pushing their GPU technologies as accelerators for HPC systems. Also at the SC ’13 show, Nvidia officials unveiled the Tesla K40, the next-generation accelerator that will offer twice the memory and 10 percent more performance than the current K20X GPUs. Four days earlier, AMD rolled out its FirePro S10000 12GB Edition GPU for big data and HPC workloads.
However, Intel officials say the x86-based Xeon Phi chips—the Knights Landing chips could come later in 2014—are a key part of their neo-heterogeneity pitch, noting that while HPC environments will use both processors and coprocessors or accelerators, Xeon Phi lets Intel offer a common and familiar underlying programming model and tools across both. The company in June began giving glimpses of what to expect from Knights Landing.
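The “common programming model” argument amounts to this: because Xeon Phi is x86, the same C and OpenMP source can target a Xeon host or a Xeon Phi without modification. The sketch below illustrates that idea under that assumption; the kernel is illustrative, and the build commands in the comment are the kind the Intel toolchain of the time supported rather than lines quoted from Intel.

/* Hypothetical example: one source file, two targets.
 * Build for a Xeon host with OpenMP enabled (e.g., icc -openmp saxpy.c),
 * or natively for a Xeon Phi card (e.g., icc -mmic -openmp saxpy.c);
 * the source itself does not change, only the compile flags do. */
#include <stdio.h>
#include <stdlib.h>

/* y = a*x + y, parallelized the same way on either chip; the core count
 * differs, the code does not. */
void saxpy(float a, const float *x, float *y, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(3.0f, x, y, n);
    printf("y[0] = %f\n", y[0]);
    free(x);
    free(y);
    return 0;
}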
Organizations running HPC environments are showing interest in the Xeon Phi coprocessors as well as GPU accelerators. According to organizers of the Top500 list of the world’s fastest supercomputers released Nov. 18, 53 of those 500 systems use either coprocessors or accelerators—38 use Nvidia GPUs, 13 systems have Intel’s Xeon Phi and two use AMD’s Radeon technology. In addition, four of the 10 fastest systems use them: Two run on Nvidia GPUs and two on Xeon Phi.
At the same time, more enterprises also are looking at HPC systems for their data centers, particularly given the systems’ growing capabilities and falling prices. Where once such systems were the domain of nations and well-funded institutions, enterprises increasingly can afford them, and know they need them, Intel’s Curley said.
“If you have a competitor using that technology and you aren’t, you are at a competitive disadvantage,” he said.
Along with the continued development of Xeon Phi, Intel officials at the show also noted other efforts they’re making in the HPC space. Intel announced its HPC Distribution for Apache Hadoop, which combines Intel’s Distribution for Apache Hadoop and the company’s Enterprise Edition of Lustre. The combination gives enterprises a solution for storing and processing large data sets. At the same time, Intel rolled out its Cloud Edition for Lustre, offering a scalable, parallel file system that is available via the Amazon Web Services Marketplace.
The new Lustre solution is a pay-as-you-go product aimed at dynamic applications such as rapid simulation and prototyping, according to Intel officials. It also can help when HPC workloads need to rapidly move to the cloud, they said.