Intel officials for months have been talking about the company's many-core Xeon Phi “Knights Landing” processor, a chip positioned to compete with GPU accelerators such as Nvidia’s Tesla products in high-performance computing environments and in emerging markets such as machine learning.
Intel announced at the ISC High Performance 2016 supercomputing show June 20 that Knights Landing, originally unveiled in November 2015 and now known as the Xeon Phi 7200 family, is available. Officials said they expect a broad array of new OEM systems powered by the processor to hit the market.
The four chips that make up the Xeon Phi 7200 family come with 64 to 72 x86 cores and include integrated memory and I/O fabric. Intel also is rolling out its HPC Orchestrator software stack, which is based on OpenHPC.
With Knights Landing, Intel is making a hard push to compete in a high-performance computing (HPC) space being fueled by a range of new workloads, from deep learning and machine learning to artificial intelligence (AI), all of which require high-performance parallel processing. Nvidia and Advanced Micro Devices (AMD) have for almost a decade been promoting their GPU accelerators for HPC systems, improving the performance of applications running on them while helping to keep power consumption down.
Intel also offers field-programmable gate array (FPGA) accelerators, which the company inherited when it bought Altera last year for $16.7 billion.
Xeon Phi grew out of Intel’s Larrabee effort, which was aimed at developing the chip maker’s first GPU before the project was cancelled in 2009. Intel then put its efforts behind an initiative to create a many-core chip using its x86-based Intel architecture. The first Xeon Phi chips were designed to be coprocessors that run alongside Intel’s Xeon server processors, and the company has seen some traction with them in the HPC space.
According to the latest Top500 list of the world’s fastest supercomputers released June 20, 93 of the 500 systems use some sort of accelerator technology. Sixty-seven of them use Nvidia’s GPUs, but 26 use Xeon Phi technology. Three use a combination of both.
However, Knights Landing is different from its predecessor. The chip can be used as a host processor or a coprocessor, and includes integrated fabric and 16GB of integrated stacked high-bandwidth memory. The chips are expected to start shipping in September, with prices ranging from $2,438 for the 7210 to $6,254 for the high-end 7290.
Intel officials said the new Knights Landing chips will deliver five times the performance of GPU accelerators, as well as eight times the performance per watt and nine times the performance per dollar.
They also said more than 30 system OEMs and channel partners, including Hewlett Packard Enterprise, Cray, Dell, NEC, SGI, Inspur and Sugon, are building or planning to build servers based on the Xeon Phi, and more than 30 software vendors have optimized their applications for the technology. Even though the chips won’t officially be on the market for a couple of months, Intel has been shipping them to early customers for the past six months, and more than 100,000 units have been sold or are on order.
“Some of the early customers are running every cluster on Xeon Phi,” Charles Wuischpard, vice president of Intel’s Data Center Group and general manager of its HPC Platform Group, said in a conference call with journalists and analysts before the ISC 16 show kicked off. “Others are running a mix of Xeon Phi and Xeon [on their systems].”
Intel Shipping Xeon Phi ‘Knights Landing’ Processors
During the conference call, Wuischpard focused on the roles Xeon Phi and the company’s Scalable System Framework can play in the emerging AI and machine learning spaces. Those are areas Nvidia is investing in heavily, developing such technologies as the new Tesla P100 GPU accelerator—the version for servers with PCIe was announced at ISC 16—and the DGX-1, a system based on the P100 aimed at deep learning and AI. It was announced at Nvidia’s GPU Technology Conference in April.
Wuischpard noted that Intel also has been working on technologies for these emerging markets, but that the company until now has been “too quiet about it.” The Xeon Phi processors are well-armed to tackle the work of training the neural networks needed for machine learning because they are faster and more scalable than GPUs, he said, and the company’s Xeon processors are the most widely used chips for the less compute-intensive inference workloads in machine learning.
For AI, Intel offers its Scalable System Framework, which includes compute, storage, memory, fabric and software.
The Xeon Phi chips are part of a larger effort by Intel to rapidly grow its capabilities in the data center. The company continues to enhance its Xeon family of server chips, and also is developing a range of other technologies for next-generation data centers, including its FPGAs, silicon photonics, Omni-Path interconnect architecture and Optane memory offerings.
The vendor’s Data Center Group has become a key part of the company, with first-quarter revenue hitting $4 billion, a 9 percent jump over the same period in 2015.