Officials with both Nvidia and AMD say Intel’s coprocessors lag their GPU accelerators in performance and energy efficiency, key metrics in the HPC space.
Organizations with high-performance computing (HPC) environments over the past few years have increasingly turned to GPU accelerators from the likes of Nvidia and Advanced Micro Devices to ramp up the performance of their supercomputers while keeping power consumption in check.
Now Intel is looking to muscle in on the trend, offering its new Xeon Phi coprocessors to work with traditional CPUs to boost the capabilities of the massive systems while giving organizations the benefits of working within the familiar Intel and x86 environments.
These accelerators and coprocessors—which work with CPUs to help supercomputers run compute-intensive, highly parallel workloads, essentially by throwing large numbers of cores at them—took center stage at last week’s SC12 supercomputing conference. Both Nvidia and AMD rolled out powerful new GPU accelerators, while Intel unveiled the first of its Xeon Phi coprocessors.
At the same time, the Titan supercomputer—a Cray XK7 system that uses both AMD Opteron server chips and Nvidia’s new K20 GPU accelerators—topped the Top500 list of the world’s fastest supercomputers. Debuting on that list—and coming in at number seven—was the Stampede supercomputer, which comprises Intel-based Dell servers and includes Xeon Phi coprocessors.
With the introduction of the Xeon Phi chips, the competition begins, and some analysts believe the Intel coprocessors could prove a threat to Nvidia’s strong—90 percent—share of the market.
“What this reflects is a fairly common argument between existing and new technologies,” Charles King, principal analyst at Pund-IT Research, told eWEEK in an email. “It really boils down to whether customers and partners (like ISVs and OEMs) will gain performance benefits from new solutions that justify the investments necessary to commercialize those new technologies. On the GPU side, NVIDIA is emphasizing … the performance of systems using its GPUs but not talking as much about the time/cost required to port existing applications to the platform, training programmers to gain the maximum advantage of the new systems, etc. Intel’s response is along the lines of, ‘What if you could have an alternative that delivers similar or better performance and will run existing apps and code natively?’ That’s a powerful argument for many players, especially those involved in the commercial HPC space.”
Patrick Wang, an analyst with Evercore Partners, in a report Nov. 15, wrote that “the launch of Xeon Phi marks the beginning of a new era and a new antagonist for [Nvidia].” As Intel ramps up the performance of the coprocessors, the problems for Nvidia will only increase, Wang wrote, saying that “the question is not IF but WHEN [Intel will] gain traction” in the market.
Accelerators like GPUs and—now—Intel’s coprocessors are growing in popularity as HPC organizations in industries such as energy, financial services, health care, science and digital content creation increasingly are turning to supercomputers to run their highly parallel workloads. At the same time, system energy efficiency is at a premium, with organizations looking to drive down the power and cooling costs in these increasingly dense, hyperscale data centers. Of the 500 supercomputers on the Top500 list released Nov. 12, 62 used GPU accelerators or coprocessors.
Both Nvidia and AMD over the past few years have been pushing their low-power, many-core graphics technologies as ideal accelerators, with Nvidia grabbing the bulk of the market. However, Intel officials, with their Xeon Phi coprocessors, are looking to make inroads. Already Intel Xeon chips power many supercomputers—they are in 76 percent of the Top500 list systems—and now the vendor is looking to leverage that presence to push its Xeon Phis.
Intel’s coprocessors have been eight years in the making, and are the first products out of the giant chip maker’s Many Integrated Core (MIC) program. At the SC12 show, Intel unveiled two versions that have 60 or more cores—the Phi 5110P and 3100—that will come out next year, though early customers, such as the Texas Advanced Computing Center (TACC), where Stampede is being built, already are using custom models of the Phi.
During a recent day-long workshop at TACC for journalists, Intel officials argued that the x86-based Xeon Phi coprocessors are a better alternative for HPC organizations than GPU accelerators. Most programmers already are familiar with the x86 architecture and its tools—from compilers and runtime environments to debuggers, libraries and workload schedulers—and most workloads already are optimized for x86, said James Reinders, director of parallel programming evangelism at Intel. In addition, the x86 coprocessors can run an operating system independently of the host CPUs, according to Intel.
Workloads running on Xeon Phis have to undergo significantly less recoding than those running on GPU accelerators, Reinders said. Organizations running highly parallel workloads “don’t need multiple versions of processors for different architectures.”
The Stampede supercomputer, once fully operational, will have a performance of 10 petaflops (quadrillions of calculations per second), and TACC Director Jay Boisseau said that the Xeon Phi coprocessors will account for about 70 percent of the performance. During the workshop, Boisseau reiterated the benefits of having accelerator technology based on x86.
“X86 has been around a long time, and people are pretty familiar with the architecture,” he said, adding that while GPU accelerator technology is good, “programmability is a problem.”