Intel Xeon Phi Coprocessors Challenge Nvidia, AMD in HPC Market
The same day Intel announced the Xeon Phi chips, Nvidia and AMD both unveiled the latest generations of their respective GPU accelerators. Nvidia announced its 28-nanometer Tesla K20 and K20X GPUs, the first based on the Kepler architecture. The Cray-based Titan supercomputer boasts 560,640 processors, including 261,632 of Nvidia's K20X GPU accelerators. Titan offers a performance of 17.59 petaflops, of which the Nvidia accelerators account for about 90 percent.

AMD unveiled its FirePro S10000 GPU accelerator on Nov. 12, with officials noting that it is based on the company's Graphics Core Next architecture, which enables the GPU to simultaneously boost power for both compute and graphics workloads.

Officials with both Nvidia and AMD dismissed Intel's Xeon Phi technology. Sumit Gupta, general manager of Nvidia's Tesla Accelerated Computing unit, said Intel is several years behind in accelerator technology, noting that the Xeon Phi coprocessors barely beat Nvidia's previous Fermi GPUs in performance and power consumption. "Their new product is in the same ballpark as our three-year-old product in efficiency, so they're very behind schedule in energy efficiency," Gupta told eWEEK.

John Gustafson, CTO of AMD's graphics business, echoed what Gupta said, noting that Intel is "behind the wave with what we do." AMD, which relies primarily on the OpenCL programming language, has been offering GPU accelerators for more than three years, giving the company a head start in the field.

In addition, the idea that the x86 architecture gives Intel an advantage over GPU accelerators doesn't make sense, Gustafson told eWEEK. No matter whether they use GPUs or Intel coprocessors, programmers still have to recompile their software, he said. "We all have to change our codes," Gustafson said. AMD's Opteron server chips are based on the x86 architecture, but the company also ensures that organizations that want GPU capabilities can find them with AMD.
(AMD is expanding that idea into its server chip business, announcing late last month that it will start making ARM-based server chips in 2014.) "What AMD does is offer the right tool for the right job," Gustafson said, pushing back at Intel's insistence that x86 is the right architecture for any job. "When you're a hammer, everything looks like a nail."

Cray has used AMD Opteron chips and Nvidia GPU accelerators for years, but earlier this month announced that the first supercomputers in its next-generation XC family of systems will run on Intel's Xeon processors and will leverage both the Xeon Phis and Nvidia's Tesla GPUs. Barry Bolding, Cray's vice president of corporate marketing, told eWEEK that over the years, it's been proven that GPUs can make applications very fast. The Xeon Phi coprocessors look promising, but the question now is, "can you get the same performance out of them that you can with GPUs? We believe you can."

Pund-IT analyst King agreed, and said Intel's Xeon Phi technology could prove to be a tough competitor for GPUs if the performance is good. "Right now, GPUs have captured a good deal of attention in the research/university HPC space," King said in an email to eWEEK. "That's great from a mindshare perspective, but it's a long road to commercial validation, let alone success. I believe a lot of what comes next depends on how Intel's coprocessors measure up in overall price/performance to GPUs. If they beat GPUs or even come close, commercial HPC vendors are likely to stick with x86. Cool technologies aside, at the end of the day commercial vendors are trying to make a living. The most successful vendors are those who understand that point and do all they can to support it."
Gupta also pushed back at the idea that being based on the x86 architecture gives Intel any advantages. While the coprocessors may be able to run the same languages and tools that traditional Xeon CPUs can, he pointed to Nvidia's CUDA parallel programming platform, which works with C/C++ or Fortran and supports OpenACC tools. Overall, 395 million CUDA GPUs have shipped, and CUDA has been downloaded 1.5 million times, he said. In addition, CUDA is being taught in 62 countries, and systems from the likes of Cray, Hewlett-Packard, IBM, SGI and Asus are becoming available with the K20 and K20X GPUs.