Intel may have scrapped its “Larrabee” discrete graphics processor, but no one should mistake the move for a sign that the chip maker is moving away from the business of general-purpose GPUs, according to industry analysts.
Company officials are looking for ways to expand the Intel Architecture into new segments, and demand for GPGPU computing is only going to grow. Intel’s Dec. 4 decision to shelve the Larrabee project, due to what a spokesperson called development issues, illustrates the challenge the company faces as it works to create a product that can compete with offerings from Nvidia and Advanced Micro Devices.
However, while the development of the first-generation Larrabee chip may not have been what Intel officials wanted, they can apply what they’ve learned to their next GPU effort, as well as to their work in developing many-core CPUs, analysts said.
“Building a highly parallel, high-performance product is really hard,” Nathan Brookwood, an analyst with Insight 64, said in an interview. “It’s taken ATI [which AMD bought in 2006 for $5.4 billion] and Nvidia many generations … to get this kind of high performance out of a manageable chip.”
John Spooner, an analyst with Technology Business Research, agreed.
Intel’s decision says more about “the difficulty of bringing a new graphics architecture to market,” Spooner said. “It’s not a build-it-and-they-will-come situation for Intel. Intel needs to work with developers and get them on board if it [is going to have] any chance of selling a large number of Larrabee chips. The Nvidia/ATI model of high-end discrete graphics processors [plus] programming for them is working well enough. Developers understand it [and] are comfortable with it, and having a third horse in that race is a bigger undertaking than maybe Intel thought.”
Intel officials have been talking about Larrabee since 2007, and after some delays, appeared set to release it in the first quarter of 2010. The company demonstrated Larrabee at its Intel Developer Forum in September, and at the Supercomputing show in November officials showed off an over-clocked Larrabee chip topping the 1-teraflop (trillion floating-point operations per second) mark.
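Peak figures like that 1-teraflop demonstration are typically derived from core count, clock speed, and per-core floating-point throughput per cycle. A minimal sketch of the arithmetic, using hypothetical numbers (Intel never published final specs for the demonstrated Larrabee part):

```python
# Theoretical peak FLOPS = cores x clock (Hz) x FP operations per core per cycle.
# The figures below are hypothetical, for illustration only -- they are not
# Intel's published Larrabee specifications.
def peak_flops(cores, clock_hz, flops_per_core_per_cycle):
    return cores * clock_hz * flops_per_core_per_cycle

# e.g. 32 cores at 2.0 GHz, each retiring 16 single-precision operations
# per cycle from a wide vector unit:
peak = peak_flops(32, 2.0e9, 16)
print(peak / 1e12, "teraflops")  # -> 1.024 teraflops
```

Over-clocking raises the clock term, which is one way a pre-production chip can be pushed past a round-number milestone for a demo.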
Instead, Larrabee will be released as a development platform for computer graphics and the HPC (high-performance computing) space.
GPGPUs Take Off
Intel officials say the company will continue to develop graphics technology, but they have not been specific about their plans.
Intel’s decision gives AMD and Nvidia some running room as they compete against each other in the GPGPU space. Both companies have aggressively pushed their graphics technologies for general-purpose workloads, and the HPC field has been embracing the idea for its highly parallel computing workloads.
AMD officials have called AMD’s ability to make both traditional CPU and GPGPU products a key differentiator in the competition with Intel and Nvidia, and have merged AMD’s computing and graphics businesses in an initiative called Fusion.
Meanwhile, Nvidia in November at the SC event unveiled its new line of Tesla processors based on its “Fermi” architecture. The Tesla 20 series chips, which are scheduled for release in May 2010, will offer the performance of traditional CPUs at a fraction of the cost and power, according to officials.
The Fermi architecture will also feature more than 3 billion transistors and 512 CUDA cores. That core count is far higher than in CPUs, where AMD and Intel plan to move into the eight- and 12-core range in 2010.
In an interview in November, Boyd Davis, general manager of Intel’s server platforms group marketing, said Intel is experimenting with tighter graphics integration with its Xeon server chips, but added that the current demand for GPGPUs in the HPC space is fairly narrow.
Davis also said organizations will find that once Intel’s eight-core “Nehalem EX” Xeon CPU is released in early 2010, they will be able to do many of their parallel-computing workloads on that processor.
Spooner agreed that the sector of the HPC space that uses GPGPUs is not one that Intel has historically courted.
“It’s not a market that a company like Intel is geared toward,” he said. “Intel is more about taking a standard technology, like x86, and shipping millions upon millions of units. So it was attempting to use x86 to bump into graphics, but the development side of it proved to be a larger undertaking than maybe it first thought.”
However, Brookwood said while that area of the HPC market may be an emerging one, GPGPUs are getting a lot of interest in the field.
“In HPC, it is the rage,” he said.
A growing number of supercomputers that are making the Top500 list of the world’s fastest systems are taking advantage of CPU-GPU co-processing. Brookwood pointed to the fifth-fastest computer, the Tianhe-1 at the National Supercomputer Center in China, which has 6,000 nodes that each hold an Intel Xeon processor and an ATI Radeon GPU. The computer peaks at 1.2 petaflops.
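The aggregate peak of a hybrid machine like this is simply the per-node CPU and GPU peaks summed across all nodes. A rough sketch of that calculation, with placeholder per-node figures (the article does not give Tianhe-1’s actual per-component numbers):

```python
# Cluster peak = nodes x (CPU peak + GPU peak per node).
# The per-node TFLOPS values below are placeholders for illustration,
# not Tianhe-1's actual component specifications.
def cluster_peak_tflops(nodes, cpu_tflops_per_node, gpu_tflops_per_node):
    return nodes * (cpu_tflops_per_node + gpu_tflops_per_node)

# 6,000 nodes, with assumed ~0.05 TFLOPS per Xeon and ~0.15 TFLOPS per Radeon:
print(cluster_peak_tflops(6000, 0.05, 0.15), "TFLOPS")  # -> 1200.0 TFLOPS
```

The GPU term dominates the sum, which is why HPC sites see co-processing as the cheapest route to higher peak numbers.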
He also noted that officials with the Oak Ridge National Laboratory and the supercomputing center at the University of Illinois at Urbana-Champaign both have said their next supercomputers will be based as much on GPUs as CPUs.
Count on Intel
In addition, OEMs are increasingly bringing GPUs into their HPC systems. At the SC show, Cray, Dell, Hewlett-Packard, NEC and SGI all showcased systems running Tesla GPUs, while Appro demonstrated its HyperPower GPU performance clusters, which feature both Nvidia Tesla GPUs and Intel’s Xeon CPUs.
Brookwood said Intel’s upcoming Nehalem EX processor will be a great product, but questioned whether it could substitute for GPUs in these HPC workloads.
“When you look at the raw floating-point power of the Nehalem EX versus the raw floating-point power of ATI … it’s no contest,” he said. “These GPUs have been honed over many generations to crank out floating-point performance, and they just have gigaflops and teraflops to spare when compared with what Nehalem EX can provide. And that’s not a slam at Nehalem EX.”
It is this growing demand for GPGPU capabilities, and the new applications that will grow up around them, that will keep Intel working on the foundation laid by Larrabee, the analysts said.
“They’re very persistent, and they will persevere until they get it right,” Brookwood said.
Intel is also patient, Spooner said. “Intel’s taking a very long view with the new business segments it’s targeting, including graphics, phones [and] consumer electronics,” he said. “I’m sure the goal is to deliver hardware over time.”
In the meantime, Intel will be able to take what it’s learned from Larrabee, including the development of many-core chips, and apply it elsewhere. Indeed, Intel researchers announced Dec. 2 a prototype of a 48-core CPU that they call a “single-chip cloud computer” with 10 to 20 times the computing power of a current Core processor.
The move to make Larrabee a development platform also makes sense, the analysts said.
“Without the programmers, nothing of this kind would be worth anything,” Brookwood said. “You need to give programmers a platform they can explore.”
Given the fits and starts of Larrabee’s history, it made sense for Intel to dump it and start over, he said.
“Intel deserves a lot of credit for knowing when to bite that bullet and not keep beating a dead horse,” Brookwood said.