Intel's Looming Battle with Nvidia

By Scott Ferguson  |  Posted 2008-03-28

In June 2007, Nvidia signaled that it was ready to move out of the traditional graphics market by using its own GPU technology within HPC. The result is called Tesla, which offers 128 processing cores that work in parallel and provide more than 500 gigaflops (500 billion floating point calculations per second) of performance.

Since most developers do not create applications that work exclusively with a GPU, Nvidia also developed CUDA (Compute Unified Device Architecture), a programming environment that allows the GPU to be programmed much like an x86 CPU. That approach suggests Nvidia will not need to develop its own CPU or buy a company such as Via, which makes low-watt x86 processors.
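
To give a sense of what that programming model looks like, below is a minimal sketch of a CUDA program that adds two arrays on the GPU. The kernel name, array size and launch parameters are illustrative, not drawn from Nvidia's own examples; the point is simply that the GPU work is expressed in C-style code and launched from an ordinary host program.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread computes one element of the output array.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                 // 1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);          // expected: 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}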

Andy Keane, general manager of GPU computing at Nvidia, said the fact that both Intel and AMD are working toward integrating the GPU onto the silicon shows that the CPU has reached the limits of Moore's Law, the observation that the number of transistors on a chip doubles about every two years. Intel and AMD are trying to add performance by incorporating the graphics onto the silicon itself, Keane said.

In Nvidia's reasoning, the GPU, not a traditional x86 chip with more processing cores, is the key to moving computing forward. With a GPU, Keane said, Nvidia can keep expanding the die size, even as Intel moves to shrink its silicon, which allows the company to add more features onto the die, increasing performance and securing its place in HPC as well as a host of other fields.

"People will very quickly figure out that a separate GPU-a GPU that is not on the die [with the CPU]-provides a better experience in both lifestyle or graphics applications than a free GPU that's been integrated onto the CPU," he said.

While this sort of offering can address issues within standard PCs, namely the choice between discrete and integrated graphics, Nvidia is also betting that the GPU alone can meet the needs of the HPC market.

What Nvidia is doing with Tesla is twofold.

First, it is using the technology to increase performance within the data center by outfitting servers with a much faster processing engine. The second goal is to bring HPC to workstation PCs, moving complex scientific applications out of the data center and onto the desktop.

"How do you give scientists and engineers the ability to run a good portion of their applications at the desktop?" Keane asked. "That's the exciting thing with HPC. All of those scientists that have had to use shared resources have watched their applications slow down .... Giving a person more and more compute power is much better than concentrating it in the backroom. The GPU is something that can do that because we are already in your PC."

McGregor said the HPC area remains wide open, with enough room for Nvidia, AMD and Intel to offer a number of competing products aimed at solving problems in this field. While Nvidia focuses on the GPU, McGregor said developments such as Intel's 80-core terascale processor and its low-watt Silverthorne core will help that company's efforts to move further into the HPC field.

Another advantage for Intel is its willingness to invest in and work with the developer community.

Along with Microsoft, Intel donated $20 million to fund research and help develop a new generation of developers who work with multicore, multithreaded processors, Spooner said.

"Nvidia has CUDA and Intel has their development tools and they are both trying to make it as easy as possible in order to win over developers," Spooner said.