Nvidia is looking to bring its graphics processing know-how into the data center for the first time.
On June 20, the Santa Clara, Calif., company introduced its Tesla processor, which executives said will allow it to translate its graphics processor technology into HPC (high-performance computing).
The Tesla GPU (graphics processing unit) marks Nvidia's first attempt to penetrate the enterprise beyond its traditional role as a producer of graphics technology. The company's Quadro processors are mainly used for digital content creation and 3-D graphics, while its GeForce graphics processor is used in video games and other entertainment products.
The Tesla GPU is considerably more powerful. It uses 128 parallel processors that can deliver up to 518 gigaflops of parallel computation in either a high-density PC or workstation. A gigaflop is a billion floating-point calculations per second.
This type of compute power, according to Nvidia, makes the Tesla GPU well suited to a number of highly specialized fields that need HPC capabilities, such as oil and gas exploration, the geosciences, molecular biology and medical diagnostics.
However, the Tesla GPU is not meant as a substitute for a traditional CPU; rather, it is designed to work alongside one to provide additional computing power, said Andy Keane, the general manager of GPU computing at Nvidia.
By allowing the software's instruction threads to run in parallel, the processor provides higher throughput in multithreaded applications.
Nvidia also unveiled on June 20 a computing server that the company is touting as an example of the cooperation between GPU and CPU using Tesla technology. This 1U (1.75-inch) system houses eight Tesla GPUs and offers more than 1,000 parallel processors. When coupled with a standard server running multicore processors, the system will add teraflops of performance through its parallel processing ability.
The Tesla GPU also offers strong performance per watt; Nvidia's Computing Server will draw about 550 watts of power at peak capacity, Keane said.
In addition to the new GPU and server, Nvidia unveiled what the company calls its Deskside Supercomputer, a high-density workstation that includes two Tesla GPUs attached through a standard PCI-Express connection and delivers eight teraflops of compute power.
In a note to investors on June 21, Douglas Freedman, an analyst with American Technology Research, wrote that Nvidia's entrance into the market will likely challenge IBM's Power architecture and Intel's Itanium processors by bringing HPC to a terminal. Freedman added a note of caution that the market might not be ready for a GPU-based supercomputer just yet.
“However, our enthusiasm is tempered on Tesla in the near-term as we are not sure the market is ready for a Tesla solution, nor that [Nvidia] is ready to roll out the solution in any meaningful volume this year,” Freedman wrote.
“We will await adoption of Tesla before incorporating it into our numbers, but we believe in the long run this market will drive the next leg of growth for the GPU.”
In 2006, Nvidia also unveiled its CUDA (Compute Unified Device Architecture), software that allows for thread computing on GPUs and CPUs. Thread computing allows hundreds of on-chip processor cores to simultaneously communicate and cooperate to solve complex computing problems.
The company's CUDA development environment is supported on both Linux and Microsoft Windows XP operating systems.
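In CUDA's programming model, a developer writes a small function, called a kernel, that the GPU executes across hundreds or thousands of threads at once, with each thread typically handling one piece of the data. The fragment below is a minimal illustrative sketch rather than Nvidia sample code; the kernel name, array size and launch configuration are arbitrary choices, and it assumes a standard CUDA toolkit with the usual runtime calls (cudaMalloc, cudaMemcpy, cudaFree).

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Illustrative kernel: each GPU thread scales one element of the array.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's global index
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                           // about one million elements
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *device;
    cudaMalloc(&device, bytes);                      // allocate GPU memory
    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover the array, in blocks of 256 threads each.
    scale<<<(n + 255) / 256, 256>>>(device, 2.0f, n);

    cudaMemcpy(host, device, bytes, cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);               // prints 2.0

    cudaFree(device);
    free(host);
    return 0;
}

Each Tesla board's 128 parallel processors execute many such threads concurrently, which is how the hardware reaches the gigaflop figures quoted above on data-parallel workloads.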
The standard configuration of the Computing Server costs $12,000, while the Deskside Supercomputer begins at $7,500.
The GPU Computing processor costs $1,499. Both the Tesla processor and the supercomputer will be available in August. The server will become generally available sometime later in 2007.
Editor's Note: This story was updated to include information and comments from an analyst.