Nvidia Brings Tesla P100 GPU Acceleration to PCIe Servers
The move will expand the reach of the company's powerful GPU accelerator, which can be used for emerging workloads such as AI and deep learning.

Nvidia officials are adding to the company's portfolio of graphics processors aimed at emerging markets such as deep learning, artificial intelligence and computer vision with a new version of its powerful Tesla P100 GPU.

At Nvidia's GPU Technology Conference in April, CEO Jen-Hsun Huang introduced the Tesla P100, a data center GPU built on the company's Pascal architecture with a 16-nanometer FinFET manufacturing process. It is aimed at high-performance computing (HPC) environments and at new workloads that demand high levels of parallel processing. The first version, announced at that conference, used Nvidia's new NVLink interconnect technology.

At the ISC High Performance 2016 show this week in Frankfurt, Germany, Nvidia officials unveiled a P100 GPU accelerator for PCIe, the interconnect technology common on most servers. The new chip, which will be available in the fourth quarter, delivers 4.7 teraflops of double-precision performance and 9.3 teraflops of single-precision performance, according to the company. It also provides 18.7 teraflops of half-precision performance with Nvidia's GPU Boost technology. It will come in two versions: one with 16GB of High Bandwidth Memory (HBM2) and 720GB/second of memory bandwidth, and one with 12GB of HBM2 and 540GB/second. Nvidia said system OEMs including Hewlett Packard Enterprise, Dell, Cray, IBM and SGI are working on systems that will incorporate the P100 for PCIe.
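The three quoted peak figures sit in a roughly 1:2:4 ratio because Pascal's GP100 chip runs double precision at half, and half precision at twice, the single-precision rate. A minimal sketch of that arithmetic, assuming Nvidia's published P100 core count (3,584 CUDA cores) and a ~1.3GHz GPU Boost clock, neither of which appears in the article:

```python
# Back-of-the-envelope check on the P100's quoted peak throughput.
# Core count and boost clock are assumptions taken from Nvidia's
# published P100 specs, not from the article itself.

CUDA_CORES = 3584          # assumed: GP100 single-precision core count
BOOST_CLOCK_HZ = 1.303e9   # assumed: ~1.3 GHz GPU Boost clock

# Peak FP32: each core can retire one fused multiply-add (2 FLOPs) per cycle.
fp32_tflops = 2 * CUDA_CORES * BOOST_CLOCK_HZ / 1e12

# GP100 runs FP64 at half and FP16 at twice the FP32 rate.
fp64_tflops = fp32_tflops / 2
fp16_tflops = fp32_tflops * 2

print(round(fp32_tflops, 1))  # about 9.3, matching the quoted figure
print(round(fp64_tflops, 1))  # about 4.7
print(round(fp16_tflops, 1))  # about 18.7
```

Under those assumptions the computed peaks line up with the 4.7/9.3/18.7-teraflop figures Nvidia cites for the PCIe card.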
The move to support PCIe is important to making supercomputing capabilities available to more scientists and researchers, according to company officials. Most systems include a PCIe slot, while NVLink, which is faster than PCIe, is less widely available. Nvidia estimates that two out of every three scientists don't have access to the compute cycles they need on HPC systems to do their work.