Nvidia is rolling out what officials are calling the most powerful GPU for PCs, though the product shouldn’t be confused with the graphics technology found in mainstream systems. The new Titan V, with 21.1 billion transistors and the ability to deliver 110 teraflops of deep learning performance, is aimed at scientists and researchers working on high-performance computing tasks such as simulations and artificial intelligence.
The idea behind the GPU, which Nvidia introduced this week at the Conference on Neural Information Processing Systems (NIPS) in Long Beach, Calif., is to make the power of supercomputers more accessible to scientists, according to Nvidia CEO Jensen Huang.
The Titan V has nine times the deep learning horsepower of its predecessor, 12GB of High-Bandwidth Memory 2 (HBM2) and improved power efficiency, and is based on the same Volta architecture that drives the Tesla V100 server GPU accelerators.
Still, the Titan V is not cheap, coming in at $2,999.
“Our vision for Volta was to push the outer limits of high performance computing and AI,” Huang said in a statement. “We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links. With Titan V, we are putting Volta into the hands of researchers and scientists all over the world.”
The Tesla GPUs are being broadly adopted by such server OEMs as Dell EMC, Hewlett Packard Enterprise, IBM, Lenovo and Huawei, and are aimed at such emerging workloads as AI, machine learning and data analytics.
The GPU is also being leveraged by cloud provider Amazon Web Services in its new high-end P3 instances, as well as in Microsoft’s Azure cloud in NCv3, an Azure virtual machine series based on the Tesla V100 GPU. Other cloud providers, including Alibaba, Baidu, Oracle and Tencent, also are expected to bring the GPU into their public cloud environments.
The Volta architecture doubles the energy efficiency of the previous Pascal design, and the new Tensor Cores in the Titan V are the driving force behind the ninefold performance increase over the company’s previous PC GPU. The GPU comprises six graphics processing clusters and 640 Tensor Cores. Volta also includes independent parallel integer and floating-point data paths, which makes it more efficient on workloads that mix computation with addressing calculations, and a combined L1 data cache and shared memory unit that both improves performance and simplifies programming, officials said.
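For developers, the Tensor Cores are exposed through the warp matrix (WMMA) API added in CUDA 9, in which each warp performs a small mixed-precision matrix multiply-accumulate in hardware. The sketch below is a minimal, illustrative example of that pattern; the kernel name and the fixed 16x16 tile size are chosen for the example rather than taken from Nvidia’s own code.

#include <mma.h>
#include <cuda_fp16.h>

using namespace nvcuda;

// One warp computes a single 16x16 tile of D = A*B + C on the Tensor Cores.
// Inputs are FP16; accumulation is FP32 (the mixed-precision mode Volta added).
__global__ void tensor_core_tile(const half *a, const half *b, float *d)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);                 // start from a zero accumulator
    wmma::load_matrix_sync(a_frag, a, 16);               // load a 16x16 FP16 tile of A
    wmma::load_matrix_sync(b_frag, b, 16);               // load a 16x16 FP16 tile of B
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // Tensor Core multiply-accumulate
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);  // write the FP32 result
}

// Launch with a single warp, e.g. tensor_core_tile<<<1, 32>>>(d_a, d_b, d_d);
// and build for Volta with: nvcc -arch=sm_70

In practice, a full matrix multiply tiles the problem across many warps and thread blocks, but the per-warp load/multiply/store structure stays the same.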
Organizations can buy the Titan V now, and it, along with the previous-generation Titan Xp GPU, will be supported on the Nvidia GPU Cloud, which includes the company’s deep learning software stack.
Huang and other Nvidia officials have targeted AI and deep learning as key growth areas for the company and are seeing growth in the data center business as a result. According to numbers from the most recent financial quarter, gaming was still the top revenue driver for Nvidia, bringing in $1.56 billion for the period. However, the data center business generated $501 million in revenue, more than double the figure from the same period last year and 20 percent more than in the previous quarter. Officials noted that shipments of the Tesla V100 ramped up in the third quarter, due in large part to demand from cloud providers and the high-performance computing (HPC) market.
The key markets also include scientists working on training and inference of neural networks, and the company is pushing to make its AI capabilities available through the cloud, which would expand its customer base, according to Huang. Vertical markets, including automotive, health care, logistics and robotics, also are looking to leverage AI technologies.
“All of these segments we’re now in a position to start addressing because we’ve put our GPUs in the cloud [and] all of our OEMs are in the process of taking these platforms out to market,” Huang said during a conference call in November, according to a transcript on Seeking Alpha. “And we have the ability now to address high-performance computing and deep learning training as well as inference using one common platform. We’ve been steadfast with the excitement of accelerated computing for data centers, and I think this is just the beginning of it all.”