SAN JOSE, Calif.—Nvidia officials are giving researchers and developers tools they need for their work in the growing field of deep learning.
During his March 17 keynote address at the GPU Technology Conference 2015 (GTC) here, Nvidia CEO Jen-Hsun Huang unveiled several new products, including the latest GeForce GPU—the GTX Titan X—as well as software and a hardware appliance aimed at developers and data scientists researching deep learning technologies. In addition, Huang announced a compute platform for self-driving cars and gave some details on Pascal, the next-generation GPU architecture due next year.
The new and upcoming offerings feed into Nvidia's efforts around deep learning—a branch of machine learning—which is based on the idea of giving compute systems the capability to train neural networks on growing stores of data so they learn and improve over time. During his talk, Huang focused much of the deep-learning discussion on image recognition, but said the technology will have applications in a broad range of areas, from voice recognition and self-driving cars to medical research and search.
GPUs, with their massively parallel computing capabilities, will play a key role in the development of deep learning, the CEO and other company executives said.
"The topic of deep learning is probably as exciting an issue as any in this industry," Huang told the more than 4,000 attendees at the show during his two-hour keynote, which was devoted entirely to the subject.
Nvidia has been pushing into the high-performance computing (HPC) and scientific computing fields for many years, in large part through its development of GPU accelerators. HPC organizations have increased their use of GPU accelerators from Nvidia and Advanced Micro Devices—as well as x86 co-processors from Intel—to improve the performance of their systems while holding down power consumption.
The growth for the company has been rapid, Huang said. In 2008, there were 150,000 downloads of CUDA—Nvidia's accelerated computing platform—27 CUDA applications available, and 6,000 Tesla GPUs sold. Today, CUDA downloads total 3 million, there are 319 CUDA applications, and 450,000 Tesla GPUs power supercomputers in HPC environments.
The key for many Nvidia customers is speed—how fast they can run their workloads and how fast they can get results, he said.
"Without speed, you can't do the work you want," Huang said.
Speed is a driver behind the new Titan X GPU, he said. Titan X comes with 8 billion transistors, 3,072 CUDA cores and up to 7 teraflops of peak single-precision performance. The GPU, based on the Maxwell architecture, also offers 12GB of memory, double the capacity of current high-end GPUs.
The chip, which is available now and costs $999, is aimed first at the gaming space, but Huang and other executives said its capabilities fit in easily with the deep learning field. Pointing to the AlexNet neural network, Huang said it took Titan X three days to train the network using the 1.2 million-image ImageNet data set, as compared with the 43 days it took a 16-core Intel Xeon processor.
In addition, Huang showed off Nvidia's new Digits DevBox, an appliance armed with four Titan X GPUs and the company's Digits Deep Learning GPU Training System software. The system is aimed at helping developers and researchers more easily build neural networks. Huang said he doesn't expect to sell a lot of the Digits DevBoxes, which will cost $15,000 when they start shipping in May. The appliance is meant for developers, not the general public.
"It's not meant to be a business," the CEO said. "It's meant to help you."