At Nvidia’s GPU Technology Conference 2015 in March, much of the focus for CEO Jen-Hsun Huang and other company executives was on the growing field of deep learning—the idea that machines can learn and improve over time based on their experiences.
During the show, Nvidia officials unveiled an array of products and tools—from the latest GPU technology to a new appliance—aimed at accelerating the work researchers and developers are doing in the field. In addition, three keynote speakers—including officials from Google and Baidu—talked about the strides made in deep learning, and nearly four dozen demonstrations and projects on display showed off deep learning efforts made possible by GPUs.
This week the company is unveiling enhancements to its GPU-accelerated software that officials said will significantly boost the performance of deep learning projects run by researchers and data scientists, enabling them to build more accurate neural networks through faster model training and better model design.
The news, announced July 7 at the International Conference on Machine Learning (ICML) 2015 in Lille, France, included updates to Nvidia’s DIGITS, CUDA and cuDNN software, all designed to make it easier for organizations to use GPU-accelerated technology in their deep learning efforts.
Companies and researchers in the high-performance computing (HPC) and supercomputing fields have for years been leveraging GPU accelerators from Nvidia and Advanced Micro Devices to boost the performance of their systems without significantly increasing power consumption. Nvidia officials believe the highly parallel processing capabilities of the company’s GPUs can help drive the deep learning push, given the computational demands of neural networks.
According to Will Ramey, senior product manager for GPU computing at Nvidia, the practical uses for deep learning span everything from image classification and action recognition to speech recognition, translation and natural language processing. Self-driving cars will need to be able not only to follow a path from one place to another, but also to recognize everything around them—from oncoming vehicles and pedestrians to road signs and detours—and react accordingly.
In the medical field, use cases include breast cancer cell mitosis detection and brain image segmentation, Ramey told eWEEK.
“All of these different examples are about deep learning, and GPUs are at the core,” he said.
The enhanced software offerings fit in with that focus on GPU computing, Ramey said. DIGITS, the Deep Learning GPU Training System, is software built atop CUDA and cuDNN that helps users design, train and validate deep neural networks. The key addition in the new version is support for up to four GPUs, which the company says can make neural network training up to two times faster. DIGITS 2 lets users spread training workloads across multiple GeForce GTX Titan X GPUs in parallel rather than relying on a single chip, cutting training times from weeks to days. A task that took two days to run can now finish in one, and researchers who could previously run only a single workload in a day can now run two.
DIGITS 2 also comes with a graphical user interface, making it more accessible to a wider range of users.
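The multi-GPU gain in DIGITS 2 comes from data parallelism: each training batch is split across the available GPUs, which work on their slices at the same time. The sketch below is not DIGITS or Nvidia code; it is a minimal, hypothetical CUDA illustration of that idea, dividing a batch of work across however many devices cudaGetDeviceCount reports and running a toy "training step" on each.

```
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Toy "training step": scale every element of this GPU's slice of the batch.
__global__ void train_step(float *slice, int n, float lr)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        slice[i] *= (1.0f - lr);   // stand-in for a weight/activation update
}

int main()
{
    const int batch = 1 << 20;   // total elements in the batch
    const float lr = 0.01f;

    int ngpu = 0;
    cudaGetDeviceCount(&ngpu);
    if (ngpu == 0) { printf("no CUDA device found\n"); return 1; }

    std::vector<float> host(batch, 1.0f);
    std::vector<float*> dev(ngpu, nullptr);
    int per_gpu = batch / ngpu;  // assume the batch divides evenly, for brevity

    // Give each GPU its own slice of the batch and launch work on all of them.
    for (int g = 0; g < ngpu; ++g) {
        cudaSetDevice(g);
        cudaMalloc(&dev[g], per_gpu * sizeof(float));
        cudaMemcpyAsync(dev[g], host.data() + g * per_gpu,
                        per_gpu * sizeof(float), cudaMemcpyHostToDevice);
        train_step<<<(per_gpu + 255) / 256, 256>>>(dev[g], per_gpu, lr);
    }

    // Wait for every GPU, then copy the slices back.
    for (int g = 0; g < ngpu; ++g) {
        cudaSetDevice(g);
        cudaDeviceSynchronize();
        cudaMemcpy(host.data() + g * per_gpu, dev[g],
                   per_gpu * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev[g]);
    }

    printf("processed %d elements across %d GPU(s)\n", batch, ngpu);
    return 0;
}
```

In a real training framework, the per-GPU gradients would also be averaged after each step; that synchronization is what keeps multi-GPU training mathematically equivalent to training on a single device.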
Nvidia also is enhancing cuDNN (CUDA Deep Neural Network), a GPU-accelerated library of mathematical routines for neural networks that developers can use in their machine learning efforts. Most significantly, cuDNN 3 now supports 16-bit floating point data storage in GPU memory, essentially doubling the amount of data that can be held on a GPU. The update also improves memory bandwidth utilization, enabling researchers to train larger neural networks.
“You can run training models up to two times faster on a single GPU,” Ramey said.
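cuDNN is a C library, and the new storage mode surfaces through its tensor descriptors. The fragment below is a hedged sketch rather than an excerpt from Nvidia’s samples: it assumes cuDNN 3 or later and simply declares an activation tensor held in GPU memory as 16-bit floats (CUDNN_DATA_HALF) instead of 32-bit floats, which is where the halved memory footprint comes from; the tensor dimensions are arbitrary.

```
#include <cstdio>
#include <cudnn.h>

// Minimal check-and-bail helper for cuDNN status codes.
#define CHECK_CUDNN(call)                                        \
    do {                                                         \
        cudnnStatus_t s = (call);                                \
        if (s != CUDNN_STATUS_SUCCESS) {                         \
            printf("cuDNN error: %s\n", cudnnGetErrorString(s)); \
            return 1;                                            \
        }                                                        \
    } while (0)

int main()
{
    cudnnHandle_t handle;
    CHECK_CUDNN(cudnnCreate(&handle));

    // Describe a batch of 64 activations, 256 channels, 13x13 each,
    // stored in GPU memory as 16-bit floats instead of 32-bit floats.
    cudnnTensorDescriptor_t act;
    CHECK_CUDNN(cudnnCreateTensorDescriptor(&act));
    CHECK_CUDNN(cudnnSetTensor4dDescriptor(act,
                                           CUDNN_TENSOR_NCHW,
                                           CUDNN_DATA_HALF,   // FP16 storage
                                           64, 256, 13, 13));

    printf("FP16 tensor descriptor created\n");

    CHECK_CUDNN(cudnnDestroyTensorDescriptor(act));
    CHECK_CUDNN(cudnnDestroy(handle));
    return 0;
}
```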
The latest generation of the CUDA toolkit, version 7.5, adds the same 16-bit floating point data storage support, though currently only Nvidia’s Tegra X1 GPU offers that capability.
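At the CUDA level, the same storage trick appears as the half-precision types in cuda_fp16.h: values are kept in GPU memory as 16-bit floats and widened to 32-bit floats for the arithmetic. The kernel below is an illustrative sketch along those lines, not Nvidia sample code, and assumes a toolkit and GPU that expose the __half conversion intrinsics.

```
#include <cstdio>
#include <cuda_runtime.h>
#include <cuda_fp16.h>

// Scale a buffer that is *stored* as 16-bit floats. Each value is widened
// to FP32 for the multiply, then narrowed back to FP16 for storage.
__global__ void scale_half(__half *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = __float2half(__half2float(data[i]) * factor);
}

int main()
{
    const int n = 1024;

    // Half the memory of an equivalent FP32 buffer: 2 bytes per element.
    __half *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(__half));
    cudaMemset(d_data, 0, n * sizeof(__half));  // all zeros is a valid FP16 pattern

    scale_half<<<(n + 255) / 256, 256>>>(d_data, n, 0.5f);
    cudaDeviceSynchronize();

    printf("scaled %d FP16 values (%zu bytes) on the GPU\n",
           n, n * sizeof(__half));

    cudaFree(d_data);
    return 0;
}
```

A buffer declared this way takes half the memory of its FP32 equivalent, which is the doubling of effective capacity that the cuDNN change relies on.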
The DIGITS 2 preview release is available now, while the cuDNN 3 library will be available in major deep learning frameworks in the coming months, according to Nvidia officials.