SAN JOSE, Calif.—Nvidia officials are giving researchers and developers the tools they need for their work in the growing field of deep learning.
During his March 17 keynote address at the GPU Technology Conference 2015 (GTC) here, Nvidia CEO Jen-Hsun Huang unveiled several new products, including the latest GeForce GPU—the GTX Titan X—as well as software and a hardware appliance aimed at developers and data scientists researching deep-learning technologies. In addition, Huang announced a compute platform for self-driving cars and gave some details on Pascal, the next-generation GPU architecture due next year.
The new and upcoming offerings feed into Nvidia’s efforts around deep learning—a branch of machine learning—which is based on the idea of giving computer systems the ability to leverage growing databases of information and neural networks to learn and improve over time. During his talk, Huang focused much of the deep-learning discussion on image recognition, but said the technology will have applications in a broad range of areas, from voice recognition and self-driving cars to medical research and search.
GPUs, with their massively parallel computing capabilities, will play a key role in the development of deep learning, the CEO and other company executives said.
“The topic of deep learning is probably as exciting an issue as any in this industry,” Huang told the more than 4,000 attendees at the show during his two-hour keynote, which was devoted entirely to the subject.
Nvidia has been pushing into the high-performance computing (HPC) and scientific computing fields for many years, in large part through its development of GPU accelerators. HPC organizations have increased their use of GPU accelerators from Nvidia and Advanced Micro Devices—as well as x86 co-processors from Intel—to improve the performance of their systems while holding down power consumption.
The growth for the company has been rapid, Huang said. In 2008, there were 150,000 downloads of CUDA—Nvidia’s accelerated computing platform—6,000 Tesla GPUs sold and 27 CUDA applications available. Now, all told, there have been 3 million CUDA downloads, 319 CUDA applications are available, and 450,000 Tesla GPUs power supercomputers in HPC environments.
The key for many Nvidia customers is speed—how fast they can run their workloads and how fast they can get results, he said.
“Without speed, you can’t do the work you want,” Huang said.
Speed is a driver behind the new Titan X GPU, he said. Titan X comes with 8 billion transistors, 3,072 CUDA cores and up to 7 teraflops of peak single-precision performance. The GPU, based on the Maxwell architecture, also offers up to 12GB of memory, doubling the capacity of current GPUs.
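The quoted 7-teraflop figure follows from the core count and clock rate; a quick sanity check (the boost clock used here is an assumption, not a figure from the keynote):

```python
# Peak single-precision throughput: each CUDA core can retire one
# fused multiply-add (FMA = 2 floating-point ops) per clock cycle.
cuda_cores = 3072              # Titan X (Maxwell), per the announcement
clock_ghz = 1.08               # assumed boost clock; not stated in the keynote
ops_per_core_per_clock = 2     # one FMA counts as 2 ops

peak_tflops = cuda_cores * ops_per_core_per_clock * clock_ghz / 1000
print(f"~{peak_tflops:.1f} TFLOPS single-precision")
```

At an assumed ~1.08GHz this works out to roughly 6.6 TFLOPS, in line with the "up to 7 teraflops" peak Nvidia cites.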
The chip, which is available now and costs $999, is aimed first at the gaming space, but Huang and other executives said its capabilities fit in easily with the deep learning field. Pointing to the AlexNet neural network, Huang said it took Titan X three days to train the network using the 1.2 million-image ImageNet data set, as compared with the 43 days it took a 16-core Intel Xeon processor.
In addition, Huang showed off Nvidia’s new Digits DevBox, an appliance armed with four Titan X GPUs and the company’s Digits Deep Learning GPU Training System software. The system is aimed at helping developers and researchers more easily develop neural networks. Huang said he doesn’t expect to sell a lot of the Digits DevBoxes, which will cost $15,000 when they start shipping in May. It’s meant for developers, not the general public.
“It’s not meant to be a business,” the CEO said. “It’s meant to help you.”
The systems will be built one at a time, and those interested can go to the Nvidia website and essentially apply to get one.
Nvidia also is offering another platform, the Drive PX, for car makers looking to build self-driving cars. It will be available in May for $10,000 and is powered by two of the company’s Tegra X1 chips. Nvidia officials first talked about the platform at the Consumer Electronics Show (CES) in January.
Huang said the development platform will be a complement to the advanced driver assistance systems (ADAS) found in cars now. ADAS warns drivers when their car is drifting into another lane or stops the car automatically before it hits another vehicle.
The next generation of ADAS will include technology—both hardware and software that can be updated via the cloud—that will essentially enable cars to learn from their experiences. The Drive PX platform “will augment ADAS software with deep-learning networks,” Huang said.
The platform is another example of Nvidia’s transition away from just being a chip maker to being a solutions provider, according to Danny Shapiro, senior director of Nvidia’s automotive business.
“It’s not just computers,” Shapiro said during a question-and-answer session after the keynote. “It’s the complete system.”
Huang also took a look at the road map for Pascal, a new architecture due in 2016 that will bring three new features—mixed-precision computation, three-dimensional (stacked) memory and the NVLink interconnect—that together will enable Pascal to offer 10 times the performance of the current Maxwell architecture.
Mixed-precision capabilities will lead to better performance, while 3D memory increases bandwidth, according to the company. In Pascal’s case, that means about 750GB/s, more than double the roughly 350GB/s of current GPUs, according to Huang. NVLink greatly increases the number of GPUs that can be linked together, from four to 64. Having all three features together is what matters, he said.
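The bandwidth and interconnect gains Huang cited are easy to put in perspective (figures as quoted in the keynote):

```python
# Pascal's quoted memory-bandwidth gain over today's GPUs.
current_bw = 350   # today's GPUs (figure as quoted)
pascal_bw = 750    # Pascal target (figure as quoted)
bw_gain = pascal_bw / current_bw
print(f"Bandwidth gain: ~{bw_gain:.1f}x")

# NVLink raises the number of linkable GPUs from 4 to 64.
link_gain = 64 / 4
print(f"Linkable GPUs: {link_gain:.0f}x more")
```

Bandwidth alone roughly doubles; it is the combination of more capacity, more bandwidth and wider GPU scaling that underpins the claimed 10x overall gain.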
“Getting more bandwidth is easy,” Huang said. “Getting more capacity is easy. Getting more capacity and more bandwidth is really, really hard.”