New Nvidia Chips Enable Big Leap in Deep Learning, Virtual Reality

Nvidia took the wraps off a new server, the DGX-2, which it says delivers breakthrough performance for deep-learning applications in a smaller, more energy-efficient package than was previously required for similar performance.


SAN JOSE, CALIF.—Chipmaker Nvidia made a series of big product announcements here at its annual GPU Technology Conference (GTC) for developers, highlighted by faster graphics chips, a workstation designed to compete with far more expensive supercomputers, and a new high-powered server.

A clearly energized CEO Jensen Huang demonstrated during his two-hour keynote how far the company’s products have advanced beyond the graphics chips on which it first made its name.

“Nvidia is an architecture company,” said Huang, noting the company now has 370 partners working on self-driving car systems based on an Nvidia open platform designed to help development of autonomous cars, trucks and taxis.

Huang showed off the new Nvidia Drive Constellation, a computing platform based on two different servers. The first server runs the new Nvidia Drive Sim software that simulates a self-driving vehicle's sensors, including cameras, lidar and radar. 

The second server contains the Nvidia Drive Pegasus AI car computer, which runs the complete autonomous-vehicle software stack and processes huge amounts of simulated data as if it were coming from the sensors of a car driving on the road through a range of conditions, including traffic jams, nighttime driving and inclement weather. 

Autonomous Cars a Major Use Case

Huang noted that car companies are limited in the amount of real world testing they can do. He said it takes a year for a fleet of 20 autonomous cars to drive a million miles. “With 10,000 Constellations, we could cover 3 billion miles in a year,” he said.
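The scale of that claim can be checked with simple arithmetic; the sketch below uses only the figures quoted in the keynote, with the per-car and per-Constellation rates derived from them:

```python
# Back-of-the-envelope check of the figures from Huang's keynote.
# The fleet size, mileage, and Constellation counts come from the article;
# the per-unit rates are derived here for illustration.

real_fleet_cars = 20
real_fleet_miles_per_year = 1_000_000  # "a million miles" in a year
miles_per_car = real_fleet_miles_per_year / real_fleet_cars  # 50,000

constellations = 10_000
simulated_miles_per_year = 3_000_000_000  # "3 billion miles"
miles_per_constellation = simulated_miles_per_year / constellations  # 300,000

# Overall speedup of the simulated fleet versus the real 20-car fleet.
speedup = simulated_miles_per_year / real_fleet_miles_per_year

print(f"miles per car:           {miles_per_car:,.0f}")
print(f"miles per Constellation: {miles_per_constellation:,.0f}")
print(f"speedup vs. real fleet:  {speedup:,.0f}x")
```

By these numbers, each Constellation simulates roughly six times the annual mileage of a single test car, and the full 10,000-unit deployment covers 3,000 times the miles of the physical fleet.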

Nvidia’s announcement follows the recent news that a pedestrian was struck and killed by an autonomous car under test, even though a human safety driver was monitoring the vehicle. Huang took note of the tragedy.

“Safety is the single most important thing, probably the hardest computing problem the world has ever encountered,” he said. “The accident last week shows that we have to solve this problem step by step. There is so much at stake because we have an opportunity to save so many lives in the future.”

Nvidia announced a major upgrade to its Tesla V100 data center GPU, doubling the memory to 32GB. It also announced a new GPU interconnect fabric called Nvidia NVSwitch that enables up to 16 Tesla V100 GPUs to communicate simultaneously at record speeds as high as 2.4 terabytes a second.

HPC Stands to Benefit Greatly from New GPUs

Huang said the Tesla V100 GPUs will help data scientists develop more sophisticated deep-learning models with greater accuracy. High-performance computing apps will also benefit from having double the memory of the previous 16GB version.

Cray, Hewlett Packard Enterprise, IBM, Lenovo, Supermicro and Tyan announced they will begin rolling out new Tesla V100 32GB systems later this spring. Oracle Cloud Infrastructure also announced plans to offer Tesla V100 32GB GPUs in the cloud in the second half of the year.

Huang also showed off the new Quadro GV100 GPU with Nvidia RTX technology that he said will make real-time ray tracing available to millions of artists and designers for the first time.

The new processor is designed to let media and entertainment professionals see and interact with their creations with correct light and shadows. They will also be able to perform complex renders up to 10 times faster than with a CPU alone.

Other potential customers include product designers and architects who will be able to create interactive, photo-real visualizations of massive 3D models in real time.

High-End Image Rendering Now Much Faster

Huang estimated that about a billion images are rendered every year, but that number could rise substantially with the advent of the Quadro GV100 GPU. “We think the estimate will jump by a factor of 10, and you will do the rendering faster and most importantly save money,” said Huang.

The keynote wrapped up with a preview of Project Holodeck, a virtual-reality project Nvidia also demonstrated at last year’s conference. In the demo, a driver in a remote location uses a virtual car to drive a real car.

“This project is about thinking through the entire path of how to deploy self-driving cars into the world,” said Huang. “The driver is the backup system, but in a driverless car who is the backup? We know VR has the ability to teleport us into a new world. Imagine a car in the world somewhere that needs help.”

The demonstration drew applause from the crowd. It was also a bit of life imitating art. There’s a scene in the new "Black Panther" movie where a special agent is able to fly a real fighter aircraft by operating a virtual one in a fixed location.

David Needle


Based in Silicon Valley, veteran technology reporter David Needle covers mobile, big data, and social media, among other topics. He was formerly News Editor at Infoworld, Editor of Computer Currents...