Nvidia officials are looking to press their advantage in the fast-growing artificial intelligence space with the introduction of the company’s new Tesla T4 GPUs and a new platform and software aimed at the inference side of the AI equation.
At the vendor’s GTC technology conference in Tokyo this week, Nvidia CEO Jensen Huang showed off the new GPU, which is based on the company’s Turing architecture introduced last month. The first Turing GPUs unveiled at that time were aimed primarily at gamers. At the GTC Japan event, Huang turned his attention to the data center, including hyperscale environments.
Along with the Tesla T4 GPUs, the CEO announced a new version of the company’s TensorRT inference software, meant to help drive the development of voice, video, image and recommendation services, and the TensorRT Hyperscale Inference Platform, which is powered by the T4 GPUs and aimed at accelerating inference tasks in such industries as automotive, manufacturing, robotics and health care.
The news of the Turing-based GPUs and other offerings aimed at inference jobs came amid a rush of announcements by Huang that hit on everything from self-driving cars to autonomous medical devices. It’s part of Nvidia’s larger, years-long drive to leverage the parallel-processing capabilities of its GPUs to push the company into a leadership position in the emerging market of AI and its subsets, machine learning and deep learning.
AI essentially can be divided into two halves: training and inference. Training involves pushing massive amounts of data through neural networks to help them learn. Inference refers to taking what the neural networks have learned and putting it to use on new data. A simple analogy is the school system: students go through years of school to learn (the training) and then put what they learn to use when they go out into the world (the inference side). A few years ago, GPUs were mostly used for training while CPUs handled inference.
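For readers who think in code, the toy PyTorch sketch below makes the split concrete. The tiny linear model and the random data are hypothetical stand-ins used purely to show the two phases, not anything Nvidia demonstrated.

```python
import torch
import torch.nn as nn

# Toy stand-in for a real dataset: batches of random features and labels.
train_data = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(50)]

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: push labeled data through the network and adjust the
# weights after each batch -- the "years of school".
for inputs, labels in train_data:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

# Inference: freeze the learned weights and apply them to new data --
# the graduate putting the schooling to use.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10)).argmax(dim=1)
print(prediction)
```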
However, in recent years, Nvidia, like rival Intel and others, has sought to enable its chips to be used for both. It’s not surprising: company executives expect the inference market to grow to $20 billion within the next five years, and to take full advantage of the larger AI market, Nvidia will need to be able to do both training and inference.
AI features increasingly are being built into applications to enable capabilities like natural language processing, image recognition and recommendations, such as those offered by companies like Amazon and Google. Companies will increasingly use those features to drive new capabilities in their businesses.
“There is no question that deep learning-powered AI is being deployed around the world, and we’re seeing incredible growth here,” Huang said during his keynote address. “The number of applications that are now taking advantage of deep learning are growing exponentially. Hyperscale data centers can’t run just one thing; they have to run everything.”
Nvidia two years ago rolled out the Tesla P4 GPU, the T4’s predecessor, which was designed specifically for AI tasks. According to Nvidia, the T4, which is designed for scale-out servers, delivers up to 12 times the P4’s performance in the same power envelope, including being five times faster in speech recognition inference and three times faster in video inference. It comes loaded with 2,560 CUDA cores, 320 Turing Tensor Cores, 16GB of GDDR6 memory and 320 GB/s of memory bandwidth.
Nvidia officials said the T4 GPUs, packaged in a PCIe form factor that draws 75 watts of power, offer up to 40 times the inference performance of CPUs.
The TensorRT Hyperscale Inference Platform combines the T4 GPUs with new inference software and can run multiple deep learning models and frameworks at the same time.
“As a result, the usefulness and utilization go way up,” Huang said. “If each node can run any model at the same time, then the utilization of this server will be maximum.”
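For a flavor of what such inference software does, the sketch below follows the common TensorRT workflow of the time: import a model trained in another framework, let the builder optimize it for the target GPU, and get back an engine that serves requests. It’s a minimal illustration assuming the TensorRT 5-era Python API and a hypothetical model.onnx file, not Nvidia’s literal deployment recipe; exact API details vary across TensorRT releases.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    """Rough sketch of the TensorRT 5-era flow; APIs differ by version."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Import a network that was already trained in another framework.
    with open(onnx_path, "rb") as f:
        parser.parse(f.read())

    builder.max_batch_size = 8   # sized for the serving workload
    builder.fp16_mode = True     # reduced precision for Turing Tensor Cores
    return builder.build_cuda_engine(network)

# "model.onnx" is a hypothetical exported model, used here for illustration.
engine = build_engine("model.onnx")
```

The optimized engine, rather than the original framework graph, is what gets deployed on the serving node, which is how one GPU can host models exported from several different frameworks.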
Nvidia officials said Google plans to use the T4 GPUs in the systems that power its cloud platform, and a range of other vendors, including Microsoft, Cisco Systems, Dell Technologies, IBM and Hewlett Packard Enterprise, have also voiced support for the platform.