Lenovo to Expand Use of Nvidia GPUs for HPC, AI, Deep Learning

The vendor this week also unveiled new offerings to support such Microsoft technologies as Azure, Windows Server 2016 and Storage Spaces Direct.

Nvidia GPU

Lenovo officials this week put a focus on the company's data center infrastructure offerings with new capabilities in such areas as high-performance computing, artificial intelligence, deep learning and—in conjunction with Microsoft—the cloud.

The company announced that it is expanding its use of the latest GPU accelerators from Nvidia to boost performance and power efficiency in systems designed for high-performance computing (HPC), artificial intelligence (AI) and virtual desktop infrastructure (VDI). Lenovo is looking to grow its presence in the expanding HPC and supercomputer space, where GPUs and other accelerators are becoming important components for driving system performance while keeping power consumption down.

In the latest Top500 list of the world's fastest supercomputers, 93 systems used accelerators or coprocessors, with 63 of them using Nvidia GPUs. Lenovo will use Nvidia's Tesla P100, P40 and P4 GPUs in servers aimed at HPC and newer deep learning workloads, and the chip maker's Tesla M10 GPU and GRID technology for VDI environments.

Lenovo's embrace of the latest Nvidia GPUs comes as interest in AI and deep learning grows among users, according to Pat Moakley, director of Flex System product marketing at Lenovo.

"With customers increasingly looking at application areas like deep learning or artificial intelligence (AI), they require the raw compute power housed in GPU accelerators because ordinary CPUs are not able to handle these workloads efficiently," Moakley wrote in a post on the company blog. "Currently, Lenovo customers across the spectrum of HPC and enterprise are looking to AI and deep learning as key pillars of their future."

Nvidia in April introduced the massive P100 GPU, which packs 15 billion transistors and is built for data center and cloud environments. Earlier this month, company officials unveiled the P4 and P40 (pictured), which are aimed at the part of the deep learning process called "inference," an area that has been the domain of CPUs from Intel. The newest GPUs are part of Nvidia's larger push into the fast-growing AI and deep learning spaces; all of them are based on Nvidia's Pascal architecture.

Deep learning essentially has two parts: training (where neural networks are taught such tasks as object identification) and inference (where they use this training to recognize and process unknown inputs, such as Siri understanding a user's question and then responding correctly). Most training is done with GPUs, while most inference work is done with CPUs. However, Nvidia is looking to push its GPUs into the inference space, while Intel wants x86 chips to be used for training as well.
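The two-phase split described above can be made concrete with a deliberately tiny sketch: a single linear "neuron" stands in for a real neural network, with a training function that fits parameters by gradient descent and an inference function that applies them to new input. The function names and toy data are illustrative, not drawn from any vendor's software.

```python
# Toy illustration of the two deep learning phases: "training" adjusts
# parameters from labeled examples, while "inference" applies the learned
# parameters to unseen input. A single linear neuron is a deliberate
# simplification of a real neural network.

def train(samples, epochs=200, lr=0.05):
    """Training phase: fit weight w and bias b with gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b          # forward pass
            err = pred - y            # prediction error
            w -= lr * err * x         # gradient step on the weight
            b -= lr * err             # gradient step on the bias
    return w, b

def infer(w, b, x):
    """Inference phase: apply the trained parameters to new input."""
    return w * x + b

# Training data follows y = 2x + 1; inference on x = 10 lands near 21.
model = train([(0, 1), (1, 3), (2, 5), (3, 7)])
print(infer(*model, 10))
```

Training is the compute-heavy loop of repeated passes over data, which is why it maps well to the parallel throughput of GPUs; inference is a single forward pass per input, which is why it has historically been left to CPUs.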

Lenovo's Moakley wrote that the Tesla P100 will be used in PCIe-based servers in HPC and mixed-use data center environments. He noted that the GPU delivers up to 4.7 teraflops of double-precision performance and that a single P100 node can replace up to 32 traditional CPU nodes. The P4 and P40 GPUs will be used in systems running deep learning inference tasks, he wrote.

In VDI deployments, workloads are shifting to increasingly graphics-rich applications and operating systems, including Microsoft's Office 2016/365 and Windows 10. Lenovo's new VDI offerings, armed with Nvidia's Tesla M10 GPUs and GRID platform, are designed to better support those graphics demands.