Lenovo officials this week put a focus on the company’s data center infrastructure offerings with new capabilities in such areas as high-performance computing, artificial intelligence, deep learning and—in conjunction with Microsoft—the cloud.
The company announced that it is expanding its use of the latest GPU accelerators from Nvidia to boost performance and power efficiency in systems designed for high-performance computing (HPC), artificial intelligence (AI) and virtual desktop infrastructure (VDI). Lenovo is looking to build its presence in the growing HPC and supercomputer space, where GPUs and other accelerators are becoming important components for driving system performance while keeping power consumption down.
In the latest Top500 list of the world’s fastest computers, 93 systems used accelerators or coprocessors, with 63 of them using Nvidia GPUs. Lenovo will use Nvidia’s Tesla P100, P40 and P4 GPUs in servers aimed at HPC and newer deep learning workloads, and the chip maker’s Tesla M10 GPU and GRID technology for VDI environments.

Lenovo’s embrace of the latest Nvidia GPUs comes as interest in AI and deep learning grows among users, according to Pat Moakley, director of Flex System product marketing at Lenovo.
“With customers increasingly looking at application areas like deep learning or artificial intelligence (AI), they require the raw compute power housed in GPU accelerators because ordinary CPUs are not able to handle these workloads efficiently,” Moakley wrote in a post on the company blog. “Currently, Lenovo customers across the spectrum of HPC and enterprise are looking to AI and deep learning as key pillars of their future.”
Nvidia in April introduced the massive P100 GPU, which packs 150 billion transistors and is built for data center and cloud environments. Earlier this month, company officials unveiled the P4 and P40 (pictured), which are aimed at the part of the deep learning process called “inference,” which has been the domain of CPUs from Intel. The newest GPUs are part of Nvidia’s larger push into the fast-growing AI and deep learning spaces. All the latest GPUs are based on Nvidia’s Pascal architecture.
Deep learning essentially has two parts: training (where neural networks are taught such tasks as object identification) and inference (where they use this training to recognize and process unknown inputs, such as Siri understanding a user’s question and then responding correctly). Most training is done with GPUs, while most inference work is done with CPUs. However, Nvidia is looking to push its GPUs into the inference space, while Intel wants x86 chips to be used for training as well.
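The two phases can be illustrated with a toy sketch (an assumption for illustration only, not Lenovo’s or Nvidia’s actual software stack): a single logistic neuron is first trained on the OR truth table by gradient descent, the compute-heavy step that GPUs accelerate, and the frozen weights are then used for inference, the lightweight forward pass that has traditionally run on CPUs.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# --- Training phase: repeatedly adjust weights via gradient descent ---
# (this is the math-heavy step that GPU accelerators speed up at scale)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), target in data:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - target  # gradient of the logistic (cross-entropy) loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# --- Inference phase: weights are frozen; just a cheap forward pass ---
def infer(x1, x2):
    return 1 if sigmoid(w[0] * x1 + w[1] * x2 + b) >= 0.5 else 0

print([infer(a, c) for a, c in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 1, 1, 1]
```

Real deep learning models have millions of such weights, which is why training is batched onto GPUs while a single inference call can still be served by a CPU, the boundary Nvidia and Intel are now each trying to cross.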
Lenovo’s Moakley wrote that the Tesla P100 will be used in PCIe-based servers in HPC and mixed-use data center environments. He noted that the GPU delivers up to 4.7 teraflops of double-precision performance and that a single P100 node can replace up to 32 traditional CPU nodes. The P4 and P40 GPUs will be used in systems running deep learning inference tasks, he wrote.
In VDI deployments, workloads are moving to increasingly graphics-rich applications and operating systems, including Microsoft’s Office 2016/365 and Windows 10. Lenovo’s new VDI offerings, armed with Nvidia’s Tesla M10 GPUs and GRID platform, are designed to better support those graphics demands.
In conjunction with Microsoft’s Ignite 2016 show this week, Lenovo announced new and enhanced offerings developed with the software giant for such technologies as the Azure cloud platform and Windows Server 2016.
“Leveraging Lenovo’s industry-leading servers and Microsoft’s built-for-the-cloud operating system, our shared customers can run their business on a trusted software-defined cloud-inspired infrastructure,” Brian Connors, vice president and general manager of strategic technologies and business development at Lenovo, said in a statement.
Among the new offerings is Lenovo’s cloud configuration for Microsoft’s Storage Spaces Direct. Lenovo is part of Microsoft’s Windows Server Software-Defined program and is offering hyperconverged Storage Spaces Direct and other software-defined data center (SDDC) products. The latest combines Windows Server 2016 capabilities with Lenovo’s rack servers to create cloud-based infrastructure products for Microsoft’s Hyper-V and SQL database workloads, officials said.
Lenovo updated its database configurations for Microsoft’s SQL Server Data Warehouse Fast Track and introduced a new Cloud Configuration for Hyper-V, which runs on the vendor’s x3650 M5 two-socket servers. Other configurations use Lenovo’s Flex System and x3850 X6 servers, so customers can choose the configurations best suited to their needs.
Lenovo, which continues to build out its server portfolio after buying IBM’s x86 server business for $2.1 billion in 2014, also will run Microsoft’s Azure Stack on its systems. The Azure Stack, which will be available next year, enables enterprises to bring the benefits of the software maker’s public cloud into their data centers and create hybrid cloud environments. At the show, Hewlett Packard Enterprise and Dell Technologies also announced plans to run the Azure Stack on their systems.
In addition, the company introduced the entry-level ThinkServer RS160 for SMBs, a certified reference architecture for Apache Hadoop environments, and the D1212 and D1224 direct attach storage (DAS) appliances.