Ian Buck, vice president of accelerated computing at Nvidia, said those changes are happening quickly.
"Data center workloads are changing," Buck wrote in a post on the company blog. "Not long ago these systems were primarily used to handle storage and serve up web pages, but now they're increasingly tasked with AI [artificial intelligence] workloads like understanding speech, text, images and video or analyzing big data for insights. Billions of consumers want instant answers to a multitude of questions, while enterprise companies want to analyze mountains of data to better serve their customers' needs."
GPUs from Nvidia and Advanced Micro Devices, along with other accelerators, are being used by high-performance computing (HPC) organizations—and increasingly by enterprises—to improve system performance while controlling power consumption. Nvidia in April announced the massive P100 GPU, which uses the company's NVLink interconnect technology. Nvidia also offers the P100 in its own system, the DGX-1, which officials called a supercomputer for AI and deep learning; it includes eight of the GPUs and two Intel Xeon chips.
IBM, with its latest Power8 server, is the newest vendor to use the P100, but Nvidia officials have said they expect other systems makers—including Hewlett Packard Enterprise (HPE) and Dell—to roll out x86 systems with the GPU. The 2U (3.5-inch) Power S822LC runs on two Power8 chips offering up to 20 cores in total and can hold up to four Nvidia P100 GPUs. Along with NVLink, the system also offers PCIe 3.0 and CAPI interconnect capabilities. It comes with up to 1TB of memory and can support hard drives or solid-state drives (SSDs) for storage.
Boday said the new server can cost as much as 30 percent less in some configurations when compared with x86-based servers. Pricing begins at $5,999.
The announcement of the S822LC comes a week after IBM officials gave more details about the company's upcoming Power9 processors, which will offer up to 24 processing cores and will be able to run an array of accelerators, including GPUs, FPGAs and application-specific integrated circuits (ASICs). The architecture will also support Nvidia's upcoming NVLink 2.0 as well as PCI Express 4.0.