SEATTLE - The computing industry’s march toward exascale capabilities will be hindered unless technology vendors can solve the growing problem of power consumption, according to the head of Nvidia, a company that has benefited from the drive for greater energy efficiency.
During a keynote address Nov. 15 at the SC 11 supercomputing show here, Jen-Hsun Huang, president and CEO of Nvidia, called exascale computing “the next frontier for our industry.” Exascale computing will enable faster, more powerful high-performance computing (HPC) applications in such industries as energy, medicine and defense.
The industry’s goal is to reach the exaflop level of computation by 2019 while staying within a 200-megawatt power limit. That would require a significant leap forward: the fastest computer in the world today, Fujitsu’s K Computer, has reached the 10.51-petaflop (quadrillion floating point operations per second) level. Reaching the exaflop level would require about 100 times better performance.
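An exaflop is 1,000 petaflops, so the back-of-the-envelope arithmetic behind that figure is:

\[
\frac{1\,\text{exaflop}}{10.51\,\text{petaflops}} = \frac{1000\,\text{petaflops}}{10.51\,\text{petaflops}} \approx 95
\]

or close to two orders of magnitude above today’s fastest machine.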
It’s an attainable goal, Huang said, but only if the power puzzle is solved.
“Supercomputing is now power-limited, just like a cell phone, just like a tablet,” he said. “This is our gravity. … Energy efficiency is our single critical imperative.”
The issue of power efficiency in systems isn’t new, and it isn’t only a supercomputing issue. The demand for servers that consume less power is coming from all sectors, and it has led Intel and Advanced Micro Devices (AMD) to rapidly increase the energy efficiency of their x86-based chips, adding cores to improve performance while introducing better power management capabilities and other features.
However, while CPUs like those from Intel and AMD are fast, they’re also complex and are made for single-thread computation, Huang said. What will drive HPC going forward is parallel processing, and what will propel that are graphics processing units, or GPUs. Nvidia is by far the world’s top GPU vendor.
Its graphics chips were initially used in game consoles and workstations, but in recent years researchers and scientists have begun using them for HPC workloads because of their high performance, parallel processing capabilities and low power consumption. GPUs also are being used in conjunction with CPUs to help accelerate application speeds, a trend that is growing fast. In June, 17 of the systems on the Top500 list of the world’s fastest supercomputers used accelerators; on the list released Nov. 14, 39 used accelerators, most of them Nvidia GPUs.
Recent announcements, including several made at SC 11, illustrate the increasing popularity of graphics accelerators in HPC environments. Nvidia and several European supercomputing centers will build a hybrid supercomputer, one that uses both CPUs and GPUs, at the Barcelona Supercomputing Center; officials said the system eventually will enable exascale computing while using 15 to 30 percent less power than one built on traditional chips.
In addition, supercomputer maker Cray said it will take over the National Science Foundation’s Blue Waters project, which IBM dropped in August over concerns about costs and technical details, and build a massive system offering a sustained performance of 1 petaflop. The supercomputer will be based on Cray’s new XK6 systems, which will be powered by AMD’s 16-core Opteron 6200 “Interlagos” chips and will also include GPUs from Nvidia.
Parallel computing is difficult, Huang said, and a key challenge is making the move to a parallel environment easier. Nvidia is teaming up with Cray, The Portland Group and CAPS entreprise on OpenACC, an effort to create a standard for directive-based parallel programming. The goal is to enable researchers, scientists and corporations to run applications in parallel on heterogeneous CPU/GPU systems: programmers add directives to their code, and the compiler does the work of optimizing the application for GPU-accelerated environments.
“It’s going to bring a lot more people to parallel computing,” Huang said.
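As a rough illustration of that directive-based model (a minimal sketch, not taken from the keynote; the saxpy routine and array sizes here are only illustrative), a conventional C loop can be marked for acceleration with a single OpenACC pragma, leaving the parallelization and GPU offload to the compiler:

```c
#include <stdio.h>

/* SAXPY: y = a*x + y over two arrays. The OpenACC directive below asks an
 * OpenACC-aware compiler to parallelize the loop and offload it to a GPU,
 * handling the data movement itself. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    enum { N = 1 << 20 };          /* illustrative problem size */
    static float x[N], y[N];

    for (int i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    saxpy(N, 3.0f, x, y);
    printf("y[0] = %f\n", y[0]);   /* expect 5.0 */
    return 0;
}
```

A compiler that doesn’t understand OpenACC simply ignores the pragma and runs the loop serially, which is part of the appeal for porting existing code.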