The GPU maker is giving away its PGI OpenACC compiler to academic researchers, while making it available to commercial developers in a 90-day trial.
Nvidia wants to make it easier for academic researchers to adopt the OpenACC standard for running parallel computing workloads.
With the International Supercomputing Conference (ISC) going on this week in Germany, Nvidia officials announced that the GPU maker was releasing its OpenACC Toolkit, a free offering that comes with an array of OpenACC parallel programming tools.
Already, more than 8,000 researchers and scientists have adopted the OpenACC programming standard since it was introduced four years ago by such high-performance computing (HPC) vendors as Cray and Nvidia, according to Nvidia officials. The standard is designed to make it easier for users to adopt parallel computing to speed up the performance of their workloads.
By making the OpenACC Toolkit available for free, Nvidia is hoping to convince more users in the HPC space to embrace graphics technology in their computing environments.
The toolkit "has everything a developer needs to get up and running on GPU computing," Roy Kim, group marketing manager at Nvidia, told eWEEK.
OpenACC is designed to enable code written in C, C++ and Fortran to be offloaded from the CPU to an attached accelerator, such as GPUs from Nvidia or Advanced Micro Devices, or Intel's many-core x86 Xeon Phi coprocessors, to help boost application performance.
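As a sketch of what this looks like in practice: OpenACC developers annotate ordinary loops with compiler directives, and an OpenACC-aware compiler generates accelerator code for those regions. The SAXPY routine below is a hypothetical minimal example, not code from Nvidia's toolkit; a compiler without OpenACC support simply ignores the pragma and runs the loop serially on the CPU, producing the same result.

```c
#include <stddef.h>

/* SAXPY (y = a*x + y): a classic candidate loop for OpenACC offload.
 * The single directive asks the compiler to parallelize the loop on
 * the attached accelerator; non-OpenACC compilers ignore the pragma
 * and execute the loop serially, so the numerical result is unchanged. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    #pragma acc parallel loop
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

With the PGI compilers in the toolkit, a file like this would typically be built with the `-acc` flag (for example, `pgcc -acc -Minfo=accel saxpy.c`), with `-Minfo=accel` reporting which loops the compiler accelerated.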
The toolkit, which can be downloaded immediately, includes the PGI Accelerator Fortran/C Workstation Compiler Suite for Linux, which supports the OpenACC 2.0 standard. The compiler suite is free to academic researchers and programmers, and is available to commercial developers in a free 90-day trial. The toolkit also includes the NVProf Profiler, which helps users determine where to add OpenACC directives to accelerate their code, as well as code samples and documentation.
An important upcoming feature of the PGI OpenACC compiler is that it will speed up OpenACC code not only on GPUs, but also on multicore x86 CPUs, according to Kim. That means that if an organization has a system without GPUs, the compiler can still parallelize the code across x86 CPU cores and boost performance.
Systems using a GPU see a performance boost of five to 10 times over systems with only CPUs. The x86 CPU portability feature is in beta with select customers now, and will be available in the fourth quarter, according to Nvidia officials.
"This is a really powerful feature that a lot of developers are looking for," Kim said.
Organizations in the HPC space are increasingly using parallel computing to run their workloads faster and more efficiently. By leveraging GPU accelerators from Nvidia or AMD, or Intel's Xeon Phi coprocessors, they can boost the performance of their systems while keeping power consumption in check. Workloads are broken up, with the various tasks running in parallel on the accelerator cores before being brought back together once the work is done.
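The split-and-recombine pattern described above maps directly onto an OpenACC reduction. The dot-product routine below is an illustrative sketch, not code from Nvidia's toolkit: the loop iterations run in parallel across accelerator cores, each producing partial sums, and the `reduction(+:sum)` clause combines those partial results into a single value once the work is done.

```c
#include <stddef.h>

/* Dot product as a parallel reduction: each accelerator core handles a
 * slice of the iterations, accumulating its own partial sum, and the
 * reduction(+:sum) clause merges the partial sums into one result --
 * the "brought back together" step. A compiler without OpenACC support
 * ignores the pragma and computes the same sum serially. */
float dot(size_t n, const float *x, const float *y)
{
    float sum = 0.0f;
    #pragma acc parallel loop reduction(+:sum)
    for (size_t i = 0; i < n; ++i)
        sum += x[i] * y[i];
    return sum;
}
```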
On the latest Top500 list of the world's fastest supercomputers, released this week, 88 of the systems used accelerator technology, up from 75 on the previous list from November 2014, according to organizers. More than half use Nvidia Tesla GPUs, followed by almost three dozen using Xeon Phi coprocessors and four using Radeon GPUs from AMD. Four more run a combination of Nvidia GPUs and Xeon Phis.