The high-performance computing space is becoming a key market for ARM, which is looking to expand the reach of its low-power silicon designs beyond smartphones and tablets.
Over the past couple of years, ARM and its chip-making partners have been making strides in a market that has been dominated by the likes of Intel and Advanced Micro Devices with their x86 processors, and IBM and its Power architecture.
ARM-based chip makers like Applied Micro and Cavium have been promoting their systems-on-a-chip (SoCs), built on the 64-bit ARMv8-A platform, to the high-performance computing (HPC) community for more than a year. ARM scored a big win a year ago when supercomputer maker Cray announced plans to evaluate the ARM architecture—among others—as part of the company's Adaptive Supercomputing initiative, which is aimed at creating a unified supercomputer architecture that can support a range of processing technologies, including GPU accelerators and Intel's Xeon Phi coprocessors.
The initiative dovetails with Cray's participation in the Department of Energy's (DoE) FastForward 2 project.
In March, ARM chip maker Cavium announced it was adding support for Nvidia's Tesla GPU accelerators to its ThunderX SoCs.
At the SC15 supercomputing show in Austin, Texas, which starts Nov. 15, ARM will showcase its latest move in the HPC space: new math libraries tuned for 64-bit processors built on the ARMv8-A architecture. The ARM Performance Libraries are key math routines designed to improve the performance of computational software running on ARM-based HPC systems, according to the company.
It's a move that illustrates the importance of software as ARM pushes to make inroads in the HPC space against Intel, according to Daniel Owens, product manager of compilation tools in the development solutions group at ARM.
“It’s quite critical because as companies [move] off x86 and onto ARM, it’s important that [software] performance is in there,” Owens told eWEEK during the company’s TechCon 2015 show last week.
HPC organizations are key early adopters of ARM-based servers, and the company's math libraries will mean that computational software running on these systems will run faster, officials said. The libraries leverage the specific microarchitecture innovations and features—from memory hierarchy to pipeline configuration—within each chip-making partner's SoCs to ensure the best system and software performance.
ARM's math libraries are based on the Numerical Algorithms Group (NAG) Library, which ARM officials said offers a baseline of tested numerical and statistical algorithms that can be used to create variants for the ARMv8-A architecture and quickly bring to market BLAS, LAPACK and FFT math routines optimized for ARM systems. The ARM libraries are built using the latest compilers, multithreaded, and optimized for the Advanced SIMD architecture.
ARM officials also want to make it easier for organizations to port their software to ARMv8-A-based platforms—such as servers powered by 64-bit ARM Cortex-A72 and Cortex-A57 SoCs—by offering binary distributions of such HPC open-source applications as ATLAS, OpenMPI, NumPy and TAU alongside the ARM Performance Libraries. Any changes that are needed to port the applications to the ARM architecture will be sent back to the repositories for other HPC organizations to use.
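To see why tuned BLAS, LAPACK and FFT routines matter to applications such as NumPy, consider a minimal sketch (the matrix sizes and operations here are illustrative, not taken from ARM's announcement). NumPy forwards dense linear algebra to whatever BLAS/LAPACK implementation it was built against, so linking it to an architecture-tuned library speeds up existing code without any source changes:

```python
import numpy as np

# NumPy delegates these operations to the BLAS/LAPACK build it is
# linked against; on an ARMv8-A system built against a tuned library,
# the same code runs faster with no source changes.
a = np.random.rand(256, 256)
b = np.random.rand(256, 256)

c = a @ b                        # dense matrix multiply -> BLAS (gemm)
eigvals = np.linalg.eigvals(c)   # eigenvalues -> LAPACK
spectrum = np.fft.fft(a[0])      # Fourier transform -> FFT routines

print(c.shape, eigvals.shape, spectrum.shape)
```

Which backend a given NumPy build uses can be checked with `numpy.show_config()`, which is one reason binary distributions of applications like NumPy prebuilt against the optimized libraries lower the porting barrier.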
The ARM Performance Libraries are available for licensing and support by ARM, according to the company.
The effort in HPC is part of a larger push by ARM to bring its 64-bit architecture to the server space. Like their enterprise counterparts, HPC organizations are looking at ARM SoCs for their low-power capabilities and because they want more competition among their chip suppliers, according to Darren Cepulis, data center architect and evangelist in ARM's server marketing business development unit.
The math libraries are an important part of that effort, said Javier Orensanz, director of marketing for development solutions at ARM. The libraries cover the key HPC applications that either didn’t run or didn’t run well on ARM, Orensanz told eWEEK.
“We think we are pretty much there,” he said. “We’re looking for any gaps so we can close them.”