Mobile Devices, Cloud, Applications Drive Server Design Diversity

 
 
By Jeff Burt  |  Posted 2013-12-31

However, ARM officials have pointed to the growth of open-source technologies in data centers, the company's strong partnerships and the wide support for ARM in the open-source community.

"Open source is the great equalizer," Lakshmi Mandyam, director of server systems and ecosystem for ARM, told eWEEK in April when HP unveiled a new Moonshot system. "I don't think the gap [between ARM and Intel in server processor technology] is as much as you might think."

The ARM community also will need to rebound from the recent collapse of Calxeda, a leading voice for ARM in the data center. Calxeda executives said the company's failure had more to do with timing—rolling out products before the industry was ready for them—than with the idea of ARM SoCs in servers. Patrick Moorhead, principal analyst with Moor Insights and Strategy, said it also had to do with how much change enterprises are willing to put up with.

"Data centers didn't want too many software transitions, from X86 to 32-bit ARM to 64-bit ARM," Moorhead told eWEEK. In the end, scale-out data centers were only open to one potential change. There is still a market desire for very dense servers and the technology that provides this, lower-power SoCs tied together by intelligent fabric. Intel has made huge advances here, but there are no less than 10 ARM-based companies focused on specialized silicon for specific workloads that are chomping at the bit to make inroads. It will be an interesting 2014 as 64-bit ARM servers make their presence."

AMD also is pushing its heterogeneous computing strategy, the idea of combining CPUs with GPUs, digital signal processors and other accelerators to increase server performance and power efficiency and to enable servers to handle increasingly parallel workloads. The foundation of the effort is AMD's accelerated processing units (APUs), which integrate a CPU and a GPU on the same piece of silicon.
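
How that CPU-plus-accelerator split looks to a programmer can be sketched without any vendor-specific API. The short C program below is purely illustrative, not AMD or HSA code: it uses standard OpenMP 4.x "target" directives to ask the runtime to run a vector addition on an attached GPU (or the graphics unit of an APU), falling back to the CPU cores if no accelerator is available.

```c
/* Illustrative sketch of CPU-plus-GPU offload; not an AMD or HSA API.
 * Build with an OpenMP 4.x-capable compiler, e.g. "cc -fopenmp vadd.c". */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 1 << 20;
    float *a = malloc(n * sizeof(float));
    float *b = malloc(n * sizeof(float));
    float *c = malloc(n * sizeof(float));

    for (int i = 0; i < n; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* "target" hands the loop to an attached accelerator if one exists;
     * the map clauses describe the data movement. On an APU, where CPU and
     * GPU share memory, the runtime can skip the copies entirely. */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:n], b[0:n]) map(from: c[0:n])
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    free(a);
    free(b);
    free(c);
    return 0;
}
```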

AMD and other chip makers, including ARM, Imagination Technologies, Qualcomm, Samsung and Texas Instruments, are key members of the Heterogeneous System Architecture (HSA) Foundation, which is working to create standards for system designs that leverage CPUs, GPUs and other accelerators.

Accelerators also are a point of contention in the high-performance computing (HPC) arena. AMD and Nvidia are promoting their respective GPU technologies as accelerators that help HPC systems increase performance without increasing power consumption, important factors as supercomputers and other such systems become more powerful and handle increasingly heavy workloads. During the SC '13 supercomputing show in November, both Nvidia and AMD unveiled new GPU acceleration technologies, and Nvidia announced that IBM will support GPU accelerators in its Power systems.

For its part, Intel is answering with its x86-based many-core Xeon Phi coprocessors, which are part of the chip maker's "neo-heterogeneity" initiative. Intel executives note that HPC environments will use both processors and coprocessors or accelerators, and they say Xeon Phi enables Intel to offer a common and familiar underlying programming model and tools. In November, Intel officials released details about the next generation of Xeon Phi, the 14-nanometer Knights Landing chips, which are due next year and will be able to serve either as coprocessors or as host processors.
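
The "common programming model" point can be made concrete with a hedged example. The C code below is a hypothetical sketch rather than Intel sample code: an ordinary OpenMP reduction that compiles unchanged for a standard Xeon host or, when built in the coprocessor's native mode, for the many-core Xeon Phi, which is the portability Intel is selling.

```c
/* Hypothetical sketch of the shared x86/OpenMP programming model; the
 * problem and loop body are illustrative, not taken from Intel code. The
 * same source builds for a Xeon host or for Xeon Phi in native mode. */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000L;  /* illustrative problem size */
    double sum = 0.0;

    /* A plain parallel reduction: the only difference between running on
     * a handful of host cores and on the coprocessor's 60-plus cores is
     * the compile target, not the source code. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++)
        sum += 1.0 / ((double)i * (double)i);

    printf("threads=%d  sum=%.12f\n", omp_get_max_threads(), sum);
    return 0;
}
```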

The use of coprocessors or accelerators is expected to grow in the HPC field. According to the compilers of the Top500 list of the world's fastest supercomputers, 53 systems on the November list use either GPU accelerators or coprocessors: 38 use Nvidia GPUs, two use AMD GPUs and 13 use Intel's Xeon Phi coprocessors.
