Amazon Web Services, which got into the processor business in 2015 when it bought Annapurna Labs, is now putting homegrown Arm chips into new cloud instances, a move that puts it ahead of other public cloud providers and is a significant step for Arm’s ambitions in the data center.
AWS officials made the announcement on the eve of the company’s re:Invent show this week in Las Vegas, taking the wraps off new EC2 instances powered by the new Graviton processors, which are based on Arm cores and include custom-built silicon. The A1 instances are aimed at workloads where cost and performance are important, according to Jeff Barr, chief evangelist for AWS.
“They are a great fit for scale-out workloads where you can share the load across a group of smaller instances,” Barr wrote in a blog post. “This includes containerized microservices, web servers, development environments, and caching fleets.”
Engineers with the massive public cloud provider have been working on processors since the Annapurna acquisition. In early 2016 the company announced a lineup of Arm-based Alpine systems-on-a-chip (SoCs) and associated technologies for system makers and service providers to use in connected-home devices, including gateways, WiFi routers and network-attached storage (NAS) systems, for jobs like video streaming, secure storage and the internet of things (IoT).
Barr said the company also has built and released two generations of ASICs for AWS’ Nitro system that offload EC2 system functions, enabling all of a server’s hardware to be devoted to customer instances. The Nitro system was designed to let AWS develop and launch new instance types more quickly, which this year has included such offerings as R5 and R5d instances for memory-intensive workloads, high-memory instances, and M5a and R5a instances powered by Advanced Micro Devices’ Epyc server chips.
In addition, Barr wrote, “a few years ago the team started to think about building an Amazon-built custom CPU designed for cost-sensitive scale-out workloads.” AWS several years ago hired a number of engineers from Calxeda, a pioneer in Arm-based server chips that went out of business.
The move lets AWS further its advantages over competitors like Microsoft Azure and Google Cloud. Microsoft officials last year announced plans to incorporate Arm-based SoCs from Cavium in their Azure cloud, and earlier this year worked with Cavium engineers to demonstrate the chip maker’s ThunderX2 processor running in a server. The demonstration was part of Microsoft’s Project Olympus open-source cloud server initiative.
“I would expect Azure to accelerate its adoption of Arm processors that it announced early last year with Cavium,” Patrick Moorhead, principal analyst with Moor Insights and Strategy, told eWEEK. “I am not expecting Google to embrace Arm for a while as it is deploying IBM Power chips.”
AWS Dominating Cloud Market
Spending on cloud infrastructure continues to grow, according to analysts with the Synergy Research Group. In the third quarter, spending jumped 45 percent year-over-year, and AWS remains the dominant cloud provider, with more than 34 percent of the market, more than the next four companies—Microsoft, IBM, Google and Alibaba—combined.
For Arm, the adoption by AWS is important as it looks to push its highly efficient SoC designs into the data center. A number of chip makers are working on Arm-based server chips, most notably Cavium—which is now part of Marvell Technology—and startup Ampere, which has ex-Intel executive Renee James as CEO. Intel owns more than 95 percent of the server chip market, but recent years have seen a rise in competition from established players like IBM and AMD as well as Arm chip-making partners.
Amazon’s Graviton chip “was one of the ‘big breaks’ Arm needed to increase its credibility in the server ecosystem,” Moorhead said. “Arm is already prevalent in networking and storage.”
The new A1 instances, all of which are optimized for the cloud provider’s Elastic Block Store (EBS) service, are available in five sizes at lower costs than other instances, Barr said. The number of virtual CPUs ranges from one to 16, with 2GB to 32GB of memory and prices from $0.0255 to $0.4080 per hour. All have EBS bandwidth of up to 3.5Gb/s and network bandwidth of up to 1Gb/s.
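Based on the endpoint prices cited above, the A1 tiers appear to scale linearly with vCPU count. The sketch below illustrates that scaling; note that only the smallest and largest prices come from the announcement, while the middle-tier names and figures are assumptions inferred from AWS’ usual size naming and linear per-vCPU pricing:

```python
# On-demand A1 pricing sketch. Only the $0.0255 and $0.4080/hour
# figures come from the announcement; the middle tiers are assumed
# to follow the same linear per-vCPU scaling.
a1_instances = {
    "a1.medium":  {"vcpus": 1,  "mem_gb": 2,  "usd_per_hour": 0.0255},
    "a1.large":   {"vcpus": 2,  "mem_gb": 4,  "usd_per_hour": 0.0510},
    "a1.xlarge":  {"vcpus": 4,  "mem_gb": 8,  "usd_per_hour": 0.1020},
    "a1.2xlarge": {"vcpus": 8,  "mem_gb": 16, "usd_per_hour": 0.2040},
    "a1.4xlarge": {"vcpus": 16, "mem_gb": 32, "usd_per_hour": 0.4080},
}

for name, spec in a1_instances.items():
    per_vcpu = spec["usd_per_hour"] / spec["vcpus"]
    monthly = spec["usd_per_hour"] * 730  # roughly 730 hours per month
    print(f"{name:11s} {spec['vcpus']:2d} vCPU  "
          f"${per_vcpu:.4f}/vCPU-hr  ~${monthly:.2f}/mo")
```

Under that assumption the per-vCPU rate works out to a flat $0.0255 an hour across every size, which fits the scale-out positioning Barr describes: running many small instances costs the same per vCPU as running a few large ones.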
They’re available now in the U.S. East (Northern Virginia and Ohio), U.S. West (Oregon) and Europe (Ireland) regions in on-demand, reserved instance, spot, dedicated instance and dedicated host forms.