Amazon Web Services has announced Cluster Compute Instances for Amazon EC2, a new instance type designed for high-performance computing (HPC) applications and other demanding network-bound applications.
In a July 13 press release on the new offering,
Amazon said customers with complex computational workloads such as
tightly coupled parallel processes, or with applications sensitive to
network performance, can now achieve the same high compute and
networking performance provided by custom-built infrastructure while
benefiting from the elasticity, flexibility and cost advantages of Amazon EC2.
"Businesses and researchers have long been
utilizing Amazon EC2 to run highly parallel workloads ranging from
genomics sequence analysis and automotive design to financial
modeling," said Peter De Santis, general manager of Amazon EC2, in a
statement. "At the same time, these customers have told us that
many of their largest, most complex workloads required additional
network performance. Cluster Compute Instances provide network latency
and bandwidth that previously could only be obtained with expensive,
capital-intensive, custom-built compute clusters. For perspective, in
our last pre-production test run, we saw an 880-server sub-cluster
achieve a network rate of 40.62 TFlops. We're excited that Amazon EC2
customers now have access to this type of HPC performance with the low
per-hour pricing, elasticity, and functionality they have come to
expect from Amazon EC2."
Indeed, Amazon officials said that before the
company produced Cluster Compute Instances for Amazon EC2, enterprises
with advanced HPC needs typically had to fund expensive, in-house
compute clusters by purchasing dedicated, purpose-built hardware.
However, with Cluster Compute Instances, users have access to the
high-performance computing capabilities they need, with pay-as-you-go
pricing, the ability to scale on-demand, and no upfront investments.
Amazon officials said Cluster Compute Instances
provide more CPU than any other Amazon EC2 instance. Customers can also
group Cluster Compute Instances into clusters to enable applications to
get the low-latency network performance required for tightly coupled,
node-to-node communication that is typified by many HPC applications.
Moreover, depending on usage patterns, applications can see up to 10
times the network throughput of the largest current Amazon EC2 instance types.
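Launching instances as a co-located cluster works through the EC2 Query API's CreatePlacementGroup and RunInstances actions. The sketch below builds the request parameters for those two calls; it is illustrative only (the AMI ID is a placeholder, and endpoint signing is omitted, since a real client library would handle the actual HTTP request):

```python
# Sketch: parameters for launching Cluster Compute Instances inside a
# cluster placement group via the EC2 Query API. A real client (such as
# the boto library) would sign and send these to the EC2 endpoint.

def create_placement_group_params(group_name):
    """Parameters for the CreatePlacementGroup action."""
    return {
        "Action": "CreatePlacementGroup",
        "GroupName": group_name,
        "Strategy": "cluster",  # co-locate instances for low-latency networking
    }

def run_instances_params(group_name, count, ami_id):
    """Parameters for RunInstances targeting the cc1.4xlarge instance type."""
    return {
        "Action": "RunInstances",
        "ImageId": ami_id,               # placeholder AMI ID
        "InstanceType": "cc1.4xlarge",   # the Cluster Compute instance type
        "MinCount": str(count),
        "MaxCount": str(count),
        "Placement.GroupName": group_name,  # join the placement group
    }

if __name__ == "__main__":
    print(create_placement_group_params("hpc-group"))
    print(run_instances_params("hpc-group", 8, "ami-xxxxxxxx"))
```

Every instance launched with the same Placement.GroupName lands on the same low-latency network segment, which is what enables the tightly coupled node-to-node communication described above.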
"Many of our scientific research areas require
high-throughput, low-latency, interconnected systems where applications
can quickly communicate with each other, so we were happy to
collaborate with Amazon Web Services to test drive our HPC applications
on Cluster Compute Instances for Amazon EC2," said Keith Jackson, a
computer scientist at the Lawrence Berkeley National Lab, which
features the Department of Energy's National Energy Research Scientific
Computing Center (NERSC). "In our series of comprehensive benchmark
tests, we found our HPC applications ran 8.5 times faster on Cluster
Compute Instances for Amazon EC2 than the previous EC2 instance types."
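Interconnect benchmarks of the kind Jackson describes typically start with a round-trip "ping-pong" latency probe between nodes. The toy version below runs over the loopback interface purely for illustration; in a real cluster test the client and echo server would run on separate instances:

```python
# Toy round-trip latency probe: a client sends small messages to an echo
# server and times the round trips, the basic measurement behind
# node-to-node interconnect benchmarks. Runs over loopback here.
import socket
import threading
import time

def _echo_server(server_sock):
    """Accept one connection and echo everything back."""
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def measure_rtt(rounds=100):
    """Return the mean round-trip time in seconds over `rounds` pings."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # OS picks a free port
    server.listen(1)
    port = server.getsockname()[1]
    threading.Thread(target=_echo_server, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(("127.0.0.1", port))
    start = time.perf_counter()
    for _ in range(rounds):
        client.sendall(b"ping")
        client.recv(64)  # wait for the echo before the next send
    elapsed = time.perf_counter() - start
    client.close()
    server.close()
    return elapsed / rounds

if __name__ == "__main__":
    print(f"mean round-trip time: {measure_rtt() * 1e6:.1f} microseconds")
```

Loopback numbers say nothing about a real fabric, but the same send/receive/timing loop, run between instances in a cluster, is how latency figures like the ones cited here are gathered.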
"The high-performance networking of Cluster
Compute Instances for Amazon EC2 fills an important need among
scientific computing professionals, making the on-demand and scalable
cloud environment more viable for technical computing," said David
Patterson, an expert on HPC and co-inventor of RAID, RISC and other computing technologies.
Cluster Compute Instances complement other AWS
offerings designed to make large-scale computing easier and more cost-effective,
effective, Amazon officials said. For example, Public Data Sets
on AWS provide a repository of useful public data sets that can be
easily accessed from Amazon EC2, allowing fast, cost-effective data
analysis by researchers and businesses, Amazon said in its press
release. These large data sets are hosted on AWS at no charge to the
community. Additionally, the Amazon Elastic MapReduce service enables
low-friction, cost-effective implementation of the Hadoop framework on
Amazon EC2. Hadoop is a popular tool for analyzing very large data sets
in a highly parallel environment, and Amazon EC2 provides the scale-out
environment to run Hadoop clusters of all sizes.
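The MapReduce model behind Hadoop can be sketched in a few lines. The word-count example below is a local illustration of the pattern Elastic MapReduce runs at scale; the function names are illustrative, not part of any AWS or Hadoop API:

```python
# Minimal word count in the MapReduce style: the map phase emits
# (word, 1) pairs, and the reduce phase sums the counts per word.
# Hadoop distributes these phases across a cluster; this sketch runs
# them locally to show the data flow.
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input lines."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    data = ["cloud computing in the cloud", "hadoop on ec2"]
    print(reduce_phase(map_phase(data)))
```

Because map emissions for different lines are independent, the map phase parallelizes trivially across nodes, which is what makes the scale-out environment described above a natural fit for Hadoop.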
To get started using Cluster Compute Instances for Amazon EC2, visit http://aws.amazon.com. More information on Amazon EC2 and Cluster Compute Instances can be found at http://aws.amazon.com/hpc-applications.
Darryl K. Taft covers the development tools and developer-related issues beat from his office in Baltimore. He has more than 10 years of experience in the business and is always looking for the next scoop. Taft is a member of the Association for Computing Machinery (ACM) and was named 'one of the most active middleware reporters in the world' by The Middleware Co. He also has his own card in the 'Who's Who in Enterprise Java' deck.