Amazon's Elastic Compute Cloud

By Jeff Cogswell  |  Posted 2008-10-14

The umbrella system known as Amazon Web Services (AWS) encompasses many different technologies. One of the newest is the Amazon Elastic Compute Cloud, or EC2 for short.

When Amazon engineers were first developing AWS, they decided that, in addition to simply distributing the work among separate computers, they would make sure the individual computers offered their services on an as-needed basis.

For example, if someone logs into a large site such as Amazon and wants to explore DVDs, there's no reason for the server handling the comment boards to set aside storage space for that session if that person isn't using the comment boards. Amazon has built its large distributed system to allocate resources only when they're needed and then to let go of the resources when they're no longer needed.

In a small Web server environment, this kind of allocation is routine: programmers allocate objects only when they're necessary. On a large system it becomes especially important, and this is where the concept of cloud computing comes in. In the cloud computing world, each piece of the large Web server becomes a service dedicated to a particular task. These services run on separate hardware, and software running on other computers can access each service as needed.
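
To make that concrete, here's a minimal sketch, in Python, of one piece of a site calling another piece as a service. The host name, URL and JSON response shape are all hypothetical, invented purely for illustration:

    # Hypothetical example: the DVD-browsing code asks the separate
    # comment-board service for data only when a user opens the comments.
    import json
    import urllib.request

    def fetch_comments(product_id):
        # This endpoint and its JSON payload are invented for illustration;
        # the comment service runs as its own system on its own hardware.
        url = "http://comments.example.internal/threads?product=" + product_id
        with urllib.request.urlopen(url) as response:
            return json.load(response)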

That's what cloud computing really is: multiple systems providing services to each other. Software functions as a service (hence the phrase "software as a service"). But in the case of cloud computing, these individual systems are usually themselves huge; they're not just individual Web servers but more likely huge data warehouses. When all the big pieces are working together as a whole, spread out across the Internet, the whole shebang is called the cloud.



Now take this concept to the next level: The individual services are built so they're used only when needed. Imagine that users want to tap some of these services for their own software. They'd like to run software on the Web that would benefit from being hosted on one of these services, and that software would make use of a few other services. They don't need all of the services available, only a portion.

That's what Amazon's EC2 is all about. Customers can simply rent space on the Amazon cloud and pay only for the parts they need and use. Their software can, while it's running, allocate space (even the virtual equivalent of additional computers; I'll get to that shortly). Everything is available to the users, and they're charged only for what they use.
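
To put rough numbers on that (purely as an illustration, using the published rate of the period, which for Amazon's small instance was on the order of $0.10 per instance-hour): a burst of three extra virtual servers running for five hours would cost about 3 × 5 × $0.10 = $1.50, and nothing at all once they're shut down.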

To customers, this can be a huge benefit: their software gets to run on a huge system capable of crunching through millions of transactions a day, and they don't have to own the physical hardware. Nor do they have to settle for a hosting service that gives them a virtual slice of a single physical PC sitting in a data center. The result is that their software can perform as powerfully as the big players'.

This cloud approach to computing isn't exclusive to Amazon. However, Amazon calls its own system the Elastic Compute Cloud because its engineers consider it elastic. A user's software can request services on the fly and grow as needed while it's running, then let go of resources and shrink back down when there's less demand. In other words, the software is elastic, growing on an as-needed basis, with users charged for the services they use. That's in contrast to a traditional hosting service, where users pay a fixed price per month for a fixed limit on resources.
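
Here's a hedged sketch of what that elasticity can look like from the programmer's side, written in Python against the boto library, a widely used EC2 client. The AMI ID, the load measurement and the thresholds are placeholders I've invented; run_instances and terminate_instances are genuine boto calls:

    # Sketch: grow the pool under heavy load, shrink it when load falls.
    # Assumes AWS credentials are set in the environment; the AMI ID and
    # the 0.8/0.2 thresholds are invented for illustration.
    import boto

    conn = boto.connect_ec2()  # reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY

    def rebalance(instances, load_average):
        if load_average > 0.8:
            # Demand is high: rent one more small virtual server.
            reservation = conn.run_instances('ami-12345678',  # placeholder
                                             instance_type='m1.small')
            instances.extend(reservation.instances)
        elif load_average < 0.2 and len(instances) > 1:
            # Demand has fallen: release a server and stop paying for it.
            victim = instances.pop()
            conn.terminate_instances([victim.id])

The point of the sketch is the contrast with fixed hosting: capacity becomes a function call the software makes while it runs, not a monthly contract.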

There are some similarities to traditional hosting. When customers have an application running under EC2, they create a virtual server much like the virtual servers offered by smaller hosting providers. From there, their system can grow as needed by adding virtual servers; these virtual servers are the smallest atomic unit available.

Next, customers might code their software to determine, while it's running, that it needs additional virtual machines. Suppose the software needs to do some serious number crunching and they don't want to tie up the current virtual machine with the job. It can allocate a second virtual machine, one with all the power of a stand-alone PC. When the crunching is complete, the second virtual machine hands the results back to the first, then shuts down so the processing time is available to other Amazon customers.
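
Here's a hedged sketch of that crunch-and-shut-down pattern, again using boto's real run_instances call. The worker image, the user_data hand-off and the fetch_result step are hypothetical; in a real system the worker might post its results to a queue or to S3:

    # Sketch: allocate a second virtual machine for a heavy job, collect
    # the result, then terminate the machine so the billing meter stops.
    import time
    import boto

    conn = boto.connect_ec2()

    def fetch_result(instance):
        # Hypothetical retrieval step: a real worker would publish its
        # result somewhere the first machine can read, such as S3.
        raise NotImplementedError

    def crunch_on_worker(job_description):
        reservation = conn.run_instances(
            'ami-12345678',             # placeholder ID of a worker image
            instance_type='m1.small',
            user_data=job_description)  # handed to the VM when it boots
        worker = reservation.instances[0]
        try:
            while worker.update() != 'running':  # poll until the VM is up
                time.sleep(15)
            result = fetch_result(worker)
        finally:
            worker.terminate()  # stop the charges the moment we're done
        return result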



 
 
 
 
Jeff Cogswell is the author of Designing Highly Useable Software (http://www.amazon.com/dp/0782143016) among other books and is the owner/operator of CogsMedia Training and Consulting. Currently Jeff is a senior editor with Ziff Davis Enterprise. Prior to joining Ziff, he spent about 15 years as a software engineer, working on Windows and Unix systems and mastering C++, PHP and ASP.NET development. He has written over a dozen books.
 
 
 
 
 
 
 
