Robust Internet connectivity plus virtualization, along with a heavy dose of managed hosting experience, equals infrastructure as a service: a utility model in which compute, storage and virtual machine instances are available at a moment's notice and paid for based on usage.
Amazon's Elastic Compute Cloud (EC2) is the granddaddy of infrastructure-as-a-service (IaaS) offerings and is setting many of the de facto standards for how early adopters use the company's on-demand, readily available computing instances. In particular, application programming interface (API) methods for deploying and managing public cloud resources in Amazon's EC2 are gaining traction.
At the same time, a plethora of IaaS providers are entering the market, often distinguishing themselves based on service level commitments, hypervisor platform, management access and other premium services. The growing number of options means IT managers at medium to large enterprises must factor in not just compute cost but service level agreements (SLAs), tiers of service and technical compatibility with existing, on-premise virtualization tools.
Based on discussions with vendors and customers, it's clear that the IaaS market is just entering the "early adopter" phase. So, if your company has considered IaaS but has not yet taken the plunge, you're in a pretty big crowd. As IT managers consider how IaaS might play out as a strategic platform on which applications are developed and deployed, they should weigh a number of technical factors. What hypervisor platform does the provider offer? Many premium services are built on VMware's vSphere virtualization platform, while open-source Xen, the Red Hat-backed Kernel-based Virtual Machine (KVM) and other options are sometimes available. In all cases, there is the question of how the service provider manages the virtual machines and the workloads running on them. What storage choices are offered? How does IT get at systems that are in trouble?
A range of adjacent issues, some technical and some not, are also crucial to understand. How is the SLA measured, and what steps are taken to ensure a speedy recovery from unexpected downtime? What management tools will work with the IaaS environment? Is it possible to mix and match traditional hosted services with IaaS systems? What cost-control measures are available? What is the IaaS provider's disaster recovery plan, and what are the options for getting applications back up and running after a loss of service? What are the implications for regulated or sensitive data when it comes to encryption, accidental loss and legal action that seeks to search and examine data used or stored outside your organization's direct control?
And the elephant in the room: What will happen to on-premise IT staff if a portion of the private data center is shipped to an IaaS provider? It's clear that widespread outsourcing will reduce the need for cable pullers and rack-and-stack front-line staff. At the same time, the number of subject matter experts covering everything from storage and networking to system administration and security will also likely shrink as workloads move off premise.
Cost Savings and Competitive Advantage
According to Terremark, customers who already outsource IT to companies such as Accenture or IBM, and especially companies that are well into on-premise virtualization projects, are best-positioned to move to IaaS offerings. John Engates, CTO of Rackspace, said, "IaaS as an on-demand, pay-as-you-go product gets business into the cloud with applications that you have today. This moves companies in the direction of a cloud architecture in preparation for PaaS [platform as a service]."
Taking a step back, we can see that on-premise, data center server virtualization provided tremendous efficiency gains for x86-based applications. IaaS takes that logic a step further: The hardware required for these newly mobile, virtual workloads can be concentrated at a service provider, freeing the business from the ongoing costs of running a private data center. Besides eliminating the costs of hardware and of configuring and maintaining storage, network, security and the hypervisor platform, IaaS sets the standard for rapid provisioning of development and test environments.
APIs and Customer Requirements
Service providers most often use APIs to enable configuration and management of workloads running in public IaaS clouds. Amazon's EC2 APIs have emerged as something close to a de facto standard, but cloud providers large and small usually give customers access to managed systems through their own flavor of API.
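To make the idea concrete, here is a minimal sketch of API-driven provisioning against EC2, written with Amazon's boto3 Python SDK. The AMI ID, region, instance type and tag values are placeholders, and the example assumes credentials are already configured on the workstation running it; treat it as an illustration of the pattern rather than production code.

```python
# Provision, inspect and tear down a single EC2 instance through the public API.
# All identifiers below are placeholders; credentials are assumed to be configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Launch one instance from a (placeholder) machine image and tag it.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "iaas-dev-test"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]

# Query the instance's state -- the same call a management tool would make.
described = ec2.describe_instances(InstanceIds=[instance_id])
print(described["Reservations"][0]["Instances"][0]["State"]["Name"])

# Terminate the instance when the workload is finished, which stops the meter.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The same launch-inspect-terminate lifecycle can be scripted against any provider that exposes a comparable API, which is why API compatibility matters when weighing IaaS offerings.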
According to Rackspace's Engates, OpenStack, an open cloud computing platform founded by Rackspace and others, enlisted Jorge Williams, now a principal architect at Rackspace. The resulting APIs, including the Rackspace Cloud Files API, are becoming standards in their own right, Engates said.
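For comparison, object storage services in the Cloud Files/OpenStack Swift mold are typically driven over plain HTTP. The sketch below uses the Python requests library to create a container and upload an object; the storage URL, auth token and file names are hypothetical placeholders, and a real client would first obtain the token and storage URL from the provider's authentication service.

```python
# Illustrative upload to a Swift-style object store (e.g., Rackspace Cloud Files).
# The endpoint, token and names below are placeholders.
import requests

STORAGE_URL = "https://storage.example.com/v1/account"  # placeholder endpoint
AUTH_TOKEN = "replace-with-a-real-auth-token"           # placeholder token
HEADERS = {"X-Auth-Token": AUTH_TOKEN}

# Create (or confirm) a container, then upload an object into it.
container_url = STORAGE_URL + "/backups"
requests.put(container_url, headers=HEADERS).raise_for_status()

with open("nightly-dump.tar.gz", "rb") as payload:
    resp = requests.put(
        container_url + "/nightly-dump.tar.gz",
        headers=HEADERS,
        data=payload,
    )
resp.raise_for_status()
print("Uploaded; ETag:", resp.headers.get("ETag"))
```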
Along with noting how APIs are used in public cloud offerings, IT managers would do well to note the June 2011 work of the Open Data Center Alliance (ODCA) (http://www.opendatacenteralliance.org/the-alliance). This customer group issued eight sets of customer requirements that outline open data center usage models, which are available on the ODCA's Website.
On Nov. 3, the ODCA and the vendor-led Distributed Management Task Force (DMTF) announced a collaboration to accelerate the development of industry standards for managing cloud environments. The DMTF has a track record of pushing forward work that eases IT managers' chores, making this joint effort one that is likely to have a positive impact on IT managers who make the jump to IaaS.