OpenStack aims to be the Linux of the cloud infrastructure world. The project, founded by NASA and Rackspace, is aimed at rounding up the various compute, storage and networking components that make up a public or private cloud into an open-source cloud operating system.
Just as most people who use and deploy Linux rely on distributions to take care of the many packaging and configuration details required to get up and running, the OpenStack world will have its own distributions.
I’ve been testing one such OpenStack distribution, called StackOps, which makes it rather easy to get up and running with a single-node OpenStack implementation, suitable for early testing and for familiarizing oneself with this fast-moving cloud computing project. StackOps consists of an Ubuntu Linux-based distribution, which, paired with a Web-based Smart Installer application, speeds the process of configuring and deploying OpenStack clouds.
For my tests, I stuck mostly to single-node configurations, in which the controller, network, storage and compute nodes that make up an OpenStack cloud are piled onto a single machine. For uses beyond testing, the StackOps Smart Installer also supports dual and multinode OpenStack configurations. With that said, StackOps, and OpenStack in general, has yet to approach the level of maturity of a typical Linux distribution.
The components that underlie OpenStack are solid, but the integration and tooling situation reminds me of the early Xen hypervisor tests eWEEK Labs performed in 2005 and 2006. For now, putting OpenStack into production will require in-house or outsourced expertise. StackOps, which charges nothing for its distribution or its Smart Installer application, sells services around its offering.
Moving forward, I expect to see several different OpenStack distributions available alongside StackOps. Ubuntu Linux Server, on which StackOps is based, is set to ship OpenStack as its default private cloud option starting with version 11.10 this fall. At its recent Synergy conference in San Francisco, Citrix announced an OpenStack distribution of its own, called Project Olympus. Where StackOps turns to KVM or QEMU for delivering compute services, Project Olympus will default to Citrix’s own XenServer.
StackOps in the Lab
I tested StackOps on a handful of different machines, starting with a single-node deployment on a beefy Lenovo W520 mobile workstation, and moving on to single and multinode deployments using a white box server powered by AMD Opteron 4000 series processors and a handful of VM-based nodes hosted by the VMware vSphere infrastructure in our lab.
Deploying a single-node OpenStack cloud involved installing StackOps on my test machine and visiting a simple Web-based configuration agent running on my server. The configuration agent would redirect me to a Smart Installer service hosted at stackops.org, where I could spec out a single-node configuration to be pushed back to my test machine.
I had the option of creating an account at stackops.org in order to save my single-node configuration for future use; I could, for instance, blow away my installation, reinstall StackOps and push down a previously saved configuration. For dual or multinode configurations, creating an account is mandatory, in order to keep setup details consistent between separate nodes in the same OpenStack deployment.
For my single-node deployments, there were few configuration choices to make: I had to indicate the range of addresses on my network to reserve for guest instances, and point out a storage partition on which to store images.
I was not able to get a dual or multinode configuration up and running, due to issues establishing the separate service network that these setups require. StackOps and the OpenStack project itself offer a great deal of good documentation, but I would have liked to see more sample configurations.
OpenStack supports a great deal of diversity in its components, and StackOps helpfully narrows things down a bit. At the hypervisor level, I had the option of KVM or QEMU, with the latter, slower-performing option available for nodes without hardware extensions for virtualization, as is the case with VM-based nodes.
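One common way to check, on any Linux node, whether the hardware extensions KVM requires are present is to count the virtualization CPU flags advertised in /proc/cpuinfo. A sketch of that check (not something StackOps itself runs, just the standard technique):

```shell
# Count the hardware virtualization flags -- "vmx" (Intel VT-x) or
# "svm" (AMD-V) -- reported for the node's CPUs. A count of zero means
# the node has no hardware extensions and would be limited to the
# slower, software-only QEMU option.
grep -Ec '(vmx|svm)' /proc/cpuinfo || true
```

A nonzero count is what you would expect on bare metal; VM-based nodes like the vSphere-hosted ones in my tests typically report zero unless nested virtualization is exposed to the guest.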
OpenStack supports both the Rackspace and Amazon cloud APIs, but StackOps sticks with the Amazon APIs, which I was able to manipulate using the open-source euca2ools package from Eucalyptus Systems. As such, the basic commands and procedures I used to operate my test cloud were familiar to me. As with Amazon EC2, I used a set of credentials generated from my test cloud to access the service and to upload, launch, terminate and assign public IP addresses to my instances.
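The instance lifecycle described above can be sketched with the standard Eucalyptus command-line tools. This is an illustrative outline, not a transcript of my sessions; the credentials path, key name, and image and instance IDs are hypothetical placeholders, and the commands require a running EC2-compatible cloud to do anything:

```shell
# Load the EC2-style credentials generated by the test cloud
# (path is hypothetical; StackOps/Nova generates such a bundle).
. ~/creds/novarc

# See which machine images are registered, then launch one.
euca-describe-images
euca-run-instances ami-00000003 -k mykey -t m1.small

# Allocate a public IP address and attach it to the new instance.
euca-allocate-address
euca-associate-address -i i-00000001 10.0.1.25

# Tear the instance down when finished.
euca-terminate-instances i-00000001
```

Because these are the same tools and conventions used against Amazon EC2, anyone comfortable with EC2 can drive an OpenStack cloud with little relearning.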
I tested with Ubuntu Server 10.04 and Windows Server 2008 R2 guest instances, with unremarkable performance. After all, in the basic configuration I used for my tests, my micro-cloud boiled down to an Ubuntu server running VMs under KVM.
A simple configuration like the one I tested is sufficient to begin familiarizing oneself with OpenStack, particularly with the details around creating and manipulating guest instances. Also, I used my simple clouds to experiment with the in-development OpenStack Dashboard project, which provides a Web interface to an OpenStack cloud that resembles Amazon’s EC2 Web console.
Eventually, eWEEK Labs may turn over a chunk of our testing infrastructure to an OpenStack cloud, but we’ll be looking to see the management tools surrounding the project mature further before taking that step.
Jason Brooks is Editor in Chief of eWEEK Labs. Follow Jason on Twitter at jasonbrooks, or reach him by email at jbrooks@eweek.com.