StackOps in the Lab

By Jason Brooks  |  Posted 2011-06-20



I tested StackOps on a handful of different machines, starting with a single-node deployment on a beefy Lenovo W520 mobile workstation, and moving on to single and multinode deployments using a white box server powered by AMD Opteron 4000 series processors and a handful of VM-based nodes hosted by the VMware vSphere infrastructure in our lab.

Deploying a single-node OpenStack cloud involved installing StackOps on my test machine and visiting a simple Web-based configuration agent running on my server. The configuration agent then redirected me to a hosted Smart Installer service, where I could spec out a single-node configuration to be pushed back to my test machine.

I had the option of creating an account to save my single-node configuration for future use; I could, for instance, blow away my installation, reinstall StackOps and push down a previously saved configuration. For dual or multinode configurations, creating an account is mandatory, to keep setup details consistent among the separate nodes of the same OpenStack deployment.

For my single-node deployments, there were few configuration choices to make: I had to indicate the range of addresses on my network to reserve for guest instances, and I had to designate a storage partition on which to store images.
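Those two choices correspond to a pair of settings in the Nova flag file that OpenStack deployments of this vintage used. The fragment below is an illustrative sketch only; the flag names follow the Cactus-era nova.conf format, and the values are examples rather than the settings from my test system:

```
# Illustrative nova.conf flag-file fragment (Cactus-era format);
# values are examples, not the actual settings from the test deployment
--fixed_range=192.168.1.32/27       # address range reserved for guest instances
--images_path=/var/lib/nova/images  # partition on which images are stored
```

In practice, the StackOps Smart Installer writes out this sort of configuration for you based on the choices made in its Web interface.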

I was not able to get a dual or multinode configuration up and running, due to issues establishing the separate service network that these setups require. StackOps and the OpenStack project itself offer a great deal of good documentation, but I would have liked to see more sample configurations.

OpenStack supports a great deal of diversity in its components, and StackOps helpfully narrows things down a bit. At the hypervisor level, I had the option of KVM or QEMU, with the latter, slower-performing option available for nodes that lack hardware virtualization extensions, as is the case with VM-based nodes.

OpenStack supports both the Rackspace and Amazon cloud APIs, but StackOps sticks with the Amazon APIs, which I was able to manipulate using the open-source euca2ools package from Eucalyptus Systems. As such, the basic commands and procedures I used to operate my test cloud were familiar to me. As with Amazon EC2, I used a set of credentials generated by my test cloud to access the service, upload images, and launch, terminate and assign public IP addresses to my instances.
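The basic instance lifecycle looks much like an EC2 session. The transcript below is an illustrative sketch of the euca2ools workflow; the image and instance IDs, key name and IP address are hypothetical examples, not values from my test cloud:

```shell
# Illustrative euca2ools session against an EC2-compatible OpenStack cloud;
# IDs, key name and IP address are hypothetical examples
source novarc                        # load the EC2-style credentials the cloud generated
euca-describe-images                 # list the images registered with the cloud
euca-run-instances ami-00000001 -k mykey -t m1.small  # launch an instance from an image
euca-describe-instances              # check the instance's state and private address
euca-allocate-address                # reserve a public IP from the cloud's pool
euca-associate-address -i i-00000001 10.0.0.5         # attach the public IP to the instance
euca-terminate-instances i-00000001  # tear the instance down when finished
```

Because these are the same commands used against Amazon EC2 or Eucalyptus, existing scripts built on them carry over to an OpenStack cloud with little change.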

I tested with Ubuntu Server 10.04 and Windows Server 2008 R2 guest instances, with unremarkable performance. After all, in the basic configuration I used for my tests, my micro-cloud boiled down to an Ubuntu server running VMs under KVM.

A simple configuration like the one I tested is sufficient to begin familiarizing oneself with OpenStack, particularly with the details around creating and manipulating guest instances. Also, I used my simple clouds to experiment with the in-development OpenStack Dashboard project, which provides a Web interface to an OpenStack cloud that resembles Amazon's EC2 Web console.

Eventually, eWEEK Labs may turn over a chunk of our testing infrastructure to an OpenStack cloud, but we'll be looking to see the management tools surrounding the project mature further before taking that step. 



As Editor in Chief of eWEEK Labs, Jason Brooks manages the Labs team and is responsible for eWEEK's print edition. Brooks joined eWEEK in 1999, and has covered wireless networking, office productivity suites, mobile devices, Windows, virtualization, and desktops and notebooks. Jason's coverage is currently focused on Linux and Unix operating systems, open-source software and licensing, cloud computing and Software as a Service. Follow Jason on Twitter at jasonbrooks.
