The Management Console

By Jason Brooks  |  Posted 2007-03-26

During our tests, we downloaded an rPath-based MediaWiki appliance in Virtual Iron format, dropped the appliance's virtual disk into the appropriate folder on our Virtual Iron management server and assigned the disk to a new VM. Without VS Tools support, however, the virtual appliance was significantly less useful.

If pressed, we probably could have adapted the supported Red Hat kernel to run our MediaWiki appliance, but we'd rather see Virtual Iron take care of that.
It wasn't too tough to create new VMs using Virtual Iron's Management Console, but the process is definitely rougher around the edges than that of VMware's virtualization products. For one thing, it's necessary to visit different parts of the console to configure a VM's CPU and RAM settings, its network adapters, and its virtual disks.

During our tests of VI3, we connected our VMware ESX servers to the FTP server on which we store, among other things, operating system installation images. We could then attach these images to VMs we'd created as virtual CD or DVD drives, install from those images, and then access their contents once our machines were installed. With Virtual Iron, the VM creation interface sports a handy drop-down menu of available installation images, but these images had to reside in a particular folder on our management server to show up on the list. This would have meant copying images from our standard FTP store to that particular server.

We ended up dumping the Windows Server 2003 machine that we'd initially chosen to host the management server in favor of a CentOS 4.2 server with our OS image store mounted as a Sun NFS (Network File System) share. We then symlinked the ISO images we wanted to use into the requisite Virtual Iron directory. This was not a tough workaround, since we'd planned on trying out the management server on both Windows and Linux hosts anyway, but we'd like to see future Virtual Iron versions develop more flexible access to storage.

We could access and control our VMs through console windows that we launched from the management interface. With virtual instances for which we'd installed VS Tools, we could power cycle, reboot or shut down the VMs, but we could not pause them, which is something we're accustomed to being able to do with other virtualization products.
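The NFS-and-symlink workaround comes down to a few shell commands. The paths below are placeholders (the actual image directory depends on the Virtual Iron installation), and the mount step is shown as a comment because it needs root privileges and a reachable NFS export:

```shell
#!/bin/sh
# Mount the shared ISO store first (requires a live NFS server):
#   mount -t nfs fileserver:/export/isos /mnt/isos
#
# Stand-in paths for demonstration; substitute your real directories.
ISO_STORE=/tmp/isos          # would be the NFS mount point, e.g. /mnt/isos
IMAGE_DIR=/tmp/vi-images     # would be the management server's image folder
mkdir -p "$ISO_STORE" "$IMAGE_DIR"
touch "$ISO_STORE/centos-4.2.iso"       # placeholder ISO file for the demo
# Link the image so it appears in the VM-creation drop-down menu:
ln -sf "$ISO_STORE/centos-4.2.iso" "$IMAGE_DIR/centos-4.2.iso"
ls -l "$IMAGE_DIR"
```

Because the link points back at the mounted share, adding a new image to the central store only requires a fresh symlink rather than a full copy to the management server.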
We also missed having snapshot functionality similar to what VMware offers, but we could clone our virtual disks and later replace a machine's disk with a clone, thereby restoring it to an earlier point in time.

Virtual Iron's LiveMigrate feature worked fine for our guests with VS Tools installed and with disks stored on our iSCSI appliance: We just dragged the VMs from one node to the other and hit the confirm button. Each migration took less than 15 seconds to complete.

Our experience with Virtual Iron's LiveRecovery feature wasn't so smooth. We tried yanking the power cord from one of our nodes that was hosting the Windows Server 2003 and CentOS guests, and the management server told us that it wasn't attempting an autorecovery because the node "may be still active." We then tried disconnecting one of our nodes from the management server, but this didn't trigger an autorecovery, either. It turns out that we were bumping up against safeguards that prevent so-called "split brain" scenarios, and we didn't have a chance to sort out these issues before the end of our testing.

Hardware

As mentioned earlier, Virtual Iron requires server hardware with AMD-V or Intel VT hardware extensions for its host nodes. The management server doesn't require any particular processor type, but redundancy and fast I/O are important for the management server because the nodes depend on it. Virtual Iron 3.5 supports a maximum of 32 CPUs and 96GB of RAM per node, and the product can expose as many as eight CPUs to its guest machines.

We tested Virtual Iron 3.5 on a pair of Dell PowerEdge 430 servers with Intel 3GHz Pentium D processors and 2GB of RAM each. Each machine sported three NICs: one for the management network, one for an iSCSI network, and one for accessing the Internet and other servers in our environment. (Virtual Iron maintains a hardware compatibility list for its products.)
Virtual Iron's iSCSI support is new in Version 3.5, and the list of supported iSCSI hardware is somewhat slim at this point. We thus turned to the same do-it-yourself Openfiler-based iSCSI target with which we recently tested VI3. After some initial troubles in setting our network configuration, we were able to access the volume we created in Openfiler for use with Virtual Iron, slice it up into disks and install VMs without further trouble.

We also could install VMs on the disks local to each of our nodes, but we could not use LiveMigrate with machines configured in this way.
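Attaching a do-it-yourself target like the Openfiler box to a Linux host generally comes down to a short open-iscsi sequence. The target address below is a made-up example, and the run() wrapper prints each command instead of executing it, since the real commands require a live target and root privileges; drop the wrapper to run them for real:

```shell
#!/bin/sh
# Sketch: discover and log in to an iSCSI target with open-iscsi.
# TARGET_IP is a placeholder for the Openfiler box's storage-network address.
TARGET_IP=192.168.10.5
run() { echo "+ $*"; }   # dry-run wrapper: print instead of execute

run iscsiadm -m discovery -t sendtargets -p "$TARGET_IP"   # list exported targets
run iscsiadm -m node -p "$TARGET_IP" --login               # attach the LUN
run fdisk -l   # the new LUN should appear as an additional block device
```

Once the initiator is logged in, the volume shows up as an ordinary local block device, which is what lets a virtualization host slice it into per-VM disks.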

As Editor in Chief of eWEEK Labs, Jason Brooks manages the Labs team and is responsible for eWEEK's print edition. Brooks joined eWEEK in 1999, and has covered wireless networking, office productivity suites, mobile devices, Windows, virtualization, and desktops and notebooks. Jason's coverage is currently focused on Linux and Unix operating systems, open-source software and licensing, cloud computing and Software as a Service. Follow Jason on Twitter at jasonbrooks, or reach him by email at
