One of the most important new features in Windows Server 2008 R2 is support for live migration of virtual machines in the product's Hyper-V virtualization role. Live migration enables running virtual machines to shift from one node to another without interrupting the applications running on the VM. The previous version of Hyper-V offered Quick Migration, which involves a short period of downtime during the migration.
Hyper-V taps the new Cluster Shared Volumes feature within Windows Server's Failover Clustering role to accomplish Live Migration. The Cluster Shared Volumes feature compensates for the fact that Windows' NTFS is not a cluster file system by supplementing NTFS with an additional layer of logic that keeps track of which node "owns" a given shared storage LUN at a particular time.
On the positive side, Microsoft's use of NTFS for Cluster Shared Volumes keeps the volumes more broadly accessible than they might otherwise be. Full access to VMware's VMFS, for instance, is available only to VMware's own products.
The downside of this approach is that Microsoft's Cluster Shared Volumes work with a narrower range of storage hardware than do VMware's VMFS volumes.
Specifically, Microsoft's Cluster Shared Volumes require storage systems with support for persistent reservations. In my tests, this meant that I could not use the same Linux-based OpenFiler iSCSI storage appliance that we typically use for VMware testing to evaluate Cluster Shared Volumes. I opted instead for the OpenSolaris-based NexentaStor storage appliance, which, with Sun's new COMSTAR storage subsystem enabled, delivered the persistent reservation support that Cluster Shared Volumes requires.
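For readers who want to check their own storage before building a cluster, the persistent reservation tests and CSV setup can be sketched in PowerShell using R2's FailoverClusters module. This is a minimal sketch, not the exact steps I followed; the node names and disk name here are placeholders:

```powershell
# Load the Failover Clustering cmdlets (Windows Server 2008 R2)
Import-Module FailoverClusters

# Run only the storage category of validation tests, which includes
# the SCSI persistent reservation checks that CSV depends on
Test-Cluster -Node hv-node1, hv-node2 -Include Storage

# Enable the Cluster Shared Volumes feature on the cluster...
(Get-Cluster).EnableSharedVolumes = "Enabled"

# ...then promote an available clustered disk to a shared volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

If the storage target lacks persistent reservation support, the validation report flags the failure before any CSV configuration is attempted.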
Once I'd sorted out my shared storage issues, I had to smooth out an additional wrinkle regarding the Windows Server 2003 Active Directory domain in our test lab.
We'd configured the domain to use the backward-compatible mixed mode functional level to which Server 2003 defaults. Our two Hyper-V host machines sported machine names of more than 16 characters, which appeared to lead to intermittent network access problems. It wasn't until I truncated the machine names to 16 characters that everything worked as expected.
With both the shared storage and the directory issues I experienced, the Failover Clustering role's validation wizard helped point to the errors, to varying degrees. With regard to storage, the wizard told me exactly what I needed to do. With the directory issues, however, the wizard pointed only vaguely to problems confirming that my nodes lived in the same organizational unit, and I had to sort out the exact cause through trial and error.
After everything was set up, I was able to migrate running VMs from one Hyper-V node to another with very little noticeable downtime.
I tested the seamlessness of Live Migration by creating an R2 virtual machine running the Remote Desktop Services role. I configured my RDS instance to serve up Word 2010 as a RemoteApp and opened up Word on a Windows 7 system on my network. After starting a new Word document and beginning to type, I kicked off a Live Migration operation through R2's Failover Clustering management console and switched back to my document. During the migration, I noticed a momentary hiccup in the responsiveness of my remote Word session, but none of what I was typing was lost.
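The same migration can also be kicked off from PowerShell rather than the Failover Clustering console. A minimal sketch, assuming a clustered VM group named "word-rds-vm" and a target node named hv-node2 (both placeholders):

```powershell
Import-Module FailoverClusters

# Live-migrate the clustered VM to the other Hyper-V node;
# Move-ClusterVirtualMachineRole performs a live (not quick) migration
Move-ClusterVirtualMachineRole -Name "word-rds-vm" -Node hv-node2
```

The cmdlet blocks until the migration completes, which makes it straightforward to script migrations of several VMs in sequence.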
Live Migration itself was very easy to use, but I found configuring it in Windows Server 2008 R2 significantly more complicated than with VMware ESX Server and VirtualCenter. In contrast to VMware's products, where all of these tasks are gathered in a purpose-built interface, configuring Cluster Shared Volumes in Windows Server involves visits to a variety of existing and new Windows utilities.
Executive Editor Jason Brooks can be reached at [email protected]