VMware vSphere 5.0 continues to set the pace for data center x86 server virtualization and remains the clear leader for IT managers who need a virtual infrastructure that can handle production workloads while containing operational costs.
The vSphere 5.0 ship date is imminent but as yet undisclosed; however, eWEEK Labs obtained an advance copy.
In assessing the technology, IT managers should look for significant changes to functions such as HA (high availability) and VMware’s DRS (Distributed Resource Scheduler), new network-monitoring tools, and a complete reliance on the ESXi hypervisor. Despite changes to the VMware licensing model, the bottom line remains the same: Organizations will pay a premium to use the enterprise-class components that make up vSphere 5.0.
IT managers who are already using vSphere 4.1 or 4.0 will quickly come up to speed on this latest version. For experienced users, the changes that bolster existing features, including enhancements in the CLI (command-line interface), HA and VMware’s exclusive use of ESXi (the classic ESX hypervisor is no longer included in vSphere 5.0), are powerful but not radically different from previous versions. Where they are significantly different, as in HA, my tests show that the change usually reduces the amount of training needed to use the feature.
One area that will need some new thinking is the sizing and outfitting of physical hosts. The new configuration maximums allow for the creation of virtual machines with up to 1TB of memory and up to 32 virtual CPUs. I can’t say much about how these giant systems would perform; my modest-sized workloads running on medium-to-slow-speed iSCSI storage worked well. I will be following up with enterprise managers who are using the giant-sized VMs to see how well these Macy’s Thanksgiving Day parade-sized systems perform in the field. I’ll be paying special attention to the physical machine configurations needed to run these much larger VMs as well.
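For IT shops that script against the vSphere API, the sketch below gives a rough sense of what pushing an existing VM toward those new ceilings looks like. It uses pyVmomi, VMware’s Python SDK for the vSphere Web Services API, and it is an illustration rather than a procedure from these tests; the vCenter address, credentials and the VM name “bigvm01” are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only connection; skipping certificate checks is not for production.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    # Walk the inventory to find the (hypothetical) VM by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "bigvm01")
    view.DestroyView()

    # vSphere 5.0 raises the per-VM maximums to 32 vCPUs and 1TB of RAM.
    # The VM must be powered off, and the physical host must actually have
    # the resources, for this reconfiguration to succeed.
    spec = vim.vm.ConfigSpec(numCPUs=32, memoryMB=1024 * 1024)
    vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)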
How We Tested
I used four Intel-based servers, Cisco 3560G switches and an OpenFiler iSCSI storage management system to host a sneak preview copy of VMware vSphere 5.0 for most of August. The two stalwart HP servers, a DL360 G6 and a DL380 G6, were equipped with Nehalem-class Intel Xeon processors. The other systems, a Lenovo RD210 and an Acer AR380 F1, had more advanced Intel processors and more memory, 12GB and 24GB of RAM, respectively.
I started the first round of tests by doing an in-place upgrade of the two HP systems, going from vSphere ESX 4.1 to ESXi 5.0. Migrating the systems was a piece of cake. Where I would have benefited from more planning was in migrating VMware’s VMFS (Virtual Machine File System) storage and networking.
VMFS moves from version 3 to version 5 in this release of vSphere. VMFS 5 does away with variable block-size formatting by using only 1MB blocks. vSphere 5.0 can use either file system, and further testing and field experience will be needed to make a recommendation about the best approach in mixed environments. My tests showed that it is possible to upgrade in place, although the process took several steps and a fair amount of reading and planning to get my VMFS 3 data stores correctly migrated to VMFS 5. The virtualization team will definitely need to involve the storage team in this planning process to ensure a smooth transition.
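For that planning work, it helps to know which data stores are still on VMFS 3 and what block size they were formatted with. The sketch below is one way to pull that inventory through the vSphere API with pyVmomi; it is illustrative only, with placeholder connection details, and was not part of the test procedure.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # Only VMFS-backed data stores expose a vmfs member; NFS mounts do not.
        if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
            vmfs = ds.info.vmfs
            print("%-24s VMFS %-6s block size %s MB" %
                  (ds.name, vmfs.version, vmfs.blockSizeMb))
    view.DestroyView()
finally:
    Disconnect(si)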
In a similar but less successful vein, I also implemented the latest version of the VMware vDS (vSphere Distributed Switch). In the end, I discarded all the existing networking and implemented it from scratch. While it is possible to migrate from vNetwork Standard Switches (virtual switches created on a single physical vSphere host) to a vDS, the process takes considerable planning. Further, IT managers will have the greatest chance of success if they start with hosts configured with similar numbers of NICs (network interface cards) and similarly configured standalone Standard Switches.
Note that the vDS was introduced in vSphere 4.0. Those already using a vDS will find that it is relatively simple to upgrade the switch. The journey will be considerably more involved for organizations migrating from Standard Switches. I performed various migrations, most of them with the VMs shut down and my small number of hosts joined to the vDS one at a time. It is also possible to use host profiles to transition physical hosts onto the vDS.
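Before joining hosts to the vDS, it is worth confirming that their Standard Switch and uplink layouts really do match. The pyVmomi sketch below, with placeholder connection details, shows one hedged way to dump that information for a host-to-host comparison; it is not the method used in these tests.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for vswitch in host.config.network.vswitch:
            # pnic holds the keys of the physical uplinks bound to this
            # Standard Switch; compare them across hosts before migrating.
            uplinks = ", ".join(vswitch.pnic or []) or "no uplinks"
            print("  %-14s -> %s" % (vswitch.name, uplinks))
    view.DestroyView()
finally:
    Disconnect(si)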
The biggest change in the vDS in vSphere 5.0 is the addition of some quite basic network troubleshooting features. I was able to use the newly added network monitor port (a feature physical switches have had since the age of the dinosaurs) to analyze virtual network traffic without needing to route the traffic to an external, physical network.
Adding a monitoring port is an important step in the maturation of the VMware vDS. The old saw that “you can’t manage what you can’t monitor” holds true: The addition of a monitor port is a significant improvement in the vDS.
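For administrators who prefer to audit the switch from a script, the sketch below lists any port mirror sessions defined on a VMware vDS. It uses pyVmomi with placeholder connection details, and the config.vspanSession property it reads is my reading of the vSphere 5.0 API; treat it as an assumption to verify against the SDK documentation rather than a tested recipe.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    for dvs in view.view:
        # Port mirror (VSPAN) sessions are assumed to live in the VMware
        # vDS configuration; each session names its monitor (destination) ports.
        for session in dvs.config.vspanSession or []:
            dests = session.destinationPort.portKey if session.destinationPort else None
            print("%s: session '%s' enabled=%s -> monitor ports %s" %
                  (dvs.name, session.name, session.enabled, list(dests or [])))
    view.DestroyView()
finally:
    Disconnect(si)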
Even so, it’s clear from my tests that networking currently plays an “also-starring” role in vSphere 5.0, second fiddle to the first-rate work of creating and managing VMs. Migrating to the vDS and correctly configuring it for production use will require that significant networking expertise be added to the virtualization team. Pooling physical host NICs and configuring profiles to correctly apply policies to these pooled resources was finicky and easily broken compared with the process of creating and maintaining VMs.
High Availability
VM maintenance is improved in vSphere 5.0 by the new HA features. Primary and secondary nodes are gone, replaced with a master-slave concept that eliminates planning the location of these nodes; instead, participating systems elect a master as needed. Also gone is the dependency on DNS (Domain Name System) services. A wizard-based interface speeds up HA deployment chores, and in this version of HA I was easily able to use the storage subsystem as a secondary heartbeat monitor that provided a redundant check on host status.
I turned on the HA function in my test cluster and was able to see the host status, such as the number of physical host systems connected to the current HA master. I was also able to see the number of protected and unprotected VMs and which data stores were selected during the set-up process to provide secondary communication between the hosts, as a backup to the management network. Almost all this configuration was performed behind the scenes by vSphere 5.0. I completed the HA setup in a matter of minutes in my test network.
Pulling the plug on various hosts resulted in the failover of VMs within the cluster, as expected.
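A quick way to watch the same master election and protection status from a script is sketched below, again with pyVmomi and placeholder names; the cluster “LabCluster” is hypothetical, and the dasHostState and dasVmProtection runtime fields reflect my reading of the 5.0-era API, so verify them against the SDK documentation.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "LabCluster")
    view.DestroyView()

    # Which host won the master election, and how do the others report in?
    for host in cluster.host:
        state = host.runtime.dasHostState
        print(host.name, state.state if state else "HA agent not reporting")

    # Which VMs does HA currently consider protected?
    vm_view = content.viewManager.CreateContainerView(
        cluster, [vim.VirtualMachine], True)
    for vm in vm_view.view:
        prot = vm.runtime.dasVmProtection
        status = "protected" if prot and prot.dasProtected else "unprotected"
        print("%-30s %s" % (vm.name, status))
    vm_view.DestroyView()
finally:
    Disconnect(si)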
More Management
For the first time, VMware’s DRS (Distributed Resource Scheduler) has been extended to include storage. Implementing Storage DRS was a straightforward process of defining policies for my VMs. Over time, Storage DRS made decisions about the best data store for particular VMs and also balanced VM access to storage resources according to the service levels I specified in my policies.
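For those who want a rough sense of that policy work in script form, the sketch below enables Storage DRS on a hypothetical datastore cluster named “iSCSI-Pod” through pyVmomi, with placeholder connection details. The class and parameter names reflect my reading of the vSphere 5.0 Storage DRS API and should be checked against the SDK; this is not the configuration path used in these tests.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.StoragePod], True)
    pod = next(p for p in view.view if p.name == "iSCSI-Pod")
    view.DestroyView()

    # Turn on Storage DRS for the datastore cluster and let it both place
    # VM disks and rebalance them automatically; "manual" would surface
    # recommendations without applying them.
    pod_spec = vim.storageDrs.PodConfigSpec(enabled=True,
                                            defaultVmBehavior="automated")
    spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_spec)
    content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=spec, modify=True)
finally:
    Disconnect(si)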
Also for the first time, the vCenter Server is available as a virtual appliance. The vCenter Server, the management hub for any vSphere domain, is provided as a virtual machine running on SUSE Linux. I used the new vCenter Server virtual appliance throughout my tests. While it shows first-version flaws (for example, networking details such as DNS are defined at the command line, not in the Web-based console), the appliance worked well.
vSphere 5.0 is the first version to provide only the ESXi host hypervisor. For some time, VMware has been urging users to adopt the small-footprint ESXi over ESX, with good reason. ESXi takes up only about 100MB on the physical host. It is easy enough to manage the physical host systems from vCenter. ESXi does have a basic network configuration interface. For the most part, however, IT managers will be using the newly enhanced CLI, batch files and vCenter to interact with physical hosts.
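As a taste of that scripted interaction, the hedged sketch below connects directly to a stand-alone ESXi 5.0 host (no vCenter required) and lists its registered VMs; it uses pyVmomi with a placeholder host name and credentials and is offered as one option alongside the CLI and vCenter, not as the tooling used in these tests.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    # Report the product string of the host we connected to.
    print(content.about.fullName)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print("%-30s %s" % (vm.name, vm.runtime.powerState))
    view.DestroyView()
finally:
    Disconnect(si)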