VMware vSphere 4.1 Features Large Capacity Cluster, VM Density

By Cameron Sturdevant  |  Posted 2010-07-13

VMware vSphere 4.1 continues to lead the enterprise virtual machine platform pack. New memory management, storage and network control features enable the creation of resource pools that improve scale while reducing performance drag.

Virtual machine management takes on increased importance in the vSphere 4.1 platform, and data center managers should plan on assigning virtualization experts to ensure that the new features lead to improved host utilization and automated scale-out of VM systems.

During eWEEK Labs tests I learned that vCenter 4.1, the command and control module of VMware's virtual infrastructure, is now 64-bit only. IT managers should build in extra planning and migration time to move any vCenter 4.0 or older servers to systems that are running a 64-bit OS as part of the move to vSphere 4.1.

The payoff for the vCenter transition is a substantial increase in the number of VMs per cluster and the number of physical hosts that each vCenter can handle. I was not able to test the posted limits due to hardware constraints. VMware states that the latest version of vCenter can handle 3,000 VMs in a cluster and up to 1,000 hosts per vCenter server. Both of these large numbers are a threefold increase over the stated capacity of VMware vSphere 4.0.

Aside from the sizable scale increase, the main advances in vSphere 4.1 are evolutionary extensions of existing capabilities that improve how the platform handles VM resource contention. During tests, I used the new I/O controls in networking and storage to govern resource use.

IT managers who are already accustomed to using resource controls in VM CPU settings will have a leg up when it comes to using I/O controls in both network and storage areas. Even with the CPU control heritage, my use of network and storage control features revealed a fair number of "Version 1" limitations. 

Testing Controls

Network I/O control prioritizes network traffic by type when using network resource pools and the native VMware vNetwork Distributed Switch. Network I/O control works only with Version 4.1 of the vNetwork Distributed Switch, not with the Cisco Nexus 1000V and not with VMware's standard switch. IT managers who are already using the vNetwork Distributed Switch will need to upgrade to Version 4.1.

While it takes advanced network expertise to design and tune the policy that drives network I/O control, the actual implementation of the feature is quite simple. Enabling the feature and setting the physical network adapter shares is just a matter of walking through a couple of configuration screens that are easily accessed from the vSphere client. I was able to assign a low, normal, high or custom setting that designated the number of network shares (a policy designation representing the relative importance of virtual machines using the same shared resources) to be allocated to VM, management and fault-tolerant traffic flows.
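
To make the shares arithmetic concrete, here is a minimal, illustrative Python sketch of how relative share values divide a saturated uplink among traffic types. It is not VMware code; the preset weights and the traffic-type names are assumptions chosen for the example.

    # Illustrative only: how proportional shares resolve when an uplink is saturated.
    # The preset weights are assumptions for this example, not VMware's numbers.
    PRESETS = {"low": 25, "normal": 50, "high": 100}

    def split_bandwidth(link_mbps, pools):
        """Divide a saturated physical NIC among traffic types by relative shares.

        pools maps a traffic type (for example 'vm', 'management', 'ft') to a
        preset name or a custom integer share value.
        """
        weights = {name: PRESETS.get(share, share) for name, share in pools.items()}
        total = sum(weights.values())
        return {name: link_mbps * weight / total for name, weight in weights.items()}

    # A 10Gbps uplink with VM traffic set to high, management and FT left at normal.
    print(split_bandwidth(10000, {"vm": "high", "management": "normal", "ft": "normal"}))
    # {'vm': 5000.0, 'management': 2500.0, 'ft': 2500.0}

The point of the sketch is that shares express only relative priority; they come into play when the uplink is saturated and say nothing about absolute bandwidth when there is no contention.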

Storage I/O controls were equally easy to configure once the policy decisions and physical prerequisites were met. In my relatively modest test environment it was no trouble to run storage I/O controls on a single vCenter Server. I tested this feature on an iSCSI-connected storage array. It also works on Fibre Channel-connected storage, but not on NFS (Network File System) or Raw Device Mapping storage. Other requirements and restrictions, including tiered storage system certifications, make this a feature to evaluate carefully before a strategic implementation.

Virtual machines can be limited based on IOPS (I/O operations per second) or megabytes per second. In either case, I used storage I/O controls to limit some virtual machines in order to give others priority. The large number of considerations (for example, each virtual disk associated with a VM must be placed under control for the limit to be enforced) meant that I spent a great deal of time working out policies for a modest amount of benefit once my systems were actually running.
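
The per-disk scope of those limits is easier to see in code. What follows is a hedged sketch using the open-source pyVmomi bindings rather than anything bundled with vSphere; the vCenter address, credentials, VM name and the 500 IOPS cap are all placeholder assumptions, and certificate handling is omitted for brevity.

    # Sketch: cap every virtual disk on one VM at the same IOPS limit.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                      user="administrator", pwd="secret")  # placeholder credentials
    content = si.RetrieveContent()

    # Find the test VM by its (placeholder) name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "test-vm-01")

    # Each virtual disk needs its own allocation for the limit to be enforced,
    # so walk every disk on the VM and attach the same cap.
    device_changes = []
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            device.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
                limit=500,  # assumed IOPS cap
                shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=1000))
            device_changes.append(vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=device))

    vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=device_changes))
    Disconnect(si)

The shares value in the sketch is honored only when the level is set to custom; I left it at normal so the snippet mirrors the limit-first approach described above.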

Memory

VMware included a handy memory innovation in vSphere 4.1 called "memory compression." IT managers would do well to become familiar with the feature, as it is enabled by default. In my tests I saw improvements in virtual machine performance after I artificially constrained the amount of physical host memory. As my VM systems came under memory pressure while handling test workloads, my ESX 4.1 system began compressing virtual memory pages and storing them in a compressed memory cache.

Since accessing this memory is significantly faster than swapping memory pages to disk, the virtual machines ran much faster than when I disabled the feature and started the same workloads. System and application managers will likely need to work together to determine the best formula for utilizing memory compression. I made extensive use of the memory performance metrics to see what was happening to my test systems as I constrained the amount of host memory. IT managers should expect to devote at least several weeks of expert analysis to determining the most effective memory compression configuration for each workload.
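
For readers who want to see where the knobs live, here is another hedged pyVmomi sketch, this one reading a host's memory compression settings. The vCenter and host names are placeholders, and Mem.MemZipEnable and Mem.MemZipMaxPct are the ESX advanced settings I am assuming correspond to the feature's on/off switch and its cache ceiling.

    # Sketch: read the memory compression advanced settings from one ESX host.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                      user="administrator", pwd="secret")  # placeholder credentials
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.com")  # placeholder

    adv = host.configManager.advancedOption
    for key in ("Mem.MemZipEnable", "Mem.MemZipMaxPct"):
        for option in adv.QueryOptions(key):
            # MemZipEnable of 1 means compression is on; MemZipMaxPct is the
            # compression cache ceiling as a percentage of VM memory.
            print(option.key, "=", option.value)

    Disconnect(si)

The same OptionManager object also exposes an UpdateOptions call for changing these values, and the memory performance metrics mentioned above surface compression activity through counters such as zipped and zipSaved memory.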

Housekeeping

In addition to the changes made in handling system resources, VMware did some housekeeping in this incremental release of vSphere. The vSphere client is still available in the vCenter 4.1 installation bits but is no longer included in the ESX and ESXi code. Instead, users are directed to a VMware website to get the management client. There were some minor changes made to various interface screens, but nothing that will puzzle an experienced IT administrator.
