VMware's Virtual Infrastructure 3 offers enterprises an impressive, mature framework for making virtualization's promises a reality.
eWEEK Labs installed VMware's ESX Server, which forms the foundation of Virtual Infrastructure 3, onto a variety of Intel- and Advanced Micro Devices-powered servers.
We bound the boxes together under VMware's VirtualCenter management server. From there, we installed several Linux, Windows and Solaris virtual machines onto our ESX hosts, and we were pleased with how the ESX/VirtualCenter duo enabled us to fine-tune our VM implementations.
Companies looking to consolidate single-application servers, to squeeze more out of under-utilized hardware, to extend the availability of their networked services or to get a surer handle on the machines in their data centers would do well to evaluate Virtual Infrastructure 3, which can deliver compelling results in any of these scenarios.
Our experience with VI3 (or Virtual Infrastructure 3.0.1, to be exact) was not, however, devoid of rough patches. For one thing, we were disappointed by the Windows centricity of VI3's management tools. In our opinion, one of the greatest attributes of VMware's Server,
Player and Workstation products is their support for Linux as well as Windows. In contrast, VI3's Virtual Infrastructure Client runs only on Windows, and the product's licensing server is also Windows-only.
Speaking of licensing, we found VMware's product licensing somewhat confusing. In fact, we spent at least as much time poring over VI3 documentation regarding licensing as we spent studying high-end VI3 features such as VMotion live migration.
We turned twice during our testing to the aid of VMware licensing representatives, whom we were able to contact via an instant messaging interface built into VMware's Web site. On the bright side, the licensing representatives were well-equipped to get us pointed in the right direction.
VMware's product line is the clear leader among x86- and x86-64-based server virtualization products, and VI3 is the firm's flagship product. We do recommend keeping an eye on the emerging Xen-based offerings from Virtual Iron and XenSource, as well as on the Xen-based functionality that's built into Novell's SUSE Linux Enterprise Server and Red Hat's upcoming Red Hat Enterprise Linux 5. (Stay tuned for eWEEK Labs' forthcoming investigation of these Xen-based challengers.)
Also worthy of consideration are the operating system-level virtualization capabilities offered by Sun Microsystems' Solaris 10 and the Windows- and Linux-based products from Virtuozzo. Each of these products provides solid resource management controls for virtualization.
A VI3 for all servers
VMware sells VI3 in three editions. VI3 Starter, which costs $1,000 per pair of CPU sockets, is limited to servers with a maximum of four CPU sockets and 8GB of RAM, and does not support SAN (storage area network) or iSCSI storage. VI3 Standard, at $3,750 per pair of CPU sockets, carries no server CPU or RAM limits, supports SAN and iSCSI storage, and can expose as many as four virtual processors to guest VMs. VI3 Enterprise, at $5,750 per pair of CPU sockets, adds support for VMotion live migration, VMware HA (High Availability) guest failover, VMware DRS (Distributed Resource Scheduler) and VMware Consolidated Backup.
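Because every edition is priced per pair of CPU sockets, working out a license bill is simple arithmetic. The sketch below is our own illustration (not a VMware tool) of that math, rounding odd socket counts up to the next pair:

```python
# VI3 list prices per pair of CPU sockets, per the editions above.
PRICE_PER_SOCKET_PAIR = {
    "Starter": 1000,
    "Standard": 3750,
    "Enterprise": 5750,
}

def license_cost(edition, cpu_sockets):
    """Return the VI3 license cost for one server with the given socket count."""
    pairs = -(-cpu_sockets // 2)  # ceiling division: 3 sockets -> 2 pairs
    return pairs * PRICE_PER_SOCKET_PAIR[edition]
```

For example, licensing a four-socket server under VI3 Standard comes to two socket pairs, or $7,500.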
We could issue each of our ESX Servers and our VirtualCenter server its own license file, or we could create a single license file containing enough entitlements to run all of our systems and serve that file up through a Macrovision Flex licensing server running on a Windows system.
There's a tool on VMware's Web site for converting an activation code into the combination of licenses your company requires. We ran our licensing server from within a VM on one of our ESX Servers. This gave us enough leeway to keep our servers running even during license server reboots, but we'd advise setting up a separate license server for production.
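For readers unfamiliar with Flex licensing, such files follow the standard SERVER/VENDOR/INCREMENT layout. The fragment below is purely illustrative; the host name, host ID, daemon name and feature fields are placeholders of ours, not VMware's actual license contents:

```
SERVER licensehost 000c29aabbcc 27000    # host name, host ID, TCP port
VENDOR VENDORD                           # vendor daemon (name illustrative)
INCREMENT SAMPLE_FEATURE VENDORD 3.0 permanent 4 \
    HOSTID=ANY SIGN=...
```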
During our tests, we explored VI3s Standard Edition functionality, as well as its VMotion capabilities, but we scarcely scratched the surface of the products high availability and dynamic resource balancing attributes. We plan to fully explore these features in a future story.
We tested VI3 on a Sun x4200 server with four dual-core 2.393GHz Opteron processors and 7.87GB of RAM; an IBM x3655 server with four dual-core Opterons and 4GB of RAM; and a pair of IBM eServer 325 servers, each with dual 1,595MHz Opteron processors and 2GB of RAM. We opted to add the eServers to the mix to test VI3's VMotion live migration capabilities.
To migrate virtual machines from one host to another via VMotion, the processors of each host must be fairly similar; as we learned, the dual-core Opterons that drove our Sun and IBM boxes weren't close enough.
Be sure to check out VMware's HCL (hardware compatibility list) before planning your own VI3 implementation. In any case, once we arrived at a working configuration, we were pleased by the smoothness with which VI3 managed VMotion migrations during testing.
In fact, we were pleasantly surprised to find that our eServers were equipped to join the party at all, since the machines include only IDE drives, and previous versions of ESX Server required SCSI drives to operate. During our tests, we were able to install ESX Server on the IDE drives of our eServers, although those drives weren't available to us for use as VMFS-formatted data stores once we'd brought the systems up. (VMFS is VMware's high-performance cluster file system for VM storage.)
Instead, we provided those machines—as well as our two other, beefier test boxes—with shared storage via an iSCSI SAN that we cobbled together for testing using the open-source Openfiler project.
We downloaded an Openfiler appliance from rPath.org, set it up and had our four ESX Server systems consuming iSCSI storage within about an hour. Again, while this configuration worked well for testing purposes, we'd advise sticking to approved SANs from VMware's HCL for production purposes.
Among 32-bit operating systems, VI3 explicitly supports Windows NT4 Service Pack 6a through Windows Server 2003 R2 and Vista; Red Hat Enterprise Linux Versions 2.1 through 4; Novell's SUSE Linux Enterprise Server Versions 8 through 10; Novell's Open Enterprise Server; Novell NetWare Versions 5.1 through 6.5; and Sun's Solaris 10.
VI3 supports the 64-bit versions of Windows Server 2003 R2, Red Hat Enterprise Linux 3, SUSE Linux Enterprise Server 10 and Solaris 10. We also tested a couple of 64-bit rPath Linux-based OS appliances, the 32-bit version of Debian "Etch" and the most recent release of Solaris Express, the test branch that will become Solaris 11. All of these systems ran well in our VI3 tests.
We accessed our ESX Servers individually and as a group through the VirtualCenter server, using a Windows client application based on Version 1.1 of Microsoft's .Net Framework.
Its Windows-only limitation aside, we were happy overall with the Virtual Infrastructure Client, through which we could create and configure individual virtual machines, as well as manipulate configuration options related to our ESX Server hosts.
For most operations, such as adding a hard drive to a guest VM, we'd make the changes we wished and hit OK; we could then move on to other operations while a status bar in a Recent Tasks window at the bottom of the interface ticked off the operation's progress toward completion.
For other activities, such as those involved in configuring iSCSI targets for particular hosts, the client interface locked up until the operation was done, barring us from undertaking unrelated activities for other ESX Server hosts. We were tempted at times to launch a second instance of the client to get back to work while waiting for these sorts of operations to finish.
VI3 offers a rather broad set of resource allocation tools; we used them to reserve CPU, memory and disk resource levels for specific VMs, as well as to define more broadly the shares of available resources to devote to particular machines or pools of machines.
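Conceptually, this reservation-plus-shares model works as in the sketch below, a simplified model of proportional-share allocation of our own devising, not VMware's actual scheduler: reservations are satisfied first, and leftover capacity is split in proportion to each VM's shares.

```python
def allocate_cpu(capacity_mhz, vms):
    """Divide host CPU among VMs: honor reservations, split the rest by shares.

    vms is a list of dicts with 'name', 'reservation' (MHz) and 'shares' keys.
    Assumes reservations were admission-checked to fit within capacity.
    """
    remaining = capacity_mhz - sum(vm["reservation"] for vm in vms)
    total_shares = sum(vm["shares"] for vm in vms)
    return {
        vm["name"]: vm["reservation"] + remaining * vm["shares"] / total_shares
        for vm in vms
    }
```

Doubling one VM's shares thus doubles its slice of the uncommitted capacity without touching any VM's guaranteed reservation.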
VI3 supports authentication using Microsoft Active Directory, as well as a VI3-specific authentication scheme, which we used in our testing. Through it, we could create separate administrative users authorized to carry out particular roles on individual ESX Servers or across the set of test servers as a whole.
Advanced Technologies Analyst Jason Brooks can be reached at firstname.lastname@example.org.