IT managers who are used to being praised for the huge equipment and operational savings accrued from implementing server virtualization solutions from VMware, Citrix XenServer and now Microsoft Hyper-V are facing the task of preserving these savings with effective and efficient virtual machine management.
Between server, desktop, application and storage virtualization, IT managers will face a growing threat of rising management costs unless effective management and capacity planning tools are put in place, and soon. VM sprawl, the creation of virtual machines without a life cycle plan for ongoing utilization monitoring, configuration management (including patch management) and automated VM placement according to physical host, business process and security policy constraints, must be stopped at once.
Check out the eWEEK Labs review of Microsoft System Center Virtual Machine Manager 2008.
If your organization creates VMs without a life cycle plan, you are in immediate peril.
Currently installed tools from BMC Software, CA, Hewlett-Packard and IBM/Tivoli aren’t yet ready for the next challenge IT managers face when it comes to x86 server virtualization. And while the management tools that prevailed in production environments in the age of one app/one hardware server aren’t sufficient to manage dynamic virtual environments, we have learned enough from these systems to understand what belongs in a cross-platform virtualization tool.
Even organizations that are currently only using VMware should put cross-platform management tools on the strategic short list for 2009. Arell Chapman, assistant vice president of network administration for Michigan-based United Bancorp, is a user of Vizioncore virtualization management tools and has deployed a VMware platform.
“We are not currently looking to use more than one virtualization platform in production for at least a year,” Chapman said. “We are considering testing Microsoft Server 2008, and we plan to conduct a head-to-head test of VMware and Citrix’s respective VDI [virtual desktop infrastructure] platform sometime next year.”
A Brief History of Time
Management platforms in the one-app/one-server age used time and events to determine system health. Performance and utilization thresholds were measured with a yardstick calibrated to time. If more than a specified number of critical events happened in a certain amount of time, an alarm was fired. If a server or application failed to issue a heartbeat packet over a certain number of seconds or minutes, that meant trouble.
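The time-calibrated logic described above can be sketched in a few lines. This is a hypothetical illustration of the old approach, not any vendor's implementation; the event window, thresholds and class name are invented for the example:

```python
from collections import deque

class TimeWindowMonitor:
    """Old-style monitor: fire an alarm when too many critical events
    fall inside a sliding time window, or when heartbeats stop arriving."""

    def __init__(self, max_events, window_secs, heartbeat_timeout_secs):
        self.max_events = max_events
        self.window_secs = window_secs
        self.heartbeat_timeout_secs = heartbeat_timeout_secs
        self.events = deque()       # timestamps of critical events
        self.last_heartbeat = None  # timestamp of last heartbeat packet

    def record_event(self, now):
        """Record a critical event; return True if the alarm should fire."""
        self.events.append(now)
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window_secs:
            self.events.popleft()
        return len(self.events) > self.max_events

    def record_heartbeat(self, now):
        self.last_heartbeat = now

    def heartbeat_lost(self, now):
        """True if no heartbeat has arrived within the timeout -> trouble."""
        return (self.last_heartbeat is None or
                now - self.last_heartbeat > self.heartbeat_timeout_secs)
```

Both checks are nothing more than arithmetic on timestamps, which is exactly why they fail when VMs hibernate or migrate on their own schedule.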
Virtual machines break this simple, and formerly useful, method of system and application management. VMs can hibernate when not needed or move to a new location based on utilization rules. Just as bad, new systems can come online but go undiscovered by the management system for hours, even days or weeks.
In the previous age, system discovery just didn’t need to happen that frequently. It was often an unnecessary burden on the network to enable overly chatty polling mechanisms. It still is, which is why cross-platform system management tools should use information provided via an API from the hypervisor to instantiate and maintain system and application monitoring rather than relying on simple polling.
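The difference between polling and API-driven discovery can be sketched as follows. This is a hedged illustration only: real hypervisor APIs expose event or change feeds, but the event shapes, field names and `Inventory` class here are all invented for the example:

```python
# Event-driven discovery sketch: rather than polling every host on a
# timer, the management tool consumes an event feed from the hypervisor
# API and updates its inventory the moment a VM changes state.

class Inventory:
    def __init__(self):
        self.vms = {}  # vm_id -> current state ("running", "suspended", ...)

    def apply_event(self, event):
        kind, vm_id = event["kind"], event["vm_id"]
        if kind == "vm_created":
            self.vms[vm_id] = "running"
        elif kind == "vm_suspended":
            self.vms[vm_id] = "suspended"
        elif kind == "vm_migrated":
            self.vms[vm_id] = "running"  # still running, on a new host
        elif kind == "vm_deleted":
            self.vms.pop(vm_id, None)

def sync_from_feed(inventory, events):
    """Replay a batch of hypervisor events; no per-host polling needed."""
    for event in events:
        inventory.apply_event(event)
```

The point of the sketch is that the hypervisor already knows when a VM appears, moves or disappears; a management tool that subscribes to that knowledge discovers changes immediately instead of waiting for the next polling sweep.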
Tight Integration
In the same vein, capacity planning tools should be tightly integrated with the hypervisor. Data about available resources, the rate of VM creation, the resource requirements of these VMs, the length of time these systems exist and the business constraints that govern VM placement all factor into new physical hardware buying plans.
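A back-of-the-envelope version of that buying calculation might look like the following. All of the numbers and the function name are assumptions for illustration; a real capacity planner would draw these figures from the hypervisor API, per the factors listed above:

```python
import math

def months_until_capacity_exhausted(total_host_gb, used_gb,
                                    new_vms_per_month, avg_vm_gb):
    """Rough forecast: how many whole months of VM growth fit in the
    remaining physical memory before new hardware must be purchased."""
    free_gb = total_host_gb - used_gb
    monthly_demand_gb = new_vms_per_month * avg_vm_gb
    if monthly_demand_gb <= 0:
        return math.inf  # no growth, no purchase pressure
    return int(free_gb // monthly_demand_gb)

# Example: 512GB across the cluster, 320GB in use, six new 8GB VMs per
# month -> 192GB free / 48GB of monthly demand = 4 months of headroom.
```

Even a toy model like this makes the article's point concrete: the inputs (VM creation rate, VM footprint, available resources) all come from the hypervisor, so a capacity tool that isn't integrated with it is guessing.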
As virtualization moves from the test/development environment into production, we will pass from simply reusing and recycling existing equipment into prudently acquiring new physical resources to support ongoing virtualization.
It couldn’t be happening at a worse time. Business leaders have been weaned from the hideously inefficient practice of demanding one app/one server and have come to enjoy the tremendous cost savings and drastically reduced deployment times. As these benefits meet the limit of what can be done with existing equipment, IT managers will need to go to business leaders for new hardware to support future computing growth, all at a time when organizations are confronted with the worst economic conditions since the 1930s.
All That Is Old Is New Again
Capacity planning and inventory management have always been a part of good IT operations. Having the right spare equipment in the right place and using cost/benefit analysis to line up priorities with business needs are fundamental IT skills. What is new is that virtualization puts a twist on capacity planning that will separate the good from the best in IT. Because physical hardware is shared among departments, with utilization commingled on the same hosts, chargeback and capacity prediction become much more difficult tasks.
Business users will get reports on actual utilization and will expect to pay only for the resources they use. In the one-app/one-server age, business units blithely paid for equipment, utilities and IT staff regardless of how much they actually used. With the advent of detailed usage reports and budget scarcity, those days are gone.
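Usage-based chargeback of the kind described above reduces to a simple metered calculation. The record fields, rates and function name below are illustrative assumptions, not any product's billing model:

```python
def chargeback(usage_records, rate_per_cpu_hour, rate_per_gb_hour):
    """Bill each business unit for the resources it actually consumed,
    instead of a flat per-server fee."""
    bills = {}
    for rec in usage_records:
        cost = (rec["cpu_hours"] * rate_per_cpu_hour +
                rec["gb_hours"] * rate_per_gb_hour)
        bills[rec["unit"]] = bills.get(rec["unit"], 0.0) + cost
    return bills
```

The hard part is not the arithmetic but the metering: the per-unit CPU-hour and GB-hour figures must come from accurate utilization data on shared hosts, which is exactly where the management tools discussed here earn their keep.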
In the heady days and months ahead, effective management of virtualized IT resources across platforms and accounting for future capacity needs will be the new measure of IT effectiveness and efficiency.
eWEEK Labs Technical Director Cameron Sturdevant can be reached at csturdevant@eweek.com.