Companies have embraced virtualization for the many efficiencies the technology brings to the data center. But with the increasing use of virtualization comes an increasing challenge: managing and securing all those virtual server instances.
In fact, the benefits of virtualization are eroded when virtual machines are not controlled by a life-cycle process.
VM sprawl (the creation of VMs without ongoing utilization monitoring, desired-state configuration management, or an automated process for correlating VM and physical host performance characteristics) can be remediated cost-effectively if IT managers strike the right balance between ease of VM creation and management oversight.
What follows are best-practice suggestions based on eWEEK Labs’ virtual testing implementation and tips from virtual infrastructure users and management vendors.
The bottom line for IT managers is that VM sprawl can be controlled, but only if IT expertise is combined with an almost ruthless adherence to procedures designed to enforce standard configurations, maximum resource utilization and dynamic reallocation of computing capacity.
The Physical and Virtual Relationship
Consolidating workloads that currently run on underused physical systems is what makes virtual systems so attractive. In a Ziff Davis Enterprise Editorial Research survey conducted for eWEEK, 75 percent of respondents said that improving server utilization was among the main drivers leading to a virtualization implementation at their organizations.
Long after the honeymoon with virtualization ends (that is, once maximum physical server utilization is achieved), management efficiency will likely rise in importance for virtualization projects. IT managers will have to use management systems to ensure that virtual systems are maintained in a desired configuration state that includes security and operational patches to both the operating system and applications.
In other words, virtualization drivers that scored low on the survey, including lowered staff costs, will become much more important when server utilization and the accompanying hardware cost reductions are driven out of the equation by widespread use of virtualization.
The management question is critical because there is a relationship between physical resources and VMs. This is especially true of VM performance. Unused CPU cycles, excess network bandwidth and underused RAM create the virtual real estate upon which entire “cities” of virtual systems have been created. Virtual server sprawl is created when management systems designed for purely physical systems don’t keep up with tracking the relationship between physical machines and VMs.
Transforming traditional physical management into a hybrid that manages both physical machines and VMs is the first step in controlling sprawl. But it’s not the only step. Sprawl is created if there is a loss of control after the machines are created, if there is no orderly plan for maintaining machines in a desired configuration once they are placed in production, or if machines are abandoned but not decommissioned when they are no longer used.
The question of when to terminate a VM is most applicable to test and development environments, where there is a need to ensure the orderly decommissioning of virtual systems. As projects end, IT managers will need to take down unused systems so that physical compute resources can be reallocated. IT managers should ask project leaders to specify a date when the virtual system will be turned off and to use management tools to monitor server utilization. Ferret out owners of unused systems to ensure they have a legitimate need for the resource.
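The decommissioning sweep described above can be automated against the VM inventory. The sketch below is illustrative; the record fields, names and the 5 percent idle threshold are assumptions, not any particular platform's API:

```python
from datetime import date

def flag_for_review(vms, today, idle_threshold=0.05):
    """Return names of VMs past their agreed shutdown date or nearly idle.

    Each record is a dict with hypothetical fields: 'name',
    'decommission_on' (a date or None) and 'cpu_util_30d' (0.0-1.0).
    """
    flagged = []
    for vm in vms:
        past_due = vm["decommission_on"] is not None and vm["decommission_on"] <= today
        idle = vm["cpu_util_30d"] < idle_threshold
        if past_due or idle:
            flagged.append(vm["name"])
    return flagged
```

The flagged list goes back to the system owners, who must confirm a legitimate ongoing need before the VM escapes reclamation.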
However, before virtual systems are taken down, they must be created, which is also one of the best places to start managing virtualization.
Server virtualization brings industrial production to the data center. One key to industrial design is placing a premium on creating products that are cost-effective in mass production.
In the case of server virtualization, IT managers should create a catalog of standard server configurations from which business managers can choose to install new applications. For example, Option A could be a single-processor system with 2GB of RAM and 10GB of storage with a single host bus adapter. Option B could be a two-processor system with 4GB of RAM and 20GB of storage. The purpose of creating and enforcing a standard catalog of virtual systems is to prevent customized server sprawl, where each VM is handcrafted for each application. Maintenance costs associated with customized servers can eat up much of the cost savings gained from implementing a virtual infrastructure.
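Such a catalog can be enforced at the provisioning step itself. A minimal sketch, with option names and sizes mirroring the hypothetical examples above:

```python
# Approved VM sizes; options and specs echo the illustrative catalog above.
CATALOG = {
    "A": {"vcpus": 1, "ram_gb": 2, "disk_gb": 10},
    "B": {"vcpus": 2, "ram_gb": 4, "disk_gb": 20},
}

def request_vm(option):
    """Provision only from the approved catalog; reject handcrafted specs."""
    if option not in CATALOG:
        raise ValueError(f"unknown option {option!r}: custom builds are not permitted")
    return dict(CATALOG[option])  # return a copy so callers cannot mutate the catalog
```

Routing every request through a gate like this is what keeps "Option C, but with a little more RAM" from creeping back in.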
One of the best ways to ensure that servers are correctly configured and easily maintained is to provide business managers with enough standard configuration choices to satisfy most of their application needs and to disallow the return of handcrafted server customizations that drive up the amount of maintenance labor time.
Implement a resource SLA (service-level agreement) as part of virtual server instantiation. Go as far up the organizational chart as necessary to get an authoritative decision on which applications will be preordained winners and losers in the event of a contest for available physical resources. Be ready to offer suggestions based on application type. For example, order processing and the associated database systems are easy candidates for a guaranteed high SLA because they (presumably) conduct high-value transactions.
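The agreed rankings can be captured in a simple priority table, so that contention is resolved by policy rather than ad hoc. The application names and tiers below are invented examples:

```python
# Hypothetical SLA tiers (1 = highest priority) agreed with the business.
SLA_TIER = {
    "order-processing": 1,
    "order-db": 1,
    "intranet-portal": 3,
    "test-build": 4,
}

def throttle_order(apps):
    """Return apps lowest-priority first: the preordained 'losers' when
    physical resources run short. Apps with no agreed tier are throttled first."""
    return sorted(apps, key=lambda app: SLA_TIER.get(app, 99), reverse=True)
```

Note the default: a workload nobody bothered to rank is, by policy, the first to give up resources, which is itself an incentive to register systems properly.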
Create only the number of systems needed to adequately support an application. For organizations that are new to virtualization, and where there are virtualization skeptics, plan on allowing at least six months to prove the reliability of the virtual system creation process. If IT managers can show business managers that creating servers as needed based on actual performance reports works, then it will be possible to right-size applications from the get-go. For organizations that forecast IT budgets based on capacity trends, it is especially important to accurately measure server use.
It is also important to ensure that system owner information is up-to-date. The ease with which virtual systems can be created also makes it easier for server owners to forget about these VMs. Unlike physical systems that require an extensive budget process, a physical implementation, an accounting depreciation and sometimes lease company accountability, virtual systems can easily sit forgotten and unaccounted for. Aside from taking up virtual processing overhead, these forgotten systems soak up management cycles because they must be updated. Further, idle virtual systems can be a security hazard.
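An ownership audit along these lines can be scripted. This sketch assumes the inventory records a hypothetical 'owner' field and that a set of current user IDs is available from the directory:

```python
def orphaned_vms(vms, active_staff):
    """Flag VMs with no recorded owner, or whose owner has left the company.

    'vms' is a list of dicts with assumed 'name' and 'owner' fields;
    'active_staff' is a set of current user IDs from the directory.
    """
    return [vm["name"] for vm in vms
            if not vm.get("owner") or vm["owner"] not in active_staff]
```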
One of the great timesavers of a virtual infrastructure is the ability to instantiate new servers by cloning an existing system, and it’s important that new systems are cloned from approved images. It may be quicker to clone a VM from one already in production, but tracking the lineage of that system to ensure it has the right security patches and service configurations is extremely difficult. Also consider that a VM cloned from a system operating in one part of your network may not be correctly configured to operate in another part. For example, an internal Web portal system that gets cloned and placed in a DMZ would be a sitting duck.
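A provisioning gate can enforce both rules at once: clone only from approved golden images, and keep clones in the network zone they were configured for. The image names and zone labels here are assumptions:

```python
# Assumed set of approved, patched golden images maintained by IT.
APPROVED_IMAGES = {"rhel4-web-v3", "win2003-base-v7"}

def clone_allowed(source_image, source_zone, target_zone):
    """Permit a clone only from an approved template, placed in the same
    network zone it was hardened for (e.g., never internal -> DMZ)."""
    return source_image in APPROVED_IMAGES and source_zone == target_zone
```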
Moves and Changes
It’s relatively easy with physical systems to understand the interdependencies of various applications because the systems are often kept in close physical proximity to one another. With virtual systems, the interdependencies may not be as obvious. It’s therefore important to have a plan for tracking dependencies and where interdependent systems move in the physical infrastructure.
Understand the policies that govern the creation and movement of virtual systems. Sprawl can result when IT loses track of where virtual servers reside after a VM is moved to another physical host to improve performance. Most virtualization platforms include the ability to monitor virtual server performance and the availability of physical compute resources. When a VM reaches a predefined high utilization rate, the allocation module can move virtual systems to physical hosts with more available power. It is up to IT managers to monitor this process and ensure that virtual server resources are accounted for, regardless of the physical host where the virtual system currently resides.
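The accounting side of automated migration can be as simple as logging every move the allocation engine makes. A toy rebalancer, with the field names and the 0.85 high-water mark as assumptions:

```python
def rebalance(hosts, high_water=0.85):
    """Move one VM off each host above the utilization high-water mark to the
    least-loaded host, returning a move log so every VM stays accounted for.

    Each host is a dict with hypothetical 'name', 'util' and 'vms' fields.
    """
    moves = []
    for host in sorted(hosts, key=lambda h: h["util"], reverse=True):
        if host["util"] > high_water and host["vms"]:
            target = min(hosts, key=lambda h: h["util"])
            if target is host:
                continue
            vm = host["vms"].pop()
            target["vms"].append(vm)
            moves.append((vm, host["name"], target["name"]))
    return moves
```

The returned move log, not the migration itself, is the point: it is what lets the inventory answer "where is this VM now?" after the platform has shuffled workloads.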
In October, the Center for Internet Security released its first benchmark for securing VMware’s ESX Server 3.x. According to CIS, “the benchmark is a compilation of security configuration actions and settings that ‘harden’ virtual machines.”
Many longstanding configuration management vendors, including Configuresoft, provide management tools that check desired-state configuration against actual configurations for physical and virtual systems. Remember that VMs share physical resources: systems requiring stringent security configurations should be co-hosted on the same physical hosts, so that low-priority virtual servers, which may not be as diligently maintained, cannot draw shared physical resources away from high-priority ones.
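At its core, the desired-state check such tools perform is a diff between a baseline and the observed settings. A minimal sketch; the setting names are invented examples, not any vendor's schema:

```python
def config_drift(desired, actual):
    """Return settings that differ from the desired-state baseline,
    mapped to (expected, found) pairs. Missing settings show as None."""
    return {key: (desired[key], actual.get(key))
            for key in desired if actual.get(key) != desired[key]}
```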
Connect the dots between physical and guest systems and ensure that configuration management on both types of systems is in place to help manage a virtualization rollout.
Monitor, Manage, Report
Implement a system management platform that can integrate with your chosen virtualization platform(s).
Virtual systems share physical resources that were never shared in the one physical server/one application model. From the beginning of a virtualization project, it’s important to understand how to correlate physical system performance with the hosted machines running on the physical resource.
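Correlating the two layers amounts to rolling per-guest demand up to the physical host each guest runs on. A minimal sketch, with the metric fields and GHz unit as assumptions:

```python
def host_demand(guest_metrics):
    """Sum per-guest CPU demand (GHz, an assumed unit) by physical host.

    'guest_metrics' is a list of dicts with hypothetical
    'host' and 'cpu_ghz' fields, one per guest sample.
    """
    totals = {}
    for guest in guest_metrics:
        totals[guest["host"]] = totals.get(guest["host"], 0.0) + guest["cpu_ghz"]
    return totals
```

Comparing these rolled-up totals against each host's physical capacity is what reveals both overloaded hosts and the unused "real estate" where new VMs can safely land.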
Monitor the tempo of VM creation to see if auditing intervals are frequent enough. It may be prudent to increase compliance audit frequency in response to rapid VM deployment.
Requests for capacity planning forecasts are an inevitable outgrowth of successful virtualization projects. In organizations where virtualization evolves from “great to have” to “standard operating procedure,” IT managers will have to accurately measure the amount of physical resources needed to host the expected increase in virtual infrastructure requirements. Only management systems that track utilization over time will be able to provide reasonable metrics for sizing future IT hardware requirements.
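As one deliberately naive sketch of such a forecast, a linear projection over historical monthly VM counts (all figures hypothetical):

```python
import math

def hosts_needed(monthly_vm_counts, vms_per_host, months_ahead):
    """Project VM growth linearly from historical monthly counts and size
    the physical host pool accordingly. A naive trend model, not a planner."""
    growth = (monthly_vm_counts[-1] - monthly_vm_counts[0]) / (len(monthly_vm_counts) - 1)
    projected = monthly_vm_counts[-1] + growth * months_ahead
    return math.ceil(projected / vms_per_host)
```

Real capacity tools weigh CPU, memory and storage trends separately; the point here is only that a forecast is impossible without the utilization history to feed it.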
Maturing virtualization infrastructures that have been in place for two or three years will increasingly be measured against a performance yardstick that disallows infrastructure management mistakes. Today, there is still a halo glow around wringing huge productivity gains from existing hardware. Tomorrow, it will be expected, and downtime caused by uncontrolled virtual infrastructure sprawl will be a mark against IT.