As heterogeneous data centers become more complex, so does the scale of the virtualization project. Managing not only multiple platforms but also countless hypervisors, operating systems and virtualization tools compounds that complexity. In addition, virtual machines can proliferate rapidly, leading to poor resource utilization and excessive IT overhead.
Enterprises can, however, adopt virtualization management practices that transform the IT environment from a landlocked resource into a model of utility computing. Here are what I believe are the top 10 steps to successful virtualization in a mixed data center environment:
Step No. 1: Take it slow
Are you eager to reap the full benefits of virtualization and under pressure to rein in costs? Avoid the temptation to leap before you look. Virtualizing everything at once is bound to cause unforeseen problems. Take small steps first, monitor the resulting performance and management issues, and then move on to more virtualization opportunities.
Step No. 2: Evaluate current server workloads
Gaining a thorough understanding of server workloads is the critical first step in determining which applications should be virtualized. The best candidates are applications running on Web, infrastructure or application servers. Applications with high performance sensitivity or very heavy I/O requirements, however, tend to fare poorly in a virtualized environment.
Gather detailed data on all hardware and software assets across the data center, and analyze workload utilization to develop optimal server consolidation plans. Then use the collected utilization data to generate a hardware utilization report that identifies workload and resource mismatches such as under- or over-utilized servers, as sketched below.
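Here is a minimal sketch of that kind of report, assuming CPU utilization samples have already been collected per server. The server names, sample values and thresholds are illustrative, not prescriptive:

```python
# Minimal sketch: flag under- and over-utilized servers from collected
# utilization samples. Names, data and thresholds are illustrative.
from statistics import mean

# Hypothetical data: CPU utilization samples (percent) per server.
samples = {
    "web01": [12, 9, 15, 11],
    "app01": [85, 91, 88, 94],
    "db01":  [55, 60, 48, 52],
}

UNDER, OVER = 20, 80  # illustrative utilization thresholds (percent)

for server, cpu in samples.items():
    avg = mean(cpu)
    if avg < UNDER:
        print(f"{server}: {avg:.0f}% avg CPU -> consolidation candidate")
    elif avg > OVER:
        print(f"{server}: {avg:.0f}% avg CPU -> over-utilized, add capacity")
    else:
        print(f"{server}: {avg:.0f}% avg CPU -> balanced")
```

The same pass can be repeated for disk, memory and network counters to round out the mismatch report.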
Step No. 3: Analyze the complete workload life cycle
Analyzing workloads at a single point in time is not particularly helpful, since workloads can vary drastically by time of day, season or other variables. Record utilization data over a significant period, such as one spanning a financial quarter end, to ensure that all ebbs and flows in resource utilization are captured. You can then develop a workload profile that gives a clear picture of server utilization trends and anomalies in CPU, disk, memory and network utilization. This brings consistency and predictability to workload management, and it yields the trend data needed to automate provisioning and capacity planning.
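As a simple sketch of what a workload profile might look like, the snippet below aggregates a quarter's worth of samples by hour of day to expose the daily ebb and flow. The data shape and values are assumptions for illustration:

```python
# Minimal sketch: build an hour-of-day utilization profile from long-term
# samples so daily peaks and troughs become visible. Data is hypothetical.
from collections import defaultdict
from statistics import mean

# Hypothetical samples: (hour_of_day, cpu_percent) pairs collected over weeks.
samples = [(9, 70), (9, 75), (14, 30), (14, 25), (2, 5), (2, 8)]

by_hour = defaultdict(list)
for hour, cpu in samples:
    by_hour[hour].append(cpu)

profile = {hour: mean(vals) for hour, vals in sorted(by_hour.items())}
peak_hour = max(profile, key=profile.get)
print(f"profile: {profile}")
print(f"peak demand at hour {peak_hour:02d}:00 ({profile[peak_hour]:.0f}% CPU)")
```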
Step No. 4: Apply “what-if” modeling
Use what-if modeling to find the best combination of hardware and virtual hosts to maximize utilization, avoid resource contention and forecast future workloads. It is also prudent to run scale-up and scale-out scenarios to ensure sufficient capacity for current and future needs without over-provisioning the consolidated environment, a common and costly problem.
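One simple form of what-if modeling is packing measured workload demands onto candidate hosts while reserving headroom for spikes. The sketch below uses a first-fit-decreasing placement; the capacity figures, headroom value and workload demands are illustrative assumptions:

```python
# Minimal what-if sketch: pack workload CPU demands onto hosts with
# first-fit decreasing, leaving headroom. Numbers are illustrative.
HOST_CAPACITY = 100  # normalized CPU units per host
HEADROOM = 0.2       # reserve 20% per host for spikes

workloads = {"web01": 35, "web02": 30, "app01": 55, "batch01": 20}

usable = HOST_CAPACITY * (1 - HEADROOM)
hosts = []  # each entry: [used_units, [workload names]]

# Place the largest demands first; open a new host only when nothing fits.
for name, demand in sorted(workloads.items(), key=lambda w: -w[1]):
    for host in hosts:
        if host[0] + demand <= usable:
            host[0] += demand
            host[1].append(name)
            break
    else:
        hosts.append([demand, [name]])

print(f"{len(hosts)} hosts needed:")
for i, (used, names) in enumerate(hosts, 1):
    print(f"  host{i}: {used}/{usable:.0f} units -> {names}")
```

Rerunning the model with projected demand growth gives a quick scale-up versus scale-out comparison before any hardware is bought.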
Step No. 5: Check your management tools
Many virtualization management tools can manage only one type of hypervisor. If your virtual server farm and physical data center are completely separate, separate tools may suffice. But organizations that want to create and maintain a dynamic data center, where workloads move easily between physical and virtual machines, are better served by a single tool that spans both.
Step No. 6: Test before going live
Test virtualized servers to ensure application performance doesn't degrade when too many resource-intensive applications share the same physical server. This is where workload migration tools come in handy: they decouple data, applications and operating systems from the underlying hardware and stream them to any physical or virtual platform, making it easy to deploy virtual machines to test servers before moving into live production.
Step No. 7: Take advantage of dynamic provisioning
The act of virtualization only takes us so far. The next step is to adjust processing power on demand. For example, you could set a threshold so that when usage of a critical application hits 80 percent, a new server is automatically brought online. This type of intelligent resource management gives organizations the ability to design data centers that respond to their business needs.
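The 80 percent rule above can be expressed as a simple control loop. This is a minimal sketch of the idea, not any particular product's API; the thresholds, minimum count and the notion of "bringing a server online" are illustrative assumptions:

```python
# Minimal sketch of threshold-based provisioning: scale out past 80%
# utilization, scale back in when load falls. Values are illustrative.
SCALE_OUT_AT = 80   # percent utilization that triggers a new server
SCALE_IN_AT = 40    # percent utilization that allows releasing one
MIN_SERVERS = 1

def adjust_capacity(utilization_pct: float, active_servers: int) -> int:
    """Return the new server count for the observed utilization."""
    if utilization_pct >= SCALE_OUT_AT:
        return active_servers + 1          # e.g. power on a standby VM
    if utilization_pct <= SCALE_IN_AT and active_servers > MIN_SERVERS:
        return active_servers - 1          # release the idle capacity
    return active_servers

# Example: a spike pushes the application past the threshold.
servers = 2
for load in [65, 82, 90, 35]:
    servers = adjust_capacity(load, servers)
    print(f"load {load}% -> {servers} server(s)")
```

Keeping the scale-in threshold well below the scale-out threshold prevents the system from flapping between states as load hovers near a single boundary.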
Case in point: A large educational organization virtualized its entire Web front end. Through virtualization analysis, it discovered one site had higher traffic in the early morning while another experienced spikes in the afternoon. By dynamically reallocating resources based on time of day, it was able to reduce its server footprint by 80 percent, hardware costs by 30 percent and the entire IT budget by 18 percent.
Step No. 8: Continuously analyze
Virtualization isn’t a one-time endeavor; it’s an ongoing strategy for keeping system utilization and performance optimal. Workloads and resource demands change over time, necessitating periodic rebalancing to keep the data center running well. Fortunately, newer virtualization analysis tools make it easy to continuously monitor, move and consolidate workloads.
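A periodic rebalancing check can be as simple as comparing the busiest and idlest hosts. The sketch below flags an imbalance worth migrating; the host names, readings and tolerance are illustrative assumptions:

```python
# Minimal sketch: flag a rebalancing opportunity when the spread between
# the busiest and idlest hosts exceeds a tolerance. Data is hypothetical.
host_cpu = {"host1": 88, "host2": 35, "host3": 52}  # current percent CPU

TOLERANCE = 30  # acceptable spread in percentage points

busiest = max(host_cpu, key=host_cpu.get)
idlest = min(host_cpu, key=host_cpu.get)
spread = host_cpu[busiest] - host_cpu[idlest]

if spread > TOLERANCE:
    print(f"imbalance of {spread} points: consider migrating a workload "
          f"from {busiest} ({host_cpu[busiest]}%) to {idlest} ({host_cpu[idlest]}%)")
else:
    print("hosts are within tolerance; no rebalancing needed")
```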
Step No. 9: Gain visibility
Use management tools to gain a clear view of how many virtual machines run on each server, which workloads are being added, and at what rate. This visibility vastly improves resource planning. It also lets companies track workloads and allocate IT charges to business units based on actual disk, CPU and network usage.
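A usage-based chargeback can be computed directly from that visibility data. This is a minimal sketch; the cost figure, resource weights and per-unit usage numbers are all illustrative assumptions:

```python
# Minimal chargeback sketch: split a shared infrastructure cost across
# business units in proportion to measured CPU, disk and network usage.
MONTHLY_COST = 90_000.0  # shared infrastructure cost to allocate
WEIGHTS = {"cpu": 0.5, "disk": 0.3, "net": 0.2}  # relative resource weighting

# Hypothetical measured usage per business unit (normalized units).
usage = {
    "sales":   {"cpu": 400, "disk": 900, "net": 150},
    "finance": {"cpu": 250, "disk": 300, "net": 50},
}

# Total usage per resource, so each unit's share can be normalized.
totals = {r: sum(u[r] for u in usage.values()) for r in WEIGHTS}

for unit, u in usage.items():
    share = sum(WEIGHTS[r] * u[r] / totals[r] for r in WEIGHTS)
    print(f"{unit}: {share:.1%} of cost -> ${MONTHLY_COST * share:,.2f}")
```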
Step No. 10: Leverage disaster recovery scenarios
Employing virtualization solutions for business continuity gives organizations a way to get more out of their virtualization investment. The portability of virtualization also lends itself to more efficient disaster recovery. All server workloads, whether they’re on physical or virtual machines, can be duplicated to virtual machine backups. If an outage occurs, the workloads simply fail over automatically to the duplicate virtual machines.
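The failover pattern can be pictured as a health probe plus a mapping from each primary to its replica VM. This sketch only illustrates the control logic; the workload names and the probe are placeholder assumptions, and a real check would query the actual hosts:

```python
# Minimal failover sketch: each workload has a replica VM; when the
# primary's health probe fails, serve from the replica instead.
replicas = {"erp-primary": "erp-replica", "crm-primary": "crm-replica"}

def is_healthy(host: str) -> bool:
    """Placeholder health probe; a real one would ping or query the host."""
    return host != "erp-primary"  # simulate an outage on erp-primary

active = {}
for primary, replica in replicas.items():
    if is_healthy(primary):
        active[primary] = primary
    else:
        active[primary] = replica
        print(f"outage on {primary}: failing over to {replica}")

print("serving from:", active)
```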
Conclusion
As organizations delve further into virtualization, it’s easy to fall into many of the same traps that exist in physical environments. Better capacity planning and modeling are necessary to avoid resource bottlenecks and virtual sprawl. Virtual servers require a new management approach that encompasses upfront planning, workload life cycle analysis and continuous monitoring. Ultimately, those who take these logical steps can fulfill the promise of virtual infrastructure.