Virtualizing a desktop PC, a server, a storage array or an entire data center to obtain better efficiencies and draw less power certainly makes a lot of sense.
As more enterprises each day finish their testing and QA projects, they are putting “virt,” as it is casually known, into production play, whether it’s market-leading VMware, Citrix’s XenSource, Microsoft Hyper-V or a lesser-known hypervisor.
The logic of server virtualization in data centers is very compelling. Businesses are empowered to consolidate all their underutilized Windows, Linux and Solaris systems sprawled throughout their data centers and remote locations, and in doing so they save on precious floor space and electrical draw. Over time, these efficiencies can add up to substantial savings on a company’s bottom line.
“The reality is that the underpinning hypervisor technologies are mature, robust and efficient, contrary to sporadic expressions of security concerns that have been aired,” Bob Waldie, CEO of Opengear, a next-generation IT infrastructure management company, told me.
Opengear’s Management Gateway enables secure remote access and control of all the computers and communications devices in a distributed network.
Generally, virtual servers are now being hosted on reliable hardware platforms that are designed to meet the intense network, performance and security demands that come with virtualization, Waldie said. Because the hardware and software are now ready for prime time, server virtualization in the data center is growing. However, a virtualization layer adds complexity, and the consolidation brings intensity, Waldie said.
“These two unavoidable attributes have a swag of hidden costs and substantive downsides and risks,” he said. “So the compelling value proposition of virtualization does not apply to all situations, and for smaller data centers and computer rooms, it generally does not apply at all.”
With all this in mind, Waldie put together a group of key “red flags” for IT managers and CTOs to consider before committing a data center system, or parts of that system, to virtualization.
6 Red Flags for Data Center Virtualization
Red Flag No. 1: Are you sure that virtualizing will, in reality, deliver you a positive ROI?
The fact is that when you look at the total cost of ownership of data centers, a) the acquisition costs for systems are falling, and b) the power and cooling costs are rising. But it is the management and administration costs that are ramping up fastest.
Managing the new layer of complexity introduced with virtualization comes at significant cost, and this cost can be much greater than power/space savings. This must be factored into every virtualization project.
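The trade-off Waldie describes can be reduced to simple arithmetic. Here is a minimal back-of-envelope sketch; all dollar figures are hypothetical placeholders, not numbers from the article:

```python
# Back-of-envelope ROI check: do power/space savings outweigh the
# added management cost and the one-off migration project?
# All figures below are hypothetical, for illustration only.

def virtualization_roi(power_savings, space_savings, mgmt_cost_increase,
                       migration_cost, years=3):
    """Net benefit over a planning horizon; negative means no ROI."""
    annual_net = power_savings + space_savings - mgmt_cost_increase
    return annual_net * years - migration_cost

# Example: $40k/yr power savings, $15k/yr floor space savings, but
# $50k/yr of extra admin effort and a $30k one-off migration project.
net = virtualization_roi(40_000, 15_000, 50_000, 30_000)
print(net)  # -15000: over three years, this project loses money
```

With these assumed figures, the management cost swamps the savings, which is exactly the scenario Waldie warns about.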
Red Flag No. 2: Do you have the IT staff to deal with increased complexity?
Consolidation gives the appearance of a decreased workload because there are fewer physical servers to manage, but in fact you have increased the number of systems to manage. You still have exactly the same number of servers running the same applications, now virtualized and more complex to administer, plus the physical hosts beneath them. You can’t use simple tools such as serial console/KVM, and your IT staff now has a new layer of hypervisor software to manage.
Red Flag No. 3: Are you resourced to manage the likely increase in demand?
Virtualization makes it easier for businesses to add more IT functions. You can run more applications without going through the corporate complexity of purchasing new hardware, and new applications can be up and running much more quickly, without waiting for delivery and installation. The downside of reducing these barriers is that virtualization invariably increases demands for new and expanded services from the business, so you need to be prepared.
Red Flag No. 4: Are your data center layout and power and cooling facilities/management sophisticated enough to manage consolidation?
Fact: Only a small percentage of data centers monitor the power consumption and the temperature profiles at each rack. Many don’t even have planned rack layouts in their data centers that will enable them to manage the “hot spots” that can result from consolidation.
While this shortfall makes the shareholders of Schneider Electric in Paris (which now owns APC) smile, this is one area where managers definitely should bring in expert advice before committing to virtualization (which always brings increased processor utilization). “Virt” often is accompanied by moves to more energy-intensive blade servers and extra hardware for high availability, which can result in the power burn per rack growing tenfold and more in spots.
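The per-rack monitoring Waldie calls for can be sketched as a simple threshold check. The readings and limits below are hypothetical; a real deployment would poll PDUs and temperature sensors via SNMP or IPMI rather than use hard-coded values:

```python
# A sketch of per-rack hot-spot detection after consolidation.
# Thresholds and readings are illustrative assumptions only.

RACK_POWER_LIMIT_W = 8_000   # assumed per-rack power budget
RACK_TEMP_LIMIT_C = 27.0     # assumed inlet-temperature ceiling

def find_hot_spots(racks):
    """Return names of racks exceeding the power or temperature limit."""
    return [name for name, (watts, temp_c) in racks.items()
            if watts > RACK_POWER_LIMIT_W or temp_c > RACK_TEMP_LIMIT_C]

readings = {
    "rack-A1": (4_200, 22.5),   # lightly loaded
    "rack-A2": (9_600, 29.1),   # consolidated blades: over both limits
    "rack-B1": (7_900, 27.4),   # power OK, but running hot
}
print(find_hot_spots(readings))  # ['rack-A2', 'rack-B1']
```

Even this trivial check is more than the "small percentage" of data centers described above are doing today.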
Do You Have the Tools to Virtualize Your Data Center?
Red Flag No. 5: What impact will virtualization have on your level of service?
When you stack multiple workloads onto a single server (and Red Hat’s newly acquired Qumranet is specified to support 50-odd virtual servers on each physical server), it is even more essential to keep your physical servers running. So it is important to plan to implement high-availability solutions, with multiple network and power supply failovers, from the get-go.
If you are running VMware ESX, you will need to have plans from the outset on how you will use tools like VMotion to relocate your most mission-critical servers/services to enable preventive maintenance and for disaster recovery.
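The arithmetic behind this concern is straightforward: the more workloads per host, the more services a single hardware failure takes down. A rough sketch, with purely illustrative figures:

```python
# Blast-radius arithmetic for consolidation. The failure rate, repair
# time and consolidation ratio below are hypothetical examples.

def services_down(vms_per_host, hosts_failed=1):
    """How many services one or more host failures take offline."""
    return vms_per_host * hosts_failed

def downtime_minutes_per_year(host_failures_per_year, mttr_minutes,
                              vms_per_host):
    """Aggregate service-minutes of outage across all affected VMs."""
    return host_failures_per_year * mttr_minutes * vms_per_host

# One unvirtualized server failing takes down 1 service; the same
# failure on a host running 50 VMs takes down 50 at once.
print(services_down(50))                     # 50
print(downtime_minutes_per_year(2, 60, 50))  # 6000 service-minutes/year
```

This is why failover and live migration need to be in the plan from day one rather than bolted on later.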
Red Flag No. 6: Do you have the tools to be able to monitor/manage your new sensitive complex environment (rack-side and remotely)?
Fact: In virtualizing servers, you can’t rely on the simple tools such as serial console/KVM switches/LCD drawers you had been using. These were fine for controlling old servers with physical keyboard and mouse ports and real operating system environments, but they are of zero value in accessing the service processors in headless blade servers, or the virtual Linux/Windows servers running on the hypervisor on the blade.
Also, your PDUs and UPSes are now critical pieces of the infrastructure that need to be controlled at each rack, so you will need to look at new tools such as Opengear KCS. And you’ll need vendor-agnostic tools, because while it is a VMware virtual world today, in the coming years Sun Microsystems, Red Hat and Microsoft will also be major players, and you need to be able to monitor and manage all of these, at the rack side and remotely.