Aside from “the hypervisor has never been compromised,” virtualization platform makers aren’t likely to talk much about security concerns. Impediments to selling lots of licenses fast, such as questions about securing a virtualized IT infrastructure, are about as welcome as flies at a picnic.
However, for IT managers who are rolling out virtual servers and networks in the data center, security should be a chart-topping concern with a bullet.
For one thing, virtualization platforms have made it far easier and much faster to create and deploy servers and applications than was possible when physical limitations governed system rollouts. For another, security tools and practices that worked in the physical world can be seriously compromised by the very qualities that make virtual machines so appealing: mobility across physical resources, demand-based provisioning that lets server resources appear and disappear at a rate never seen in the physical world, high utilization of individual physical servers, and the blurring of roles between systems and network management.
The data center network perimeter is an important security boundary. However, the traditional, and now I must add physical, character of that perimeter is under increased stress from the rate and quantity of change inside the data center brought about by x86 server virtualization.
Increased stress is created by the sheer proliferation of VMs. Security policy for applications on physical systems was hard enough to keep on top of. With the time needed to deploy a new server cut from weeks to hours, network security personnel must step in to ensure that these swiftly built systems don’t start leaking data or carrying malware just as quickly.
Security stress is also created by the rapid movement of systems inside the data center. Physical systems could be relied upon to stay put, the very antithesis of VM productivity. When a VM moves today, it is far from certain that the security policies governing how that resource is protected will move with it.
Finally, stress is placed on traditional security methods because virtualization breaks down the walls between the traditional silos of system, application and network specialties, with security a distant afterthought in the gold rush-like dash to capture the tremendous economic savings of x86 server virtualization.
The plodding methodology of adding a physical server to the data center was due in part to the fact that a truck had to deliver a piece of hardware. In addition, that hardware had to be physically connected to the network, which meant that the systems people-who have some knowledge but usually no access to network equipment configuration-had to interact with other IT staff.
Let’s face it, having a second or even third (the applications group) set of eyes on the process likely increased the care with which new systems were put in place. With virtualization, it is possible to have a single IT technician instantiate a new system fully provisioned in the virtual switch in a matter of minutes and with no other oversight. Given the fragile and complex nature of IT infrastructure, that is a stressful event.
What can stay the same, what needs to change and where virtualization improves security are key considerations with which IT managers must contend. The answers to all of these questions are contested, and no clear winning product or strategy has yet been declared.
However, some trends and practices are emerging that portend the future.
Traditional security tool vendors are beginning to shape their wares for the virtual world. Microsoft, after suffering a decade of criticism over the insecurity of its Windows operating system, has taken pains to ensure that security is in the DNA of its virtualization offerings. In addition, VMware, the 800-pound gorilla of virtualization, is advancing VMsafe based on APIs that enable secure operation of its platform.
What Can Stay the Same
Firewalls, intrusion prevention systems and VLANs (virtual LANs) that form the connections between physical systems must stay in place and be maintained. However, the functions of these systems need to move inside the virtual network that is created to connect VMs. Usually this virtual network is created using a virtual switch that resides with the VMs on top of the hypervisor on a physical system.
Today, when intra-VM traffic needs to be monitored for security purposes, it’s common to route that traffic out to a physical system designed for the purpose. Once processed, the traffic is sent back onto the virtual network. It almost goes without saying that this makes the network a likely bottleneck for VM performance.
A hybrid solution of this type also ties VMs to the physical systems on which they are installed, unless elaborately architected physical systems are in place to support this activity when VMs migrate to a different physical host. The approach has so many problems that only the scarcity of alternatives explains why it is used at all.
The first problem is that using the hybrid approach clings to the recently ended era when machines came online and stayed online until death or retirement. Security products necessarily developed a static, accretive approach to understanding the physical and logical connections of systems.
In physically oriented tools, security policy revolved around brittle, static models of the network. To be quite frank, even what we now see as the glacial pace of change in the data center often outpaced IT managers’ ability to keep the topological and logical models embodied in traditional IT security tools up to date.
What Must Change
What is already clear in the nascent world of data center VM security is that processor resources are the currency that governs security tools. Security products that consume even 2 to 3 percent of CPU per VM will likely soon be too expensive to implement. The reason is the cumulative cost of security: on a physical host with four VMs, each protected by a security agent that uses 3 percent of the physical CPU, the agents alone consume 12 percent, on a server whose CPU utilization is now expected to run at 70 to 80 percent. Added to an already sizable load, that security overhead is a hefty penalty.
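The cumulative-cost argument works out as simple arithmetic. The figures below are the illustrative ones from this article, not measurements from any particular product:

```python
# Illustrative arithmetic: per-VM security agents add up quickly
# on a consolidated host. Figures match the example in the text.

vms_per_host = 4
agent_cpu_per_vm = 0.03      # each agent consumes ~3% of physical CPU
target_utilization = 0.75    # hosts are expected to run at 70-80% load

security_overhead = vms_per_host * agent_cpu_per_vm
total_load = target_utilization + security_overhead

print(f"Security overhead: {security_overhead:.0%}")  # 12%
print(f"Total CPU load:    {total_load:.0%}")         # 87%
```

At physical-era utilization rates of 10 to 15 percent, the same 12 points of overhead would have gone unnoticed; at 70 to 80 percent it pushes the host toward saturation.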
Aside from CPU cycles, security policy will have to be adapted to face the new reality imposed by VM proliferation, spontaneity (sometimes online, sometimes not) and mobility. It is very likely that IT managers will have to increase the number and expertise of security personnel devoted to security policy creation and maintenance as the percentage of VMs in the data center increases. This is because security policy, usually defined as who is supposed to be able to access what resource with an allowed and expected outcome, is difficult under the best of circumstances. As I’ve outlined, the premise and current implementation of VM technology have created the opposite of the best circumstances for security policy development.
When Virtualization Is Safer
It is thought that the introduction of any software increases the risk that a system will be compromised. While I agree in theory, it has been the case thus far, in the relatively short life of the hypervisor, that it has proved much less susceptible to fault and compromise than nearly any application, and certainly less so than the Windows operating system.
The isolation of VMs on the hypervisor platform is key to the high rates of physical-to-virtual server consolidation, and as a by-product makes it possible to run more than one application securely on the same physical server. One security area that bears watching is best-practice guidance for placing VMs with differing security requirements on the same physical host. Enterprises may want to keep VMs that process regulated data, such as credit card information, only on physical systems that process similar information. A likely better practice is to clump systems with low security value together regardless of function, while concentrating high-value VMs on physical equipment correspondingly designed for high availability. Keeping high-value systems together makes it easier to allocate security resources, such as policy development, to them.
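The grouping practice described above amounts to sorting VMs into host pools by security tier rather than by function. A hypothetical sketch, with invented VM and pool names:

```python
# Hypothetical sketch of tier-based placement: concentrate
# high-value VMs on designated high-availability hosts and
# keep low-value VMs clumped together elsewhere.

from collections import defaultdict

# (vm_name, security_tier) pairs; all names are illustrative.
vms = [
    ("cardholder-db", "high"),
    ("payroll-app", "high"),
    ("test-web", "low"),
    ("build-server", "low"),
]

placement = defaultdict(list)
for name, tier in vms:
    host_pool = "ha-cluster" if tier == "high" else "commodity-pool"
    placement[host_pool].append(name)

for pool, members in placement.items():
    print(pool, "->", members)
```

The payoff is operational: security effort such as policy development and audit can be aimed at the small "ha-cluster" pool instead of being spread thinly across every host.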
Also worth noting are security developments among virtualization platform vendors. In February, VMware announced VMsafe, an initiative to improve secure operation of virtual infrastructure while also helping to reduce the amount of virtualization resources devoted to this task.
VMsafe is a set of APIs developed by VMware that enable third-party vendors to monitor and control network traffic to and from virtual machines, as well as data on each VM at the server level. Data can be used by security vendors to perform security analysis without the need to sit in-line at the network level or to reside on each server (VM).
The VMsafe initiative is designed to simplify security integration and to optimize solutions so that they use fewer host resources while providing both host-based and network-based security. VMsafe technology is still in the early stages of readiness.
Symantec and McAfee, along with other third-party security tool makers, are in the early stages of implementing tools that use the increased access to APIs in VMware’s Virtual Infrastructure. IT managers should watch developments in this promising arena. However, if third-party tools don’t make significant progress by the first anniversary of the initiative, then a reassessment of the program will likely be in order.
Based on my recent work with Microsoft’s Hyper-V, security has moved from an add-on to a core feature at the company noted for releasing security and product patches on the second Tuesday of every month. In both the Hyper-V line and the recently released Application Virtualization product, Microsoft has put secure operation at the center of the implementation. IT managers should watch to see whether configuring these security features gets easier over time, as my work showed that a fair amount of time is required to fully set up the security bells and whistles.