Virtualization has become almost a no-brainer for organizations looking to optimize resources. But as the number of virtual machines increases, so, too, do management and security concerns.
eWEEK Labs Technical Director Cameron Sturdevant recently spoke with Hezi Moore, the founder and chief technology officer of Reflex Security, about the challenges organizations face when building out their virtual environments.
Thinking about the data center, and especially data center security, what are the biggest differences between physical and virtual IT infrastructure?
One of the biggest differences is that the virtualized infrastructure is much more mobile, more dynamic. Server mobility is a challenge, and so is server sprawl. Another large issue is cross-functional management, related to who manages what portion of the network.
If you look at the physical infrastructure, we had groups of IT that managed different portions of the infrastructure, from the networking guy to the security guy to the server guy and even the application guy. And then you had people who managed the data center on a day-by-day basis. To apply a change to the data center, even if it was just a cable change, you had to follow policies and processes that were documented, so IT had a handle on what happened and when it happened, who did it and why.
Today, in a virtual infrastructure, you can move a wire, you can move from one switch to another with just a mouse click, and those changes usually don’t get documented. So, it’s only [being done] by one guy, the one who manages the infrastructure, the networks, the virtual networks. This is really what we call cross-functional management across the entire infrastructure.
It seems to me that the hypervisor presents a new factor here. First of all, it’s a new piece of software between the physical hardware itself and the operating system.
If you look at the hypervisor itself, it’s really a very, very short piece of code that’s developed and designed with security in mind. So, if you look at the attacks out there that are really trying to penetrate the hypervisor layer, none of them has been able to cause any real damage. Most sneak onto the hypervisor and run another machine without the user knowing. But none of these attacks is attacking the hypervisor itself.
So, definitely there is a new piece of code in the data center, the hypervisor, but it’s a very secure piece of code, and the hackers are really focusing on … the tools running on top of the hypervisor that will give them access.
Security Challenges of Hypervisors
Let’s talk about characteristics of the hypervisor that facilitate change but that also introduce a new dimension to security. First of all, in a virtualized environment we have applications moving from host to host and from physical systems to virtual systems. And there are service-oriented architectures, where applications move across multiple servers to perform a business function, and those can move based on policy. Talk about some of the security challenges that these things raise for IT managers today.
Yes, there are plenty of security challenges. What attackers are going to target is not really the hypervisor but the [VMware] VMotion tools used to move applications, and they will take advantage of this movement to attack the server itself.
There are a lot of security risks associated with moving an application and server from one location to another. One of them is tracking the system that moved. If you have a firewall in between, and the firewall prevented some users from accessing this application, then when you move the application you need to make sure you move that firewall policy to the new location along with it. So, if you had any external security and you’re moving a VM, you need to make sure that you’re moving all of that security with it.
But the mechanisms used to move a server and application, because of the speed requirements, are usually not encrypted. So, if you look at VMotion, for example, they request that you run it on a closed network, and the reason is that you’re not going to have the time, if you want to do it as a real-time event, to encrypt and decrypt the information that moves from one system to another. So … if somebody got access to this particular network, then they’ve got access to all of the VMs, all of the servers.
Those are the challenges: How do you apply security to those moving parts, and how do you make sure that no one penetrates the VMotion layer?
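To make the risk concrete, here is a minimal sketch in Python of the kind of post-migration check Moore describes. The data model, zone names and segment names are hypothetical illustrations, not any vendor’s API; the idea is simply to verify that a moved VM’s firewall zone still matches what its policy assumes, and that the migration traffic stayed on an isolated segment.

```python
# Minimal sketch (hypothetical data model, not a vendor API): after a VM
# migration, check that the firewall protection the VM's policy assumes is
# still in front of it, and that the move used an isolated migration network.
from dataclasses import dataclass


@dataclass
class VirtualMachine:
    name: str
    host: str
    firewall_zone: str    # zone the VM landed in after the move
    required_zone: str    # zone its security policy assumes


# Hypothetical list of segments reserved for unencrypted live-migration traffic.
ISOLATED_MIGRATION_SEGMENTS = {"vlan-vmotion-10"}


def audit_migration(vm: VirtualMachine, migration_segment: str) -> list[str]:
    """Return human-readable findings for a single migration event."""
    findings = []
    if vm.firewall_zone != vm.required_zone:
        findings.append(
            f"{vm.name}: landed in zone '{vm.firewall_zone}' but its policy assumes "
            f"'{vm.required_zone}'; the firewall rules did not follow the move."
        )
    if migration_segment not in ISOLATED_MIGRATION_SEGMENTS:
        findings.append(
            f"{vm.name}: migration traffic crossed '{migration_segment}', which is "
            "not an isolated segment, so memory contents traveled in the clear."
        )
    return findings


if __name__ == "__main__":
    vm = VirtualMachine("payroll-db", "esx-07", firewall_zone="dmz", required_zone="internal")
    for finding in audit_migration(vm, migration_segment="vlan-general-1"):
        print(finding)
```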
You’re an advocate for defense in depth. This is an old concept in IT security. Does this concept of a layered approach gain any new characteristics in a virtual environment?
It’s not that it gains new characteristics; it’s that virtualization emphasizes the need for it. And the reason is, in virtualization, there is no one method that fits all. You can’t just say, “OK, I’ve run anti-virus on the VM, and that’s it, I’ve resolved the issues.”
To really understand, take the example of moving an application. When you move the application, anything that’s running with the application moves with it. But nothing moves at the network level. You need defense in depth so that something defends you at the network level and a solution defends you at the host level, and you need it because there are too many moving parts, too many changes happening in the infrastructure. If you don’t keep track of them, you’re eventually going to have security issues, you’re going to have exposures, you’re going to have security risks in your network.
So, when a VM moves from one location to another, if you don’t know who moved it and why, it’s possible that a VM that was supposed to be behind a firewall inside the network got exposed to the external network. You need to make sure that you have the right tools and the right capabilities to track and monitor those events.
So, defense in depth in the virtual environment is much more important than in the physical environment, where a server usually sits static and doesn’t move very rapidly.
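A rough sketch of the audit trail Moore argues for might look like the following. The event fields here are hypothetical rather than any specific product’s log schema; the point is that every relocation records who did it and why, and anything undocumented gets flagged for review.

```python
# Minimal sketch of an audit trail for VM relocations (hypothetical event
# format): moves without an operator or a documented reason are surfaced.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RelocationEvent:
    vm_name: str
    source_host: str
    dest_host: str
    operator: Optional[str] = None        # who made the move
    change_ticket: Optional[str] = None   # why the move happened


def undocumented_moves(events: list[RelocationEvent]) -> list[RelocationEvent]:
    """Return relocations that lack an operator or a documented reason."""
    return [e for e in events if not e.operator or not e.change_ticket]


if __name__ == "__main__":
    events = [
        RelocationEvent("web-01", "esx-01", "esx-02", operator="alice", change_ticket="CHG-1042"),
        RelocationEvent("db-03", "esx-05", "esx-09"),  # nobody documented this one
    ]
    for event in undocumented_moves(events):
        print(f"Undocumented move: {event.vm_name} {event.source_host} -> {event.dest_host}")
```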
Applying Defense in Depth
Do you think that the movement of these machines is the unique place where defense in depth needs to be applied and configured in a virtual infrastructure?
I don’t think it’s just the movement. That’s one example.
What’s another example?
Another example is the configuration that happens inside an environment. There is a huge amount of configuration that happens and has to be monitored. You need to know who did it. Who has the right access to go and change configuration on a switch? The server guy can go and change configuration on a switch. Maybe he puts a particular port in mirroring mode without knowing how bad that is, and now somebody can listen to that traffic.
Cross-functional management is a major issue that needs to be dealt with. Intentionally or unintentionally, configuration errors can happen inside the environment.
Visibility inside the environment is very important because of all those changes that happen: VMs moving around, switches that move from one location to another, VLAN [virtual LAN] changes. You need to be able to manage and monitor all of that.
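As one illustration of the kind of configuration-drift check this implies, the following minimal sketch flags virtual switch port groups left in promiscuous (mirroring) mode. The port-group structure is a placeholder; a real check would pull the equivalent settings from the hypervisor’s management API.

```python
# Minimal sketch of a configuration-drift check for virtual switches.
# Port groups left in promiscuous/mirroring mode let any VM on the group
# listen to its neighbors' traffic, so they are flagged for review.
def risky_port_groups(port_groups: list[dict]) -> list[str]:
    """Return names of port groups with promiscuous (mirroring) mode enabled."""
    return [pg["name"] for pg in port_groups if pg.get("promiscuous_mode", False)]


if __name__ == "__main__":
    # Hypothetical inventory; real settings would come from the management layer.
    port_groups = [
        {"name": "prod-app", "vlan": 120, "promiscuous_mode": False},
        {"name": "prod-db", "vlan": 130, "promiscuous_mode": True},  # someone flipped this
    ]
    for name in risky_port_groups(port_groups):
        print(f"Port group '{name}' allows traffic sniffing; verify who changed it and why.")
```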
So, you’re talking about reintroducing cross-functional controls into the virtual infrastructure.
Exactly.
Doesn’t that mean slowing down the change, which would mean reducing some of the convenience that makes a virtual environment so attractive?
If you do it smart enough, then it won’t. It will definitely add a little more complexity, but when you deal with security, with issues of connectivity and things like that, you have to make the right decision.
Do I want to put a server on the network in two minutes? It used to take three weeks when we did it in hardware. You know, there is something in between.
I used to have to go buy a server, stage it, rack it, stack it, all of that process. Yes, that’s too slow. Taking an image and dropping it into the virtual infrastructure without reporting it, without documenting why and who did it, also is not the right way to do it. So, there is something in between.
When client/server was first introduced, people started building the infrastructure. And then they started thinking, “Now, how do I use my management tools to really manage the client/server infrastructure?” And they realized, “You know what? The management tools that we used on mainframes are not really working for us in the client/server environment, and we need to change the processes and the tools to fit client/server.”
The same thing happened in the virtualization space. We first built the infrastructure, and then we realized that the processes and the mechanisms that we were using for the physical infrastructure were not really working for the virtual infrastructure. So how do we build something in between?
Putting an image on the network in two minutes is not the right way to do it, I don’t think. Waiting for a server for three weeks isn’t the right way to do it, either. There’s something in between. Does that make sense?
Yes. I want to switch gears for a moment. Everyone makes security solutions for VMware products. Now that Microsoft has released its Hyper-V, should IT managers expect to see more cross-platform security products?
Definitely. No doubt this is going to happen.
Should it be a requirement?
It should be a requirement, definitely. And I will tell you why. People are still going to buy Windows servers, and, with the licensing of Windows servers, if you buy Windows Server 2008, you’re going to get the virtualization almost for free. So, people are going to run it.
It’s the same thing that happened with operating systems: Linux versus Windows. Almost every data center that you go to today will have Linux and Windows servers.
You’re definitely going to see multiplatform virtualization, and what we’re working on here at Reflex is a way to let the user manage [different platforms] exactly the same, so that [users] don’t even really need to know the difference between the virtualized platforms that they’re using. They manage the environment, they manage the infrastructure exactly the same. We give you a visualization capability, a diagram of what’s happening inside your virtual data center, so you see it as one unit. You’ll be able to identify which one is running on top of Microsoft and which one is running on top of VMware, but it’s all going to be unified.
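Moore’s “manage every platform the same way” idea can be pictured as a thin abstraction layer, sketched below. The adapters and their inventories are placeholders rather than real VMware or Hyper-V API calls; the design point is that the operator works against one interface regardless of the hypervisor underneath.

```python
# Minimal sketch of a unified view over multiple hypervisor platforms.
# Each platform gets an adapter that implements the same interface, and the
# operator sees one inventory. Adapter internals here are placeholders.
from abc import ABC, abstractmethod


class HypervisorAdapter(ABC):
    platform: str

    @abstractmethod
    def list_vms(self) -> list[str]:
        ...


class VMwareAdapter(HypervisorAdapter):
    platform = "VMware"

    def list_vms(self) -> list[str]:
        return ["web-01", "db-03"]      # placeholder inventory


class HyperVAdapter(HypervisorAdapter):
    platform = "Hyper-V"

    def list_vms(self) -> list[str]:
        return ["exchange-01"]          # placeholder inventory


def unified_inventory(adapters: list[HypervisorAdapter]) -> list[tuple[str, str]]:
    """One flat view of the virtual data center, tagged by platform."""
    return [(a.platform, vm) for a in adapters for vm in a.list_vms()]


if __name__ == "__main__":
    for platform, vm in unified_inventory([VMwareAdapter(), HyperVAdapter()]):
        print(f"{platform}: {vm}")
```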
At the end of the day, the virtual infrastructure still runs on a physical environment.
Again, I believe that, no matter what, you’ve got to have applications or servers that aren’t going to be virtualized. Maybe in the future people will be able to deal with the I/O issues and all of that, and be able to virtualize most of it, but for at least the next few years, you’re going to have a mix of hardware and software, and even the virtual infrastructure itself is running on hardware.
You’re still going to need physical security, but I think what’s going to happen is consolidation in the physical space. You’re not going to need the same amount of security in the physical environment.
What kind of physical security specifically will be needed?
Mostly security for network devices-firewalls, IPSes [intrusion prevention systems], gateway anti-virus.
Will it be important to correlate that physical security with what’s happening in the virtual infrastructure?
Definitely, definitely.
Why?
Because it’s one network.