We went about solving problems, and we found one [for which a lot of customers would need a solution]: server consolidation. Once people figured out the value of running their things on the virtualization layer, we had an easier sell up to the next level: if we were doing it on one machine, let us do it across the data center, and let us manage it. The change is that we went from being someone trying to explain virtualization to people (and I think we still do a lot of that) to showing them how it can help solve problems, and we have established ourselves as someone who can solve problems. So companies are more excited and ready when we come up and announce a new thing.

Where does security fit into the whole formula?

It's a subtle relationship. Here's an example. You have some big machine with multiple cores, and you have Windows or Linux on it, so why don't you just put a whole bunch of applications on it and run them at the same time? One of the reasons people didn't do that was that security best practices pushed them the other way: you put each service on its own machine. Some of that was responsible for the sprawl of servers that we got. If you got [the server] working, the last thing you wanted to do was mess with it or try loading something else on it and risk breaking it. Those were the sort of things that got us in when it came to server consolidation. The bottom line is that you are still taking the same software, which may have security holes in it, and running it inside a virtual machine. We do a really good job of faithfully emulating the hardware, so any security hole that occurred on the real hardware would also occur in a virtual machine. So the latest push is to ask, 'OK, is there something we can do within the virtualization layer to patch those holes that are inevitably going to be there, so that the software actually runs better in a virtual machine than it runs on the real hardware?'
You describe yourself as being more concerned with the lower level of virtualization. Where is that part of the company going, and where's the technology going?

In the last year, we have been pretty aggressive in pushing out what I feel are the last core pieces of the platform. One of these is what we call record/replay technology, which gives the ability, with very low overhead, to record an execution and replay it. That's the building block for a lot of stuff going forward, [including] continuous-availability technology. In fact, the VM[Safe] API is the other big piece that I was involved in. We will obviously continue to work on things like getting overhead down, doing performance tuning at the lower level. I think the low-level hypervisor is maturing; the core set of functionality in it is pretty much there. That's why it makes sense to view it as bundled with the hardware if you think it's not going to change a whole lot: you can treat it like hardware, a core piece of what you are doing.

Do you see hypervisor technology changing?

When I think of the hypervisor, I think of the lowest level of partitioning. So if you are talking about how it schedules VMs and how it figures out which resources to give to which VM, that is happening in the data-center-wide version of it but is also happening inside the box. Within these boxes you have tens of cores running individually, so deciding what to do there becomes a very interesting challenge.
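The record/replay idea mentioned above amounts to logging a program's nondeterministic inputs during a live run, then feeding the log back so a second run reproduces the first exactly. The following is a minimal toy sketch of that concept in Python; the `RecordReplay` class and `nondet_input` method are hypothetical illustrations, not VMware's actual implementation, which operates at the virtual-hardware level.

```python
import random


class RecordReplay:
    """Toy record/replay: log nondeterministic inputs on a live run,
    then feed the logged values back so a replay is deterministic."""

    def __init__(self, mode, log=None):
        self.mode = mode                          # "record" or "replay"
        self.log = log if log is not None else []
        self._cursor = 0                          # replay position

    def nondet_input(self, source):
        """Return a fresh value (record) or its logged copy (replay)."""
        if self.mode == "record":
            value = source()          # actually consult the outside world
            self.log.append(value)    # remember it for later replay
            return value
        value = self.log[self._cursor]  # replay: reuse the recorded value
        self._cursor += 1
        return value


def run(session):
    # The "workload": its result depends on nondeterministic inputs.
    a = session.nondet_input(lambda: random.randint(0, 100))
    b = session.nondet_input(lambda: random.randint(0, 100))
    return a + b


recorder = RecordReplay("record")
first = run(recorder)

replayer = RecordReplay("replay", log=recorder.log)
second = run(replayer)

assert first == second  # the replay reproduces the recorded execution
```

Because a replayed run follows the recorded run step for step, a standby copy can be kept in lockstep with a live one, which is why record/replay serves as a building block for continuous availability.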
Recently, at VMworld Europe, we decided to open up these APIs for security vendors to help them build out solutions on top of VMware. We fundamentally believe that we can increase the security within an organization by doing this. So, here again, [we didn't have a lot of] the expertise that you need to do this, but the security companies are full of people who have that kind of expertise. A natural way of doing it was to open up what they needed to allow them to do their job.