Mendel Rosenblum still straddles two worlds.
A professor in Stanford University’s computer science department, Rosenblum teaches and studies how modern operating systems work. He’s also chief scientist at VMware, a company he helped start 10 years ago (and where his wife, Diane Greene, is president and CEO).
While 2007 was a breakout year for Rosenblum’s once-small startup venture, this year promises to bring new challenges. Microsoft, Citrix and Oracle are making their way into the virtualization space with formidable products that look to challenge VMware’s role as the dominant player in a growing industry.
Rosenblum sat down with eWEEK staff writer Scott Ferguson to discuss topics ranging from the future of VMware to the role of the hypervisor to how the company plans to work in areas traditionally controlled by Microsoft and other operating system providers.
Can you tell us where virtualization technology is now and where you see it going in the next five to 10 years?
In some sense, we built out certain pieces, like virtual infrastructure, where we can go in and take over all of the computing from some organization. Obviously, we are going to keep pushing so we can go into any organization and do all of their computing in virtual machines, regardless of how many resources are consumed.
The change that we are seeing now seems like it’s going to be even bigger than the change we got by taking over the hardware: we are seeing the ability to repackage the way software is distributed.
The way I like to think about it is that you used to buy a computer, then install the operating system, then install the applications and then configure it. If you stood back and looked at that, it’s an awful lot of self-assembly, and it takes a lot of time.
One of the things we have been able to do with the virtualization layer is basically say, why don’t you get someone else to do all that assembly work, and then just drop in the whole working virtual machine?
If you look at the history … people were buying virtualization with the idea of being able to consolidate a whole bunch of servers onto one. You had fewer boxes to manage and you could manage it all with fewer people. It became a very easy argument for an IT person to make that it basically saves money by doing it this way.
Now what we are seeing is pushing it a step further: rather than having to spend all your time building these sorts of environments up, have someone who really knows what they are doing, an expert, actually build the environment for you and hand it to you.
We were initially calling it virtual appliances, and that seems to have caught on and people are excited about it. At VMworld, I sort of demoed our vision of it and where it’s going. So instead of just a virtual machine, you are talking about whole services.
What is VMware’s vision and focus now that the company is starting its 10th year?
When we started this thing, no one else was really excited about virtualization.
A lot of the effort went into convincing people who thought it was a cool idea but [wondered whether] it was useful. So we kind of transitioned out of that.
Now it’s pretty widely established that [virtualization] is a better way of doing things. So, as we got bigger, we started to get more aggressive about the problems we took on. It’s always been sort of the focus of the company to look around and see what the big pain points of the customers are and how we can solve them through this virtualization technology.
If you look at the early days, we weren’t really exactly sure what we were going to do with the technology, and one of the reasons we didn’t have [venture capital] funding was because they wanted to see the big thing we were going to do. They said, ‘Oh, that’s cool technology, but what’s it good for?’ And I would [give them] a list of 10 things it’s good for and they would say, ‘No, we want one thing it is good for, not 10 things,’ and it turns out that’s what we did.
Security Issues
We went about solving problems, and we found one for which a lot of customers would need a solution: server consolidation. Once people figured out the sort of value of running their things on the virtualization layer, we had an easier sell up to the next level: if we were doing it on one machine, let us do it across the data center, and let us manage it.
The change is that we went from being someone trying to explain virtualization to people (and I think we still do a lot of that) to showing them how it can help solve problems, and we have established ourselves as someone who can solve problems. So companies are more excited and ready when we come up and announce a new thing.
Recently, at VMworld Europe, we decided to open up these APIs for security vendors to help them build out solutions on top of VMware. We fundamentally believe that we can increase the security within an organization by doing this. So, here again, [we didn’t have a lot of] the expertise that you need to do this, but the security companies are full of people who have that kind of expertise. A natural way of doing it was to open up what they needed to allow them to do their job.
Where does security fit into the whole formula?
It’s sort of a subtle relationship.
Here’s an example. You have some big machine with multiple cores, and you have Windows or Linux on it, so why don’t you just put a whole bunch of applications on it and run it at the same time? One of the reasons why people didn’t do that was best practices for security sort of forced them the other way: You put each service on its own machine. But some of that was responsible for the sprawl of servers that we got. If you got [the server] working, the last thing you wanted to do was mess with it or try loading something else on it and risk breaking it.
Those were the sort of things that got us in when it came to server consolidation.
The bottom line is that you are still taking the same software that may have security holes in it and running it inside a virtual machine. We do a really good job of faithfully emulating the hardware so that any security hole that occurred on the real hardware would also occur on a virtual machine.
And, so, the latest push is to say, ‘OK, is there something we can do within the virtualization layer to patch those holes that are inevitably going to be there so that it actually runs better in a virtual machine than it runs on the real hardware?’
You describe yourself as being more concerned with the lower level of virtualization. Where is that part of the company going, and where’s the technology going?
In the last year, we have been pretty aggressive in pushing out what I feel are the last core pieces of the platform. One of those is what we call record/replay technology, which gives the ability, with very low overhead, to record an execution and replay it. That’s the building block for a lot of stuff going forward, [including] continuous availability technology.
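To make the idea concrete, here is a minimal, hypothetical sketch of the record/replay principle: log every nondeterministic input while recording, then feed the log back during replay so the execution is reproduced exactly. It is a toy illustration in Python, not VMware’s implementation; the function and the workload are invented for the example.

```python
import random

# Toy sketch of record/replay: during recording, every nondeterministic
# input (here, a random number standing in for an interrupt or I/O result)
# is appended to a log; during replay, the logged values are fed back in
# the same order, so the computation re-executes deterministically.
def run(steps, mode="record", log=None):
    log = [] if log is None else log
    state = 0
    for i in range(steps):
        if mode == "record":
            value = random.randint(0, 9)   # nondeterministic input
            log.append(value)
        else:                              # mode == "replay"
            value = log[i]                 # reuse the recorded input
        state = state * 31 + value         # deterministic computation
    return state, log

state1, log = run(5, mode="record")
state2, _ = run(5, mode="replay", log=log)
assert state1 == state2  # replay reproduces the recorded execution
```

The same logging idea, applied to interrupts, device I/O and other nondeterministic events, is what lets a recorded virtual machine execution be replayed for debugging or failover.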
In fact, the VMsafe API is the other big piece that I was involved in. We will obviously continue to work on things like getting overhead down and doing performance tuning at the lower level. I think the low-level hypervisor is maturing; the core set of functionalities that are in it are pretty much there. That’s why it kind of makes sense to view it as bundled with the hardware: if you think it’s not going to change a whole lot, it makes sense to treat it like hardware or a core piece of what you are doing.
Do you see hypervisor technology changing?
When I think of the hypervisor, I think of the lowest level of partitioning. So, if you are talking about how it schedules VMs and how it figures out which resources to give to which VM, that is happening at the datacenter-wide level but is also happening inside the box. Within these boxes you have tens of cores running individually, so deciding what to do there becomes a very interesting challenge.
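As a rough illustration of the kind of resource decision he is describing, here is a hypothetical proportional-share sketch in Python: the host’s CPU capacity is divided among VMs according to their configured shares. The VM names and numbers are invented for the example; real hypervisor schedulers also handle reservations, limits, per-core placement and movement of VMs across hosts.

```python
# Toy proportional-share allocator: split a host's CPU capacity among VMs
# in proportion to their configured shares. Purely illustrative; not the
# actual VMware scheduling algorithm.
def allocate_cpu(total_mhz, vm_shares):
    total_shares = sum(vm_shares.values())
    return {vm: total_mhz * shares / total_shares
            for vm, shares in vm_shares.items()}

# Example: a 24,000 MHz host shared by three hypothetical VMs.
print(allocate_cpu(24000, {"web": 2000, "db": 4000, "batch": 1000}))
# db gets twice web's allocation and four times batch's.
```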
Could the Hypervisor Replace the Operating System?
I think we are finally going to get the hardware vendors to build hardware support; we are working with a number of companies on that, and that’s going to be a long-term process. We now have companies providing hardware support from the CPU side, and we have them working on memory support from the hardware side and also the I/O devices. But [hardware support for virtualization] is still in its infancy in terms of being deployed widely and designed for commodity hardware.
The people in our group here who work on the low-level hypervisor have pretty good job security for a while, I think. The interfaces may have matured, but the underlying hardware is changing underneath; it’s becoming friendlier to virtualization and better able to support it.
I think a lot about interfaces and implementations … The nice thing is that once you settle on the interfaces, you can change the implementation underneath. Our vision of the data center is that you can buy a brand-new box and it should be able to drop in and run all the old workloads and the other stuff that is running in the data center.
Will the hypervisor eventually replace the operating system?
Clearly, the answer is no; the hypervisor is not going to get rid of the operating system.
The hypervisor that we are exporting is still a pretty low-level abstraction. Most programmers would much rather deal with a hierarchical file system than with a raw disk, or with the nice virtual memory that we get on a modern operating system than with the raw memory of a machine. Clearly, you want to have some kind of operating environment that makes that level of interface nice for the application that is going to be programmed for it and do the useful work.
I think the era of the one operating system that will be used for everything, [where] you buy a machine pre-installed with an operating system and that operating system has to be general enough to support anything you might want to do with it, is going to go away. Now, the operating system will be chosen by the applications.
Does VMware see its software getting in the way of the spot that Microsoft has traditionally owned: the spot between the hardware and applications?
If I were the incumbent operating system maker or a Linux kernel developer or a Microsoft kernel engineer and I had been sitting directly on the hardware, would I want another layer shoved underneath me? And, by the way, that layer is going to make all the resource management decisions of what hardware I get. The answer would be no.
It’s sort of human nature to oppose that.
Unfortunately, [operating system makers] have been kind of stuck with the fact that they have millions of lines of code … You could probably do something so that you wouldn’t need to have this layer underneath, but I think it’s been amply proven that once you have software as complex as a modern operating system environment, it’s really hard to change it in any substantial way, even with the resources that Microsoft or the whole open software community has.
[If you asked someone now] how would you construct an operating system? They would surely start drawing a picture and say, ‘Well, at the lowest level, you would put very primitive services and everything you need to trust, and, on top of that, you build this sort of layered birthday cake.’ … I think this is an incredibly fortuitous way we can switch to a much better structure and make it really highly compatible and sort of evolve into it, rather than come up with a new operating system environment that is structured better but that everyone else has to port to and build software for.