Joe Wagner, Novell's senior vice-president and general manager for systems and resource management, recently joined Ziff Davis Media Editorial Director Mike Vizard for an IT Link podcast. A transcript of that interview follows.
One of the topics that is on everybody's mind these days is virtualization. Everybody's talking about it, yet I kind of feel like we're in some kind of perennial state of pilot development. What is your sense of where we are in the adoption rate of virtualization these days?
Great question. Virtualization is a great disrupter to traditional IT platforms, and the adoption rate, I think, has been phenomenal. It's been growing very rapidly. But as you point out, it's been used mainly in test and development, and has only recently been moving into production. What's been preventing that move is the ability to manage virtualization and to provide the same kinds of assurance and risk management that IT users expect.
So are you saying that the systems and network management tools haven't really kept pace with virtualization, because it really is a whole new operating-system kind of environment?
That's correct. Virtualization offers new paradigms of computing and new flexibility, and it makes new dynamic infrastructures possible. It makes your computing much more fluid. And as with any technology that unleashes capability the way virtualization does, the ability to control it typically lags behind. What users have seen is that they often don't even know where their virtual machines are running, and that, in itself, would prevent you from using it on very secure infrastructures in certain IT installations.
So what's going to solve that problem? If we don't have the tools to manage it, we can't really deploy it, and then we feel like we're stuck in some kind of Catch-22.
I think the solutions you see rapidly coming to market today - Novell is bringing some to market, as are others - provide the ability to manage a virtual environment much like a physical environment. One of the things users are realizing in that transition is that virtual environments are as difficult, and require as much management, as physical environments, so there's a good analogy there to let you know what's coming. And capabilities like Novell's ZENworks Orchestrator - knowing where virtual machines are, discovering them in advance, tracking where they're used, accounting for their use, doing cost accounting on their use - are brand-new solutions just coming to market. Ours was announced in November 2006, and I think you'll see the industry move rapidly toward bringing those types of capabilities to market.
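The management capabilities described here - discovering where virtual machines run and doing cost accounting on their use - can be illustrated with a small sketch. This is a hypothetical data model in plain Python, not Novell's actual ZENworks Orchestrator API; the VM names, hosts, and rate are invented for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str         # VM identifier
    host: str         # physical machine it currently runs on
    hypervisor: str   # e.g. "xen" or "vmware"
    cpu_hours: float  # metered usage for the billing period

def locate(vms):
    """Discovery: map each physical host to the VMs running on it."""
    placement = defaultdict(list)
    for vm in vms:
        placement[vm.host].append(vm.name)
    return dict(placement)

def charge_back(vms, rate_per_cpu_hour):
    """Cost accounting: bill each VM owner for metered CPU use."""
    return {vm.name: round(vm.cpu_hours * rate_per_cpu_hour, 2) for vm in vms}

# Hypothetical fleet spanning two hosts and two hypervisors.
fleet = [
    VirtualMachine("build01", "hostA", "xen", 120.0),
    VirtualMachine("web01", "hostA", "vmware", 300.0),
    VirtualMachine("db01", "hostB", "xen", 450.0),
]

print(locate(fleet))
print(charge_back(fleet, 0.04))
```

The point of the sketch is only that once VMs are treated as tracked inventory rather than ad hoc files, the same placement and charge-back questions asked of physical servers become answerable.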
Novell is supporting Xen, if I'm not mistaken, but are you going to have a management approach that supports multiple virtual machine environments in the same way that you'll support, say, a Windows and Linux environment today?
Sure. The desire of every CIO I've ever met in my career has been to have a single vendor, or at least a few vendors, to work with to standardize their IT infrastructure. And virtualization, much like hardware and operating systems, will, we believe, become heterogeneous, and Novell will support it that way. The introduction of Xen and the introduction of Meridian for Microsoft are just examples of that. The capability we announced with ZENworks Orchestrator in November of last year manages a heterogeneous virtual machine environment out of the box. So we already provide the capability to manage virtual machines created by VMware, virtual machines created by Xen infrastructures, and virtual machines that will eventually be created by Microsoft. That capability exists today from Novell.
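A common design for managing mixed VMware, Xen, and Microsoft environments from one tool - a general sketch, not Novell's actual implementation - is to put a uniform interface over hypervisor-specific adapters, so the management layer never cares which hypervisor created a VM. The driver classes and VM names below are invented for illustration:

```python
from abc import ABC, abstractmethod

class HypervisorDriver(ABC):
    """Uniform interface; each hypervisor gets its own adapter."""
    @abstractmethod
    def list_vms(self) -> list:
        ...

class XenDriver(HypervisorDriver):
    def list_vms(self):
        # In a real tool this would call Xen's open management APIs.
        return ["xen-vm1", "xen-vm2"]

class VMwareDriver(HypervisorDriver):
    def list_vms(self):
        # In a real tool this would go through VMware's management stack.
        return ["esx-vm1"]

def inventory(drivers):
    """One management view across heterogeneous hypervisors."""
    return {type(d).__name__: d.list_vms() for d in drivers}

print(inventory([XenDriver(), VMwareDriver()]))
```

Adding support for a new hypervisor then means writing one new adapter, not changing the management tool itself - which is why open, documented hypervisor APIs matter so much at this layer.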
Is there any fundamental difference in your mind between the different approaches to creating these virtual machines, whether it's Xen, Microsoft or VMware, or is it simply a matter of where they got started and what platform they happen to be targeting?
At the hypervisor level, the fundamentals are similar. The maturity of implementation is different, so you hear these very complex terms - full virtualization, paravirtualization, the ability to do live migration - all capabilities associated with the maturity and depth of the hypervisor. So there are differences, but we strongly believe that layer will commoditize, and commoditize very rapidly. At the management layer, there are strong differences. One of them is whether you're using open standards or proprietary standards. VMware today, for example, allows an external party like Novell to manage its virtual machines only if you already buy its management tools, so that's a very proprietary-oriented infrastructure. Novell, with the XenSource project, is bringing to market a very open, standards-based approach, with all of the interfaces and APIs fully available for our management tools, or anyone else's virtual machine management capabilities, to interact with the virtual machines. And we'll see what Microsoft does; its offering has yet to be fully brought to market.
In the context of where virtualization shows up, what role will it play at, say, the processor level, and what role will it play at the operating system level? And if it's playing at that level, doesn't it kind of sediment into the fabric of the infrastructure - and what am I actually going to need to buy on top of that?
Well, virtualization plays at almost every layer of the typical IT stack, even in places you already know - in storage, for instance, we hear the term virtual storage quite often. You'll be hearing about virtual network capabilities. Within the traditional setup you described, we have virtualization of the operating system running on a hypervisor. We have virtual capabilities at the application layer itself, by virtualizing applications and providing application streaming. Where I think your question is driving is whether we see the hardware vendors, or even the chip manufacturers, picking up virtualization. We at Novell certainly see that trend. It's happening today: the hardware vendors are clearly picking up virtualization technologies, clearly thinking about options for the actual implementation, and providing hardware that's more readily available for virtualization of operating systems. And the trend you see with, say, Intel's vPro technology - virtualization on a chip - is, I think, foreshadowing that the hypervisor layer may eventually end up on the chip, and you'll have chip-ready virtualization.