Virtualization may seem new, but it was around when PC Week made its debut. The roots of today’s data center virtualization are easily traced to early mainframe systems of the 1970s and ’80s.
I started working in the IT industry in 1987, supporting PC-based emulation software that replaced terminals for Digital VAX systems (where the “V” stood for “virtual”).
Even then, on the PC side of IT, I was dealing with memory utilities that provided primitive virtualization services. Quarterdeck's Expanded Memory Manager (QEMM) was a well-known aid for stretching expensive memory in DOS-based PCs. I supported a similar product called Referee, a TSR utility that juggled memory use among DOS-based PC applications.
After 10 years in the field, I landed a technology analyst job at PC Week covering network and systems management platforms. I was honored to join a crack team, starting work under the tutelage of Michael Surkan and John Taschek, and there I picked up the art and science of reviewing the latest advances in enterprise data center and desktop computing products. Although the management platforms I covered were central to running a data center with the lowest possible ratio of staff to machines, it was clear I had a beat that only a mother (or my news counterpart for many years, Paula Musich) could love.
Today, I cover virtualization technologies including VMware’s vSphere 4, Microsoft’s Hyper-V and the Xen family of virtualization tools. The technology embodied in these tools carries on the grand tradition of early virtualization projects: to remove physical barriers that limit compute capacity.
The really interesting development that distinguishes modern virtualization technology is the use of commodity hardware to create compute resource pools. Another distinguishing feature of virtualization today is the blazing rate of change it enables.
Moore’s Law, which cast light on the rate and scale of change in hardware computing capability, describes a phenomenon that seems like a quaint buggy ride compared with the pace of change in today’s virtual IT infrastructure. That pace has enabled a qualitative change in application deployment, backup, disaster recovery and even application retirement. Running a data center today without effective tools for managing the virtual and physical infrastructure is akin to steering the Titanic at normal cruise speed through an iceberg field.
It’s clear to me that the use of data center virtualization will become standard practice for all applications in just a few years. As this happens, effective management of the physical and virtual resources that make up the transformed data center will take center stage.
The history of x86-based virtualization makes it easy to see why management (and even security) has taken a backseat in the drive to implementation. Just getting multiple virtual machines to run in the amazingly diverse ecosystem of commodity hardware is hard enough without worrying about how to keep track.
AMD and Intel eased virtual machine resource constraints by adding hardware extensions to their CPUs. The latest generation of Intel CPUs, based on the Xeon 5500, or “Nehalem,” family, takes this support even further. But this just underscores the fact that, until recently, very little “interference” from management tools could be tolerated while getting production workloads to run reliably on commodity hardware.
As virtual machines increasingly supplant dedicated physical systems, it will be interesting to watch the flowering of management systems that corral both physical and virtual systems. The biggest names in the data center have long been associated with their management platforms: BMC, CA, Dell, IBM and HP have always included management tools to ensure that the underlying compute infrastructure was up and running. Now virtual machine managers will be added to this mix.
The data center of the past 25 years will resemble the data center of the next 25 years in that it will have a physical existence. But the location and capabilities of the data center we are creating today will be vastly more flexible and capable because of virtualization.
Technical Director Cameron Sturdevant can be reached at [email protected].