How Virtualization Figures into Power Savings
How does virtualization figure into this power-saving equation? Does it save energy, or cost us energy?
When you transfer a vast amount of data on a virtualized basis, you're going to activate areas within the data center that have probably cooled down and haven't processed anything in a while. So the local building or energy management systems may have throttled those areas down to save energy.
But you need to be able to go there, because successful virtualization has two components to an equation that most people don't realize: you virtualize the IT, but you also have to have equal virtualization of the facility.
So, from a mathematical perspective, "VxIT" is equal to "VxFacility." You have to keep the two in harmony. There is a reason for that: when you virtualize a process on the IT side in a data center that is not a green field, the problem is that the data center was designed with upper and lower [power] limits.
We always knew what happens when you exceed [a power limit]: the system shuts down. But we did not understand what would happen if you could actually drive a process below its design requirements. Power and cooling are designed for a window of operation. When you go below the lower limit of that window, what happens from a cooling perspective? Systems will shut off. The root cause analysis is done, the data center crashed, yet no one knows why: the system simply shut down.
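The operating-window behavior described here can be sketched in a few lines of code. This is an illustrative model only, not any vendor's actual control logic; the function name and the kilowatt thresholds are hypothetical:

```python
# Illustrative sketch of a facility system designed for a fixed
# operating window. Both limits trip a shutdown -- the upper one is
# the familiar overload case, the lower one is the surprise.

def cooling_state(load_kw, lower_kw=40.0, upper_kw=120.0):
    """Return the controller's response to the current IT load."""
    if load_kw > upper_kw:
        return "shutdown"  # classic overload trip, well understood
    if load_kw < lower_kw:
        return "shutdown"  # consolidation drove load below the window
    return "running"

# Virtualization consolidates workloads, draining load from one zone:
print(cooling_state(95.0))  # within the design window -> running
print(cooling_state(12.0))  # consolidated away -> shutdown
```

The point of the sketch is that a system sized for a window has two failure edges, and workload consolidation can push a zone off the lower edge that designers never expected to reach.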
What happened was, virtualization saw there was a problem [and] transferred the workload someplace else, so that line went above the design requirement again. The same thing happens with the power systems: the frequency among multiple UPSes [uninterruptible power supplies] can become unstable, and when that instability exceeds the threshold level, they'll take themselves offline.
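The UPS behavior can be sketched the same way. Again, this is a simplified illustration, not real protection firmware; the nominal frequency and the deviation threshold are hypothetical values chosen for the example:

```python
# Illustrative sketch: paralleled UPS modules take themselves offline
# when the frequency spread among them exceeds a protection threshold.

MAX_DEVIATION_HZ = 0.5  # hypothetical protection threshold

def unstable(frequencies_hz):
    """True if the spread among paralleled UPS outputs trips protection."""
    return max(frequencies_hz) - min(frequencies_hz) > MAX_DEVIATION_HZ

print(unstable([59.9, 60.0, 60.1]))  # small spread -> stay online
print(unstable([59.6, 60.0, 60.4]))  # 0.8 Hz spread -> trip offline
```

As in the cooling case, the protection circuit is doing exactly what it was designed to do; what changed is that virtualized load shifts can now push the system into conditions the design never anticipated.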
The safety circuits are operating; they're doing what they were designed to do. So what's the answer? The answer is to understand that when you virtualize the IT, you have to review the facility side as well.