How to Implement Green Data Centers with IT Virtualization

The use of virtualization technology is usually the first and most important step companies can take to create energy-efficient, green data centers. Virtualization is the most promising technology for addressing both IT resource utilization and facilities space, power and cooling utilization. IT virtualization, along with cloud computing, is the key to energy-efficient, flexible and green data centers. Here, Knowledge Center contributor John Lamb describes the concept of IT virtualization and the significant impact it has on improving data center energy efficiency.


The most significant step most organizations can take in moving to green data centers is to implement virtualization for their IT data center devices. These devices include the servers, data storage, and clients or desktops used to support the data center. For most of our data centers, there is also a virtual IT world of the future, reached via private cloud computing.

Although the use of cloud computing in your company's data center for mainstream computing may be off in the future, some steps towards private cloud computing for mainstream computing within your company are currently available. Server clusters are here now and are being used in many corporate data centers.

Although cost reduction usually drives the path to virtualization, often the most important reason to use virtualization is IT flexibility. The cost and energy savings from consolidating hardware and software are very significant benefits and nicely complement the flexibility benefits. The use of virtualization technologies is usually the first and most important step we can take in creating energy-efficient and green data centers.

Reasons for creating virtual servers

Consider this basic scenario: You're in charge of procuring additional server capacity at your company's data center. You have two identical servers, each running different Windows applications for your company. The first server, "Server A," is lightly used, reaching a peak of only five percent of its CPU capacity and using only five percent of its internal hard disk. The second server, "Server B," is using all of its CPU (averaging 95 percent CPU utilization) and has basically run out of hard disk capacity (that is, the hard disk is 95 percent full).

So, you have a real problem with Server B. However, if you consider Server A and Server B together, on average the combined servers are using only 50 percent of their CPU capacity and 50 percent of their hard disk capacity. If the two servers were actually virtual servers on a larger physical server, the problem would be solved immediately, since each virtual server could quickly be allocated the resources it needs.
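The arithmetic behind this consolidation argument can be sketched in a few lines. This is a minimal illustration using the hypothetical figures from the Server A / Server B scenario, assuming both servers have identical CPU and disk capacity:

```python
# Utilization figures from the Server A / Server B scenario (percent).
servers = {
    "Server A": {"cpu_pct": 5, "disk_pct": 5},
    "Server B": {"cpu_pct": 95, "disk_pct": 95},
}

def combined_utilization(servers):
    """Average utilization if the workloads shared one physical host.

    Assumes every server has identical capacity, as in the scenario:
    the combined figure is then the simple mean of the percentages.
    """
    n = len(servers)
    cpu = sum(s["cpu_pct"] for s in servers.values()) / n
    disk = sum(s["disk_pct"] for s in servers.values()) / n
    return cpu, disk

cpu, disk = combined_utilization(servers)
print(f"Combined CPU: {cpu:.0f}%, combined disk: {disk:.0f}%")
# Combined CPU: 50%, combined disk: 50%
```

The same averaging logic explains why consolidation ratios improve as more lightly used servers are pooled onto one physical machine: headroom wasted on one box becomes capacity available to another.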

In newer virtual server technologies, for example, Unix Logical Partitions (LPARs) with micro-partitioning, each virtual server can dynamically (nearly instantaneously) increase the number of CPUs available to it by drawing on CPUs not currently in use by other virtual servers on the large physical machine. The idea is that each virtual server gets the resources it requires based on its immediate need.
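This resource-follows-demand behavior can be illustrated with a toy allocation model. The sketch below is not how an actual LPAR hypervisor works internally; it only demonstrates the principle that spare CPU capacity flows to the virtual server that needs it, and that an oversubscribed machine scales shares down proportionally. The server names and CPU counts are illustrative:

```python
# Toy model of demand-based CPU sharing on one physical machine.
TOTAL_CPUS = 16  # hypothetical CPU count of the physical host

def allocate(demands):
    """Grant each virtual server its CPU demand, capped so the total
    never exceeds the physical machine's CPU count.

    If the machine is oversubscribed, every share is scaled down
    proportionally (a simple stand-in for hypervisor entitlement rules).
    """
    total_demand = sum(demands.values())
    if total_demand <= TOTAL_CPUS:
        return dict(demands)  # enough capacity: everyone gets full demand
    scale = TOTAL_CPUS / total_demand
    return {name: d * scale for name, d in demands.items()}

# Server B spikes while Server A is nearly idle: B simply absorbs
# the CPUs that A is not using.
print(allocate({"Server A": 1, "Server B": 12}))

# Both spike at once: demand (22) exceeds capacity (16), so each
# share is scaled down proportionally.
print(allocate({"Server A": 10, "Server B": 12}))
```

On dedicated hardware, Server B's spike in the first case would have meant a capacity crisis; in the shared model it is absorbed by idle capacity with no procurement at all.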