The most significant step most organizations can take in moving to green data centers is to implement virtualization for their IT data center devices. These devices include servers, data storage, and the clients or desktops used to support the data center. There is also a virtual IT world of the future, via private cloud computing, for most of our data centers.
Although the use of cloud computing for mainstream workloads in your company’s data center may be off in the future, some steps toward private cloud computing are available now. Server clusters are here today and are being used in many corporate data centers.
Although cost reduction usually drives the path to virtualization, often the most important reason to use virtualization is IT flexibility. The cost and energy savings from consolidating hardware and software are very significant benefits and nicely complement the flexibility benefits. The use of virtualization technologies is usually the first and most important step we can take in creating energy-efficient, green data centers.
Reasons for creating virtual servers
Consider this basic scenario: You’re in charge of procuring additional server capacity at your company’s data center. You have two identical servers, each running different Windows applications for your company. The first server (let’s call it “Server A”) is lightly used, peaking at only five percent of its CPU capacity and using only five percent of its internal hard disk. The second server (“Server B”) is using nearly all of its CPU (averaging 95 percent CPU utilization) and has essentially run out of hard disk capacity; that is, the hard disk is 95 percent full.
So, you have a real problem with Server B. However, if you consider Server A and Server B together, on average the combined servers are using only 50 percent of their CPU capacity and 50 percent of their hard disk capacity. If the two servers were actually virtual servers on a large physical server, the problem would be solved immediately, since each virtual server could quickly be allocated the resources it needs.
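As a back-of-the-envelope illustration, the short Python sketch below reruns the arithmetic of this scenario. The utilization figures are the ones from the example above; the consolidated numbers simply average the two workloads’ demand against the pooled capacity of a single large host.

```python
# Illustrative only: the Server A / Server B consolidation arithmetic
# from the scenario above, with the same utilization figures.

servers = {
    "Server A": {"cpu_util": 0.05, "disk_util": 0.05},
    "Server B": {"cpu_util": 0.95, "disk_util": 0.95},
}

# Demand if both workloads ran as virtual servers on one physical host
# with the same total CPU and disk capacity as the original pair.
combined_cpu = sum(s["cpu_util"] for s in servers.values()) / len(servers)
combined_disk = sum(s["disk_util"] for s in servers.values()) / len(servers)

for name, s in servers.items():
    print(f"{name}: CPU {s['cpu_util']:.0%}, disk {s['disk_util']:.0%}")

print(f"Consolidated host: CPU {combined_cpu:.0%}, disk {combined_disk:.0%}")
# -> Consolidated host: CPU 50%, disk 50%
```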
With newer virtual server technologies, for example Unix logical partitions (LPARs) with micro-partitioning, each virtual server can dynamically and almost instantaneously increase the number of CPUs available to it by using CPUs that other virtual servers on the large physical machine are not currently using. The idea is that each virtual server gets the resources it requires based on its immediate need.
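The sketch below is not IBM’s actual micro-partitioning scheduler; it is a simplified Python illustration of the shared-pool principle, in which idle CPU capacity is lent to whichever virtual servers currently need it and requests are scaled back proportionally only when the pool is oversubscribed.

```python
# Simplified sketch of the shared-pool idea behind micro-partitioning.
# This is not the real scheduler, only an illustration of the principle.

def allocate_shared_pool(demands, total_cpus):
    """demands: CPU units each virtual server wants at this instant."""
    total_demand = sum(demands.values())
    if total_demand <= total_cpus:
        # The pool can satisfy everyone: each virtual server simply
        # borrows the CPUs the others are not using right now.
        return dict(demands)
    # Oversubscribed: scale each request down proportionally.
    scale = total_cpus / total_demand
    return {name: want * scale for name, want in demands.items()}

# An 8-CPU physical machine hosting three virtual servers; vs2 has a
# momentary spike and borrows capacity that vs1 and vs3 are not using.
print(allocate_shared_pool({"vs1": 0.5, "vs2": 6.0, "vs3": 1.0}, total_cpus=8))
```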
Cloud computing: exciting future for IT virtualization
Cloud computing is a relatively new (circa late 2007) label for the subset of grid computing that includes utility computing and other approaches to the use of shared computing resources. Cloud computing is an alternative to having local servers or personal devices handle users’ applications. Essentially, it is the idea that computing capability should “hover” over everything and be available whenever a user wants it.
Although the early publicity on cloud computing was for public offerings over the public Internet by companies such as Amazon and Google, private cloud computing is starting to come of age. A private cloud is a smaller, cloudlike IT system within a corporate firewall that offers shared services to a closed internal network. Consumers of such a cloud include employees across various divisions and departments, as well as business partners, suppliers, resellers, and other partner organizations.
Shared services on the infrastructure side, such as computing power or data storage, or on the application side, such as a single customer information application shared across the organization, are suitable candidates for this approach. Of course, IT virtualization would be the basis of the infrastructure design for these shared services, and that will help drive energy efficiency for the green data centers of the future.
Because a private cloud is exclusive in nature and limited to a defined set of participants, it has inherent strengths in security and control over data. The approach can also make it easier to adhere to corporate and regulatory compliance guidelines. These considerations are very significant for most large organizations.
Cluster architecture for virtual servers
There are now many IT vendors offering virtual servers and other virtual systems. Cluster architecture for these virtual systems provides another significant step forward in data center flexibility and an infrastructure for very efficient private cloud computing. When servers, storage, and networking are completely virtualized, an entire running virtual machine can be moved almost instantaneously from one physical server to another.
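As one concrete illustration of such a move, and not a reference to any particular vendor’s product, the sketch below uses the open-source libvirt Python bindings on a KVM cluster; the host names and the virtual machine name are hypothetical, and shared storage between the two hosts is assumed.

```python
# Hypothetical live migration of a running virtual machine between two
# cluster hosts using the libvirt Python bindings (KVM/QEMU).
import libvirt

src = libvirt.open("qemu+ssh://host-a.example.com/system")
dst = libvirt.open("qemu+ssh://host-b.example.com/system")

dom = src.lookupByName("payroll-vm")  # the running virtual machine

# Memory pages are copied while the guest keeps running; only a brief
# pause occurs at the final switch-over.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
dom.migrate(dst, flags, None, None, 0)

print("VM active on destination:", bool(dst.lookupByName("payroll-vm").isActive()))
src.close()
dst.close()
```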
Client virtualization
Client, or desktop, virtualization offers great potential for energy savings. Various studies have estimated energy savings of more than 60 percent from client virtualization. Client virtualization, often called thin-client computing, is not a new concept and goes back at least 15 years. In fact, thin-client computing, where the server does all of the computing, is similar in concept to the terminals we used to connect to the mainframe before the advent of the PC.
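To see where estimates of that size can come from, the rough calculation below compares one thick-client seat with one thin-client seat. The wattage and usage figures are assumptions chosen only for illustration, not measured values.

```python
# Rough, illustrative energy comparison for a single office seat.
# All figures below are assumptions for the sketch, not measurements.

DESKTOP_W = 150        # assumed average draw of a thick-client desktop
THIN_CLIENT_W = 20     # assumed average draw of a thin-client device
SERVER_SHARE_W = 25    # assumed per-user share of the server doing the work
HOURS_PER_YEAR = 2000  # one seat, business hours

thick_kwh = DESKTOP_W * HOURS_PER_YEAR / 1000
thin_kwh = (THIN_CLIENT_W + SERVER_SHARE_W) * HOURS_PER_YEAR / 1000
savings = 1 - thin_kwh / thick_kwh

print(f"Thick client: {thick_kwh:.0f} kWh/yr, thin client: {thin_kwh:.0f} kWh/yr")
print(f"Estimated saving: {savings:.0%}")  # about 70% with these assumed figures
```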
Benefits of client virtualization
The significant benefits of client virtualization and the use of thin clients are lower cost of ownership (including lower energy use), security, and reliability. Boot image control is much simpler when only thin clients are used: typically, a single boot image can accommodate a very wide range of user needs and can be managed centrally. Thin-client technology can be a significant benefit, for example, in supporting help desks, where everyone at the help desk needs to access the same server applications.
Risks of client virtualization
The major risks of moving to thin-client technology center on the flexibility lost in giving up a thick client. Our laptops are thick clients and give us the flexibility to work anywhere, with or without a network connection. A server that supports thin clients must also provide a higher level of performance, since it does all of the processing for those clients. Thick clients also have advantages for multimedia-rich applications, which would be bandwidth-intensive if served entirely from the server.
That loss of flexibility extends to software compatibility. On some operating systems (such as Microsoft Windows), software products are designed for personal computers that have their own local resources, and trying to run this software in a thin-client environment can be difficult.
So, client virtualization through thin-client computing gives us very significant benefits, but there are also concerns. A good place to start with client virtualization is the help desk, where the benefits usually greatly outweigh the concerns.
John Lamb is a Senior Certified IT Architect with IBM Global Services in New York. He has authored or co-authored numerous technical papers and articles, as well as five books on computer technologies, including the May 2009 book “The Greening of IT: How Companies Can Make a Difference for the Environment.” John holds a Ph.D. in Engineering Science from the University of California at Berkeley. He can be reached at jlamb@us.ibm.com.