IBM's Green Advice

By Darryl K. Taft | Posted 2010-06-21

The development of virtualization has enabled companies to scale without increasing their footprint. How is IBM making it possible to enhance utilization across the data center?

Virtualization provides for more effective utilization of resources while allowing clients to provision services to users faster. At one of our strategic outsourcing data centers in Lexington, Ky., IBM removed 85 percent of the servers and transferred the applications onto the remaining 15 percent, enabling us to increase IT capacity eightfold without any increase in power, space or cooling. This example illustrates the enormous opportunity created by server virtualization: one existing server taking on the workload originally spread across six.
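The consolidation figures above can be sanity-checked with simple arithmetic. The sketch below assumes a hypothetical 100-server fleet (the article gives only percentages, not counts) to show how an 85 percent reduction works out to roughly the six-to-one consolidation ratio cited.

```python
# Back-of-the-envelope check of the Lexington consolidation figures.
# The fleet size of 100 is an assumption chosen for round numbers.
total_servers = 100
removed = int(total_servers * 0.85)   # 85% of servers decommissioned
remaining = total_servers - removed   # the 15% that stay in service

# Every surviving server absorbs a share of the whole fleet's workload,
# so the consolidation ratio is fleet size divided by remaining servers.
consolidation_ratio = total_servers / remaining

print(f"{removed} servers removed, {remaining} kept")
print(f"each remaining server carries ~{consolidation_ratio:.1f} former workloads")
```

The ratio comes out at about 6.7, consistent with the "one server doing the work of six" framing in the interview.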

What are the benefits to supporting cloud computing in the new data center?

Modular data centers provide a more flexible way to meet unpredictable changes in business and IT demand. As the IT landscape continues to change, clients find it increasingly difficult to predict their capacity requirements over time. We have seen the cloud improve the operating and capital costs of IT environments and enhance responsiveness for our clients around the world. By building in smaller increments, clients can better align their data center capacity with the needs of the business, adjusting capacity up (or down) as needed.

Can you discuss this vision in more detail and how it relates to IBM's Smarter Data Center Initiatives?

Data centers need to meet three critical requirements: availability, capacity and efficient operations, including energy management. As the environments we manage have become more complex, there is increased demand for more instrumented and interconnected systems that meet these needs while reducing costs. Our newest state-of-the-art data center in RTP [Research Triangle Park], N.C., has over 40,000 sensors linking our IT equipment, data centers and building automation systems into a single management system. IBM is now able to better manage all of these resources and save up to 15 percent a year in energy costs while improving our long-term operational efficiency.

According to Gartner, 40 percent of unplanned application downtime is caused by application failures and 40 percent by operator error. Can you discuss how IBM addresses this through initiatives to help customers better manage their data centers?

The growing demand for availability and operational efficiency creates challenges that necessitate a new approach to service delivery: one that incorporates increased reliance on automation and technology, improved process management skills and metrics, and streamlined organizational governance. We have helped our clients develop the people who run the operational side of a data center while reducing costs. An example of this work can be seen in our Enterprise Command Center in Boulder, Colo., where we manage the operations of more than 60 data centers in the U.S. to meet the highest service levels. By standardizing on tools, using one knowledge management system with consistent metrics, and increasing the levels of automation, we have continued to provide leading availability while reducing staff by 50 percent.

IBM as a company (and you in particular) has probably witnessed more data center build-outs than most organizations. What advice do you have for clients on how to avoid common problems? Where do you see IBM trying to position itself in the next few years?

With economic conditions stabilizing, we are seeing over 80 percent of Fortune 2000 clients focused on driving a return to growth and an increase in large data center projects. Over the years, we have learned a number of lessons from our engagements on troubled projects whose problems could have been avoided from the start. First, clients need to document and cost out their real business requirements. In many cases, a wish list creeps into the process that is not based on the needs of the business. Second, during the design phase there are real opportunities to make decisions that trade off increased capital expense against potentially significant operational costs. In a number of cases, the client has no clear governance process for making those design trade-off decisions quickly in order to optimize the total cost of the project. Lastly, a number of our clients have strong real estate expertise and attempt to take on the data center project themselves when it may have been many years since they were last involved in data center activity. These are unique and complex projects where outside expertise can have enormous value.

I also understand IBM is tapping into the expertise of architects around the world to assist in your data center build-outs. Can you discuss these collaborations in more detail?

Planning and designing a data center from concept to implementation is a challenging task that requires skills from a broad talent pool: the ecosystem of partners we just mentioned as well as architectural and engineering partners. By working with leading local and global architects, we can influence the adoption of a more standardized, "plug and play" approach across the construction industry to provide clients with a more cost-effective and rapidly deployable solution.

Darryl K. Taft covers the development tools and developer-related issues beat from his office in Baltimore. He has more than 10 years of experience in the business and is always looking for the next scoop. Taft is a member of the Association for Computing Machinery (ACM) and was named 'one of the most active middleware reporters in the world' by The Middleware Co. He also has his own card in the 'Who's Who in Enterprise Java' deck.