The explosion of connectivity, online applications and new services has driven the average business to increase its server capacity by six times and storage by 69 times over the last 10 years. The increased demand to keep up with growth and manage rising energy costs, as well as increased concern for the environment, places significant pressure on aging data center infrastructures. Many companies are under pressure to reduce their carbon footprint. While “going green” and growing business appear to be diametrically opposed goals, IBM has pioneered technologies that demonstrate how green IT can actually provide a much more efficient and competitive infrastructure to support emerging business requirements.
IBM’s resident green IT expert, Steven Sams, spoke with eWEEK’s Darryl K. Taft about Big Blue’s efforts in this area. Sams is vice president of Site and Facilities Services at IBM. He is responsible for a worldwide organization that is dedicated to helping clients identify their requirements, current capabilities and best options for data centers. This includes building new facilities and optimizing, relocating or consolidating existing facilities. The Site and Facilities Services organization that Sams oversees includes 700 specialists in more than 40 countries who have built over 30 million square feet of customer raised-floor space.
Can you discuss some of the ways IBM has been promoting green IT and how IBM’s Global Technology Services (GTS) is helping clients redesign data centers?
We’ve focused on designing data centers to be the most cost-effective solutions while providing the flexibility to meet clients’ unpredictable changes in IT demand, and to do it in a way that is faster to deploy. In the past few years we’ve designed over 400 data centers around the world with a modular, “plug and play” approach, which has given clients 20 to 30 percent lower energy costs than traditional methods. The client response has been tremendous: according to recent industry surveys, over 80 percent of clients plan to adopt the plug-and-play approach, a drastic increase from a reported 11 percent only two years ago.
Would you explain the distinction between the “plug and play” approach that IBM provides in this space versus the traditional data center touted by other providers?
A traditional approach involves a large, monolithic data center build that is filled over time as the capacity needs of the business increase. This approach was effective when technology power and cooling requirements remained relatively stable and business growth was predictable. IBM’s modular, plug-and-play approach is more cost-effective and flexible: it allows clients to build in smaller increments, pay as they grow and add capacity when needed, deferring as much as 40 to 50 percent of the capital and operating costs. Given accelerating rates of technology change and the unpredictability of most businesses, our approach (a design that reserves space for additional components such as UPS, chillers and cooling) makes the data center more flexible by adapting to change.
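The capital-deferral argument can be sketched with a simple present-value comparison of building all capacity up front versus adding modules as demand grows. The figures below are purely illustrative assumptions, not IBM’s cost data; the actual deferral achieved depends on module cost, build timing and discount rate.

```python
# Hypothetical sketch of the modular "pay as you grow" capital argument.
# All dollar figures and the discount rate are illustrative assumptions.

def present_value(cost, year, discount_rate=0.08):
    """Discount a future outlay back to today's dollars."""
    return cost / (1 + discount_rate) ** year

# Monolithic build: pay for all four modules' worth of capacity in year 0.
monolithic = present_value(4 * 10_000_000, 0)

# Modular build: one $10M module now, another every three years as demand grows.
modular = sum(present_value(10_000_000, year) for year in (0, 3, 6, 9))

# Deferring outlays lowers the present-value cost of the same total capacity.
deferred_fraction = 1 - modular / monolithic
print(f"Capital deferred (present-value terms): {deferred_fraction:.0%}")
```

Under these particular assumptions the deferral is smaller than the 40 to 50 percent quoted above; the quoted range also reflects capacity that modular clients avoid building at all when demand adjusts downward.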
IBM’s collaboration with the world’s leading providers of data center physical infrastructure equipment has helped bring this vision of “plug and play” data centers to market. Can you discuss the role of this ecosystem?
We believe that increasingly efficient use of capital and operational costs cannot be driven by one company. Rather, the technologies should encompass the expertise of multiple companies. IBM is establishing a new leadership approach to data center design that helps solve many of the past cost and flexibility challenges, allowing clients to plug in more cooling and power capacity as necessary. The mechanical and electrical equipment that runs the data center represents over 60 percent of the capital costs to build a new data center. IBM believes working collaboratively with the world’s leading suppliers in power, cooling, monitoring and management, and network cabling is critical to the adoption of a data center that can scale on demand and significantly reduce capital and operating costs. Moreover, these partnerships allow us to go to market on a global scale with innovative solutions.
IBM’s Green Advice
The development of virtualization has enabled companies to scale without increasing their footprint. How is IBM making it possible to enhance utilization across the data center?
Virtualization provides more effective utilization of resources while allowing clients to provision services to users faster. At one of our strategic outsourcing data centers in Lexington, Ky., IBM removed 85 percent of the servers and transferred the applications onto the remaining 15 percent, enabling us to increase IT capacity by eight times without requiring any increase in power, space or cooling. This example underscores the enormous opportunity created by server virtualization: one existing server taking on the workload originally running on six.
What are the benefits to supporting cloud computing in the new data center?
Modular data centers provide a more flexible way to meet unpredictable changes in business and IT demand. Additionally, as the IT landscape continues to change, clients find it increasingly difficult to predict their capacity requirements over time. We have seen the cloud reduce operating and capital costs in IT environments and enhance responsiveness for our clients around the world. By building in smaller increments, clients can better align their data center capacity to the needs of the business, adjusting capacity up (or down) as needed.
Can you discuss this vision in more detail and how it relates to IBM’s Smarter Data Center initiatives?
Data centers need to meet three critical requirements: availability, capacity and efficient operations, including energy management. As the environments we manage have become more complex, there is increased demand for more instrumented and interconnected systems to meet these needs while reducing costs. Our newest state-of-the-art data center in RTP [Research Triangle Park], N.C., has over 40,000 sensors linking our IT equipment, data centers and building automation systems into a single management system. IBM is now able to better manage all of these resources and avoid up to 15 percent of energy costs a year while improving our long-term operational efficiency.
According to Gartner, 40 percent of unplanned application downtime is caused by application failures and 40 percent by operator error. Can you discuss how IBM addresses this through initiatives to help customers better manage their data centers?
The growing demand for availability and operational efficiency creates challenges that necessitate a new approach to service delivery, one that incorporates increased reliance on automation and technology, improved process management skills and metrics, and streamlined organizational governance. We have helped our clients develop the people responsible for the operational aspects of a data center while reducing costs. An example of this work can be seen in our Enterprise Command Center in Boulder, Colo., where we manage the operations of more than 60 data centers in the U.S. to meet the highest service levels. By standardizing on tools, using one knowledge management system with consistent metrics and increasing the levels of automation, we have continued to provide leadership availability while reducing staff by 50 percent.
IBM as a company (and you in particular) has probably witnessed more data center build-outs than most organizations. What advice do you have for clients on how to avoid common problems? Where do you see IBM trying to position itself in the next few years?
With economic conditions stabilizing, we are seeing over 80 percent of Fortune 2000 clients focused on driving a return to growth, along with an increase in large data center projects. Over the years, we have learned a number of lessons from engagements on troubled projects whose problems could have been avoided from the start. First, clients need to document and cost out their real business requirements; in many cases a wish list creeps into the process that is not based on the needs of the business. Second, during the design phase there are real opportunities to make decisions that trade off increased capital against potentially significant operational costs. In a number of cases, the client has no clear governance process to make these design trade-off decisions quickly in order to optimize the total cost of the project. Lastly, a number of our clients have strong real estate expertise and attempt to take on the data center project themselves when it may have been many years since they were last involved in data center work. These are unique and complex projects where outside expertise can have enormous value.
I also understand IBM is tapping into the expertise of architects around the world to assist in your data center build-outs. Can you discuss these collaborations in more detail?
Planning and designing a data center from concept to implementation is a challenging task, requiring skills from a broad talent pool drawn from the ecosystem of partners just mentioned as well as from architecture and engineering partners. By working with leading local and global architects, we can influence the adoption of the more standardized, “plug and play” approach across the construction industry to provide clients with a more cost-effective and rapidly deployable solution.