How to Deploy Higher-Level Building Blocks for Web 2.0 and Cloud Computing Data Centers
Today's Web 2.0 and cloud computing data centers have reached a critical juncture, as demand for their services has collided with the limits of existing architectures and technologies. Web 2.0 and cloud computing enterprises must focus their resources on their core business of providing leading-edge application services. Here, Knowledge Center contributor John Busch explains why higher-level building blocks are needed to effectively exploit these advanced Web 2.0 and cloud computing technologies.

Today's data centers are at a critical juncture in their development. The full potential of Web 2.0 and cloud computing technologies has been hindered by spiraling power costs, unprecedented complexity, and limitations in the existing IT architectures that support these technologies. Existing architectures were never designed to support the rapid growth of data, users and traffic in the Web 2.0 world. To address these challenges, the industry is beginning to move to "data center 2.0," where new approaches to data management, scaling and power consumption give businesses the room they need to grow. These 2.0 data centers leverage standard low-cost x86 servers, Gigabit Ethernet interconnect and open-source software to build scale-out applications with tiering, data and application partitioning, dynamic RAM (DRAM)-based content caching servers and application-layer node failure tolerance.
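Two of the techniques named above, key-based data partitioning across cache nodes and a DRAM-based caching tier in front of the database, can be illustrated with a minimal sketch. All class and function names here are hypothetical, and real deployments (e.g., memcached clusters) use consistent hashing rather than simple modulo hashing so that a node failure remaps only a fraction of the keys:

```python
import hashlib

class CacheNode:
    """One DRAM-based caching server, modeled here as a plain dict."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        self.store[key] = value


class PartitionedCache:
    """Routes each key to a node by hashing the key -- a simple form
    of data partitioning (hypothetical sketch, not a production design)."""
    def __init__(self, nodes):
        self.nodes = nodes

    def node_for(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def get_or_load(self, key, load_from_db):
        """Cache-aside read path: serve from DRAM on a hit, otherwise
        fall back to the database tier and populate the cache."""
        node = self.node_for(key)
        value = node.get(key)
        if value is None:            # cache miss: go to the database tier
            value = load_from_db(key)
            node.set(key, value)     # cache for subsequent reads
        return value
```

A typical read would be `cache.get_or_load("user:42", fetch_row)`: the first call hits the backing store, and repeat reads of the same key are served entirely from the caching tier, which is what relieves database load in these scale-out architectures.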
These loosely coupled architectures have enabled service scaling, but at a very high price. Today's data centers are reeling from the high costs of power, capital equipment, network connectivity and space. They are also hindered by serious performance, scalability and application complexity issues.