Today's data centers are at a critical juncture in their development. The full potential of Web 2.0 and cloud computing technologies has been hindered by spiraling power costs, unprecedented complexity, and limitations in the existing IT architectures that support these technologies. Existing architectures were never designed to support the rapid growth of data, users and traffic in the Web 2.0 world. To address these challenges, the industry is beginning to move to "data center 2.0," where new approaches to data management, scaling and power consumption give businesses the room they need to grow.
These 2.0 data centers leverage standard low-cost x86 servers, Gigabit Ethernet interconnect and open-source software to build scale-out applications with tiering, data and application partitioning, dynamic RAM (DRAM)-based content caching servers and application-layer node failure tolerance.
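To make the data-partitioning and caching pattern above concrete, the sketch below shows one common way such a DRAM-based caching tier can be built: keys are spread across cache nodes with a consistent-hash ring, so losing or adding a node remaps only a fraction of the keys rather than invalidating the whole cache. The class and node names here are illustrative assumptions, not a specific product's API; real deployments typically use memcached or similar software over the network rather than in-process dictionaries.

```python
import hashlib
from bisect import bisect

class CacheRing:
    """Illustrative consistent-hash partitioner over DRAM cache nodes.

    Each node's data lives in a plain dict standing in for that node's RAM;
    a production system would issue network calls to memcached-style servers.
    """

    def __init__(self, nodes, replicas=100):
        # Place each node at `replicas` points on the hash ring to
        # smooth out the key distribution across nodes.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]
        self._stores = {node: {} for node in nodes}  # per-node in-memory store

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def _node_for(self, key):
        # Pick the first ring point clockwise from the key's hash.
        idx = bisect(self._keys, self._hash(key)) % len(self._keys)
        return self._ring[idx][1]

    def set(self, key, value):
        self._stores[self._node_for(key)][key] = value

    def get(self, key):
        return self._stores[self._node_for(key)].get(key)


ring = CacheRing(["cache-a", "cache-b", "cache-c"])
ring.set("user:42", "cached profile page")
print(ring.get("user:42"))
```

The application-layer failure tolerance the text mentions builds on the same idea: when a node stops responding, the client can retry against the next node clockwise on the ring, degrading to a cache miss (and a database read) instead of an outage.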
These loosely coupled architectures have enabled service scaling, but at a very high price. Today's data centers are reeling from the high costs of power, capital equipment, network connectivity and space. They are also hindered by serious performance, scalability and application complexity issues.
Advances in multi-core processors, flash memory and low-latency interconnects offer tremendous potential improvements in performance and power at the component level, but adapting them to realize such benefits requires major engineering and research efforts. Because Web 2.0 and cloud computing enterprises must focus on their core business, higher-level building blocks are needed that can exploit these advanced technologies.