How to Build High-Performance Data Centers for Optimized App Delivery

By Sunil Cherian  |  Posted 2010-04-26


A typical enterprise runs a mix of Microsoft applications, enterprise applications, multimedia applications and cloud computing applications. Whether the organization is a large enterprise or a small or midsize business, this sprawling application infrastructure in the data center introduces challenges related to application availability, application performance, application and server traffic management, bandwidth utilization, network security, and energy and cooling costs. Collectively, these challenges define common application delivery problems.

Organizations increasingly depend on Web-enabled CRM, ERP and Microsoft applications for day-to-day, business-critical operations. The recent adoption of Web 2.0 applications (content management systems, wikis, blogs and other online collaboration tools) is taking enterprise productivity to the next level. And thanks to the advent of service-oriented architecture (SOA)-based application architectures, businesses can extend services to subsidiaries and partners across the world.

However, application popularity also means increased network traffic, more application transactions and higher response times. Heavy dependence on application infrastructure means IT must ensure availability and acceleration of all data center applications to provide the best user experience possible. Furthermore, IT should implement best-practice security measures for networks, servers and applications to ensure security, privacy and compliance. Implementing these best practices while reducing costs for power, cooling, bandwidth, and software and hardware licensing optimizes data center resources and improves ROI.

To architect an optimized application delivery infrastructure, consider the following seven points:

Point No. 1: 24/7 application access

IT must prepare to prevent both planned and unplanned downtime, as many applications and services have software dependencies on multiple critical servers, and SOA environments contain composite applications that use several business components. Unplanned downtime, in particular, is costly in terms of revenue and reputation.

Data centers should utilize application delivery controllers (ADCs) with intelligent server load balancing that distributes load among servers based on server health (CPU, connection load, uptime and application limits) and prevents overloading any single server. Server load balancing offers high availability and delivers 99.999 percent ("five nines") uptime for applications.
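The health-aware selection described above can be sketched in a few lines. This is a simplified illustration, not any vendor's algorithm; the server names, metrics and thresholds below are invented for the example.

```python
# Hypothetical server pool with health metrics; thresholds are
# illustrative assumptions, not values from any particular ADC.
servers = [
    {"name": "web1", "cpu": 0.45, "connections": 120, "healthy": True},
    {"name": "web2", "cpu": 0.92, "connections": 480, "healthy": True},
    {"name": "web3", "cpu": 0.30, "connections": 60,  "healthy": False},
]

MAX_CPU = 0.85          # skip servers above this CPU load
MAX_CONNECTIONS = 400   # per-server connection ceiling

def pick_server(pool):
    """Return the eligible server with the fewest active connections."""
    eligible = [s for s in pool
                if s["healthy"]
                and s["cpu"] < MAX_CPU
                and s["connections"] < MAX_CONNECTIONS]
    if not eligible:
        raise RuntimeError("no healthy servers available")
    return min(eligible, key=lambda s: s["connections"])

chosen = pick_server(servers)
print(chosen["name"])  # web1: web2 is overloaded, web3 failed its health check
```

Real ADCs combine several such signals (and active health probes) rather than a single least-connections rule, but the principle is the same: overloaded or failed servers simply drop out of the rotation.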

Planned downtimes are more common when IT is rolling out enterprise updates, performing routine maintenance or deploying new applications. The ability to automatically bring standby servers online and redirect users to alternate sites or servers during planned downtimes ensures high availability of applications.


Point No. 2: Application fluency

An ADC with Layer 4 server load balancing offers simple scalability, but lacks intelligent distribution of requests based on Layer 7 information such as HTTP headers, URLs, application cookies and other TCP/IP protocol information.

With intelligent and flexible Layer 7 server load balancing, content no longer needs to be duplicated on each server to improve capacity. This results in lower application software and hardware licensing costs, as well as lower integration and IT maintenance and management costs.
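A minimal sketch of this kind of Layer 7 content-based routing follows. The URL prefixes and pool names are hypothetical; a real ADC would also match on Host headers, cookies and other request attributes.

```python
# Route requests by URL path so static and dynamic content can live on
# separate, specialized pools instead of duplicating everything everywhere.
ROUTES = [
    ("/images/", ["img1", "img2"]),   # static content pool (hypothetical)
    ("/api/",    ["app1", "app2"]),   # application pool (hypothetical)
]
DEFAULT_POOL = ["web1", "web2"]

def route(path):
    """Select the server pool whose URL prefix matches the request path."""
    for prefix, pool in ROUTES:
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route("/images/logo.png"))  # ['img1', 'img2']
print(route("/checkout"))         # ['web1', 'web2']
```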

Point No. 3: Offloading complex business logic

Often, e-commerce or Web services business logic requires that users communicate with different applications in a specific order, based on the nature of the application and content, to execute specific transactions. Since many applications store specific kinds of information on one back-end server, session persistence is required to complete transactions. However, implementing the entire business logic in applications is impractical and increases application complexity.

The ability to offload business logic onto ADCs with server load balancers simplifies application code and reduces rewrites. An ADC with an application-intelligent load balancer that can maintain session persistence between users and applications based on client request information [such as Uniform Resource Identifier (URI), HTTP header, cookie, host name and URL] simplifies application complexity and business logic, and improves uptime.
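Cookie-based persistence, one of the techniques named above, can be sketched as follows. The cookie name and server names are assumptions for illustration; the point is that the same session value always maps to the same back-end server.

```python
# Sticky sessions: hash the session cookie so each client keeps returning
# to the same back-end server. "SESSIONID" is a hypothetical cookie name.
import hashlib

SERVERS = ["app1", "app2", "app3"]

def persistent_server(cookies):
    session = cookies.get("SESSIONID")
    if session is None:
        # No session yet: the first request can go to any server.
        return SERVERS[0]
    digest = hashlib.sha256(session.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

# Repeated requests with the same cookie always land on the same server,
# so per-session state stored on that server stays reachable.
first = persistent_server({"SESSIONID": "abc123"})
second = persistent_server({"SESSIONID": "abc123"})
print(first == second)  # True
```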


Point No. 4: Optimizing user quality of experience (QoE)

Applications can be complex, feature-rich, demanding, and written with performance often relegated as an afterthought or left to IT. Unfortunately, poorly written applications increase server TCP connections, increase user response times, and reduce application usage-which can severely impact productivity and revenue.

To address this challenge, offloading CPU-intensive TCP connection handling and multiplexing client HTTP transactions dramatically improve server performance. ADCs take I/O off of servers and let servers do what they do best: serve valuable content. This reduces the number of physical servers and associated application software licensing costs, while improving user response times by an order of magnitude.
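The multiplexing idea can be illustrated with a toy connection pool: many client transactions reuse a handful of persistent back-end connections, so the server performs far fewer TCP setups. All names and counts below are invented for the sketch.

```python
# Connection multiplexing sketch: 100 client transactions share a pool of
# 2 persistent back-end connections instead of opening 100 of their own.
from collections import deque

class BackendPool:
    def __init__(self, size):
        self.idle = deque(f"conn{i}" for i in range(size))
        self.setups = size  # TCP handshakes the back-end server ever sees

    def send(self, request):
        conn = self.idle.popleft()      # reuse an existing connection
        response = f"{request} via {conn}"
        self.idle.append(conn)          # return it to the pool when done
        return response

pool = BackendPool(size=2)
for i in range(100):                    # 100 client transactions...
    pool.send(f"GET /page{i}")
print(pool.setups)                      # ...but only 2 back-end connections
```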

Another performance challenge is that servers repeatedly serve over 60 percent of Website content, tying up server capacity and increasing response times for users who request unique content. A simple solution is to offload static content to a front-end cache that acts as a proxy for application servers. This reduces server connections, frees server capacity and improves HTTP response times.
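The effect of a front-end cache can be sketched in a few lines: repeated requests for the same static asset are served from memory, and only the first one reaches the origin server. The asset path and origin function are hypothetical.

```python
# Front-end static-content cache: hit the origin server only on a miss.
cache = {}
origin_hits = 0

def fetch_origin(path):
    """Stand-in for a request to the back-end application server."""
    global origin_hits
    origin_hits += 1
    return f"<content of {path}>"

def get(path):
    if path not in cache:
        cache[path] = fetch_origin(path)   # miss: fetch and store
    return cache[path]                     # hit: serve from memory

for _ in range(10):
    get("/css/site.css")   # the same static asset requested 10 times
print(origin_hits)          # 1: only the first request reached the server
```

Production caches add expiry and invalidation (e.g., honoring Cache-Control headers), but the capacity win comes from exactly this hit/miss split.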

The user experience can also be affected by serving large files such as Microsoft Office documents, XML files, style sheets or Flash components. These hog internal networks, consume expensive server CPU cycles, slow WAN downloads, increase response times and can even bring servers down. To optimize the user experience, a hardware compression offloading device frees up server CPU cycles, reduces the amount of data on the wire, delivers consistent response times, improves bandwidth utilization and lets servers focus on delivering content.
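To see why compression offload pays off on this kind of content, consider how well a verbose, repetitive XML payload compresses. The sample document below is invented for illustration; an offload device would apply the same idea (typically gzip/deflate) in hardware so server CPUs never do this work.

```python
# Compression sketch: gzip a verbose XML payload before it crosses the WAN.
import gzip

# Repetitive markup, typical of machine-generated XML (invented sample).
xml = ("<items>"
       + "<item><name>widget</name><qty>1</qty></item>" * 500
       + "</items>").encode()

compressed = gzip.compress(xml)
print(len(xml), len(compressed))  # repetitive XML compresses dramatically
```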

In summary, deploying an application delivery controller in the data center that integrates Layer 4-7 server load balancing, TCP offloading, caching and compression delivers availability, acceleration, security and scalability to applications, and reduces total cost of ownership (TCO).


Point No. 5: Improving bandwidth utilization

Treating recreational, mission-critical, and non-business traffic with equal priority causes many problems. It can bring networks down, provide an unfair advantage to a few users and applications, allow traffic peaks that consume all available bandwidth, and cripple productivity for all.

IT should classify, filter and rate-limit different types of traffic on the network: time-sensitive traffic [voice over IP (VOIP), multimedia streaming and video conferencing], best-effort traffic (Web, instant messaging and FTP), and unwanted traffic (non-corporate and peer-to-peer).

Prioritizing and classifying application traffic based on protocol, port or Layer 7-based information optimizes networks for delivering core business applications, improves response times, and improves bandwidth utilization and ROI.
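A port-based classifier, the simplest form of the classification described above, might look like this. The port-to-class mapping is a deliberately simplified assumption; real ADCs also inspect Layer 7 information, since many applications share port 80/443.

```python
# Classify traffic into priority classes by destination port (simplified).
TIME_SENSITIVE = {5060, 554}     # SIP (VOIP), RTSP streaming
BLOCKED        = {6881}          # example peer-to-peer port (hypothetical)

def classify(port):
    """Map a destination port to a traffic class."""
    if port in TIME_SENSITIVE:
        return "priority"        # queued ahead of everything else
    if port in BLOCKED:
        return "drop"            # unwanted, non-corporate traffic
    return "best-effort"         # Web, IM, FTP and the rest

print(classify(5060))   # priority
print(classify(80))     # best-effort
print(classify(6881))   # drop
```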

Point No. 6: Implementing best practice security methods

Organizations benefit from Secure Sockets Layer (SSL) security, but high volumes of SSL traffic drag down even the most powerful, general-purpose Web server hardware. To comply with regulations [such as the Health Insurance Portability and Accountability Act (HIPAA), the Sarbanes-Oxley Act, or the Payment Card Industry Data Security Standard (PCI DSS)] or to comply with liability or self-imposed requirements for securing application services, IT can take advantage of hardware-based SSL offloading. Hardware SSL acceleration offloads and secures mission-critical applications, removes tradeoffs between security and acceleration, and increases trust of online users.
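Whether SSL is terminated in dedicated hardware or software, the offload point is configured the same way: TLS ends at the device, and back-end servers receive plaintext. A minimal sketch of such a termination context, using Python's standard `ssl` module purely for illustration (the certificate file names are hypothetical):

```python
# TLS termination sketch: the offload device, not the Web server, holds
# the certificate and performs the CPU-intensive handshake work.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols
# ctx.load_cert_chain("server.crt", "server.key")  # hypothetical file names
# ctx.wrap_socket(...) would then terminate client TLS sessions here,
# while traffic to the back-end pool stays unencrypted (or re-encrypted).
print(ctx.minimum_version)
```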

In addition to application intrusion attacks, DDoS (distributed denial of service) attacks are back and bigger than before. During a DDoS attack, networks become congested, latency and packet loss increase, and applications slow to a crawl. Integrating ADCs with DDoS protection capabilities protects networks, servers and applications while improving Website, application and server performance. The result: happy customers, increased business productivity and reduced IT expenses.
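One building block of DDoS protection is per-source rate limiting; a token bucket is a common way to express it. The sketch below is a generic illustration (rates and burst sizes are invented), not any vendor's implementation.

```python
# Token-bucket rate limiter: each source may send a short burst, after
# which requests are throttled to a steady rate. Limits are illustrative.
import time

class Bucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self):
        """Spend one token per request; refill tokens at `rate` per second."""
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = Bucket(rate=10, burst=5)              # 10 req/s steady, bursts of 5
results = [bucket.allow() for _ in range(20)]  # a rapid 20-request flood
print(results.count(True))  # roughly the 5-request burst gets through
```

In an ADC, one such bucket per source IP lets legitimate clients through at normal rates while flood sources are dropped at the edge, before they consume server connections.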

Point No. 7: Resource consolidation and ROI

Older-generation data centers use separate devices for server load balancing, SSL offloading, caching, compression, DDoS protection and Layer 4-7 application firewalling. In aggregate, these devices can consume 1,000 watts or more. By consolidating these functions onto integrated ADCs, IT can reduce hardware footprint and rack, power and cooling costs.

Sunil Cherian is Vice President of Product Marketing at Array Networks. A member of the founding team at Array, he has also served as vice president of product management and director of engineering there. Previously, Sunil served as senior architect for Alteon WebSystems Inc., where he was responsible for several Layer 4-7 technologies. Before that, he worked with Lucent, Octel and VMX. Cherian holds a Bachelor's degree in Computer Science and Engineering from the College of Engineering, Trivandrum, India, and a Master's degree in Computer Science from the State University of New York at Albany.
