Today's New Approach to Capacity Management

 
 
By Imad Mouline  |  Posted 2010-10-08

Today's composite Web applications require new approaches to capacity management. First and foremost, any effort to optimize performance under various load sizes must be based on a realistic view of the user's actual experience, which is the only reliable source for pinpointing which user segments may be vulnerable to performance degradation. In other words, performance testing from the user's actual browser or device (also known as an "outside-in" approach) is the only way to truly understand the user experience under varying loads and across the extremely wide range of performance-impacting variables that lie beyond the data center.
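
As a rough illustration, an outside-in measurement can be as simple as capturing the page-load timing a real user experienced in their own browser and reporting it for analysis. The sketch below uses the browser's standard Navigation Timing API; the "/rum/collect" endpoint and the payload fields are assumptions for the example, not any particular product's interface.

```typescript
// Minimal outside-in measurement sketch: capture the page-load timing a real
// user actually experienced, in their own browser, and report it for analysis.
// The "/rum/collect" endpoint and the payload fields are illustrative assumptions.
window.addEventListener("load", () => {
  // The navigation entry describes the full page load as the user saw it.
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;

  const sample = {
    url: location.href,
    // Total time from the start of navigation until the load event completed (ms).
    pageLoadMs: nav.loadEventEnd - nav.startTime,
    // Time to first byte: roughly the network plus server-side share.
    ttfbMs: nav.responseStart - nav.startTime,
    // Front-end share: everything after the first byte arrived.
    frontEndMs: nav.loadEventEnd - nav.responseStart,
    collectedAt: Date.now(),
  };

  // sendBeacon posts the sample without blocking or delaying the user.
  navigator.sendBeacon("/rum/collect", JSON.stringify(sample));
});
```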

One point worth emphasizing here is that you need to measure user performance across all the geographies where your key user segments are based. Some third-party services perform very differently from one location to the next (New York versus Los Angeles, for example), and the cascading effect of performance deterioration described earlier can therefore progress very differently from one region to another. If you don't test across key user geographies, you risk leaving important user segments behind.
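
A sketch of that per-geography check follows: it groups collected outside-in samples by the user's location and flags any geography whose 95th-percentile page-load time misses a target. The sample shape, the four-second target and the numbers are illustrative assumptions.

```typescript
// Hedged sketch: group outside-in samples by user geography and flag any
// geography whose 95th-percentile page-load time misses a target.
interface Sample {
  geo: string;        // e.g. "New York", "Los Angeles"
  pageLoadMs: number; // page-load time measured in the user's browser
}

function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.min(idx, sorted.length - 1)];
}

function slowGeographies(samples: Sample[], targetMs = 4000): string[] {
  const byGeo = new Map<string, number[]>();
  for (const s of samples) {
    const times = byGeo.get(s.geo) ?? [];
    times.push(s.pageLoadMs);
    byGeo.set(s.geo, times);
  }
  return [...byGeo.entries()]
    .filter(([, times]) => percentile(times, 95) > targetMs)
    .map(([geo]) => geo);
}

// Example: Los Angeles misses the target while New York stays within it.
console.log(slowGeographies([
  { geo: "New York", pageLoadMs: 2100 },
  { geo: "New York", pageLoadMs: 2600 },
  { geo: "Los Angeles", pageLoadMs: 4800 },
  { geo: "Los Angeles", pageLoadMs: 5200 },
])); // -> ["Los Angeles"]
```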

Once you understand the user's experience and know which segments may be vulnerable to a performance drop-off, you can trace back through all the elements standing between these users and your data center to identify problem areas where capacity may need to be added. The source may be internal (within your data center) or external; for example, you may trace a slowdown to a third-party or cloud service provider. Armed with this knowledge, you can highlight and verify the performance breakdown in order to enforce service-level agreements (SLAs) and secure the necessary capacity additions, ideally before users even become aware of the performance issue.
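
In the browser, the Resource Timing API offers one way to start that trace-back: break the page's resources down by origin, first-party and third-party alike, and flag any origin whose slowest resource exceeds a per-resource budget. The 1,500 ms budget in the sketch below is an illustrative assumption, not a real SLA figure.

```typescript
// Hedged sketch of the trace-back step, run in the user's browser: attribute
// slow resources to the origin that served them and flag budget breaches.
const BUDGET_MS = 1500;

const worstByOrigin = new Map<string, number>();
for (const entry of performance.getEntriesByType("resource") as PerformanceResourceTiming[]) {
  const origin = new URL(entry.name).origin; // who served this resource
  const worst = Math.max(worstByOrigin.get(origin) ?? 0, entry.duration);
  worstByOrigin.set(origin, worst);
}

for (const [origin, worstMs] of worstByOrigin) {
  if (worstMs > BUDGET_MS) {
    // In practice, this evidence would feed an SLA conversation with the
    // provider in question or an internal capacity review.
    console.warn(`${origin} exceeded the budget: slowest resource took ${worstMs.toFixed(0)} ms`);
  }
}
```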

Instead of managing capacity based on utilization, new approaches to capacity management identify the "breaking points" of the individual elements that collectively impact the user experience under various load sizes. This represents a much more informed approach to capacity management, and enables optimal trade-offs between performance and infrastructure investment.
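
Conceptually, finding a breaking point means walking each element's measured load/response curve until the acceptable threshold is crossed. The sketch below assumes per-element load-test results are already available; the element names, load levels and four-second threshold are illustrative.

```typescript
// Hedged sketch of the "breaking point" idea: report the lowest tested load at
// which each element's response time crosses the acceptable threshold.
interface LoadPoint {
  concurrentUsers: number;
  responseMs: number;
}

function breakingPoint(curve: LoadPoint[], thresholdMs: number): number | null {
  const sorted = [...curve].sort((a, b) => a.concurrentUsers - b.concurrentUsers);
  const breach = sorted.find(p => p.responseMs > thresholdMs);
  return breach ? breach.concurrentUsers : null; // null: no breaking point observed yet
}

const elements: Record<string, LoadPoint[]> = {
  "Web farm": [
    { concurrentUsers: 1000, responseMs: 1800 },
    { concurrentUsers: 2000, responseMs: 3900 },
    { concurrentUsers: 3000, responseMs: 7200 },
  ],
  "third-party content service": [
    { concurrentUsers: 1000, responseMs: 2500 },
    { concurrentUsers: 2000, responseMs: 6400 },
    { concurrentUsers: 3000, responseMs: 9100 },
  ],
};

for (const [name, curve] of Object.entries(elements)) {
  console.log(`${name} breaks at ${breakingPoint(curve, 4000)} concurrent users`);
}
// The Web farm breaks at 3000 users; the third-party service breaks earlier, at 2000.
```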

As an example, an organization may determine that a Web farm operating at 50 percent utilization is able to maintain an acceptable response time of just below four seconds for the most critical user segments and geographies under heavy load. However, as utilization creeps up past 50 percent, performance may begin to drop off. With this knowledge, the organization can design and partition systems based on an optimal utilization level of 50 percent, which strikes the proper balance between too many idle resources and the risk of reputation-damaging problems from not enough capacity.
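
The same idea can be expressed as a small calculation: given response times measured for the critical user segment at several utilization levels, choose the highest utilization that still keeps response time under the four-second target. The measurements below are invented to mirror the example.

```typescript
// Hedged sketch of the worked example: pick the highest utilization level that
// still meets the response-time target for the critical user segment.
interface UtilizationPoint {
  utilizationPct: number;
  responseMs: number;
}

function optimalUtilization(points: UtilizationPoint[], targetMs: number): number | null {
  const acceptable = points.filter(p => p.responseMs < targetMs);
  return acceptable.length > 0
    ? Math.max(...acceptable.map(p => p.utilizationPct))
    : null;
}

const measured: UtilizationPoint[] = [
  { utilizationPct: 30, responseMs: 3100 },
  { utilizationPct: 50, responseMs: 3900 }, // just below four seconds, as in the example
  { utilizationPct: 70, responseMs: 5600 }, // performance has begun to drop off
  { utilizationPct: 90, responseMs: 9800 },
];

console.log(optimalUtilization(measured, 4000)); // -> 50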




 
 
 
 
Imad Mouline is CTO of Gomez and a veteran of software architecture, research and development. He is a recognized expert in Web application development, testing and performance management, and his breadth of expertise spans Web 2.0, cloud computing, Web browsers, Web application architecture and infrastructure, and software as a service. Prior to Gomez, Imad was CTO at S1 Corp. He can be reached at imouline@gomez.com.
 
 
 
 
 
 
 
