Web 2.0 is often described as an evolution from the Web as an information source (that is, Web 1.0) to the Web as a more engaging, participatory medium. The Web page has evolved accordingly, from a static download with limited functionality to a starting point for a rich Web experience full of complex applications and third-party services that take users down elaborate paths of functionality.
Today, a single Web 2.0 transaction can consist of viewing a product catalog, filling a shopping cart and executing a purchase, all from within the same Web page. This is thanks to rich Internet application technologies such as AJAX, Flash and Flex. Measuring performance, that is, the response time or speed at which applications and services are delivered to users, is no longer as simple as testing page-by-page response time.
A new approach to measuring the performance of this new type of Web 2.0 transaction is needed. And since so much of the processing of Web 2.0 applications happens client-side (within users’ browsers), businesses need a more accurate way of testing performance, particularly under load.
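To make the shift concrete, a minimal sketch of what "transaction-level" rather than "page-level" measurement looks like is shown below. The step names and sleep-based stand-ins are hypothetical placeholders for real in-page AJAX actions; a production tool would drive an actual browser.

```python
import time

def view_catalog():
    time.sleep(0.01)  # placeholder for an AJAX catalog fetch

def fill_cart():
    time.sleep(0.01)  # placeholder for an in-page add-to-cart call

def checkout():
    time.sleep(0.01)  # placeholder for the final purchase request

def time_transaction(steps):
    """Time each named step plus the end-to-end transaction."""
    timings = {}
    start = time.perf_counter()
    for name, step in steps:
        t0 = time.perf_counter()
        step()
        timings[name] = time.perf_counter() - t0
    timings["total"] = time.perf_counter() - start
    return timings

timings = time_transaction([
    ("view_catalog", view_catalog),
    ("fill_cart", fill_cart),
    ("checkout", checkout),
])
for name, secs in timings.items():
    print(f"{name}: {secs * 1000:.1f} ms")
```

The point of the structure is that a single page view no longer equals a single measurement: each in-page step gets its own timing, and the "total" is the user-perceived transaction time.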
Finding your Web application’s weakest link
Web 2.0 applications are highly complex and include an average of six third-party services that provide additional content and functionality. While third-party services help deliver a more comprehensive, satisfying online experience, they also present a liability, accounting for 50 percent or more of the time a user spends waiting for a Web site or application to load.
Today, the performance of a complex Web application in its entirety hinges on the performance of each and every third-party element within it. Together, these elements form a highly interdependent Web application delivery chain, where the poor performance of any one element can mean the difference between a successful and a disappointing user experience.
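A simple way to reason about the delivery chain is to break one page view down by component and ask which element is the weakest link. The component names and millisecond figures below are purely illustrative assumptions, not measurements from any real site.

```python
# Hypothetical component timings (ms) for one page view; the slowest
# element in the delivery chain often dominates the user's wait.
component_ms = {
    "origin HTML": 180,
    "CDN images": 240,
    "ad network": 620,      # third-party
    "analytics tag": 310,   # third-party
    "ratings widget": 450,  # third-party
}
third_party = {"ad network", "analytics tag", "ratings widget"}

total = sum(component_ms.values())
tp_share = sum(v for k, v in component_ms.items() if k in third_party) / total
weakest = max(component_ms, key=component_ms.get)

print(f"third-party share: {tp_share:.0%}, weakest link: {weakest}")
```

Even in this toy breakdown, the third-party elements dwarf the origin's own contribution, which is why testing only your own data center misses most of the user's wait.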
Keep in mind that when a user accesses a third-party enabled application or service (an advertisement, for example, which may block or slow the main content coming from your site), they are not leaving your single Web page. So it matters little to users what is causing the slow performance; they will simply think your site is slow and associate a negative Web experience with your business. This hurts your brand image, revenues and customer satisfaction.
For these reasons, businesses cannot make assumptions regarding the performance of the external components that come between their data centers and their users' browsers. This is important during normal traffic periods, but is especially critical during peak traffic periods given customers' shrinking tolerance for poor online performance. Case in point: a recent study found that a mere one-second increase in response time can reduce online sales conversions by seven percent.
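The business stakes of that seven-percent figure are easy to put in dollar terms. The traffic, conversion rate and order value below are hypothetical inputs chosen only to illustrate the arithmetic.

```python
# Illustrative arithmetic for the study cited above: a one-second
# slowdown cutting online sales conversions by seven percent.
daily_visitors = 100_000   # hypothetical daily traffic
conversion_rate = 0.03     # hypothetical baseline conversion rate
avg_order_value = 80.00    # hypothetical average order value, in dollars

baseline_orders = daily_visitors * conversion_rate
slowed_orders = baseline_orders * (1 - 0.07)  # one second slower

lost_revenue = (baseline_orders - slowed_orders) * avg_order_value
print(f"Estimated daily revenue lost: ${lost_revenue:,.2f}")
```

Under these assumptions, a single second of added response time costs $16,800 per day, which is the kind of number that justifies testing under peak load before the peak arrives.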
Load testing evolves for Web 2.0 world
In this context, Web 2.0 applications require a new approach to load testing known as Load Testing 2.0. Load Testing 2.0 stands apart from more traditional solutions by simultaneously creating realistic production-equivalent loads, while previewing how the Web experience really scales under various load sizes across all key markets.
Load Testing 2.0 accomplishes this by combining load generated from the cloud with load generated from real-world user desktops. Additional points of differentiation include the following three:
Point No. 1: Load Testing 2.0 covers the entire Web application delivery chain
Load Testing 2.0 covers the entire Web application delivery chain and exposes how the extremely broad range of third-party elements performs from the user's perspective under various load sizes. This helps businesses identify, isolate and fix performance problems anywhere in the chain, before users are impacted. This contrasts with Load Testing 1.0 approaches, which tend to be lab-based and focused only on internally developed and managed applications. Simply put, Load Testing 1.0 approaches are walled off from external third-party performance variables and are incapable of delivering a realistic view of user performance.
A step further along, Load Testing 1.5 solutions generate and apply production-equivalent loads from external Internet backbone or cloud locations in order to assess application performance impact. But Load Testing 1.5 does not combine these "synthetic" loads with real-world user desktop loads, which also inhibits a true view of user performance.
Point No. 2: Load Testing 2.0 leverages an expansive worldwide testing network
Load Testing 2.0 leverages an expansive worldwide testing network (comprising hundreds of thousands of desktops and devices) to glean performance data directly from your geographically dispersed users' browsers. In addition to the third-party enabled applications and services, an extremely varied set of individual user circumstances (including geographies, ISPs, connection speeds, desktops and browsers) can impact the user experience. This is known as the "last mile," and the sheer volume of usage scenarios presents a potentially huge drain on testing resources.
Only Load Testing 2.0 tests throughout this extremely wide range of usage scenarios, leveraging an “outside-in” approach to trace performance from the user’s browser all the way back to your data center. Worth noting: as applications grow richer (and by extension, heavier), technologies such as AJAX, Flash and Flex help maintain application speed by enabling browsers to perform much of the application work. With browsers emerging as a bigger part of the application delivery infrastructure, special attention must be paid to browser-related performance nuances, especially under load.
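To see why the last mile is such a drain on testing resources, consider how quickly the scenario count grows. The dimension values below are small, hypothetical examples; a real testing network spans far more of each.

```python
from itertools import product

# Hypothetical last-mile dimensions; real networks span far more values.
geographies = ["US-East", "EU-West", "APAC"]
isps = ["ISP-A", "ISP-B"]
connection_speeds = ["cable", "dsl", "3g"]
browsers = ["IE8", "Firefox", "Chrome"]

# Every combination is a distinct usage scenario to exercise under load.
scenarios = list(product(geographies, isps, connection_speeds, browsers))
print(f"{len(scenarios)} last-mile scenarios from just 4 small dimensions")
```

Even these four tiny dimensions multiply out to 54 distinct scenarios; with realistic numbers of geographies, ISPs and browser versions, the combinatorics quickly exceed what any lab-based test bed can reproduce.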
Point No. 3: Load Testing 2.0 solutions are available on-demand
Load Testing 2.0 solutions are available on-demand as self-service, user-friendly tools, making them much more accessible to more individuals within your organization. Today, any party with a stake in the performance of Web operations, not just application developers and quality assurance (QA) teams but also e-commerce managers and IT staffs, can take advantage of Load Testing 2.0 tools to test whenever they want. They can do this as often as they want and across more last-mile scenarios, in a cost-effective, pay-per-consumption model.
When is Load Testing 2.0 needed?
Load Testing 2.0 solutions are appropriate and valuable for a number of marketing or IT initiatives, including within the following five scenarios:
Scenario No. 1: You are launching new marketing campaigns, offering major promotions and/or hardening for peak sales periods. Applications simply must perform well, especially during high-traffic, high-visibility periods such as marketing promotions and holidays.
Scenario No. 2: You are releasing new features and/or incorporating a new third-party service into a Web application. The key is to identify and fix potentially costly and critical problems in your Web application delivery chain before launch, not after.
Scenario No. 3: You are virtualizing your IT infrastructure and you need assurance that your most important Web applications can still scale sufficiently.
Scenario No. 4: You are adding cloud-based services. Load Testing 2.0 can give valuable assurance of the ability of both internal and external cloud-based applications and services to scale under load.
Scenario No. 5: You are entering a new market or a new geography, and view Web application performance as a key to winning customers and improving market penetration.
Imad Mouline is CTO of Gomez, Inc. A veteran of software architecture, research and development, he is a recognized expert in Web application development, testing and performance management. His expertise spans Web 2.0, cloud computing, Web browsers, Web application architecture and infrastructure, and SaaS. Prior to Gomez, he was CTO at S1 Corp. He can be reached at imouline@gomez.com.