By Jim Rapoza | Posted 2006-07-09
Testing the IT stacks

eWeek Labs had several goals in mind when we started our tests of IT stacks.

First, we weren't interested in doing an unrealistic stress test designed to see which IT stack broke first. What we wanted was to run each stack under a heavy but realistic and consistent level of traffic to get practical results that could be applied to most organizations' computing environments.

To test a Web-facing platform like an IT stack, we needed a subject application to test with. We wanted to avoid the clean-room-like environment in which these kinds of tests are often run, so instead of building a test application and then porting it to the languages used in the evaluation, we decided to use real-world applications. Specifically, we chose portal applications because they exist in pretty much every scripting language and we could create almost the exact same test script in each one.

We used portals we consider popular: Microsoft SharePoint Portal Server 2003 (built on ASP.NET), XOOPS (PHP), Plone (Python), and Liferay and JBoss Portal (JSP).

On the server side, our test systems were AMD Opteron-based servers with SATA (Serial ATA) RAID drives and 2GB of RAM. A separate system was configured for each database.

Virtual load test clients were generated by an AMD Athlon 64-based workstation running Windows XP. Everything ran on a Gigabit Ethernet network in eWeek Labs. Each test was run multiple times to avoid test discrepancies and outlier results.

We considered several different tools for performing the load tests, including the open-source OpenSTA (see below). We looked hard at OpenSTA, as it would have made it much easier to share our test scripts and methodologies. However, while OpenSTA had all the requisite capabilities, its configuration and reporting limitations would have added to our testing time.

We eventually decided to use Borland's SilkPerformer (formerly from Segue Software) to handle the actual test and reporting management. During the course of each approximately hourlong test, SilkPerformer ran 1,000 virtual clients against the stack applications.

To test the IT stacks, we recorded a script doing basic tasks that could be repeated in every one of the portals. The tasks included loading an identical page from each portal, loading a members page and general portal surfing. We opted to use open, rather than user-authenticated, pages because we did not want the process to turn into a test of authentication systems.
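SilkPerformer records and replays these scripts in its own environment, but the shape of such a workload is simple. The following is a minimal sketch in Python, purely for illustration: the portal URL, page paths, client count and test duration are hypothetical placeholders, and the script stands in for, rather than reproduces, the SilkPerformer scripts we used.

import threading
import time
import urllib.request

# Illustrative placeholders, not the values used in the eWeek Labs tests.
BASE_URL = "http://portal.example.test"       # hypothetical portal under test
PAGES = ["/index.html", "/members", "/news"]  # identical page, members page, general surfing
NUM_CLIENTS = 25                              # far fewer than the 1,000 virtual clients we ran
TEST_SECONDS = 60                             # far shorter than the roughly hourlong tests

results = []                                  # (page, seconds, bytes, ok) per request
results_lock = threading.Lock()

def virtual_client(stop_at):
    """Loop through the scripted, unauthenticated page set until the test window closes."""
    while time.time() < stop_at:
        for page in PAGES:
            start = time.time()
            ok, size = True, 0
            try:
                with urllib.request.urlopen(BASE_URL + page, timeout=30) as resp:
                    size = len(resp.read())   # download the full page body
            except Exception:
                ok = False                    # record the failure rather than abort the run
            with results_lock:
                results.append((page, time.time() - start, size, ok))

if __name__ == "__main__":
    stop_at = time.time() + TEST_SECONDS
    threads = [threading.Thread(target=virtual_client, args=(stop_at,))
               for _ in range(NUM_CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("completed %d page loads across %d clients" % (len(results), NUM_CLIENTS))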

Among the many results generated from the tests, the ones we have chosen to publish focus on performance averages. These include average transactions per second, average throughput per second, average hits per second, average page download time and average document download time.

We should point out that the last two averages include server busy time, as well as page and document download attempts made when the servers were more heavily loaded. The averages are a good barometer of how a stack setup will handle a long, heavy load, but they don't represent the actual time it will take to download a page or document from these servers.
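SilkPerformer calculated these figures for us. Purely as an illustration of what the published averages represent, the sketch below derives comparable numbers from per-request samples like those gathered by the script above; the field layout and the definition of a transaction as one full pass through the scripted page set are assumptions for illustration, not SilkPerformer's own definitions.

def summarize(results, test_seconds, pages_per_transaction):
    """results: list of (page, seconds, bytes, ok) tuples from the load run."""
    successful = [r for r in results if r[3]]
    hits = len(successful)
    total_download_seconds = sum(r[1] for r in successful)
    total_bytes = sum(r[2] for r in successful)

    return {
        "avg_hits_per_sec": hits / test_seconds,
        "avg_transactions_per_sec": hits / pages_per_transaction / test_seconds,
        "avg_throughput_bytes_per_sec": total_bytes / test_seconds,
        # Includes time spent waiting on a busy server, so it overstates what a
        # single user would see on an otherwise idle system, as noted above.
        "avg_page_download_time": total_download_seconds / hits if hits else None,
    }

# Example, using the names from the sketch above:
# summarize(results, TEST_SECONDS, pages_per_transaction=len(PAGES))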

Like any large-scale benchmarking test, these IT stack tests are not without their weaknesses.

One could argue that we were testing the subject apps—the portals—and not the stacks themselves. Of course, one could also argue that tests using a custom-built subject application would be less a test of the stacks than a test of the porting skills of the programmers.

In addition, many of the platforms that we tested aren't designed to serve users directly. For example, it is recommended that Plone run behind clusters or Apache proxies in production environments.

In reality, all these products would run in heavily optimized environments in an enterprise. But the point was to test the stacks, not their ideal performance points, which is also why we didn't tune or optimize any of the systems but ran them as close to default as possible.

The criticism we expect to hear most is of the stacks we left out—including commercial J2EE platforms, such as those available from BEA Systems, IBM, Oracle and Sun Microsystems, as well as the many other database and server platform permutations.

Hopefully, over time we—and readers who perform these tests and share their results—can address this last potential criticism. We plan to update our tests at blog.eweek.com/blogs/eweek_labs, and we invite you to also share your results there.

Jim Rapoza, Chief Technology Analyst, eWEEK. For nearly fifteen years, Jim Rapoza has evaluated products and technologies in almost every technology category for eWEEK. Mr. Rapoza's current technology focus is on all categories of emerging information technology, though he continues to focus on core technology areas that include content management systems, portal applications, Web publishing tools and security. Mr. Rapoza has coordinated several evaluations at enterprise organizations, including USA Today and The Prudential, to measure the capability of products and services under real-world conditions and against real-world criteria. Jim Rapoza's award-winning weekly column, Tech Directions, delves into all areas of technology and the challenges of managing and deploying technology today.
