“Men wanted for hazardous journey. Low wages, bitter cold, long hours of complete darkness. Safe return doubtful. Honour and recognition in event of success.”
That famous advertisement for Antarctic exploration is attributed to Ernest Shackleton, but it could just as well appeal to those embarking on a benchmarking program. I've spent my share of hours in cold, dim server rooms trying to warm my hands over the flicker of drive-array LEDs. Benchmarking is very hard but, if successful, worthwhile. That's why we do it, and it's why I'm glad to see a significant effort righting itself.
Java training and consultancy company The Middleware Company first set sail for parts unknown in October, when it published the results of a benchmark comparing two J2EE (Java 2 Platform, Enterprise Edition) application servers with Microsoft's .Net 1.0 and .Net 1.1 release candidate runtimes. When the results came out, with the .Net platform on top, an uproar ensued in the Java community. The fact that Microsoft had funded the test didn't help.
Now The Middleware Company has set out on a second effort designed to be more rigorous, accurate and fair than the first, which did many things right but also had notable problems.
Two problems stood out. First, the .Net version did more data caching than the J2EE version. Second, the J2EE version was based on an old code base that still used bean-managed persistence rather than the more modern container-managed persistence.
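For readers who don't live in EJB land, here is a minimal sketch, not code from the benchmark itself, of the difference between those two persistence styles in EJB 2.x. With bean-managed persistence the developer hand-writes the JDBC calls; with container-managed persistence the bean declares abstract accessors and the application server generates, and can optimize, the persistence logic. The class names and the JNDI data source name are hypothetical.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.ejb.EJBException;
import javax.ejb.EntityBean;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Bean-managed persistence (BMP): the bean writes its own JDBC to load state.
// (The remaining EntityBean callbacks, such as ejbStore(), are omitted here.)
abstract class ItemBmpBean implements EntityBean {
    protected String itemId;
    protected String name;

    public void ejbLoad() {
        try {
            // "jdbc/PetStoreDS" is a hypothetical JNDI name, not one from the benchmark spec.
            DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/PetStoreDS");
            Connection con = ds.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement("SELECT name FROM item WHERE item_id = ?");
                ps.setString(1, itemId);
                ResultSet rs = ps.executeQuery();
                if (rs.next()) {
                    name = rs.getString(1);
                }
                rs.close();
                ps.close();
            } finally {
                con.close();
            }
        } catch (Exception e) {
            throw new EJBException(e);
        }
    }
}

// Container-managed persistence (CMP 2.0): the bean declares abstract accessors, and the
// container generates the persistence logic from the deployment descriptor, giving it far
// more room to optimize loads, stores and caching than hand-written BMP code allows.
abstract class ItemCmpBean implements EntityBean {
    public abstract String getItemId();
    public abstract void setItemId(String id);
    public abstract String getName();
    public abstract void setName(String name);
}

The practical point is that container-managed persistence leaves the data-access optimization to the application server vendor, so retesting with CMP code gives the J2EE side a fairer shot.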
Last month, Salil Deshpande, president of The Middleware Company, contacted me with the specifics of his company's retest plan, which fixes both of these technical problems and makes two major procedural improvements as well.
First, this round is a collaborative process that incorporates feedback from a group of outside experts who have been working together on this since February. Second, the benchmark is now based on a formal specification that defines exactly how the test application needs to behave.
“We created an expert group in February and invited these experts to review a functional spec,” said Deshpande. “We started with [Sun's sample application] Pet Store but nailed down enough things—database tables, relationships, what data can be cached, for how long and where, what pages can look like. We fixed all the moving parts.”
The new specification also requires something that I regard as the single most important component of a good benchmark—full disclosure. In fact, the benchmark Web site containing the specification, database schema, sample database data, static HTML, and several Java- and .Net-based implementations will be made public at the same time as the publication of this column. It's at www.middleware-company.com/casestudy.
To its credit, The Middleware Company published full details on its October test. Every benchmark, no matter how well or how poorly designed, models a very particular usage pattern and workload. Organizations with similar workloads will find the benchmark details and results highly valuable; organizations using different application architectures will know how to read the results in context. In both cases, full disclosure is key.
The new specification is a functional specification, not an implementation specification like that of the other major effort in this space, SPECjAppServer. Because SPECjAppServer's specification includes J2EE code, it can be run only on J2EE application servers. The Middleware Company's effort defines exactly how the application must function, as well as the back-end database design, but it doesn't specify the implementation language or other implementation details.
Adopting a vendor- and platform-independent benchmark is something the entire application server industry has long needed to grow up and do.
While I think the Transaction Processing Performance Council, with its similarly platform-agnostic TPC-W test, is the right long-term home for application server benchmarking efforts, The Middleware Company's specification provides a firm basis from which to move forward. It's certainly good enough to produce credible results already, and the company plans to use the specification to carry out a J2EE-versus-.Net retest in the next few months.
Iterative improvements based on public comment are how good benchmarking happens. Sail on—honor and recognition await!