Microsoft, Oracle in Benchmark War

If it hasn't happened already, Microsoft will post new benchmark numbers that show Windows 2000 blowing away Oracle's 9iAS application server in performance tests.

The problem is that no one likes this "performance" test, and Microsoft is excluded from running in the one now sanctioned by the Java/Unix community. The result of this cat fight is that there will be even less regard for benchmarks, and Oracle and Microsoft will gain respect simply by participating in the shenanigans and proving they're pursuing superiority.

Oracle must take the blame for kicking up this benchmark storm. The company believes its 9iAS app server has performance superior to its competitors' but had no way to prove it. Oracle engineers grabbed Sun's Pet Store Java reference implementation, which was never designed as a benchmark, and ran some numbers.

Microsoft, again trying hard to prove to the Unix community that Windows is an app server, took the benchmark and ran its own numbers. Of course, these new results show Windows stomping Oracle's 9iAS.

Then at JavaOne, Oracle tweaked and tuned the application, reran its numbers, and "proved" just what sputtering application servers Windows and Oracle's competitors are. It was all in good fun to JavaOne attendees.

Microsoft, however, was not amused. The company was extremely agitated that Oracle—the king of performance hype—was mocking the performance of Windows. Microsoft responded by hiring VeriTest, a third-party testing lab, to settle the dispute once and for all.

The results show Windows is indeed faster than Oracle in this particular test—the first time the same load-testing tool and hardware platform were used for both. However, the test is not completely valid, for two reasons in particular: The code used by Microsoft is obviously not Java, so no direct comparison can be made, and Pet Store was never designed to be a benchmark.

Microsoft is excluded from participating in ECperf, a more sanctioned app server benchmark, so we'll continue to see Pet Store-like performance tests. They will have little meaning, but I have to applaud Microsoft and Oracle for pushing performance limits.

How do these benchmarking battles help you? Write to me at