If it hasn't happened already, Microsoft will post new benchmark numbers that show Windows 2000 blowing away Oracle's 9iAS application server in performance tests.
The problem is that no one likes this "performance" test, and Microsoft is excluded from running the one now sanctioned by the Java/Unix community. The result of this cat fight is that there will be even less regard for benchmarks, and Oracle and Microsoft will gain respect simply by participating in the shenanigans and proving they're pursuing superiority.
Oracle must take the blame for kicking up this benchmark storm. The company believes its 9iAS app server outperforms its competitors' offerings but had no way to prove it. Oracle engineers grabbed Sun's Pet Store Java reference implementation, which was never designed as a benchmark, and ran some numbers.
Microsoft, again trying hard to prove to the Unix community that Windows is an app server, took the benchmark and ran its own numbers. Of course, these new results show Windows stomping Oracle's 9iAS.
Then, at JavaOne, Oracle tweaked and tuned the application, reran its numbers and "proved" just what sputtering application servers Windows and Oracle's competitors are. It was all in good fun to JavaOne attendees.
Microsoft, however, was not amused. The company was extremely agitated that Oracle, the king of performance hype, was mocking the performance of Windows. Microsoft responded by hiring VeriTest, a third-party testing lab, to settle the dispute once and for all.
The results show Windows is indeed faster than Oracle in this particular test, the first time the same load-testing tool and hardware platform were used. There are several reasons why this test is not completely valid, but two in particular come to mind: The code used by Microsoft is obviously not Java, so no direct comparison can be made, and Pet Store was never designed to be a benchmark.
Microsoft is excluded from participating in ecPerf (ecperf.theserverside.com/ecperf), a more sanctioned app server benchmark, so we'll continue to see Pet Store-like performance tests. They will have little meaning, but I have to applaud Microsoft and Oracle for pushing performance limits.
How do these benchmarking battles help you? Write to me at email@example.com.
As the director of eWEEK Labs, John manages a staff that tests and analyzes a wide range of corporate technology products. He has been instrumental in expanding eWEEK Labs' analyses into actual user environments, and has continually engineered the Labs for accurate portrayal of true enterprise infrastructures. John also writes eWEEK's 'Wide Angle' column, which challenges readers interested in enterprise products and strategies to reconsider old assumptions and think about existing IT problems in new ways. Prior to his tenure at eWEEK, which started in 1994, Taschek headed up the performance testing lab at PC/Computing magazine (now called Smart Business). Taschek got his start in IT in Washington D.C., holding various technical positions at the National Alliance of Business and the Department of Housing and Urban Development. There, he and his colleagues assisted the government office with integrating the Windows desktop operating system with HUD's legacy mainframe and mid-range servers.