A big part of tuning the performance of a Java application server is tuning the Java VM in which the application server runs.
On some platforms, organizations will have a choice of Java virtual machines. Sun Microsystems Inc. and IBM both produce their own VMs for several operating systems, and other platform vendors also make a Java VM tuned for their platforms. In these cases, organizations can choose the VM that handles their workloads the best.
In our setup (using BEA Systems Inc.'s WebLogic 6.1 running on Windows 2000 Advanced Server), we ran tests using Java 1.3 VMs from Sun and IBM.
We found the IBM Java VM more stable: the Sun VM crashed three times during three weeks of testing, while the IBM VM never crashed while we were using it. However, the IBM VM also consumed about 20 percent more CPU than the Sun VM on the same workload (with the Sun VM in its most optimized state).
We used the Sun VM for all our measured runs because we were concerned that we might run out of application server CPU capacity before database CPU capacity (and thus be testing the wrong part of our infrastructure).
After trying many variants of Sun VM settings, we found the best-performing configuration was simply to set both the minimum and maximum heap size to 512MB of RAM and enable the HotSpot just-in-time compiler.
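As a rough sketch, a startup along these lines would express that configuration (flag names follow Sun JDK 1.3 conventions; the server class name is a placeholder, not taken from our test scripts):

```shell
# Pin the heap at a fixed 512MB so it never resizes mid-run:
#   -Xms512m  initial (minimum) heap size
#   -Xmx512m  maximum heap size
# -hotspot selects the HotSpot just-in-time compiling VM
# (the default in Sun JDK 1.3).
java -hotspot -Xms512m -Xmx512m weblogic.Server
```

Setting minimum and maximum heap to the same value avoids the cost of the VM growing and shrinking the heap under load.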
We also made a configuration change, passed on to us by BEA staff, that made a major difference in performance with the Sun VM (but not with the IBM VM): limiting each Sun VM to a maximum of two CPUs.
Theoretically, the Sun VM was supposed to scale to work on the six-CPU systems we used as application servers, but in practice, this was not the case.
Binding each of the VMs to a different pair of CPUs (done by right-clicking a process name on the Processes tab of Task Manager and setting its affinity) provided an immediate 30 percent reduction in overall CPU usage.
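We set the affinities by hand in Task Manager, but the pairings themselves are just bitmasks. As an illustration (the three-VM layout here is a hypothetical mapping onto our six CPUs, not a record of our exact assignments), binding VM n to CPUs 2n and 2n+1 corresponds to the mask 0x3 shifted left by 2n bits:

```shell
# Each affinity mask has one bit per CPU; two adjacent set bits
# restrict a process to that pair of CPUs.
for n in 0 1 2; do
  printf 'VM %d -> CPUs %d,%d -> mask 0x%02X\n' \
    "$n" $((2 * n)) $((2 * n + 1)) $((0x3 << (2 * n)))
done
```

This prints masks 0x03, 0x0C and 0x30 for the three VMs, which is the same information Task Manager captures through its CPU checkboxes.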