Real Benchmarks Consider the Wetware
Task speedup depends on leading users rather than pushing tech.

Application performance is perhaps the most sought but least effectively pursued quarry of the enterprise developer. It's easy to be distracted from the real chase by IT providers, who are eager to play a part in the hunt--whether by turning up the clock rate on the processor or by speeding up the storage, not to mention debottlenecking the networks that tie it all together. In addition to parameters like clock rate, where "per second" is part of the specification, it's widely understood that other system attributes can hugely improve performance. Ever since the debut of Windows 95, it's been popular wisdom that you can't go wrong by adding memory. As memory costs fall, moreover, new algorithms use RAM extravagantly to slash task times: for example, by constructing the in-memory "data clouds" used by analytic tools like those from QlikTech International, or by building huge lookup tables to speed up crypto attacks.
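That memory-for-time trade is easy to sketch. Here is a minimal, hypothetical Python illustration of a precomputed lookup table; the candidate list and function name are invented for this example, and a real attack table would hold billions of entries rather than four:

import hashlib

# Hypothetical sketch: a precomputed hash -> plaintext lookup table for a
# tiny candidate list. A real table spends gigabytes of RAM so that each
# query becomes a dictionary lookup instead of a fresh brute-force search.
candidates = ["password", "letmein", "qwerty", "123456"]
table = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

def crack(digest: str) -> str | None:
    # O(1) lookup replaces re-hashing every candidate on each query.
    return table.get(digest)

if __name__ == "__main__":
    target = hashlib.md5(b"letmein").hexdigest()
    print(crack(target))  # prints: letmein

The design choice is the same one the analytic tools make: pay once, up front, in memory, so that every subsequent query is nearly free.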
But all of these strategies fall short in enterprise environments, to the extent that they mainly improve the aspects of performance that can be measured by timing a scripted task. Making such benchmarks reproducible requires removing human users from the loop: after all, they don't yield consistent results. And yet it's users' unpredictability and imperfection that have the greatest impact on many task times.
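For concreteness, here is a hedged sketch of what "timing a scripted task" measures; the workload and function name below are hypothetical stand-ins, not drawn from any real benchmark suite:

import timeit

def scripted_task() -> int:
    # Deterministic stand-in for a benchmark workload: nothing here waits
    # for a human to read a screen, pick a menu item, or mistype an entry.
    return sum(i * i for i in range(100_000))

if __name__ == "__main__":
    # Repeated runs cluster tightly precisely because the unpredictable
    # user has been scripted out of the loop.
    runs = timeit.repeat(scripted_task, number=10, repeat=5)
    print([f"{t:.4f}s" for t in runs])

The tight clustering of those five timings is the benchmark's selling point, and also its blind spot: everything that varies in real use has been excluded by construction.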