Application performance is perhaps the most sought but least effectively pursued quarry of the enterprise developer. It's easy to be distracted from the real chase by IT providers, who are eager to play a part in the hunt, whether by turning up the clock rate on the processor, speeding up the storage, or debottlenecking the networks that tie everything together.
In addition to parameters like clock rate, where “per second” is part of the specification, it's widely understood that other system attributes can hugely improve performance. Ever since the debut of Windows 95, it's been popular wisdom that you can't go wrong by adding memory. As memory costs fall, moreover, new algorithms use RAM extravagantly to slash task times: for example, by constructing the in-memory “data clouds” used by analytic tools like those from QlikTech International, or by building huge lookup tables to speed up crypto attacks.
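To make that memory-for-time trade concrete, here's a minimal Python sketch; the scale and the choice of MD5 are my own illustrative assumptions, not details of any particular product. It pays the cost of hashing every candidate once, up front, so that each later reverse lookup is a single dictionary probe rather than a fresh brute-force search.

```python
import hashlib

# Precompute: spend RAM once to map digests back to their inputs.
# A real cracking table would hold billions of entries; 100,000
# short numeric "PINs" keep this sketch runnable in a moment.
table = {
    hashlib.md5(str(pin).encode()).hexdigest(): str(pin)
    for pin in range(100_000)
}

def reverse_lookup(digest: str):
    """One dictionary probe instead of re-hashing every candidate."""
    return table.get(digest)

# Usage: recovering the input behind a hash now costs one lookup.
target = hashlib.md5(b"4077").hexdigest()
print(reverse_lookup(target))  # -> 4077
```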
But all of these strategies fall short in enterprise environments, precisely because they mainly improve the aspects of performance that can be measured by timing a scripted task. Making such benchmarks reproducible requires removing human users from the loop: after all, they don't yield consistent results. And yet it's users' unpredictability and imperfection that have the greatest impact on many task times.
Real measurements of real users are expensive to make and hard to find, but a 1983 study measured users' performance in one common task and found that correcting errors was the single biggest piece of their unproductive time. Rather than benchmarking response time to correctly formulated queries, perhaps it would make more sense to determine what fraction of queries in everyday practice is considered successful by typical users: if a user averages two queries for every useful result, then refining the query interface, rather than speeding up the system, may offer greater leverage at much less cost.
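The arithmetic behind that claim deserves to be spelled out. In the sketch below, every number is assumed for illustration (30 seconds for a user to compose a query, 2 seconds of system response); it simply compares doubling the system's speed against an interface refinement that cuts the average from two queries per useful result to one.

```python
FORMULATE = 30.0   # seconds a user spends composing a query (assumed)
RESPONSE = 2.0     # seconds the system takes to answer (assumed)

def time_per_result(queries_per_result: float, response: float) -> float:
    """Total user time invested per useful result."""
    return queries_per_result * (FORMULATE + response)

baseline = time_per_result(2, RESPONSE)           # 64 s per result
faster_system = time_per_result(2, RESPONSE / 2)  # 62 s: doubling speed saves 2 s
better_ui = time_per_result(1, RESPONSE)          # 32 s: halving queries saves 32 s

print(baseline, faster_system, better_ui)
```

Under these assumptions, the interface refinement buys sixteen times the savings of the hardware upgrade.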
I'm reminded of research on basic application human factors such as menu design. It's been shown that a user will typically choose between two offered options in half the time required to choose from a list of eight. Follow the numbers: if one option gets chosen more than half the time, it's a net time savings to use a multilevel menu, where the first menu merely lists the frequent choice and “other.”
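Here's how that break-even point falls out of the numbers. The model below is my own assumption: it applies Hick's law, under which decision time grows as log2(n + 1), a formulation that happens to make an eight-item menu take exactly twice as long as a two-item one, and it compares the flat menu against the two-level design as the frequent option's share p varies.

```python
import math

def hick(n: int) -> float:
    """Relative decision time for an n-item menu (Hick's law, unit slope)."""
    return math.log2(n + 1)

FLAT = hick(8)  # one flat eight-item menu

def two_level(p: float) -> float:
    """Expected time when the top menu offers the frequent choice plus
    'other', and 'other' opens a submenu of the remaining seven items."""
    return hick(2) + (1 - p) * hick(7)

for p in (0.3, 0.5, 0.7):
    print(f"p={p}: flat={FLAT:.2f}, two-level={two_level(p):.2f}")
# The two-level menu pulls ahead once p passes roughly one half,
# matching the rule of thumb above.
```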
There's no compiler optimization that will do this for you: as Ed Post famously said, this takes actual talent.
As I've said before, a real-life benchmark doesn't begin when the user asks the right question, or end when the system returns the right answer. The benchmark that matters begins with the user wanting to know something, and ends when the user is satisfied and ready to move on.