Handicapping the Hardware

By Peter Coffee  |  Posted 2001-11-19

It sounds like a plot synopsis for an unconvincing TV movie, rather than a real news story, but Hewlett-Packard claims that a former employee tampered with hard disks and wiring in a Superdome server prior to benchmark tests, lowering scores and therefore possibly harming sales of HP's high-end Unix hardware.

For all the talk of computers being binary beasts, either on or off—working, or not working—they can be almost like racehorses in their variable performance under only slightly different conditions.

My favorite book on benchmark design is Richard Gabriel's "Performance and Evaluation of LISP Systems," which has helped me bring a wide range of hardware to its knees. Even if you never use LISP for any other purpose, it's a dandy benchmarking tool, with its ease of writing programs that place tremendous stress on processor and memory resources.
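
Gabriel's examples are written in Lisp, but the spirit carries over to any language. Below is a rough Python transcription of the Takeuchi "tak" function, one of the heavily recursive benchmarks of that era; almost all of its cost is function-call and stack traffic, which is exactly the kind of narrow, deliberate stress the book teaches you to reason about. The function and the traditional argument set follow the classic benchmark; the timing harness is my own sketch.

    import time

    def tak(x, y, z):
        # Takeuchi's function: almost no arithmetic, enormous numbers of
        # recursive calls, so it measures call/return overhead above all else.
        if y >= x:
            return z
        return tak(tak(x - 1, y, z),
                   tak(y - 1, z, x),
                   tak(z - 1, x, y))

    start = time.perf_counter()
    result = tak(18, 12, 6)   # the traditional argument set
    print(f"tak(18, 12, 6) = {result} in {time.perf_counter() - start:.3f} s")

A number like this is meaningful only if you know what it measures: tak says a great deal about call overhead and nothing at all about memory bandwidth or I/O.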

The most important thing about Gabriel's book is not its extensive source code, though, but rather its discussion of which aspects of machine design affect test results. These discussions force the question of why we're testing, not just what and how.

For example: Does a cache hide a slow subsystem interface, and should benchmarks deliberately frustrate cache algorithms? Or does cache design reflect realistic workload and provide a cost-effective balance of price and performance? Either point of view has its merits, and performance tests are not neutral on this subject.
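
To see how the same work can land on either side of a cache, consider this sketch (mine, not the column's): summing the same array in sequential versus randomized order does identical arithmetic, but the second order defeats prefetching and cache reuse. A compiled language would show the gap more starkly than Python, which adds pointer-chasing of its own, but the principle is the same.

    import random
    import time

    N = 2_000_000                 # large enough to spill out of typical CPU caches
    data = list(range(N))

    orders = {
        "sequential": list(range(N)),            # cache-friendly traversal
        "random": random.sample(range(N), N),    # cache-hostile traversal
    }

    for name, order in orders.items():
        start = time.perf_counter()
        total = 0
        for i in order:
            total += data[i]
        print(f"{name:10s} sum={total} time={time.perf_counter() - start:.2f} s")

A benchmark built from the first loop rewards a big cache; one built from the second punishes it. Neither is "the" right answer until you decide which pattern looks like your real workload.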

More generally, where does IT product design or tuning cross the line from legitimate optimization to benchmark cheating? Do standardized benchmarks perversely encourage optimizations that actually harm everyday performance, away from the specific task parameters that those benchmarks employ?

Benchmarking needs a statistical approach, not a reliance on single numbers; it should locate thresholds of task difficulty that may mark abrupt performance changes; it should begin from the questions "What do we need to do?" and "How much is it worth?"—rather than the tempting but often irrelevant question "How fast will it go?"
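
A minimal sketch of that approach, with a hypothetical sort_workload standing in for whatever task actually matters to you: run the task many times, report the distribution rather than a single number, and sweep the problem size to look for thresholds where behavior changes abruptly.

    import statistics
    import time

    def measure(workload, size, runs=30):
        # Collect a distribution of timings instead of trusting one run.
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            workload(size)
            samples.append(time.perf_counter() - start)
        samples.sort()
        return {
            "median": statistics.median(samples),
            "p95": samples[int(0.95 * (len(samples) - 1))],
            "stdev": statistics.stdev(samples),
        }

    def sort_workload(n):
        # Hypothetical stand-in for the task you actually need to do.
        sorted(range(n, 0, -1))

    # Sweep problem size to find thresholds where performance changes abruptly,
    # for example when the working set no longer fits in cache or RAM.
    for n in (10_000, 100_000, 1_000_000):
        print(n, measure(sort_workload, n))

The sweep is the point: a single score at one size hides exactly the cliffs that matter when the workload grows.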

Tell me what you'd like to measure at peter_coffee@ziffdavis.com.

 
 
 
 
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
 
 
 
 
 
 
 
