Why Measure What Can't Matter?

 
 
By Peter Coffee  |  Posted 2004-11-01
Developers have a duty to focus stakeholders' attention on relevant metrics and technologies.

I was talking with someone about this year's presidential election when I said that it was silly for polls to report anything other than predictions of the Electoral College votes. I argued that a poll purporting to measure, for example, "a 51-to-49 lead" for one candidate over another was meaningless, since there's no such thing as a "popular vote" for the Presidency. "What do you mean?" came the reply. "You count up all the votes in all the states for one candidate, you count up all the votes for the other, and that's the popular vote."
"OK," I said, "Imagine its the end of a football game, and the announcers say, Well, our home team carried or passed the ball for a total of 1,100 yards, and the other team only moved the ball 980 yards, so our team is the real winner—but a technicality in the rules only gives points for moving the ball across the lines at the ends of the field, so the referees have declared the other team the winner.
"Would that make sense?," I asked—"Because its exactly the kind of language you hear on election night if the popular vote winner is different from the Electoral winner. Theres no such thing as a popular vote winner: The rules are the rules, and you either win by those rules or you dont." "Yeah, thats not a bad point," was the answer. My point here, though, is not political but technical: Its necessary for people to focus on measures of the actual achievement of a goal, rather than being distracted by other related measures that may be easier to compile but really have nothing to do with success or failure. What got me thinking along these lines was another question closely tied to the national elections: the credibility of electronic voting software. Just before the election, there came a noisy announcement that five makers of computer-based voting systems had agreed to place reference versions of their code in the National Software Reference Library. This does not entail, so far as I can determine, a public disclosure of source code; what the NSRL makes available to the public are hash values, so-called "digital fingerprints," that confirm—to a high degree of confidence—whether two files are identical, or not.
Here's my problem with this announcement. I agree that there's some value in being able to say, "Yes, the software on this voting machine is the software that the manufacturer has placed on file." At least it means that no one has managed to replace the certified code with rogue code that deliberately miscounts the votes. I don't accept, though, the implied assumption that the "official" code is as good as it needs to be.

I'm not accusing any e-voting vendor of deliberate misfeasance, or even of apolitical incompetence; I'm just saying that flaws in the system don't have to be introduced in the field, but can just as well be part of the vendor's code in the first place. For that matter, flaws in the system can be introduced in the field without ever disturbing the code on the machines, for example by hacking an insecure data collection or data transfer process. Crypto guru Avi Rubin has written about both of these types of vulnerability.

Elections, at least, have multiple layers of additional protection, including trained observers throughout the process, who may not know what they're looking for but know abnormal and suspicious behavior when they see it. Most of the code that conducts e-business transactions, or that plays other critical roles in enterprise operations, doesn't have that costly benefit.

It's all the more important, therefore, for developers to be actively interested in the emergence of verifiable languages such as Microsoft's Spec# ("Spec sharp") or the Ada-based SPARK, whose developer community enjoys a home page courtesy of Praxis High Integrity Systems. In a world where software licenses seem to be inspired by each other's disclaimers of suitability for safety-critical applications, it would be nice to see competition shift in the direction of readiness for precisely that kind of critical role.
And it's crucial for developers to think about, and to communicate to others, the realities of which measures have any real meaning for the integrity of our systems, in terms of both the reliability of their function and the legitimacy of their results.

Tell me what mismeasurements of software quality you've encountered at peter_coffee@ziffdavis.com. To read more Peter Coffee, subscribe to eWEEK magazine. Check out eWEEK.com's Application Development Center for the latest news, reviews and analysis in programming environments and developer tools.

Be sure to add our eWEEK.com Application development news feed to your RSS newsreader or My Yahoo page

 
 
 
 
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
 
 
 
 
 
 
 
