Prove to Me What's Working

 
 
By Peter Coffee  |  Posted 2003-12-19
Technologies must at least enable, and preferably provide, independent tests and confirmations.

Call me a mistrustful pessimist, but the question that best defines my relationship with technology is, "How do I know it's working?" When I divide products and services into the sheep and the goats, to borrow a biblical metaphor, the biggest difference between one and the other is whether they tell me promptly and correctly whether my action has had the desired effect. I dislike technologies that won't tell me; I loathe and despise those that lie. And I get testy when I encounter a technology--or, more typically, an ill-considered application of a technology--that expects me to take things on faith, or to work by trial and error. I don't like it as a user, and I abhor it as a developer.
I know exactly why I feel so strongly about this issue. It comes from formative decades of building and using analog circuits, with inherently approximate control components--variable capacitors, for example--rather than their digital replacements, such as optical encoders and direct digital frequency synthesizers. The result is a craving for independent measurements.
When I sit down at a ham radio transmitter, whatever its internal design may be, I don't want to trust its built-in displays: I want a separate frequency meter, and preferably an oscilloscope, to give me an independent view of what I'm radiating. I feel the same way about the bits that I send into the world, whether we're talking about code or data, no matter how pure they may be compared with analog signals. And there are things that I can do, and products and services that I can seek out, to satisfy that craving: I urge developers to do likewise, and I challenge platform providers to actively enable those efforts.

The foundation of assurance is the distinction between specification and implementation. When a technology has only one provider, it's a particular challenge to tell the two apart: the one and only implementation is, for all practical purposes, the specification, and people will build to what that one implementation actually does--or they will look for a less monolithic alternative. Those who stick with the monoculture, though, will build to its actual behavior, regardless of what the formal specification--if it even exists as a public document--does or does not guarantee.

A useful counterexample is thread scheduling in Java. At various points in the evolution of that platform, there have been implementations that time-sliced equal-priority threads by default, and others that needed explicit instructions to produce the same behavior. So far as I know, neither approach is wrong in terms of conforming to the specification, but a developer could be unpleasantly surprised if all testing took place on a single platform whose behavior was assumed to be the norm.
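Consider a minimal sketch--class name, loop counts and output format are arbitrary choices of mine, not anything the platform prescribes--of the kind of program where that difference shows up. Two equal-priority, CPU-bound threads are started with no sleep, yield or priority calls, so whether their output interleaves, or one worker runs far ahead of the other, is left entirely to the VM and the operating system scheduler:

```java
// Two equal-priority threads doing CPU-bound work. The Java specification
// does not promise that they will be time-sliced; some implementations
// interleave them by default, others may let one run far ahead unless the
// code yields explicitly.
public class SchedulingDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            String name = Thread.currentThread().getName();
            for (int pass = 0; pass < 5; pass++) {
                // Busy loop standing in for real work; no sleep() or yield(),
                // so any interleaving we observe comes from the scheduler alone.
                long spin = 0;
                for (long i = 0; i < 50_000_000L; i++) spin += i;
                // Printing spin keeps the loop from being optimized away.
                System.out.println(name + " finished pass " + pass + " (" + spin + ")");
            }
        };

        Thread a = new Thread(worker, "worker-A");
        Thread b = new Thread(worker, "worker-B");
        // Both threads keep the default NORM_PRIORITY; nothing here requests
        // a particular interleaving, so the output order is platform-dependent.
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```

Run that on one virtual machine and the two workers may alternate neatly; run it on another and worker-A may finish every pass before worker-B prints a line. Both outcomes conform.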
Only by reading, and understanding, a separate specification will a developer know what needs to be stated rather than merely implied. Only by having a separate specification, for either a platform or an application, can a developer or a client organization have a basis for an implementation's test-and-acceptance protocol and plan.

There's also a positive side effect: when developers start thinking at the specification level, they're more likely to consider several different options for application behavior. They're more likely to write applications whose behavior is platform-independent, to the extent that their technologies of choice make that possible. If more of a development team's effort gets spent at the level of the specification, the temptation to write platform-specific code as the path of least resistance may decline--and that would be a very good thing indeed. Successive implementations will also have to justify their departures from a specification, especially any incompatible departures, rather than letting incompatibilities arise as the cumulative residue of people's bright ideas.

When specifications exist independently of implementations, more people can get into the act. I can use tools like Parasoft's Jtest (read my review of Jtest 5.0), rather than trusting any particular Java compiler to pass judgment on my code. I can expect compiler flags that enable or disable extensions--because the specification is the indisputable starting point. I can write purchase orders that require purchased tools, or hired services, to conform to published specifications or negotiate exceptions.

My constructive skepticism, if I may call it that, also takes other forms. I favor native-code compilers that include the option of emitting assembly-language source files, with the higher-level language statements interspersed as comments. When I edit and save an image using any of my arsenal of graphics editing applications, including the excellent new Jasc Paint Shop Power Suite, I open it with another to make sure that the result is what I thought it was. When there are two ways of doing a calculation in a spreadsheet--a choice, for example, between summing rows first or columns first--I often do both and create a test cell that confirms the results are identical.
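That spreadsheet habit translates directly into code. Here is a minimal sketch--the table values are arbitrary, invented purely for illustration--that totals a small grid by rows and again by columns, then treats the comparison as the test cell:

```java
// Cross-check a grand total by computing it two independent ways:
// summing across rows and summing down columns. If the two results
// disagree, something in the calculation (or the data) is broken.
public class CrossCheck {
    public static void main(String[] args) {
        double[][] table = {
            {12.5, 7.25, 3.0},
            { 4.0, 9.75, 6.5},
            { 1.5, 2.25, 8.0}
        };

        double byRows = 0.0;
        for (double[] row : table) {
            for (double cell : row) byRows += cell;
        }

        double byColumns = 0.0;
        for (int col = 0; col < table[0].length; col++) {
            for (double[] row : table) byColumns += row[col];
        }

        // The "test cell": two independently computed totals must agree
        // (within floating-point tolerance, since the addition order differs).
        if (Math.abs(byRows - byColumns) > 1e-9) {
            throw new IllegalStateException(
                "Cross-check failed: " + byRows + " vs. " + byColumns);
        }
        System.out.println("Totals agree: " + byRows);
    }
}
```

The few extra lines of arithmetic are a trivial price for knowing that the answer is not quietly wrong.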
The growing performance of the systems and tools that we use should pay a tithe, so to speak, to our need to be more certain of the things that we do more quickly. That one principle, applied to technologies in general, will keep us in control and help to assure that we like the results. Tell me what overhead you'd willingly incur to be sure that things are working.
 
 
 
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
 
 
 
 
 
 
 
