Only reasonable testing is necessary

By Peter Coffee  |  Posted 2005-01-17

If there is a silver lining to this cloud, it is in Kaner's counterchallenge to the negligence-lawsuit scenario given above. It's a formally provable statement that exhaustive testing is not merely impractical but also a theoretical impossibility. And negligence, Kaner notes, is not the failure to do the impossible or even the failure to do everything that is possible, but rather the failure to do what's reasonable.
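To see the scale of the problem, consider a back-of-the-envelope calculation (the figures here are illustrative assumptions, not Kaner's): a function taking just two 32-bit integers has 2^64 possible input pairs, and even at an optimistic billion tests per second, exhausting them would take centuries.

```python
# Illustrative arithmetic only: why exhaustive testing is infeasible
# even for a trivially small interface. Figures are assumed for scale.
inputs = 2 ** 64              # every (a, b) pair of two 32-bit integers
tests_per_second = 10 ** 9    # an optimistic billion tests per second

seconds = inputs / tests_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years to exhaust one tiny interface")  # ~585 years
```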

Developers should therefore understand that cost-benefit calculations can make a good case against a negligence claim, but only if the costs of testing and the benefits of risk reduction can be shown to have been evaluated with at least some degree of rigor and good faith.
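One way to demonstrate that rigor is to rank candidate test targets by estimated risk exposure, failure probability times failure cost, and spend the testing budget from the top down. The sketch below is a minimal illustration; the feature names and figures are hypothetical, and a real assessment would draw on field data.

```python
# Minimal risk-based prioritization sketch. All names and numbers
# are hypothetical placeholders, not data from any real project.
features = [
    # (name, estimated failure probability, estimated failure cost in $)
    ("payment processing", 0.02, 500_000),
    ("user login",         0.05, 100_000),
    ("report formatting",  0.10,   2_000),
]

# Risk exposure = probability * cost; test the largest exposures first.
for name, prob, cost in sorted(features, key=lambda f: f[1] * f[2], reverse=True):
    print(f"{name}: expected loss ${prob * cost:,.0f}")
```

Documenting even a crude ranking like this is evidence that testing effort was allocated deliberately rather than by habit.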

A criterion of reasonableness is, moreover, a mixed blessing. It means that a development team cannot approach testing as a mechanical or a mathematical exercise, something with a formulaic criterion for how much is enough. A team must instead develop a process and a management approach to test the right things in a consistent and conscientious way.

Doing it automatically

In addition to the types of tests already mentioned, the vocabulary of testing includes long lists of familiar and tedious tasks with (sometimes literally) colorful names.

"White box" (or "glass box") testing includes path testing, a form of coverage testing that attempts to traverse every possible path through an application. This becomes increasingly difficult as applications evolve into constellations of services developed and maintained by independent teams. "Black box" testing ignores internals and exercises only published interfaces to an application component, but this depends on a degree of completeness in software specification thats rarely encountered in any but the most critical domains.

"Basis path" testing uses knowledge of internals to generate test cases in a formal way; "monkey" testing (or "ad hoc" testing, for the more polite) merely exercises the functions of an application in a random manner.

All these methods represent different combinations of efficiency and reproducibility. Formally generated and reproducible tests might seem to be the gold standard, but they can be fool's gold if they're so time-consuming to generate and run that they aren't used early and often during development.

By the time that attempts at exhaustive testing have anything informative to say, it may be too late for their results to be useful. A team is likely to be better served by earlier and less formal tests that are guided by expert experience as to where an application's problems are most likely to be found. This is a strong argument against the common practice of staffing a testing group with relatively inexperienced developers or with the less skilled members of a development team. The most effective tests are likely to come from developers with the most insight into what kinds of errors are most likely and least acceptable.

Regardless of testing and staffing doctrine, however, it does seem logical that the testing of computer applications should itself be streamlined by making it a programmable and thus repeatable task. "Test automation" is thus often taken to mean the development of scripts and other mechanisms for testing one piece of software with another.
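In its simplest form, that means expressing tests as code that any team member, or a build server, can rerun identically on every change. The sketch below uses Python's standard unittest module; the discount function is again a hypothetical stand-in for real application code.

```python
import unittest

# Hypothetical code under test.
def discount(total: float, member: bool) -> float:
    return total * 0.9 if member else total

class DiscountTests(unittest.TestCase):
    def test_member_rate(self):
        self.assertAlmostEqual(discount(100.0, member=True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(discount(100.0, member=False), 100.0)

if __name__ == "__main__":
    unittest.main()  # one command reruns the whole suite, identically
```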

Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
 
 
 
 
 
 
 
