Pervasive public networks and the explosion of network-facing applications and Web services have dragged enterprise development out of the back room and into the showroom.
Customers and supply chain partners are coming to rely on network applications to complete time-critical transactions; government and public safety agencies are incorporating Web services into their missions. In this environment, lack of adequate software testing could become "the new negligence." The charter of the testing team must grow apace.
Application testing is traveling down the same path that has lately been followed by IT security. A combination of heightened awareness and regulatory mandates has transformed security from a "why fix the roof when it's not raining?" cost to a recognized requirement of due diligence.
As the costs of application failure grow, application testing efforts may likewise gain improved access to human and technical resources, and development team leaders may encounter fewer arguments when they seek to acquire state-of-the-art tools.
Cem Kaner, professor of software engineering at Florida Institute of Technology and director of Florida Tech's Center for Software Testing Education & Research, has challenged enterprise development managers to consider the consequences of an application failure that results in someone's death.
It's not difficult, Kaner asserts, to imagine a situation in which a single line of code turns out to be the proximate cause and in which that line turns out never to have been tested, despite the availability of tools to perform such tests. This could prove a classic setup for a claim of negligence against the developer and user of the application involved.
Kaner's Web page, "Software Negligence and Testing Coverage" (www.kaner.com/coverage.htm), lists more than 100 types of coverage tests that a development team might need to perform, or perhaps wind up explaining why it did not. Some conceivable tests that might be ordered are obvious (and costly), but nonetheless inadequate, for example, "test every line of code." Others are less obvious but possibly crucial, such as "vary the location of every file used by the program" or "verification against every regulation [Internal Revenue Service, Securities and Exchange Commission, Food and Drug Administration, and so on] that applies to the data or procedures of the program."
Auditing user manuals and help files, confirming copyright permissions for images and sounds, and reviewing multicultural acceptability of all text and graphics in an application are other items on Kaner's list that may not immediately occur to a software testing team. However, any of them could affect end-user acceptance of an application or the marketplace response to its deployment.
And these are merely the kinds of tests that ensure the application was constructed as intended and breaks no rules in the process. An application could survive rigorous review on these criteria, yet still be unsatisfactory.
An application could, for example, correctly implement the wrong algorithm, calculating interest using beginning-of-period formulas when end-of-period formulas are needed, or computing year-to-date values based on a calendar year instead of an intended fiscal year. It could differ from the behavior of an earlier version, not in a way that makes the new version wrong but in a way that breaks an existing application-integration or data-sharing scheme. It could fail under abnormal loads or fail to deal gracefully with intermittent network connections. These are some of the domain-specific or dynamic aspects of application testing that today's development teams must address.
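The interest example illustrates how a program can be internally correct yet still wrong for its domain. The following minimal sketch (an illustration, not drawn from any product discussed here, using standard future-value-of-an-annuity formulas) shows two implementations that are each arithmetically sound; only the business requirement determines which one is correct, and no line-coverage metric can tell them apart.

```python
def fv_end_of_period(payment, rate, periods):
    """Future value of a series of payments made at the END of each
    period (an ordinary annuity)."""
    return payment * ((1 + rate) ** periods - 1) / rate

def fv_beginning_of_period(payment, rate, periods):
    """Future value of payments made at the START of each period (an
    annuity-due): every payment earns one extra period of interest."""
    return fv_end_of_period(payment, rate, periods) * (1 + rate)

# Twelve monthly deposits of 100 at 5% interest per period.
# Both functions pass any purely arithmetic test; they simply answer
# different questions about when the money is deposited.
end = fv_end_of_period(100, 0.05, 12)        # ordinary annuity
begin = fv_beginning_of_period(100, 0.05, 12)  # annuity-due
print(round(end, 2), round(begin, 2))
```

A test suite that exercised every line of both functions would still miss the defect if the specification called for one convention and the code implemented the other; only a domain-aware test, one that checks results against a known-correct schedule, catches it.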
Finally, application designers who deploy on public networks must anticipate the nonrandom, carefully targeted and frighteningly well-informed disruptions of a deliberate attack. We explored issues of design and development for application-level security in the Dec. 13 Developer Solutions, but security testing involves additional challenges.