Automated testing has problems of its own

It's important to realize that this is not the only approach, or even necessarily the best one. Automated tests, after all, are themselves pieces of software that can exhibit their own flaws of poor usability. A well-maintained archive of the tests performed, the results obtained and the resulting improvements made can be easier to understand than a cryptic body of test scripts that may have been made obsolete by relatively tiny changes in a body of code.

Moreover, when automated tests are run by someone who didn't design them, they may be executed in ways that fail to catch errors. For example, a test might not be applied at boundary conditions, or changes in an application might alter those boundary values. Detecting crucial boundary conditions, and generating tests that focus on these likely failure points, is one of the notable strengths of a state-of-the-art testing tool such as Agitator from Agitar Software Inc.

Alternatively, an automated test might generate huge numbers of false-positive alerts. For example, a simple screen-replay tool for a GUI may trip over cosmetic changes in interface layout. It's a virtual certainty that a test that generates false positives will somehow be bypassed or suppressed, perhaps leading future test teams to assume that something is being tested when it's effectively been shoved below the radar.

Automated testing may also produce accurate but misleading statistics. For example, a test might report that a certain fraction of the lines in a program were exercised a certain number of times, while failing to measure, and therefore being unable to report, whether those multiple tests actually verified behavior in more than one situation. It's up to a development team to ensure that tools are being used in a way that reflects this distinction.
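The boundary-condition point above can be made concrete with a short sketch. This is a hypothetical example, not anything from Agitator: the `premium_bracket` function and its age thresholds are invented for illustration. The test deliberately probes values at and immediately adjacent to each boundary, where off-by-one errors typically hide.

```python
def premium_bracket(age):
    """Return a premium bracket for a driver's age (hypothetical rules)."""
    if age < 25:
        return "high"
    if age <= 65:
        return "standard"
    return "senior"


def test_boundaries():
    # Check each boundary value and its neighbors explicitly; a test
    # that samples only "typical" ages (say, 30 and 70) would pass even
    # if a developer mistakenly wrote `age <= 25` or `age < 65`.
    cases = {24: "high", 25: "standard", 65: "standard", 66: "senior"}
    for age, expected in cases.items():
        actual = premium_bracket(age)
        assert actual == expected, f"age {age}: got {actual!r}, expected {expected!r}"


test_boundaries()
```

Note that if the thresholds in the application change, these hard-coded boundary cases silently become non-boundary cases, which is exactly the maintenance hazard the article describes.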
It's also essential in the Web-deployed environment to test an application's handling of errors that may be triggered only by its dependencies on remote resources. This is one of the strengths of Compuware Corp.'s DevPartner Fault Simulator, now in beta testing and planned for release early this year.

Test automation can also pave the way toward confirmation that an application does what it's supposed to do, while leaving a massive blind spot obscuring things that it should not do. For example, it's easy for an automated test to ensure that changes to a person's insurance record are correctly applied. It's possible that an automated test would also ensure that those changes are reflected, where appropriate, in the record of a person's spouse. What few such tests will check, however, is whether changes have been applied in places where they should not be. For example, a person's having a new child implies that the person's spouse also has a new child, but it would be an error to infer that the children already in that family are also new parents. Such bugs are easily overlooked, warned software testing consultant Brian Marick, of Testing Foundations, in his 1997 paper "Classic Testing Mistakes." And those mistakes, already classics many years ago, remain all too likely to occur today.

Failure to think about what should not happen is also the essential flaw that opens the door to so many security problems in applications. Developers are good at envisioning and testing for all the ways that an application should behave correctly, and for the many complex logic paths and other interactions that it should be able to follow. They tend to be less adept at envisioning things that should not happen, or that should be prevented if someone tries to make them happen.
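The new-child example above can be sketched as a test that checks both what should happen and what should not. The data model here is invented for illustration; the point is the second, "negative" assertion, which most automated tests omit.

```python
import copy


def record_new_child(family, child_name):
    """Apply a new-child update to both parents' records (hypothetical logic)."""
    for parent in family["parents"]:
        parent["children"].append(child_name)
    # Deliberately does NOT touch the records of existing children.


family = {
    "parents": [
        {"name": "Pat", "children": ["Lee"]},
        {"name": "Sam", "children": ["Lee"]},
    ],
    # Existing children have records of their own children.
    "children": [{"name": "Lee", "children": []}],
}

# Snapshot the records that should remain untouched.
before = copy.deepcopy(family["children"])

record_new_child(family, "Robin")

# Positive check: the change was applied everywhere it should be.
assert all("Robin" in p["children"] for p in family["parents"])

# Negative check: the change was NOT applied where it should not be --
# the existing children did not acquire children of their own.
assert family["children"] == before
```

The snapshot-and-compare pattern is a cheap way to assert "nothing else changed," though in a real system the set of records that must remain unchanged is itself a design question.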
Logging of applications' actions can be an effective means of surfacing behaviors that an alert developer will recognize as out of line, but the code that does that logging can itself be time-consuming to write. A tool such as Identify Software Ltd.'s AppSight 5.5, released in November, can perform that kind of recording in an intelligent manner that captures more detail when unusual situations indicate the need.

It's not enough to agree that testing is important. Unless the right things are tested in an effective way, software testing is like medieval medicine: Debugging the code, while ignoring fundamental flaws of design, is akin to bleeding a patient while failing to recognize (let alone cure) an infection. Testing tools can't automate experienced vision or a domain-specific sense of what's important, but they can free developers from the most routine and laborious aspects of application testing to give them time to put their expertise to work.

Technology Editor Peter Coffee can be reached at email@example.com.

Application Testing Creates a Lengthening List of Demands

From choice of personnel to the design of test scenarios, teams must take arms against the rising costs of application failure.

* Don't relegate testing to inexperienced team members
  - Many crucial errors require domain knowledge to recognize
  - Exhaustive testing is impossible; experience improves focus
* Don't stop testing when the application works
  - It's not enough to do everything right; apps also must do nothing wrong
  - Security problems and database corruption result when actions aren't limited
* Don't stop at the application's edge
  - Web-based applications need end-to-end stress tests
  - Performance, compatibility and tolerance of network errors are also key criteria
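The "record more detail when unusual situations indicate the need" idea mentioned in connection with logging can be sketched in a few lines. This is a generic ring-buffer pattern, not AppSight's actual mechanism: cheap, detailed records accumulate in a bounded buffer and are emitted only when an anomaly is detected.

```python
import collections


class RingBufferLogger:
    """Buffer detailed events cheaply; flush them only when an anomaly occurs."""

    def __init__(self, capacity=100):
        # deque with maxlen silently discards the oldest entries,
        # so steady-state overhead stays bounded.
        self.buffer = collections.deque(maxlen=capacity)
        self.flushed = []

    def record(self, event):
        self.buffer.append(event)  # cheap in the normal case

    def anomaly(self, message):
        # Unusual situation detected: dump the recent context for diagnosis.
        self.flushed = list(self.buffer) + [f"anomaly: {message}"]
        for line in self.flushed:
            print(line)
        self.buffer.clear()


rb = RingBufferLogger(capacity=3)
for i in range(5):
    rb.record(f"step {i}")
rb.anomaly("unexpected state")  # prints steps 2-4, then the anomaly line
```

The design choice worth noting is the bounded buffer: full-detail logging is affordable precisely because almost all of it is thrown away, and only the window surrounding an out-of-line behavior ever reaches the developer.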