Automated testing has problems

 
 
By Peter Coffee  |  Posted 2005-01-17

It's important to realize that this is not the only approach, or even necessarily the best approach. Automated tests, after all, are themselves pieces of software that can exhibit their own flaws, including poor usability. A well-maintained archive of the tests performed, the results obtained and the resulting improvements made can be easier to understand than a cryptic body of test scripts that may have been made obsolete by relatively tiny changes in a body of code.

Moreover, when automated tests are run by someone who didn't design them, they may be executed in ways that fail to catch errors. For example, a test might not be applied at boundary conditions, or changes in an application might alter those boundary values. Detecting crucial boundary conditions—and generating tests that focus on these likely failure points—is one of the notable strengths of a state-of-the-art testing tool such as Agitator from Agitar Software Inc.

Click here to read Peter Coffee's review of Agitator 2.0.
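To make the boundary-condition point concrete, here is a minimal sketch in Python, using the standard unittest module. The classify_age function and its cutoff values are hypothetical illustrations, not drawn from Agitator or any other product mentioned here.

import unittest

def classify_age(age: int) -> str:
    """Hypothetical function under test: categorize a customer by age."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

class BoundaryTests(unittest.TestCase):
    """Exercise the values on either side of each boundary, not just 'typical' inputs."""

    def test_lower_boundary_of_adult(self):
        self.assertEqual(classify_age(17), "minor")   # just below the boundary
        self.assertEqual(classify_age(18), "adult")   # exactly at the boundary

    def test_lower_boundary_of_senior(self):
        self.assertEqual(classify_age(64), "adult")
        self.assertEqual(classify_age(65), "senior")

    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            classify_age(-1)

if __name__ == "__main__":
    unittest.main()

If the application's cutoffs later change, say from 65 to 62, a suite written this way fails loudly at the new boundary instead of passing quietly on unrepresentative inputs.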
Alternatively, an automated test might generate huge numbers of false-positive alerts. For example, a simple screen-replay tool for a GUI may trip over cosmetic changes in interface layout. It's a virtual certainty that a test that generates false positives will somehow be bypassed or suppressed, perhaps leading future test teams to assume that something is being tested when it's effectively been shoved below the radar.
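The contrast between a brittle check and a more tolerant one can be sketched in a few lines of Python; both functions below are hypothetical illustrations, not the behavior of any particular replay tool.

# Brittle check: compares the entire rendered screen text, so any cosmetic
# change (a relabeled button, a reordered field) raises a false alarm.
def brittle_check(rendered_screen: str) -> bool:
    expected = "Name: Jane Doe\nBalance: $100.00\n[ OK ]  [ Cancel ]"
    return rendered_screen == expected

# More tolerant check: asserts only the facts the test actually cares about,
# so layout tweaks alone do not trigger false positives.
def robust_check(screen_fields: dict) -> bool:
    return (screen_fields.get("name") == "Jane Doe"
            and screen_fields.get("balance") == "$100.00")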

Automated testing may also produce accurate but misleading statistics. For example, a test might report that a certain fraction of the lines in a program were exercised a certain number of times, while failing to measure—and therefore being unable to report—whether those multiple tests actually verified behavior in more than one situation. It's up to a development team to ensure that tools are being used in a way that reflects this distinction.
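A small, hypothetical Python example shows how a suite can rack up impressive line-coverage numbers while verifying only one of the behaviors that matter; the apply_discount function is invented for illustration.

import unittest

def apply_discount(price: float, is_member: bool) -> float:
    """Hypothetical function: members get 10% off; the result is never negative."""
    discounted = price * 0.9 if is_member else price
    return max(discounted, 0.0)

class CoverageLooksGoodTests(unittest.TestCase):
    def test_member_discount_many_times(self):
        # Runs every line of apply_discount dozens of times, so line-coverage
        # reports look excellent, yet the non-member branch and the
        # negative-price guard are never actually verified.
        for price in range(1, 50):
            self.assertAlmostEqual(apply_discount(float(price), True), price * 0.9)

if __name__ == "__main__":
    unittest.main()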

It's also essential in the Web-deployed environment to test an application's handling of errors that may be triggered only by its dependencies on remote resources. This is one of the strengths of Compuware Corp.'s DevPartner Fault Simulator, now in beta testing and planned for release early this year.
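The general idea of fault injection can be sketched in Python with the standard unittest.mock module; the get_quote function and its caching fallback are hypothetical stand-ins, not a depiction of how DevPartner Fault Simulator works.

import unittest
from unittest import mock

# Hypothetical application code: fetches a quote from a remote service and
# is supposed to fall back to a cached value if the network call fails.
def get_quote(fetch_remote, cache):
    try:
        return fetch_remote()
    except ConnectionError:
        return cache.get("last_quote", "unavailable")

class RemoteFailureTests(unittest.TestCase):
    def test_network_failure_uses_cache(self):
        # Simulate the dependency failing, the way a fault-injection tool would,
        # rather than waiting for a real outage to exercise this path.
        failing_fetch = mock.Mock(side_effect=ConnectionError("simulated outage"))
        self.assertEqual(get_quote(failing_fetch, {"last_quote": "42.00"}), "42.00")

if __name__ == "__main__":
    unittest.main()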

Click here to read Peter Coffee's review of Compuware's DevPartner Studio Professional Edition 7.1.

Test automation can also pave the way toward confirmation that an application does what it's supposed to do, while leaving a massive blind spot obscuring things that it should not do.

For example, it's easy for an automated test to ensure that changes to a person's insurance record are correctly applied. It's possible that an automated test would also ensure that those changes are reflected, where appropriate, in the record of a person's spouse.

What few such tests will check, however, is whether changes have been applied in places where they should not be. For example, a person's having a new child implies that the person's spouse also has a new child, but it would be an error to infer that the children already in that family are also new parents.
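A brief, hypothetical Python test shows what it looks like to check for the absence of such unwanted side effects as well as the presence of the wanted ones; the record structure and the add_child function are invented for illustration.

import unittest

# Hypothetical record update: adding a child to one spouse's record should also
# add the child to the other spouse's record, and change nothing else.
def add_child(records, person, child):
    records[person]["children"].append(child)
    spouse = records[person].get("spouse")
    if spouse:
        records[spouse]["children"].append(child)

class NegativeSideEffectTests(unittest.TestCase):
    def test_existing_children_are_not_made_parents(self):
        records = {
            "pat":  {"spouse": "sam", "children": ["alex"]},
            "sam":  {"spouse": "pat", "children": ["alex"]},
            "alex": {"spouse": None,  "children": []},
        }
        add_child(records, "pat", "newborn")
        # Positive check: the change propagates to the spouse.
        self.assertIn("newborn", records["sam"]["children"])
        # Negative check: nothing was inferred about the existing child.
        self.assertEqual(records["alex"]["children"], [])

if __name__ == "__main__":
    unittest.main()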

Such bugs are easily overlooked, warned software testing consultant Brian Marick, of Testing Foundations, in his 1997 paper "Classic Testing Mistakes".

And those mistakes, already classics many years ago, remain all too likely to occur today.

Failure to think about what should not happen is also the essential flaw that opens the door to so many security problems in applications. Developers are good at envisioning and testing for all the ways that an application should behave correctly and for the many complex logic paths and other interactions that it should be able to follow. They tend to be less adept at envisioning things that should not happen—or that should be prevented if someone tries to make them happen.
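Here, too, a short hypothetical Python test makes the distinction concrete: one case confirms what the code should do, the other confirms what it must refuse to do. The lookup_account function and its validation rule are assumptions made for the example.

import unittest

# Hypothetical handler: should reject any account id that is not a plain integer,
# rather than passing it along to a query.
def lookup_account(account_id: str):
    if not account_id.isdigit():
        raise ValueError("invalid account id")
    return {"id": int(account_id)}

class ShouldNotHappenTests(unittest.TestCase):
    def test_valid_id_is_accepted(self):
        self.assertEqual(lookup_account("123")["id"], 123)

    def test_malformed_id_is_rejected(self):
        # The "should not happen" case: crafted input must be refused outright.
        with self.assertRaises(ValueError):
            lookup_account("123; DROP TABLE accounts")

if __name__ == "__main__":
    unittest.main()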

Logging of an application's actions can be an effective means of surfacing behaviors that an alert developer will recognize as out of line, but the code that does that logging can itself be time-consuming to write. A tool such as Identify Software Ltd.'s AppSight 5.5, released in November, can perform that kind of recording in an intelligent manner that captures more detail when unusual situations indicate the need.
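The underlying idea, recording more detail when something looks unusual, can be sketched with Python's standard logging module; the thresholds and the record_transfer function below are hypothetical illustrations, not a description of AppSight's approach.

import logging

logger = logging.getLogger("app")

# Routine actions are logged tersely; anything outside expected bounds is
# recorded with its full context for later diagnosis.
def record_transfer(amount: float, balance_after: float, context: dict) -> None:
    if balance_after < 0 or amount > 10_000:
        logger.warning("suspicious transfer amount=%s balance=%s context=%r",
                       amount, balance_after, context)
    else:
        logger.info("transfer amount=%s", amount)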

It's not enough to agree that testing is important. Unless the right things are tested in an effective way, software testing is like medieval medicine: Debugging the code, while ignoring fundamental flaws of design, is akin to bleeding a patient while failing to recognize (let alone cure) an infection.

Testing tools can't automate experienced vision or a domain-specific sense of what's important, but they can free developers from the most routine and laborious aspects of application testing to give them time to put their expertise to work.

Technology Editor Peter Coffee can be reached at peter_coffee@ziffdavis.com.

Application Testing Creates a Lengthening List of Demands

From choice of personnel to the design of test scenarios, teams must take arms against the rising costs of application failure

* Don't relegate testing to inexperienced team members

• Many crucial errors require domain knowledge to recognize

• Exhaustive testing is impossible; experience improves focus

* Don't stop testing when the application works

• It's not enough to do everything right; apps must also do nothing wrong

• Security problems and database corruption result when actions aren't limited

* Don't stop at the application's edge

• Web-based applications need end-to-end stress tests

• Performance, compatibility and tolerance of network errors are also key criteria



 
 
 
 
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
 
 
 
 
 
 
 
