
    Putting Applications to the Test

    Written by Peter Coffee
    Published January 17, 2005


      Pervasive public networks and the explosion of network-facing applications and Web services have dragged enterprise development out of the back room and into the showroom.

      Customers and supply chain partners are coming to rely on network applications to complete time-critical transactions; government and public safety agencies are incorporating Web services into their missions. In this environment, lack of adequate software testing could become “the new negligence.” The charter of the testing team must grow apace.

      Application testing is traveling down the same path that has lately been followed by IT security. A combination of heightened awareness and regulatory mandates has transformed security from a “why fix the roof when it's not raining?” cost to a recognized requirement of due diligence.

      Application testing efforts may likewise obtain improved access to human and technical resources, and development team leaders may encounter fewer arguments when they seek to acquire state-of-the-art tools as the costs of application failure grow.

      Redefining reasonableness

      Cem Kaner, professor of software engineering at Florida Institute of Technology and director of Florida Tech's Center for Software Testing Education & Research, has challenged enterprise development managers to consider the consequences of an application failure that results in someone's death.

      It's not difficult, Kaner asserts, to imagine a situation in which a single line of code turns out to be the proximate cause and in which that line turns out never to have been tested—despite the availability of tools to perform such tests. This could prove a classic setup for a claim of negligence against the developer and user of the application involved.

      Kaner's Web page, “Software Negligence and Testing Coverage” (www.kaner.com/coverage.htm), lists more than 100 types of coverage tests that a development team might need to perform—or perhaps wind up explaining why it did not. Some conceivable tests that might be ordered are obvious (and costly), but nonetheless inadequate—for example, “test every line of code.” Others are less obvious but possibly crucial, such as “vary the location of every file used by the program” or “verification against every regulation [Internal Revenue Service, Securities and Exchange Commission, Food and Drug Administration, and so on] that applies to the data or procedures of the program.”

      Auditing user manuals and help files, confirming copyright permissions for images and sounds, and reviewing multicultural acceptability of all text and graphics in an application are other items on Kaner's list that may not immediately occur to a software testing team. However, any of them could affect end-user acceptance of an application or the marketplace response to its deployment.

      And these are merely the kinds of tests that ensure the application was constructed as intended and breaks no rules in the process. An application could survive rigorous review on these criteria, yet still be unsatisfactory.

      An application could, for example, correctly implement the wrong algorithm, calculating interest using beginning-of-period formulas when end-of-period formulas are needed, or computing year-to-date values based on a calendar year instead of an intended fiscal year. It could differ from the behavior of an earlier version, not in a way that makes the new version wrong but in a way that breaks an existing application-integration or data-sharing scheme. It could fail under abnormal loads or fail to deal gracefully with intermittent network connections. These are some of the domain-specific or dynamic aspects of application testing that today's development teams must address.
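      The interest example can be made concrete with a short sketch (the function names and figures are invented for illustration; the formulas are simply the textbook begin- and end-of-period conventions):

```python
def interest_begin_of_period(balance, deposit, rate):
    # Deposit is credited at the start of the period, so it earns interest.
    return (balance + deposit) * rate

def interest_end_of_period(balance, deposit, rate):
    # Deposit is credited at the end of the period, so it earns nothing yet.
    return balance * rate

# Each function is a correct implementation of *some* algorithm; only the
# business specification says which one is right for this application.
print(interest_begin_of_period(1000.0, 100.0, 0.05))  # 55.0
print(interest_end_of_period(1000.0, 100.0, 0.05))    # 50.0
```

      A unit test written from either function's own behavior will pass; only a test derived from the specification catches the wrong choice of formula.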

      Finally, application designers who deploy on public networks must anticipate the nonrandom, carefully targeted and frighteningly well-informed disruptions of a deliberate attack. We explored issues of design and development for application-level security in the Dec. 13 Developer Solutions, but security testing involves additional challenges.


      Only reasonable testing is necessary

      If there is a silver lining to this cloud, it is in Kaner's counterchallenge to the negligence-lawsuit scenario given above. It's a formally provable statement that exhaustive testing is not merely impractical but also a theoretical impossibility. And negligence, Kaner notes, is not the failure to do the impossible or even the failure to do everything that is possible, but rather the failure to do what's reasonable.

      Developers should therefore understand that cost-benefit calculations can make a good case against a negligence claim, but only if the costs of testing and the benefits of risk reduction can be shown to have been evaluated with at least some degree of rigor and good faith.

      A criterion of reasonableness is, moreover, a mixed blessing. It means that a development team cannot approach testing as a mechanical or a mathematical exercise, something with a formulaic criterion for how much is enough. A team must instead develop a process and a management approach to test the right things in a consistent and conscientious way.

      Doing it automatically

      In addition to the types of tests already mentioned, the vocabulary of testing includes long lists of familiar and tedious tasks with (sometimes literally) colorful names.

      “White box” (or “glass box”) testing includes path testing, a form of coverage testing that attempts to traverse every possible path through an application. This becomes increasingly difficult as applications evolve into constellations of services developed and maintained by independent teams. “Black box” testing ignores internals and exercises only published interfaces to an application component, but this depends on a degree of completeness in software specification that's rarely encountered in any but the most critical domains.
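      The difficulty of path testing is easy to quantify: in the simplest case, each independent branch in sequence doubles the number of distinct execution paths. A back-of-the-envelope sketch:

```python
def paths_through(n_branches):
    # Each independent if/else in sequence doubles the number of distinct
    # execution paths, so n branches yield 2**n paths in the simplest case.
    return 2 ** n_branches

print(paths_through(10))   # 1024 paths -- perhaps testable
print(paths_through(30))   # 1073741824 paths -- plainly not
```

      Loops and exceptions make the real count larger still, which is why path coverage is a goal to approximate rather than achieve.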

      “Basis path” testing uses knowledge of internals to generate test cases in a formal way; “monkey” testing (or “ad hoc” testing, for the more polite) merely exercises the functions of an application in a random manner.

      All these methods represent different combinations of efficiency and reproducibility. Formally generated and reproducible tests might seem to be the gold standard, but they can be fool's gold if they're so time-consuming to generate and run that they aren't used early and often during development.

      By the time that attempts at exhaustive testing have anything informative to say, it may be too late for their results to be useful. A team is likely to be better served by earlier and less formal tests that are guided by expert experience as to where an application's problems are most likely to be found. This is a strong argument against the common practice of staffing a testing group with relatively inexperienced developers or with the less skilled members of a development team. The most effective tests are likely to come from developers with the most insight into what kinds of errors are most likely and least acceptable.

      Regardless of testing and staffing doctrine, however, it does seem logical that the testing of computer applications should itself be streamlined by making it a programmable and thus repeatable task. “Test automation” is thus often taken to mean the development of scripts and other mechanisms for testing one piece of software with another.


      Automated testing has problems of its own

      It's important to realize that this is not the only approach, or even necessarily the best approach. Automated tests, after all, are themselves pieces of software that can exhibit flaws of their own, including poor usability. A well-maintained archive of the tests performed, results obtained and resulting improvements made can be easier to understand than a cryptic body of test scripts that may have been made obsolete by relatively tiny changes in a body of code.

      Moreover, when automated tests are run by someone who didn't design them, they may be executed in ways that fail to catch errors. For example, a test might not be applied at boundary conditions, or changes in an application might alter those boundary values. Detecting crucial boundary conditions—and generating tests that focus on these likely failure points—is one of the notable strengths of a state-of-the-art testing tool such as Agitator from Agitar Software Inc.
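      A toy illustration of why boundary placement matters (the fee schedule and its off-by-one bug are invented for the example):

```python
def shipping_fee(weight_kg):
    """Spec: flat 5.00 for shipments up to and including 10 kg, 8.00 above."""
    if weight_kg < 10:          # bug: the spec calls for <= 10
        return 5.00
    return 8.00

# Probes placed comfortably inside each range pass and give false confidence...
assert shipping_fee(5) == 5.00
assert shipping_fee(12) == 8.00
# ...while a probe placed exactly at the boundary exposes the off-by-one:
print(shipping_fee(10))   # 8.00, although the spec requires 5.00
```

      A test suite that never probes exactly at 10 kg reports success indefinitely; and if a later spec change moves the boundary, tests pinned to the old value quietly stop testing anything interesting.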

      Click here to read Peter Coffee's review of Agitator 2.0.

      Alternatively, an automated test might generate huge numbers of false-positive alerts. For example, a simple screen-replay tool for a GUI may trip over cosmetic changes in interface layout. It's a virtual certainty that a test that generates false positives will somehow be bypassed or suppressed, perhaps leading future test teams to assume that something is being tested when it's effectively been shoved below the radar.

      Automated testing may also produce accurate but misleading statistics. For example, a test might report that a certain fraction of the lines in a program were exercised a certain number of times, while failing to measure—and therefore being unable to report—whether those multiple tests actually verified behavior in more than one situation. It's up to a development team to ensure that tools are being used in a way that reflects this distinction.
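      The distinction is easy to demonstrate: a line can be executed many times while only one of its behaviors is ever verified. A contrived sketch:

```python
def classify(n):
    # One line, two behaviors: a line-coverage tool counts executions of
    # the line, not which arm of the conditional expression was taken.
    return "even" if n % 2 == 0 else "odd"

# This loop executes the line 100 times -- full line coverage, many hits...
for n in range(0, 200, 2):
    assert classify(n) == "even"
# ...yet the "odd" behavior has not been verified even once.
```

      Branch or condition coverage would flag the gap; a raw execution count never will.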

      It's also essential in the Web-deployed environment to test an application's handling of errors that may be triggered only by its dependencies on remote resources. This is one of the strengths of Compuware Corp.'s DevPartner Fault Simulator, now in beta testing and planned for release early this year.

      Click here to read Peter Coffee's review of Compuware's DevPartner Studio Professional Edition 7.1.

      Test automation can also pave the way toward confirmation that an application does what it's supposed to do, while leaving a massive blind spot obscuring things that it should not do.

      For example, it's easy for an automated test to ensure that changes to a person's insurance record are correctly applied. It's possible that an automated test would also ensure that those changes are reflected, where appropriate, in the record of a person's spouse.

      What few such tests will check, however, is whether changes have been applied in places where they should not be. For example, a person's having a new child implies that the person's spouse also has a new child, but it would be an error to infer that the children already in that family are also new parents.
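      In test form, the distinction is between asserting what did change and asserting what did not. A sketch with invented record structures:

```python
def record_new_child(records, person, child):
    """Apply a birth event: the person and the person's spouse gain a child."""
    records[person]["children"].append(child)
    spouse = records[person].get("spouse")
    if spouse:
        records[spouse]["children"].append(child)
    # Deliberately leaves existing children untouched: they have gained a
    # sibling, not a child of their own.

records = {
    "pat": {"spouse": "sam", "children": ["lee"]},
    "sam": {"spouse": "pat", "children": ["lee"]},
    "lee": {"spouse": None, "children": []},
}
record_new_child(records, "pat", "ash")

assert "ash" in records["sam"]["children"]   # the change propagated correctly
assert records["lee"]["children"] == []      # and did NOT propagate wrongly
```

      The second assertion is the one most suites omit: it pins down a place the update must leave alone.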

      Such bugs are easily overlooked, warned software testing consultant Brian Marick, of Testing Foundations, in his 1997 paper “Classic Testing Mistakes.”

      And those mistakes, already classics many years ago, remain all too likely to occur today.

      Failure to think about what should not happen is also the essential flaw that opens the door to so many security problems in applications. Developers are good at envisioning and testing for all the ways that an application should behave correctly and for the many complex logic paths and other interactions that it should be able to follow. They tend to be less adept at envisioning things that should not happen—or that should be prevented if someone tries to make them happen.
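      One routine countermeasure is the negative test: an assertion that a forbidden action is refused, not merely that permitted ones succeed. A minimal sketch with an invented validation rule:

```python
def set_discount(order, percent):
    # Only discounts from 0 to 100 percent are meaningful; anything else
    # must be refused outright, not silently applied.
    if not 0 <= percent <= 100:
        raise ValueError(f"invalid discount: {percent}")
    order["discount"] = percent

order = {"total": 200.0}
set_discount(order, 15)                # the happy path most suites cover

# Negative tests: things the application must NOT allow to happen.
for bad in (-10, 150, 10**9):
    try:
        set_discount(order, bad)
    except ValueError:
        pass                           # refused, as required
    else:
        raise AssertionError(f"accepted forbidden discount {bad}")
```

      The loop over hostile values is the part an attacker would otherwise write for you.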

      Logging of an application's actions can be an effective means of surfacing behaviors that an alert developer will recognize as out of line, but the code that does that logging can itself be time-consuming to write. A tool such as Identify Software Ltd.'s AppSight 5.5, released in November, can perform that kind of recording in an intelligent manner that captures more detail when unusual situations indicate the need.

      It's not enough to agree that testing is important. Unless the right things are tested in an effective way, software testing is like medieval medicine: Debugging the code, while ignoring fundamental flaws of design, is akin to bleeding a patient while failing to recognize (let alone cure) an infection.

      Testing tools can't automate experienced vision or a domain-specific sense of what's important, but they can free developers from the most routine and laborious aspects of application testing to give them time to put their expertise to work.

      Technology Editor Peter Coffee can be reached at [email protected].

      Application Testing Creates a Lengthening List of Demands

      From choice of personnel to the design of test scenarios, teams must take arms against the rising costs of application failure

      * Don't relegate testing to inexperienced team members

      • Many crucial errors require domain knowledge to recognize

      • Exhaustive testing is impossible; experience improves focus

      * Don't stop testing when the application works

      • It's not enough to do everything right; apps also must do nothing wrong

      • Security problems and database corruption result when actions aren't limited

      * Don't stop at the application's edge

      • Web-based applications need end-to-end stress tests

      • Performance, compatibility and tolerance of network errors are also key criteria


