Getting Testy About High-Quality Software

Peter Coffee: If your development team and quality assurance team are playing ping-pong with software glitches, you're probably producing buggy software, and wasting time doing it. Automated software testing might make a big difference.

Of all the topics that I try to cover in these developer-oriented letters, nothing generates even half as much mail as the issue of software quality. Many of the most vigorous comments in our June 3 eWEEK story on software patching, concerning the inevitability of software updates in the field, came to me as readers' replies (quoted only with permission) to my May 20 "Enterprise IT Advantage" newsletter column.

I'm fortunate to have readers who take the time to share their views with their colleagues.

A different angle on software quality improvement comes from my conversation last week with Adam Kolawa, president and chief executive officer at Parasoft. Avoiding the angels-on-pinheads debate over theoretical barriers to making software defect-free, Kolawa approaches software as a complex manufactured good. "If we ran TV production lines the way we build software, we would not have TVs that work," Kolawa told me. "If testing were not automated, it would not be done."

Kolawa therefore looks askance at software companies that brag about the enormous resources that they devote to testing. "Testing is a sub-concept of error prevention," he asserted, "not the only way. If you really look at the whole concept, you need to set up your infrastructure properly; you need a system that keeps people from writing over each other; you need to automate builds; you need to automate the packaging process. It's remarkable how many companies cannot perform an automated build."
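The build-automation discipline Kolawa describes can be sketched in a few lines. This is a hypothetical pipeline, not Parasoft's tooling: each stage runs unattended, and a failure at any stage stops the build before broken code can be packaged.

```python
# Minimal sketch of an automated build pipeline (stage names are
# illustrative). In practice each step would invoke the compiler,
# the automated test suite, and the packaging tool.

def run_pipeline(stages):
    """Run named build stages in order; stop at the first failure."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # never package a build that failed an earlier stage
    return results

stages = [
    ("checkout", lambda: True),
    ("compile",  lambda: True),
    ("test",     lambda: False),  # a failing test halts the pipeline
    ("package",  lambda: True),   # never reached
]

print(run_pipeline(stages))
```

The point of the sketch is the ordering: packaging can only happen after an automated test stage has passed, which is exactly the guarantee a manual, ad hoc build process cannot make.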

What destroys both the quality and the productivity of software development, argues Kolawa, is the separation between production and quality assurance. "QA should not be the place for finding errors; it should be the place for verifying that there are no errors. What you have now is ping-pong," he said. "QA finds errors and production fixes them, then QA tells production what's wrong with the fix." Eventually, people get so tired that they ship the product and let the customers do the final round.

I asked Kolawa whether the enterprise IT environment has become more proactive about software quality, now that hacker attacks on defective systems are such a high-profile risk, and whether quality issues are now easier to discuss with front-office management, since one can blame attackers for relentlessly probing for holes instead of merely appearing to confess incompetence. He hasn't seen that change: "Everyone pays lip service; no one takes it seriously until they have trouble," he said. Even Microsoft's Trustworthy Computing initiative is yet another example of the wrong way to solve the problem, as far as Kolawa is concerned: Microsoft management, he urged, "should change the structure of how the developers are operating, not send them to school to learn to write better code."

IBM is one major developer that is interested in Kolawa's views, not to mention the developer aids that his company has built to implement that vision. After extensive pilot tests, IBM has adopted Parasoft's Jtest for use by both internal developers and IBM Global Services consultants.

If you agree that when testing is automated, more developers will test their own code instead of throwing it over the wall to a QA or beta-test team, then you may want to explore additional resources on software test automation. An extensive collection of links can be found on Bret Pettichord's site, and perhaps you'll write to tell me whether any of these ought to be the focus of a future edition of this letter.
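For readers who have not yet seen what "developers testing their own code" looks like in practice, here is a minimal sketch using Python's standard unittest module. The function under test is purely illustrative, not from any product mentioned above: the point is that these checks run automatically, every build, with no hand-off to a separate team.

```python
# A developer-owned automated test: written alongside the code it
# checks, and runnable unattended as part of every build.
import unittest

def parse_version(s):
    """Parse a dotted version string like '2.10.3' into a tuple of ints."""
    return tuple(int(part) for part in s.split("."))

class ParseVersionTest(unittest.TestCase):
    def test_roundtrip(self):
        self.assertEqual(parse_version("1.0.3"), (1, 0, 3))

    def test_ordering(self):
        # Tuple comparison orders versions numerically, so 2.10 > 2.9.
        self.assertGreater(parse_version("2.10.0"), parse_version("2.9.9"))
```

A test suite like this is typically run with `python -m unittest` from the build script, which is what lets it serve as a gate rather than an afterthought.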

As always, I continue to appreciate your accounts of your own hard-learned lessons in every aspect of developing and deploying leading-edge IT. Keep those e-mails coming.