We’re all fascinated by Web 2.0 now, but remember Web 1.0, the dot-com boom that became dot-bomb? One major reason for the crash was poor software and systems quality, both in the Websites themselves and in the hardware and software behind them. While Web 2.0 has a lot more speed and sizzle than Web 1.0 ever did, many organizations still manage quality like it’s 1999, relying on gut feel and good luck when making go-live decisions. Is this any way to run a serious business?
Businesses need to define their quality objectives, along with metrics and goals for those metrics. Quality goals let serious organizations manage projects and quality effectively and quantitatively. In this article, I’ll discuss why organizations sometimes overlook this important practice, and show you the steps to develop useful, measurable quality goals.
Serious IT professionals use techniques such as budgets, schedules and configuration management to manage all significant IT projects. Cost, schedule and features are all important elements of a project and deserve careful objective management. But some people subjectively manage the fourth important element of a successful project: quality. Perhaps they believe that quality is an intangible that defies quantitative management.
On the contrary, you can successfully manage quality using smart metrics derived from specific business objectives. While objectives vary, let’s use three typical objectives for quality to illustrate how to set quality goals for your IT project.
1. Remove most of the bugs before delivering software into the data center.
2. Test for the right problems to remove the important bugs.
3. Optimize quality from a financial point of view.
For each of these objectives, derive a question that gauges our effectiveness in meeting it:
1. What percentage of the bugs do we find and remove?
2. Do we find and remove a higher percentage of the important bugs?
3. What is our cost per bug found and removed prior to release compared to the cost of a failure in production?
Devising a Metric
For each of these questions, devise a metric. Let’s start with the percentage of bugs found and removed prior to release. Assuming you run a system test and user acceptance test prior to deployment, calculate the metric as follows:
Defect removal effectiveness (DRE) = the number of test bugs removed divided by (the number of test bugs removed + the number of production bugs found).
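To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The function name and the sample bug counts are illustrative assumptions, not data from a real project:

```python
def defect_removal_effectiveness(test_bugs_removed: int,
                                 production_bugs_found: int) -> float:
    """DRE = test bugs removed / (test bugs removed + production bugs found)."""
    return test_bugs_removed / (test_bugs_removed + production_bugs_found)

# Example: testing removed 170 bugs, and 30 more surfaced in production.
print(f"DRE: {defect_removal_effectiveness(170, 30):.0%}")  # DRE: 85%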
A goal of 100 percent DRE is unreasonable. Trying to find and remove all the bugs would be too expensive and time-consuming. Instead, to set your quality goal for this objective, benchmark your organization first and then set a goal that improves quality relative to this metric.
Based on his studies, Capers Jones reports that the industry average for DRE is about 85 percent. Depending on the criticality of your application, the cost of production bugs, the importance of schedule and budget constraints and other considerations, you might set your goal well above or well below this industry average.
Though overall DRE should not reach 100 percent, you want to find and remove almost all of the important bugs. So, check this by using the DRE again. First, calculate the DRE for all bugs. Then, calculate the DRE for the critical bugs, however you define critical. The following relationship should hold:
Quality focus: DRE (all bugs) < DRE (critical bugs)
As before, I suggest you benchmark your organization first, and then set goals for improving your quality focus using this metric. If the DRE for critical bugs is 5 to 15 percent above the overall DRE, that typically indicates a good quality focus, provided the DRE for critical bugs is over 95 percent.
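Here is a small Python sketch of that check, encoding the 5 to 15 percentage-point band and the 95 percent floor from the guideline above. The function name and example values are my own illustrations:

```python
def good_quality_focus(dre_all: float, dre_critical: float) -> bool:
    """Check the quality-focus guideline: critical-bug DRE should exceed
    overall DRE by roughly 5 to 15 percentage points, and should itself
    be over 95 percent."""
    gap = dre_critical - dre_all
    return dre_critical > 0.95 and 0.05 <= gap <= 0.15

# Example: overall DRE of 85 percent, critical-bug DRE of 96 percent.
print(good_quality_focus(0.85, 0.96))  # True: gap is 11 points, critical DRE > 95%
```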
Cost of Quality
So, you want to find lots of bugs, especially critical bugs, and also do so much more cheaply than the alternative: customers and users finding bugs in production. To measure this, use a technique called cost of quality to identify three main costs associated with testing and quality.
1. Cost of detection: the testing costs which you would incur even if you found no bugs. For example, setting up the test environment and creating test data are activities that incur costs of detection.
2. Cost of internal failure: the testing and development costs which you incur purely because you find bugs in prerelease testing. For example, filing bug reports and fixing bugs are activities that incur costs of internal failure.
3. Cost of external failure: the support, testing, development and other costs which you incur because you release systems with some number of bugs (just like everyone else). For example, much of the costs for technical support or help desk organizations and sustaining engineering teams are costs of external failure.
Calculate the average costs of a bug in testing and in production, as explained below:
1. The average cost of a test bug (ACTB) = (the cost of detection + the cost of internal failure) divided by the number of test bugs.
2. The average cost of a production bug (ACPB) = the cost of external failure divided by the number of production bugs.
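A brief Python sketch of both calculations follows; all of the dollar figures and bug counts below are invented purely for illustration:

```python
def average_cost_of_test_bug(cost_of_detection: float,
                             cost_of_internal_failure: float,
                             test_bugs: int) -> float:
    """ACTB = (cost of detection + cost of internal failure) / number of test bugs."""
    return (cost_of_detection + cost_of_internal_failure) / test_bugs

def average_cost_of_production_bug(cost_of_external_failure: float,
                                   production_bugs: int) -> float:
    """ACPB = cost of external failure / number of production bugs."""
    return cost_of_external_failure / production_bugs

# Hypothetical figures: $100,000 detection, $70,000 internal failure, 170 test bugs;
# $150,000 external failure, 30 production bugs.
actb = average_cost_of_test_bug(100_000, 70_000, 170)   # $1,000 per test bug
acpb = average_cost_of_production_bug(150_000, 30)      # $5,000 per production bug
print(f"ACTB: ${actb:,.0f}  ACPB: ${acpb:,.0f}  ratio: {acpb / actb:.0f}x")  # 5x
```

With these assumed figures, each production bug costs five times as much as a test bug, which falls squarely in the range discussed next.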
As I mentioned in a previous Knowledge Center article, the average cost of a bug found during prerelease testing is well below the average cost of a production bug, often by a factor of two, five, ten or more. The bigger the difference, the more optimized your quality assurance efforts are from a financial point of view. In addition, the more expensive it is for your organization to deal with bugs in production, the more it should invest in prerelease testing.
As you’ve seen in this article, quality need not be an elusive, subjective, unmanageable element in your projects. You can define quality objectives, derive important questions related to these objectives, devise metrics, set quality goals and measure quality progress. Organizations of all sizes, from small startups to large global enterprises, have already taken these steps toward quantitative quality management. You, too, can go beyond gut feel and rabbit’s feet to set and achieve quality goals for your IT projects.
Rex Black is President of RBCS. Rex is also the immediate past president of the International Software Testing Qualifications Board and the American Software Testing Qualifications Board. Rex has published six books, which have sold over 50,000 copies, including Japanese, Chinese, Indian, Hebrew and Russian editions. Rex has written over thirty articles, presented hundreds of papers, workshops and seminars, and given over fifty speeches at conferences and events around the world. Rex may be reached at rex_black@rbcs-us.com.