7.9 ERROR TRACKING

Throughout the software process, a project team creates work products (e.g., requirements specifications or prototypes, design documents, source code). But the team also creates (and hopefully corrects) errors associated with each work product. If error-related measures and resultant metrics are collected over many software projects, a project manager can use these data as a baseline for comparison against error data collected in real time. Error tracking can be used as one means for assessing the status of a current project: it allows you to compare current work with past efforts and provides a quantitative indication of the quality of the work being conducted.

In Chapter 4, the concept of defect removal efficiency was discussed. To review briefly, the software team performs formal technical reviews (and, later, testing) to find and correct errors, E, in the work products produced during software engineering tasks. Errors that are not uncovered by these activities and are found only after the software has been delivered to end users are considered to be defects, D. Defect removal efficiency (Chapter 4) has been defined as

DRE = E / (E + D)

DRE is a process metric that provides a strong indication of the effectiveness of quality assurance activities, but DRE and the error and defect counts associated with it can also be used to assist a project manager in determining the progress that is being made as a software project moves through its scheduled work tasks.
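As a quick, illustrative sketch (the function name and the counts below are ours, not from the text), the DRE computation is mechanical:

    def defect_removal_efficiency(errors_before_release, defects_after_release):
        """Compute DRE = E / (E + D), where E counts errors found by reviews and
        testing before release and D counts defects found after release."""
        e, d = errors_before_release, defects_after_release
        return e / (e + d)

    # Example: 180 errors caught in-house, 20 defects reported after release
    print(defect_removal_efficiency(180, 20))   # 0.9, i.e., 90% removal efficiency

A DRE approaching 1.0 indicates that very few problems are escaping the team's quality assurance activities.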

Let us assume that a software organization has collected error and defect data over the past 24 months and has developed averages for the following metrics:

• Errors per requirements specification page, E_req
• Errors per component (design level), E_design
• Errors per component (code level), E_code
• DRE for requirements analysis
• DRE for architectural design
• DRE for component-level design
• DRE for coding

As the project progresses through each software engineering step, the software team records and reports the number of errors found during requirements, design, and code reviews. The project manager calculates current values for E_req, E_design, and E_code. These are then compared to averages for past projects. If current results vary by more than 20% from the average, there may be cause for concern, and there is certainly cause for investigation.
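A minimal sketch of that comparison, assuming the organizational averages are kept in a simple lookup table (the design- and code-level figures and the current values below are illustrative assumptions; the E_req numbers match the example in the next paragraph):

    # Organizational baseline averages from past projects (illustrative values,
    # except E_req = 3.6, which is used in the example that follows)
    BASELINE = {
        "E_req": 3.6,      # errors per requirements specification page
        "E_design": 1.8,   # errors per component, design level
        "E_code": 2.4,     # errors per component, code level
    }

    def flag_deviations(current, baseline=BASELINE, threshold=0.20):
        """Return metrics whose current value differs from the baseline
        average by more than the given fraction (20% by default)."""
        flagged = {}
        for name, avg in baseline.items():
            deviation = abs(current[name] - avg) / avg
            if deviation > threshold:
                flagged[name] = (current[name], avg, round(deviation, 2))
        return flagged

    # Project X: E_req = 2.1 against an organizational average of 3.6
    print(flag_deviations({"E_req": 2.1, "E_design": 1.9, "E_code": 2.5}))
    # {'E_req': (2.1, 3.6, 0.42)} -- a 42% deviation, worth investigating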

For example, if E_req = 2.1 for project X, yet the organizational average is 3.6, one of two scenarios is possible: (1) the software team has done an outstanding job of developing the requirements specification, or (2) the team has been lax in its review approach. If the second scenario appears likely, the project manager should take immediate steps to build additional design time12 into the schedule to accommodate the requirements defects that have likely been propagated into the design activity.

The more quantitative your approach to project tracking and control, the more likely you'll be able to foresee potential problems and respond to them proactively. Use earned value and error tracking metrics.

These error tracking metrics can also be used to better target review and/or testing resources. For example, if a system is composed of 120 components, but 32 of these components exhibit E_design values that have substantial variance from the average, the project manager might elect to dedicate code review resources to those 32 components and allow the others to pass into testing with no code review. Although all components should undergo code review in an ideal setting, a selective approach (reviewing only those modules that have suspect quality based on the E_design value) might be an effective means for recouping lost time and/or saving costs for a project that has gone over budget.
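As a sketch of how that selection might be automated (the component names, measurements, and the "more than 20% above the average" rule are assumptions for illustration, not prescribed by the text):

    # Design-level error measurements per component (illustrative data;
    # a real project would have an entry for each of the 120 components)
    e_design_by_component = {
        "comp_001": 1.7,
        "comp_002": 3.9,
        "comp_003": 1.6,
    }

    ORG_AVG_E_DESIGN = 1.8   # organizational average for E_design (assumed)

    def components_needing_review(measurements, average, threshold=0.20):
        """Select components whose E_design exceeds the organizational average
        by more than the threshold; these receive code-review resources first."""
        return sorted(
            name for name, value in measurements.items()
            if (value - average) / average > threshold
        )

    print(components_needing_review(e_design_by_component, ORG_AVG_E_DESIGN))
    # ['comp_002'] -- only the components with suspect design quality are reviewed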

12 In reality, the extra time will be spent reworking requirements defects, but the work will occur when the design is underway.
