
4.8 METRICS FOR SMALL ORGANIZATIONS

The vast majority of software development organizations have fewer than 20 software people. It is unreasonable, and in most cases unrealistic, to expect that such organizations will develop comprehensive software metrics programs. However, it is reasonable to suggest that software organizations of all sizes measure and then use the resultant metrics to help improve their local software process and the quality and timeliness of the products they produce. If you're just starting to collect software metrics, remember to keep it simple; if you bury yourself with data, your metrics effort will fail. Kautz [KAU99] describes a typical scenario that occurs when metrics programs are suggested for small software organizations:

Originally, the software developers greeted our activities with a great deal of skepticism, but they eventually accepted them because we kept our measurements simple, tailored them to each organization, and ensured that they produced valuable information. In the end, the programs provided a foundation for taking care of customers and for planning and carrying out future work.

What Kautz suggests is a commonsense approach to the implementation of any software process-related activity: keep it simple, customize to meet local needs, and be sure it adds value. In the paragraphs that follow, we examine how these guidelines relate to metrics for small shops.

"Keep it simple" is a guideline that works reasonably well in many activities. But how do we derive a "simple" set of software metrics that still provides value, and how can we be sure that these simple metrics will meet the needs of a particular software organization? We begin by focusing not on measurement but rather on results. The software group is polled to define a single objective that requires improvement, for example, "reduce the time to evaluate and implement change requests." A small organization might select the following set of easily collected measures (one way to record them is sketched after the list):

• Time (hours or days) elapsed from the time a request is made until evaluation is complete, t_queue.
• Effort (person-hours) to perform the evaluation, W_eval.
• Time (hours or days) elapsed from completion of evaluation to assignment of change order to personnel, t_eval.
• Effort (person-hours) required to make the change, W_change.
• Time (hours or days) required to make the change, t_change.
• Errors uncovered during work to make the change, E_change.
• Defects uncovered after the change is released to the customer base, D_change.
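
As a minimal sketch, the measures above could be captured in a small record kept for each change request. The Python class and field names below are illustrative assumptions, not something prescribed by the text; any spreadsheet row or form with the same seven fields would serve equally well.

    # Illustrative record for one change request; field names mirror the
    # notation used in the text (t_queue, W_eval, and so on).
    from dataclasses import dataclass

    @dataclass
    class ChangeRequestMeasures:
        t_queue: float   # hours/days from request until evaluation is complete
        w_eval: float    # person-hours to perform the evaluation
        t_eval: float    # hours/days from end of evaluation to assignment of the change order
        w_change: float  # person-hours required to make the change
        t_change: float  # hours/days required to make the change
        e_change: int    # errors uncovered during work to make the change
        d_change: int    # defects uncovered after release to the customer base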

Once these measures have been collected for a number of change requests, it is possible to compute the total elapsed time from change request to implementation of the change and the percentage of elapsed time absorbed by initial queuing, evaluation and change assignment, and change implementation. Similarly, the percentage of effort required for evaluation and implementation can be determined. These metrics can be assessed in the context of quality data, E_change and D_change. The percentages provide insight into where the change request process slows down and may lead to process improvement steps to reduce t_queue, W_eval, t_eval, W_change, and/or E_change. In addition, the defect removal efficiency can be computed as

    DRE = E_change / (E_change + D_change)

DRE can be compared to elapsed time and total effort to determine the impact of quality assurance activities on the time and effort required to make a change.
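
As a rough illustration of these computations, the sketch below aggregates a collection of the ChangeRequestMeasures records assumed earlier and reports the phase percentages and an aggregate DRE. The summarize() helper and its result keys are assumptions for illustration only; the text does not prescribe any particular tooling.

    # Assumes the ChangeRequestMeasures dataclass sketched above.
    from typing import Iterable

    def summarize(requests: Iterable[ChangeRequestMeasures]) -> dict:
        reqs = list(requests)
        assert reqs, "need at least one change request"
        total_time = sum(r.t_queue + r.t_eval + r.t_change for r in reqs)
        total_effort = sum(r.w_eval + r.w_change for r in reqs)
        e_total = sum(r.e_change for r in reqs)
        d_total = sum(r.d_change for r in reqs)
        return {
            # percentage of elapsed time absorbed by each phase
            "pct_time_queue": 100 * sum(r.t_queue for r in reqs) / total_time,
            "pct_time_eval_assign": 100 * sum(r.t_eval for r in reqs) / total_time,
            "pct_time_change": 100 * sum(r.t_change for r in reqs) / total_time,
            # percentage of effort spent on evaluation vs. implementation
            "pct_effort_eval": 100 * sum(r.w_eval for r in reqs) / total_effort,
            "pct_effort_change": 100 * sum(r.w_change for r in reqs) / total_effort,
            # defect removal efficiency, aggregated across all requests:
            # DRE = E_change / (E_change + D_change)
            "dre": e_total / (e_total + d_total),
        }

Calling summarize() on a handful of such records yields the elapsed-time and effort percentages and the DRE described here; for instance, two errors found during the change and one defect found after release give DRE = 2 / (2 + 1), or about 0.67.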

For small groups, the cost of collecting measures and computing metrics ranges from 3 to 8 percent of the project budget during the learning phase and then drops to less than 1 percent after software engineers and project managers have become familiar with the metrics program [GRA99]. These costs can show a substantial return on investment if the insights derived from metrics data lead to meaningful process improvement for the software organization.