
19.1.1 McCall’s Quality Factors

The factors that affect software quality can be categorized in two broad groups: (1) factors that can be directly measured (e.g., defects per function-point) and (2) factors that can be measured only indirectly (e.g., usability or maintainability). In each case measurement must occur. We must compare the software (documents, programs, data) to some datum and arrive at an indication of quality.

McCall, Richards, and Walters [MCC77] propose a useful categorization of factors that affect software quality. These software quality factors, shown in Figure 19.1, focus on three important aspects of a software product: its operational characteristics, its ability to undergo change, and its adaptability to new environments.

It’s interesting to note that McCall’s quality factors are as valid today as they were when they were first proposed in the 1970s. Therefore, it’s reasonable to assert that the factors that affect software quality do not change.

Referring to the factors noted in Figure 19.1, McCall and his colleagues provide the following descriptions:

Correctness. The extent to which a program satisfies its specification and fulfills the customer's mission objectives.

Reliability. The extent to which a program can be expected to perform its intended function with required precision. [It should be noted that other, more complete definitions of reliability have been proposed (see Chapter 8).]

Figure 19.1 McCall’s software quality factors, grouped into product operation (correctness, reliability, efficiency, integrity, usability), product revision, and product transition.


Efficiency. The amount of computing resources and code required by a program to perform its function.

Integrity. Extent to which access to software or data by unauthorized persons can be controlled.

Usability. Effort required to learn, operate, prepare input, and interpret output of a program.

Maintainability. Effort required to locate and fix an error in a program. [This is a very limited definition.]

Flexibility. Effort required to modify an operational program.

Testability. Effort required to test a program to ensure that it performs its intended function.

“A product’s quality is a function of how much it changes the world for the better.” — Tom DeMarco

Portability. Effort required to transfer the program from one hardware and/or software system environment to another.

Reusability. Extent to which a program [or parts of a program] can be reused in other applications—related to the packaging and scope of the functions that the program performs.

Interoperability. Effort required to couple one system to another.

It is difficult, and in some cases impossible, to develop direct measures of these quality factors. Therefore, a set of metrics is defined and used to develop expressions for each of the factors according to the following relationship:

F_q = c_1 m_1 + c_2 m_2 + ... + c_n m_n

where F_q is a software quality factor, the c_n are regression coefficients, and the m_n are the metrics that affect the quality factor. Unfortunately, many of the metrics defined by McCall et al. can be measured only subjectively. The metrics may be in the form of a checklist that is used to "grade" specific attributes of the software [CAV78]. The grading scheme proposed by McCall et al. is a 0 (low) to 10 (high) scale. The following metrics are used in the grading scheme:
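The relationship above can be made concrete with a short sketch. The metric names and coefficient values below are purely illustrative; the actual regression coefficients in [MCC77] would be derived empirically for a particular product line.

```python
# Hypothetical sketch of F_q = c_1*m_1 + c_2*m_2 + ... + c_n*m_n, where the
# m_i are checklist grades on McCall's 0 (low) to 10 (high) scale and the
# c_i are regression coefficients. All values here are illustrative only.

def quality_factor(grades, coefficients):
    """Combine 0-10 checklist grades with per-metric coefficients."""
    for metric, grade in grades.items():
        if not 0 <= grade <= 10:
            raise ValueError(f"grade for {metric!r} must be on the 0-10 scale")
    return sum(coefficients[m] * g for m, g in grades.items())

# Example: estimating one factor from three of McCall's metrics.
grades = {"simplicity": 7, "consistency": 8, "self_documentation": 5}
coeffs = {"simplicity": 0.4, "consistency": 0.3, "self_documentation": 0.3}

fq = quality_factor(grades, coeffs)  # 0.4*7 + 0.3*8 + 0.3*5 = 6.7
```

Because the grades are subjective, the value of such an expression lies less in the absolute number than in tracking it consistently across products and releases.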

XRef: The metrics noted can be assessed during formal technical reviews, discussed in Chapter 8.

Auditability. The ease with which conformance to standards can be checked.

Accuracy. The precision of computations and control.

Communication commonality. The degree to which standard interfaces, protocols, and bandwidth are used.

Completeness. The degree to which full implementation of required function has been achieved.

Conciseness. The compactness of the program in terms of lines of code.

Consistency. The use of uniform design and documentation techniques throughout the software development project.

Data commonality. The use of standard data structures and types throughout the program.

Error tolerance. The damage that occurs when the program encounters an error.

Execution efficiency. The run-time performance of a program.

Expandability. The degree to which architectural, data, or procedural design can be extended.

Generality. The breadth of potential application of program components.

Hardware independence. The degree to which the software is decoupled from the hardware on which it operates.

Instrumentation. The degree to which the program monitors its own operation and identifies errors that do occur.

Modularity. The functional independence (Chapter 13) of program components.

Operability. The ease of operation of a program.

Security. The availability of mechanisms that control or protect programs and data.

Self-documentation. The degree to which the source code provides meaningful documentation.

Simplicity. The degree to which a program can be understood without difficulty.

Software system independence. The degree to which the program is independent of nonstandard programming language features, operating system characteristics, and other environmental constraints.

Traceability. The ability to trace a design representation or actual program component back to requirements.

Training. The degree to which the software assists in enabling new users to apply the system.

The relationship between software quality factors and these metrics is shown in Figure 19.2. It should be noted that the weight given to each metric is dependent on local products and concerns.
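To illustrate how such a factor-to-metric mapping might be applied in practice, the sketch below rolls 0-to-10 metric grades up into per-factor scores. The metric selections and weights are hypothetical, not those of Figure 19.2; as the text notes, each organization would tune them to its own products and concerns.

```python
# Illustrative factor-to-metric weighting table in the spirit of
# Figure 19.2. Pairings and weights are hypothetical examples; locally
# chosen weights reflect local products and concerns.

FACTOR_METRICS = {
    "correctness":     {"completeness": 0.5, "consistency": 0.3,
                        "traceability": 0.2},
    "maintainability": {"simplicity": 0.4, "modularity": 0.4,
                        "self_documentation": 0.2},
    "portability":     {"hardware_independence": 0.5,
                        "software_system_independence": 0.5},
}

def grade_factors(metric_grades):
    """Roll 0-10 metric grades up into per-factor quality scores."""
    return {
        factor: sum(w * metric_grades.get(m, 0) for m, w in weights.items())
        for factor, weights in FACTOR_METRICS.items()
    }

scores = grade_factors({
    "completeness": 8, "consistency": 6, "traceability": 7,
    "simplicity": 5, "modularity": 7, "self_documentation": 4,
    "hardware_independence": 9, "software_system_independence": 6,
})
# e.g. correctness = 0.5*8 + 0.3*6 + 0.2*7 = 7.2
```

A low rolled-up score then points directly at the contributing metrics that dragged it down, which is where a checklist-based review would focus.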