7.2.2 Secondary Attributes
We consider the secondary attributes listed earlier and review the set of values that are available for each attribute, as well as how these values are impacted by the primary attributes.
The Oracle: If the target attribute of the test is an operational attribute, such as the response time of the product under normal workloads, or under exceptional workloads, then the oracle takes the form of an operational condition (a response time, or a function plotting the response time as a function of the workload). If the target attribute of the test is functional, then the oracle depends on whether the goal of the test is to find faults or to certify failure freedom. The following table highlights these dependencies.
• Fault removal, proving absence of faults:
  - Functionality: use the strongest (most refined) possible oracle, e.g., the intended program function.
  - Robustness: graceful degradation.
  - Design: interactions.
  - Performance: performance requirements under normal/exceptional conditions.
• Estimating frequency of failures, certifying infrequency of failures:
  - Functionality: use the weakest (least refined) possible oracle, checking only what the end user considers a failure.
  - Robustness, Design, Performance: N/A.
The Oracle as a Function of the Goal of Testing and the Target Attribute Being Tested
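To make the contrast between the two kinds of oracles concrete, here is a minimal Python sketch; the program under test, the oracles, and the test data are invented for illustration. The strong oracle checks the intended program function (integer square root) exactly, while the weak oracle flags only what an end user would plausibly notice as a failure:

```python
# Program under test: intended function is the integer square root.
def P(x):
    return round(x ** 0.5)  # seeded bug: rounds up for some inputs

# Strong (most refined) oracle: checks the intended program function exactly.
def strong_oracle(x, y):
    return y * y <= x < (y + 1) * (y + 1)

# Weak (least refined) oracle: flags only gross, user-visible failures,
# e.g., a negative or wildly wrong answer.
def weak_oracle(x, y):
    return y >= 0 and abs(y * y - x) <= x

test_data = [0, 1, 2, 3, 8, 15, 24]
strong_fails = [d for d in test_data if not strong_oracle(d, P(d))]
weak_fails = [d for d in test_data if not weak_oracle(d, P(d))]
print(strong_fails)  # [3, 8, 15, 24] -- the strong oracle exposes the fault
print(weak_fails)    # [] -- the weak oracle sees no failure at all
```

The refinement of the oracle directly determines how many executions count as failures, which is why fault-removal goals call for the strongest oracle and reliability-estimation goals for the weakest one.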
The Test Life Cycle: In Chapter 3 we presented a generic test life cycle; we can imagine three variations thereof, which we present below:
• A sequential life cycle, which proceeds sequentially through three successive phases of test data generation, test execution, and test outcome analysis. An algorithmic representation of this cycle may look like this:
{testDataGeneration(D);  // D: test data set
 T = empty;              // T: report
 while (not empty(D))
   {d = removeFrom(D);
    d' = P(d);
    if (not(oracle(d, d'))) {add(d, T);}}
 analyze(T);}
A SOFTWARE TESTING TAXONOMY
In this cycle, the phases of test data generation, test execution, and test analysis take place sequentially.
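As a runnable illustration, the sequential cycle can be sketched in Python; the product P, the oracle, and the test data below are stand-ins invented for the example:

```python
def P(d):
    # Product under test: absolute value, with one seeded fault at d == 3.
    return -d if d < 0 else (d if d != 3 else -3)

def oracle(d, d_prime):
    # Expected behavior: P(d) == |d|.
    return d_prime == abs(d)

# Phase 1: test data generation.
D = list(range(-5, 6))
# Phase 2: test execution, recording failing data in the report T.
T = []
for d in D:
    d_prime = P(d)
    if not oracle(d, d_prime):
        T.append(d)
# Phase 3: test outcome analysis.
print(T)  # [3] -- the seeded fault is exposed
```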
• A semisequential life cycle, where the execution of tests pauses whenever a failure is observed; this life cycle may be adopted if we want to remove faults as the test progresses. An algorithmic representation of this cycle may look like this:
{testDataGeneration(D);  // D: test data set
 while (not empty(D))
   {repeat
      {d = removeFrom(D);
       d' = P(d);}
    until not(oracle(d, d'));
    offLineAnalysis(d);  // fault diagnosis and removal
   }
}
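The semisequential cycle can likewise be sketched in Python; the product, oracle, and the offline-analysis stand-in below are invented for illustration, and the pause-on-failure structure is rendered with a simple check inside the loop:

```python
def P(d):
    # Product under test: squaring, with seeded faults at inputs 2 and 4.
    return d * d if d not in (2, 4) else 0

def oracle(d, d_prime):
    return d_prime == d * d

def off_line_analysis(d):
    # Stand-in for fault diagnosis and removal triggered by the failure at d.
    print(f"diagnosing failure at input {d}")

D = [0, 1, 2, 3, 4, 5]
failures = []
while D:
    d = D.pop(0)
    d_prime = P(d)
    if not oracle(d, d_prime):
        # Testing pauses here: the fault is analyzed before tests resume.
        failures.append(d)
        off_line_analysis(d)
print(failures)  # [2, 4]
```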
• An iterative life cycle, which integrates the test data generation into the iteration. An algorithmic representation of this cycle may look like this:
{while (not completeTest())
   {d = generateTestData();
    d' = P(d);
    if (oracle(d, d')) {successfulTest(d);}
    else {unsuccessfulTest(d);}
   }
}
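A Python sketch of the iterative cycle follows; the product, oracle, completion criterion (a fixed budget of 100 trials), and input range are all invented for illustration:

```python
import random

def P(d):
    # Product under test: clamp to [0, 10], with a seeded bug above 8.
    return min(d, 8)  # bug: should be min(d, 10)

def oracle(d, d_prime):
    return d_prime == max(0, min(d, 10))

random.seed(0)
passed, failed = [], []
trials = 0

def complete_test():
    # Completion criterion: a fixed test budget.
    return trials >= 100

while not complete_test():
    d = random.randint(0, 12)  # test data generated inside the iteration
    d_prime = P(d)
    (passed if oracle(d, d_prime) else failed).append(d)
    trials += 1

print(len(failed))  # failures occur exactly for inputs above 8
```

Because data generation happens inside the loop, this cycle fits random test data generation naturally, as the table below indicates.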
The following shows how the value of this attribute may depend on the primary attributes of goal and method:
• Structural test data generation: semisequential life cycle.
• Functional test data generation: sequential life cycle.
• Random test data generation: iterative life cycle (for estimating the frequency of failures as well as ensuring their infrequency).
7.2 A CLASSIFICATION SCHEME 133
Test Assumptions: A test can be characterized by the assumptions it makes about the product under test and/or about the environment in which it runs. As such, this attribute can take three values, depending on the scale of the product being tested, as shown below.
• Unit: the oracle/specification of the unit is not in question; only the unit’s correctness is.
• Subsystem: only the targeted subsystem is in question, not the remainder of the system.
• System: the test environment mimics the product’s operating environment.
Test Completion: Test completion is the condition under which the test activity is deemed to achieve its goal. Such conditions are as follows:
• The software product has passed the certification standard.
• The software product has performed to the satisfaction of the user.
• It is felt that all relevant faults have been diagnosed and removed.
• The reliability of the software product has been estimated.
• The reliability of the software product has grown beyond the required threshold.
• The test data generated for the test have been exhausted, and so on.
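The conditions above can be combined into a simple completion predicate; the sketch below (parameter names, thresholds, and measurements are invented for illustration) declares the test complete as soon as any configured criterion holds:

```python
def test_complete(coverage=0.0, target_coverage=None,
                  reliability=None, reliability_threshold=None,
                  remaining_test_data=0):
    """Return True as soon as any configured completion criterion holds."""
    if target_coverage is not None and coverage >= target_coverage:
        return True  # structural goal: target level of coverage achieved
    if (reliability is not None and reliability_threshold is not None
            and reliability >= reliability_threshold):
        return True  # reliability goal: target threshold reached or exceeded
    if remaining_test_data == 0:
        return True  # the test data generated for the test are exhausted
    return False

print(test_complete(coverage=0.95, target_coverage=0.9,
                    remaining_test_data=40))  # True: coverage target met
print(test_complete(coverage=0.5, target_coverage=0.9,
                    remaining_test_data=40))  # False: no criterion holds
```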
The following illustrates how this attribute depends on the goal of the test and the test data generation method:
• Structural test data generation: target level of coverage achieved.
• Estimating frequency of failures, ensuring infrequency of failures: target threshold reached or exceeded.
Required Artifacts: Many artifacts may be needed to conduct a test, including any combination of the following artifacts:
• The source code
• The executable code
• The product specification
• The product’s intended function
• The product’s design
• The signature of the software product (i.e., a specification of its input space)
• The usage pattern of the software product (i.e., a probability distribution over its input space)
• The test data generated for the test
This attribute depends on virtually all four primary attributes; for the sake of parsimony, we only show its dependence on the goal of testing and on the test data generation method.
• Structural test data generation: Source + Executable + Function (fault removal, proving absence of faults); Source + Executable + Function + Signature + Usage pattern (estimating frequency of failures, ensuring infrequency of failures).
• Functional test data generation: Executable + Specification (fault removal, proving absence of faults); Executable + Specification + Signature + Usage pattern (estimating frequency of failures, ensuring infrequency of failures).
• Random test data generation: Executable + Function (fault removal, proving absence of faults); Executable + Specification + Signature + Usage pattern (estimating frequency of failures, ensuring infrequency of failures).
Stakeholders: A stakeholder in a test is a party that has a role in the execution of the test, or has a role in the production of the software asset being tested, or has a stake in the outcome of the test. Possible stakeholders include the product developer,
the product specifier, the product user, the quality assurance team, the verification and validation team, the configuration management team, and so on. The following table shows how this attribute depends on the goal of the test and the scale of the asset.
• Unit: unit developer (fault removal, proving absence of faults); unit developer and CM/QA team (estimating frequency of failures, ensuring infrequency of failures).
• Subsystem (maintenance): subsystem developer and maintenance engineer (fault removal, proving absence of faults); subsystem developer, maintenance engineer, and CM/QA team (estimating frequency of failures, ensuring infrequency of failures).
• System: verification and validation team and design team (fault removal, proving absence of faults); specifier team, design team, and end users (estimating frequency of failures, ensuring infrequency of failures).
Test Environment: The environment of a test is the set of interfaces that the product under test interacts with as it executes. The following shows the different values that this attribute may take, depending on the goal of the test and the scale of the software product under test.
• Unit: the development environment.
• Subsystem (maintenance): the software system, within the development environment.
• System: a simulated operating environment (fault removal, proving absence of faults); the operating environment (estimating frequency of failures, ensuring infrequency of failures).
Position in the Life Cycle: As we have seen in Chapter 3, several phases of the software life cycle include a testing activity. The testing activity at each phase can be characterized by primary attributes; the following shows how the goal of testing and the scale of the product under test determine the phase at which each test activity takes place.
• Unit: unit testing (fault removal, proving absence of faults); adding the asset into the project configuration (estimating frequency of failures, ensuring infrequency of failures).
• Subsystem: maintenance.
• System: integration testing (fault removal); acceptance testing (proving absence of faults); reliability estimation (estimating frequency of failures); reliability growth testing (ensuring infrequency of failures).