12.3.2 Pre-Generated Test Data

In the previous section, we discussed how to develop a test driver using randomly generated test data. Because we had a way to generate data on demand, we focused the test driver on the rules of the specification, letting the rules determine what form the test data takes: for each rule, we generated test data that exercises the implementation of that rule. In this section, we take a complementary approach, which is driven by test data generation, in the following sense: we consider the test data on which candidate implementations must be executed, and for each test case, we design a test oracle by invoking all the rules that apply to that test case. This technique is best illustrated by an example, using the stack specification.

As we recall from Chapter 9, the criterion of visiting all the (virtual) states of the stack as well as making all the state transitions produced the following set of test data for stack implementations; in order to meet this data selection criterion, we must run the candidate implementation on all these test sequences. As far as the test oracle is concerned, our discussions in Chapter 11 stipulate that for each sequence we must invoke all the applicable rules, and consider that a candidate implementation is successful for a particular input sequence if and only if it satisfies all applicable rules.

Observer applied directly to each prefix:

    init.top                  init.push(a).top              init.push(a).push(b).top
    init.size                 init.push(a).size             init.push(a).push(b).size
    init.empty                init.push(a).empty            init.push(a).push(b).empty

With an interposed init:

    init.init.top             init.push(a).init.top         init.push(a).push(b).init.top
    init.init.size            init.push(a).init.size        init.push(a).push(b).init.size
    init.init.empty           init.push(a).init.empty       init.push(a).push(b).init.empty

With an interposed push(b):

    init.push(b).top          init.push(a).push(b).top      init.push(a).push(b).push(b).top
    init.push(b).size         init.push(a).push(b).size     init.push(a).push(b).push(b).size
    init.push(b).empty        init.push(a).push(b).empty    init.push(a).push(b).push(b).empty

With an interposed pop:

    init.pop.top              init.push(a).pop.top          init.push(a).push(b).pop.top
    init.pop.size             init.push(a).pop.size         init.push(a).push(b).pop.size
    init.pop.empty            init.push(a).pop.empty        init.push(a).push(b).pop.empty

For the sake of illustration, we test a candidate implementation on a small sample of this test data set, drawn from the tables above. For each selected test case, we cite in the table below all the rules that apply to the case, as well as the input sequences that must be invoked in the process of applying the rule.

Test Case                               Applicable Rule   Resulting Oracle
init.push(a).init.top                   Init Rule         init.push(a).init.top = init.top
init.push(_).push(_).push(a).size       Size Rule         init.push(_).push(_).push(a).size = 1 + init.push(_).push(_).size
init.push(_).push(a).push(b).empty      Empty Rule        init.push(_).push(a).empty => init.push(_).push(a).push(b).empty
init.push(_).push(_).push(a).pop.top    Push Pop Rule     init.push(_).push(_).push(a).pop.top = init.push(_).push(_).top

To this end, we develop the following program:

#include <iostream>
#include <cassert>
#include "stack.cpp"
#include "rand.cpp"

using namespace std;

typedef int boolean;
typedef int itemtype;
const int Xsize = 5;
const itemtype paramrange = 8;   // drawing parameter to push()

// random number generators
int randnat(int rangemax);
int gt0randnat(int rangemax);

/* State Variables */
stack s;                         // test object
int nbtest, nbf;                 // number of tests, failures
itemtype a, b, c;                // push() parameters
bool storeempty;
itemtype storetop;
int storesize;

int main () {
  /* initialization */
  nbf = 0; nbtest = 0;           // counting the number of tests and failures
  SetSeed(825);                  // random number generator
  a = randnat(paramrange);
  b = randnat(paramrange);
  c = randnat(paramrange);

  // first test case: init.push(a).init.top
  // Init Rule
  nbtest++;
  s.sinit(); s.push(a); s.sinit(); storetop = s.top();
  s.sinit();
  if (!(s.top() == storetop)) {nbf++;}

  // second test case: init.push().push().push(a).size
  // Size Rule
  nbtest++;
  s.sinit(); s.push(c); s.push(b); s.push(a); storesize = s.size();
  s.sinit(); s.push(c); s.push(b);
  if (!(storesize == 1 + s.size())) {nbf++;}

  // third test case: init.push().push(a).push(b).empty
  // Empty Rule
  nbtest++;
  s.sinit(); s.push(c); s.push(a); s.push(b); storeempty = s.empty();
  s.sinit(); s.push(c); s.push(a);
  if (!(!(s.empty()) || storeempty)) {nbf++;}

  // fourth test case: init.push().push().push(a).pop.top
  // Push Pop Rule
  nbtest++;
  s.sinit(); s.push(c); s.push(b); s.push(a); s.pop(); storetop = s.top();
  s.sinit(); s.push(c); s.push(b);
  if (!(s.top() == storetop)) {nbf++;}

  cout << "failure rate: " << nbf << " out of " << nbtest << endl;
}

Execution of this program produces the following output:

failure rate: 0 out of 4

Hence the candidate program passed these tests successfully. Combining the test data generated in Chapter 9 with the oracle design techniques of Chapter 11 produces a complex test driver; fortunately, it is not difficult to automate the generation of the test driver from the test data and the rules.

12.3.3 Faults and Fault Detection

The test drivers we have generated in Sections 12.3.1 and 12.3.2 are both based on the ADT specification and hence can be developed and deployed on a candidate ADT without having to look at the ADT. The executions we have reported in Sections 12.3.1 and 12.3.2 refer in fact to two distinct implementations:

• A traditional implementation based on an array and an index.
• An implementation based on a single integer that stores the elements of the stack as successive digits in a numeric representation; the base of the numeric representation is determined by the number of symbols that we wish to store in the stack.

The motivation for having two implementations is to highlight that the test driver does not depend on candidate implementations; the purpose of the second implementation, counterintuitive as it is, is to highlight the fact that our specifications are behavioral, that is, they specify exclusively the externally observable behavior of software systems, and make no assumption or prescription on how this behavior ought to be implemented. Also note that the behavioral specifications that we use do not specify individually the behavior of each method; rather, they specify collectively the inter-relationships between these methods, leaving all the necessary latitude to the designer to decide on the representation and the manipulation of the state data. The header files of the two implementations are virtually identical, except for different variable declarations (an array and an index in the first case; a single integer and a constant base in the second). The .cpp files are shown below:
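For reference, the shared interface might look as follows; this is our own sketch of stack.h (the text does not show it), with the array bound and member names chosen purely for illustration:

```cpp
// Hypothetical sketch of stack.h (not shown in the text). The scalar-based
// variant keeps the same public interface but replaces the private data
// with a single integer n and a constant base (= 8).
typedef int itemtype;

class stack {
  public:
    stack ();
    void sinit ();                 // (re)initialize to the empty stack
    void push (itemtype sitem);
    void pop ();
    itemtype top ();
    int size ();
    bool empty () const;
  private:
    itemtype sarray[100];          // array-based representation (bound is illustrative)
    int sindex;                    // index of the current top element
};
```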

//******************************************************
// Array-based C++ implementation for the stack ADT.
// File stack.cpp, refers to header file stack.h.
//******************************************************

#include "stack.h"

stack :: stack () { };

void stack :: sinit ()
  {sindex = 0;}

bool stack :: empty () const
  {return (sindex == 0);}

void stack :: push (itemtype sitem)
  {sindex++; sarray[sindex] = sitem;}

void stack :: pop ()
  {if (sindex > 0)        // stack is not empty
     {sindex--;}
  }

itemtype stack :: top ()
  {int error = -9999;
   if (sindex > 0) {return sarray[sindex];}
   else {return error;}
  }

int stack :: size ()
  {return sindex;}

As for the integer-based implementation, it is written as follows:

//******************************************************
// Scalar-based C++ implementation for the stack ADT.
// File stack.cpp, refers to header file stack.h.
// base is declared as a constant in the header file (= 8).
//******************************************************

#include "stack.h"
#include <math.h>

stack :: stack () { };

void stack :: sinit ()
  {n = 1;}

bool stack :: empty () const
  {return (n == 1);}

void stack :: push (itemtype sitem)
  {n = n*base + sitem;}

void stack :: pop ()
  {if (n > 1)             // stack is not empty
     {n = n / base;}
  }

itemtype stack :: top ()
  {int error = -9999;
   if (n > 1) {return n % base;}
   else {return error;}
  }

int stack :: size ()
  {return (int) (log(n)/log(base));}


In order to assess the effectiveness of the test drivers we have developed, we introduce faults into the array-based implementation and the scalar-based implementation, and observe whether the test drivers detect the resulting failures.

Considering the array-based implementation, we present below some modifications we have made to the code, and document how they affect the performance of the test drivers (the test driver that generates random test data, presented in Section 12.3.1, and the test driver that uses pre-generated test data, presented in Section 12.3.2).

Locus     Modification                                Random test data       Pre-generated test data
pop()     sindex>0 → sindex>1                         failure rate:          failure rate:
                                                      561 out of 10000       0 out of 4
push()    sindex++; sarray[sindex]=sitem; →           failure rate:          failure rate:
          sindex++;                                   19 out of 10000        0 out of 4
push()    sindex++; sarray[sindex]=sitem; →           failure rate:          failure rate:
          sarray[sindex]=sitem; sindex++;             1964 out of 10000      1 out of 4

For the scalar-based implementation, we find the following results:

Locus     Modification                                Random test data       Pre-generated test data
pop()     n>1 → n>=1                                  failure rate:          failure rate:
                                                      281 out of 10000       0 out of 4
sinit()   n=1 → n=0                                   failure rate:          failure rate:
                                                      822 out of 10000       0 out of 4
push()    n=n*base+sitem →                            failure rate:          failure rate:
          n=n+base*sitem                              1047 out of 10000      2 out of 4
