8.3 EXAMPLES OF MODEL VERIFICATION
In this section we employ the theory reviewed in Section 8.2 to demonstrate practical verification methods. We present verification examples in two systems: a single workstation and a system consisting of two workstations in tandem.
8.3.1 Model Verification in a Single Workstation
Consider a workstation (represented as a single-server queue), whose Arena model is exhibited in Figure 8.3. The workstation houses a single machine (server). Job processing times are exponentially distributed with a mean of 4 hours (i.e., with processing rate $\mu = 0.25$ jobs per hour), while job interarrival times are uniformly distributed between 1 and 8 hours. The workstation buffer has a finite capacity of four jobs (excluding any job being processed), so that jobs arriving at a full buffer are lost.
Jobs (entities) are generated in the Create module, called Create Arrivals, and their arrival times are stored in the ArrTime attribute of each incoming entity in the Assign module called Assign Arrival Time. An incoming job entity then enters the Decide module, called Is Buffer Full?, to check if there is space available in the buffer. If no space is available, the job entity proceeds to the Record module, called Count Rejected, to increment the count of rejected units, and is then disposed of. Otherwise, the job entity enters the Seize module, called Seize Server, where it attempts to seize the
150 Model Goodness: Verification and Validation
Figure 8.3 Arena model of a single workstation model.
resource Server, possibly waiting for its turn in the associated queue, Server_Queue. Once the job entity succeeds in seizing the server, it proceeds to the Delay module, called Processing Time, and stays there for the requisite processing time. On service completion, it enters the Release module, called Release Server, where it releases the
Server resource. Finally, the job entity traverses three consecutive Record modules to tally its system time (module Tally System Time) and the time since the previous departure (module Tally Interdeparture Time), and to update the number of jobs completed (module Count Processed). The job entity is then disposed of and leaves the system.
Figure 8.4 displays summary statistics generated by a simulation run of the single-workstation model for 100,000 hours. We now proceed to verify various performance measures of the model, starting with throughput (note that hats, or carets, denote estimated values). For a given replication r,
a throughput estimator, $\hat{o}_1(r)$, for Eq. 8.7 is given by

$$\hat{o}_1(r) = \frac{D(r)}{T(r)},$$

where $D(r)$ is the number of jobs processed to completion during replication $r$, and $T(r)$ is the length (duration) of replication $r$.
Next, an examination of Figure 8.4 verifies that replication 1 ran for $T(1) = 100{,}000$ hours, and the Counter section reveals that $D(1) = 20{,}661$ jobs were processed to completion, while 1,578 jobs were rejected. Thus, the estimated throughput for replication 1 is given by
$$\hat{o}_1(1) = \frac{20{,}661}{100{,}000} = 0.2066 \text{ jobs/hour}.$$
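This first estimate is plain arithmetic on the replication's counter output, so it can be reproduced directly as a quick sanity check (the counts below are those quoted from the run in Figure 8.4):

```python
# Throughput estimate o1(r) = D(r) / T(r) for replication r = 1,
# using the counts reported in Figure 8.4.
D = 20_661       # jobs processed to completion, D(1)
T = 100_000      # replication length in hours, T(1)

o1 = D / T
print(f"estimated throughput: {o1:.4f} jobs/hour")  # 0.2066
```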
On the other hand, the Resources section shows that utilization was estimated as $\hat{u} = 0.8392$, and by the system description, while the workstation was busy, it processed jobs at the rate of $\mu = 0.25$ jobs per hour. Using Eq. 8.7, an alternative estimate of the throughput is given by

$$\hat{o}_2(r) = \hat{u}\,\mu = 0.8392 \times 0.25 = 0.2098 \text{ jobs/hour}.$$
Furthermore, yet another alternative estimate of the throughput, using the mean interdeparture time from the Tally section, is given by

$$\hat{o}_3(r) = \frac{1}{\widehat{E}[\text{interdeparture time}]} = 0.2066 \text{ jobs/hour}.$$
Finally, recall from Eq. 8.9 that the effective input rate should be the same as the effective output rate, which is simply the throughput. Let $1 - p_K$ be the probability that an arriving job succeeds in entering the workstation (i.e., finds the buffer not full). The Counter section shows that the number of jobs that succeeded in entering the workstation was 20,661 out of a total of $20{,}661 + 1{,}578 = 22{,}239$ jobs that arrived at the workstation. We then estimate

$$1 - \hat{p}_K = \frac{20{,}661}{22{,}239} = 0.9290.$$

Since from Eq. 8.9 the effective arrival rate is the throughput, it follows that a fourth estimate of the throughput is simply

$$\hat{o}_4(r) = \lambda (1 - \hat{p}_K) = \frac{1}{4.5} \times 0.9290 = 0.2065 \text{ jobs/hour},$$

where $\lambda = 1/4.5$ jobs per hour is the arrival rate corresponding to the Unif(1, 8) interarrival-time distribution.
The fact that the four throughput estimates, $\hat{o}_1(r)$, $\hat{o}_2(r)$, $\hat{o}_3(r)$, and $\hat{o}_4(r)$, are quite close, even though they were computed in different ways, is reassuring. This fact would tend to increase our confidence that the computed throughput is correct. Furthermore, the estimate $\hat{o}_2(r)$ supports the correctness of the utilization value, and the estimate $\hat{o}_4(r)$ supports the correctness of the probabilities that the system is full (and not full) at arrival instants. Consequently, these verification facts support the contention that the model is correct.
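Because the four estimators are all simple functions of event counts and time averages, the cross-check itself is easy to reproduce outside Arena. The sketch below is a minimal discrete-event simulation of the same workstation written in Python; it is an illustrative re-implementation (the function name and structure are assumptions, not the Arena model). It computes the analogues of $\hat{o}_1$ (completed jobs over run length), $\hat{o}_2$ (utilization times service rate), and $\hat{o}_4$ (arrival rate times the estimated probability of entering), which should come out close to one another:

```python
import random

def simulate_workstation(horizon=100_000.0, buffer_cap=4, seed=12345):
    """Single-server queue with Unif(1, 8) interarrival times, Exp(mean 4)
    processing times, and a 4-job buffer (jobs arriving at a full buffer
    are lost)."""
    rng = random.Random(seed)
    next_arrival = rng.uniform(1, 8)
    next_departure = float("inf")
    in_system = 0                 # job in service plus jobs in buffer
    processed = rejected = 0
    busy_time = 0.0               # accumulated server busy time
    last_event = 0.0

    while min(next_arrival, next_departure) <= horizon:
        t = min(next_arrival, next_departure)
        if in_system > 0:                      # server busy since last event
            busy_time += t - last_event
        last_event = t
        if next_arrival <= next_departure:     # arrival event
            if in_system >= buffer_cap + 1:    # buffer full: job is lost
                rejected += 1
            else:
                in_system += 1
                if in_system == 1:             # server was idle: start service
                    next_departure = t + rng.expovariate(0.25)
            next_arrival = t + rng.uniform(1, 8)
        else:                                  # departure event
            in_system -= 1
            processed += 1
            next_departure = (t + rng.expovariate(0.25)
                              if in_system > 0 else float("inf"))

    o1 = processed / horizon                   # D(r) / T(r)
    o2 = (busy_time / horizon) * 0.25          # utilization * service rate mu
    o4 = (1 / 4.5) * processed / (processed + rejected)  # lambda * (1 - p_K)
    return o1, o2, o4

o1, o2, o4 = simulate_workstation()
print(f"o1={o1:.4f}  o2={o2:.4f}  o4={o4:.4f}")
```

With a different random seed the individual values shift slightly, but the three estimates track one another closely, which is exactly the consistency property the verification exploits.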
Another common verification check is based on Little's formula in Eq. 8.8. Note that the formula asserts that there are two ways to estimate the mean number of jobs in the system, $N_S$: one is to estimate it directly from the simulation, and the other is to form the product of the estimated (effective) arrival rate $\hat{\lambda}_{\text{eff}}$ and the estimated mean system time $\hat{W}_S$. Now, from the Queues section in Figure 8.4, the estimated mean number of jobs in the buffer alone is $\hat{N}_b = 1.3705$, while the estimated mean delay per job in the buffer is $\hat{W}_b = 6.6325$ hours. Applying Little's formula (Eq. 8.8) to the buffer alone results in the relation
$$N_b = \lambda_{\text{eff}} W_b.$$

Estimating each side separately yields

$$\hat{N}_b = 1.3705$$

and

$$\hat{\lambda}_{\text{eff}} \hat{W}_b = 0.2066 \times 6.6325 = 1.3703.$$

Note that $\hat{N}_b$ and $\hat{\lambda}_{\text{eff}} \hat{W}_b$ are approximately equal, thereby providing additional support for the correctness of the estimates $\hat{N}_b$ and $\hat{W}_b$. In a similar vein, Little's formula for the entire system can be written as

$$N_S = \lambda_{\text{eff}} W_S,$$
where $\hat{N}_S = \hat{N}_b + \hat{u} = 1.3705 + 0.8392 = 2.2097$, while the product $\hat{\lambda}_{\text{eff}} \hat{W}_S$, computed from the estimated throughput and the mean system time in the Tally section, evaluates to approximately 2.2085.
Again the two computations yield acceptably close values. In fact, they should come closer and closer as the simulation run length is increased.
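These consistency checks are pure arithmetic on reported statistics, so they are easy to script. A minimal sketch, using the values quoted above from Figure 8.4 (the variable names are illustrative):

```python
# Reported estimates from the single-workstation run (Figure 8.4)
throughput = 0.2066   # estimated effective arrival rate (jobs/hour)
N_b = 1.3705          # average number of jobs in the buffer
W_b = 6.6325          # average delay per job in the buffer (hours)
util = 0.8392         # server utilization

# Little's formula applied to the buffer alone: N_b ?= lambda_eff * W_b
buffer_rhs = throughput * W_b
buffer_err = abs(N_b - buffer_rhs) / N_b

# For the whole system, the direct estimate is buffer content plus
# the mean number in service (the utilization).
N_S = N_b + util

print(f"buffer: {N_b:.4f} vs {buffer_rhs:.4f} (rel. err {buffer_err:.2%})")
print(f"system: N_S = {N_S:.4f}")
```

Scripting the check this way makes it trivial to repeat after every model change, which is how such verification tests are typically used in practice.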
The previous example exhibits multiple verification tests, some of which were overlapping. In practice, it usually suffices to perform a set of nonoverlapping verification tests. Of course, the more extensive the verification effort, the higher is the modeler’s confidence in the model. Note carefully, however, that verification does not conclusively prove program correctness. Rather, it merely provides circumstantial evidence for model correctness, indicating the absence of credible evidence to the contrary.
8.3.2 Model Verification in Tandem Workstations
In this example, we consider a system of two workstations in tandem: an assembly workstation followed by a painting workstation, as shown in Figure 8.5. Thus, the output of the first workstation is funneled as input to the second.
Suppose that neither workstation has buffer capacity limitations, so that all arriving jobs can enter the first workstation and eventually proceed to the second. Assume further that the arrival stream consists of two types of jobs: type 1 and type 2, each undergoing its own assembly and painting operations. Job interarrival times are exponentially distributed with means 4 hours for type 1 and 10 hours for type 2. For type 1 jobs, assembly times (in hours) are distributed according to the Tria(1, 2, 3) distribution, while painting times are deterministic, lasting 3 hours. For type 2 jobs, assembly times (again in hours) are distributed according to the Tria(1, 3, 8) distribution, while painting times are deterministic, lasting 2 hours. We simulate this scenario for 100,000 hours to estimate resource utilizations and mean flow times by job type, and average WIP (work-in-process) levels in each buffer.
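Since each station is a single FIFO server with unlimited buffer, the tandem line can also be sketched directly with the standard single-server recurrence: a job's service start is the later of its arrival and the server's previous completion. The Python sketch below is an illustrative re-implementation of the scenario just described, not the Arena model; names and structure are assumptions:

```python
import random

def simulate_tandem(horizon=100_000.0, seed=7):
    """Two single-server FIFO stations in tandem (assembly -> painting),
    fed by two job types with exponential interarrival times."""
    rng = random.Random(seed)
    # Generate and merge the two arrival streams over [0, horizon).
    jobs = []
    for jtype, mean_ia in ((1, 4.0), (2, 10.0)):
        t = rng.expovariate(1.0 / mean_ia)
        while t < horizon:
            jobs.append((t, jtype))
            t += rng.expovariate(1.0 / mean_ia)
    jobs.sort()

    assy_free = paint_free = 0.0      # next time each server is free
    assy_busy = paint_busy = 0.0      # accumulated service demand per server
    flow = {1: [], 2: []}             # flow times (arrival to painting exit)
    for arrival, jtype in jobs:
        # Assembly: Tria(1,2,3) for type 1, Tria(1,3,8) for type 2.
        # Note random.triangular takes (low, high, mode).
        s1 = rng.triangular(1, 3, 2) if jtype == 1 else rng.triangular(1, 8, 3)
        s2 = 3.0 if jtype == 1 else 2.0   # deterministic painting times
        assy_free = max(arrival, assy_free) + s1
        assy_busy += s1
        paint_free = max(assy_free, paint_free) + s2
        paint_busy += s2
        flow[jtype].append(paint_free - arrival)

    u_assy = assy_busy / horizon
    u_paint = paint_busy / horizon
    mean_flow = {k: sum(v) / len(v) for k, v in flow.items()}
    return u_assy, u_paint, mean_flow

u_assy, u_paint, mean_flow = simulate_tandem()
print(f"assembler utilization ~= {u_assy:.3f}, painter ~= {u_paint:.3f}")
```

Here utilization is approximated as total service demand over the horizon, which is adequate for a sketch; the estimates should land near the nominal values derived later in this section (about 0.9 for the assembler and 0.95 for the painter).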
The Arena model corresponding to Figure 8.5 is depicted in Figure 8.6. The model has two Create modules, one for each job type. Arrival times, job types, and processing times are all specified in the respective Assign modules (Assign Type 1 Parameters and
Assembly
Painting
Figure 8.5 Two workstations in tandem.
Figure 8.6 Arena model of the two-workstation flow line.
Assign Type 2 Parameters). The dialog box of the first Assign module is displayed in Figure 8.7.
Each job first seizes the Assembler resource via the Seize module, called Seize Assembler, whose dialog box is displayed in Figure 8.8. Jobs wait in the queue Assembler_Queue while the resource Assembler is busy. The dialog box for this resource is displayed in Figure 8.9. Note the Resource State field in Figure 8.9. The option Type instructs the corresponding Seize module to set the state of its Assembler resource according to the type of the job entity being processed. Recall that the job attribute Type is set to the type of the job, namely, 1 or 2. When a job entity seizes the resource Assembler, the resource's state is assigned that job’s type. It therefore
Figure 8.7 Dialog box for the Assign module for type 1 jobs.
Figure 8.8 Dialog box for module Seize Assembler.
Figure 8.9 Dialog box for resource Assembler.
becomes possible, in principle, to collect resource statistics by job type, such as the percentage of time each job type keeps resource Assembler busy. However, this approach requires a definition in the StateSet module for each resource that aims to collect statistics by entity type. The resultant spreadsheet view of the Resource module is displayed in Figure 8.10.
Since each resource has a user-defined StateSet module entry, each such entry has to be defined as a separate row, as shown in Figure 8.11.
Figure 8.10 Dialog spreadsheet of the Resource module with StateSet module names.
Figure 8.11 Dialog spreadsheet of the StateSet module (left) and its States dialog spreadsheet (right).
In Figure 8.11, the dialog spreadsheet at the left shows that the resources Assembler and Painter each have two states, while the dialog spreadsheet at the right indicates that the user-assigned resource state names are Processing 1 and Processing 2. Each resource state will be assigned according to the type of job that seizes it.
After leaving module Seize Assembler, the job entity proceeds to the Delay module, called Assembly Operation, whose dialog box is displayed in Figure 8.12.
Figure 8.12 Dialog box of the Delay module Assembly Operation.
Its Delay Time field stipulates that the job's delay time (assembly time) is supplied by the job entity's attribute called Assembly Time. When the assembly operation is completed (i.e., the associated delay is concluded), the job entity proceeds to the Release module, called Release Assembler, to release the resource Assembler. The job entity then proceeds to carry out the painting operation analogously via the sequence of modules Seize Painter, Painting Operation, and Release Painter.
Finally, the job entity proceeds to the Record module, called Tally Flow Times, to tally its flow time, measured from the point of arrival at the assembly workstation (that
Figure 8.13 Dialog box (top) and spreadsheet view (bottom) for the Record module tallying flow times.
arrival time is stored in the job entity's ArrTime attribute). Figure 8.13 displays the dialog spreadsheet for this Record module (top), as well as the spreadsheet view of Record modules (bottom). Note that this Record module uses the Time Interval option in the Type field to collect job flow times. Furthermore, to indicate that flow times are to be tallied separately for each job type, the Record into Set check box is checked in the dialog box. Additionally, the Tally Set Name field specifies the tally set name, Flow Times, and the Set Index field is set to the Type attribute of incoming jobs to indicate flow-time tallying by job type. The tally set Flow Times is defined in the Set module from the Basic Process template panel, as shown in the associated dialog spreadsheets of Figure 8.14.
Figure 8.14 Dialog spreadsheet of the Set module (left) and its Members dialog spreadsheet (right).
The Members field on the left-side dialog spreadsheet specifies two separate tallies (the button labeled 2 rows). Clicking that button pops up the right-side dialog box, which specifies the names of the two tallies of set Flow Times as Type 1 FT and Type 2 FT (these correspond to flow-time tallies of type 1 jobs and type 2 jobs, respectively).
Finally, the Statistic module can be used to produce Frequency statistics representing estimated state probabilities of each resource. Figure 8.15 displays the corresponding dialog spreadsheet.
Figures 8.16 and 8.17 display the output statistics from a replication of the model depicted in Figure 8.6 of duration 100,000 hours.
Figure 8.15 Dialog box of the Statistic module with Frequency statistics.
The Resources section lists the utilization of each resource, while the Frequencies section displays the state probabilities for each resource, including resource utilization by job type. The Queues section contains statistics on delay times and average queue sizes (average WIP levels) for each resource queue. Finally, the User Specified section displays flow time statistics for each job type.
We begin our verification analysis with workstation utilization by job type. The arrival rates of type 1 and type 2 jobs at the assembly workstation are 0.25 and 0.1 jobs per hour, respectively. Further, the mean assembly times are $(1 + 2 + 3)/3 = 2$ hours and $(1 + 3 + 8)/3 = 4$ hours for type 1 and type 2 jobs, respectively (recall that assembly times follow triangular distributions, whose mean is the average of the minimum, mode, and maximum). Thus, the partial utilizations of the assembly workstation by job type are

$$u(1) = 0.25 \times 2 = 0.5 \quad \text{and} \quad u(2) = 0.1 \times 4 = 0.4,$$

so that the total utilization is $u = u(1) + u(2) = 0.9$. The simulation results from Figure 8.16 estimate the utilization of the assembly workstation as $\hat{u} = 0.889$. The two values, $u = 0.9$ and $\hat{u} = 0.889$, are close, but would be even closer if we were to run the simulation longer. As a rule, models should be run longer under heavy traffic (high utilization). The individual workstation utilizations by job type can be verified for the painting workstation in a similar manner.
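The nominal utilizations follow from the rule u(k) = (arrival rate of type k) × (mean service time of type k), with the mean of a Tria(a, m, b) distribution equal to (a + m + b)/3. A quick sketch of the computation for both workstations (variable names are illustrative):

```python
# Arrival rates (jobs/hour) by job type
lam = {1: 0.25, 2: 0.10}

# Mean service times (hours): mean of Tria(a, m, b) is (a + m + b) / 3
mean_assembly = {1: (1 + 2 + 3) / 3,   # = 2 hours
                 2: (1 + 3 + 8) / 3}   # = 4 hours
mean_painting = {1: 3.0, 2: 2.0}       # deterministic painting times

# Partial utilization of each workstation by job type: u(k) = lam(k) * E[S(k)]
u_assembly = {k: lam[k] * mean_assembly[k] for k in lam}
u_painting = {k: lam[k] * mean_painting[k] for k in lam}

print("assembly:", u_assembly, "total:", sum(u_assembly.values()))
print("painting:", u_painting, "total:", sum(u_painting.values()))
```

The painting total of 0.95 can be compared against the simulated painter utilization in Figure 8.16 in the same way the assembly total of 0.9 was compared with 0.889.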
Next, we proceed to verify the flow times of the two job types simultaneously. Little's formula (Eq. 8.8) may be applied to the entire system, resulting in the relation

$$\hat{N}_S \approx \lambda \hat{W}_S,$$

where $\hat{N}_S$ is the simulation estimate of the mean total number of jobs in the system, $\lambda$ is the assumed total arrival rate of jobs at the system, and $\hat{W}_S$ is the simulation estimate of the combined mean flow time through the system over jobs of all types.
A numerical verification of Little's formula is carried out by the following computation. The total arrival rate is $\lambda = 0.25 + 0.1 = 0.35$ jobs per hour. The mean flow time of type 1 jobs was estimated by simulation as $\hat{W}_S(1) = 29.1943$ hours, while the mean flow time of type 2 jobs was estimated by simulation as $\hat{W}_S(2) = 28.4181$ hours.
Figure 8.16 Resources and Frequencies reports from a replication of the model in Figure 8.6.
The estimated combined mean flow time over all job types (weighted by the relative arrival rates) is

$$\hat{W}_S = \frac{0.25}{0.35} \times 29.1943 + \frac{0.1}{0.35} \times 28.4181 = 28.9725 \text{ hours}.$$
The mean number of jobs in the system is estimated from the simulation run as the sum of the average buffer sizes and utilizations of the two workstations, namely,
Figure 8.17 Queues and User Specified reports from a replication of the model in Figure 8.6.
$$\hat{N}_S = \sum_{k=1}^{2} \left[\hat{N}_b(k) + \hat{u}(k)\right] = 4.0814 + 4.0900 + 0.8894 + 0.9359 = 9.9967,$$

where $\hat{N}_b(k)$ and $\hat{u}(k)$ are the average queue size and utilization of workstation $k$ (assembly and painting, respectively).
Thus, we find that $\hat{N}_S = 9.9967$, while $\lambda \hat{W}_S = 0.35 \times 28.9725 = 10.1404$. It follows that Little's formula is satisfied with acceptable precision (less than 1.5% relative error). Again, the preceding two quantities can be expected to agree more closely as the length of the simulation run increases.
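The whole system-level verification reduces to a few lines of arithmetic on the reported statistics. A minimal sketch using the values quoted from Figures 8.16 and 8.17 (variable names are illustrative):

```python
# Estimated mean flow times by job type (hours), from the simulation report
lam1, lam2 = 0.25, 0.10
W1, W2 = 29.1943, 28.4181

lam = lam1 + lam2                        # total arrival rate = 0.35
W = (lam1 * W1 + lam2 * W2) / lam        # combined mean flow time (weighted)

# Mean jobs in system: average queue sizes plus utilizations, per station
N_hat = 4.0814 + 4.0900 + 0.8894 + 0.9359

rel_err = abs(N_hat - lam * W) / N_hat   # Little's-formula discrepancy
print(f"N_hat = {N_hat:.4f}, lam*W = {lam * W:.4f}, rel. err = {rel_err:.2%}")
```

Embedding the check in a script like this makes it convenient to rerun after every model modification or longer replication, which is precisely when the two sides should draw closer together.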