
2.3 MONTE CARLO SAMPLING AND HISTORIES

Monte Carlo simulation models incorporate randomness by sampling random values from specified distributions. The underlying algorithms and/or their code use random number generators (RNGs) that produce uniformly distributed values (“equally likely”) between 0 and 1; these values are then transformed to conform to a prescribed distribution. We add parenthetically that the full term is pseudo-RNG, to indicate that the numbers generated are not “truly” random (they can be reproduced algorithmically), but only random in a statistical sense; however, the prefix “pseudo” is routinely dropped for brevity. This general sampling procedure is referred to as Monte Carlo sampling. The name is attributed to von Neumann and Ulam for their work at Los Alamos National Laboratory (see Hammersley and Handscomb 1964), probably as an allusion to the famous casino at Monte Carlo and the relation between random number generation and casino gambling.
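As a concrete illustration of the transformation step, the sketch below (an illustrative fragment, not from the text) uses the standard inverse-transform method to turn uniform RNG output into exponentially distributed samples:

```python
import math
import random

def exponential_sample(rate, rng=random.random):
    """Transform a uniform sample U in (0, 1) into an exponential
    sample via the inverse-transform method: X = -ln(1 - U) / rate."""
    u = rng()
    return -math.log(1.0 - u) / rate

# Draw a few samples with mean 1/rate = 10 (e.g., 10-minute mean times)
samples = [exponential_sample(0.1) for _ in range(5)]
```

The same recipe applies to any distribution with an invertible cumulative distribution function; only the transformation formula changes.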

In particular, the random values sampled using RNGs are used (among other things) to schedule events at random times. For the most part, actual event times are determined by sampling an interevent time (e.g., interarrival times, times to failure, repair times, etc.) via an RNG, and then adding that value to the current clock time. More details are presented in Chapter 4.
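The clock-advance mechanism just described can be sketched as follows (the helper names are mine, and the full event-list machinery is deferred to Chapter 4):

```python
import heapq
import random

def schedule(event_list, clock, interevent_sampler, event_type):
    """Schedule a future event: sample an interevent time, add it to the
    current clock time, and push the (time, type) pair onto the event list."""
    event_time = clock + interevent_sampler()
    heapq.heappush(event_list, (event_time, event_type))

# Example: schedule the next arrival using exponential interarrival times
random.seed(1)
events = []
clock = 0.0
schedule(events, clock, lambda: random.expovariate(0.1), "arrival")
next_time, next_type = heapq.heappop(events)  # earliest scheduled event
```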

DES runs use a statistical approach to evaluating system performance; in fact, simulation-based performance evaluation can be thought of as a statistical experiment. Accordingly, the requisite performance measures of the model under study are not computed exactly, but rather, they are estimated from a set of histories. A standard statistical procedure unfolds as follows:

1. The modeler performs multiple simulation runs of the model under study, using independent sequences of random numbers. Each run is called a replication.

2. One or more performance measures are computed from each replication. Examples include average waiting times in a queue, average WIP (work in process) levels, and downtime probabilities.

3. The performance values obtained are actually random and mutually independent, and together form a statistical sample. To obtain a more reliable estimate of the true value of each performance metric, the corresponding values are averaged and confidence intervals about them are constructed. This is discussed in Chapter 3.
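The three-step procedure above can be sketched in code. The replication function here is a stand-in of my own devising (it merely averages random "waiting times"), and the confidence interval uses the normal approximation discussed in Chapter 3:

```python
import random
import statistics

def run_replication(seed):
    """Stand-in for one simulation run: returns a single performance
    measure (a mock 'average waiting time') computed from an independent
    random number stream identified by its seed."""
    rng = random.Random(seed)
    waits = [rng.expovariate(1.0) for _ in range(100)]
    return sum(waits) / len(waits)

# Step 1: multiple independent replications
values = [run_replication(seed) for seed in range(30)]

# Steps 2-3: average the replication values and build a 95% confidence interval
mean = statistics.mean(values)
half_width = 1.96 * statistics.stdev(values) / len(values) ** 0.5
ci = (mean - half_width, mean + half_width)
```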

16 Discrete Event Simulation

2.3.1 EXAMPLE: WORKSTATION SUBJECT TO FAILURES AND INVENTORY CONTROL

This section presents a detailed example that illustrates the random nature of DES modeling and simulation runs, including the random state and its sample paths. Our goal is to study system behavior and estimate performance metrics of interest. To this end, consider the workstation depicted in Figure 2.5.

The system consists of a machine, which never starves (always has a job to work on), and a warehouse that stores finished products (jobs). In addition, the machine is subject to failures, and its status is maintained in the random variable V(t), given by

V(t) = 0, if the machine is idle at time t,
       1, if the machine is busy at time t,
       2, if the machine is down at time t.

Note that one job must reside at the machine whenever its status is busy or down. The state S(t) is a pair of variables, S(t) = (V(t), K(t)), where V(t) is the status of the machine as described previously, and K(t) is the finished-product level in the warehouse, all at time t. For example, the state S(t) = (2, 3) indicates that at time t the machine is down (presumably being repaired), and the warehouse has an inventory of three finished product units. Customer orders (demand) arrive at the warehouse, and filled orders deplete the inventory by the ordered amount (orders that exceed the stock on hand are partially filled, the shortage simply goes unfilled, and no backorder is issued). The product unit processing time is 10 minutes. In this example, the machine does not operate independently, but rather is controlled by the warehouse as follows. Whenever the inventory level reaches or drops below r = 2 units (called the reorder point), the warehouse issues a replenishment request to the machine to bring the inventory up to the level of R = 5 units (called target level or base-stock level). In this case, the inventory level is said to down-cross the reorder point. At this point, the machine starts processing a sequence of jobs until the inventory level reaches the target value, R, at which point the machine suspends operation. Such a control regime is known as the (r, R) continuous-review inventory control policy (or simply the (r, R) policy), and the corresponding replenishment regime is referred to as a pull system. See Chapter 12 for detailed examples.
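The control logic of the (r, R) policy can be captured in a small state-update rule (a sketch; the function and variable names are mine):

```python
def update_machine_command(inventory, producing, r=2, R=5):
    """(r, R) continuous-review control: start production when the inventory
    reaches or drops below the reorder point r, and stop once it reaches
    the target level R."""
    if not producing and inventory <= r:
        return True   # replenishment request: machine starts working
    if producing and inventory >= R:
        return False  # target reached: machine suspends operation
    return producing  # otherwise keep the current mode

# The warehouse re-evaluates the command after every demand or job completion:
assert update_machine_command(1, False) is True   # down-crossed r = 2
assert update_machine_command(5, True) is False   # reached R = 5
assert update_machine_command(3, True) is True    # keep producing toward R
```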

Sample History

Suppose that events occur in the DES model of the workstation above in the order shown in Figure 2.6, which graphs the stock on hand in the warehouse as a function of time, and also tracks the status of the machine, V(t), over time. (Note that Figure 2.6 depicts a sample history, one of many possible histories that may be generated by the simulated system.)

Figure 2.5 Workstation subject to failures and inventory control.

Figure 2.6 Operational history of the system of Figure 2.5 (machine status and stock on hand versus time, in minutes).

An examination of Figure 2.6 reveals that at time t = 0, the machine is idle and the warehouse contains four finished units, that is, V(0) = 0 and K(0) = 4. The first customer arrives at the warehouse at time t = 35 and demands three units. Since the stock on hand can satisfy this order, it is depleted by three units, resulting in K(35) = 1; at this point, the reorder point, r, is down-crossed, triggering a replenishment request at the machine, which resumes the production of additional product in order to raise the inventory level to its target value, R. Note that the machine status changes concomitantly from idle to busy. During the next 30 minutes, no further demand arrives, and the inventory level climbs gradually as finished products arrive from the machine, until reaching a level of 4 at time t = 65.

At time t = 69, a second customer arrives and places a demand equal to or larger than the stock on hand, thereby depleting the entire inventory. Since unsatisfied demand goes unfilled, we have K(69) = 0. If backorders were allowed, then we would keep track of the backorder size, represented by the magnitude of the corresponding negative inventory.

At time t = 75, the unit that started processing at the machine at time t = 65 is finished and proceeds to the warehouse, so that K(75) = 1. Another unit is finished with processing at the machine at time t = 85.

At time t = 87, the machine fails and its repair begins (down state). The repair activity is completed at time t = 119 and the machine status changes to busy. While the machine is down, a customer arrives at time t = 101, and the associated demand decreases the stock on hand by one unit, so that K(101) = 1. At time t = 119, the repaired machine resumes processing of the unit whose processing was interrupted at the time of failure; that unit completes processing at time t = 127.

From time t = 127 to time t = 157 no customers arrive at the warehouse, and consequently the inventory reaches its target level, R = 5, at time t = 157, at which time the machine suspends production. The simulation run finally terminates at time T = 165.

Sample Statistics

Having generated a sample history of system operation, we can now proceed to compute associated statistics (performance measures).

Probability distribution of machine status. Consider the machine status over the time interval [0, T]. Let T_I be the total idle time over [0, T], T_B the total busy time over [0, T], and T_D the total downtime over [0, T]. The probability distribution of machine status is then estimated by the ratios of time spent in a state to total simulation time, namely,

Pr{machine idle} = T_I / T = [35 + (165 - 157)] / 165 = 0.261,

Pr{machine busy} = T_B / T = [(87 - 35) + (157 - 119)] / 165 = 0.545,

Pr{machine down} = T_D / T = (119 - 87) / 165 = 0.194.

In particular, the probability that the machine is busy coincides with the server utilization (the fraction of time the machine is actually busy producing). Note that all the probabilities above are estimated by time averages, which here assume the form of the fraction of time spent by the machine in each state (the general form of time averages is discussed in Section 9.3). The logic underlying these definitions is simple. If an outside observer “looks” at the system at random, then the probability of finding the machine in a given state is proportional to the total time spent by the machine in that state. Of course, the ratios (proportions) above sum to unity, by definition.
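The time-average computation above can be reproduced directly from the sample history; the interval endpoints below are taken from Figure 2.6:

```python
# Machine-status intervals from the sample history of Figure 2.6:
# idle on [0, 35) and [157, 165]; busy on [35, 87) and [119, 157); down on [87, 119)
intervals = {
    "idle": [(0, 35), (157, 165)],
    "busy": [(35, 87), (119, 157)],
    "down": [(87, 119)],
}
T = 165.0

# Time-average estimate: fraction of [0, T] spent in each state
probs = {
    state: sum(end - start for start, end in spans) / T
    for state, spans in intervals.items()
}
# probs["idle"], probs["busy"], probs["down"] round to 0.261, 0.545, 0.194
```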

Machine throughput. Consider the number of job completions, C_T, in the machine over the interval [0, T]. The throughput is a measure of effective processing rate, namely, the expected number of job completions (and, therefore, departures) per unit time, estimated by

C_T / T = 9 / 165 = 0.0545.

Customer service level. Consider customers arriving at the warehouse with a demand for products. Let N_S be the number of customers whose demand is fully satisfied over the interval [0, T], and N_T the total number of customers that arrived over [0, T]. The customer service level, x, is the probability of fully satisfying the demand of an arrival at the warehouse. This performance measure is estimated by

x = N_S / N_T = 2 / 3 = 0.6667,

assuming that the demand of the customer arriving at t = 69 is not fully satisfied. Note that the x statistic is a customer average, which assumes here the form of the relative frequency of satisfied customers (the general form of customer averages is discussed in Section 9.3). Additionally, letting J_k be the unmet portion of the demand of customer k (possibly 0), the customer average of unmet demands is given by

(1/M) Σ_{k=1}^{M} J_k,

where M is the total number of customers.
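Both customer averages can be computed from per-customer records of the sample history. The satisfied/unsatisfied outcomes below follow the narrative; the exact demand sizes (and hence the J_k values) for this history are not given in the text, so the unmet-demand average is shown only as a function:

```python
# Per-customer outcomes from the sample history (True = demand fully satisfied):
satisfied = [True, False, True]   # customers arriving at t = 35, 69, 101

# Customer service level: relative frequency of fully satisfied customers
service_level = sum(satisfied) / len(satisfied)   # N_S / N_T = 2/3

def average_unmet_demand(unmet_portions):
    """Customer average of unmet demands: (1/M) * sum of J_k over all M
    customers, where J_k is the unmet portion of customer k's demand
    (0 for a fully satisfied customer)."""
    return sum(unmet_portions) / len(unmet_portions)
```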


Table 2.1 Estimated distribution of finished products in the warehouse

k            0      1      2      3      4      5
Pr{K = k}    0.036  0.279  0.218  0.121  0.297  0.048

Probability distribution of finished products in the warehouse. Consider the probability that the long-term number of finished units in the warehouse, K, is at some given level, k. These probabilities are estimated by the expression

Pr{k units in stock} = (total time spent with k units in stock) / (total time),

and in particular, for k = 0,

Pr{stockout} = (total time spent with no units in stock) / (total time).

Suppressing the time index, the estimated distribution is displayed in Table 2.1. Summing the estimated probabilities above reveals that

Σ_{k=0}^{5} Pr{K = k} = 0.999

instead of Σ_{k=0}^{5} Pr{K = k} = 1, due to round-off errors. Such slight numerical inaccuracies are a fact of life in computer-based computations.

Average inventory on hand. The average inventory level is estimated by

Σ_{k=0}^{5} k Pr{K = k} = 2.506,

which is a consequence of the general time average formula (see Section 9.3)

(1/T) ∫_0^T K(t) dt.
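Since K(t) is piecewise constant, the time integral above reduces to a weighted sum of interval lengths. The sketch below evaluates it for the sample history; the interval endpoints are reconstructed from the narrative and are my reading of Figure 2.6:

```python
# Piecewise-constant inventory trajectory K(t) from the sample history:
# (start, end, level) triples covering [0, 165]
segments = [
    (0, 35, 4), (35, 45, 1), (45, 55, 2), (55, 65, 3), (65, 69, 4),
    (69, 75, 0), (75, 85, 1), (85, 101, 2), (101, 127, 1),
    (127, 137, 2), (137, 147, 3), (147, 157, 4), (157, 165, 5),
]
T = 165.0

# Time average: (1/T) * integral of K(t) dt, where the integral is a sum
# of level * duration terms because K(t) is constant on each segment
avg_inventory = sum(level * (end - start) for start, end, level in segments) / T
# Exact durations give about 2.509; the text's 2.506 reflects rounded
# table probabilities, another instance of round-off error.
```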