
4.3 PROCESS GENERATION

Generation of stochastic processes in simulation amounts to generating sequences of variates, subject to a particular probability law (see Section 3.9). Common cases are discussed in the following sections.

4.3.1 IID PROCESS GENERATION

Generation of iid processes is very common in simulation. Its implementation is rather straightforward: simply generate variates repeatedly from a common distribution, using successive RNG seeds and the methods and formulas illustrated above.
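As a minimal sketch of this procedure, the following Python fragment generates iid variates by applying the Inverse Transform method to successive uniform seeds. The Expo(2.0) marginal and the seed value are illustrative choices, not prescribed by the text.

```python
import math
import random

def expo_inverse_transform(rate, u):
    """Map a uniform seed u in [0, 1) to an Expo(rate) variate via F^{-1}(u)."""
    return -math.log(1.0 - u) / rate

rng = random.Random(12345)   # RNG initialized with a fixed (illustrative) seed
# Each call to rng.random() supplies the next RNG seed u_n.
sample = [expo_inverse_transform(2.0, rng.random()) for _ in range(5)]
```

Because the seeds are drawn from a common RNG stream and mapped through a common distribution, the resulting variates are iid by construction.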

4.3.2 NON-IID PROCESS GENERATION

Generating non-iid processes that follow a prescribed probability law is considerably more difficult. The difficulty lies in reproducing key aspects of the temporal dependence of the requisite non-iid stochastic process. More specifically, the problem is twofold.

First, to specify a probability law for a process, one must specify the temporal dependence of the process in the form of joint distributions of any dimension. Except for special cases (when dependence does not extend arbitrarily far into the past), this approach can be impractical.

Second, when temporal dependence cannot be estimated in terms of joint distributions, one may elect to use a statistical proxy for temporal dependence, often in the form of autocorrelations. Even when the process to be modeled is stationary, the problem of devising a model that simultaneously fits the observed (empirical) autocorrelations and marginal distribution is nontrivial.

To illustrate the complexity inherent in the first problem, we shall outline the algorithm for generating a non-iid stochastic sequence with a real-valued state space, $S$.

Algorithm 4.1 Generation of a Non-Iid Stochastic Sequence

Input: a probability law $\mathcal{L}$ for a general random sequence $\{X_n\}_{n=0}^{\infty}$, specified by the probabilities

$$F_{X_0}(x_0) = \Pr\{X_0 \le x_0\}, \quad x_0 \in S,$$
$$F_{X_1|X_0}(x_1 \mid x_0) = \Pr\{X_1 \le x_1 \mid X_0 = x_0\}, \quad x_0, x_1 \in S,$$
$$\vdots$$
$$F_{X_{n+1}|X_0,\ldots,X_n}(x_{n+1} \mid x_0,\ldots,x_n) = \Pr\{X_{n+1} \le x_{n+1} \mid X_0 = x_0,\ldots,X_n = x_n\}, \quad x_0,\ldots,x_{n+1} \in S.$$

Output: A sample path (realization) $x_0, x_1, \ldots, x_n, \ldots$ from the probability law $\mathcal{L}$.

1. Use the Inverse Transform method of Section 4.2 to generate $x_0 = F_{X_0}^{-1}(u_0)$ from the initial RNG seed, $u_0$.

2. Suppose that $x_0,\ldots,x_n$ have already been generated, using the RNG seeds $u_0,\ldots,u_n$. Use the Inverse Transform method of Section 4.2 to generate
$$x_{n+1} = F_{X_{n+1}|X_0,\ldots,X_n}^{-1}(u_{n+1} \mid x_0,\ldots,x_n),$$
where $u_{n+1}$ is the next RNG seed.

3. Continue analogously as necessary.

Clearly, Algorithm 4.1 requires an increasingly complex specification in view of the expanding conditioning on the $n$ previous outcomes; this conditioning is suggestively called "memory," since to generate the next step, we need to "remember" the outcomes in all previous steps. Such extensive memory information is awkward and, in any event, only rarely available.
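The skeleton of Algorithm 4.1 can be sketched in Python as follows. The driver is generic; the conditional law used here (an exponential whose rate depends on the mean of the history) is purely hypothetical, chosen only to make the memory dependence concrete.

```python
import math
import random

def generate_non_iid(n_steps, cond_inv_cdf, seed=0):
    """Algorithm 4.1 sketch: each new variate is produced by applying the
    inverse of its conditional CDF, given the entire history x_0, ..., x_n,
    to a fresh RNG seed (Inverse Transform method)."""
    rng = random.Random(seed)
    path = []
    for _ in range(n_steps + 1):
        u = rng.random()                      # next RNG seed u_{n+1}
        path.append(cond_inv_cdf(u, path))    # x_{n+1} = F^{-1}(u | x_0..x_n)
    return path

# Hypothetical conditional law: X_{n+1} | history ~ Expo(1 + mean of history).
def expo_cond_inv(u, history):
    rate = 1.0 + (sum(history) / len(history) if history else 0.0)
    return -math.log(1.0 - u) / rate

path = generate_non_iid(10, expo_cond_inv)
```

Note that `cond_inv_cdf` receives the full history at every step; this is precisely the expanding "memory" that makes the general algorithm impractical.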

We next illustrate two simple special cases of Algorithm 4.1: generation of discrete-state Markov chains in discrete time and in continuous time. Recall from Eq. 3.95 that Markov processes require limited "memory" of only one previous variate.

Algorithm 4.2 Generation of Discrete-Time, Discrete-State Markov Chains

Input: a probability law $\mathcal{L}$ for a discrete-time, discrete-state Markov chain $\{X_n\}_{n=0}^{\infty}$, specified by the probabilities

$$F_{X_0}(x_0) = \Pr\{X_0 \le x_0\}, \quad x_0 \in S,$$
$$F_{X_{n+1}|X_n}(x_{n+1} \mid x_n) = \Pr\{X_{n+1} \le x_{n+1} \mid X_n = x_n\}, \quad x_n, x_{n+1} \in S.$$

Output: A sample path (realization) $x_0,\ldots,x_n,\ldots$ from the Markov probability law $\mathcal{L}$.

1. Use the Inverse Transform method of Section 4.2 to generate $x_0 = F_{X_0}^{-1}(u_0)$ from the initial RNG seed, $u_0$.

2. Suppose that $x_0,\ldots,x_n$ have already been generated, using the RNG seeds $u_0,\ldots,u_n$. Next, use the Inverse Transform method to generate
$$x_{n+1} = F_{X_{n+1}|X_n}^{-1}(u_{n+1} \mid x_n),$$
where $u_{n+1}$ is the next RNG seed.

3. Continue analogously as necessary.

The corresponding algorithm for a continuous-time, discrete-state Markov chain generates the jump chain exactly as in Algorithm 4.2. In addition, the jump times are also generated.
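Algorithm 4.2 can be sketched in Python as below. The chain's law is specified in the usual way by an initial distribution and a transition matrix (the two-state matrix here is a hypothetical example); each discrete variate is drawn by inverting the corresponding discrete CDF, as the Inverse Transform method prescribes.

```python
import random
from bisect import bisect_left
from itertools import accumulate

def markov_chain(P, p0, n_steps, seed=0):
    """Algorithm 4.2 sketch: discrete-time, discrete-state Markov chain.
    Each step applies the Inverse Transform method to the discrete CDF of
    the current row of P, where P[i][j] = Pr{X_{n+1} = j | X_n = i}."""
    rng = random.Random(seed)

    def draw(pmf):
        cdf = list(accumulate(pmf))                      # discrete CDF
        # Smallest state j with cdf[j] >= u (guarded against round-off).
        return min(bisect_left(cdf, rng.random()), len(cdf) - 1)

    x = draw(p0)                 # x_0 = F^{-1}_{X_0}(u_0)
    path = [x]
    for _ in range(n_steps):
        x = draw(P[x])           # x_{n+1} = F^{-1}_{X_{n+1}|X_n}(u_{n+1} | x_n)
        path.append(x)
    return path

P = [[0.9, 0.1], [0.5, 0.5]]     # hypothetical two-state transition matrix
path = markov_chain(P, [1.0, 0.0], 20)
```

One seed is consumed per step, and only the current state is remembered, which is exactly the one-step "memory" of the Markov property.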

Algorithm 4.3 Generation of Continuous-Time, Discrete-State Markov Chains

Input: a probability law $\mathcal{L}$ for a continuous-time, discrete-state Markov chain $\{Y_t\}_{t=0}^{\infty}$ with a jump chain $\{X_n\}_{n=0}^{\infty}$ and jump times $\{T_n\}_{n=0}^{\infty}$, specified by the probabilities

$$F_{X_0}(x_0) = \Pr\{X_0 \le x_0\}, \quad x_0 \in S,$$
$$F_{X_{n+1}|X_n}(x_{n+1} \mid x_n) = \Pr\{X_{n+1} \le x_{n+1} \mid X_n = x_n\}, \quad x_n, x_{n+1} \in S, \text{ and}$$
$$F_{T_{n+1}-T_n|X_n}(t \mid x_n) = \Pr\{T_{n+1} - T_n \le t \mid X_n = x_n\} = 1 - e^{-\lambda_{x_n} t}, \quad t \ge 0,\ x_n \in S.$$

Output: A sample path (realization) of the jump chain, $x_0, x_1, \ldots, x_n, \ldots$, and jump times, $t_0, t_1, \ldots, t_n, \ldots$, from the Markov probability law $\mathcal{L}$.

1. Set $t_0 = 0$, and use the Inverse Transform method to generate $x_0 = F_{X_0}^{-1}(u_0)$ from the initial RNG seed, $u_0$.

2. Suppose that $t_0,\ldots,t_n$ and $x_0,\ldots,x_n$ have already been generated, using the RNG seeds $u_0,\ldots,u_{2n+1}$. Then the current simulation clock is $T_n = t_n$, and the current jump chain state is $X_n = x_n$. The algorithm consumes two seeds per process jump as follows. First, use the Inverse Transform method of Section 4.2 to generate the next jump time realization,
$$t_{n+1} = t_n + F_{T_{n+1}-T_n|X_n}^{-1}(u_{2(n+1)} \mid x_n),$$
where the interjump time is distributed exponentially according to $\mathrm{Expo}(\lambda_{x_n})$ and $u_{2(n+1)}$ is the next RNG seed. Second, use the Inverse Transform method of Section 4.2 to generate the next state realization of the jump chain,
$$x_{n+1} = F_{X_{n+1}|X_n}^{-1}(u_{2(n+1)+1} \mid x_n),$$
where $u_{2(n+1)+1}$ is the next RNG seed.

3. Continue analogously as necessary.

Finally, we mention here that methods exist for modeling temporal dependence using the autocorrelation function as a statistical proxy. This advanced topic is deferred, however, until Chapter 10.
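The steps of Algorithm 4.3 above can be sketched in Python as follows. The transition matrix and holding-time rates are hypothetical examples; note how each jump consumes exactly two seeds, one for the exponential holding time and one for the next jump-chain state.

```python
import math
import random
from bisect import bisect_left
from itertools import accumulate

def ctmc(P, rates, n_jumps, x0=0, seed=0):
    """Algorithm 4.3 sketch: generate a jump chain and its jump times.
    Per jump, seed u_{2(n+1)} feeds the Expo(lambda_x) holding time via the
    inverse of 1 - exp(-lambda*t), and seed u_{2(n+1)+1} feeds the next state."""
    rng = random.Random(seed)
    x, t = x0, 0.0                        # step 1: t_0 = 0, initial state x_0
    states, times = [x], [t]
    for _ in range(n_jumps):
        u_time = rng.random()             # seed u_{2(n+1)}
        t += -math.log(1.0 - u_time) / rates[x]   # inverse-transform Expo draw
        u_state = rng.random()            # seed u_{2(n+1)+1}
        cdf = list(accumulate(P[x]))      # discrete CDF of jump-chain row
        x = min(bisect_left(cdf, u_state), len(cdf) - 1)
        states.append(x)
        times.append(t)
    return states, times

# Hypothetical two-state chain: deterministic alternation, rates 2.0 and 3.0.
states, times = ctmc([[0.0, 1.0], [1.0, 0.0]], rates=[2.0, 3.0], n_jumps=5)
```

The returned pair gives the jump-chain realization together with the simulation clock at each jump, from which the continuous-time path $Y_t$ is recovered by holding each state over its interjump interval.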