
can set $P(x_0 \mid \mathcal{Y}_0) = P(x_0)$, where $\mathcal{Y}_i = \{y_1, \ldots, y_i\}$ denotes the history of observed monitoring data up to time $t_i$. In our case it is assumed that $P(x_0 = 1) = 1$, as we start the system from new. Since the system starts from new, at $i = 0$ the true state $x_0$ can only be the normal state, that is state 1; hence equation 3-6 for $i = 1$ can be written as

$$P(x_1 \mid \mathcal{Y}_1) = \frac{p(y_1 \mid x_1)\, P(X_1 = x_1 \mid X_0 = 1)}{\sum_{x_1 = 1}^{2} p(y_1 \mid x_1)\, P(X_1 = x_1 \mid X_0 = 1)} \qquad (3\text{-}7)$$

Hence, for every $i = 2, 3, \ldots$, $P(x_i \mid \mathcal{Y}_i)$ can be calculated from equation 3-6 recursively.
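To make the recursion of equations 3-6 and 3-7 concrete, the sketch below carries out the update numerically. It is an illustrative sketch only: the names `filter_posterior`, `transition` and `obs_density` are not from this thesis, and it assumes, as the summation limits in equation 3-7 suggest, that a monitoring reading $y_i$ is only collected while the item is in state 1 or 2.

```python
import numpy as np

def filter_posterior(transition, obs_density, y_obs, p0=np.array([1.0, 0.0, 0.0])):
    """Recursive state-probability update in the spirit of equations 3-6 and 3-7 (sketch).

    transition : callable i -> 3x3 matrix T with T[k-1, m-1] = P(X_i = m | X_{i-1} = k)
    obs_density: callable (y, m) -> p(y_i | x_i = m) for the operating states m = 1, 2
    y_obs      : monitoring observations y_1, y_2, ...
    p0         : prior over x_0; the item starts from new, so P(x_0 = 1) = 1
    """
    posterior = p0.copy()
    history = []
    for i, y in enumerate(y_obs, start=1):
        T = transition(i)
        # One-step prediction: P(x_i | Y_{i-1}) = sum_k P(X_i | X_{i-1} = k) P(x_{i-1} = k | Y_{i-1})
        predicted = posterior @ T
        # Assumed: no reading is obtained from a failed item, so state 3 gets zero likelihood.
        likelihood = np.array([obs_density(y, 1), obs_density(y, 2), 0.0])
        unnorm = likelihood * predicted
        posterior = unnorm / unnorm.sum()   # normalisation as in equation 3-6
        history.append(posterior.copy())
    return history
```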

3.4 Formulation of the Transition Probabilities

Now we go into the details of the formulation of $P(X_i = m \mid X_{i-1} = k)$ and $p(y_i \mid x_i)$. First, we adopt the concept of delay-time modelling (Christer and Waller, 1984) to model the transition probabilities. Using a Markov model, it is assumed that $x_i$ must always be in one of a finite number of discrete states, $X_i \in \{1, 2, 3\}$. We define $F_1(l_1)$ and $F_2(l_2)$ as the cumulative distribution functions of the random variables $L_1$ and $L_2$, with corresponding density functions $f_1(l_1)$ and $f_2(l_2)$. The transition probabilities that state $m$ will occur at time $t_i$, given that the state was $k$ at time $t_{i-1}$, are as follows (see Christer et al., 2001):

$$P(X_i = 1 \mid X_{i-1} = 1) = P(L_1 > t_i \mid L_1 > t_{i-1}) = \frac{\int_{t_i}^{\infty} f_1(l_1)\,dl_1}{\int_{t_{i-1}}^{\infty} f_1(l_1)\,dl_1} = \frac{1 - F_1(t_i)}{1 - F_1(t_{i-1})} \qquad (3\text{-}8)$$

$$P(X_i = 2 \mid X_{i-1} = 1) = P(L_1 \le t_i,\; L_2 > t_i - L_1 \mid L_1 > t_{i-1}) = \frac{\int_{t_{i-1}}^{t_i} f_1(l_1) \int_{t_i - l_1}^{\infty} f_2(l_2)\,dl_2\,dl_1}{\int_{t_{i-1}}^{\infty} f_1(l_1)\,dl_1} = \frac{\int_{t_{i-1}}^{t_i} f_1(l_1)\,[1 - F_2(t_i - l_1)]\,dl_1}{1 - F_1(t_{i-1})} \qquad (3\text{-}9)$$

$$P(X_i = 3 \mid X_{i-1} = 1) = P(L_1 \le t_i,\; L_2 \le t_i - L_1 \mid L_1 > t_{i-1}) = \frac{\int_{t_{i-1}}^{t_i} f_1(l_1) \int_{0}^{t_i - l_1} f_2(l_2)\,dl_2\,dl_1}{1 - F_1(t_{i-1})} = \frac{\int_{t_{i-1}}^{t_i} f_1(l_1)\,F_2(t_i - l_1)\,dl_1}{1 - F_1(t_{i-1})} \qquad (3\text{-}10)$$

Equation 3-10 assumes that if the item fails before $t_i$, it remains failed at $t_i$.

$$P(X_i = 1 \mid X_{i-1} = 2) = 0 \qquad (3\text{-}11)$$

$$P(X_i = 2 \mid X_{i-1} = 2) = P(L_1 + L_2 > t_i,\; L_1 \le t_{i-1} \mid L_1 + L_2 > t_{i-1},\; L_1 \le t_{i-1}) = \frac{\int_{0}^{t_{i-1}} f_1(l_1) \int_{t_i - l_1}^{\infty} f_2(l_2)\,dl_2\,dl_1}{\int_{0}^{t_{i-1}} f_1(l_1) \int_{t_{i-1} - l_1}^{\infty} f_2(l_2)\,dl_2\,dl_1} = \frac{\int_{0}^{t_{i-1}} f_1(l_1)\,[1 - F_2(t_i - l_1)]\,dl_1}{\int_{0}^{t_{i-1}} f_1(l_1)\,[1 - F_2(t_{i-1} - l_1)]\,dl_1} \qquad (3\text{-}12)$$

$$P(X_i = 3 \mid X_{i-1} = 2) = P(L_1 + L_2 \le t_i,\; L_1 \le t_{i-1} \mid L_1 + L_2 > t_{i-1},\; L_1 \le t_{i-1}) = \frac{\int_{0}^{t_{i-1}} f_1(l_1) \int_{t_{i-1} - l_1}^{t_i - l_1} f_2(l_2)\,dl_2\,dl_1}{\int_{0}^{t_{i-1}} f_1(l_1) \int_{t_{i-1} - l_1}^{\infty} f_2(l_2)\,dl_2\,dl_1} = \frac{\int_{0}^{t_{i-1}} f_1(l_1)\,[F_2(t_i - l_1) - F_2(t_{i-1} - l_1)]\,dl_1}{\int_{0}^{t_{i-1}} f_1(l_1)\,[1 - F_2(t_{i-1} - l_1)]\,dl_1} \qquad (3\text{-}13)$$

$$P(X_i = 1 \mid X_{i-1} = 3) = 0 \qquad (3\text{-}14)$$

$$P(X_i = 2 \mid X_{i-1} = 3) = 0 \qquad (3\text{-}15)$$

$$P(X_i = 3 \mid X_{i-1} = 3) = 1 \qquad (3\text{-}16)$$

Note that the stationarity assumption is no longer valid: at every time $t_i$ the transition probability is clearly time-dependent. This is the property we need, in contrast to most existing applications of Markov models, which use time-independent transition probabilities.
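All of the non-trivial entries in equations 3-8 to 3-13 reduce to one-dimensional integrals of $f_1$ and $F_2$, so the full transition matrix can be evaluated numerically once distributions for $L_1$ and $L_2$ are specified. The sketch below illustrates this; the function name `transition_matrix` and the Weibull stage durations in the example are assumptions for illustration, not part of the model specification.

```python
import numpy as np
from scipy import integrate, stats

def transition_matrix(t_prev, t_i, f1, F1, f2, F2):
    """Delay-time transition probabilities of equations 3-8 to 3-16 (sketch).

    f1, F1 : pdf and cdf of L_1 (time from new to defect initiation)
    f2, F2 : pdf and cdf of L_2 (delay time from defect to failure)
    Returns the 3x3 matrix T with T[k-1, m-1] = P(X_i = m | X_{i-1} = k).
    """
    surv1_prev = 1.0 - F1(t_prev)

    # Row 1: the item was in the normal state at t_{i-1}
    p11 = (1.0 - F1(t_i)) / surv1_prev                                        # 3-8
    p12 = integrate.quad(lambda l1: f1(l1) * (1.0 - F2(t_i - l1)),
                         t_prev, t_i)[0] / surv1_prev                         # 3-9
    p13 = integrate.quad(lambda l1: f1(l1) * F2(t_i - l1),
                         t_prev, t_i)[0] / surv1_prev                         # 3-10

    # Row 2: the item was defective at t_{i-1}
    denom = integrate.quad(lambda l1: f1(l1) * (1.0 - F2(t_prev - l1)),
                           0.0, t_prev)[0]
    p22 = integrate.quad(lambda l1: f1(l1) * (1.0 - F2(t_i - l1)),
                         0.0, t_prev)[0] / denom                              # 3-12
    p23 = integrate.quad(lambda l1: f1(l1) * (F2(t_i - l1) - F2(t_prev - l1)),
                         0.0, t_prev)[0] / denom                              # 3-13

    # Row 3: failure is absorbing (equations 3-14 to 3-16)
    return np.array([[p11, p12, p13],
                     [0.0, p22, p23],
                     [0.0, 0.0, 1.0]])

# Example with assumed Weibull stage durations; each row should sum to 1.
w1 = stats.weibull_min(2.0, scale=10.0)
w2 = stats.weibull_min(1.5, scale=4.0)
T = transition_matrix(3.0, 4.0, w1.pdf, w1.cdf, w2.pdf, w2.cdf)
print(T, T.sum(axis=1))
```

The time-dependence noted above shows up directly here: the matrix returned for the interval $(t_{i-1}, t_i]$ changes with the monitoring times themselves, not only with their spacing.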

3.5 Formulation of the Relationship between the Observed Data and