The final log-likelihood ratio becomes

\[
\lambda(d_t) = \log \frac{\Pr\{d_t = 1 \mid \text{observation}\}}{\Pr\{d_t = 0 \mid \text{observation}\}}
             = \log \frac{\sum_m F_t(1,m)}{\sum_m F_t(0,m)} \tag{4}
\]

where \(F_t(i,m) = \Pr\{d_t = i, S_{t-1} = m, Y_1^L\}\). In order to calculate \(F_t(i,m)\) we define the following probability functions \(\alpha_t(m)\), \(\beta_t(m)\) and \(\gamma_t(i,m)\):

\[
\alpha_t(m) = \Pr\{S_t = m, Y_1^t\} \tag{5}
\]
\[
\beta_t(m) = \Pr\{Y_{t+1}^L \mid S_t = m\} \tag{6}
\]
\[
\gamma_t(i,m) = \Pr\{d_t = i, Y_t \mid S_{t-1} = m\} \tag{7}
\]

Compared to the Viterbi algorithm, \(\alpha_t(m)\) corresponds to the state metrics, while \(\gamma_t(i,m)\) corresponds to the branch metrics. \(\beta_t(m)\) can be seen as backward state metrics.

For the notation we will also need the function giving the new encoder state \(S_t\) when \(S_{t-1} = m\) and \(d_t = i\),

\[
S_t = \mathrm{newstate}(i, m) \tag{8}
\]

and the function giving the old encoder state \(S_{t-1}\) when \(S_t = m\) and \(d_t = i\),

\[
S_{t-1} = \mathrm{oldstate}(i, m) \tag{9}
\]

Since the encoder is a Markov process and the channel is memoryless, we have

\[
\Pr\{Y_{t+1}^L \mid S_t = m, Y_1^t\} = \Pr\{Y_{t+1}^L \mid S_t = m\} \tag{10}
\]

and

\[
\begin{aligned}
F_t(i,m) &= \Pr\{S_{t-1} = m, Y_1^{t-1}\} \cdot \Pr\{d_t = i, Y_t \mid S_{t-1} = m\} \cdot \Pr\{Y_{t+1}^L \mid S_t = \mathrm{newstate}(i,m)\} \\
         &= \alpha_{t-1}(m)\, \gamma_t(i,m)\, \beta_t(\mathrm{newstate}(i,m))
\end{aligned} \tag{11}
\]
The same properties give a recursion for \(\alpha_t(m)\):

\[
\begin{aligned}
\alpha_t(m) &= \sum_{i=0,1} \Pr\{d_t = i, S_{t-1} = \mathrm{oldstate}(i,m), Y_1^t\} \\
            &= \sum_{i=0,1} \Pr\{S_{t-1} = \mathrm{oldstate}(i,m), Y_1^{t-1}\} \cdot \Pr\{d_t = i, Y_t \mid S_{t-1} = \mathrm{oldstate}(i,m)\} \\
            &= \sum_{i=0,1} \alpha_{t-1}(\mathrm{oldstate}(i,m))\, \gamma_t(i, \mathrm{oldstate}(i,m))
\end{aligned} \tag{12}
\]
and similarly for \(\beta_t(m)\):

\[
\begin{aligned}
\beta_t(m) &= \sum_{i=0,1} \Pr\{d_{t+1} = i, Y_{t+1}^L \mid S_t = m\} \\
           &= \sum_{i=0,1} \Pr\{d_{t+1} = i, Y_{t+1} \mid S_t = m\} \cdot \Pr\{Y_{t+2}^L \mid S_{t+1} = \mathrm{newstate}(i,m)\} \\
           &= \sum_{i=0,1} \gamma_{t+1}(i,m)\, \beta_{t+1}(\mathrm{newstate}(i,m))
\end{aligned} \tag{13}
\]
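To make the recursions concrete, the following is a minimal Python sketch of how Formulas (11)-(13) combine into the decoder output (4). The function name bcjr_llrs and the representation of newstate/oldstate as lookup tables are illustrative assumptions, not part of the text; terminated frames are assumed, as discussed below.

```python
import numpy as np

def bcjr_llrs(gamma, newstate, oldstate):
    """Forward-backward recursion, Formulas (11)-(13).

    gamma[t, i, m]  ~ Pr{d_t = i, Y_t | S_{t-1} = m}, the branch metrics of (7)
    newstate[i, m]  ~ encoder state after input i in state m, Formula (8)
    oldstate[i, m]  ~ encoder state before input i leading to state m, Formula (9)
    Returns the log-likelihood ratios lambda(d_t) of Formula (4).
    """
    L, _, M = gamma.shape                  # frame length, number of states
    alpha = np.zeros((L + 1, M))
    beta = np.zeros((L + 1, M))
    alpha[0, 0] = 1.0                      # frame assumed terminated to state 0
    beta[L, 0] = 1.0
    # forward recursion, Formula (12)
    for t in range(1, L + 1):
        for m in range(M):
            alpha[t, m] = sum(alpha[t - 1, oldstate[i, m]] * gamma[t - 1, i, oldstate[i, m]]
                              for i in (0, 1))
        alpha[t] /= alpha[t].sum()         # rescaling, cf. Formula (14) below
    # backward recursion, Formula (13)
    for t in range(L - 1, -1, -1):
        for m in range(M):
            beta[t, m] = sum(gamma[t, i, m] * beta[t + 1, newstate[i, m]] for i in (0, 1))
        beta[t] /= beta[t].sum()           # beta needs similar rescaling
    # combine via Formulas (11) and (4)
    llr = np.zeros(L)
    for t in range(L):
        F = [sum(alpha[t, m] * gamma[t, i, m] * beta[t + 1, newstate[i, m]] for m in range(M))
             for i in (0, 1)]
        llr[t] = np.log(F[1] / F[0])
    return llr
```

Note that the per-step normalization cancels in the ratio of Formula (4), which is why, as stated below, the exact rescaling is unimportant as long as underflows are avoided.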

If we assume that the frames are terminated to state 0, we have \(\alpha_0(0) = 1\) and \(\alpha_0(m) = 0\) for \(m = 1, 2, \ldots, 2^M - 1\). We can then calculate \(\alpha\) as a forward recursion using (12). At the end of the frame we have \(\beta_L(0) = 1\) and \(\beta_L(m) = 0\) for \(m = 1, 2, \ldots, 2^M - 1\), and we can calculate \(\beta\) as a backward recursion using (13). If the frames are not terminated we have no knowledge of the initial and final states; in this case we must use \(\alpha_0(m) = \beta_L(m) = 2^{-M}\).

Since \(\alpha_t\) becomes very small with increasing \(t\), some rescaling must be used. In principle the function \(\alpha'_t(m)\) should be used,

\[
\alpha'_t(m) = \Pr\{S_t = m \mid Y_1^t\} = \frac{\Pr\{S_t = m, Y_1^t\}}{\Pr\{Y_1^t\}} = \frac{\alpha_t(m)}{\Pr\{Y_1^t\}} \tag{14}
\]

where \(\Pr\{Y_1^t\}\) is found as the sum of \(\alpha_t(m)\) over all states, meaning that the \(\alpha'_t(m)\) values always add up to one. However, since the output is the log-likelihood ratio, the actual rescaling is not important as long as underflows are avoided. Similarly, the function \(\beta_t(m)\) needs rescaling.

The algorithm sketched here requires that \(\alpha_t(m)\) is stored for the complete frame, since we have to await the end of the frame before we can calculate \(\beta_t(m)\). We can instead use a sliding-window approach with period \(T\) and training period \(Tr\). First \(\alpha_t(m)\) is calculated and stored for \(t = 0\) to \(T - 1\). The calculation of \(\beta_t(m)\) is initiated at time \(t = T + Tr - 1\) with initial conditions \(\beta(m) = 2^{-M}\). The first \(Tr\) values of \(\beta_t(m)\) are discarded, but after the training period, i.e. for \(t = T - 1\) down to \(0\), we assume that \(\beta_t(m)\) is correct and ready for the calculation of \(F_t(i,m)\). After the first window we continue with the next one until we reach the end of the frame, where we use the true final conditions for \(\beta_L(m)\). Of course this approach is an approximation, but if the training period is carefully chosen the performance degradation can be very small. (A sketch of this windowed schedule is given at the end of this section.)

Since we have only one output associated with each transition, we can calculate \(\gamma_t(i,m)\) as

\[
\gamma_t(i,m) = \Pr_{\text{apriori}}\{d_t = i\} \cdot \Pr\{Y_t \mid d_t = i, S_{t-1} = m\} \tag{16}
\]

For turbo codes the a priori information typically arrives as a log-likelihood ratio. Luckily, we see from the calculation of \(\alpha_t(m)\) and \(\beta_t(m)\) that \(\gamma_t(i,m)\) is always used in pairs, \(\gamma_t(0,m)\) and \(\gamma_t(1,m)\). This means we can multiply \(\gamma_t(i,m)\) with a constant

\[
k_t = \frac{1}{\Pr_{\text{apriori}}\{d_t = 0\}} \tag{17}
\]

and get

\[
\gamma'_t(1,m) = \frac{\Pr_{\text{apriori}}\{d_t = 1\}}{\Pr_{\text{apriori}}\{d_t = 0\}} \cdot \Pr\{Y_t \mid d_t = 1, S_{t-1} = m\} \tag{18}
\]
\[
\gamma'_t(0,m) = \Pr\{Y_t \mid d_t = 0, S_{t-1} = m\} \tag{19}
\]

For an actual implementation the values of \(\alpha_t(m)\), \(\beta_t(m)\) and \(\gamma_t(i,m)\) may be represented as the negative logarithm of the actual probabilities. This is also common practice for Viterbi decoders, where the branch and state metrics are \(-\log\) of the corresponding probabilities. With the logarithmic representation, multiplication becomes addition and addition becomes an E-operation, where

\[
x \mathbin{E} y \triangleq -\log_e\left(e^{-x} + e^{-y}\right) = \min(x, y) - \log\left(1 + e^{-|y - x|}\right) \tag{20}
\]

This function can be reduced to finding the minimum and adding a small correction factor. As seen from Formula (18), the incoming log-likelihood ratio \(\lambda'_t\) can be used directly in the calculation of \(-\log \gamma'\) as the log-likelihood ratio of the a priori probabilities.
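The sliding-window schedule described above can be sketched as follows. This is a minimal illustration under the same assumptions as the earlier sketch; the function name sliding_window_betas and the exact off-by-one placement of the training start are choices made here, not fixed by the text.

```python
import numpy as np

def sliding_window_betas(gamma, newstate, T, Tr):
    """Backward recursion with window period T and training period Tr.

    For each window [w, w+T), the recursion is started Tr steps past the
    window end with the uniform condition beta(m) = 2^-M; the first Tr
    computed values are discarded as training. The last window uses the
    true final condition of a frame terminated to state 0.
    """
    L, _, M = gamma.shape
    beta = np.zeros((L + 1, M))
    beta[L, 0] = 1.0                       # true final condition
    for w in range(0, L, T):
        hi = w + T - 1 + Tr                # last training index for this window
        if hi >= L - 1:
            hi, b = L - 1, beta[L].copy()  # window reaches frame end: true condition
        else:
            b = np.full(M, 1.0 / M)        # training init, beta(m) = 2^-M
        for t in range(hi, w - 1, -1):     # recurse down via Formula (13)
            nb = np.array([sum(gamma[t, i, m] * b[newstate[i, m]] for i in (0, 1))
                           for m in range(M)])
            b = nb / nb.sum()              # rescaling
            if t < w + T:                  # keep window values; discard training values
                beta[t] = b
    return beta
```

With this schedule only the T values of \(\alpha_t(m)\) belonging to the current window need to be stored, at the cost of recomputing \(Tr\) training steps per window.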
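In the negative-log domain the E-operation of Formula (20) replaces addition, exactly as addition replaces multiplication. The sketch below implements the correction term exactly; in hardware the term \(\log(1 + e^{-|y-x|})\) is typically a small lookup table. The helper neg_log_gamma and its argument neg_log_channel (standing for \(-\log \Pr\{Y_t \mid d_t = i, S_{t-1} = m\}\)) are illustrative names, not from the original.

```python
import math

def e_op(x, y):
    """E-operation, Formula (20): x E y = -log(e^-x + e^-y).

    Reduces to the minimum plus a small correction factor, so a log-domain
    decoder looks like a Viterbi add-compare-select unit with a corrected min.
    """
    return min(x, y) - math.log1p(math.exp(-abs(y - x)))

def neg_log_gamma(i, llr_apriori, neg_log_channel):
    """-log of Formulas (18)/(19): the incoming a priori LLR lambda'_t enters
    the i = 1 branch metric directly; the i = 0 branch is the channel term only."""
    return neg_log_channel - llr_apriori if i == 1 else neg_log_channel
```

For example, the log-domain form of the forward recursion (12) becomes a_t(m) = e_op(a_{t-1}(old0) + g(0, old0), a_{t-1}(old1) + g(1, old1)) on the negative-log metrics.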

4. Final Remarks