
12.2 DIFFERENTIAL PCM (DPCM)

In PCM each sample of the waveform is encoded independently of all the other samples. However, most signals, including speech, sampled at the Nyquist rate or faster exhibit significant correlation between successive samples. In other words, the average change in amplitude between successive samples is relatively small. Consequently, an encoding scheme that exploits the redundancy in the samples will result in a lower bit rate for the speech signal.

A relatively simple solution is to encode the differences between successive samples rather than the samples themselves. Since differences between samples are expected to be smaller than the actual sampled amplitudes, fewer bits are required to represent the differences. A refinement of this general approach is to predict the current sample based on the previous p samples. To be specific, let s(n) denote the current sample of speech and let ŝ(n) denote the predicted value of s(n), defined as

ŝ(n) = Σ_{i=1}^{p} a(i) s(n − i)    (12.9)

Thus ŝ(n) is a weighted linear combination of the past p samples, and the a(i) are the predictor (filter) coefficients. The a(i) are selected to minimize some function of the error between s(n) and ŝ(n).

A mathematically and practically convenient error function is the sum of squared errors. With this as the performance index for the predictor, we select the a(i) to minimize

E_p = Σ_{n=1}^{N} e²(n) = Σ_{n=1}^{N} [ s(n) − Σ_{i=1}^{p} a(i) s(n − i) ]²

    = r_ss(0) − 2 Σ_{i=1}^{p} a(i) r_ss(i) + Σ_{i=1}^{p} Σ_{j=1}^{p} a(i) a(j) r_ss(i − j)    (12.10)

Chapter 12: APPLICATIONS IN COMMUNICATIONS

where r_ss(m) is the autocorrelation function of the sampled signal sequence s(n), defined as

r_ss(m) = Σ_n s(n) s(n − m)

Minimization of E_p with respect to the predictor coefficients {a(i)} results in the set of linear equations, called the normal equations,

Σ_{i=1}^{p} a(i) r_ss(i − j) = r_ss(j),    j = 1, 2, . . . , p    (12.11)

or in the matrix form,

Ra = r  ⇒  a = R⁻¹r    (12.12)

where R is the autocorrelation matrix, a is the coefficient vector, and r is the autocorrelation vector. Thus the values of the predictor coefficients are established.
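The computation in (12.11)–(12.12) is small enough to carry out directly. The following is a minimal sketch in Python (the book's projects use MATLAB); the helper names `autocorr`, `solve`, and `predictor_coeffs`, and the AR(1)-like test signal, are our own illustrative choices.

```python
# Sketch: estimate r_ss(m), build the normal equations Ra = r of
# (12.11), and solve for the predictor coefficients a(i).
import random

def autocorr(s, m):
    """r_ss(m) = sum over n of s(n) s(n - m), for m >= 0."""
    return sum(s[n] * s[n - m] for n in range(m, len(s)))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small p x p system)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def predictor_coeffs(s, p):
    """R has entries r_ss(i - j) (symmetric Toeplitz); r has entries r_ss(j)."""
    R = [[autocorr(s, abs(i - j)) for j in range(p)] for i in range(p)]
    r = [autocorr(s, j + 1) for j in range(p)]
    return solve(R, r)

# Quick check on an AR(1)-like sequence: a(1) should come out near 0.9.
random.seed(1)
s = [0.0]
for _ in range(5000):
    s.append(0.9 * s[-1] + random.gauss(0, 1))
a = predictor_coeffs(s, 2)
```

For larger p the same system is usually solved with the Levinson-Durbin recursion, which exploits the Toeplitz structure of R.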

Having described the method for determining the predictor coefficients, let us now consider the block diagram of a practical DPCM system, shown in Figure 12.3. In this configuration the predictor is implemented with the feedback loop around the quantizer. The input to the predictor is denoted as s̃(n), which represents the signal sample s(n) modified by the quantization process, and the output of the predictor is

ŝ(n) = Σ_{i=1}^{p} a(i) s̃(n − i)    (12.13)

The difference

e(n) = s(n) − ŝ(n)    (12.14)

is the input to the quantizer, and ẽ(n) denotes the output. Each value of the quantized prediction error ẽ(n) is encoded into a sequence of binary digits and transmitted over the channel to the receiver. The quantized error ẽ(n) is also added to the predicted value ŝ(n) to yield s̃(n).

FIGURE 12.3 Block diagram of a DPCM transcoder: (a) encoder, (b) decoder

At the receiver the same predictor that was used at the transmitting end is synthesized, and its output ŝ(n) is added to ẽ(n) to yield s̃(n). The signal s̃(n) is the desired excitation for the predictor and also the desired output sequence from which the reconstructed signal s̃(t) is obtained by filtering, as shown in Figure 12.3b.

The use of feedback around the quantizer, as described, ensures that the error in s̃(n) is simply the quantization error q(n) = ẽ(n) − e(n) and that there is no accumulation of previous quantization errors in the implementation of the decoder. That is,

q(n) = ẽ(n) − e(n) = ẽ(n) − s(n) + ŝ(n) = s̃(n) − s(n)    (12.15)

Hence s̃(n) = s(n) + q(n). This means that the quantized sample s̃(n) differs from the input s(n) by the quantization error q(n) independent of the predictor used. Therefore the quantization errors do not accumulate.
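This no-accumulation property is easy to demonstrate. Below is a Python sketch of the Figure 12.3 loop (a stand-in for the MATLAB modules the project asks for); the first-order predictor, the uniform quantizer, and all function names are our illustrative choices.

```python
# Sketch of the DPCM feedback loop of Figure 12.3. Encoder and decoder
# share the same predictor state, so the decoder's output equals the
# encoder's internal s~(n).

def quantize(e, step=0.25):
    """Uniform quantizer: the error magnitude is at most step/2."""
    return step * round(e / step)

def dpcm(s, a1=0.9, step=0.25):
    s_tilde = 0.0                  # most recent reconstructed sample
    recon = []
    for x in s:
        s_hat = a1 * s_tilde       # prediction from the QUANTIZED past, (12.13)
        e = x - s_hat              # prediction error, (12.14)
        e_tilde = quantize(e, step)
        s_tilde = s_hat + e_tilde  # reconstruction, used by both ends
        recon.append(s_tilde)
    return recon

s = [0.0, 0.4, 0.9, 1.3, 1.1, 0.6, 0.1, -0.3]
recon = dpcm(s)

# (12.15): s~(n) - s(n) = q(n), a single quantization error --
# the errors do not pile up along the sequence.
for orig, rec in zip(s, recon):
    assert abs(rec - orig) <= 0.125 + 1e-12
```

Had the predictor been driven by the unquantized samples s(n − i) instead, the encoder and decoder states would drift apart and the quantization errors would accumulate at the receiver.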

In the DPCM system illustrated in Figure 12.3, the estimate or predicted value ŝ(n) of the signal sample s(n) is obtained by taking a linear combination of past values s̃(n − k), k = 1, 2, . . . , p, as indicated by (12.13). An improvement in the quality of the estimate is obtained by including linearly filtered past values of the quantized error. Specifically, the estimate of s(n) may be expressed as

ŝ(n) = Σ_{i=1}^{p} a(i) s̃(n − i) + Σ_{i=1}^{m} b(i) ẽ(n − i)    (12.16)

where b(i) are the coefficients of the filter for the quantized error sequence ẽ(n). The block diagrams of the encoder at the transmitter and the decoder at the receiver are shown in Figure 12.4. The two sets of coefficients a(i) and b(i) are selected to minimize some function of the error e(n) = s̃(n) − s(n), such as the sum of squared errors.
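The estimate of (12.16) can be sketched as a single function of the two histories; the coefficient and history values below are arbitrary placeholders, not values from the text.

```python
# Sketch of the modified estimate (12.16): past reconstructed samples
# filtered by a(i), plus past quantized errors filtered by b(i).

def predict(s_tilde_past, e_tilde_past, a, b):
    """Histories are ordered most recent first: s~(n-1), s~(n-2), ..."""
    return (sum(ai * si for ai, si in zip(a, s_tilde_past))
            + sum(bi * ei for bi, ei in zip(b, e_tilde_past)))

s_hat = predict([1.2, 1.0], [0.05, -0.02], a=[0.9, -0.2], b=[0.5, 0.1])
# 0.9*1.2 - 0.2*1.0 + 0.5*0.05 - 0.1*0.02 = 0.903
```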

By using a logarithmic compressor and a 4-bit quantizer for the error sequence e(n), DPCM results in high-quality speech at a rate of 32,000 bps, which is a factor of two lower than logarithmic PCM.
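The bit-rate arithmetic behind these figures, assuming the standard 8 kHz speech sampling rate and 8-bit logarithmic PCM as the reference (both are conventional telephony values; neither is stated in this passage):

```python
# Bit-rate comparison. The 8 kHz rate and the 8-bit log-PCM reference
# are standard telephony assumptions, not given in the text above.
sampling_rate = 8000                  # samples per second (assumed)
dpcm_rate = sampling_rate * 4         # 4-bit quantizer -> 32,000 bps
log_pcm_rate = sampling_rate * 8      # 8-bit log-PCM  -> 64,000 bps
ratio = log_pcm_rate / dpcm_rate      # the "factor of two"
```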

12.2.1 PROJECT 12.2: DPCM

The objective of this project is to gain understanding of the DPCM encoding and decoding operations. For simulation purposes, generate correlated random sequences using a pole-zero signal model of the form

s(n) = a(1) s(n − 1) + b₀ x(n) + b₁ x(n − 1)    (12.17)

where x(n) is a zero-mean unit-variance Gaussian sequence. This can be done using the filter function. The sequences developed in Project 12.1

can also be used for simulation.

FIGURE 12.4 DPCM modified by the linearly filtered error sequence

Develop the following three MATLAB modules for this project:

1. a model predictor function to implement (12.12), given the input signal s(n);

2. a DPCM encoder function to implement the block diagram of Figure 12.3a, which accepts a zero-mean input sequence and produces a quantized b-bit integer error sequence, where b is a free parameter; and

3. a DPCM decoder function of Figure 12.3b, which reconstructs the signal from the quantized error sequence.

Experiment with several p-order prediction models for a given signal and determine the optimum order. Compare this DPCM implementation with the PCM system of Project 12.1 (at the end of the chapter) and comment on the results. Extend this implementation to include an mth-order moving average filter as indicated in (12.16).
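As a starting point for the project, the signal generation of (12.17) and a first-order predictor fit can be sketched in pure Python (a stand-in for the MATLAB filter-based version); the parameter values and helper names below are our own, and extending to higher p follows the same pattern.

```python
# Project sketch: generate the correlated sequence of (12.17) and fit
# a first-order predictor, then check that prediction reduces the
# error power below the raw signal power.
import random

def generate(N, a1=0.8, b0=1.0, b1=0.5, seed=0):
    """s(n) = a(1) s(n-1) + b0 x(n) + b1 x(n-1), with x(n) ~ N(0, 1)."""
    rng = random.Random(seed)
    s, s_prev, x_prev = [], 0.0, 0.0
    for _ in range(N):
        x = rng.gauss(0, 1)
        s_prev = a1 * s_prev + b0 * x + b1 * x_prev
        s.append(s_prev)
        x_prev = x
    return s

def error_power(s, a):
    """Average squared prediction error for coefficients a(1..p)."""
    p = len(a)
    err = sum((s[n] - sum(a[i] * s[n - i - 1] for i in range(p))) ** 2
              for n in range(p, len(s)))
    return err / (len(s) - p)

s = generate(4000)
r0 = sum(v * v for v in s)
r1 = sum(s[n] * s[n - 1] for n in range(1, len(s)))
a = [r1 / r0]                 # first-order normal equation: a(1) = r(1)/r(0)
signal_power = r0 / len(s)
# error_power(s, a) should be well below signal_power for this
# strongly correlated sequence.
```

Repeating the fit for p = 1, 2, 3, . . . and plotting error_power against p is one way to locate the optimum prediction order the project asks for.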