10.4 LINEAR TRANSFORMATION OF STATE VECTOR
In Section 10.1 we saw that the state of a system can be specified in several ways. The sets of all possible state variables are related; in other words, if we are given one set of state variables, we should be able to relate it to any other set. We are particularly interested in a linear type of relationship. Let q_1, q_2, ..., q_N and w_1, w_2, ..., w_N be two different sets of state variables specifying the same system. Let these sets be related by linear equations as

w_1 = p_11 q_1 + p_12 q_2 + ... + p_1N q_N
w_2 = p_21 q_1 + p_22 q_2 + ... + p_2N q_N
  .
  .
w_N = p_N1 q_1 + p_N2 q_2 + ... + p_NN q_N        (10.67a)

or

[w_1]   [p_11  p_12  ...  p_1N] [q_1]
[w_2] = [p_21  p_22  ...  p_2N] [q_2]
[ : ]   [  :     :          : ] [ : ]
[w_N]   [p_N1  p_N2  ...  p_NN] [q_N]        (10.67b)

Defining the vector w and matrix P as just shown, we can write Eq. (10.67b) as

w = Pq        (10.67c)

and

q = P^(-1) w        (10.67d)
Thus, the state vector q is transformed into another state vector w through the linear transformation in Eq. (10.67c). If we know w, we can determine q from Eq. (10.67d), provided P^(-1) exists. This is equivalent to saying that P is a nonsingular matrix [†] (|P| ≠ 0).
Thus, if P is a nonsingular matrix, the vector w defined by Eq. (10.67c) is also a state vector. Consider the state equation of a system

q̇ = Aq + Bx        (10.68a)

If

w = Pq        (10.68b)

then

ẇ = Pq̇ = P(Aq + Bx) = PAq + PBx = PAP^(-1)w + PBx        (10.68c)

or

ẇ = Âw + B̂x        (10.68d)

where

Â = PAP^(-1)  and  B̂ = PB        (10.69)
Equation (10.68d) is a state equation for the same system, but now it is expressed in terms of the state vector w.
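As a quick numerical illustration (a Python/NumPy sketch in place of the text's MATLAB; the matrices are those of Example 10.9 below), the transformation amounts to two matrix products:

```python
import numpy as np

# State matrices of the original realization (Example 10.9)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0], [2.0]])

# Transformation w = Pq; P must be nonsingular (|P| != 0),
# otherwise w is not a valid state vector
P = np.array([[1.0, 1.0], [1.0, -1.0]])
assert np.linalg.det(P) != 0

P_inv = np.linalg.inv(P)
A_hat = P @ A @ P_inv   # Eq. (10.69): A_hat = P A P^(-1)
B_hat = P @ B           # Eq. (10.69): B_hat = P B
```

The same two lines apply to any valid transformation matrix P; only nonsingularity is required.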
The output equation is also modified. Let the original output equation be

y = Cq + Dx

In terms of the new state variable w, this equation becomes

y = CP^(-1)w + Dx = Ĉw + Dx

where

Ĉ = CP^(-1)
EXAMPLE 10.9
The state equations of a certain system are given by

q̇_1 = q_2 + x
q̇_2 = -2q_1 - 3q_2 + 2x

that is,

q̇ = [0 1; -2 -3] q + [1; 2] x        (10.70a)
Find the state equations for this system when the new state variables w_1 and w_2 are

w_1 = q_1 + q_2
w_2 = q_1 - q_2

or

w = [1 1; 1 -1] q = Pq        (10.70b)
According to Eq. (10.68d), the state equation for the state variable w is given by

ẇ = Âw + B̂x

where [see Eqs. (10.69)]

Â = PAP^(-1) = [1 1; 1 -1] [0 1; -2 -3] [1/2 1/2; 1/2 -1/2] = [-2 0; 3 -1]

and

B̂ = PB = [1 1; 1 -1] [1; 2] = [3; -1]

Therefore

ẇ_1 = -2w_1 + 3x
ẇ_2 = 3w_1 - w_2 - x
This is the desired state equation for the state vector w. The solution of this equation requires knowledge of the initial state w(0), which can be obtained from the given initial state q(0) by using Eq. (10.70b).
COMPUTER EXAMPLE C10.4
Repeat Example 10.9 using MATLAB.

>> A = [0 1; -2 -3]; B = [1; 2];
>> P = [1 1; 1 -1];
>> Ahat = P*A*inv(P), Bhat = P*B

Ahat =
    -2     0
     3    -1
Bhat =
     3
    -1

Therefore,

ẇ = [-2 0; 3 -1] w + [3; -1] x
INVARIANCE OF EIGENVALUES
We have seen that the poles of all possible transfer functions of a system are the eigenvalues of the matrix A. If we transform a state vector from q to w, the variables w_1, w_2, ..., w_N are linear combinations of q_1, q_2, ..., q_N and therefore may be considered to be outputs. Hence, the poles of the transfer functions relating w_1, w_2, ..., w_N to the various inputs must also be the eigenvalues of matrix A. On the other hand, the system is also specified by Eq. (10.68d). This means that the poles of the transfer functions must be the eigenvalues of Â. Therefore, the eigenvalues of matrix A remain unchanged for the linear transformation of variables represented by Eq. (10.67), and the eigenvalues of matrix A and matrix Â (Â = PAP^(-1)) are identical, implying that the characteristic equations of A and Â are also identical.

This result also can be proved alternately as follows. Consider the matrix P(sI - A)P^(-1). We have

P(sI - A)P^(-1) = sPP^(-1) - PAP^(-1) = sI - Â
Taking the determinants of both sides, we obtain

|P| |sI - A| |P^(-1)| = |sI - Â|

The determinants |P| and |P^(-1)| are reciprocals of each other. Hence

|sI - A| = |sI - Â|
This is the desired result. We have shown that the characteristic equations of A and Â are identical. Hence the eigenvalues of A and Â are identical. In Example 10.9, matrix A is given as

A = [0 1; -2 -3]

The characteristic equation is

|sI - A| = |s -1; 2 s+3| = s^2 + 3s + 2 = (s + 1)(s + 2) = 0

Also

Â = [-2 0; 3 -1]

and

|sI - Â| = |s+2 0; -3 s+1| = (s + 2)(s + 1) = s^2 + 3s + 2 = 0

This result verifies that the characteristic equations of A and Â are identical.
10.4-1 Diagonalization of Matrix A
For several reasons, it is desirable to make matrix A diagonal. If A is not diagonal, we can transform the state variables such that the resulting matrix Â is diagonal. [†] One can show that for any diagonal matrix A, the diagonal elements of this matrix must necessarily be λ_1, λ_2, ..., λ_N (the eigenvalues) of the matrix. Consider the diagonal matrix A:

A = [a_1 0 ... 0; 0 a_2 ... 0; ...; 0 0 ... a_N]

The characteristic equation is given by

|sI - A| = (s - a_1)(s - a_2)...(s - a_N) = 0

or

s = a_1, a_2, ..., a_N

Hence, the eigenvalues of A are a_1, a_2, ..., a_N. The nonzero (diagonal) elements of a diagonal matrix are therefore its eigenvalues λ_1, λ_2, ..., λ_N. We shall denote the diagonal matrix by a special symbol, Λ:

Λ = [λ_1 0 ... 0; 0 λ_2 ... 0; ...; 0 0 ... λ_N]        (10.72)
Let us now consider the transformation of the state vector q such that the resulting matrix Â is a diagonal matrix Λ. Consider the system

q̇ = Aq + Bx

We shall assume that λ_1, λ_2, ..., λ_N, the eigenvalues of A, are distinct (no repeated roots). Let us transform the state vector q into the new state vector z, using the transformation

z = Pq        (10.73a)

Then, after the development of Eq. (10.68c), we have

ż = PAP^(-1)z + PBx        (10.73b)
We desire the transformation to be such that PAP^(-1) is a diagonal matrix Λ given by Eq. (10.72), or

ż = Λz + B̂x        (10.73c)

Hence

PAP^(-1) = Λ        (10.74a)

or

PA = ΛP        (10.74b)

We know Λ and A. Equation (10.74b) therefore can be solved to determine P.

EXAMPLE 10.10
Find the diagonalized form of the state equation for the system in Example 10.9. In this case,

A = [0 1; -2 -3]

We found λ_1 = -1 and λ_2 = -2. Hence

Λ = [-1 0; 0 -2]

and Eq. (10.74b) becomes

[p_11 p_12; p_21 p_22] [0 1; -2 -3] = [-1 0; 0 -2] [p_11 p_12; p_21 p_22]

Equating the four elements on the two sides, we obtain

-2p_12 = -p_11        (10.75a)
p_11 - 3p_12 = -p_12        (10.75b)
-2p_22 = -2p_21        (10.75c)
p_21 - 3p_22 = -2p_22        (10.75d)
The reader will immediately recognize that Eqs. (10.75a) and (10.75b) are identical. Similarly, Eqs. (10.75c) and (10.75d) are identical. Hence two equations may be discarded, leaving us with only two equations [Eqs. (10.75a) and (10.75c)] and four unknowns. This observation means that there is no unique solution. There is, in fact, an infinite number of solutions. We can assign any value to p_11 and p_21 to yield one possible solution. [†] If p_11 = k_1 and p_21 = k_2, then from Eqs. (10.75a) and (10.75c) we have p_12 = k_1/2 and p_22 = k_2:

P = [k_1 k_1/2; k_2 k_2]

We may assign any values to k_1 and k_2. For convenience, let k_1 = 2 and k_2 = 1. This substitution yields

P = [2 1; 1 1]
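A quick numerical check (a Python/NumPy sketch, not part of the original text) confirms that the choice k_1 = 2, k_2 = 1 satisfies Eq. (10.74b) and diagonalizes A:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Lam = np.diag([-1.0, -2.0])              # Lambda from Eq. (10.72)
P = np.array([[2.0, 1.0], [1.0, 1.0]])   # k1 = 2, k2 = 1

# Eq. (10.74b): PA = Lambda P
check_74b = np.allclose(P @ A, Lam @ P)

# Equivalently, P A P^(-1) equals the diagonal matrix Lambda
A_hat = P @ A @ np.linalg.inv(P)
```

Any other nonzero choice of k_1 and k_2 passes the same check, reflecting the nonuniqueness of P.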
The transformed variables [Eq. (10.73a)] are

z_1 = 2q_1 + q_2
z_2 = q_1 + q_2        (10.76)

Thus, the new state variables z_1 and z_2 are related to q_1 and q_2 by Eq. (10.76). The system equation with z as the state vector is given by [see Eq. (10.73c)]

ż = Λz + B̂x

where B̂ = PB = [2 1; 1 1] [1; 2] = [4; 3]. Hence

ż_1 = -z_1 + 4x
ż_2 = -2z_2 + 3x        (10.77)
Note the distinctive nature of these state equations. Each state equation involves only one variable and therefore can be solved by itself. A general state equation has the derivative of one state variable equal to a linear combination of all state variables. Such is not the case with the diagonalized matrix Λ. Each state variable z_i is chosen so that it is uncoupled from the rest of the variables; hence a system with N eigenvalues is split into N decoupled systems, each with an equation of the form

ż_i = λ_i z_i + b̂_i x
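Because each decoupled equation involves only its own state variable, each can be integrated on its own. The sketch below (Python/NumPy; a unit step input and zero initial state are assumed for illustration) integrates the two scalar equations ż_1 = -z_1 + 4x and ż_2 = -2z_2 + 3x independently and compares them with their closed-form solutions z_1(t) = 4(1 - e^(-t)) and z_2(t) = 1.5(1 - e^(-2t)):

```python
import numpy as np

lam = np.array([-1.0, -2.0])   # eigenvalues (diagonal of Lambda)
b_hat = np.array([4.0, 3.0])   # B_hat = PB from Example 10.10

dt, T = 1e-4, 5.0
t = np.arange(0.0, T, dt)
z = np.zeros((2, t.size))      # z(0) = 0

# Forward-Euler integration; note z_1 and z_2 never reference each other
for k in range(1, t.size):
    z[:, k] = z[:, k - 1] + dt * (lam * z[:, k - 1] + b_hat * 1.0)  # x(t) = 1

# Closed-form solutions of the decoupled first-order equations
z1_exact = 4.0 * (1.0 - np.exp(-t))
z2_exact = 1.5 * (1.0 - np.exp(-2.0 * t))
```

The update for each z_i uses only z_i itself, which is exactly the decoupling the diagonal form provides.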
This fact also can be readily seen from Fig. 10.7a, which is a realization of the system represented by Eq. (10.77). In contrast, consider the original state equations [see Eq. (10.70a)]

q̇_1 = q_2 + x
q̇_2 = -2q_1 - 3q_2 + 2x

Figure 10.7: Two realizations of the second-order system.

A realization for these equations is shown in Fig. 10.7b. It can be seen from Fig. 10.7a that the states z_1 and z_2 are decoupled, whereas the states q_1 and q_2 (Fig. 10.7b) are coupled. It should be remembered that Figs. 10.7a and 10.7b are realizations of the same system. [†]
COMPUTER EXAMPLE C10.5
Repeat Example 10.10 using MATLAB. [Caution: Neither P nor B̂ is unique.]

>> A = [0 1; -2 -3]; B = [1; 2];
>> [V, Lambda] = eig(A);
>> P = inv(V), Lambda, Bhat = P*B

P =
    2.8284    1.4142
    2.2361    2.2361
Lambda =
    -1     0
     0    -2
[ † ] This condition is equivalent to saying that all N equations in Eq. (10.67a) are linearly independent; that is, none of the N equations can be expressed as a linear combination of the remaining equations.
[ † ] In this discussion we assume distinct eigenvalues. If the eigenvalues are not distinct, we can reduce the matrix to a modified diagonalized (Jordan) form.
[†] If, however, we want the state equations in diagonalized form, as in Eq. (10.30a), where all the elements of matrix B̂ are unity, there is a unique solution. The reason is that the equation B̂ = PB, where all the elements of B̂ are unity, imposes additional constraints. In the present example, this condition will yield p_11 = 1/2, p_12 = 1/4, p_21 = 1/3, and p_22 = 1/3. The relationship between z and q is then

z_1 = (1/2)q_1 + (1/4)q_2
z_2 = (1/3)q_1 + (1/3)q_2
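These footnote values can be verified numerically (a Python/NumPy sketch): with this P, B̂ = PB has all elements equal to unity while PAP^(-1) is still the diagonal matrix Λ.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0], [2.0]])

# P from the footnote: p11 = 1/2, p12 = 1/4, p21 = 1/3, p22 = 1/3
P = np.array([[1/2, 1/4], [1/3, 1/3]])

B_hat = P @ B                     # should be all ones
A_hat = P @ A @ np.linalg.inv(P)  # should still equal Lambda = diag(-1, -2)
```

The extra constraint B̂ = [1; 1] pins down k_1 and k_2, which is why this particular P is unique.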
[ † ] Here we only have a simulated state equation; the outputs are not shown. The outputs are linear combinations of state variables (and inputs). Hence, the output equation can be easily incorporated into these diagrams.