10.3 SOLUTION OF STATE EQUATIONS
The state equations of a linear system are N simultaneous linear differential equations of the first order. We studied the techniques of solving linear differential equations in Chapters 2 and 4. The same techniques can be applied to state equations without modification. However, it is more convenient to carry out the solution in the framework of matrix notation.
These equations can be solved in both the time domain and the frequency domain (Laplace transform). The latter is easier to handle than the time-domain solution. For this reason, we shall first consider the Laplace transform solution.
10.3-1 Laplace Transform Solution of State Equations
The ith state equation [Eq. (10.6a)] is of the form

q̇_i = a_{i1} q_1 + a_{i2} q_2 + ⋯ + a_{iN} q_N + b_{i1} x_1 + b_{i2} x_2 + ⋯ + b_{ij} x_j   (10.31a)

We shall take the Laplace transform of this equation. Let

q_i(t) ⇔ Q_i(s)

so that

q̇_i(t) ⇔ sQ_i(s) − q_i(0)

Also, let

x_i(t) ⇔ X_i(s)

The Laplace transform of Eq. (10.31a) yields

sQ_i(s) − q_i(0) = a_{i1} Q_1(s) + ⋯ + a_{iN} Q_N(s) + b_{i1} X_1(s) + ⋯ + b_{ij} X_j(s)   (10.31b)
Taking the Laplace transforms of all N state equations, we obtain

sQ(s) − q(0) = AQ(s) + BX(s)   (10.32a)

Defining the vectors Q(s), q(0), and X(s) as indicated, we have

sQ(s) − AQ(s) = q(0) + BX(s)

or

(sI − A)Q(s) = q(0) + BX(s)   (10.32b)

where I is the N × N identity matrix. From Eq. (10.32b), we have

Q(s) = (sI − A)^{−1}[q(0) + BX(s)]   (10.33a)
     = Φ(s)[q(0) + BX(s)]            (10.33b)

where

Φ(s) = (sI − A)^{−1}   (10.34)

Thus, from Eq. (10.33b),

Q(s) = Φ(s)q(0) + Φ(s)BX(s)   (10.35a)

and

q(t) = L^{−1}{Φ(s)q(0)} + L^{−1}{Φ(s)BX(s)}   (10.35b)
Equation (10.35b) gives the desired solution. Observe the two components of the solution. The first component yields q(t) when the input x(t) = 0. Hence the first component is the zero-input component. In a similar manner, we see that the second component is the zero-state component.
EXAMPLE 10.5
Find the state vector q(t) for the system whose state equation is given by

q̇ = Aq + Bx

where

A = [ −12    2/3 ]      B = [ 1/3 ]      x(t) = u(t)
    [ −36    −1  ]          [  1  ]

and the initial conditions are q_1(0) = 2, q_2(0) = 1.
From Eq. (10.33b), we have

Q(s) = Φ(s)[q(0) + BX(s)]

Let us first find Φ(s). We have

(sI − A) = [ s+12    −2/3 ]
           [  36     s+1  ]

and

Φ(s) = (sI − A)^{−1} = 1/((s+4)(s+9)) [ s+1     2/3  ]
                                      [ −36     s+12 ]

Now, q(0) is given as

q(0) = [ 2 ]
       [ 1 ]

Also, X(s) = 1/s, and

BX(s) = [ 1/(3s) ]
        [ 1/s    ]

Therefore

q(0) + BX(s) = [ 2 + 1/(3s) ]
               [ 1 + 1/s    ]

and

Q(s) = Φ(s)[q(0) + BX(s)]
     = [ (2s^2 + 3s + 1)/(s(s+4)(s+9)) ]
       [ (s − 59)/((s+4)(s+9))         ]
     = [ (1/36)/s − (21/20)/(s+4) + (136/45)/(s+9) ]
       [ −(63/5)/(s+4) + (68/5)/(s+9)              ]

The inverse Laplace transform of this equation yields

q(t) = [ 1/36 − (21/20)e^{−4t} + (136/45)e^{−9t} ] u(t)   (10.36b)
       [ −(63/5)e^{−4t} + (68/5)e^{−9t}          ]
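As a cross-check on this example, the closed-form solution can be verified numerically by confirming that it satisfies the state equation q̇ = Aq + Bx and the stated initial conditions. The sketch below uses Python with NumPy (an assumption; the book's computer examples use MATLAB):

```python
import numpy as np

# System of Example 10.5: q' = Aq + Bx with x(t) = u(t), q(0) = [2, 1].
A = np.array([[-12.0, 2.0 / 3.0], [-36.0, -1.0]])
B = np.array([1.0 / 3.0, 1.0])

def q(t):
    """Closed-form state vector obtained by the Laplace transform method."""
    return np.array([
        (136 / 45) * np.exp(-9 * t) - (21 / 20) * np.exp(-4 * t) + 1 / 36,
        (68 / 5) * np.exp(-9 * t) - (63 / 5) * np.exp(-4 * t),
    ])

def dq(t):
    """Time derivative of the closed form, differentiated term by term."""
    return np.array([
        -9 * (136 / 45) * np.exp(-9 * t) + 4 * (21 / 20) * np.exp(-4 * t),
        -9 * (68 / 5) * np.exp(-9 * t) + 4 * (63 / 5) * np.exp(-4 * t),
    ])

# Initial conditions and the state equation itself (x = 1 for t >= 0):
assert np.allclose(q(0.0), [2.0, 1.0])
for t in (0.0, 0.1, 0.5, 1.0):
    assert np.allclose(dq(t), A @ q(t) + B, atol=1e-9)
```

Any candidate solution can be screened this way before (or after) carrying out the partial-fraction work by hand.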
COMPUTER EXAMPLE C10.2
Repeat Example 10.5 using MATLAB. [Caution: See the caution in Computer Example C10.1.]

>> syms s
>> A = [-12 2/3;-36 -1]; B = [1/3; 1]; q0 = [2;1]; X = 1/s;
>> q = ilaplace(inv(s*eye(2)-A)*(q0+B*X))
q =
[ 136/45*exp(-9*t) - 21/20*exp(-4*t) + 1/36]
[ 68/5*exp(-9*t) - 63/5*exp(-4*t)]
Next, a plot of the state vector is generated.

>> t = (0:.01:2)'; q = subs(q);
>> q1 = q(1:length(t)); q2 = q(length(t)+1:end);
>> plot(t,q1,'k',t,q2,'k--'); xlabel('t'); ylabel('Amplitude');
>> legend('q_1(t)','q_2(t)');

The plot is shown in Fig. C10.2.
Figure C10.2

THE OUTPUT
The output equation is given by

y(t) = Cq(t) + Dx(t)

and

Y(s) = CQ(s) + DX(s)

Upon substituting Eq. (10.33b) into this equation, we have

Y(s) = C{Φ(s)[q(0) + BX(s)]} + DX(s)
     = CΦ(s)q(0) + [CΦ(s)B + D]X(s)   (10.37)

The zero-state response [i.e., the response Y(s) when q(0) = 0] is given by

Y(s) = [CΦ(s)B + D]X(s)   (10.38a)

Note that the transfer function of a system is defined under the zero-state condition [see Eq. (4.32)]. The matrix CΦ(s)B + D is the transfer function matrix H(s) of the system, which relates the responses y_1, y_2, ..., y_k to the inputs x_1, x_2, ..., x_j:

H(s) = CΦ(s)B + D   (10.38b)

and the zero-state response is

Y(s) = H(s)X(s)   (10.39)
The matrix H(s) is a k × j matrix (k is the number of outputs and j is the number of inputs). The ijth element H_ij(s) of H(s) is the transfer function that relates the output y_i(t) to the input x_j(t).
EXAMPLE 10.6
Let us consider a system with a state equation

q̇ = [  0    1 ] q + [ 1   0 ] x    (10.40a)
    [ −2   −3 ]     [ 1   1 ]

and an output equation

y = [ 1   0 ] q + [ 0   0 ] x    (10.40b)
    [ 1   1 ]     [ 1   0 ]
    [ 0   2 ]     [ 0   1 ]

In this case,

A = [  0   1 ]    B = [ 1  0 ]    C = [ 1  0 ]    D = [ 0  0 ]    (10.40c)
    [ −2  −3 ]        [ 1  1 ]        [ 1  1 ]        [ 1  0 ]
                                      [ 0  2 ]        [ 0  1 ]

and

Φ(s) = (sI − A)^{−1} = 1/((s+1)(s+2)) [ s+3    1 ]    (10.41)
                                      [ −2     s ]

Hence, the transfer function matrix H(s) is given by

H(s) = CΦ(s)B + D = [ (s+4)/((s+1)(s+2))      1/((s+1)(s+2))            ]
                    [ (s+4)/(s+2)             1/(s+2)                   ]
                    [ 2(s−2)/((s+1)(s+2))     (s^2+5s+2)/((s+1)(s+2))   ]    (10.42)
and the zero-state response is

Y(s) = H(s)X(s)

Remember that the ijth element of the transfer function matrix in Eq. (10.42) represents the transfer function that relates the output y_i(t) to the input x_j(t).
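Since H(s) = CΦ(s)B + D is an ordinary matrix expression once s is fixed, the closed-form entries of Eq. (10.42) can be spot-checked numerically at a few sample frequencies. A sketch in Python with NumPy (an assumption; the book's own computer examples use MATLAB):

```python
import numpy as np

# Matrices of Example 10.6.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
C = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def H(s):
    """Transfer function matrix H(s) = C (sI - A)^(-1) B + D."""
    Phi = np.linalg.inv(s * np.eye(2) - A)
    return C @ Phi @ B + D

# Compare with the closed-form entries at a sample point s = 1:
s = 1.0
expected = np.array([
    [(s + 4) / (s**2 + 3*s + 2),     1 / (s**2 + 3*s + 2)],
    [(s + 4) / (s + 2),              1 / (s + 2)],
    [2*(s - 2) / (s**2 + 3*s + 2),   (s**2 + 5*s + 2) / (s**2 + 3*s + 2)],
])
assert np.allclose(H(s), expected)
```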
COMPUTER EXAMPLE C10.3
Repeat Example 10.6 using MATLAB.

>> A = [0 1; -2 -3]; B = [1 0; 1 1];
>> C = [1 0; 1 1; 0 2]; D = [0 0;1 0;0 1];
>> syms s; H = simplify(C*inv(s*eye(2)-A)*B+D)
H =
[ (s+4)/(s^2+3*s+2),      1/(s^2+3*s+2)]
[ (s+4)/(s+2),            1/(s+2)]
[ 2*(-2+s)/(s^2+3*s+2),   (5*s+s^2+2)/(s^2+3*s+2)]
Transfer functions relating particular inputs to particular outputs can be obtained by using the ss2tf function.

>> disp('Transfer function relating y_3 and x_2:')
Transfer function relating y_3 and x_2:
>> [num, den] = ss2tf(A, B, C, D, 2); tf(num(3,:), den)
Transfer function:
s^2 + 5 s + 2
-------------
s^2 + 3 s + 2
CHARACTERISTIC ROOTS (EIGENVALUES) OF A MATRIX
It is interesting to observe that the denominator of every transfer function in Eq. (10.42) is (s + 1)(s + 2), except for H_21(s) and H_22(s), where the factor (s + 1) is canceled. This is no coincidence. We see that the denominator of every element of Φ(s) is |sI − A| because Φ(s) = (sI − A)^{−1}, and the inverse of a matrix has its determinant in the denominator. Since C, B, and D are matrices with constant elements, we see from Eq. (10.38b) that the denominator of Φ(s) will also be the denominator of H(s). Hence, the denominator of every element of H(s) is |sI − A|, except for the possible cancellation of common factors mentioned earlier. In other words, the zeros of the polynomial |sI − A| are also the poles of all transfer functions of the system. Therefore, the zeros of the polynomial |sI − A| are the characteristic roots of the system. Hence, the characteristic roots of the system are the roots of the equation

|sI − A| = 0   (10.43a)

Since |sI − A| is an Nth-order polynomial in s with N zeros λ_1, λ_2, ..., λ_N, we can write Eq. (10.43a) as

|sI − A| = (s − λ_1)(s − λ_2) ⋯ (s − λ_N) = 0   (10.43b)
For the system in Example 10.6,

|sI − A| = | s    −1  |
           | 2    s+3 |  = s^2 + 3s + 2

Hence

|sI − A| = (s + 1)(s + 2) = 0

and λ_1 = −1, λ_2 = −2. Equation (10.43a) is known as the characteristic equation of the matrix A, and λ_1, λ_2, ..., λ_N are the characteristic roots of A. The term eigenvalue, meaning "characteristic value" in German, is also commonly used in the literature. Thus, we have shown that the characteristic roots of a system are the eigenvalues (characteristic values) of the matrix A.
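Numerically, the characteristic roots can be obtained directly as eigenvalues, with no need to expand |sI − A| by hand. A sketch in Python with NumPy (an assumption; the book's computer examples use MATLAB), for the matrix of Example 10.6:

```python
import numpy as np

# Matrix A of Example 10.6; its characteristic polynomial is s^2 + 3s + 2.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# The characteristic roots of the system are the eigenvalues of A:
eigs = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigs, [-2.0, -1.0])

# np.poly recovers the coefficients of |sI - A| from A itself:
assert np.allclose(np.poly(A), [1.0, 3.0, 2.0])
```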
At this point, the reader will recall that if λ_1, λ_2, ..., λ_N are the poles of the transfer function, then the zero-input response is of the form

y_0(t) = c_1 e^{λ_1 t} + c_2 e^{λ_2 t} + ⋯ + c_N e^{λ_N t}   (10.45)

This fact is also obvious from Eq. (10.37). The denominator of every element of the zero-input response matrix CΦ(s)q(0) is |sI − A| = (s − λ_1)(s − λ_2) ⋯ (s − λ_N). Therefore, the partial fraction expansion and the subsequent inverse Laplace transform will yield a zero-input component of the form in Eq. (10.45).
10.3-2 Time-Domain Solution of State Equations
The state equation is

q̇ = Aq + Bx   (10.46)

We now show that the solution of the vector differential Equation (10.46) is

q(t) = e^{At} q(0) + ∫_0^t e^{A(t−τ)} Bx(τ) dτ   (10.47)
Before proceeding further, we must define the exponential of the matrix appearing in Eq. (10.47). An exponential of a matrix is defined by an infinite series identical to that used in defining the exponential of a scalar. We shall define

e^{At} = I + At + A^2 t^2/2! + A^3 t^3/3! + ⋯ + A^k t^k/k! + ⋯   (10.48a)

For a given matrix A, the successive terms A^2 t^2/2!, A^3 t^3/3!, and so on, are computed by repeated matrix multiplication. We can show that the infinite series in Eq. (10.48a) is absolutely and uniformly convergent for all values of t. Consequently, it can be differentiated or integrated term by term. Thus, to find (d/dt)e^{At}, we differentiate the series on the right-hand side of Eq. (10.48a) term by term:

d/dt e^{At} = A + A^2 t + A^3 t^2/2! + A^4 t^3/3! + ⋯   (10.51a)

The infinite series on the right-hand side of Eq. (10.51a) also may be expressed as

d/dt e^{At} = A[I + At + A^2 t^2/2! + ⋯] = [I + At + A^2 t^2/2! + ⋯]A

Hence

d/dt e^{At} = Ae^{At} = e^{At}A   (10.52)
Also note that from the definition (10.48a), it follows that

e^{At}|_{t=0} = I   (10.53a)

where I is the N × N identity matrix. If we premultiply or postmultiply the infinite series for e^{At} [Eq. (10.48a)] by the infinite series for e^{−At}, we find that

e^{At} e^{−At} = e^{−At} e^{At} = I   (10.53b)
In Section B.6-3, we showed that

d/dt [F(t)G(t)] = (dF/dt)G + F(dG/dt)

Using this relationship, we observe that

d/dt [e^{−At} q] = −Ae^{−At} q + e^{−At} q̇   (10.54)
We now premultiply both sides of Eq. (10.46) by e^{−At} to yield

e^{−At} q̇ = e^{−At} Aq + e^{−At} Bx   (10.55a)

or

e^{−At} q̇ − e^{−At} Aq = e^{−At} Bx   (10.55b)

A glance at Eq. (10.54) shows that the left-hand side of Eq. (10.55b) is (d/dt)[e^{−At} q].

Hence

d/dt [e^{−At} q] = e^{−At} Bx   (10.56a)

The integration of both sides of this equation from 0 to t yields

e^{−At} q(t) − q(0) = ∫_0^t e^{−Aτ} Bx(τ) dτ   (10.56b)

or

e^{−At} q(t) = q(0) + ∫_0^t e^{−Aτ} Bx(τ) dτ   (10.56c)

Hence

Premultiplying Eq. (10.56c) by e^{At} and using Eq. (10.53b), we have

q(t) = e^{At} q(0) + ∫_0^t e^{A(t−τ)} Bx(τ) dτ   (10.57a)
This is the desired solution. The first term on the right-hand side represents q(t) when the input x(t) = 0. Hence it is the zero-input component. The second term, by a similar argument, is seen to be the zero-state component.
The results of Eq. (10.57a) can be expressed more conveniently in terms of the matrix convolution. We can define the convolution of two matrices in a manner similar to the multiplication of two matrices, except that the multiplication of two elements is replaced by their convolution.
For example, if

F(t) = [ f_1(t)  f_2(t) ]   and   G(t) = [ g_1(t)  g_2(t) ]
       [ f_3(t)  f_4(t) ]                [ g_3(t)  g_4(t) ]

then

F * G = [ f_1 * g_1 + f_2 * g_3     f_1 * g_2 + f_2 * g_4 ]
        [ f_3 * g_1 + f_4 * g_3     f_3 * g_2 + f_4 * g_4 ]

By using this definition of matrix convolution, we can express Eq. (10.57a) as

q(t) = e^{At} q(0) + e^{At} * Bx(t)   (10.57b)
Note that the limits of the convolution integral [Eq. (10.57a)] are from 0 to t. Hence, all the elements of e^{At} in the convolution term of Eq. (10.57b) are implicitly assumed to be multiplied by u(t).
The result of Eqs. (10.57) can be easily generalized for any initial value of t. It is left as an exercise for the reader to show that the solution of the state equation can be expressed as

q(t) = e^{A(t−t_0)} q(t_0) + ∫_{t_0}^{t} e^{A(t−τ)} Bx(τ) dτ   (10.58)
DETERMINING e^{At}

The exponential e^{At} required in Eqs. (10.57) can be computed from the definition in Eq. (10.48a). Unfortunately, this is an infinite series, and its computation can be quite laborious. Moreover, we may not be able to recognize the closed-form expression for the answer. There are several efficient methods of determining e^{At} in closed form. It was shown in Section B.6-5 that for an N × N matrix A,

e^{At} = β_0 I + β_1 A + β_2 A^2 + ⋯ + β_{N−1} A^{N−1}   (10.59a)

where the coefficients β_0, β_1, ..., β_{N−1} are found from

β_0 + β_1 λ_i + β_2 λ_i^2 + ⋯ + β_{N−1} λ_i^{N−1} = e^{λ_i t},   i = 1, 2, ..., N   (10.59b)

and λ_1, λ_2, ..., λ_N are the N characteristic values (eigenvalues) of A.
We can also determine e^{At} by comparing Eqs. (10.57a) and (10.35b). It is clear that

e^{At} = L^{−1}[Φ(s)] = L^{−1}[(sI − A)^{−1}]   (10.60)

Thus, e^{At} and Φ(s) are a Laplace transform pair. To be consistent with Laplace transform notation, e^{At} is often denoted by φ(t), the state transition matrix (STM):

φ(t) = e^{At} ⇔ Φ(s) = (sI − A)^{−1}
EXAMPLE 10.7
Use the time-domain method to find the solution to the problem in Example 10.5. For this case, the characteristic roots are given by

|sI − A| = | s+12    −2/3 |
           |  36     s+1  |  = s^2 + 13s + 36 = (s + 4)(s + 9) = 0

The roots are λ_1 = −4 and λ_2 = −9, so [from Eq. (10.59b)]

β_0 − 4β_1 = e^{−4t}
β_0 − 9β_1 = e^{−9t}

Solution of these two simultaneous equations yields

β_0 = (1/5)(9e^{−4t} − 4e^{−9t})
β_1 = (1/5)(e^{−4t} − e^{−9t})

and

e^{At} = β_0 I + β_1 A
       = (1/5) [ −3e^{−4t} + 8e^{−9t}          (2/3)(e^{−4t} − e^{−9t}) ] u(t)
               [ −36(e^{−4t} − e^{−9t})        8e^{−4t} − 3e^{−9t}      ]

The zero-input component is given by [see Eq. (10.57a)]

e^{At} q(0) = e^{At} [ 2 ] = (1/5) [ −(16/3)e^{−4t} + (46/3)e^{−9t} ] u(t)   (10.61a)
                     [ 1 ]         [ −64e^{−4t} + 69e^{−9t}         ]

Note the presence of u(t) in Eq. (10.61a), indicating that the response begins at t = 0. The zero-state component is e^{At} * Bx [see Eq. (10.57b)], where

B = [ 1/3 ]   and   x(t) = u(t)
    [  1  ]

so that

e^{At} * Bx(t) = (1/5) [ −(1/3)e^{−4t} + 2e^{−9t} ] u(t) * u(t)
                       [ −4e^{−4t} + 9e^{−9t}     ]

Note again the presence of the term u(t) in every element of e^{At}. This is the case because the limits of the convolution integral run from 0 to t [Eqs. (10.56)]. Thus

e^{At} * Bx(t) = (1/5) [ −(1/3)e^{−4t}u(t) * u(t) + 2e^{−9t}u(t) * u(t) ]
                       [ −4e^{−4t}u(t) * u(t) + 9e^{−9t}u(t) * u(t)    ]

Substitution for the preceding convolution integrals from the convolution table (Table 2.1) yields

e^{At} * Bx(t) = (1/5) [ −(1/12)(1 − e^{−4t}) + (2/9)(1 − e^{−9t}) ] u(t)   (10.61b)
                       [ −(1 − e^{−4t}) + (1 − e^{−9t})            ]

The sum of the two components [Eqs. (10.61a) and (10.61b)] now gives the desired solution for q(t):

q(t) = [ 1/36 − (21/20)e^{−4t} + (136/45)e^{−9t} ] u(t)
       [ −(63/5)e^{−4t} + (68/5)e^{−9t}          ]
This result confirms the solution obtained by using the frequency-domain method [see Eq. (10.36b)]. Once the state variables q_1 and q_2 have been found for t ≥ 0, all the remaining variables can be determined from the output equation.
THE OUTPUT
The output equation is given by

y(t) = Cq(t) + Dx(t)

The substitution of the solution for q [Eq. (10.57b)] in this equation yields

y(t) = C[e^{At} q(0) + e^{At} * Bx(t)] + Dx(t)
     = Ce^{At} q(0) + C[e^{At} * Bx(t)] + Dx(t)   (10.62a)

Since the elements of B are constants,

e^{At} * Bx(t) = e^{At}B * x(t)

With this result, Eq. (10.62a) becomes

y(t) = Ce^{At} q(0) + Ce^{At}B * x(t) + Dx(t)   (10.62b)

Now recall that the convolution of x(t) with the unit impulse δ(t) yields x(t). Let us define a j × j diagonal matrix δ(t) such that all its diagonal terms are unit impulse functions. It is then obvious that

x(t) = δ(t) * x(t)   (10.63a)

so that Dx(t) = Dδ(t) * x(t), and Eq. (10.62b) can be expressed as

y(t) = Ce^{At} q(0) + [Ce^{At}B + Dδ(t)] * x(t)   (10.63b)

With the notation φ(t) for e^{At}, Eq. (10.63b) may be expressed as

y(t) = Cφ(t) q(0) + [Cφ(t)B + Dδ(t)] * x(t)   (10.64a)

The zero-state response, that is, the response when q(0) = 0, is

y(t) = h(t) * x(t)   (10.64b)

where

h(t) = Cφ(t)B + Dδ(t)   (10.65)
The matrix h(t) is a k × j matrix known as the impulse response matrix. The reason for this designation is obvious. The ijth element of h(t) is h_ij(t), which represents the zero-state response y_i when the input x_j(t) = δ(t) and when all other inputs (and all the initial conditions) are zero. It can also be seen from Eqs. (10.39) and (10.64b) that

H(s) = L[h(t)]
EXAMPLE 10.8
For the system described by Eqs. (10.40a) and (10.40b), use Eq. (10.59b) to determine e^{At}. This problem was solved earlier with frequency-domain techniques. From Eq. (10.41), we have

φ(t) = e^{At} = L^{−1}[Φ(s)] = [ 2e^{−t} − e^{−2t}        e^{−t} − e^{−2t}   ] u(t)
                               [ −2(e^{−t} − e^{−2t})     −e^{−t} + 2e^{−2t} ]

The same result is obtained in Example B.13 (Section B.6-5) by using Eq. (10.59a) [see Eq. (B.84)]. Also, δ(t) is a diagonal j × j (here 2 × 2) matrix:

δ(t) = [ δ(t)    0   ]
       [  0     δ(t) ]

Substituting the matrices φ(t), δ(t), C, D, and B [Eq. (10.40c)] into Eq. (10.65), we have

h(t) = Cφ(t)B + Dδ(t)
     = [ (3e^{−t} − 2e^{−2t})u(t)        (e^{−t} − e^{−2t})u(t)             ]   (10.66)
       [ δ(t) + 2e^{−2t}u(t)             e^{−2t}u(t)                        ]
       [ (−6e^{−t} + 8e^{−2t})u(t)       δ(t) + (−2e^{−t} + 4e^{−2t})u(t)   ]
The reader can verify that the transfer function matrix H(s) in Eq. (10.42) is the Laplace transform of the unit impulse response matrix h(t) in Eq. (10.66) .
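This verification can also be done numerically: evaluating the Laplace integral of an entry of h(t) by quadrature should reproduce the corresponding entry of H(s). A sketch in Python with NumPy (an assumption; the book's computer examples use MATLAB), for the (1,1) entry, whose inverse transform works out to h_11(t) = 3e^{−t} − 2e^{−2t} for H_11(s) = (s + 4)/((s + 1)(s + 2)):

```python
import numpy as np

def laplace_numeric(f, t, s):
    """Crude numerical Laplace transform: trapezoidal rule on f(t) e^{-st}."""
    g = f * np.exp(-s * t)
    return np.sum((g[:-1] + g[1:]) * np.diff(t) / 2)

# h_11(t) = 3 e^{-t} - 2 e^{-2t}, the (1,1) entry of the impulse
# response matrix; its transform should be H_11(s) of Eq. (10.42).
t = np.linspace(0.0, 30.0, 300001)
h11 = 3 * np.exp(-t) - 2 * np.exp(-2 * t)

for s in (1.0, 2.0, 5.0):
    H11 = (s + 4) / ((s + 1) * (s + 2))
    assert abs(laplace_numeric(h11, t, s) - H11) < 1e-6
```

The entries containing δ(t) cannot be checked this way directly; their impulsive parts correspond to the constant (non-strictly-proper) portion of the transfer function.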