4.3 SOLUTION OF DIFFERENTIAL AND INTEGRO-DIFFERENTIAL EQUATIONS
The time-differentiation property of the Laplace transform has set the stage for solving linear differential (or integro-differential) equations with constant coefficients. Because d^k y/dt^k ⟺ s^k Y(s), the Laplace transform of a differential equation is an algebraic equation that can be readily solved for Y(s). Next we take the inverse Laplace transform of Y(s) to find the desired solution y(t). The following examples demonstrate the Laplace transform procedure for solving linear differential equations with constant coefficients.
EXAMPLE 4.10
Solve the second-order linear differential equation
for the initial conditions y(0⁻) = 2 and ẏ(0⁻) = 1
and the input x(t) = e −4t u(t).
The equation is
Let y(t) ⟺ Y(s). Then, from Eqs. (4.24),
and
Moreover, for x(t) = e −4t u(t),
Taking the Laplace transform of Eq. (4.35b) , we obtain
Collecting all the terms of Y(s) and the remaining terms separately on the left-hand side, we obtain
Therefore
and
Expanding the right-hand side into partial fractions yields
The inverse Laplace transform of this equation yields
Example 4.10 demonstrates the ease with which the Laplace transform can solve linear differential equations with constant coefficients. The method is general and can solve a linear differential equation with constant coefficients of any order.
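The algebraic procedure just described can be reproduced symbolically. A sketch in SymPy follows; the second-order system below is an assumption for illustration, chosen so that its initial-condition terms match the −(2s + 11) quoted later in this section, and is not necessarily the book's Eq. (4.35).

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Hypothetical system for illustration (an assumption, chosen so the
# initial-condition terms are -(2s + 11)):
#   (D^2 + 5D + 6) y(t) = (D + 1) x(t),  y(0-) = 2,  y'(0-) = 1,
# with input x(t) = e^{-4t} u(t), so X(s) = 1/(s + 4).
X = 1/(s + 4)

# The time-differentiation property turns the ODE into algebra:
#   (s^2 + 5s + 6) Y(s) - (2s + 11) = (s + 1) X(s)
Y = sp.cancel(((2*s + 11) + (s + 1)*X)/(s**2 + 5*s + 6))

# Partial fractions (three simple poles at s = -2, -3, -4), then invert
print(sp.apart(Y, s))
print(sp.expand(sp.inverse_laplace_transform(Y, s, t)))
```

The partial-fraction step and the inverse transform are exactly the two mechanical steps the example walks through by hand.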
Zero-Input and Zero-State Components of Response

The Laplace transform method gives the total response, which includes zero-input and zero-state components. It is possible to separate the two components if we so desire. The initial condition terms in the response give rise to the zero-input response. For instance, in Example 4.10, the terms attributable to the initial conditions y(0⁻) = 2 and ẏ(0⁻) = 1 in Eq. (4.36a) generate the zero-input response. These initial condition terms are −(2s + 11), as seen in Eq. (4.36b). The terms on the right-hand side are exclusively due to the input. Equation (4.36b) is reproduced below with the proper labeling of the terms.
so that
Therefore
Taking the inverse transform of this equation yields
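This separation can also be carried out symbolically. The system below is the same assumed second-order equation used earlier (hypothetical, chosen so that its initial-condition terms are −(2s + 11)):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Assumed system (hypothetical): (s^2 + 5s + 6) Y(s) - (2s + 11) = (s + 1) X(s)
X = 1/(s + 4)
Q = s**2 + 5*s + 6

Y_zi = (2*s + 11)/Q        # zero-input component (initial-condition terms)
Y_zs = (s + 1)*X/Q         # zero-state component (input terms)

# For t >= 0 the components are 7e^{-2t} - 5e^{-3t} (zero-input) and
# -e^{-2t}/2 + 2e^{-3t} - 3e^{-4t}/2 (zero-state); their sum is the total
# response found before.
print(sp.expand(sp.inverse_laplace_transform(Y_zi, s, t)))
print(sp.expand(sp.inverse_laplace_transform(Y_zs, s, t)))
```

Note that the zero-input component alone satisfies y(0⁻) = 2 and ẏ(0⁻) = 1, which previews the 0⁻ versus 0⁺ discussion that follows.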
COMMENTS ON INITIAL CONDITIONS AT 0 − AND AT 0 +
The initial conditions in Example 4.10 are y(0⁻) = 2 and ẏ(0⁻) = 1. If we let t = 0 in the total response in Eq. (4.37), we find y(0) = 2 and ẏ(0) = 2, which is at odds with the given initial conditions. Why? Because the initial conditions are given at t = 0⁻ (just before the input is applied), when only the zero-input response is present. The zero-state response is the result of the input x(t) applied at t = 0. Hence, this component does not exist at t = 0⁻. Consequently, the initial conditions at t = 0⁻ are satisfied by the zero-input response, not by the total response. We can readily verify in this example that the zero-input response does indeed satisfy the given initial conditions at t = 0⁻. It is the total response that satisfies the initial conditions at t = 0⁺, which are generally different from the initial conditions at 0⁻.
There also exists a version of the Laplace transform that uses the initial conditions at t = 0⁺ rather than at 0⁻ (as in our present version). The 0⁺ version, which was in vogue till the early 1960s, is identical to the 0⁻ version except that the limits of the Laplace integral [Eq. (4.8)] run from 0⁺ to ∞. Hence, by definition, the origin t = 0 is excluded from the domain. This version, still used in some math books, has some serious difficulties. For instance, the Laplace transform of δ(t) is zero because δ(t) = 0 for t ≥ 0⁺. Moreover, this approach is rather clumsy in the theoretical study of linear systems because the response obtained cannot be separated into zero-input and zero-state components. As we know, the zero-state component represents the system response as an explicit function of the input, and without knowing this component, it is not possible to assess the effect of the input on the system response in a general way.
The 0⁺ version can separate the response into the natural and the forced components, which are not as interesting as the zero-input and the zero-state components. Note that we can always determine the natural and the forced components from the zero-input and the zero-state components [see Eqs. (2.52)], but the converse is not true. Because of these and some other problems, electrical engineers (wisely) started discarding the 0⁺ version in the early 1960s.
It is interesting to note the time-domain duals of these two Laplace versions. The classical method is the dual of the 0⁺ version, and the convolution (zero-input/zero-state) method is the dual of the 0⁻ version. The first pair uses the initial conditions at 0⁺, and the second pair uses those at t = 0⁻. The first pair (the classical method and the 0⁺ version) is awkward in the theoretical study of linear system analysis. It was no coincidence that the 0⁻ version was adopted immediately after the introduction to the electrical engineering community of state-space analysis (which uses zero-input/zero-state separation of the output).
EXERCISE E4.6
Solve
for the input x(t) = u(t). The initial conditions are y(0 − ) = 1 and
Answers
y(t) = (1 + 9e −t − 7e −3t )u(t)
EXAMPLE 4.11
In the circuit of Fig. 4.7a , the switch is in the closed position for a long time before t = 0, when it is opened instantaneously. Find the inductor current y(t) for t ≥ 0.
Figure 4.7: Analysis of a network with a switching action.

When the switch is in the closed position (for a long time), the inductor current is 2 amperes and the capacitor voltage is 10 volts. When the switch is opened, the circuit is equivalent to that depicted in Fig. 4.7b, with the initial inductor current y(0⁻) = 2 and the initial capacitor voltage v_C(0⁻) = 10. The input voltage is 10 volts, starting at t = 0, and, therefore, can be represented by 10u(t).
The loop equation of the circuit in Fig. 4.7b is
If
then
and [see Eq. (4.26) ]
Because y(t) is the capacitor current, the integral of y(τ) dτ from −∞ to 0⁻ is q_C(0⁻), the capacitor charge at t = 0⁻, which is given by C times the capacitor voltage at t = 0⁻. Therefore
From Eq. (4.39c) it follows that
Taking the Laplace transform of Eq. (4.38) and using Eqs. (4.39a) , (4.39b) , and (4.40) , we obtain
or
and
To find the inverse Laplace transform of Y(s), we use pair 10c ( Table 4.1) with values A = 2, B = 0, a = 1, and c = 5. This yields
Therefore y(t) = √5 e^(−t) cos(2t + 26.57°)u(t). This response is shown in Fig. 4.7c.
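The final inversion step can be sketched in SymPy. The transformed loop current below, Y(s) = 2s/(s^2 + 2s + 5), is implied by the pair-10c parameters A = 2, B = 0, a = 1, c = 5 quoted in the example:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Pair-10c form (A s + B)/(s^2 + 2 a s + c) with A = 2, B = 0, a = 1, c = 5:
Y = 2*s/(s**2 + 2*s + 5)

y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))
# Mathematically, y(t) = e^{-t} (2 cos 2t - sin 2t)
#                      = sqrt(5) e^{-t} cos(2t + arctan(1/2)),  t >= 0,
# and arctan(1/2) is about 26.57 degrees.
```

The amplitude-phase form is obtained by combining the cosine and sine terms, which is exactly what pair 10c packages into r and θ.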
Comment. In our discussion so far, we have multiplied input signals by u(t), implying that the signals are zero prior to t = 0. This is needlessly restrictive: these signals can have arbitrary values prior to t = 0. As long as the initial conditions at t = 0 are specified, we need only the knowledge of the input for t ≥ 0 to compute the response for t ≥ 0. Some authors use the notation 1(t) to denote a function that is equal to u(t) for t ≥ 0 and has arbitrary value for negative t. We have abstained from this usage to avoid the needless confusion caused by introducing a new function that is very similar to u(t).
4.3-1 Zero-State Response
Consider an Nth-order LTIC system specified by the equation or
We shall now find the general expression for the zero-state response of an LTIC system. Zero-state response y(t), by definition, is the system response to an input when the system is initially relaxed (in zero state). Therefore, y(t) satisfies the system equation (4.41) with zero initial conditions
Moreover, the input x(t) is causal, so that x(0⁻) = ẋ(0⁻) = ⋯ = 0. Let y(t) ⟺ Y(s) and x(t) ⟺ X(s). Because of zero initial conditions,
Therefore, the Laplace transform of Eq. (4.41) yields or
But we have shown in Eq. (4.31) that Y(s) = H(s)X(s). Consequently,
This is the transfer function of a linear differential system specified in Eq. (4.41). The same result was derived earlier in Eq. (2.50) using an alternate approach. Here X(s) is the Laplace transform of the input x(t), and H(s) is the system transfer function [relating the particular output y(t) to the input x(t)].
INTUITIVE INTERPRETATION OF THE LAPLACE TRANSFORM
So far we have treated the Laplace transform as a machine that converts linear integro-differential equations into algebraic equations, with no physical understanding of how this is accomplished or what it means. We now discuss a more intuitive interpretation and meaning of the Laplace transform.
In Chapter 2, Eq. (2.47), we showed that the LTI system response to an everlasting exponential e^st is H(s)e^st. If we could express every signal as a linear combination of everlasting exponentials of the form e^st, we could readily obtain the system response to any input. For example, if
The response of an LTIC system to such input x(t) is given by
Unfortunately, only a very small class of signals can be expressed in this form. However, we can express almost all signals of practical utility as a sum of everlasting exponentials over a continuum of frequencies. This is precisely what the Laplace transform in Eq. (4.2) does.
Invoking the linearity property of the Laplace transform, we can find the system response y(t) to input x(t) in Eq. (4.44) as [ † ]
Clearly, Y(s) = X(s)H(s). We can now represent the transformed version of the system, as depicted in Fig. 4.8a. The input X(s) is the Laplace transform of x(t), and the output Y(s) is the Laplace transform of the (zero-state) output y(t). The system is described by the transfer function H(s). The output Y(s) is the product X(s)H(s).
Figure 4.8: Alternate interpretation of the Laplace transform.

Recall that s is the complex frequency of e^st. This explains why the Laplace transform method is also called the frequency-domain method. Note that X(s), Y(s), and H(s) are the frequency-domain representations of x(t), y(t), and h(t), respectively. We may view the boxes marked ℒ and ℒ⁻¹ in Fig. 4.8a as the interfaces that convert time-domain entities into the corresponding frequency-domain entities, and vice versa. All real-life signals begin in the time domain, and the final answers must also be in the time domain. First, we convert the time-domain input(s) into the frequency-domain counterparts. The problem itself is solved in the frequency domain, resulting in the answer Y(s), also in the frequency domain. Finally, we convert Y(s) to y(t). Solving the problem is relatively simpler in the frequency domain than in the time domain. Henceforth, we shall omit the explicit representation of the interface boxes ℒ and ℒ⁻¹, representing signals and systems in the frequency domain, as shown in Fig. 4.8b.
THE DOMINANCE CONDITION
In this intuitive interpretation of the Laplace transform, one problem may have puzzled the reader. In Section 2.5 (classical solution of differential equations), we showed in Eq. (2.57) that an LTI system response to input e^st is H(s)e^st plus characteristic mode terms. In the intuitive interpretation, an LTI system response is found by adding the system responses to all the infinite exponential components of the input. These components are exponentials of the form e^st starting at t = −∞. We showed in Eq. (2.47) that the response to the everlasting input e^st is also an everlasting exponential H(s)e^st. Does this result not conflict with the result in Eq. (2.57)? Why are there no characteristic mode terms in Eq. (2.47), as predicted by Eq. (2.57)? The answer is that the mode terms are also present. The system response to an everlasting input e^st is indeed an everlasting exponential H(s)e^st plus mode terms. All these signals start at t = −∞. Now, if a mode e^(λ_i t) is such that it decays faster than (or grows more slowly than) e^st, that is, if Re λ_i < Re s, then after some time interval e^st will be overwhelmingly stronger than e^(λ_i t), and hence will completely dominate such a mode term. In such a case, at any finite time (which is a long time after the start at t = −∞), we can ignore the mode terms and say that the complete response is H(s)e^st. Hence, we can reconcile Eq. (2.47) with Eq. (2.57) only if the dominance condition is satisfied, that is, if Re λ_i < Re s for all i. If the dominance condition is not satisfied, the mode terms dominate e^st and Eq. (2.47) does not hold. [10]
Careful examination shows that the dominance condition is implied in Eq. (2.47). This is because of the caveat in Eq. (2.47) that the response of an LTIC system to everlasting e^st is H(s)e^st, provided H(s) exists (or converges). We can show that this condition amounts to the dominance condition. If a system has characteristic roots λ_1, λ_2, ..., λ_N, then h(t) consists of exponentials of the form e^(λ_i t) (i = 1, 2, ..., N), and the convergence of H(s) requires that Re s > Re λ_i for i = 1, 2, ..., N, which is precisely the dominance condition. Clearly, the dominance condition is implied in Eq. (2.47), and also in the entire fabric of the Laplace transform. It is interesting to note that the elegant structure of convergence in the Laplace transform is rooted in such a lowly, mundane origin as Eq. (2.57).
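The dominance condition can be illustrated with a small symbolic experiment. The first-order system below is hypothetical, chosen only so the mode and the exponential input are easy to see:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Hypothetical first-order system: y' + 2y = e^{st} with s = 1, so the
# characteristic root is lambda = -2 and H(s) = 1/(s + 2), giving H(1) = 1/3.
sol = sp.dsolve(sp.Eq(y(t).diff(t) + 2*y(t), sp.exp(t)), y(t))
print(sol)

# The general solution is H(1) e^t plus the mode C1 e^{-2t}. Since
# Re(lambda) = -2 < Re(s) = 1, the mode is eventually dominated by H(1) e^t,
# which is why Eq. (2.47) can drop it: the dominance condition holds here.
```

Reversing the inequality (say, input e^{-3t} for the same system) would make the mode e^{-2t} the dominant term instead, and the H(s)e^st description would fail.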
EXAMPLE 4.12
Find the response y(t) of an LTIC system described by the equation
if the input x(t) = 3e −5t u(t) and all the initial conditions are zero; that is, the system is in the zero state. The system equation is
The inverse Laplace transform of this equation is
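A sketch of the zero-state computation in SymPy follows. The input is the one stated in the example; the transfer function below is an assumption for illustration (the example's own equation is not reproduced in this excerpt):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Hypothetical transfer function (an assumption for illustration):
H = (s + 1)/(s**2 + 5*s + 6)
X = 3/(s + 5)                 # Laplace transform of x(t) = 3 e^{-5t} u(t)

# Zero-state response: Y(s) = H(s) X(s), then partial fractions and inversion
Y = sp.cancel(H*X)
print(sp.apart(Y, s))         # simple poles at s = -2, -3, -5
print(sp.expand(sp.inverse_laplace_transform(Y, s, t)))
```

With zero initial conditions there are no initial-condition terms to carry along, so the whole computation reduces to one multiplication and one inverse transform.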
EXAMPLE 4.13
Show that the transfer function of
a. an ideal delay of T seconds is e^(−sT)
b. an ideal differentiator is s
c. an ideal integrator is 1/s

a. Ideal Delay. For an ideal delay of T seconds, the input x(t) and output y(t) are related by
or Therefore
b. Ideal Differentiator. For an ideal differentiator, the input x(t) and the output y(t) are related by
The Laplace transform of this equation yields
and
Therefore
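These three transfer functions can be spot-checked symbolically. The sample input x(t) = t u(t) below is an arbitrary choice with x(0⁻) = 0, made only so each transform is easy to compute:

```python
import sympy as sp

s, t, T = sp.symbols('s t T', positive=True)
tau = sp.symbols('tau', positive=True)

# Sample causal input (an assumption for illustration): x(t) = t u(t)
x = t
X = sp.laplace_transform(x, t, s, noconds=True)        # X(s) = 1/s^2

# (a) Ideal delay: y(t) = x(t - T) u(t - T)  =>  Y(s) = e^{-sT} X(s)
Y_delay = sp.integrate(x.subs(t, t - T)*sp.exp(-s*t), (t, T, sp.oo))
assert sp.simplify(Y_delay - sp.exp(-s*T)*X) == 0

# (b) Ideal differentiator (x(0-) = 0): Y(s) = s X(s)
Y_diff = sp.laplace_transform(sp.diff(x, t), t, s, noconds=True)
assert sp.simplify(Y_diff - s*X) == 0

# (c) Ideal integrator: Y(s) = X(s)/s
Y_int = sp.laplace_transform(sp.integrate(x.subs(t, tau), (tau, 0, t)),
                             t, s, noconds=True)
assert sp.simplify(Y_int - X/s) == 0
```

The differentiator check relies on x(0⁻) = 0; for an input with a jump at the origin, sX(s) would have to be corrected by the initial-value term, as in Eq. (4.26).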
EXERCISE E4.7
For an LTIC system with transfer function
a. Describe the differential equation relating the input x(t) and output y(t). b. Find the system response y(t) to the input x(t) = e −2t u(t) if the system is initially in zero state.
b. y(t) = (2e −t − 3e −2t + e −3t )u(t)
4.3-2 Stability
Equation (4.43) shows that the denominator of H(s) is Q(s), which is apparently identical to the characteristic polynomial Q(λ) defined in Chapter 2. Does this mean that the denominator of H(s) is the characteristic polynomial of the system? This may or may not be the case, since if P(s) and Q(s) in Eq. (4.43) have any common factors, they cancel out, and the effective denominator of H(s) is not necessarily equal to Q(s). Recall also that the system transfer function H(s), like h(t), is defined in terms of measurements at the external terminals. Consequently, H(s) and h(t) are both external descriptions of the system. In contrast, the characteristic polynomial Q(s) is an internal description. Clearly, we can determine only external stability, that is, BIBO stability, from H(s). If all the poles of H(s) are in the LHP, all the terms in h(t) are decaying exponentials, and h(t) is absolutely integrable [see Eq. (2.64)]. [ † ] Consequently, the system is BIBO stable. Otherwise the system is BIBO unstable.
Beware of the RHP poles!
So far, we have assumed that H(s) is a proper function, that is, M ≤ N. We now show that if H(s) is improper, that is, if M > N, the system is BIBO unstable. In such a case, using long division, we obtain H(s) = R(s) + H′(s), where R(s) is an (M − N)th-order polynomial and H′(s) is a proper transfer function. For example,
As shown in Eq. (4.47), the term s is the transfer function of an ideal differentiator. If we apply a step function (bounded input) to this system, the output will contain an impulse (unbounded output). Clearly, the system is BIBO unstable. Moreover, such a system greatly amplifies noise because differentiation enhances higher frequencies, which generally predominate in a noise signal. These are two good reasons to avoid improper systems (M > N). In our future discussion, we shall implicitly assume that the systems are proper, unless stated otherwise.
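The long-division step can be sketched in SymPy. The improper transfer function below is hypothetical, chosen only to show the split H(s) = R(s) + H′(s):

```python
import sympy as sp

s = sp.symbols('s')

# Hypothetical improper transfer function (M = 2 > N = 1):
num = sp.Poly(s**2 + 3*s + 1, s)
den = sp.Poly(s + 2, s)

# Long division H(s) = R(s) + H'(s): polynomial quotient plus proper remainder
R, rem = sp.div(num, den)
Hp = rem.as_expr()/den.as_expr()
print(R.as_expr())   # s + 1: contains the differentiator term s
print(Hp)            # -1/(s + 2): the proper part H'(s)
```

The polynomial part R(s) is what makes the system BIBO unstable: its s term differentiates the input, so a step input produces an impulse at the output.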
If P(s) and Q(s) do not have common factors, then the denominator of H(s) is identical to Q(s), the characteristic polynomial of the system. In this case, we can determine internal stability by using the criterion described in Section 2.6 . Thus, if P(s) and Q(s) have no common factors, the asymptotic stability criterion in Section 2.6 can be restated in terms of the poles of the transfer function of a system, as follows:
1. An LTIC system is asymptotically stable if and only if all the poles of its transfer function H(s) are in the LHP. The poles may be simple or repeated.
2. An LTIC system is unstable if and only if either one or both of the following conditions exist: (i) at least one pole of H(s) is in the RHP; (ii) there are repeated poles of H(s) on the imaginary axis.
3. An LTIC system is marginally stable if and only if there are no poles of H(s) in the RHP and some unrepeated poles on the imaginary axis.
The locations of the zeros of H(s) play no role in determining the system stability.
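The three rules above can be mechanized directly from the poles of H(s). The helper below is a sketch, not a library routine, and assumes SymPy can find the roots of the denominator:

```python
import sympy as sp

s = sp.symbols('s')

def classify(H):
    """Classify an LTIC system from the poles of H(s), assuming P(s) and Q(s)
    share no common factors after cancellation (a sketch for illustration)."""
    H = sp.cancel(H)
    poles = sp.roots(sp.denom(H), s)              # {pole: multiplicity}
    if any(sp.re(p) > 0 for p in poles):
        return 'unstable'                         # rule 2(i): RHP pole
    if any(sp.re(p) == 0 and m > 1 for p, m in poles.items()):
        return 'unstable'                         # rule 2(ii): repeated imaginary-axis pole
    if any(sp.re(p) == 0 for p in poles):
        return 'marginally stable'                # rule 3: simple imaginary-axis pole
    return 'asymptotically stable'                # rule 1: all poles in the LHP

print(classify(1/(s**2 + 2*s + 5)))  # asymptotically stable
print(classify(1/s))                 # marginally stable (ideal integrator)
print(classify(1/(s - 1)))           # unstable
```

Note that `sp.cancel` removes any common factors first, so this classifies only what is visible externally; internal (asymptotic) stability after a pole-zero cancellation is a separate question, taken up in Example 4.14.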
EXAMPLE 4.14
Figure 4.9a shows a cascade connection of two LTIC systems: S₁ followed by S₂. The transfer functions of these systems are H 1 (s) = 1/(s − 1) and H 2 (s) = (s − 1)/(s + 1), respectively. We shall find the BIBO and asymptotic stability of the composite (cascade) system.
Figure 4.9: Distinction between BIBO and asymptotic stability.

If the impulse responses of S₁ and S₂ are h 1 (t) and h 2 (t), respectively, then the impulse response of the cascade system is h(t) = h 1 (t) * h 2 (t).
Hence, H(s) = H 1 (s)H 2 (s). In the present case,
The pole of S₁ at s = 1 cancels with the zero at s = 1 of S₂. This results in a composite system having a single pole at s = −1. If the composite cascade system were to be enclosed inside a black box with only the input and the output terminals accessible, any measurement from these external terminals would show that the transfer function of the system is 1/(s + 1), without any hint of the fact that the box is housing an unstable system (Fig. 4.9b).
The impulse response of the cascade system is h(t) = e −t u(t), which is absolutely integrable. Consequently, the system is BIBO stable. To determine the asymptotic stability, we note that S₁ has one characteristic root at 1, and S₂ also has one root at −1. Recall that the two systems are independent (one does not load the other), and the characteristic modes generated in each subsystem are independent of the other. Clearly, the mode e^t will not be eliminated by the presence of S₂. Hence, the composite system has two characteristic roots, located at ±1, and the system is asymptotically unstable, though BIBO stable.
Interchanging the positions of S₁ and S₂ makes no difference in this conclusion. This example shows that BIBO stability can be misleading. If a system is asymptotically unstable, it will destroy itself (or, more likely, lead to a saturation condition) because of unchecked growth of the response due to intended or unintended stray initial conditions. BIBO stability is not going to save the system. Control systems are often compensated to realize certain desirable characteristics. One should never try to stabilize an unstable system by canceling its RHP pole(s) with RHP zero(s). Such a misguided attempt will fail, not because of the practical impossibility of exact cancellation but for the more fundamental reason, as just explained.
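The external/internal distinction in this example can be made concrete symbolically:

```python
import sympy as sp

s = sp.symbols('s')

H1 = 1/(s - 1)           # S1: characteristic root (pole) at s = 1
H2 = (s - 1)/(s + 1)     # S2: zero at s = 1, pole at s = -1

# External (black-box) description: after the pole-zero cancellation, the
# cascade transfer function shows only the pole at s = -1.
H = sp.cancel(H1*H2)
print(H)                 # 1/(s + 1)

# Internal description: the characteristic roots of the cascade are the poles
# of BOTH subsystems before cancellation, so the mode e^t survives internally.
roots = set(sp.roots(sp.denom(H1), s)) | set(sp.roots(sp.denom(H2), s))
print(sorted(roots))     # [-1, 1]
```

The cancellation performed by `sp.cancel` is exactly the black-box measurement: it hides the root at s = 1 that still governs the internal behavior.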
EXERCISE E4.8
Show that an ideal integrator is marginally stable but BIBO unstable.
4.3-3 Inverse Systems
If H(s) is the transfer function of a system S, then its inverse system S_i has a transfer function H i (s) given by
This follows from the fact that the cascade of S with its inverse system S_i is an identity system, with impulse response δ(t), implying H(s)H i (s) = 1. For example, an ideal integrator and its inverse, an ideal differentiator, have transfer functions 1/s and s, respectively, leading to H(s)H i (s) = 1.
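A minimal symbolic check, with a hypothetical H(s) chosen for illustration:

```python
import sympy as sp

s = sp.symbols('s')

# Hypothetical transfer function for illustration:
H = (s + 5)/(s**2 + 4*s + 3)
Hi = sp.cancel(1/H)        # inverse system: (s^2 + 4s + 3)/(s + 5)
print(sp.cancel(H*Hi))     # 1: the cascade is an identity system
```

Note that this Hi is improper (M > N); as Section 4.3-2 warns, such an inverse is BIBO unstable, just as the ideal differentiator is, so realizing an inverse system in practice requires care.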
[ † ] Recall that H(s) has its own region of validity. Hence, the limits of integration for the integral in Eq. (4.44) are modified in Eq. (4.45) to accommodate the region of existence (validity) of X(s) as well as H(s).
[10] Lathi, B. P., Signals, Systems, and Communication. Wiley, New York, 1965.
[†] Values of s for which H(s) is ∞ are the poles of H(s). Thus, poles of H(s) are the values of s for which the denominator of H(s) is zero.