FIGURE 22.1
Euler’s method. [Figure: the slope at (t_i, y_i) is projected over the step h from t_i to t_{i+1}; the predicted value differs from the true value by the step’s error.]
EXAMPLE 22.1 Euler’s Method

Problem Statement. Use Euler’s method to integrate y' = 4e^{0.8t} - 0.5y from t = 0 to 4 with a step size of 1. The initial condition at t = 0 is y = 2. Note that the exact solution can be determined analytically as

    y = \frac{4}{1.3}\left(e^{0.8t} - e^{-0.5t}\right) + 2e^{-0.5t}
Solution. Equation (22.5) can be used to implement Euler’s method:

    y(1) = y(0) + f(0, 2)(1)

where y(0) = 2 and the slope estimate at t = 0 is

    f(0, 2) = 4e^{0} - 0.5(2) = 3

Therefore,

    y(1) = 2 + 3(1) = 5

The true solution at t = 1 is

    y = \frac{4}{1.3}\left(e^{0.8(1)} - e^{-0.5(1)}\right) + 2e^{-0.5(1)} = 6.19463

Thus, the percent relative error is

    \varepsilon_t = \left|\frac{6.19463 - 5}{6.19463}\right| \times 100\% = 19.28\%

For the second step:

    y(2) = y(1) + f(1, 5)(1) = 5 + \left[4e^{0.8(1)} - 0.5(5)\right](1) = 11.40216

The true solution at t = 2.0 is 14.84392 and, therefore, the true percent relative error is 23.19%. The computation is repeated, and the results are compiled in Table 22.1 and Fig. 22.2. Note that although the computation captures the general trend of the true solution, the error is considerable. As discussed in the next section, this error can be reduced by using a smaller step size.
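The hand computation above is easy to reproduce programmatically. The following Python sketch (our own illustration; the function names are not from the text) steps Euler’s method from t = 0 to 4 with h = 1 and prints each estimate next to the analytical solution, matching Table 22.1:

```python
import math

def f(t, y):
    """Right-hand side of the ODE y' = 4*exp(0.8*t) - 0.5*y."""
    return 4 * math.exp(0.8 * t) - 0.5 * y

def exact(t):
    """Analytical solution from Example 22.1."""
    return 4 / 1.3 * (math.exp(0.8 * t) - math.exp(-0.5 * t)) + 2 * math.exp(-0.5 * t)

def euler(f, t0, tf, y0, h):
    """Integrate y' = f(t, y) with Euler's method; return lists of t and y."""
    t, y = [t0], [y0]
    while t[-1] < tf - 1e-12:
        y.append(y[-1] + f(t[-1], y[-1]) * h)   # y_{i+1} = y_i + f(t_i, y_i) h
        t.append(t[-1] + h)
    return t, y

t, y = euler(f, 0.0, 4.0, 2.0, 1.0)
for ti, yi in zip(t, y):
    err = abs((exact(ti) - yi) / exact(ti)) * 100 if ti > 0 else 0.0
    print(f"{ti:.0f}  {exact(ti):9.5f}  {yi:9.5f}  {err:5.2f}")
```

Each printed row should match the corresponding row of Table 22.1 to the digits shown.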
22.2.1 Error Analysis for Euler’s Method
The numerical solution of ODEs involves two types of error (recall Chap. 4):

1. Truncation, or discretization, errors caused by the nature of the techniques employed to approximate values of y.
2. Roundoff errors caused by the limited numbers of significant digits that can be retained by a computer.
The truncation errors are composed of two parts. The first is a local truncation error that results from an application of the method in question over a single step. The second is
a propagated truncation error that results from the approximations produced during the
22.2 EULER’S METHOD 557
TABLE 22.1 Comparison of true and numerical values of the integral of y' = 4e^{0.8t} - 0.5y, with the initial condition that y = 2 at t = 0. The numerical values were computed using Euler’s method with a step size of 1.

    t     y_true       y_Euler      |ε_t| (%)
    0      2.00000      2.00000
    1      6.19463      5.00000      19.28
    2     14.84392     11.40216      23.19
    3     33.67717     25.51321      24.24
    4     75.33896     56.84931      24.54
FIGURE 22.2
Comparison of the true solution with a numerical solution using Euler’s method for the integral of y' = 4e^{0.8t} - 0.5y from t = 0 to 4 with a step size of 1. The initial condition at t = 0 is y = 2.
previous steps. The sum of the two is the total error. It is referred to as the global truncation error.

Insight into the magnitude and properties of the truncation error can be gained by deriving Euler’s method directly from the Taylor series expansion. To do this, realize that the differential equation being integrated will be of the general form of Eq. (22.3), where dy/dt = y', and t and y are the independent and the dependent variables, respectively. If the solution (that is, the function describing the behavior of y) has continuous derivatives, it can be represented by a Taylor series expansion about a starting value (t_i, y_i), as in [recall Eq. (4.13)]:

    y_{i+1} = y_i + y'_i h + \frac{y''_i}{2!} h^2 + \cdots + \frac{y_i^{(n)}}{n!} h^n + R_n        (22.6)
where h = t_{i+1} - t_i and R_n = the remainder term, defined as

    R_n = \frac{y^{(n+1)}(\xi)}{(n+1)!} h^{n+1}        (22.7)

where ξ lies somewhere in the interval from t_i to t_{i+1}. An alternative form can be developed by substituting Eq. (22.3) into Eqs. (22.6) and (22.7) to yield

    y_{i+1} = y_i + f(t_i, y_i) h + \frac{f'(t_i, y_i)}{2!} h^2 + \cdots + \frac{f^{(n-1)}(t_i, y_i)}{n!} h^n + O(h^{n+1})        (22.8)

where O(h^{n+1}) specifies that the local truncation error is proportional to the step size raised to the (n + 1)th power.
By comparing Eqs. (22.5) and (22.8), it can be seen that Euler’s method corresponds to the Taylor series up to and including the term f(t_i, y_i)h. Additionally, the comparison indicates that a truncation error occurs because we approximate the true solution using a finite number of terms from the Taylor series. We thus truncate, or leave out, a part of the true solution. For example, the truncation error in Euler’s method is attributable to the remaining terms in the Taylor series expansion that were not included in Eq. (22.5). Subtracting Eq. (22.5) from Eq. (22.8) yields

    E_t = \frac{f'(t_i, y_i)}{2!} h^2 + \cdots + O(h^{n+1})        (22.9)
where E_t = the true local truncation error. For sufficiently small h, the higher-order terms in Eq. (22.9) are usually negligible, and the result is often represented as

    E_a = \frac{f'(t_i, y_i)}{2!} h^2        (22.10)

or

    E_a = O(h^2)        (22.11)

where E_a = the approximate local truncation error.
According to Eq. (22.11), we see that the local error is proportional to the square of the step size and the first derivative of the differential equation. It can also be demonstrated that the global truncation error is O(h), that is, it is proportional to the step size (Carnahan et al., 1969). These observations lead to some useful conclusions:

1. The global error can be reduced by decreasing the step size.
2. The method will provide error-free predictions if the underlying function (i.e., the solution of the differential equation) is linear, because for a straight line the second derivative would be zero.

This latter conclusion makes intuitive sense because Euler’s method uses straight-line segments to approximate the solution. Hence, Euler’s method is referred to as a first-order method. It should also be noted that this general pattern holds for the higher-order one-step methods described in the following pages. That is, an nth-order method will yield perfect results if the underlying solution is an nth-order polynomial. Further, the local truncation error will be O(h^{n+1}) and the global error O(h^n).
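The O(h) global-error behavior can be checked numerically: halving the step size should roughly halve the error at the end of the interval. A small Python experiment (our own sketch, reusing the ODE of Example 22.1) illustrates this:

```python
import math

def f(t, y):
    """Right-hand side of y' = 4*exp(0.8*t) - 0.5*y."""
    return 4 * math.exp(0.8 * t) - 0.5 * y

def exact(t):
    """Analytical solution of the Example 22.1 ODE."""
    return 4 / 1.3 * (math.exp(0.8 * t) - math.exp(-0.5 * t)) + 2 * math.exp(-0.5 * t)

def euler_final(h, t0=0.0, tf=4.0, y0=2.0):
    """Euler's method with step size h; return the estimate at tf."""
    n = round((tf - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y += f(t, y) * h
        t += h
    return y

true_val = exact(4.0)
for h in (1.0, 0.5, 0.25, 0.125):
    err = abs(true_val - euler_final(h))
    print(f"h = {h:5.3f}   |global error at t = 4| = {err:8.4f}")
```

Each successive error should be close to half the previous one, consistent with first-order global accuracy.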
22.2.2 Stability of Euler’s Method
In the preceding section, we learned that the truncation error of Euler’s method depends on the step size in a predictable way based on the Taylor series. This is an accuracy issue. The stability of a solution method is another important consideration when solving ODEs. A numerical solution is said to be unstable if errors grow exponentially for a problem for which there is a bounded solution. The stability of a particular application can depend on three factors: the differential equation, the numerical method, and the step size.
Insight into the step size required for stability can be gained by studying a very simple ODE:

    \frac{dy}{dt} = -ay        (22.12)

If y(0) = y_0, calculus can be used to determine the solution as

    y = y_0 e^{-at}

Thus, the solution starts at y_0 and asymptotically approaches zero.
Now suppose that we use Euler’s method to solve the same problem numerically:

    y_{i+1} = y_i + \frac{dy_i}{dt} h

Substituting Eq. (22.12) gives

    y_{i+1} = y_i - a y_i h

or

    y_{i+1} = y_i (1 - ah)        (22.13)
The parenthetical quantity 1 - ah is called an amplification factor. If its absolute value is greater than unity, the solution will grow in an unbounded fashion. So clearly, the stability depends on the step size h. That is, if h > 2/a, |y_i| → ∞ as i → ∞. Based on this analysis, Euler’s method is said to be conditionally stable. Note that there are certain ODEs where errors always grow regardless of the method. Such ODEs are called ill-conditioned.
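These stability limits are easy to verify numerically. The sketch below (a Python illustration of our own, not from the text) applies Euler’s method to dy/dt = -ay with a = 1, for which the limit is h = 2/a = 2:

```python
def euler_decay(a, h, y0=1.0, steps=50):
    """Euler's method applied to dy/dt = -a*y for a fixed number of steps.

    Each step multiplies y by the amplification factor (1 - a*h)."""
    y = y0
    for _ in range(steps):
        y = y + (-a * y) * h    # Euler step: y_{i+1} = y_i (1 - a h)
    return y

a = 1.0                      # true solution y0*exp(-t) decays smoothly to zero
for h in (0.5, 1.5, 2.5):    # stability limit is h = 2/a = 2
    factor = 1 - a * h
    print(f"h = {h}: amplification factor = {factor:+.2f}, "
          f"y after 50 steps = {euler_decay(a, h):.3e}")
```

Step sizes below the limit decay toward zero (for 1/a < h < 2/a the iterates oscillate in sign but still shrink), while h = 2.5 grows without bound even though the true solution is bounded.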
FIGURE 22.3
An M-file to implement Euler’s method.
function [t,y] = eulode(dydt,tspan,y0,h,varargin)
% eulode: Euler ODE solver
%   [t,y] = eulode(dydt,tspan,y0,h,p1,p2,...):
%       uses Euler's method to integrate an ODE
% input:
%   dydt = name of the M-file that evaluates the ODE
%   tspan = [ti, tf] where ti and tf = initial and
%       final values of independent variable
%   y0 = initial value of dependent variable
%   h = step size
%   p1,p2,... = additional parameters used by dydt
% output:
%   t = vector of independent variable
%   y = vector of solution for dependent variable
if nargin<4,error('at least 4 input arguments required'),end
ti = tspan(1);tf = tspan(2);
if ~(tf>ti),error('upper limit must be greater than lower'),end
t = (ti:h:tf)'; n = length(t);
% if necessary, add an additional value of t
% so that range goes from t = ti to tf
if t(n)<tf
  t(n+1) = tf;
  n = n+1;
end
y = y0*ones(n,1); %preallocate y to improve efficiency
for i = 1:n-1 %implement Euler's method
  y(i+1) = y(i) + dydt(t(i),y(i),varargin{:})*(t(i+1)-t(i));
end
Inaccuracy and instability are often confused. This is probably because (a) both represent situations where the numerical solution breaks down and (b) both are affected by step size. However, they are distinct problems. For example, an inaccurate method can be very stable. We will return to the topic when we discuss stiff systems in Chap. 23.
22.2.3 MATLAB M-file Function: eulode

We have already developed a simple M-file to implement Euler’s method for the falling bungee jumper problem in Chap. 3. Recall from Section 3.6 that this function used Euler’s method to compute the velocity after a given time of free fall. Now, let’s develop a more general, all-purpose algorithm.

Figure 22.3 shows an M-file that uses Euler’s method to compute values of the dependent variable y over a range of values of the independent variable t. The name of the function holding the right-hand side of the differential equation is passed into the function as the variable dydt. The initial and final values of the desired range of the independent variable are passed as a vector tspan. The initial value and the desired step size are passed as y0 and h, respectively. The function first generates a vector t over the desired range of the independent variable using an increment of h. In the event that the step size is not evenly divisible into the range, the last value will fall short of the final value of the range. If this occurs, the final value is added to t so that the series spans the complete range. The length of the t vector is determined as n. In addition, a vector of the dependent variable y is preallocated with n values of the initial condition to improve efficiency.
At this point, Euler’s method (Eq. 22.5) is implemented by a simple loop:

for i = 1:n-1
  y(i+1) = y(i) + dydt(t(i),y(i),varargin{:})*(t(i+1)-t(i));
end

Notice how a function is used to generate a value for the derivative at the appropriate values of the independent and dependent variables. Also notice how the time step is automatically calculated based on the difference between adjacent values in the vector t.

The ODE being solved can be set up in several ways. First, the differential equation can be defined as an anonymous function object. For example, for the ODE from Example 22.1:

dydt=@(t,y) 4*exp(0.8*t) - 0.5*y;

The solution can then be generated as

[t,y] = eulode(dydt,[0 4],2,1);
disp([t,y])

with the result (compare with Table 22.1):

         0    2.0000
    1.0000    5.0000
    2.0000   11.4022
    3.0000   25.5132
    4.0000   56.8493
Although using an anonymous function is feasible for the present case, there will be more complex problems where the definition of the ODE requires several lines of code. In
such instances, creating a separate M-file is the only option.
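For readers working outside MATLAB, a rough Python analogue of eulode (our own sketch; the padding behavior is meant to mirror the M-file, but all names here are assumptions) looks like this:

```python
import math

def eulode(dydt, tspan, y0, h, *args):
    """Rough Python analogue of the eulode M-file.

    dydt(t, y, *args) evaluates the ODE; tspan = (ti, tf); y0 = initial value;
    h = step size. Returns lists t, y. If h does not divide tf - ti evenly,
    a final short step to tf is appended, mirroring the M-file."""
    ti, tf = tspan
    if tf <= ti:
        raise ValueError("upper limit must be greater than lower")
    t = [ti]
    while t[-1] + h <= tf + 1e-12:
        t.append(t[-1] + h)
    if t[-1] < tf - 1e-12:       # pad so the range reaches tf exactly
        t.append(tf)
    y = [y0]
    for i in range(len(t) - 1):  # Euler step over the actual local interval
        y.append(y[i] + dydt(t[i], y[i], *args) * (t[i + 1] - t[i]))
    return t, y

dydt = lambda t, y: 4 * math.exp(0.8 * t) - 0.5 * y
t, y = eulode(dydt, (0, 4), 2, 1)
for ti, yi in zip(t, y):
    print(f"{ti:6.4f}  {yi:8.4f}")
```

Calling it with an uneven step, say h = 1.5 over (0, 4), produces nodes at 0, 1.5, 3, and a final short step to 4, just as the M-file’s padding logic intends.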
22.3 IMPROVEMENTS OF EULER’S METHOD
A fundamental source of error in Euler’s method is that the derivative at the beginning of the interval is assumed to apply across the entire interval. Two simple modifications are available to help circumvent this shortcoming. As will be demonstrated in Section 22.4, both modifications (as well as Euler’s method itself) actually belong to a larger class of solution techniques called Runge-Kutta methods. However, because they have very straightforward graphical interpretations, we will present them prior to their formal derivation as Runge-Kutta methods.
FIGURE 22.4
Graphical depiction of Heun’s method. (a) Predictor, which projects with the slope f(t_i, y_i) and then evaluates the end slope f(t_{i+1}, y_{i+1}). (b) Corrector, which uses the average slope [f(t_i, y_i) + f(t_{i+1}, y_{i+1})]/2.
22.3.1 Heun’s Method

One method to improve the estimate of the slope involves the determination of two derivatives for the interval, one at the beginning and another at the end. The two derivatives are then averaged to obtain an improved estimate of the slope for the entire interval. This approach, called Heun’s method, is depicted graphically in Fig. 22.4.

Recall that in Euler’s method, the slope at the beginning of an interval

    y'_i = f(t_i, y_i)        (22.14)

is used to extrapolate linearly to y_{i+1}:

    y^0_{i+1} = y_i + f(t_i, y_i) h        (22.15)

For the standard Euler method we would stop at this point. However, in Heun’s method the y^0_{i+1} calculated in Eq. (22.15) is not the final answer, but an intermediate prediction. This is why we have distinguished it with a superscript 0. Equation (22.15) is called a predictor equation. It provides an estimate that allows the calculation of a slope at the end of the interval:

    y'_{i+1} = f(t_{i+1}, y^0_{i+1})        (22.16)
Thus, the two slopes [Eqs. (22.14) and (22.16)] can be combined to obtain an average slope for the interval:

    \bar{y}' = \frac{f(t_i, y_i) + f(t_{i+1}, y^0_{i+1})}{2}

This average slope is then used to extrapolate linearly from y_i to y_{i+1} using Euler’s method:

    y_{i+1} = y_i + \frac{f(t_i, y_i) + f(t_{i+1}, y^0_{i+1})}{2} h        (22.17)

which is called a corrector equation.
FIGURE 22.5
Graphical representation of iterating the corrector of Heun’s method to obtain an improved estimate.
The Heun method is a predictor-corrector approach. As just derived, it can be expressed concisely as

Predictor (Fig. 22.4a):

    y^0_{i+1} = y^m_i + f(t_i, y^m_i) h        (22.18)

Corrector (Fig. 22.4b):

    y^j_{i+1} = y^m_i + \frac{f(t_i, y^m_i) + f(t_{i+1}, y^{j-1}_{i+1})}{2} h        (22.19)

    (for j = 1, 2, ..., m)

Note that because Eq. (22.19) has y_{i+1} on both sides of the equal sign, it can be applied in an iterative fashion as indicated. That is, an old estimate can be used repeatedly to provide an improved estimate of y_{i+1}. The process is depicted in Fig. 22.5.
As with similar iterative methods discussed in previous sections of the book, a termination criterion for convergence of the corrector is provided by

    |\varepsilon_a| = \left|\frac{y^j_{i+1} - y^{j-1}_{i+1}}{y^j_{i+1}}\right| \times 100\%

where y^{j-1}_{i+1} and y^j_{i+1} are the results from the prior and the present iteration of the corrector, respectively. It should be understood that the iterative process does not necessarily converge on the true answer but will converge on an estimate with a finite truncation error, as demonstrated in the following example.
EXAMPLE 22.2 Heun’s Method

Problem Statement. Use Heun’s method with iteration to integrate y' = 4e^{0.8t} - 0.5y from t = 0 to 4 with a step size of 1. The initial condition at t = 0 is y = 2. Employ a stopping criterion of 0.00001% to terminate the corrector iterations.

Solution. First, the slope at (t_0, y_0) is calculated as

    y'_0 = 4e^{0} - 0.5(2) = 3

Then, the predictor is used to compute a value at 1.0:

    y^0_1 = 2 + 3(1) = 5
Note that this is the result that would be obtained by the standard Euler method. The true value in Table 22.2 shows that it corresponds to a percent relative error of 19.28%.

Now, to improve the estimate for y_{i+1}, we use the value y^0_1 to predict the slope at the end of the interval

    y'_1 = f(t_1, y^0_1) = 4e^{0.8(1)} - 0.5(5) = 6.402164
which can be combined with the initial slope to yield an average slope over the interval from t = 0 to 1:

    \bar{y}' = \frac{3 + 6.402164}{2} = 4.701082

This result can then be substituted into the corrector [Eq. (22.19)] to give the prediction at t = 1:

    y^1_1 = 2 + 4.701082(1) = 6.701082

which represents a true percent relative error of -8.18%. Thus, the Heun method without iteration of the corrector reduces the absolute value of the error by a factor of about 2.4 as compared with Euler’s method. At this point, we can also compute an approximate error as

    |\varepsilon_a| = \left|\frac{6.701082 - 5}{6.701082}\right| \times 100\% = 25.39\%

Now the estimate of y_1 can be refined by substituting the new result back into the right-hand side of Eq. (22.19) to give

    y^2_1 = 2 + \frac{3 + 4e^{0.8(1)} - 0.5(6.701082)}{2}(1) = 6.275811

which represents a true percent relative error of 1.31% and an approximate error of

    |\varepsilon_a| = \left|\frac{6.275811 - 6.701082}{6.275811}\right| \times 100\% = 6.776\%
TABLE 22.2 Comparison of true and numerical values of the integral of y' = 4e^{0.8t} - 0.5y, with the initial condition that y = 2 at t = 0. The numerical values were computed using the Euler and Heun methods with a step size of 1. The Heun method was implemented both without and with iteration of the corrector.

                                       Without Iteration        With Iteration
    t     y_true       y_Euler   |ε_t| (%)    y_Heun   |ε_t| (%)    y_Heun   |ε_t| (%)
    0      2.00000      2.00000               2.00000               2.00000
    1      6.19463      5.00000    19.28      6.70108     8.18      6.36087     2.68
    2     14.84392     11.40216    23.19     16.31978     9.94     15.30224     3.09
    3     33.67717     25.51321    24.24     37.19925    10.46     34.74328     3.17
    4     75.33896     56.84931    24.54     83.33777    10.62     77.73510     3.18
The next iteration gives

    y^3_1 = 2 + \frac{3 + 4e^{0.8(1)} - 0.5(6.275811)}{2}(1) = 6.382129

which represents a true error of 3.03% and an approximate error of 1.666%.

The approximate error will keep dropping as the iterative process converges on a stable final result. In this example, after 12 iterations the approximate error falls below the stopping criterion. At this point, the result at t = 1 is 6.36087, which represents a true relative error of 2.68%. Table 22.2 shows results for the remainder of the computation along with results for Euler’s method and for the Heun method without iteration of the corrector.
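The hand iterations of this example can be automated. The Python sketch below (our own; the function name and structure are assumptions, not from the text) wraps the predictor (22.18) and the iterated corrector (22.19) with the example’s stopping criterion. With the default tolerance it reproduces the converged value 6.36087 at t = 1; capping the corrector at a single pass reproduces the non-iterated value 6.70108:

```python
import math

def f(t, y):
    """Right-hand side of y' = 4*exp(0.8*t) - 0.5*y."""
    return 4 * math.exp(0.8 * t) - 0.5 * y

def heun(f, t0, tf, y0, h, es=1e-5, maxit=50):
    """Heun's method with corrector iteration (stopping criterion es in %)."""
    ts, ys = [t0], [y0]
    t, y = t0, y0
    n = round((tf - t0) / h)
    for _ in range(n):
        k1 = f(t, y)
        y_new = y + k1 * h                         # predictor: Euler step
        for _ in range(maxit):                     # iterate the corrector
            y_old = y_new
            y_new = y + (k1 + f(t + h, y_old)) / 2 * h
            if abs((y_new - y_old) / y_new) * 100 < es:
                break
        t += h
        y = y_new
        ts.append(t)
        ys.append(y)
    return ts, ys

ts, ys = heun(f, 0.0, 4.0, 2.0, 1.0)
print(f"y(1) = {ys[1]:.5f}")   # compare with the With Iteration column of Table 22.2
```

Setting maxit=1 gives the Without Iteration column instead, since the corrector is then applied only once per step.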
Insight into the local error of the Heun method can be gained by recognizing that it is related to the trapezoidal rule. In the previous example, the derivative is a function of both the dependent variable y and the independent variable t. For cases such as polynomials, where the ODE is solely a function of the independent variable, the predictor step [Eq. (22.18)] is not required and the corrector is applied only once for each iteration. For such cases, the technique is expressed concisely as

    y_{i+1} = y_i + \frac{f(t_i) + f(t_{i+1})}{2} h        (22.20)
Notice the similarity between the second term on the right-hand side of Eq. (22.20) and the trapezoidal rule [Eq. (19.11)]. The connection between the two methods can be formally demonstrated by starting with the ordinary differential equation

    \frac{dy}{dt} = f(t)        (22.21)

This equation can be solved for y by integration:

    \int_{y_i}^{y_{i+1}} dy = \int_{t_i}^{t_{i+1}} f(t)\,dt        (22.22)

which yields

    y_{i+1} - y_i = \int_{t_i}^{t_{i+1}} f(t)\,dt        (22.23)

or

    y_{i+1} = y_i + \int_{t_i}^{t_{i+1}} f(t)\,dt        (22.24)
Now, recall that the trapezoidal rule [Eq. (19.11)] is defined as

    \int_{t_i}^{t_{i+1}} f(t)\,dt = \frac{f(t_i) + f(t_{i+1})}{2} h        (22.25)

where h = t_{i+1} - t_i. Substituting Eq. (22.25) into Eq. (22.24) yields
    y_{i+1} = y_i + \frac{f(t_i) + f(t_{i+1})}{2} h        (22.26)

which is equivalent to Eq. (22.20). For this reason, Heun’s method is sometimes referred to as the trapezoidal rule.
Because Eq. (22.26) is a direct expression of the trapezoidal rule, the local truncation error is given by [recall Eq. (19.14)]

    E_t = -\frac{f''(\xi)}{12} h^3        (22.27)

where ξ is between t_i and t_{i+1}. Thus, the method is second order because the second derivative of the ODE is zero when the true solution is a quadratic. In addition, the local and global errors are O(h^3) and O(h^2), respectively. Therefore, decreasing the step size decreases the error at a faster rate than for Euler’s method.
22.3.2 The Midpoint Method

Figure 22.6 illustrates another simple modification of Euler’s method. Called the midpoint method, this technique uses Euler’s method to predict a value of y at the midpoint of the interval (Fig. 22.6a):

    y_{i+1/2} = y_i + f(t_i, y_i) \frac{h}{2}        (22.28)

Then, this predicted value is used to calculate a slope at the midpoint:

    y'_{i+1/2} = f(t_{i+1/2}, y_{i+1/2})        (22.29)
FIGURE 22.6
Graphical depiction of the midpoint method. (a) Predictor and (b) corrector, both based on the midpoint slope f(t_{i+1/2}, y_{i+1/2}).
which is assumed to represent a valid approximation of the average slope for the entire interval. This slope is then used to extrapolate linearly from t_i to t_{i+1} (Fig. 22.6b):

    y_{i+1} = y_i + f(t_{i+1/2}, y_{i+1/2}) h        (22.30)

Observe that because y_{i+1} is not on both sides, the corrector [Eq. (22.30)] cannot be applied iteratively to improve the solution as was done with Heun’s method.
As in our discussion of Heun’s method, the midpoint method can also be linked to Newton-Cotes integration formulas. Recall from Table 19.4 that the simplest Newton-Cotes open integration formula, which is called the midpoint method, can be represented as

    \int_a^b f(x)\,dx \cong (b - a) f(x_1)        (22.31)

where x_1 is the midpoint of the interval (a, b). Using the nomenclature for the present case, it can be expressed as

    \int_{t_i}^{t_{i+1}} f(t)\,dt \cong h f(t_{i+1/2})        (22.32)
Substitution of this formula into Eq. (22.24) yields Eq. (22.30). Thus, just as the Heun method can be called the trapezoidal rule, the midpoint method gets its name from the underlying integration formula on which it is based.

The midpoint method is superior to Euler’s method because it utilizes a slope estimate at the midpoint of the prediction interval. Recall from our discussion of numerical differentiation in Section 4.3.4 that centered finite differences are better approximations of derivatives than either forward or backward versions. In the same sense, a centered approximation such as Eq. (22.29) has a local truncation error of O(h^2) in comparison with the forward approximation of Euler’s method, which has an error of O(h). Consequently, the local and global errors of the midpoint method are O(h^3) and O(h^2), respectively.
22.4 RUNGE-KUTTA METHODS

Runge-Kutta (RK) methods achieve the accuracy of a Taylor series approach without requiring the calculation of higher derivatives. Many variations exist but all can be cast in the generalized form of Eq. (22.4):

    y_{i+1} = y_i + \phi h        (22.33)

where φ is called an increment function, which can be interpreted as a representative slope over the interval. The increment function can be written in general form as

    \phi = a_1 k_1 + a_2 k_2 + \cdots + a_n k_n        (22.34)
where the a’s are constants and the k’s are

    k_1 = f(t_i, y_i)        (22.34a)
    k_2 = f(t_i + p_1 h, y_i + q_{11} k_1 h)        (22.34b)
    k_3 = f(t_i + p_2 h, y_i + q_{21} k_1 h + q_{22} k_2 h)        (22.34c)
    ...
    k_n = f(t_i + p_{n-1} h, y_i + q_{n-1,1} k_1 h + q_{n-1,2} k_2 h + \cdots + q_{n-1,n-1} k_{n-1} h)        (22.34d)
where the p’s and q’s are constants. Notice that the k’s are recurrence relationships. That is, k_1 appears in the equation for k_2, which appears in the equation for k_3, and so forth. Because each k is a functional evaluation, this recurrence makes RK methods efficient for computer calculations.

Various types of Runge-Kutta methods can be devised by employing different numbers of terms in the increment function as specified by n. Note that the first-order RK method with n = 1 is, in fact, Euler’s method. Once n is chosen, values for the a’s, p’s, and q’s are evaluated by setting Eq. (22.33) equal to terms in a Taylor series expansion. Thus, at least for the lower-order versions, the number of terms n usually represents the order of the approach. For example, in Section 22.4.1, second-order RK methods use an increment function with two terms (n = 2). These second-order methods will be exact if the solution to the differential equation is quadratic. In addition, because terms with h^3 and higher are dropped during the derivation, the local truncation error is O(h^3) and the global error is O(h^2). In Section 22.4.2, the fourth-order RK method (n = 4) is presented for which the global truncation error is O(h^4).
22.4.1 Second-Order Runge-Kutta Methods

The second-order version of Eq. (22.33) is

    y_{i+1} = y_i + (a_1 k_1 + a_2 k_2) h        (22.35)

where

    k_1 = f(t_i, y_i)        (22.35a)
    k_2 = f(t_i + p_1 h, y_i + q_{11} k_1 h)        (22.35b)

The values for a_1, a_2, p_1, and q_{11} are evaluated by setting Eq. (22.35) equal to a second-order Taylor series. By doing this, three equations can be derived to evaluate the four unknown constants (see Chapra and Canale, 2010, for details). The three equations are

    a_1 + a_2 = 1        (22.36)

    a_2 p_1 = \frac{1}{2}        (22.37)

    a_2 q_{11} = \frac{1}{2}        (22.38)
Because we have three equations with four unknowns, these equations are said to be underdetermined. We, therefore, must assume a value of one of the unknowns to determine the other three. Suppose that we specify a value for a_2. Then Eqs. (22.36) through (22.38) can be solved simultaneously for

    a_1 = 1 - a_2        (22.39)

    p_1 = q_{11} = \frac{1}{2 a_2}        (22.40)

Because we can choose an infinite number of values for a_2, there are an infinite number of second-order RK methods. Every version would yield exactly the same results if the solution to the ODE were quadratic, linear, or a constant. However, they yield different results when (as is typically the case) the solution is more complicated. Three of the most commonly used and preferred versions are presented next.
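This family is easy to explore numerically. The Python sketch below (our own illustration) implements the generic second-order RK step using Eqs. (22.39) and (22.40) and confirms that three common choices of a_2 (1/2, as in Heun without iteration; 2/3, often associated with Ralston’s method; and 1, as in the midpoint method) all reproduce a quadratic solution exactly. Here dy/dt = 2t + 1 with y(0) = 1, whose solution is y = t^2 + t + 1:

```python
def rk2(f, t0, tf, y0, h, a2):
    """Generic second-order RK step with a1 = 1 - a2, p1 = q11 = 1/(2*a2)."""
    a1 = 1 - a2
    p1 = 1 / (2 * a2)
    t, y = t0, y0
    n = round((tf - t0) / h)
    for _ in range(n):
        k1 = f(t, y)                             # slope at the start of the step
        k2 = f(t + p1 * h, y + p1 * k1 * h)      # slope at the sampled interior point
        y += (a1 * k1 + a2 * k2) * h             # weighted-average slope, Eq. (22.35)
        t += h
    return y

f = lambda t, y: 2 * t + 1    # solution y = t**2 + t + 1 for y(0) = 1
for a2 in (0.5, 2 / 3, 1.0):
    print(f"a2 = {a2:.4f}: y(4) = {rk2(f, 0.0, 4.0, 1.0, 1.0, a2):.6f}")
```

All three runs should return y(4) = 21 to machine precision, since every member of the family integrates quadratics exactly; with a more complicated right-hand side the three choices would diverge slightly from one another.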