
6.2 NEWTON-RAPHSON

As with other root-location methods, Eq. (6.3) can be used as a termination criterion. In addition, a theoretical analysis (Chapra and Canale, 2010) provides insight regarding the rate of convergence as expressed by

E_{t,i+1} = \frac{-f''(x_r)}{2 f'(x_r)} E_{t,i}^2        (6.7)

Thus, the error should be roughly proportional to the square of the previous error. In other words, the number of significant figures of accuracy approximately doubles with each iteration. This behavior is called quadratic convergence and is one of the major reasons for the popularity of the method.

Although the Newton-Raphson method is often very efficient, there are situations where it performs poorly. A special case, multiple roots, is discussed elsewhere (Chapra and Canale, 2010). However, even when dealing with simple roots, difficulties can also arise, as in the following example.

FIGURE 6.4 Graphical depiction of the Newton-Raphson method. A tangent to the function at x_i [that is, f'(x_i)] is extrapolated down to the x axis to provide an estimate of the root at x_{i+1}.

EXAMPLE 6.3  A Slowly Converging Function with Newton-Raphson

Problem Statement. Determine the positive root of f(x) = x^10 - 1 using the Newton-Raphson method and an initial guess of x = 0.5.

Solution. The Newton-Raphson formula for this case is

x_{i+1} = x_i - \frac{x_i^{10} - 1}{10 x_i^9}

which can be used to compute

 i        x_i             |ε_a|, %
 0         0.5
 1        51.65            99.032
 2        46.485           11.111
 3        41.8365          11.111
 4        37.65285         11.111
 ...       ...              ...
40         1.002316         2.130
41         1.000024         0.229
42         1.000000         0.002

Thus, after the first poor prediction, the technique is converging on the true root of 1, but at a very slow rate. Why does this happen? As shown in Fig. 6.5, a simple plot of the first few iterations is helpful in providing insight. Notice how the first guess is in a region where the slope is near zero. Thus, the first iteration flings the solution far away from the initial guess to a new value (x = 51.65) where f(x) has an extremely high value. The solution then plods along for over 40 iterations until converging on the root with adequate accuracy.

FIGURE 6.5 Graphical depiction of the Newton-Raphson method for a case with slow convergence. The inset shows how a near-zero slope initially shoots the solution far from the root. Thereafter, the solution very slowly converges on the root.

FIGURE 6.6 Four cases where the Newton-Raphson method exhibits poor convergence.

Aside from slow convergence due to the nature of the function, other difficulties can arise, as illustrated in Fig. 6.6. For example, Fig. 6.6a depicts the case where an inflection point (i.e., f''(x) = 0) occurs in the vicinity of a root. Notice that iterations beginning at x_1 progressively diverge from the root. Fig. 6.6b illustrates the tendency of the Newton-Raphson technique to oscillate around a local maximum or minimum. Such oscillations may persist, or, as in Fig. 6.6b, a near-zero slope is reached, whereupon the solution is sent far from the area of interest. Figure 6.6c shows how an initial guess that is close to one root can jump to a location several roots away. This tendency to move away from the area of interest is due to the fact that near-zero slopes are encountered. Obviously, a zero slope [f'(x) = 0] is a real disaster because it causes division by zero in the Newton-Raphson formula [Eq. (6.6)].
As in Fig. 6.6d, it means that the solution shoots off horizontally and never hits the x axis. Thus, there is no general convergence criterion for Newton-Raphson. Its convergence depends on the nature of the function and on the accuracy of the initial guess. The only remedy is to have an initial guess that is "sufficiently" close to the root. And for some functions, no guess will work! Good guesses are usually predicated on knowledge of the physical problem setting or on devices such as graphs that provide insight into the behavior of the solution. The lack of a general convergence criterion also suggests that good computer software should be designed to recognize slow convergence or divergence.
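To see the slow convergence of Example 6.3 first hand, the iteration can be scripted in a few lines of MATLAB. The following is a minimal sketch, not the newtraph function developed in the next section; the variable names f, df, xr, and ea and the 0.005% stopping level are chosen here simply to mirror the table above.

% Newton-Raphson iterations for f(x) = x^10 - 1 with initial guess x = 0.5
f  = @(x) x^10 - 1;                    % function whose root is sought
df = @(x) 10*x^9;                      % analytical derivative
xr = 0.5;                              % initial guess
for i = 1:50
    xrold = xr;
    xr = xr - f(xr)/df(xr);            % Newton-Raphson formula
    ea = abs((xr - xrold)/xr)*100;     % approximate relative error (%)
    fprintf('%3d  %12.6f  %10.3f\n', i, xr, ea)
    if ea < 0.005, break, end          % stop once the error falls below 0.005%
end

The first pass reproduces the jump to x = 51.65, and roughly forty more iterations are needed before the approximate error drops to the 0.002% level shown in the table.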

6.2.1 MATLAB M-file: newtraph

An algorithm for the Newton-Raphson method can be easily developed (Fig. 6.7). Note that the program must have access to the function (func) and its first derivative (dfunc). This can be accomplished simply by including user-defined functions to compute these quantities. Alternatively, as in the algorithm in Fig. 6.7, they can be passed to the function as arguments.

After the M-file is entered and saved, it can be invoked to solve for roots. For example, for the simple function x^2 - 9, the root can be determined as in

>> newtraph(@(x) x^2-9,@(x) 2*x,5)

ans =
     3

EXAMPLE 6.4  Newton-Raphson Bungee Jumper Problem

Problem Statement. Use the M-file function from Fig. 6.7 to determine the mass of the bungee jumper with a drag coefficient of 0.25 kg/m so that it has a velocity of 36 m/s after 4 s of free fall. The acceleration of gravity is 9.81 m/s^2.

Solution. The function to be evaluated is

f(m) = \sqrt{\frac{gm}{c_d}} \tanh\left( \sqrt{\frac{g c_d}{m}}\, t \right) - v(t)        (E6.4.1)

To apply the Newton-Raphson method, the derivative of this function must be evaluated with respect to the unknown, m:

\frac{df(m)}{dm} = \frac{1}{2} \sqrt{\frac{g}{m c_d}} \tanh\left( \sqrt{\frac{g c_d}{m}}\, t \right) - \frac{g}{2m}\, t\, \mathrm{sech}^2\left( \sqrt{\frac{g c_d}{m}}\, t \right)        (E6.4.2)

We should mention that although this derivative is not difficult to evaluate in principle, it involves a bit of concentration and effort to arrive at the final result. The two formulas can now be used in conjunction with the function newtraph to evaluate the root:

>> y = @(m) sqrt(9.81*m/0.25)*tanh(sqrt(9.81*0.25/m)*4)-36;
>> dy = @(m) 1/2*sqrt(9.81/(m*0.25))*tanh(sqrt(9.81*0.25/m)*4) ...
        -9.81/(2*m)*4*sech(sqrt(9.81*0.25/m)*4)^2;
>> newtraph(y,dy,140,0.00001)

ans =
  142.7376
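Since the hand-derived derivative in Eq. (E6.4.2) is the most error-prone step, it is worth a quick numerical spot check before trusting the result. The following snippet is an illustrative sketch, not part of the text: the parameter variables g, cd, and t and the step h are introduced here for convenience, and it assumes the newtraph M-file of Fig. 6.7 is saved on the MATLAB path.

g = 9.81; cd = 0.25; t = 4;                          % parameters of Example 6.4
y  = @(m) sqrt(g*m/cd)*tanh(sqrt(g*cd/m)*t) - 36;    % f(m), Eq. (E6.4.1)
dy = @(m) 1/2*sqrt(g/(m*cd))*tanh(sqrt(g*cd/m)*t) ...
     - g/(2*m)*t*sech(sqrt(g*cd/m)*t)^2;             % df/dm, Eq. (E6.4.2)

% compare the analytical derivative with a centered finite difference at m = 140
h = 1e-6*140;
fprintf('analytical: %.8f   centered difference: %.8f\n', ...
        dy(140), (y(140+h) - y(140-h))/(2*h))

% request all three outputs to see the error estimate and iteration count as well
[root, ea, iter] = newtraph(y, dy, 140, 0.00001)

If the two derivative values do not agree to several significant figures, the algebra behind Eq. (E6.4.2) should be rechecked before it is handed to newtraph.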

6.3 SECANT METHODS

As in Example 6.4, a potential problem in implementing the Newton-Raphson method is the evaluation of the derivative. Although this is not inconvenient for polynomials and many other functions, there are certain functions whose derivatives may be difficult or inconvenient to evaluate. For these cases, the derivative can be approximated by a backward finite divided difference:

f'(x_i) \cong \frac{f(x_{i-1}) - f(x_i)}{x_{i-1} - x_i}

This approximation can be substituted into Eq. (6.6) to yield the following iterative equation:

x_{i+1} = x_i - \frac{f(x_i)(x_{i-1} - x_i)}{f(x_{i-1}) - f(x_i)}        (6.8)

Equation (6.8) is the formula for the secant method. Notice that the approach requires two initial estimates of x. However, because f(x) is not required to change signs between the estimates, it is not classified as a bracketing method.

FIGURE 6.7 An M-file to implement the Newton-Raphson method.

function [root,ea,iter]=newtraph(func,dfunc,xr,es,maxit,varargin)
% newtraph: Newton-Raphson root location zeroes
%   [root,ea,iter]=newtraph(func,dfunc,xr,es,maxit,p1,p2,...):
%       uses Newton-Raphson method to find the root of func
% input:
%   func = name of function
%   dfunc = name of derivative of function
%   xr = initial guess
%   es = desired relative error (default = 0.0001%)
%   maxit = maximum allowable iterations (default = 50)
%   p1,p2,... = additional parameters used by function
% output:
%   root = real root
%   ea = approximate relative error (%)
%   iter = number of iterations
if nargin<3,error('at least 3 input arguments required'),end
if nargin<4|isempty(es),es=0.0001;end
if nargin<5|isempty(maxit),maxit=50;end
iter = 0;
while (1)
  xrold = xr;
  xr = xr - func(xr)/dfunc(xr);
  iter = iter + 1;
  if xr ~= 0, ea = abs((xr - xrold)/xr) * 100; end
  if ea <= es | iter >= maxit, break, end
end
root = xr;

Rather than using two arbitrary values to estimate the derivative, an alternative approach involves a fractional perturbation of the independent variable to estimate f'(x),

f'(x_i) \cong \frac{f(x_i + \delta x_i) - f(x_i)}{\delta x_i}

where δ = a small perturbation fraction. This approximation can be substituted into Eq. (6.6) to yield the following iterative equation:

x_{i+1} = x_i - \frac{\delta x_i\, f(x_i)}{f(x_i + \delta x_i) - f(x_i)}        (6.9)

We call this the modified secant method. As in the following example, it provides a nice means to attain the efficiency of Newton-Raphson without having to compute derivatives.

EXAMPLE 6.5  Modified Secant Method

Problem Statement. Use the modified secant method to determine the mass of the bungee jumper with a drag coefficient of 0.25 kg/m so that it has a velocity of 36 m/s after 4 s of free fall. Note: The acceleration of gravity is 9.81 m/s^2. Use an initial guess of 50 kg and a value of 10^-6 for the perturbation fraction.

Solution. Inserting the parameters into Eq. (6.9) yields

First iteration:

x_0 = 50                    f(x_0) = -4.57938708
x_0 + δx_0 = 50.00005       f(x_0 + δx_0) = -4.579381118

x_1 = 50 - \frac{10^{-6}(50)(-4.57938708)}{-4.579381118 - (-4.57938708)} = 88.39931        (|ε_t| = 38.1%; |ε_a| = 43.4%)
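Equation (6.9) is simple enough to script directly, which makes it easy to carry the hand calculation above through to convergence. The following loop is a minimal sketch under the stated problem parameters, not the textbook's M-file; the names fm, delta, and xr and the 0.0001% stopping level are introduced here for illustration.

fm = @(m) sqrt(9.81*m/0.25)*tanh(sqrt(9.81*0.25/m)*4) - 36;  % f(m) from Example 6.5
delta = 1e-6;     % perturbation fraction
xr = 50;          % initial guess (kg)
for i = 1:50
    xrold = xr;
    % modified secant formula, Eq. (6.9)
    xr = xr - delta*xr*fm(xr)/(fm(xr + delta*xr) - fm(xr));
    ea = abs((xr - xrold)/xr)*100;     % approximate relative error (%)
    fprintf('%2d  %12.5f  %12.5f\n', i, xr, ea)
    if ea < 0.0001, break, end         % stop at 0.0001% relative error
end

The first pass through the loop reproduces x_1 = 88.39931 from the hand calculation, and the iterations then settle on the same root, approximately 142.7376 kg, obtained with Newton-Raphson in Example 6.4.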