7.6.3 Relaxation methods

The convergence rate of the Jacobi and Gauss–Seidel methods depends on

the properties of the iteration matrix. It has been found that the convergence rate can be improved by the introduction of a so-called relaxation parameter α. Consider the iteration equation (7.18) for the Jacobi method. It is easy to see that it can also be written as

x_i^(k) = x_i^(k-1) + [ Σ_{j=1, j≠i}^{n} (−a_ij/a_ii) x_j^(k-1) + b_i/a_ii − x_i^(k-1) ]   (i = 1, 2, . . . , n)

7.6 POINT-ITERATIVE METHODS 227

We try to modify the convergence rate of the iteration sequence by multiplying the second and third terms on the right hand side by the relaxation parameter α:

x_i^(k) = x_i^(k-1) + α [ Σ_{j=1, j≠i}^{n} (−a_ij/a_ii) x_j^(k-1) + b_i/a_ii − x_i^(k-1) ]   (i = 1, 2, . . . , n)   (7.23)

If we use α = 1 in (7.23) we get back to the original Jacobi method (7.18), but different values of parameter α will yield different iterative sequences. When we choose 0 < α < 1 the procedure is an under-relaxation method, whereas α > 1 is called over-relaxation.
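As a concrete illustration, the relaxed Jacobi sweep (7.23) is straightforward to code. The sketch below (in Python, our choice; the function name `jacobi_relaxed` and the stopping test on successive updates are our own conventions) applies it to a made-up diagonally dominant 3 × 3 system, not the book's example system (7.14):

```python
# Sketch of Jacobi iteration with relaxation, eq. (7.23):
#   x_i(k) = x_i(k-1) + alpha*[ sum_{j!=i} (-a_ij/a_ii)*x_j(k-1)
#                               + b_i/a_ii - x_i(k-1) ]
# Illustrative system only -- NOT the book's example (7.14).

def jacobi_relaxed(A, b, alpha=1.0, tol=1e-10, max_iter=500):
    n = len(b)
    x = [0.0] * n                                   # initial guess x(0) = 0
    for k in range(1, max_iter + 1):
        x_new = []
        for i in range(n):
            # bracketed term of (7.23), built from the OLD iterate only
            s = sum(-A[i][j] * x[j] for j in range(n) if j != i) / A[i][i]
            bracket = s + b[i] / A[i][i] - x[i]
            x_new.append(x[i] + alpha * bracket)
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, k                         # converged after k sweeps
        x = x_new
    return x, max_iter

A = [[4.0, -1.0,  1.0],
     [-1.0, 4.0, -2.0],
     [1.0, -2.0,  5.0]]
b = [5.0, 1.0, 12.0]                                # exact solution (1, 2, 3)

# over-relaxation (alpha > 1) can make plain Jacobi diverge, so only
# under-relaxation and alpha = 1 are shown here
for alpha in (0.75, 1.0):
    x, k = jacobi_relaxed(A, b, alpha)
    print(alpha, k, [round(v, 6) for v in x])
```

For this particular matrix the under-relaxed sweep needs fewer iterations than α = 1 would suggest at first glance; the optimum depends entirely on the eigenvalues of the iteration matrix, as discussed below.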

Before proceeding to apply (7.23) we verify that the introduction of the relaxation parameter α changes the iteration path without changing the final solution. First we compare the expression in the square brackets of (7.23) with matrix equation (7.16). If the iteration sequence converges, the vector x_j^(k→∞) will contain the correct solution of the system, so

Σ_{j=1}^{n} a_ij x_j^(k→∞) = b_i   (i = 1, 2, . . . , n)

Dividing both sides by coefficient a_ii and some rearrangement yields

x_i^(k→∞) = b_i/a_ii + Σ_{j=1, j≠i}^{n} (−a_ij/a_ii) x_j^(k→∞)

After k iterations the intermediate solution vector x_j^(k) is not equal to the correct solution, so

Σ_{j=1}^{n} a_ij x_j^(k) ≠ b_i   (7.25)

We define the residual r_i^(k) of the ith equation after k iterations as the difference between the left and right hand sides of (7.25):

r_i^(k) = b_i − Σ_{j=1}^{n} a_ij x_j^(k)   (7.26)

If the iteration process is convergent the intermediate solution vector x^(k) should get progressively closer to the final solution vector x^(k→∞) as the iteration count k increases, and hence the residuals r_i^(k) for all n equations should also tend to zero as k → ∞. Finally, we note that the expression in the square brackets of (7.23) is just equal to the residual r_i^(k−1) after k − 1 iterations divided by coefficient a_ii:

x_i^(k) = x_i^(k−1) + α [ r_i^(k−1)/a_ii ]   (i = 1, 2, . . . , n)   (7.27)

This confirms that the introduction of relaxation parameter α does not affect the converged solution, because all residuals r_i^(k−1) in the square brackets of (7.27) will be zero when k → ∞.

228 CHAPTER 7 SOLUTION OF DISCRETISED EQUATIONS

Next we note that, in terms of the iteration matrix form (7.19a–c) of the equation, the introduction of the relaxation parameter in (7.23) implies the following changes to the coefficients T_ij of the iteration matrix and constant vector c_i:

T_ii = 1 − α,   T_ij = −α a_ij/a_ii (j ≠ i),   c_i = α b_i/a_ii

Thus, we have demonstrated that the relaxation parameter alters the iteration path through changes in the iteration matrix, without altering the final solution. This suggests that relaxation may be advantageous if we select an optimum value of α that minimises the number of iterations required to reach the converged solution.
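This equivalence can be checked numerically. Expanding (7.23) gives T_ii = 1 − α, T_ij = −α a_ij/a_ii for j ≠ i, and c_i = α b_i/a_ii; the sketch below (our own illustration, with arbitrary values for A, b and the current iterate) confirms that one sweep of the direct form and one application of the matrix form agree to machine precision:

```python
# Numerical check that the relaxed Jacobi update (7.23) and the
# iteration-matrix form x(k) = T.x(k-1) + c give identical results when
#   T_ii = 1 - alpha, T_ij = -alpha*a_ij/a_ii (j != i), c_i = alpha*b_i/a_ii.
# A, b and the current iterate x_old are arbitrary illustrative values.

alpha = 0.8
A = [[4.0, -1.0, 1.0], [-1.0, 4.0, -2.0], [1.0, -2.0, 5.0]]
b = [5.0, 1.0, 12.0]
x_old = [0.3, -0.7, 1.1]
n = len(b)

# direct form, eq. (7.23)
direct = []
for i in range(n):
    s = sum(-A[i][j] * x_old[j] for j in range(n) if j != i) / A[i][i]
    direct.append(x_old[i] + alpha * (s + b[i] / A[i][i] - x_old[i]))

# iteration-matrix form x(k) = T.x(k-1) + c
T = [[(1.0 - alpha) if i == j else -alpha * A[i][j] / A[i][i]
      for j in range(n)] for i in range(n)]
c = [alpha * b[i] / A[i][i] for i in range(n)]
matrix_form = [sum(T[i][j] * x_old[j] for j in range(n)) + c[i]
               for i in range(n)]

print(direct)
print(matrix_form)
assert max(abs(direct[i] - matrix_form[i]) for i in range(n)) < 1e-12
```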

To see if this works in practice we perform the Jacobi iteration scheme with relaxation (7.23) for the example system (7.14) using the same initial guess as before: x_1^(0) = x_2^(0) = x_3^(0) = 0, with α = 0.75, 1.0 and 1.25. We find that the process converges to the correct solution x_1 = 1, x_2 = 2, x_3 = 3 after 25, 17 and 84 iterations, respectively. It appears that α = 1 is the optimum value for the Jacobi method and that there is not much to be gained by changes of α (at least not for this sample problem).

In spite of this slightly disappointing result we try out the relaxation concept on the Gauss–Seidel method. In this case the iteration equation after k iterations can be rewritten as

x_i^(k) = x_i^(k−1) + [ Σ_{j=1}^{i−1} (−a_ij/a_ii) x_j^(k) + Σ_{j=i+1}^{n} (−a_ij/a_ii) x_j^(k−1) + b_i/a_ii − x_i^(k−1) ]   (i = 1, 2, . . . , n)

If we introduce the relaxation parameter α as before, this yields

x_i^(k) = x_i^(k−1) + α [ Σ_{j=1}^{i−1} (−a_ij/a_ii) x_j^(k) + Σ_{j=i+1}^{n} (−a_ij/a_ii) x_j^(k−1) + b_i/a_ii − x_i^(k−1) ]   (i = 1, 2, . . . , n)   (7.29)

This is the iteration sequence for the Gauss–Seidel method with relaxation. We leave it as an exercise for the reader to verify that iteration of the sequence (7.29) using coefficients and the right hand side of example system (7.14) with α = 0.75, 1.0 and 1.25 yields convergence after 21, 13 and 27 iterations, respectively. It seems, once again, that no improvement is possible, but a slightly more careful search reveals that the iteration sequence converges to 4 decimal places within 10 iterations for slightly over-relaxed values of α in the range 1.06 to 1.08.
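The relaxed Gauss–Seidel sequence (7.29) can be sketched in the same way as the Jacobi sweep. The system below is a made-up diagonally dominant example, not the book's system (7.14), so the iteration counts it produces differ from those quoted above; the point is only that updating x in place yields the Gauss–Seidel splitting:

```python
# Sketch of Gauss-Seidel iteration with relaxation, eq. (7.29): values
# x_j(k) already updated in this sweep are used for j < i, and old values
# x_j(k-1) for j > i.  Illustrative system only -- NOT the book's (7.14).

def gauss_seidel_relaxed(A, b, alpha=1.0, tol=1e-6, max_iter=500):
    n = len(b)
    x = [0.0] * n                               # initial guess x(0) = 0
    for k in range(1, max_iter + 1):
        diff = 0.0
        for i in range(n):
            # updating x in place gives the Gauss-Seidel splitting of (7.29)
            s = sum(-A[i][j] * x[j] for j in range(n) if j != i) / A[i][i]
            new = x[i] + alpha * (s + b[i] / A[i][i] - x[i])
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            return x, k
    return x, max_iter

A = [[4.0, -1.0, 1.0], [-1.0, 4.0, -2.0], [1.0, -2.0, 5.0]]
b = [5.0, 1.0, 12.0]                            # exact solution (1, 2, 3)
for alpha in (0.75, 1.0, 1.25):
    x, k = gauss_seidel_relaxed(A, b, alpha)
    print(alpha, k, [round(v, 4) for v in x])
```

Because this matrix is symmetric positive definite, the over-relaxed sweep (α = 1.25) still converges, in line with the classical result that SOR converges for 0 < α < 2 on such systems.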

Unfortunately, the optimum value of the relaxation parameter is problem and mesh dependent, and it is difficult to give precise guidance. Nevertheless, through experience with a particular range of similar problems it is, at least in principle, possible to select a value of α which gives a better convergence rate than the basic Gauss–Seidel method. The well-known successive over-relaxation (SOR) method is the Gauss–Seidel method applied with a relaxation parameter α > 1.


7.7 Multigrid techniques

We have established in earlier chapters that the discretisation error reduces with the mesh spacing. In other words, the finer the mesh, the better the accuracy of a CFD simulation. Iterative techniques are preferred over direct methods because their storage overheads are lower, which makes them more attractive for the solution of large systems of equations arising from highly refined meshes. Moreover, we have seen in Chapter 6 that the SIMPLE algorithm for the coupling of continuity and momentum equations is itself iterative. Hence, there is no need to obtain very accurate intermediate solutions, as long as the iteration process eventually converges to the true solution. Unfortunately, it transpires that the convergence rate of iterative methods, such as the Jacobi and Gauss–Seidel methods, rapidly reduces as the mesh is refined.

To examine the relationship between the convergence rate of an iterative method and the number of grid cells in a problem we consider a simple two-dimensional cavity-driven flow. The inset of Figure 7.5 shows that the computational domain is a square cavity with a size of 1 cm × 1 cm. The lid of the cavity is moving with a velocity of 2 m/s in the positive x-direction. The fluid in the cavity is air and the flow is assumed to be laminar. We use a line-by-line iterative solver to compute the solution on three different grids with 10 × 10, 20 × 20 and 40 × 40 cells.

To obtain a measure of the closeness to the true solution of an intermediate solution in an iteration sequence we use the residual defined in (7.26) for the ith equation. The average residual R̄ over all n equations in the system (i.e. an average over all the control volumes in the computational domain of a flow problem) is a useful indicator of iterative convergence for a given problem:

R̄ = (1/n) Σ_{i=1}^{n} |r_i|   (7.30)

If the iteration process is convergent the average residual R̄ should tend to zero, since all contributing residuals r_i → 0 as k → ∞.

Figure 7.5 Residual reduction pattern with a line-by-line iterative solver using different grid resolutions

The average residual for a given solution parameter, e.g. the u-velocity component, is usually normalised to make it easier to interpret its value from case to case and to compare it with residuals relating to other solution parameters (e.g. v- or w-velocity or pressure, which may each have very different magnitudes). The most common normalisation is to consider the ratio of the average residual after k iterations and its value at the first iteration:

R_norm^(k) = R̄^(k) / R̄^(1)   (7.31)

In Figure 7.5 we have plotted the normalised residual of the u-momentum equation against the iteration number. The solution is aborted when the normalised residuals for all solution variables (velocity and pressure in this case) fall below 10^−3. We note that the 10 × 10 mesh solution converges in 161 iterations, whereas the 20 × 20 and 40 × 40 mesh solutions take 331 and 891 iterations to converge, respectively. Within the CFD code it is possible to improve the convergence rate by adjusting solution parameters, including relaxation parameters, but for the sake of consistency all solution parameters were kept constant. The pattern of residual reduction is evident from the diagram. After a rapid initial reduction of the residuals their rate of decrease settles to a more modest final value. It is also clear that the final convergence rate is lowest for the finest mesh. If we tried an even finer mesh, it would take even longer to converge.
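The stopping criterion described above is easy to reproduce in a few lines. The sketch below monitors the normalised average residual of (7.30) and (7.31) and stops when it falls below 10^−3; the 3 × 3 system and the plain Gauss–Seidel sweep are illustrative stand-ins for the cavity-flow equations, not the solver used for Figure 7.5:

```python
# Sketch of convergence monitoring with the normalised average residual:
#   Rbar     = (1/n) * sum_i |r_i|          eq. (7.30)
#   R_norm(k) = Rbar(k) / Rbar(1)           eq. (7.31)
# Iteration stops when R_norm(k) < 1e-3.  Illustrative system only.

def avg_residual(A, b, x):
    n = len(b)
    return sum(abs(b[i] - sum(A[i][j] * x[j] for j in range(n)))
               for i in range(n)) / n

def solve_monitored(A, b, tol=1e-3, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    rbar_1 = None
    for k in range(1, max_iter + 1):
        for i in range(n):                      # one Gauss-Seidel sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        rbar = avg_residual(A, b, x)            # Rbar(k), eq. (7.30)
        if rbar_1 is None:
            rbar_1 = rbar                       # Rbar(1): normalisation base
        if rbar / rbar_1 < tol:                 # R_norm(k) < tol, eq. (7.31)
            return x, k
    return x, max_iter

A = [[4.0, -1.0, 1.0], [-1.0, 4.0, -2.0], [1.0, -2.0, 5.0]]
b = [5.0, 1.0, 12.0]                            # exact solution (1, 2, 3)
x, k = solve_monitored(A, b)
print(k, x)
```

Note that a small normalised residual guarantees closeness to the true solution only up to a factor involving the conditioning of A, which is why residual tolerances are usually chosen with some margin.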

7.7.1 Multigrid concept

To simplify the explanation of the multigrid method we use matrix notation

and first revisit the definition of the residual. Consider the following system of equations arising from the finite volume discretisation of a conservation equation on a flow domain:

A.x = b   (7.32)

The vector x is the true solution of system (7.32).

If we solve this system with an iterative method we obtain an intermediate solution y after some unspecified number of iterations. This intermediate solution does not satisfy (7.32) exactly and, as before, we define the residual vector r as follows:

A.y = b − r   (7.33)

We can also define an error vector e as the difference between the true solution and the intermediate solution:

e = x − y   (7.34)

Subtracting (7.33) from (7.32) gives the following relationship between the error vector and the residual vector:

A.e = r   (7.35)

The residual vector can be easily calculated at any stage of the iteration process by substituting the intermediate solution into (7.33). We can imagine using an iterative process to solve system (7.35) and obtain the error vector. For this it might be useful to write the system in the iteration matrix form:

e^(k) = T.e^(k−1) + c   (7.36a)


Since the coefficient matrix A is the same for systems (7.32) and (7.35), the coefficients T_ij of the iteration matrix are equal to those of the chosen iteration method, i.e. the Jacobi method or Gauss–Seidel method without or with relaxation. The elements of the constant vector are, however, different:

c_i = r_i / a_ii   (7.36b)

In practice, if we tried to solve system (7.35) using the same iteration method

as we used for the original system (7.32) we would not find that this made any difference in terms of convergence rate. However, system (7.35) is important, because it shows how the error propagates from one iteration to the next. Moreover, its equivalent (7.36) highlights the crucial role played by the iteration matrix. As we saw earlier when we introduced the relaxation technique, the properties of the iteration matrix determine the rate of error propagation and, hence, the rate of convergence.
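The residual–error relationship (7.33)–(7.35) can be verified directly on a tiny system. In the sketch below (a made-up 2 × 2 system and an arbitrary intermediate iterate y, chosen purely for illustration) the residual is computed from y, the error equation A.e = r is solved exactly, and correcting y by e recovers the true solution:

```python
# Illustration of eqs. (7.33)-(7.35): for an intermediate solution y of
# A.x = b, the residual is r = b - A.y and the error e = x - y satisfies
# A.e = r.  The 2x2 system and the iterate y are made up for illustration.

A = [[3.0, 1.0],
     [1.0, 2.0]]
b = [5.0, 5.0]            # exact solution x = (1, 2)
y = [0.9, 1.8]            # some intermediate iterate

# residual vector r = b - A.y  (eq. 7.33 rearranged)
r = [b[i] - (A[i][0] * y[0] + A[i][1] * y[1]) for i in range(2)]

# solve A.e = r exactly (Cramer's rule for the 2x2 case)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
e = [(r[0] * A[1][1] - A[0][1] * r[1]) / det,
     (A[0][0] * r[1] - r[0] * A[1][0]) / det]

# correcting the intermediate solution by the error recovers the true one
x = [y[i] + e[i] for i in range(2)]
print(x)                  # -> [1.0, 2.0]
```

In a real multigrid cycle the error equation is of course not solved exactly on the fine grid; it is approximated cheaply on coarser grids, as described next.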

These properties have been extensively studied along with the mathematical behaviour of the error propagation as a function of the iterative technique, mesh size, discretisation scheme etc. It has been established that the solution error has components with a range of wavelengths that are multiples of the mesh size. Iteration methods cause rapid reduction of error components with short wavelengths up to a few multiples of the mesh size. However, long-wavelength components of the error tend to decay very slowly as the iteration count increases.

This error behaviour explains the observed trends in Figure 7.5. For the coarse mesh, the longest possible wavelengths of error components (i.e. those of the order of the domain size) are just within the short-wavelength range of the mesh and, hence, all error components reduce rapidly. On the finer meshes, however, the longest error wavelengths are progressively further outside the short-wavelength range for which decay is rapid.

Multigrid methods are designed to exploit these inherent differences in error behaviour and use iterations on meshes of different sizes. The short-wavelength errors are effectively reduced on the finest meshes, whereas the long-wavelength errors decrease rapidly on the coarsest meshes. Moreover, the computational cost of an iteration is larger on fine meshes than on coarse meshes, so the extra cost of iterations on the coarse meshes is offset by the benefit of a much improved convergence rate.
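The smoothing property that multigrid exploits can be demonstrated in a few lines. The sketch below (our own illustration, using the standard 1D model problem with matrix tridiag(−1, 2, −1), damped Jacobi sweeps with damping factor ω = 2/3, and a grid size chosen arbitrarily) applies ten sweeps to a long-wavelength and a short-wavelength error mode: the short-wavelength error is almost annihilated while the long-wavelength error is barely touched.

```python
# Demonstration of the smoothing property: damped Jacobi sweeps on the 1D
# model problem A.e = 0, A = tridiag(-1, 2, -1), reduce a short-wavelength
# error mode far faster than a long-wavelength one.  The damping factor
# omega = 2/3 and the grid size are our own illustrative choices.
import math

def damped_jacobi_sweep(e, omega=2.0 / 3.0):
    # plain Jacobi update for tridiag(-1,2,-1) is e_i <- (e_{i-1}+e_{i+1})/2;
    # blend it with the old value using the damping factor omega
    n = len(e)
    out = []
    for i in range(n):
        left = e[i - 1] if i > 0 else 0.0       # zero Dirichlet boundaries
        right = e[i + 1] if i < n - 1 else 0.0
        out.append((1.0 - omega) * e[i] + omega * 0.5 * (left + right))
    return out

n = 31
# error modes sin(k*pi*x): k = 1 is the longest wavelength, k = n the shortest
low = [math.sin(1 * math.pi * (i + 1) / (n + 1)) for i in range(n)]
high = [math.sin(n * math.pi * (i + 1) / (n + 1)) for i in range(n)]
for _ in range(10):
    low = damped_jacobi_sweep(low)
    high = damped_jacobi_sweep(high)
print(max(abs(v) for v in low))    # long-wavelength error: barely reduced
print(max(abs(v) for v in high))   # short-wavelength error: almost gone
```

On a grid twice as coarse the slowly decaying k = 1 mode has twice the relative frequency and is damped much more effectively, which is precisely why multigrid transfers the error equation A.e = r to coarser grids.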