
2.3 LU Decomposition Methods

Introduction

It is possible to show that any square matrix A can be expressed as a product of a lower triangular matrix L and an upper triangular matrix U:

A = LU                                                              (2.11)

The process of computing L and U for a given A is known as LU decomposition or LU factorization. LU decomposition is not unique (the combinations of L and U for a prescribed A are endless), unless certain constraints are placed on L or U. These constraints distinguish one type of decomposition from another. Three commonly used decompositions are listed in Table 2.2.

Name                        Constraints
Doolittle's decomposition   L_ii = 1, i = 1, 2, . . . , n
Crout's decomposition       U_ii = 1, i = 1, 2, . . . , n
Choleski's decomposition    L = U^T

Table 2.2
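The need for these constraints can be seen in a small numerical experiment. The sketch below (a Python/NumPy illustration with assumed example values, not code from this text) factors the same 2 × 2 matrix two ways: Doolittle's form puts the unit diagonal in L, Crout's puts it in U, and both products recover A.

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# Doolittle: unit diagonal in L
L_d = np.array([[1.0, 0.0],
                [1.5, 1.0]])    # L21 = A21/A11 = 6/4
U_d = np.array([[4.0, 3.0],
                [0.0, -1.5]])   # U22 = A22 - L21*U12

# Crout: unit diagonal in U
L_c = np.array([[4.0, 0.0],
                [6.0, -1.5]])
U_c = np.array([[1.0, 0.75],
                [0.0, 1.0]])

print(np.allclose(L_d @ U_d, A))  # True
print(np.allclose(L_c @ U_c, A))  # True
```

Without a constraint on the diagonals, either factorization (and infinitely many others) would qualify as "the" LU decomposition of A.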

After decomposing A, it is easy to solve the equations Ax = b, as pointed out in Art. 2.1. We first rewrite the equations as LUx = b. Upon using the notation Ux = y, the equations become

Ly = b

which can be solved for y by forward substitution. Then

Ux = y

will yield x by the back substitution process. The advantage of LU decomposition over the Gauss elimination method is that once A is decomposed, we can solve Ax = b for as many constant vectors b as we please. The cost of each additional solution is relatively small, since the forward and back substitution operations are much less time consuming than the decomposition process.

Doolittle’s Decomposition Method

Decomposition phase  Doolittle's decomposition is closely related to Gauss elimination. In order to illustrate the relationship, consider a 3 × 3 matrix A and assume that there exist triangular matrices

L = [ 1    0    0  ]      U = [ U11  U12  U13 ]
    [ L21  1    0  ]          [ 0    U22  U23 ]
    [ L31  L32  1  ]          [ 0    0    U33 ]

such that A = LU. After completing the multiplication on the right-hand side, we get

A = [ U11      U12              U13                    ]
    [ U11 L21  U12 L21 + U22    U13 L21 + U23          ]   (2.12)
    [ U11 L31  U12 L31 + U22 L32  U13 L31 + U23 L32 + U33 ]

Let us now apply Gauss elimination to Eq. (2.12). The first pass of the elimination procedure consists of choosing the first row as the pivot row and applying the elementary operations

row 2 ← row 2 − L21 × row 1 (eliminates A21)
row 3 ← row 3 − L31 × row 1 (eliminates A31)

The result is

A' = [ U11  U12      U13           ]
     [ 0    U22      U23           ]
     [ 0    U22 L32  U23 L32 + U33 ]

In the next pass we take the second row as the pivot row and utilize the operation

row 3 ← row 3 − L32 × row 2 (eliminates A32)

The foregoing illustration reveals two important features of Doolittle's decomposition:

• The matrix U is identical to the upper triangular matrix that results from Gauss elimination.
• The off-diagonal elements of L are the pivot equation multipliers used during Gauss elimination; that is, L_ij is the multiplier that eliminated A_ij.

It is usual practice to store the multipliers in the lower triangular portion of the coefficient matrix, replacing the coefficients as they are eliminated (L_ij replacing A_ij). The diagonal elements of L do not have to be stored, since it is understood that each of them is unity. The final form of the coefficient matrix would thus be the following mixture of L and U:

[L\U] = [ U11  U12  U13 ]
        [ L21  U22  U23 ]
        [ L31  L32  U33 ]

The algorithm for Doolittle's decomposition is thus identical to the Gauss elimination procedure in gauss, except that each multiplier λ is now stored in the lower triangular portion of A.

In this version of LU decomposition the original A is destroyed and replaced by its decomposed form [L\U].

function A = LUdec(A)
% LU decomposition of matrix A; returns A = [L\U].
% USAGE: A = LUdec(A)

n = size(A,1);
for k = 1:n-1
    for i = k+1:n
        if A(i,k) ~= 0.0
            lambda = A(i,k)/A(k,k);
            A(i,k+1:n) = A(i,k+1:n) - lambda*A(k,k+1:n);
            A(i,k) = lambda;
        end
    end
end
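For readers who want to experiment outside MATLAB, here is a line-for-line NumPy transcription of LUdec (a sketch only; the function name lu_dec and the test matrix are assumptions, not from this text). It uses the same in-place [L\U] storage scheme.

```python
import numpy as np

def lu_dec(A):
    """Doolittle's decomposition; returns a copy of A overwritten with [L\\U]."""
    A = np.array(A, dtype=float)   # work on a copy
    n = A.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            if A[i, k] != 0.0:
                lam = A[i, k] / A[k, k]
                A[i, k+1:] -= lam * A[k, k+1:]
                A[i, k] = lam      # store the multiplier in place of A[i, k]
    return A

# Verify on an assumed sample matrix by rebuilding A from the stored factors
A = np.array([[ 4.0, -2.0,  1.0],
              [-2.0,  4.0, -2.0],
              [ 1.0, -2.0,  4.0]])
LU = lu_dec(A)
L = np.tril(LU, -1) + np.eye(3)    # restore the understood unit diagonal
U = np.triu(LU)
print(np.allclose(L @ U, A))       # True
```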

Solution phase  Consider now the procedure for solving Ly = b by forward substitution. The scalar form of the equations is (recall that L_ii = 1)

y1 = b1
L21 y1 + y2 = b2
  ⋮
Lk1 y1 + Lk2 y2 + ··· + L_k,k−1 y_k−1 + yk = bk
  ⋮

Solving the kth equation for yk yields

yk = bk − (Lk1 y1 + Lk2 y2 + ··· + L_k,k−1 y_k−1)

Letting y overwrite b, we obtain the forward substitution algorithm:

for k = 2:n
    b(k) = b(k) - A(k,1:k-1)*b(1:k-1);
end

The back substitution phase for solving Ux = y is identical to that used in the Gauss elimination method.

This function carries out the solution phase (forward and back substitutions). It is assumed that the original coefficient matrix has been decomposed, so that the input is A = [L\U]. The contents of b are replaced by y during forward substitution. Similarly, back substitution overwrites y with the solution x.

function x = LUsol(A,b)
% Solves L*U*x = b, where A contains both L and U;
% that is, A has the form [L\U].
% USAGE: x = LUsol(A,b)

n = size(A,1);
for k = 2:n
    b(k) = b(k) - A(k,1:k-1)*b(1:k-1);
end
for k = n:-1:1
    b(k) = (b(k) - A(k,k+1:n)*b(k+1:n))/A(k,k);
end
x = b;
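A NumPy transcription of LUsol may again be helpful (a sketch; the function name lu_sol and the sample [L\U] values below are assumptions for illustration). Note how b is overwritten first with y and then with x.

```python
import numpy as np

def lu_sol(LU, b):
    """Solve A x = b given LU = [L\\U] from Doolittle's decomposition."""
    b = np.array(b, dtype=float)
    n = LU.shape[0]
    for k in range(1, n):              # forward substitution: L y = b
        b[k] -= LU[k, :k] @ b[:k]
    for k in range(n - 1, -1, -1):     # back substitution: U x = y
        b[k] = (b[k] - LU[k, k+1:] @ b[k+1:]) / LU[k, k]
    return b

# Assumed [L\U] storage for a sample 3x3 system
LU = np.array([[1.0,  4.0,  1.0],
               [1.0,  2.0, -2.0],
               [2.0, -4.5, -9.0]])
x = lu_sol(LU, [7.0, 13.0, 5.0])
print(x)   # [ 5.  1. -2.]
```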

Choleski’s Decomposition

Choleski's decomposition A = LL^T has two limitations:

• Since the matrix product LL^T is symmetric, Choleski's decomposition requires A to be symmetric.
• The decomposition process involves taking square roots of certain combinations of the elements of A. It can be shown that square roots of negative numbers can be avoided only if A is positive definite.

Although the number of long operations in all the decomposition methods is about the same, Choleski's decomposition is not a particularly popular means of solving simultaneous equations, mainly due to the restrictions listed above. We study it here because it is invaluable in certain other applications (e.g., in the transformation of eigenvalue problems).

Let us start by looking at Choleski's decomposition

A = LL^T                                                            (2.15)

of a 3 × 3 matrix:

[ A11  A12  A13 ]   [ L11  0    0   ][ L11  L21  L31 ]
[ A21  A22  A23 ] = [ L21  L22  0   ][ 0    L22  L32 ]
[ A31  A32  A33 ]   [ L31  L32  L33 ][ 0    0    L33 ]

After completing the matrix multiplication on the right-hand side, we get

[ A11  A12  A13 ]   [ L11^2    L11 L21            L11 L31              ]
[ A21  A22  A23 ] = [ L11 L21  L21^2 + L22^2      L21 L31 + L22 L32    ]   (2.16)
[ A31  A32  A33 ]   [ L11 L31  L21 L31 + L22 L32  L31^2 + L32^2 + L33^2 ]

Note that the right-hand-side matrix is symmetric, as pointed out before. Equating the matrices A and LL^T element by element, we obtain six independent equations (due to symmetry) in the six unknown components of L.

Consider the lower triangular portion of each matrix in Eq. (2.16) (the upper triangular portion would do as well). By equating the elements in the first column, starting with the first row and proceeding downward, we can compute L11, L21, and L31 in that order:

A11 = L11^2       L11 = √A11
A21 = L11 L21     L21 = A21/L11
A31 = L11 L31     L31 = A31/L11

The second column, starting with the second row, yields L22 and L32:

A22 = L21^2 + L22^2          L22 = √(A22 − L21^2)
A32 = L21 L31 + L22 L32      L32 = (A32 − L21 L31)/L22

Finally, the third column, third row gives us L33:

A33 = L31^2 + L32^2 + L33^2  L33 = √(A33 − L31^2 − L32^2)
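The column-by-column order above can be traced numerically. The snippet below (a Python illustration; the sample symmetric positive definite matrix is an assumption, not from this text) applies the six formulas in exactly that sequence and checks that LL^T reproduces A.

```python
import math

# An assumed symmetric positive definite 3x3 matrix
A = [[9.0, 3.0, 0.0],
     [3.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]

L11 = math.sqrt(A[0][0])                     # first column
L21 = A[1][0] / L11
L31 = A[2][0] / L11
L22 = math.sqrt(A[1][1] - L21**2)            # second column
L32 = (A[2][1] - L21*L31) / L22
L33 = math.sqrt(A[2][2] - L31**2 - L32**2)   # third column

L = [[L11, 0.0, 0.0], [L21, L22, 0.0], [L31, L32, L33]]

# check: (L L^T)_ij must equal A_ij
for i in range(3):
    for j in range(3):
        assert abs(sum(L[i][k]*L[j][k] for k in range(3)) - A[i][j]) < 1e-12
print(L11, L21, L31, L22, L32, L33)
```

Each formula uses only entries of L computed earlier in the sequence, which is what makes the sweep down the columns possible.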

We can now extrapolate the results for an n × n matrix. We observe that a typical element in the lower triangular portion of LL^T is of the form

(LL^T)_ij = L_i1 L_j1 + L_i2 L_j2 + ··· + L_ij L_jj = Σ_{k=1}^{j} L_ik L_jk,   i ≥ j

Equating this term to the corresponding element of A yields

A_ij = Σ_{k=1}^{j} L_ik L_jk,   i = j, j + 1, . . . , n,  j = 1, 2, . . . , n   (2.17)

The range of indices shown limits the elements to the lower triangular part. For the first column (j = 1), we obtain from Eq. (2.17)

L11 = √A11        L_i1 = A_i1/L11,   i = 2, 3, . . . , n   (2.18)

Proceeding to other columns, we observe that the unknown in Eq. (2.17) is L_ij (the other elements of L appearing in the equation have already been computed). Taking the term containing L_ij outside the summation in Eq. (2.17), we obtain

A_ij = Σ_{k=1}^{j−1} L_ik L_jk + L_ij L_jj

If i = j (a diagonal term), the solution is

L_jj = √(A_jj − Σ_{k=1}^{j−1} L_jk^2),   j = 2, 3, . . . , n   (2.19)

For a nondiagonal term we get

L_ij = (A_ij − Σ_{k=1}^{j−1} L_ik L_jk) / L_jj,   j = 2, 3, . . . , n − 1,  i = j + 1, j + 2, . . . , n   (2.20)

Note that in Eqs. (2.19) and (2.20) A_ij appears only in the formula for L_ij. Therefore, once L_ij has been computed, A_ij is no longer needed. This makes it possible to write the elements of L over the lower triangular portion of A as they are computed. The elements above the principal diagonal of A will remain untouched. At the conclusion of decomposition, L is extracted with the MATLAB command tril(A). If a negative L_jj^2 is encountered during decomposition, an error message is printed and the program is terminated.

function L = choleski(A)
% Computes L in Choleski's decomposition A = LL'.
% USAGE: L = choleski(A)

n = size(A,1);
for j = 1:n
    temp = A(j,j) - dot(A(j,1:j-1),A(j,1:j-1));
    if temp < 0.0
        error('Matrix is not positive definite')
    end
    A(j,j) = sqrt(temp);
    for i = j+1:n
        A(i,j) = (A(i,j) - dot(A(i,1:j-1),A(j,1:j-1)))/A(j,j);
    end
end
L = tril(A);
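A NumPy transcription of choleski is sketched below (the sample matrix is assumed for illustration). It follows the same storage scheme: L is written over the lower triangle of a working copy of A.

```python
import numpy as np

def choleski(A):
    """Choleski's decomposition A = L L^T; L overwrites the lower triangle."""
    A = np.array(A, dtype=float)   # work on a copy
    n = A.shape[0]
    for j in range(n):
        temp = A[j, j] - A[j, :j] @ A[j, :j]
        if temp < 0.0:
            raise ValueError('Matrix is not positive definite')
        A[j, j] = np.sqrt(temp)
        for i in range(j + 1, n):
            A[i, j] = (A[i, j] - A[i, :j] @ A[j, :j]) / A[j, j]
    return np.tril(A)

# An assumed symmetric positive definite test matrix
A = np.array([[ 4.0, -2.0,  2.0],
              [-2.0,  2.0, -4.0],
              [ 2.0, -4.0, 11.0]])
L = choleski(A)
print(np.allclose(L @ L.T, A))   # True
```

For j = 0 the slice A[j, :j] is empty, so the dot product contributes 0 and the first column reduces to Eq. (2.18), just as in the MATLAB version.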

We could also write the algorithm for the forward and back substitutions that are necessary in the solution of Ax = b. But since Choleski's decomposition has no advantages over Doolittle's decomposition in the solution of simultaneous equations, we will skip that part.

EXAMPLE 2.5 Use Doolittle's decomposition method to solve the equations Ax = b, where

A = [ 1   4   1 ]          b = [  7 ]
    [ 1   6  −1 ]              [ 13 ]
    [ 2  −1   2 ]              [  5 ]

Solution We first decompose A by Gauss elimination. The first pass consists of the elementary operations

row 2 ← row 2 − 1 × row 1 (eliminates A21)
row 3 ← row 3 − 2 × row 1 (eliminates A31)

Storing the multipliers L 21 = 1 and L 31 = 2 in place of the eliminated terms, we obtain

A' = [ 1   4   1 ]
     [ 1   2  −2 ]
     [ 2  −9   0 ]

The second pass of Gauss elimination uses the operation

row 3 ← row 3 − (−4.5) × row 2 (eliminates A32)

Storing the multiplier L 32 = −4.5 in place of A 32 , we get

A'' = [L\U] = [ 1    4     1 ]
              [ 1    2    −2 ]
              [ 2  −4.5   −9 ]

The decomposition is now complete, with

L = [ 1    0    0 ]     U = [ 1  4   1 ]
    [ 1    1    0 ]         [ 0  2  −2 ]
    [ 2  −4.5   1 ]         [ 0  0  −9 ]

Solution of Ly = b by forward substitution comes next. The augmented coefficient form of the equations is

[L  b] = [ 1    0    0    7 ]
         [ 1    1    0   13 ]
         [ 2  −4.5   1    5 ]

The solution is

y1 = 7
y2 = 13 − y1 = 13 − 7 = 6
y3 = 5 − 2y1 + 4.5y2 = 5 − 2(7) + 4.5(6) = 18

Finally, the equations Ux = y, or

[ 1  4   1 ][ x1 ]   [  7 ]
[ 0  2  −2 ][ x2 ] = [  6 ]
[ 0  0  −9 ][ x3 ]   [ 18 ]

are solved by back substitution. This yields

x3 = 18/(−9) = −2
x2 = (6 + 2x3)/2 = (6 + 2(−2))/2 = 1
x1 = 7 − 4x2 − x3 = 7 − 4(1) − (−2) = 5
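The hand computation can be cross-checked numerically (a NumPy check of the worked system, with the data written out explicitly so the snippet stands on its own):

```python
import numpy as np

A = np.array([[1.0,  4.0,  1.0],
              [1.0,  6.0, -1.0],
              [2.0, -1.0,  2.0]])
b = np.array([7.0, 13.0, 5.0])

x = np.linalg.solve(A, b)
print(np.allclose(x, [5.0, 1.0, -2.0]))   # True
```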

EXAMPLE 2.6 Compute Choleski's decomposition of the matrix

A = [  4  −2    2 ]
    [ −2   2   −4 ]
    [  2  −4   11 ]

Solution First we note that A is symmetric. Therefore, Choleski’s decomposition is applicable, provided that the matrix is also positive definite. An a priori test for posi- tive definiteness is not needed, since the decomposition algorithm contains its own test: if the square root of a negative number is encountered, the matrix is not positive definite and the decomposition fails.

Substituting the given matrix for A in Eq. (2.16), we obtain

[  4  −2    2 ]   [ L11^2    L11 L21            L11 L31              ]
[ −2   2   −4 ] = [ L11 L21  L21^2 + L22^2      L21 L31 + L22 L32    ]
[  2  −4   11 ]   [ L11 L31  L21 L31 + L22 L32  L31^2 + L32^2 + L33^2 ]

Equating the elements in the lower (or upper) triangular portions yields

L11 = √4 = 2
L21 = −2/L11 = −2/2 = −1
L31 = 2/L11 = 2/2 = 1
L22 = √(2 − L21^2) = √(2 − 1^2) = 1
L32 = (−4 − L21 L31)/L22 = (−4 − (−1)(1))/1 = −3
L33 = √(11 − L31^2 − L32^2) = √(11 − 1^2 − (−3)^2) = 1

Therefore,

L = [  2   0   0 ]
    [ −1   1   0 ]
    [  1  −3   1 ]

The result can easily be verified by performing the multiplication LL^T.

EXAMPLE 2.7 Solve AX = B with Doolittle's decomposition and compute |A|, where

A = [  3  −1   4 ]          B = [ 6  −4 ]
    [ −2   0   5 ]              [ 3   2 ]
    [  7   2  −2 ]              [ 7  −5 ]

Solution In the program below the coefficient matrix A is first decomposed by calling LUdec . Then LUsol is used to compute the solution one vector at a time.

% Example 2.7 (Doolittle's decomposition)
A = [3 -1 4; -2 0 5; 7 2 -2];
B = [6 -4; 3 2; 7 -5];
A = LUdec(A);
det = prod(diag(A))
for i = 1:size(B,2)
    X(:,i) = LUsol(A,B(:,i));
end
X

Here are the results:

>> det =
  -77.0000

X =
    1.0000   -1.0000
    1.0000    1.0000
    1.0000    0.0000

EXAMPLE 2.8 Test the function choleski by decomposing

A = [ 1.44  −0.36   5.52   0.00 ]
    [−0.36  10.33  −7.78   0.00 ]
    [ 5.52  −7.78  28.40   9.00 ]
    [ 0.00   0.00   9.00  61.00 ]

Solution

% Example 2.8 (Choleski decomposition)
A = [1.44 -0.36  5.52  0.00;
    -0.36 10.33 -7.78  0.00;
     5.52 -7.78 28.40  9.00;
     0.00  0.00  9.00 61.00];
L = choleski(A)
Check = L*L'   % Verify the result
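numpy.linalg.cholesky provides an independent check (a Python cross-check; the matrix is the one shown in the code above, and the factor comes out with conveniently round entries):

```python
import numpy as np

A = np.array([[ 1.44, -0.36,  5.52,  0.00],
              [-0.36, 10.33, -7.78,  0.00],
              [ 5.52, -7.78, 28.40,  9.00],
              [ 0.00,  0.00,  9.00, 61.00]])

L = np.linalg.cholesky(A)   # lower triangular factor, A = L @ L.T
print(np.allclose(L, [[ 1.2,  0.0, 0.0, 0.0],
                      [-0.3,  3.2, 0.0, 0.0],
                      [ 4.6, -2.0, 1.8, 0.0],
                      [ 0.0,  0.0, 5.0, 6.0]]))   # True
print(np.allclose(L @ L.T, A))                     # True
```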

PROBLEM SET 2.1

1. By evaluating the determinant, classify the following matrices as singular, ill-conditioned or well-conditioned.

2. Given the LU decomposition A = LU, determine A and |A|.

(a) L = [ 1   0   0 ]      U = [ 1  2   4 ]
        [ 1   1   0 ]          [ 0  3  21 ]
        [ 1  5/3  1 ]          [ 0  0   0 ]

(b) L = [  2   0   0 ]     U = [ 2  −1   1 ]
        [ −1   1   0 ]         [ 0   1  −3 ]
        [  1  −3   1 ]         [ 0   0   1 ]

3. Utilize the results of LU decomposition

to solve Ax = b, where b^T = [ 1  −1  2 ].

4. Use Gauss elimination to solve the equations Ax = b, where

5. Solve the equations AX = B by Gauss elimination, where

6. Solve the equations Ax = b by Gauss elimination, where

Hint: reorder the equations before solving.

7. Find L and U so that

A = LU = [  4  −1   0 ]
         [ −1   4  −1 ]
         [  0  −1   4 ]

using (a) Doolittle's decomposition; (b) Choleski's decomposition.

8. Use Doolittle’s decomposition method to solve Ax = b, where

9. Solve the equations Ax = b by Doolittle's decomposition method, where

⎢ A= ⎥ ⎣ −1.98

b= ⎣ −0.73 ⎦

10. Solve the equations AX = B by Doolittle’s decomposition method, where

11. Solve the equations Ax = b by Choleski’s decomposition method, where

A= ⎥

b= ⎣ 3/2 ⎦

12. Solve the equations

−16 28 18 x 3 −2.3 by Doolittle’s decomposition method.

13. Determine L that results from Choleski’s decomposition of the diagonal matrix

14. Modify the function gauss so that it will work with m constant vectors. Test the program by solving AX = B, where

15. A well-known example of an ill-conditioned matrix is the Hilbert matrix

A_ij = 1/(i + j − 1)

Write a program that specializes in solving the equations Ax = b by Doolittle's decomposition method, where A is the Hilbert matrix of arbitrary size n × n, and

b_i = Σ_{j=1}^{n} A_ij

The program should have no input apart from n. By running the program, determine the largest n for which the solution is within 6 significant figures of the exact solution

x = [ 1  1  1  ···  1 ]^T

(the results depend on the software and the hardware used).

16. Write a function for the solution phase of Choleski’s decomposition method. Test the function by solving the equations Ax = b, where

Use the function choleski for the decomposition phase.

17. Determine the coefficients of the polynomial y = a0 + a1x + a2x^2 + a3x^3 that passes through the points (0, 10), (1, 35), (3, 31) and (4, 2).

18. Determine the 4th degree polynomial y(x) that passes through the points (0, −1), (1, 1), (3, 3), (5, 2) and (6, −2).

19. Find the 4th degree polynomial y(x) that passes through the points (0, 1), (0.75, −0.25) and (1, 1), and has zero curvature at (0, 1) and (1, 1).

20. Solve the equations Ax = b, where

By computing |A| and Ax, comment on the accuracy of the solution.
