Journal of Computational and Applied Mathematics 106 (1999) 117–129
Semiconvergence of extrapolated iterative methods for singular
linear systems
Yongzhong Song
Department of Mathematics, Nanjing Normal University, Nanjing 210097, People’s Republic of China
E-mail address: [email protected] (Y. Song)
Received 4 July 1997; received in revised form 30 December 1998
Abstract
In this paper, we discuss convergence of the extrapolated iterative methods for solving singular linear systems. A general
principle of extrapolation is presented. The semiconvergence of an extrapolated method induced by a regular splitting and
a nonnegative splitting is proved whenever the coefficient matrix A is a singular M-matrix with ‘property c’ and an
irreducible singular M-matrix, respectively. Since the (generalized, block) JOR and AOR methods are, respectively, the
extrapolated methods of the (generalized, block) Jacobi and SOR methods, the semiconvergence of the (generalized,
block) JOR and AOR methods for solving general singular systems is proved. Furthermore, the semiconvergence of the
extrapolated power method and the (block) JOR, AOR and SOR methods for solving Markov chains is discussed.
© 1999 Elsevier Science B.V. All rights reserved.
MSC: 65F10
Keywords: Singular linear system; Markov chain; Extrapolated iterative method; AOR method; JOR method;
Semiconvergence
1. Introduction
Let us consider a system of n equations
Ax = b,     (1.1)
where A ∈ C^{n×n} is singular, b, x ∈ C^n with b known and x unknown. If A is split into
A = M − N,     (1.2)
where M is nonsingular, then a linear stationary iterative formula for solving the system (1.1) can
be described as follows:
x_{k+1} = T x_k + M^{-1} b,   k = 0, 1, 2, …,     (1.3)
where T = M^{-1}N is the iteration matrix.
The convergence of the iterative method (1.3) has been investigated in many papers and books.
It is well known that for nonsingular systems the iterative method (1.3) is convergent if and only if
the spectral radius ρ(T) is less than 1. But for singular systems we have 1 ∈ σ(T) and ρ(T) ≥ 1,
so that one can require only the semiconvergence of the iterative method (1.3), which means that
for every x_0 the sequence defined by (1.3) converges to a solution of Eq. (1.1). By [2] the iterative
method (1.3) is semiconvergent if and only if the following three conditions are satisfied:
• ρ(T) = 1.
• Elementary divisors associated with λ = 1 ∈ σ(T) are linear, i.e.,
rank(I − T)^2 = rank(I − T),
or equivalently
index(I − T) = 1.
• If λ ∈ σ(T) with |λ| = 1, then λ = 1, i.e.,
ϑ(T) ≡ max{|λ| : λ ∈ σ(T), λ ≠ 1} < 1,
where σ(T) denotes the spectrum of T.
In this case, the associated convergence factor is ϑ(T).
We call a matrix T semiconvergent provided it satisfies the three conditions above.
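These three conditions can also be checked directly for a given iteration matrix. The following sketch is ours, not part of the paper; it assumes NumPy, a user-chosen tolerance, and the hypothetical helper name is_semiconvergent:

    import numpy as np

    def is_semiconvergent(T, tol=1e-10):
        """Check the three semiconvergence conditions for an iteration matrix T."""
        n = T.shape[0]
        I = np.eye(n)
        eigvals = np.linalg.eigvals(T)

        rho = max(abs(eigvals))                      # spectral radius rho(T)
        cond1 = abs(rho - 1.0) < tol                 # rho(T) = 1

        B = I - T                                    # index(I - T) = 1, i.e.
        cond2 = (np.linalg.matrix_rank(B @ B, tol)   # rank((I - T)^2) = rank(I - T)
                 == np.linalg.matrix_rank(B, tol))

        others = [abs(lam) for lam in eigvals if abs(lam - 1.0) > tol]
        theta = max(others) if others else 0.0       # theta(T)
        cond3 = theta < 1.0 - tol                    # theta(T) < 1 (up to rounding)

        return cond1 and cond2 and cond3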
For ω ∈ C the extrapolated method of (1.3) can be defined by
x_{k+1} = T_ω x_k + ω M^{-1} b,   k = 0, 1, 2, …,     (1.4)
where
T_ω = (1 − ω)I + ωT     (1.5)
is the iteration matrix and ω is called the extrapolation parameter (cf. [9]). Clearly, if ω = 0 then
T_0 = I and the extrapolated method (1.4) is meaningless. Thus, in the following we assume ω ≠ 0.
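For illustration only, a minimal sketch of one run of (1.4) follows (ours, assuming NumPy and a user-supplied splitting A = M − N); it uses that T_ω x + ωM^{-1}b = (1 − ω)x + ω(Tx + M^{-1}b):

    import numpy as np

    def extrapolated_iteration(T, M, b, omega, x0, num_steps=100):
        """Iterate x_{k+1} = T_omega x_k + omega M^{-1} b with T_omega = (1 - omega) I + omega T."""
        Minv_b = np.linalg.solve(M, b)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(num_steps):
            # equivalent to T_omega @ x + omega * Minv_b
            x = (1.0 - omega) * x + omega * (T @ x + Minv_b)
        return x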
Now, we split A into
A = D − C_L − C_U.     (1.6)
We assume throughout that D in Eq. (1.6) is nonsingular. But we do not in general assume that
D is diagonal, or that C_L and C_U are triangular. Associated with the splitting (1.6), the generalized
Jacobi iteration matrix J can be expressed as
J = D^{-1}(C_L + C_U) = L + U
with L = D^{-1}C_L, U = D^{-1}C_U. For any ω ≠ 0 and γ ∈ C the generalized AOR method [6] (GAOR
method [17]) for solving Eq. (1.1) is defined as
x_{k+1} = L_{γ,ω} x_k + ω(I − γL)^{-1} D^{-1} b,   k = 0, 1, 2, …,
where
L_{γ,ω} = (I − γL)^{-1}[(1 − ω)I + (ω − γ)L + ωU]
is the GAOR iteration matrix.
When the matrix D in the splitting (1.6) is (block) diagonal and C_L, C_U are, respectively, strictly
(block) lower and strictly (block) upper triangular matrices, the GAOR method is the (block) AOR
method (the block version is denoted BAOR).
When (γ, ω) is equal to (ω, ω), (1, 1), (0, ω) and (0, 1), the (generalized, block) AOR method
reduces, respectively, to the (generalized, block) SOR, Gauss–Seidel, JOR [7,20] and Jacobi iterative
methods, with the iteration matrices L_ω, L_1, J_ω and J.
It is easy to check that the (generalized, block) JOR method is an extrapolated method of the
(generalized, block) Jacobi method with extrapolation parameter ω and, for γ ≠ 0, the (generalized,
block) AOR method is an extrapolated method of the (generalized, block) SOR method with
relaxation factor γ and extrapolation parameter ω/γ, namely
J_ω = (1 − ω)I + ωJ,   L_{γ,ω} = (1 − ω/γ)I + (ω/γ)L_γ.
Because of this the JOR method is also called the extrapolated Jacobi method (cf. [3]) and the
AOR method is called the extrapolated SOR (ESOR) method (cf. [12]). Furthermore, for γ = 1 the
AOR method is called the extrapolated Gauss–Seidel (EGS) method (cf. [12]).
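As a numerical illustration of these relations (our own sketch, not part of the paper; the helper name, the random test data and the parameter values are assumptions), the matrices J, L_γ and L_{γ,ω} can be formed from a splitting A = D − C_L − C_U and the extrapolation identity checked directly:

    import numpy as np

    def gaor_matrices(D, CL, CU, gamma, omega):
        """Return J, the GSOR matrix L_gamma and the GAOR matrix L_{gamma,omega}."""
        n = D.shape[0]
        I = np.eye(n)
        L = np.linalg.solve(D, CL)      # L = D^{-1} C_L
        U = np.linalg.solve(D, CU)      # U = D^{-1} C_U
        J = L + U                       # generalized Jacobi iteration matrix
        M = I - gamma * L
        L_gamma = np.linalg.solve(M, (1.0 - gamma) * I + gamma * U)
        L_gamma_omega = np.linalg.solve(M, (1.0 - omega) * I + (omega - gamma) * L + omega * U)
        return J, L_gamma, L_gamma_omega

    # check the identity L_{gamma,omega} = (1 - omega/gamma) I + (omega/gamma) L_gamma on random data
    rng = np.random.default_rng(0)
    D = np.diag(rng.uniform(1.0, 2.0, 4))
    CL = np.tril(rng.uniform(0.0, 0.2, (4, 4)), -1)
    CU = np.triu(rng.uniform(0.0, 0.2, (4, 4)), 1)
    gamma, omega = 0.8, 0.6
    J, L_gamma, L_go = gaor_matrices(D, CL, CU, gamma, omega)
    assert np.allclose(L_go, (1.0 - omega / gamma) * np.eye(4) + (omega / gamma) * L_gamma)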
Extrapolated methods for solving nonsingular systems have been discussed in many papers.
In this paper, we discuss convergence of the extrapolated iterative methods for solving singular
linear systems. In Section 2 a general principle of extrapolation is presented. The semiconvergence of
an extrapolated method induced by a regular splitting and a nonnegative splitting is proved whenever
the coefficient matrix A is a singular M-matrix with ‘property c’ and an irreducible singular M-matrix,
respectively. In Section 3, the semiconvergence of the (generalized, block) JOR and (generalized,
block) AOR methods, which are respectively the extrapolated methods of the (generalized, block)
Jacobi and (generalized, block) SOR methods, for solving general singular systems is proved. In
Section 4 the semiconvergence of the extrapolated power method and the (block) JOR, AOR and SOR
methods for solving Markov chains is discussed.
For convenience we shall now briefly explain some of the terminology used in the next sections.
We write B ≥ C (B > C) if b_ij ≥ c_ij (b_ij > c_ij) holds for all entries of B = (b_ij) and C = (c_ij), calling
B nonnegative if B ≥ 0. These definitions can be applied immediately to vectors by identifying them
with n × 1 matrices. We denote the spectrum and the spectral radius of B by σ(B) and ρ(B),
respectively. Moreover,
ϑ(B) = max{|λ| : λ ∈ σ(B), λ ≠ 1}.
Definition 1.1 (Berman and Plemmons [2], Plemmons [15]). A matrix B = (b_ij) ∈ R^{n×n} is called a
singular M-matrix if B can be expressed in the form
B = sI − C,   s > 0,   C ≥ 0     (1.7)
and s = ρ(C).
A singular M-matrix B is said to have ‘property c’ if it can be split into (1.7) and the matrix
T = C/s is semiconvergent.
Definition 1.2. Let B ∈ C^{n×n}. The splitting B = M − N is called:
(i) regular (cf. [19]) if M^{-1} ≥ 0 and N ≥ 0;
(ii) weak regular (cf. [2,14]) if M^{-1} ≥ 0 and M^{-1}N ≥ 0;
(iii) nonnegative (cf. [18]) if M^{-1}N ≥ 0.
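For a concrete splitting these classes can be tested entrywise; the sketch below is ours (with an arbitrary tolerance to absorb rounding) and returns every label of Definition 1.2 that applies to B = M − N:

    import numpy as np

    def classify_splitting(M, N, tol=1e-12):
        """Return the labels of Definition 1.2 satisfied by the splitting B = M - N."""
        Minv = np.linalg.inv(M)
        MinvN = Minv @ N
        labels = []
        if (Minv >= -tol).all() and (N >= -tol).all():
            labels.append("regular")
        if (Minv >= -tol).all() and (MinvN >= -tol).all():
            labels.append("weak regular")
        if (MinvN >= -tol).all():
            labels.append("nonnegative")
        return labels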
2. A general principle of extrapolation
For nonsingular systems the convergence of the extrapolated method has been discussed in
many papers and books. For singular systems it follows from Eq. (1.5) that
μ = 1 − ω + ωλ     (2.1)
with μ ∈ σ(T_ω) and λ ∈ σ(T). Moreover, for μ and λ satisfying Eq. (2.1) we have
μ = 1 iff λ = 1.
Since
I − T_ω = ω(I − T),
the eigenvalue μ = 1 of T_ω has the same multiplicity as the eigenvalue λ = 1 of T, and
index(I − T_ω) = index(I − T).
Now we have proved the following statement.
Lemma 2.1. For the singular system (1.1) the following results are true.
(a) 1 ∈ σ(T), 1 ∈ σ(T_ω) and index(I − T_ω) = index(I − T).
(b) For μ and λ satisfying (2.1), λ ∈ σ(T) − {1} iff μ ∈ σ(T_ω) − {1}.
(c) If the extrapolated method (1.4) is semiconvergent, then index(I − T) = 1.
Now we describe the extrapolation theorem.
Theorem 2.2. Let the iteration matrix T in Eq. (1.3) satisfy ρ(T) = 1 and index(I − T) = 1. Then
the extrapolated method (1.4) is semiconvergent provided
0 < ω < 2/(1 + ϑ(T))   (≥ 1).
Proof. Let λ = a + ib ∈ σ(T) − {1} and let μ satisfy Eq. (2.1). Since |λ| ≤ ϑ(T) ≤ 1, either
|λ| < 1 holds or |λ| = 1 with −1 ≤ a < 1.
When ϑ(T) = 1 we have 0 < ω < 1. Then for |λ| < 1 one gets
|μ| ≤ |1 − ω| + ω|λ| = 1 − ω + ω|λ| < 1.
While for |λ| = 1 with −1 ≤ a < 1 we derive a^2 + b^2 = 1 and
|μ|^2 = (1 − ω + ωa)^2 + (ωb)^2
      = (1 − ω)^2 + 2ωa(1 − ω) + ω^2 a^2 + ω^2 b^2
      < (1 − ω)^2 + 2ω(1 − ω) + ω^2
      = 1.
For the case when ϑ(T) < 1 we have |λ| ≤ ϑ(T) < 1 and, therefore,
|μ| ≤ |1 − ω| + ω|λ| ≤ |1 − ω| + ωϑ(T)
    = 1 − ω + ωϑ(T) < 1,                                    if 0 < ω ≤ 1,
    = ω − 1 + ωϑ(T) < 2(1 + ϑ(T))/(1 + ϑ(T)) − 1 = 1,       if 1 ≤ ω < 2/(1 + ϑ(T)).     (2.2)
This shows that, in any case,
ϑ(T_ω) < 1
holds.
On the other hand, by Lemma 2.1 we obtain
index(I − T_ω) = 1.
Now, the semiconvergence follows immediately.
This result extends the extrapolation theorem given in [9] to the singular systems. Here it is not
necessary to assume that the iterative method (1.3) is semiconvergent.
Example 2.3. Let
A = [  1   0  −1
       0   2  −1
      −1   0   1 ].
Then A is singular. For
M = [ 1  0  0
      0  2  0
      0  0  1 ],
we have A = M − N with
N = [ 0  0  1
      0  0  1
      1  0  0 ]
and
T = M^{-1}N = [ 0  0   1
                0  0  1/2
                1  0   0 ].
It is easy to obtain that
σ(T) = {1, 0, −1}   and   ϑ(T) = 1,
which implies that the iterative method (1.3) is not semiconvergent. While, in this case, we have
T_ω = (1 − ω)I + ωT,   σ(T_ω) = {1, 1 − ω, 1 − 2ω}.
For 0 < ω < 1, ρ(T_ω) = 1 holds and
|1 − ω| = 1 − ω < 1,
|1 − 2ω| = 1 − 2ω for 0 < ω ≤ 1/2 and |1 − 2ω| = 2ω − 1 for 1/2 ≤ ω < 1, so in both cases |1 − 2ω| < 1.
Hence, ϑ(T_ω) < 1 and the extrapolated method (1.4) is semiconvergent.
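The example is easy to reproduce numerically; the short script below is our own check (using NumPy and ω = 0.7 as an arbitrary value in (0, 1)), printing σ(T) ≈ {−1, 0, 1} and a value ϑ(T_ω) < 1:

    import numpy as np

    A = np.array([[ 1.0, 0.0, -1.0],
                  [ 0.0, 2.0, -1.0],
                  [-1.0, 0.0,  1.0]])
    M = np.diag([1.0, 2.0, 1.0])
    N = M - A
    T = np.linalg.solve(M, N)

    print(np.sort_complex(np.linalg.eigvals(T)))   # approximately [-1, 0, 1]

    omega = 0.7                                    # any 0 < omega < 1
    T_omega = (1.0 - omega) * np.eye(3) + omega * T
    eig_Tw = np.linalg.eigvals(T_omega)
    theta = max(abs(lam) for lam in eig_Tw if abs(lam - 1.0) > 1e-10)
    print(theta)                                   # 0.4 < 1: the extrapolated method is semiconvergent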
If the iterative method (1.3) is semiconvergent, then from Theorem 2.2 and Eq. (2.2) we can derive
the following semiconvergence directly.
Corollary 2.4. Let the iterative method (1.3) be semiconvergent. Then the extrapolated method
(1.4) is semiconvergent provided
0 < ω < 2/(1 + ϑ(T))   (> 1).
More specifically,
ϑ(T_ω) ≤ |1 − ω| + ωϑ(T) < 1.     (2.3)
By [10, Theorem 2] or [2, Theorem 7-6.22] the following semiconvergence result is obvious.
Corollary 2.5. Let A be Hermitian and positive semidefinite. Assume that M^H + N is positive
definite. Then the iterative method (1.3) is semiconvergent and the extrapolated method (1.4) is
semiconvergent provided
0 < ω < 2/(1 + ϑ(T))   (> 1).
More specifically, the inequality (2.3) holds.
In Theorem 2.2 we need the hypothesis that T satisfies ρ(T) = 1 and index(I − T) = 1. In the next
theorems conditions ensuring this hypothesis are given.
Theorem 2.6. Let A be a singular M-matrix with ‘property c’. Further, assume that the splitting
(1.2) is regular. Then the extrapolated method (1.4) is semiconvergent provided
0 < ω < 2/(1 + ϑ(T))   (≥ 1).
Proof. By [2, Theorem 7-6.20] it can be verified that the iteration matrix T satisfies
ρ(T) = 1   and   index(I − T) = 1.
The required result follows directly from Theorem 2.2.
Definition 2.7 (Neumann and Plemmons [13]). A matrix B ∈ R^{n×n} is called weak semipositive if
there exists a vector x > 0 such that Bx ≥ 0.
The following result is proved in [13, Theorem 6].
Lemma 2.8. Let B be singular and weak semipositive. If the splitting B = M − N is weak regular,
then
ρ(T) = 1   and   index(I − T) = 1.
Using this lemma we prove the semiconvergence of a nonnegative splitting.
Theorem 2.9. Let A be an irreducible singular M-matrix. Further, assume that the splitting (1.2)
is nonnegative. Then the extrapolated method (1.4) is semiconvergent provided
0 < ω < 2/(1 + ϑ(T))   (≥ 1).
Proof. Since A is an irreducible singular M-matrix, we have
a_ii > 0,   i = 1, …, n.
Let D = diag(a_11, …, a_nn). Then
J = I − D^{-1}A ≥ 0
is also irreducible and satisfies ρ(J) = 1. By the Perron–Frobenius theorem (cf. [19, Theorem 2.1])
there exists x > 0 such that
Jx = ρ(J)x = x,
and, consequently,
M^{-1}Ax = M^{-1}D(I − J)x = 0,
which means that the matrix M^{-1}A is weak semipositive. Since T ≥ 0, it follows that M^{-1}A = I − T
is a regular splitting. By Lemma 2.8 we obtain
ρ(T) = 1   and   index(I − T) = 1.
Now, the required result follows from Theorem 2.2 immediately.
Remark 2.10. As
2/(1 + ϑ(T)) ≥ 1,
the result here is much better than that given in [5, Proposition 2], where the splitting (1.2) is
assumed to be weak regular.
3. Semiconvergence of the JOR and AOR methods
In this section we discuss the semiconvergence of the (generalized, block) JOR and (generalized,
block) AOR methods.
In order to derive the semiconvergence we first introduce a lemma.
Lemma 3.1. Let A be a singular M-matrix with ‘property c’. Assume that the splitting (1.6)
is such that D is a nonsingular M-matrix, C_L ≥ 0 and C_U ≥ 0. Then the matrix D^{-1}A is also a
singular M-matrix with ‘property c’.
Proof. Since D is a nonsingular M-matrix and C_L ≥ 0, C_U ≥ 0, it follows that the splitting
A = D − (C_L + C_U)
is a regular splitting. By [2, Theorem 7-6.20] the generalized Jacobi iteration matrix J satisfies
J ≥ 0,   ρ(J) = 1   and   index(I − J) = 1.     (3.1)
Thus, I − J is a singular M-matrix. From
D^{-1}A = I − J
and by [15, Theorem 1] or [2, Lemma 6-4.11] we obtain the required result.
Theorem 3.2. Let A be a singular M-matrix with ‘property c’; in particular, let A be an
irreducible singular M-matrix. Assume that the splitting (1.6) is such that D is a nonsingular
M-matrix, C_L ≥ 0 and C_U ≥ 0. Then
(i) the GJOR method is semiconvergent provided
0 < ω < 2/(1 + ϑ(J))   (≥ 1);
(ii) the GAOR method is semiconvergent provided
0 < ω < max{1, 2γ/(1 + ϑ(L_γ))},   0 < γ ≤ 1   and   γρ(L) < 1.
Proof. By [2, Theorem 6-4.16], if A is an irreducible singular M-matrix then it also has ‘property
c’. Thus, we assume that A is a singular M-matrix with ‘property c’.
Since
D^{-1} ≥ 0,   L = D^{-1}C_L ≥ 0,   U = D^{-1}C_U ≥ 0,
the splitting
A = D − (C_L + C_U)
is regular. Hence (i) follows from Theorem 2.6 immediately.
In order to prove the semiconvergence of the GAOR method, we notice that
(I − γL)^{-1} ≥ 0.
Thus, the splitting
D^{-1}A = (I − γL) − [(1 − γ)I + γU]     (3.2)
is a regular splitting for 0 < γ ≤ 1 and γρ(L) < 1. Clearly, Eq. (3.2) yields the GSOR iteration matrix
L_γ = (I − γL)^{-1}[(1 − γ)I + γU] ≥ 0.
Further, the GAOR method is an extrapolated method of the GSOR method with extrapolation
parameter ω/γ, so that by Theorem 2.6 the GAOR method is semiconvergent provided
0 < ω/γ < 2/(1 + ϑ(L_γ)),
which is equivalent to
0 < ω < 2γ/(1 + ϑ(L_γ)).     (3.3)
Now, we consider the case when 0 < ω < 1. If ω < γ then (ω, γ) satisfies Eq. (3.3), since
2/[1 + ϑ(L_γ)] ≥ 1, so that the GAOR method is semiconvergent. While if ω ≥ γ, then the splitting
ωD^{-1}A = (I − γL) − [(1 − ω)I + (ω − γ)L + ωU]
is a regular splitting. By Lemma 3.1 and [2, Theorem 7-6.20] we derive that ρ(L_{γ,ω}) = 1 and
index(I − L_{γ,ω}) = 1. Moreover, the GAOR iteration matrix satisfies
L_{γ,ω} = (I − γL)^{-1}[(1 − ω)I + (ω − γ)L + ωU]
        = (I + γL + γ^2 L^2 + ···)[(1 − ω)I + (ω − γ)L + ωU]
        ≥ (1 − ω)I ≥ 0,
so all of its diagonal entries are at least 1 − ω > 0. With the same technique as in the
proof of [3, Theorem 3.4] we can prove that, in this case, ϑ(L_{γ,ω}) < 1.
Now we have shown that the GAOR method is semiconvergent for 0 < ω < 1, 0 < γ ≤ 1 and
γρ(L) < 1. Combining this with (3.3) we derive (ii).
For the BJOR and BAOR methods we have the following result.
Theorem 3.3. Let A be a singular M-matrix with ‘property c’; in particular, let A be an irreducible
singular M-matrix. Assume that the block matrix D in the splitting (1.6) is nonsingular. Then
(i) the BJOR method is semiconvergent provided
0 < ω < 2/(1 + ϑ(J))   (≥ 1);
(ii) the BAOR method is semiconvergent provided
0 < ω < max{1, 2γ/(1 + ϑ(L_γ))}   and   0 < γ ≤ 1.
Proof. Since the block diagonal matrix D is nonsingular, it is a nonsingular M-matrix. Further, we
have
C_L ≥ 0   and   C_U ≥ 0.
Therefore, the matrix L = D^{-1}C_L ≥ 0 is strictly (block) lower triangular, so that ρ(L) = 0. By
Theorem 3.2 we derive the required result.
As special cases of the BJOR and BAOR methods we can obtain the semiconvergence of the
JOR and AOR methods.
Theorem 3.4. Let A be a singular M-matrix with ‘property c’ and a_ii ≠ 0, i = 1, …, n; in particular,
let A be an irreducible singular M-matrix. Then
(i) the JOR method is semiconvergent provided
0 < ω < 2/(1 + ϑ(J))   (≥ 1);
(ii) the AOR method is semiconvergent provided
0 < ω < max{1, 2γ/(1 + ϑ(L_γ))}   and   0 < γ ≤ 1;
(iii) the extrapolated Gauss–Seidel method is semiconvergent provided
0 < ω < 2/(1 + ϑ(L_1))   (≥ 1).
Proof. We already know that if A is an irreducible singular M-matrix then it also has ‘property
c’. Furthermore, the irreducibility of A ensures that a_ii ≠ 0, i = 1, …, n. If a_ii ≠ 0, i = 1, …, n,
then the diagonal matrix D is nonsingular. Thus statements (i) and (ii) follow from Theorem 3.3.
By [2, Theorem 7-6.20] we can derive ϑ(L_1) ≤ 1, so that statement (iii) is a special case of (ii).
When γ = ω, from the semiconvergence of the (generalized, block) AOR method we directly derive
the semiconvergence of the (generalized, block) SOR method.
Corollary 3.5. Let A be a singular M-matrix with ‘property c’; in particular, let A be an
irreducible singular M-matrix.
(i) If D in the splitting (1.6) is a nonsingular M-matrix, C_L ≥ 0, C_U ≥ 0 and 0 < ω < 1,
ωρ(D^{-1}C_L) < 1, then the GSOR method is semiconvergent.
(ii) If the block matrix D in the splitting (1.6) is nonsingular, then the BSOR method is
semiconvergent provided 0 < ω < 1.
(iii) If a_ii ≠ 0, i = 1, …, n, then the SOR method is semiconvergent provided 0 < ω < 1.
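The following sketch is ours (the 4-cycle graph Laplacian is just one convenient irreducible singular M-matrix, and ω = 0.8 is an arbitrary value in the admissible interval); it illustrates Theorem 3.4 and Corollary 3.5 by computing ϑ for the Jacobi, JOR and SOR iteration matrices:

    import numpy as np

    # Laplacian of a 4-cycle: row sums are zero, so A is an irreducible singular M-matrix
    A = np.array([[ 2., -1.,  0., -1.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [-1.,  0., -1.,  2.]])
    D = np.diag(np.diag(A))
    CL = -np.tril(A, -1)
    CU = -np.triu(A, 1)
    I = np.eye(4)

    def theta(T, tol=1e-10):
        lams = np.linalg.eigvals(T)
        return max((abs(l) for l in lams if abs(l - 1.0) > tol), default=0.0)

    J = np.linalg.solve(D, CL + CU)                # Jacobi matrix; here theta(J) = 1
    omega = 0.8                                    # inside (0, 2/(1 + theta(J))) = (0, 1)
    J_omega = (1.0 - omega) * I + omega * J        # JOR matrix
    L_omega = np.linalg.solve(D - omega * CL,      # SOR matrix (gamma = omega)
                              (1.0 - omega) * D + omega * CU)

    print(theta(J), theta(J_omega), theta(L_omega))   # 1.0, then two values below 1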
Remark 3.6. The convergence interval for the AOR method given in Theorem 3.4 is better than
that given in [4, Theorems 1 and 2]. From Theorem 3.2 and Corollary 3.5 we can obtain the
corresponding semiconvergence results for the GJOR and GSOR methods given in [1, Proposition 3.8,
Theorem 3.9]. Similarly, from Theorem 3.4 and Corollary 3.5 we can derive the result given in
[3, Theorem 3.4].
By [2, Corollaries 6-6.23, 6-6.24] we have the following semiconvergence.
Theorem 3.7. Let A be Hermitian and positive semidefinite, and let the block diagonal matrix D
be Hermitian and positive definite.
(i) If 2D − A is positive definite, then the BJOR method is semiconvergent provided
0 < ω < 2/(1 + ϑ(J))   (> 1);
(ii) the BAOR method is semiconvergent provided
0 < ω < 2γ/(1 + ϑ(L_γ))   and   0 < γ < 2.
Remark 3.8. The optimum AOR method, when the matrix J is weakly 2-cyclic, consistently ordered
and possesses real eigenvalues with ρ(J) = 1, was analyzed in [8].
4. Extrapolated power method, JOR, AOR and SOR methods for solving Markov chains
As a special case of singular systems, in recent years there has been much interest in using iterative
methods to compute the stationary probability distribution of a Markov chain. That is, the problem
is to solve the homogeneous system of equations
π^T(I − P) = 0     (4.1)
subject to the normalizing condition
‖π‖_1 = 1,
where the matrix P is a row stochastic matrix. The system (4.1) is equivalent to
Aπ = 0     (4.2)
with the singular matrix A = I − P^T.
Iterative methods for solving Markov chains have been investigated by many authors. For the system
(4.2) a natural splitting is
A = I − Q
with Q = P^T, which yields the following iterative method:
x_{k+1} = Qx_k,   k = 0, 1, 2, ….
This iterative method is called the power method in [2]. The extrapolated power method is then
given by
x_{k+1} = [(1 − ω)I + ωQ]x_k,   k = 0, 1, 2, ….     (4.3)
We can also define the (B)AOR, (B)SOR and (B)JOR methods, etc.
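A minimal sketch of the extrapolated power method (4.3) for a small chain follows (ours, not from the paper; the transition matrix P and ω = 1.2 are illustrative choices, with ω inside (0, 2/(1 + ϑ(P))) for this P):

    import numpy as np

    P = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.3, 0.5],
                  [0.0, 0.4, 0.6]])        # row stochastic transition matrix
    Q = P.T
    omega = 1.2

    x = np.full(3, 1.0 / 3.0)              # start from the uniform distribution
    for _ in range(200):
        x = (1.0 - omega) * x + omega * (Q @ x)
        x /= x.sum()                       # guard the normalization ||x||_1 = 1 against rounding

    print(x)                               # approximates the stationary distribution pi
    print(np.allclose(x @ P, x))           # pi^T P = pi^T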
It is clear that A = I − P^T is a singular M-matrix. Furthermore, since P is row stochastic, it
follows from [11, p. 133, 5.13.4] or [16, Corollary 3.5] that A has ‘property c’. Now from Theorem
3.2 we can derive the following semiconvergence immediately.
Theorem 4.1. The extrapolated power method (4.3) for solving (4.2) is semiconvergent if
0 < ω < 2/(1 + ϑ(P))   (≥ 1).
Similarly, from Theorems 3.2 and 3.3 and Corollary 3.5 we can derive the semiconvergence of the
(B)AOR, (B)SOR and (B)JOR methods.
Acknowledgements
The author is most grateful to Professor W. Niethammer for his help, many helpful suggestions
and discussions during the author’s visit to the Institute for Practical Mathematics of the University
of Karlsruhe. He is also indebted to the referees for their very constructive comments and suggestions.
References
[1] G.P. Baker, S.-J. Yang, Semi-iterative and iterative methods for singular M-matrices, SIAM J. Matrix Anal. Appl.
9 (1988) 169–180.
[2] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.
[3] J.J. Buoni, M. Neumann, R.S. Varga, Theorems of Stein–Rosenberg type III. The singular case, Linear Algebra
Appl. 42 (1982) 183–198.
[4] L. Cvetkovic, D. Herceg, Relaxation methods for singular M-matrix, Z. Angew. Math. Mech. 70 (1990) 552–553.
[5] I. Galligani, Splitting methods for the solution of systems of linear equations with singular matrices, Rend. Mat.
Appl. 7 (14) (1994) 341–353.
[6] A. Hadjidimos, Accelerated overrelaxation method, Math. Comput. 32 (1978) 149–157.
[7] A. Hadjidimos, On the optimization of the classical iterative schemes for the solution of complex singular linear
systems, SIAM J. Algebra Discrete Meth. 6 (1985) 555–566.
[8] A. Hadjidimos, Optimum stationary and nonstationary iterative methods for the solution of singular linear systems,
Numer. Math. 51 (1987) 517–530.
[9] A. Hadjidimos, A. Yeyios, The principle of extrapolation in connection with the accelerated overrelaxation method,
Linear Algebra Appl. 30 (1980) 115–128.
[10] H.B. Keller, On the solution of singular and semidefinite linear systems by iteration, SIAM J. Numer. Anal. 2 (1965)
281–290.
[11] M. Marcus, H. Minc, A Survey of Matrix Theory and Matrix Inequalities, Allyn and Bacon, Boston, 1964.
[12] N.M. Missirlis, D.J. Evans, On the convergence of some generalized preconditioned iterative methods, SIAM J.
Numer. Anal. 18 (1981) 591–596.
[13] M. Neumann, R.J. Plemmons, Convergent nonnegative matrices and iterative methods for consistent linear systems,
Numer. Math. 31 (1978) 265–279.
[14] J.M. Ortega, W. Rheinboldt, Monotone iterations for nonlinear equations with applications to Gauss–Seidel methods,
SIAM J. Numer. Anal. 4 (1967) 171–190.
[15] R.J. Plemmons, M-matrices leading to semiconvergent splittings, Linear Algebra Appl. 15 (1976) 243–252.
[16] U.C. Rothblum, Algebraic eigenspaces of nonnegative matrices, Linear Algebra Appl. 12 (1975) 281–292.
[17] Y. Song, Extensions of the Ostrowski–Reich theorem in AOR iterations, Numer. Math. Sinica 7 (1985) 323–326.
[18] Y. Song, Comparisons of nonnegative splittings of matrices, Linear Algebra Appl. 154 –156 (1991) 433–455.
[19] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.
[20] D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.