Journal of Computational and Applied Mathematics 104 (1999) 123–143
Analytic-numerical solutions with a priori error bounds for a
class of strongly coupled mixed partial differential systems
L. Jodar ∗ , E. Navarro, J. Camacho
Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Apartado 22.012, Camino de Vera 14,
46071 Valencia, Spain
Received 8 January 1998; received in revised form 29 October 1998
Abstract
This paper deals with the construction of analytic-numerical solutions with a priori error bounds for systems of the type
u_t = Au_xx, u(0,t) + u_x(0,t) = 0, Bu(1,t) + Cu_x(1,t) = 0, 0 < x < 1, t > 0, u(x,0) = f(x). Here A, B, C are matrices for
which no diagonalizable hypothesis is assumed. First an exact series solution is obtained after solving appropriate vector
Sturm–Liouville-type problems. Given an admissible error ε and a bounded subdomain D, after appropriate truncation an
approximate solution constructed in terms of data and approximate eigenvalues is given so that the error is less than the
prefixed accuracy ε, uniformly in D. © 1999 Elsevier Science B.V. All rights reserved.
Keywords: Coupled differential system; Coupled boundary conditions; Analytic-numerical solution; Vector
Sturm–Liouville problem; A priori error bound; Moore–Penrose pseudoinverse
1. Introduction
Coupled partial differential systems with coupled boundary-value conditions are frequent in quantum mechanical scattering problems [1,19,27,28], chemical physics [16,17,22], modelling of coupled thermoelastoplastic response of clays subjected to nuclear waste heat [13], and coupled diffusion
problems [7,20,30]. The solution of these problems has motivated the study of vector and matrix
Sturm–Liouville problems [3,4,12,18]. In this paper we study systems of the type
u_t(x,t) − Au_xx(x,t) = 0,  0 < x < 1,  t > 0,  (1)
u(0,t) + u_x(0,t) = 0,  t > 0,  (2)
∗ Corresponding author. E-mail address: [email protected] (L. Jodar).
0377-0427/99/$ - see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0377-0427(99)00048-5
Bu(1,t) + Cu_x(1,t) = 0,  t > 0,  (3)
u(x,0) = f(x),  0 ≤ x ≤ 1,  (4)
where the unknown u = (u₁, …, u_m)ᵀ and f(x) = (f₁, …, f_m)ᵀ are m-dimensional vectors and A, B, C
are m × m complex matrices, elements of C^{m×m}. Mixed problems of the above type but with Dirichlet
conditions u(0,t) = 0, u(1,t) = 0 instead of (2), (3) have been treated in [15,23]. Here we assume
that A is a positive stable matrix,
Re(z) > 0  for all eigenvalues z of A,  (5)
and that the pencil B + λC is regular, i.e., the determinant det(B + λC) = |B + λC| is not identically
zero. Conditions on the function f(x) will be determined below in order to specify existence and
well-posedness conditions.
The organization of the paper is as follows. In Section 2, vector eigenvalue differential problems
of the type
X″(x) + λ²X(x) = 0,  0 < x < 1,
X(0) + X′(0) = 0,  (6)
BA^jX(1) + CA^jX′(1) = 0,  0 ≤ j ≤ p − 1,
are treated. Sufficient conditions for the existence of an appropriate sequence of eigenvalues and
eigenfunctions, as well as some invariant properties of the problem, are studied. In Section 3 an
exact series solution of problem (1)–(4) is obtained using the results of Section 2 and the separation
of variables technique. Section 4 deals with the construction of an analytic-numerical solution of the
problem with a prefixed accuracy in a bounded subdomain. The approximation is expressed in terms
of the data and approximate eigenvalues of the underlying eigenvalue problem of the type (6).
Throughout this paper, the set of all eigenvalues of a matrix C in C^{m×m} is denoted by σ(C), and
its 2-norm, denoted by ‖C‖, is defined by [11, p. 56]
‖C‖ = sup_{z≠0} ‖Cz‖₂ / ‖z‖₂,
where for a vector y in C^m, ‖y‖₂ is the usual Euclidean norm of y. Let us introduce the notation
α(C) = max{Re(ω); ω ∈ σ(C)} and β(C) = min{Re(ω); ω ∈ σ(C)}. By [11, p. 556] it follows that
‖e^{tC}‖ ≤ e^{tα(C)} Σ_{k=0}^{m−1} (√m ‖C‖ t)^k / k!,  t ≥ 0.  (7)
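The bound (7) can be checked numerically. The sketch below uses NumPy/SciPy; the matrix C is an arbitrary non-normal example chosen for illustration, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

# Check (7): ||exp(tC)||_2 <= exp(t*alpha(C)) * sum_{k=0}^{m-1} (sqrt(m)*||C||*t)^k / k!
C = np.array([[-1.0, 3.0],
              [ 0.0, -2.0]])          # arbitrary illustrative matrix
m = C.shape[0]
alpha = max(np.linalg.eigvals(C).real)  # alpha(C): largest real part of the spectrum
normC = np.linalg.norm(C, 2)

def bound(t):
    s = sum((np.sqrt(m) * normC * t) ** k / factorial(k) for k in range(m))
    return np.exp(t * alpha) * s

ok = all(np.linalg.norm(expm(t * C), 2) <= bound(t) + 1e-12
         for t in np.linspace(0.0, 3.0, 31))
```

For normal matrices the sum collapses to its first term and (7) reduces to the familiar estimate ‖e^{tC}‖ ≤ e^{tα(C)}.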
If B is a matrix in C^{n×m} we denote by B† its Moore–Penrose pseudoinverse. An account of properties,
examples and applications of this concept may be found in [5,26]. In particular, the kernel of B,
denoted by Ker B, coincides with the image of the matrix I − B†B, denoted by Im(I − B†B). We say that
a subspace E of C^m is invariant under the matrix A ∈ C^{m×m} if A(E) ⊂ E. The property A(Ker G) ⊂ Ker G
is equivalent to the condition GA(I − G†G) = 0, since Ker G = Im(I − G†G), see [5]. The Moore–
Penrose pseudoinverse of a matrix can be efficiently computed with the MATLAB package. The set
of all real numbers will be denoted by R. The determinant of a matrix C ∈ C^{m×m} is denoted by
|C| and the conjugate transpose of C is denoted by C*. If z = a + ib is a complex number we denote
by z̄ = a − ib its conjugate. Finally, we denote by C^D the Drazin inverse of the matrix C ∈ C^{m×m}.
We recall that C^D can be computed as a polynomial in C, and by [5, p. 129] it follows that, for
z ≠ 0,
z ∈ σ(C) if and only if z⁻¹ ∈ σ(C^D).  (8)
If C is invertible then C^D = C⁻¹. An account of properties, examples and algorithms for the computation of the Drazin inverse may be found in Ch. 7 of [5].
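The kernel characterization Ker G = Im(I − G†G) and the invariance test GA(I − G†G) = 0 are easy to exercise numerically. A minimal sketch with NumPy (G and A are arbitrary illustrative matrices, not from the paper):

```python
import numpy as np

# Ker G = Im(I - G^+ G), and A(Ker G) in Ker G  iff  G A (I - G^+ G) = 0.
G = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])        # rank 2, so Ker G is nontrivial
A = np.eye(3)                           # identity trivially leaves Ker G invariant

P = np.eye(3) - np.linalg.pinv(G) @ G   # projector onto Ker G

kernel_ok = np.allclose(G @ P, 0.0)         # columns of P lie in Ker G
invariant_ok = np.allclose(G @ A @ P, 0.0)  # invariance condition for this A
```

The first check is an instance of the Penrose identity G G†G = G; the second is exactly the algebraic test GA(I − G†G) = 0 used repeatedly below (e.g. in condition (57)).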
2. On a class of vector eigenvalue differential problems
Vector Sturm–Liouville systems of the form
−(P(x)y′)′ + Q(x)y = λW(x)y,  a ≤ x ≤ b,
A₁*y(a) + A₂*P(a)y′(a) = 0,
B₁*y(b) + B₂*P(b)y′(b) = 0,
where P, Q and W are symmetric m × m matrix functions of x with P and W positive definite
for all x ∈ [a,b], y is an m-vector function of x, λ is a scalar parameter, and A₁, A₂, B₁ and B₂ are
m × m matrices such that (A₁, A₂), (B₁, B₂) are full-rank m × 2m matrices with A₁*A₂ − A₂*A₁ = 0,
B₁*B₂ − B₂*B₁ = 0, have been treated recently in [3,4,12,18], and arise in a natural way in quantum
mechanical applications. The recent literature on quantum mechanical scattering problems related
to these problems includes [1,16,17,22]. In this section we consider vector eigenvalue differential
problems of a different nature, in the sense that we admit more than two boundary-value conditions
and work under different hypotheses. If p ≥ 1, we consider the vector problem
X″(x) + λ²X(x) = 0,  0 < x < 1,  λ ≥ 0,
X(0) + X′(0) = 0,  (9)
BA^jX(1) + CA^jX′(1) = 0,  0 ≤ j ≤ p − 1,
where A, B, C are matrices in C^{m×m} such that the matrix pencil B + λC is regular, i.e.,
|B + λC| is not identically zero.  (10)
Under hypothesis (10), since the determinant |B + λC| is a polynomial of degree at most m, the matrix
B + λC is singular for at most m different values of λ. Hence
there exists a complex number ρ₀ such that B + ρ₀C is invertible.  (11)
Assume that
there exists an eigenvalue α₀ of the matrix (B + ρ₀C)⁻¹C  (12)
such that
(1 + ρ₀)α₀ ≠ 1  and  α₀ / (1 − α₀(1 + ρ₀))  is a real number.  (13)
The general solution of the vector equation X″ + λ²X = 0 is given by
X_λ(x) = sin(λx)D_λ + cos(λx)E_λ,  D_λ, E_λ ∈ C^m,  λ > 0;
X₀(x) = D₀x + E₀,  D₀, E₀ ∈ C^m,  λ = 0.
Condition X(0) + X′(0) = 0 produces
X_λ(x) = (sin(λx) − λcos(λx))D_λ,  λ > 0;
X₀(x) = D₀(x − 1),  λ = 0.  (14)
By imposing the boundary-value conditions BA^jX(1) + CA^jX′(1) = 0, 0 ≤ j ≤ p − 1, one gets that
the vector D_λ must satisfy
[(λ²C + B)sin(λ) + λ(C − B)cos(λ)]A^jD_λ = 0,  0 ≤ j ≤ p − 1,  λ > 0,  (15)
and
CD₀ = 0,  λ = 0.  (16)
In order to obtain nonzero solutions of problem (9), for λ > 0, the vector D_λ must be nonzero. By
(15) one gets the condition
H_λ = (λ²C + B)sin(λ) + λcos(λ)(C − B)  is singular.  (17)
Under hypothesis (11) we can write
H_λ = sin(λ)[(λ² − ρ₀)C + (B + ρ₀C)] + λcos(λ)[(ρ₀ + 1)C − (B + ρ₀C)],  λ > 0,  (18)
(B + ρ₀C)⁻¹H_λ = sin(λ)[(λ² − ρ₀)(B + ρ₀C)⁻¹C + I] + λcos(λ)[(ρ₀ + 1)(B + ρ₀C)⁻¹C − I].  (19)
Under hypothesis (12), taking values of λ > 0 satisfying
sin(λ)(1 + (λ² − ρ₀)α₀) + λcos(λ)((1 + ρ₀)α₀ − 1) = 0,  (20)
by (19) and the spectral mapping theorem [8, p. 569], the matrix (B + ρ₀C)⁻¹H_λ is singular. Hence
H_λ is singular for values of λ > 0 satisfying (20). Note that by (13), if λ > 0 satisfies Eq. (20) one
gets sin(λ) ≠ 0, and Eq. (20) is equivalent to
λcot(λ) = aλ² + 1 + a,  a = α₀ / (1 − α₀(1 + ρ₀)),  λ > 0.  (21)
It is easy to show that Eq. (21) has an infinite sequence of solutions {λ_k} whose location depends
on the parameter a in the following way:
Case 1: (a > 0). kπ < λ_k < (2k + 1)π/2, k ≥ 1.
Case 2: (a = 0). λ₀ = 0 and kπ < λ_k < (2k + 1)π/2, k ≥ 1.
Case 3: (−4/(4 + π²) < a < 0). Here we have infinitely many subcases. Let
J̇ = ⋃_{j≥0} J̇_j,  J̇_j = (−4/(4 + (2j + 1)²π²), −4/(4 + (2j + 3)²π²)),
so that −4/(4 + π²) is the left endpoint of J̇₀. Then, if
a = −4/(4 + (2j + 1)²π²),
one gets the sequence {λ_{k,j}}_{k≥1} of solutions of (21) satisfying
0 < λ_{1,j} < π/2,  kπ < λ_{k,j} < (2k + 1)π/2,  2 ≤ k ≤ j,
and
λ_{j+1,j} = (2j + 1)π/2,  (2k + 1)π/2 < λ_{k,j} < (k + 1)π,  k > j + 1.
If a ∈ J̇_j, j ≥ 0, then the solution sequence {λ_{k,j}}_{k≥1} of (21) is located as follows:
0 < λ_{1,j} < π/2,  kπ < λ_{k,j} < (2k + 1)π/2,  2 ≤ k ≤ j + 1,
and
(2k + 1)π/2 < λ_{k,j} < (k + 1)π,  k > j + 1.
Case 4: (a = −4/(4 + π²)). λ₁ = π/2, (2k − 1)π/2 < λ_k < kπ, k ≥ 2.
Case 5: (a < −4/(4 + π²)). (2k − 1)π/2 < λ_k < kπ, k ≥ 1.
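Once the bracketing interval is known from the case analysis above, each root of (21) can be computed by standard one-dimensional root finding. A sketch for Case 1 with SciPy (the value of a is an arbitrary positive number for illustration); to avoid the poles of cot we solve the equivalent equation g(λ) = λcos(λ) − (aλ² + 1 + a)sin(λ) = 0 on each bracket:

```python
import numpy as np
from scipy.optimize import brentq

a = 0.5  # arbitrary positive parameter: Case 1, roots in (k*pi, (2k+1)*pi/2)

def g(lam):
    # sin(lam) * (lam*cot(lam) - (a*lam^2 + 1 + a)); same zeros for lam > 0
    return lam * np.cos(lam) - (a * lam ** 2 + 1 + a) * np.sin(lam)

roots = []
for k in range(1, 6):
    lo, hi = k * np.pi + 1e-9, (2 * k + 1) * np.pi / 2 - 1e-9
    roots.append(brentq(g, lo, hi))   # g changes sign on this interval
residuals = [abs(lam / np.tan(lam) - (a * lam ** 2 + 1 + a)) for lam in roots]
```

The sign change of g at the endpoints of (kπ, (2k+1)π/2) is exactly what the Case 1 localization guarantees, so brentq converges for every k.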
The following lemma provides information about the eigenvalues α₀ of the matrix (B + ρ₀C)⁻¹C
verifying (12).
Lemma 2.1. Let ρ₀ and α₀ be complex numbers satisfying conditions (11) and (12), respectively.
Assume that α₀ satisfies one of the following conditions:
(i) α₀ = 0;
(ii) α₀ ≠ 0 and α₀ = 1/(β₀ + i Im(ρ₀)), where β₀ is a real eigenvalue of the matrix ((B + ρ₀C)⁻¹C)^D −
i Im(ρ₀)I satisfying
β₀ ≠ 1 + Re(ρ₀).  (22)
Then 1 − α₀(1 + ρ₀) ≠ 0 and α₀/(1 − (1 + ρ₀)α₀) is a real number.
Proof. If α₀ = 0 the result is immediate. Under hypothesis (ii) it follows that
1/α₀ = β₀ + i Im(ρ₀),  (23)
and 1/α₀ ∈ σ(((B + ρ₀C)⁻¹C)^D). By (8) it follows that α₀ ∈ σ((B + ρ₀C)⁻¹C). By (22) one gets
1 − α₀(1 + ρ₀) = 1 − (ρ₀ + 1)/(β₀ + i Im(ρ₀)) ≠ 0, and by (23)
α₀/(1 − (ρ₀ + 1)α₀) − α̅₀/(1 − (ρ̅₀ + 1)α̅₀) = [1/α̅₀ − 1/α₀ − (ρ̅₀ − ρ₀)] / ([1/α₀ − (ρ₀ + 1)][1/α̅₀ − (ρ̅₀ + 1)]) = 0.
Thus the result is established.
Remark 1. Note that if ρ₀ ∈ R then α₀ is any real eigenvalue of the matrix (B + ρ₀C)⁻¹C.
Let F(ρ₀, α₀) be the eigenvalue set of problem (9) and note that 0 ∈ F(ρ₀, α₀) if and only if
0 is an eigenvalue of C, that is, if C is a singular matrix. If λ > 0 satisfies (21), by (18) one
gets
1 − λcot(λ) = −α₀(λ² − ρ₀ + (ρ₀ + 1)λcot(λ)),  (24)
(1/sin(λ))H_λ = (λ² − ρ₀ + (ρ₀ + 1)λcot(λ))C + (1 − λcot(λ))(B + ρ₀C),
(1/sin(λ))H_λ = (λ² − ρ₀ + λcot(λ)(1 + ρ₀))[C − α₀(B + ρ₀C)].  (25)
Note that if λ ∈ F(ρ₀, α₀) with λ > 0, then λ² − ρ₀ + (ρ₀ + 1)λcot(λ) ≠ 0, because otherwise by (24),
1 − λcot(λ) = 0 and by (21) we would have λ² + 1 = 0, contradicting that λ > 0. Hence (25) can
be written in the equivalent form
[1 / ((λ² − ρ₀ + λcot(λ)(ρ₀ + 1))sin(λ))] H_λ = C − α₀(B + ρ₀C).  (26)
Let G(ρ₀, α₀) be the matrix in C^{mp×m} defined by
G(ρ₀, α₀) = [ C − α₀(B + ρ₀C)
              (C − α₀(B + ρ₀C))A
              ⋮
              (C − α₀(B + ρ₀C))A^{p−1} ].  (27)
Condition (15) is equivalent to the condition
G(ρ₀, α₀)D_λ = 0,  λ ∈ F(ρ₀, α₀).  (28)
Eq. (28) admits nonzero vector solutions D_λ ∈ C^m if
rank G(ρ₀, α₀) < m.  (29)
Thus, if D_λ ≠ 0 satisfies (28), the vector functions
X_λ(x) = [sin(λx) − λcos(λx)]D_λ  (30)
are eigenfunctions of problem (9). If C is a singular matrix, then from (28), λ = 0 is also an
eigenvalue of problem (9), and if CD₀ = 0, D₀ ≠ 0, the function
X₀(x) = (x − 1)D₀,  D₀ ∈ C^m,  CD₀ = 0,  (31)
is an eigenfunction of problem (9). Suppose that
(1 + ρ₀)α₀ = 1.  (32)
Substituting this condition into (20) one gets
(λ² + 1)α₀ sin(λ) = 0,  (33)
and since α₀ ≠ 0, by (33) it follows that
sin(λ) = 0.  (34)
By (17) and the spectral mapping theorem [8, p. 569], the matrix H_λ is then singular if and only if C − B
is singular, and Eq. (28) takes the form
(C − B)A^jD_λ = 0,  0 ≤ j ≤ p − 1,  λ > 0.  (35)
By (34) the positive eigenvalue set of problem (9) in this case is F(ρ₀, α₀) = {kπ; k ≥ 1} and the
corresponding eigenfunctions are
X_k(x) = (sin(kπx) − kπcos(kπx))D_k,
G(ρ₀, α₀)D_k = [ C − B
                 (C − B)A
                 ⋮
                 (C − B)A^{p−1} ] D_k = 0,  D_k ≠ 0.  (36)
Summarizing, the following result has been established:
Theorem 2.1. Let p ≥ 1 be an integer, assume that the pencil B + λC is regular and let ρ₀ be defined
by (11). Assume that α₀ is an eigenvalue of (B + ρ₀C)⁻¹C satisfying (13) and that the matrix G(ρ₀, α₀)
defined by (27) satisfies (29). Then problem (9) admits a sequence of real non-negative eigenvalues
F(ρ₀, α₀). If λ ∈ F(ρ₀, α₀) is an eigenvalue, the associated eigenfunction set is given by (14), where
D_λ is a nonzero m-dimensional vector lying in Ker G(ρ₀, α₀). The explicit expression for D_λ is given
by
D_λ = (I − G(ρ₀, α₀)†G(ρ₀, α₀))S_λ,  (37)
where S_λ is a nonzero arbitrary vector in C^m.
Proof. The proof is a consequence of the previous comments and Theorem 2.3.2 of [26], which provides
the general solution of G(ρ₀, α₀)D_λ = 0 in the form (37).
The following result shows that the eigenvalues and eigenfunctions of problem (9) are independent of
the chosen number ρ₀ verifying (11). Properties (13) or (32), and (29), are also invariant.
Theorem 2.2. Let ρ₀ ≠ ρ₁ be complex numbers such that B + ρ₀C and B + ρ₁C are invertible
matrices in C^{m×m}; then the following properties hold:
(i) If α₀ ∈ σ((B + ρ₀C)⁻¹C) then 1 − α₀(ρ₀ − ρ₁) ≠ 0 and
α₁ = α₀ / (1 − α₀(ρ₀ − ρ₁)) ∈ σ((B + ρ₁C)⁻¹C).
(ii) If ρ₀, ρ₁, α₀, α₁ are defined as in (i), then (ρ₀, α₀) satisfies (13) if and only if (ρ₁, α₁) satisfies
(13). Furthermore, the eigenvalues of (9) are invariant when (ρ₁, α₁) replaces (ρ₀, α₀) in conditions
(11)–(13).
(iii) Eigenfunctions of problem (9) corresponding to the pair (ρ₀, α₀) ∈ R² coincide with those
associated to (ρ₁, α₁) by (i). Furthermore, Ker((B + ρ₀C)⁻¹C − α₀I) = Ker((B + ρ₁C)⁻¹C − α₁I), and the
matrices G(ρ₀, α₀), G(ρ₁, α₁) have the same rank.
Proof. (i) If α₀ = 0 ∈ σ((B + ρ₀C)⁻¹C), then the matrix C is singular and hence α₁ = 0 also belongs
to σ((B + ρ₁C)⁻¹C). If α₀ ≠ 0, by the properties of determinants one gets
0 = |(B + ρ₀C)⁻¹C − α₀I| = |C − α₀(B + ρ₀C)| = |C − α₀(B + ρ₁C + (ρ₀ − ρ₁)C)|
= |[1 − α₀(ρ₀ − ρ₁)]C − α₀(B + ρ₁C)|.  (38)
Since B + ρ₁C is invertible, by the last equation it follows that 1 − α₀(ρ₀ − ρ₁) ≠ 0, because otherwise
0 = |α₀(B + ρ₁C)|, contradicting the invertibility of B + ρ₁C. By (38) we can write
0 = |C − (α₀/(1 − α₀(ρ₀ − ρ₁)))(B + ρ₁C)| = |B + ρ₁C| |(B + ρ₁C)⁻¹C − (α₀/(1 − α₀(ρ₀ − ρ₁)))I|.
Hence
α₀ / (1 − α₀(ρ₀ − ρ₁)) = α₁ ∈ σ((B + ρ₁C)⁻¹C).
(ii) Note that if α₀ = 0 then α₁ = 0 and Eq. (20) is the same when ρ₁ replaces ρ₀. Thus F(ρ₀, 0) =
F(ρ₁, 0). If α₀ ≠ 0, by part (i), α₁ ≠ 0 and
α₁/(1 − (ρ₁ + 1)α₁) = 1/(1/α₁ − (ρ₁ + 1)) = 1/(1/α₀ − (ρ₀ − ρ₁) − (ρ₁ + 1))
= 1/(1/α₀ − (ρ₀ + 1)) = α₀/(1 − α₀(ρ₀ + 1)),
with
(1 − ρ₁α₁)/(1 − (ρ₁ + 1)α₁) = 1 + α₁/(1 − (ρ₁ + 1)α₁) = 1 + α₀/(1 − (ρ₀ + 1)α₀) = (1 − ρ₀α₀)/(1 − (ρ₀ + 1)α₀).
Hence the coefficients of Eq. (21) are the same as the corresponding coefficients when ρ₀ and α₀ are
replaced by ρ₁ and α₁, respectively. This proves that F(ρ₀, α₀) = F(ρ₁, α₁), and these eigenvalue
sets of problem (9) are invariant when (ρ₁, α₁) replaces (ρ₀, α₀).
(iii) By Theorem 2.1 and parts (i) and (ii) of this theorem one gets that the eigenfunctions of
problem (9) are invariant when (ρ₁, α₁) replaces (ρ₀, α₀). The vectors D_λ appearing in (28) for F(ρ₀, α₀)
are the same as those appearing for F(ρ₁, α₁). In order to prove this we show that Ker G(ρ₀, α₀) =
Ker G(ρ₁, α₁). First we prove that Ker((B + ρ₀C)⁻¹C − α₀I) = Ker((B + ρ₁C)⁻¹C − α₁I). Let y ≠ 0
be a vector in C^m such that ((B + ρ₀C)⁻¹C − α₀I)y = 0. Hence
0 = [C − α₀(B + ρ₀C)]y = [C − α₀(B + ρ₁C + (ρ₀ − ρ₁)C)]y = [(1 − α₀(ρ₀ − ρ₁))C − α₀(B + ρ₁C)]y,
0 = [C − (α₀/(1 − α₀(ρ₀ − ρ₁)))(B + ρ₁C)]y = [C − α₁(B + ρ₁C)]y,
and
y ∈ Ker[(B + ρ₁C)⁻¹C − α₁I].
By the definition of G(ρ₀, α₀) given in (27) we have
C − α₀(B + ρ₀C) = (1 − α₀(ρ₀ − ρ₁))[C − α₁(B + ρ₁C)],
G(ρ₀, α₀) = (1 − α₀(ρ₀ − ρ₁))G(ρ₁, α₁).  (39)
Hence G(ρ₀, α₀)D = 0 if and only if G(ρ₁, α₁)D = 0, and the result is established.
3. Construction of an exact series solution
Let us seek solutions v_λ(x,t) of the boundary-value problem (1)–(4) under hypotheses (11)–(13).
The separation of variables technique suggests
v_λ(x,t) = T_λ(t)X_λ(x),  T_λ(t) ∈ C^{m×m},  X_λ(x) ∈ C^m,  (40)
where
T_λ′(t) + λ²AT_λ(t) = 0,  t ≥ 0,  λ ≥ 0,  (41)
X_λ″(x) + λ²X_λ(x) = 0,  0 < x < 1,  λ ≥ 0,
X_λ(0) + X_λ′(0) = 0,  BX_λ(1) + CX_λ′(1) = 0.  (42)
The solution of (41) satisfying T_λ(0) = I is T_λ(t) = exp(−λ²At), but although v_λ(x,t) defined by
(40) satisfies (1) and (2),
(∂/∂t)(v_λ(x,t)) − A(∂²/∂x²)(v_λ(x,t)) = T_λ′(t)X_λ(x) − AT_λ(t)X_λ″(x)
= −λ²AT_λ(t)X_λ(x) + AT_λ(t)λ²X_λ(x) = 0,
v_λ(0,t) + (∂/∂x)(v_λ(0,t)) = T_λ(t)(X_λ(0) + X_λ′(0)) = 0,
condition (3) is not guaranteed, because
Bv_λ(1,t) + C(∂/∂x)(v_λ(1,t)) = BT_λ(t)X_λ(1) + CT_λ(t)X_λ′(1)
= B exp(−λ²At)X_λ(1) + C exp(−λ²At)X_λ′(1)  (43)
and the last expression need not vanish, since the matrices B and C do not commute with A in general.
However, if X_λ(x) satisfies (42) together with the condition
BA^jX_λ(1) + CA^jX_λ′(1) = 0,  1 ≤ j ≤ p − 1,  (44)
where p is the degree of the minimal polynomial of A, that is, problem (9) for this value of p,
then we show now that v_λ(x,t) defined by (40) satisfies (3). In fact, for each t ≥ 0, the matrix
exponential T_λ(t) = exp(−λ²At) can be expressed as a matrix polynomial in A [8, p. 557],
T_λ(t) = exp(−λ²At) = b₀(t)I + b₁(t)A + ⋯ + b_{p−1}(t)A^{p−1},  (45)
where b_j(t), 0 ≤ j ≤ p − 1, are scalars. Under hypothesis (44), by (43) and (45) one gets
Bv_λ(1,t) + C(∂/∂x)(v_λ(1,t)) = Σ_{j=0}^{p−1} b_j(t){BA^jX_λ(1) + CA^jX_λ′(1)} = 0,  t ≥ 0.  (46)
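The representation (45) can be illustrated numerically: by the Cayley–Hamilton theorem the exponential always lies in the span of I, A, …, A^{m−1}, so taking p = m is enough for a sketch. The matrix, λ and t below are arbitrary illustrative values, not from the paper:

```python
import numpy as np
from scipy.linalg import expm

# exp(-lam^2*A*t) = b_0 I + b_1 A + ... + b_{p-1} A^{p-1}; here p = m = 3.
A = np.array([[2.0, -2.0, 1.0],
              [0.0,  0.0, 1.0],
              [0.0, -1.0, 2.0]])      # arbitrary illustrative matrix
lam, t, m = 1.3, 0.7, A.shape[0]

E = expm(-lam ** 2 * A * t)
# Solve for b_0,...,b_{m-1} in the least-squares sense over the flattened powers.
powers = np.stack([np.linalg.matrix_power(A, j).ravel() for j in range(m)], axis=1)
coeffs, *_ = np.linalg.lstsq(powers, E.ravel(), rcond=None)

recon = sum(c * np.linalg.matrix_power(A, j) for j, c in enumerate(coeffs))
err = np.linalg.norm(recon - E)       # ~ 0: the exponential lies in the span
```

When the minimal polynomial has degree p < m, the same least-squares fit over I, …, A^{p−1} already succeeds, which is the fact exploited in (45)–(46).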
Assume the hypotheses and notation of Theorem 2.1, let F(ρ₀, α₀) be the eigenvalue set of problem
(9) and consider the candidate series solution of problem (1)–(4) of the form
U(x,t) = X₀(x) + Σ_{n≥1} e^{−λₙ²At}Xₙ(x)  if 0 ∈ F(ρ₀, α₀),
U(x,t) = Σ_{n≥1} e^{−λₙ²At}Xₙ(x)  if 0 ∉ F(ρ₀, α₀),  (47)
where
Xₙ(x) = (sin(λₙx) − λₙcos(λₙx))Dₙ,  λₙ > 0,  n ≥ 1;  X₀(x) = (x − 1)D₀,  λ₀ = 0.  (48)
Now we seek appropriate vectors Dₙ and D₀ in C^m so that the initial condition (4) holds true. By
imposing (4) on (47) one gets that these vectors must satisfy
f(x) = (x − 1)D₀ + Σ_{n≥1} {sin(λₙx) − λₙcos(λₙx)}Dₙ,  0 ∈ F(ρ₀, α₀),
f(x) = Σ_{n≥1} {sin(λₙx) − λₙcos(λₙx)}Dₙ,  0 ∉ F(ρ₀, α₀).  (49)
Let f = (f₁, f₂, …, f_m)ᵀ and consider, for 1 ≤ j ≤ m, the scalar regular Sturm–Liouville problem
X_j″(x) + λ²X_j(x) = 0,  0 < x < 1,
X_j(0) + X_j′(0) = 0,  (50)
(1 − α₀ρ₀)X_j(1) + α₀X_j′(1) = 0.
It is easy to check that the eigenvalue set of problem (50) is F(ρ₀, α₀) and its common set of eigenfunctions is given by (48), substituting the vectors Dₙ, D₀ by scalars dₙ,ⱼ, d₀,ⱼ, respectively, 1 ≤ j ≤ m.
In order to guarantee well-posedness let us assume that
f_j(x) is twice continuously differentiable in [0,1],
with f_j(0) + f_j′(0) = 0,  (1 − α₀ρ₀)f_j(1) + α₀f_j′(1) = 0,  (51)
and let
dₙ,ⱼ = ∫₀¹ f_j(x)(sin(λₙx) − λₙcos(λₙx)) dx / ∫₀¹ (sin(λₙx) − λₙcos(λₙx))² dx,  n ≥ 1,  (52)
d₀,ⱼ = 0,  α₀ ≠ 0;
d₀,ⱼ = ∫₀¹ f_j(x)(x − 1) dx / ∫₀¹ (x − 1)² dx,  α₀ = 0.  (53)
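The coefficients (52) are ordinary Fourier coefficients with respect to the non-normalized eigenfunctions φₙ(x) = sin(λₙx) − λₙcos(λₙx) and can be evaluated by quadrature. A self-consistency sketch with SciPy (λ below is an arbitrary positive number used only to exercise the formula, not an actual eigenvalue of a particular problem):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.3  # arbitrary illustrative value

def phi(x):
    return np.sin(lam * x) - lam * np.cos(lam * x)

f = phi  # taking f equal to the eigenfunction itself must give coefficient 1

num, _ = quad(lambda x: f(x) * phi(x), 0.0, 1.0)   # <f, phi>
den, _ = quad(lambda x: phi(x) ** 2, 0.0, 1.0)     # <phi, phi> = I(lam) of (88)
d = num / den
```

For a genuine initial datum f one would replace the test function by the data and loop over the computed eigenvalues λₙ.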
Note that 0 ∈ F(ρ₀, α₀) means that α₀ = 0 or that C is singular. By the convergence theorem for series
of Sturm–Liouville functions [14, Ch. 11; 9, p. 90] one gets (49), and the series in (49) is uniformly
convergent to f_j(x) in [0,1] for each j with 1 ≤ j ≤ m. Now we study conditions under which the vectors
Dₙ = (dₙ,₁, dₙ,₂, …, dₙ,ₘ)ᵀ, D₀ = (d₀,₁, d₀,₂, …, d₀,ₘ)ᵀ defined by (52)–(53) satisfy (28) and (31),
respectively. Assume that
(C − α₀(B + ρ₀C))f(x) = 0,  0 ≤ x ≤ 1,  (54)
and
Ker(C − α₀(B + ρ₀C)) is invariant under A.  (55)
Taking into account that under hypothesis (10) we always have real values ρ₀ satisfying (11),
from Theorems 2.1 and 2.2, without losing generality we may assume ρ₀ ∈ R and that α₀ is a real
eigenvalue of (B + ρ₀C)⁻¹C.
Under hypotheses (54) and (55) one gets
G(ρ₀, α₀)Dₙ = 0,  λₙ ∈ F(ρ₀, α₀),  n ≥ 0.  (56)
Finally we prove that, under hypothesis (5), the series (47) with coefficients defined by (52)–(53) is
a solution of problem (1)–(4). By inequality (7) it is easy to prove that in any set
D(t₀) = {(x,t); 0 ≤ x ≤ 1, t ≥ t₀ > 0},
the series (47), as well as those appearing after twice termwise partial differentiation with respect to x
and once partial differentiation with respect to t, namely
Σ_{n≥1} (−λₙ²)e^{−λₙ²At}Xₙ(x),  Σ_{n≥1} (−λₙ²)Ae^{−λₙ²At}Xₙ(x),
are uniformly convergent in D(t₀). By the differentiation theorem for functional series [2, p. 403],
the series defined by (47), (52), (53) is twice partially differentiable with respect to x, once with
respect to t, and satisfies (1)–(4). Summarizing, by the convergence theorems for Sturm–Liouville
series expansions [9,14], the following result has been established.
Theorem 3.1. With the hypotheses and the notation of Theorem 2.1, assume that f(x) satisfies
(51) and (54), A is a positive stable matrix and
[C − α₀(B + ρ₀C)]A{I − [C − α₀(B + ρ₀C)]†[C − α₀(B + ρ₀C)]} = 0.  (57)
Then U(x,t) defined by (47), (52), (53) is a solution of problem (1)–(4).
Proof. By the previous comments and the equivalence of conditions (57) and (55) the result is
established.
Now we construct a series solution of problem (1)–(4) under weaker hypotheses on the function
f(x) appearing in (4).
Assume that, apart from hypothesis (11),
there exist k distinct real eigenvalues α₀(i) of (B + ρ₀C)⁻¹C,  1 ≤ i ≤ k.  (58)
Let R and Rᵢ be the matrices defined by
R = ∏_{j=1}^{k} [(B + ρ₀C)⁻¹C − α₀(j)I],  Rᵢ = ∏_{j=1, j≠i}^{k} [(B + ρ₀C)⁻¹C − α₀(j)I],  1 ≤ i ≤ k.  (59)
If E = Ker R, then by the decomposition theorem [10, p. 536] we have
E = Ker[(B + ρ₀C)⁻¹C − α₀(1)I] ⊕ ⋯ ⊕ Ker[(B + ρ₀C)⁻¹C − α₀(k)I].  (60)
Note that the polynomials
Qᵢ(x) = ∏_{j=1, j≠i}^{k} (x − α₀(j)),  1 ≤ i ≤ k,
are coprime, and by Bezout's theorem [10, p. 538] there exist numbers γ₁, γ₂, …, γ_k such that
1 = Σ_{i=1}^{k} γᵢQᵢ(x) = Q(x).
Taking x = α₀(i) one gets
γᵢ = ∏_{j=1, j≠i}^{k} (α₀(i) − α₀(j))⁻¹.
Q(x) is the Lagrange interpolating polynomial and I = Σ_{i=1}^{k} γᵢRᵢ. Hence one gets the decomposition
f(x) = Σ_{i=1}^{k} γᵢRᵢf(x) = Σ_{i=1}^{k} gᵢ(x),  gᵢ(x) = γᵢRᵢf(x).  (61)
If Rf(x) = 0, 0 ≤ x ≤ 1, then gᵢ(x) is the projection of f(x) on the subspace Ker((B + ρ₀C)⁻¹C −
α₀(i)I), since
[(B + ρ₀C)⁻¹C − α₀(i)I]gᵢ(x) = γᵢRf(x).  (62)
(62)
Under the hypothesis
Rf(x) = 0;
06x61;
(63)
by (62), it follows that
[(B + 0 C)−1 C − 0 (i) I ]gi (x) = 0;
06x61:
(64)
Assume that gi (x) dened by (61) satises
gi (x) is twice continuously dierentiable in [0; 1] with
gi (0) + gi′ (0) = 0;
(1 − 0 0 (i))gi (1) + 0 (i)gi′ (1) = 0;
16i6k;
(65)
and
[C − α₀(i)(B + ρ₀C)]A[I − [C − α₀(i)(B + ρ₀C)]†[C − α₀(i)(B + ρ₀C)]] = 0,  1 ≤ i ≤ k.  (66)
Under these conditions, (11) and the positive stability of the matrix A, a series solution uᵢ(x,t) of the
problem
(Pᵢ):  u_t(x,t) − Au_xx(x,t) = 0,  0 < x < 1,  t > 0,
u(0,t) + u_x(0,t) = 0,  t > 0,
Bu(1,t) + Cu_x(1,t) = 0,  t > 0,
u(x,0) = gᵢ(x),  0 ≤ x ≤ 1,
is given by Theorem 3.1. By (61), the function
U(x,t) = Σ_{i=1}^{k} uᵢ(x,t)  (67)
is a solution of problem (1)–(4). Summarizing, the following result has been established.
Theorem 3.2. Let A be a positive stable matrix, assume that the pencil B + λC is regular and let ρ₀
be a real number satisfying (11). Suppose that the matrix (B + ρ₀C)⁻¹C has k different real eigenvalues
α₀(1), α₀(2), …, α₀(k) such that condition (66) holds for 1 ≤ i ≤ k. Let R and Rᵢ be the matrices defined
by (59) and let E be the subspace defined by (60). Let f(x) be a twice continuously differentiable
function on [0,1] satisfying (63) and
Rᵢf(0) + Rᵢf′(0) = 0,  (1 − α₀(i)ρ₀)Rᵢf(1) + α₀(i)Rᵢf′(1) = 0,  1 ≤ i ≤ k.  (68)
Then conditions (65) hold true, and problem (1)–(4) admits a solution given by (67), where uᵢ(x,t)
is the solution of problem (Pᵢ) constructed by Theorem 3.1.
Example 3.1. Consider problem (1)–(4) where
A = [  2 −2  1  0
       0  0  1  0
       0 −1  2  0
       1  0  0  1 ],
B = [  0  0  0  1
       0  0  1  0
       0  1  0  0
       1  0  0  0 ],
C = [ −2  1  1  2
       0  0  2  0
       0  2  0  0
       0  2  0  0 ].
Since B is invertible, the pencil B + λC is regular and condition (11) is satisfied with ρ₀ = 0. Let
H = (B + ρ₀C)⁻¹C = B⁻¹C = [  0  2  0  0
                              0  2  0  0
                              0  0  2  0
                             −2  1  1  2 ]
and note that σ(H) = {0, 2}. If α₀(1) = 0, α₀(2) = 2, then we have
α₀(1)(ρ₀ + 1) − 1 = −1 ≠ 0,  α₀(2)(ρ₀ + 1) − 1 = 1 ≠ 0.
In this case we have that hypotheses (13) hold true for α₀(i), i = 1, 2, and
R₁ = H − 2I = [ −2  2  0  0
                 0  0  0  0
                 0  0  0  0
                −2  1  1  0 ],
R₂ = H = [  0  2  0  0
            0  2  0  0
            0  0  2  0
           −2  1  1  2 ],
R = R₁R₂ = [ 0  0  0  0
             0  0  0  0
             0  0  0  0
             0 −2  2  0 ].
Condition (63) for f = (f₁, f₂, f₃, f₄)ᵀ takes the form
f₂(x) = f₃(x),  0 ≤ x ≤ 1,  (69)
and the projections gᵢ(x) for i = 1, 2 are
g₁(x) = −(1/2)R₁f(x) = (f₁(x) − f₂(x), 0, 0, f₁(x) − f₂(x))ᵀ,
g₂(x) = (1/2)R₂f(x) = (f₂(x), f₂(x), f₂(x), −f₁(x) + f₂(x) + f₄(x))ᵀ.
In this case we have
R₁† = [   0    0   0  −1/3
         1/2   0   0  −1/3
        −1/2   0   0   2/3
          0    0   0    0  ],
R₂† = [  1/16   1/16   1/8  −1/4
         1/4    1/4    0     0
         0      0      1/2   0
        −1/16  −1/16  −1/8   1/4 ],
and
R₁A(I − R₁†R₁) = 0,  R₂A(I − R₂†R₂) = 0,
and thus the subspaces Ker(H − 2I) and Ker H are invariant under the matrix A. Note that A is positive
stable because σ(A) = {1, 2}. Condition (68) in this case takes the form
f(0) + f′(0) = 0,  f₁(1) = f₂(1),  f₂(1) + 2f₂′(1) = 0,  (70)
f₁(1) + 2f₁′(1) = f₄(1) + 2f₄′(1).
Thus for functions f(x) twice continuously differentiable on [0,1] satisfying (69) and (70), Theorem
3.2 provides a solution of problem (1)–(4).
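The matrix computations of Example 3.1 can be reproduced numerically; the sketch below enters the matrices of the example explicitly and checks the spectra, the product R = R₁R₂ and the two invariance conditions (66):

```python
import numpy as np

A = np.array([[2., -2., 1., 0.], [0., 0., 1., 0.],
              [0., -1., 2., 0.], [1., 0., 0., 1.]])
B = np.array([[0., 0., 0., 1.], [0., 0., 1., 0.],
              [0., 1., 0., 0.], [1., 0., 0., 0.]])
C = np.array([[-2., 1., 1., 2.], [0., 0., 2., 0.],
              [0., 2., 0., 0.], [0., 2., 0., 0.]])

H = np.linalg.solve(B, C)          # (B + rho0*C)^{-1} C with rho0 = 0
R1 = H - 2.0 * np.eye(4)           # factor for alpha0(2) = 2
R2 = H                             # factor for alpha0(1) = 0
R = R1 @ R2

spec_A = sorted(np.linalg.eigvals(A).real)   # expected {1, 2} (1 is defective)
spec_H = sorted(np.linalg.eigvals(H).real)   # expected {0, 2} (2 is defective)

I4 = np.eye(4)
inv1 = np.linalg.norm(R1 @ A @ (I4 - np.linalg.pinv(R1) @ R1))  # Ker R1 invariant
inv2 = np.linalg.norm(R2 @ A @ (I4 - np.linalg.pinv(R2) @ R2))  # Ker R2 invariant
```

Note that the repeated eigenvalues 1 of A and 2 of H are defective, so the computed eigenvalue clusters are accurate only to roughly the cube root of machine precision; the invariance residuals, by contrast, are exact up to rounding.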
4. Analytic-numerical solutions with a priori error bounds
The series solution provided in Section 3 presents some computational drawbacks. First, there is
the infiniteness of the series. Second, the eigenvalues are not exactly computable, in spite of well-known
efficient algorithms, see [24,25,18]. Finally, the computation of matrix exponentials is not an easy
task [21,29]. In this section we address the following question: given an admissible error ε > 0 and
a domain D(t₀,t₁) = {(x,t); 0 ≤ x ≤ 1, 0 < t₀ ≤ t ≤ t₁}, how to construct an approximation avoiding the
above inconveniences and whose error with respect to the exact solution is less than ε uniformly
in D(t₀,t₁). It is sufficient to develop the approach when the exact series solution is given by (67)
with k = 1.
With the notation of Section 3, we have ‖Dₙ‖² = Σ_{j=1}^{m} |dₙ,ⱼ|², and by Parseval's inequality [3,
p. 223; 6] one gets
|dₙ,ⱼ|² ≤ ∫₀¹ |f_j(x)|² dx,  n ≥ 0,  1 ≤ j ≤ m,  (71)
‖Dₙ‖² ≤ Σ_{j=1}^{m} ∫₀¹ |f_j(x)|² dx = ∫₀¹ ‖f(x)‖₂² dx = M,  n ≥ 0.  (72)
By (48) and (71) one gets
‖Xₙ(x)‖₂ ≤ (1 + λₙ)M^{1/2},  n ≥ 0.  (73)
By (7), for t₁ ≥ t ≥ t₀ it follows that
‖e^{−λₙ²At}‖₂ ≤ e^{−β(A)t₀λₙ²} Σ_{j=0}^{m−1} (‖A‖t₁√m)^j λₙ^{2j} / j!.  (74)
Let ψₖ and φₖ be the scalar functions defined for s > 0 by
ψₖ(s) = e^{−s²β(A)t₀} sᵏ,  φₖ(s) = (k + 2)ln(s) − s²β(A)t₀,  0 ≤ k ≤ 2m − 1.  (75)
Note that
φₖ′(s) < 0  for s > ((k + 2)/(t₀β(A)))^{1/2} = sₖ,  0 ≤ k ≤ 2m − 1.  (76)
Take sₖ′ ≥ sₖ such that
(k + 2)ln(s) − s²β(A)t₀ < 0,  s ≥ sₖ′;  (77)
then by (77) it follows that
ψₖ(s) = sᵏ e^{−s²β(A)t₀} < s⁻²,  s ≥ sₖ′,  0 ≤ k ≤ 2m − 1.  (78)
Since limₙ→+∞ λₙ = +∞ and λₙ < λₙ₊₁, let n₀ be the first positive integer so that
λ_{n₀} > s* = max{sₖ′; 0 ≤ k ≤ 2m − 1}.  (79)
By (72)–(79) it follows that
‖e^{−λₙ²At}Xₙ(x)‖₂ = ‖e^{−λₙ²At}(sin(λₙx) − λₙcos(λₙx))Dₙ‖₂
≤ M^{1/2} Σ_{j=0}^{m−1} (ψ_{2j}(λₙ) + ψ_{2j+1}(λₙ))(‖A‖t₁√m)^j / j!
≤ λₙ⁻² · 2 Σ_{j=0}^{m−1} (‖A‖t₁√m)^j M^{1/2} / j! = λₙ⁻²L,  0 ≤ x ≤ 1,  t₁ ≥ t ≥ t₀ > 0,  n ≥ n₀,  (80)
where
L = 2 Σ_{j=0}^{m−1} (‖A‖t₁√m)^j M^{1/2} / j!.
Hence
‖Σ_{n>n₁} e^{−λₙ²At}Xₙ(x)‖₂ ≤ L Σ_{n>n₁} λₙ⁻²,
because in each of the five cases quoted in Section 2 the eigenvalues λₙ satisfy
λₙ > nπ − π/2 > (n − 1)π,  n ≥ 2.  (81)
Since Σ_{n≥1} n⁻² = π²/6, taking n₁ > n₀ so that
Σ_{n=1}^{n₁} n⁻² > π²/6 − ε/(3L),  (82)
by (80) and (82) one gets
‖Σ_{n>n₁} e^{−λₙ²At}Xₙ(x)‖₂ < ε/3,  0 ≤ x ≤ 1,  t ≥ t₀ > 0.  (83)
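The truncation choice (82) is a simple partial-sum scan, sketched below (ε and L are arbitrary illustrative values):

```python
import math

# First n1 with sum_{n=1}^{n1} n^{-2} > pi^2/6 - eps/(3L), so that the
# remaining tail sum_{n>n1} n^{-2} is below eps/(3L).
def choose_n1(eps, L):
    target = math.pi ** 2 / 6 - eps / (3 * L)
    partial, n = 0.0, 0
    while partial <= target:
        n += 1
        partial += 1.0 / n ** 2
    return n, partial

eps, L = 0.1, 2.0                      # illustrative values only
n1, partial = choose_n1(eps, L)
tail = math.pi ** 2 / 6 - partial      # = sum_{n > n1} n^{-2}
```

Because Σn⁻² converges slowly, n₁ grows like 3L/ε; for tight tolerances the sharper eigenvalue bound λₙ > (n−1)π in (81) reduces the required n₁ by a factor of about π².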
Thus the finite sum
U(x,t;n₁) = X₀(x) + Σ_{n=1}^{n₁} e^{−λₙ²At}Xₙ(x),  0 ∈ F(ρ₀, α₀);
U(x,t;n₁) = Σ_{n=1}^{n₁} e^{−λₙ²At}Xₙ(x),  0 ∉ F(ρ₀, α₀),  (84)
satisfies
‖U(x,t) − U(x,t;n₁)‖₂ < ε/3,  0 ≤ x ≤ 1,  t ≥ t₀ > 0,  (85)
but it has the imperfection of requiring the exact computation of the eigenvalues λ₁, λ₂, …, λ_{n₁}. Now we
study the admissible tolerance when one approximates these eigenvalues λᵢ by λ̃ᵢ, 1 ≤ i ≤ n₁, in
expression (84):
Ũ(x,t;n₁) = X₀(x) + Σ_{n=1}^{n₁} e^{−λ̃ₙ²At}X̃ₙ(x),  0 ∈ F(ρ₀, α₀);
Ũ(x,t;n₁) = Σ_{n=1}^{n₁} e^{−λ̃ₙ²At}X̃ₙ(x),  0 ∉ F(ρ₀, α₀),  (86)
with
X̃ₙ(x) = {sin(λ̃ₙx) − λ̃ₙcos(λ̃ₙx)}D̃ₙ,  0 ≤ x ≤ 1,  t₀ ≤ t ≤ t₁.
Note that
e^{−λ̃ₙ²At}X̃ₙ(x) − e^{−λₙ²At}Xₙ(x)
= (e^{−λ̃ₙ²At} − e^{−λₙ²At}){sin(λ̃ₙx) − λ̃ₙcos(λ̃ₙx)}D̃ₙ
+ e^{−λₙ²At}{sin(λ̃ₙx) − λ̃ₙcos(λ̃ₙx) − sin(λₙx) + λₙcos(λₙx)}D̃ₙ
+ e^{−λₙ²At}{sin(λₙx) − λₙcos(λₙx)}(D̃ₙ − Dₙ).  (87)
Let I(λ) be defined for λ > 0 by
I(λ) = ∫₀¹ (sin(λx) − λcos(λx))² dx = 1 − sin²(λ) + (λ² − 1)(1/2 + sin(2λ)/(4λ)),  (88)
and let δ > 0, γ > 0, η₁ > 0 be chosen so that
inf{I(λ); λ = λₙ or λ = λ̃ₙ, 1 ≤ n ≤ n₁} ≥ δ⁻¹,  (89)
max{λₙ, λ̃ₙ; 1 ≤ n ≤ n₁} ≤ γ,  0 < η₁ < min(λ₁, λ̃₁).
Note that, depending on the five cases quoted in Section 2 and the intervals where the eigenvalues
are located, these constants δ, γ and η₁ are always available. It is easy to show that
|sin(λ̃ₙx) − λ̃ₙcos(λ̃ₙx) − sin(λₙx) + λₙcos(λₙx)| ≤ |λₙ − λ̃ₙ|(2 + γ),  (90)
|sin(λ̃ₙx) − λ̃ₙcos(λ̃ₙx)| ≤ 1 + λ̃ₙ,  0 ≤ x ≤ 1.
By (52) one gets
Dₙ,ⱼ − D̃ₙ,ⱼ = [(I(λ̃ₙ) − I(λₙ)) / (I(λₙ)I(λ̃ₙ))] ∫₀¹ f_j(x){sin(λ̃ₙx) − λ̃ₙcos(λ̃ₙx)} dx
+ (1/I(λₙ)) ∫₀¹ f_j(x){sin(λₙx) − λₙcos(λₙx) − sin(λ̃ₙx) + λ̃ₙcos(λ̃ₙx)} dx.  (91)
By the Cauchy–Schwarz inequality for integrals it follows that
∫₀¹ |f_j(x){sin(λ̃ₙx) − λ̃ₙcos(λ̃ₙx)}| dx ≤ (∫₀¹ |f_j(x)|² dx)^{1/2} (I(λ̃ₙ))^{1/2}  (92)
and by (90) one gets
∫₀¹ |f_j(x){sin(λₙx) − λₙcos(λₙx) − sin(λ̃ₙx) + λ̃ₙcos(λ̃ₙx)}| dx ≤ |λₙ − λ̃ₙ|(2 + γ)(∫₀¹ |f_j(x)|² dx)^{1/2}.  (93)
By (91)–(93) it follows that
|Dₙ,ⱼ − D̃ₙ,ⱼ| ≤ {|I(λₙ) − I(λ̃ₙ)|(I(λ̃ₙ))^{−1/2} + |λₙ − λ̃ₙ|(2 + γ)} (∫₀¹ |f_j(x)|² dx)^{1/2} / I(λₙ).
Note that
I(λₙ) − I(λ̃ₙ) = ∫₀¹ {sin(λₙx) − λₙcos(λₙx) + sin(λ̃ₙx) − λ̃ₙcos(λ̃ₙx)}{sin(λₙx) − λₙcos(λₙx) − sin(λ̃ₙx) + λ̃ₙcos(λ̃ₙx)} dx,
and by (89), (90) one gets
|I(λₙ) − I(λ̃ₙ)| ≤ |λₙ − λ̃ₙ|(2 + γ)(2 + λₙ + λ̃ₙ),
|Dₙ,ⱼ − D̃ₙ,ⱼ| ≤ ((I(λ̃ₙ))^{−1/2} + 1)(I(λₙ))⁻¹(2 + γ)(2 + λₙ + λ̃ₙ)(∫₀¹ |f_j(x)|² dx)^{1/2} |λₙ − λ̃ₙ|,
|Dₙ,ⱼ − D̃ₙ,ⱼ| ≤ 4(δ^{1/2} + 1)δ(1 + γ)²(∫₀¹ |f_j(x)|² dx)^{1/2} |λₙ − λ̃ₙ|,  1 ≤ n ≤ n₁,  1 ≤ j ≤ m,
‖Dₙ − D̃ₙ‖ ≤ 4(δ^{1/2} + 1)δ(1 + γ)²(∫₀¹ ‖f(x)‖₂² dx)^{1/2} |λₙ − λ̃ₙ|,  1 ≤ n ≤ n₁.  (94)
By (89) and (90) we also have
‖D̃ₙ‖ ≤ 2(1 + λ̃ₙ)δ(∫₀¹ ‖f(x)‖₂² dx)^{1/2},  1 ≤ n ≤ n₁.  (95)
By (7) and (89) one gets
‖e^{−λₙ²At}‖₂ ≤ e^{−t₀β(A)η₁²} Σ_{j=0}^{m−1} (γ²t₁‖A‖√m)^j / j!,
‖e^{−λ̃ₙ²At}‖₂ ≤ e^{−t₀β(A)η₁²} Σ_{j=0}^{m−1} (γ²t₁‖A‖√m)^j / j!,  1 ≤ n ≤ n₁,  t₀ ≤ t ≤ t₁.  (96)
Let us write
e^{−tλₙ²A} − e^{−tλ̃ₙ²A} = e^{−tλ̃ₙ²A}(e^{−t(λₙ² − λ̃ₙ²)A} − I).
By (7), (89) and the mean-value theorem, under the hypothesis |λₙ − λ̃ₙ| < 1 one gets
‖e^{−tλₙ²A} − e^{−tλ̃ₙ²A}‖₂ ≤ ‖e^{−tλ̃ₙ²A}‖₂ (exp(t|λₙ² − λ̃ₙ²|‖A‖) − 1),
‖e^{−tλₙ²A} − e^{−tλ̃ₙ²A}‖₂ ≤ e^{−t₀η₁²β(A)} (Σ_{j=0}^{m−1} (t₁‖A‖√mγ²)^j / j!) 2γ‖A‖t₁ e^{2γ‖A‖t₁} |λₙ − λ̃ₙ|.  (97)
By (87), (94)–(97), assuming that |λₙ − λ̃ₙ| < 1, 1 ≤ n ≤ n₁, t₀ ≤ t ≤ t₁, it follows that
‖e^{−tλ̃ₙ²A}X̃ₙ(x) − e^{−tλₙ²A}Xₙ(x)‖₂ ≤ S|λₙ − λ̃ₙ|,  t₀ ≤ t ≤ t₁,  1 ≤ n ≤ n₁,  (98)
where
S = 4(1 + γ)²δ e^{−t₀η₁²β(A)} (Σ_{j=0}^{m−1} (t₁‖A‖√mγ²)^j / j!) (γt₁‖A‖e^{2γ‖A‖t₁} + 1 + (δ^{1/2} + 1)(1 + γ)) (∫₀¹ ‖f(x)‖₂² dx)^{1/2}.  (99)
Given ε > 0 and n₁, consider approximations λ̃ₙ of λₙ for 1 ≤ n ≤ n₁, so that
|λₙ − λ̃ₙ| < min{1, ε/(3n₁S)},  1 ≤ n ≤ n₁;  (100)
then by (84), (86), (98) and (99) it follows that
‖U(x,t;n₁) − Ũ(x,t;n₁)‖₂ < ε/3,  t₀ ≤ t ≤ t₁,  0 ≤ x ≤ 1.  (101)
By Theorem 11.2.4 of [11, p. 550], for t₀ ≤ t ≤ t₁ one gets
\Bigl\| e^{-\tilde\lambda_n^2 tA} - \sum_{k=0}^{q} \frac{(-\tilde\lambda_n^2 tA)^k}{k!} \Bigr\|_2 \le \frac{(\tilde\lambda_n^2 t_1\|A\|)^{q+1}}{(q+1)!} e^{\tilde\lambda_n^2\|A\|t_1}

and by (89)

\Bigl\| e^{-\tilde\lambda_n^2 tA} - \sum_{k=0}^{q} \frac{(-\tilde\lambda_n^2 tA)^k}{k!} \Bigr\|_2 \le \frac{(\rho^2 t_1\|A\|)^{q+1}}{(q+1)!} e^{\|A\|t_1\rho^2},  t_0 \le t \le t_1,  1 \le n \le n_1.
Note that by (89), (90) and (95) one gets

\|\tilde X_n(x)\|_2 \le 2(1 + \rho)^2 \Bigl(\int_0^1 \|f(x)\|_2^2\,dx\Bigr)^{1/2},  1 \le n \le n_1.   (102)

Since

\lim_{q\to+\infty} \frac{(\rho^2 t_1\|A\|)^{q+1}}{(q+1)!} = 0,
take the first positive integer q_0 such that

\frac{(\rho^2 t_1\|A\|)^{q_0+1}}{(q_0+1)!} < \frac{\epsilon}{6 n_1 e^{\|A\|t_1\rho^2}(1 + \rho)^2 (\int_0^1 \|f(x)\|_2^2\,dx)^{1/2}};   (103)
then by (86), (101) and (103) it follows that \tilde u(x,t;n_1,q_0) defined by

\tilde u(x,t;n_1,q_0) = X_0(x) + \sum_{n=1}^{n_1} \Bigl[\sum_{k=0}^{q_0} \frac{(-\tilde\lambda_n^2 tA)^k}{k!}\Bigr] \tilde X_n(x),  \beta_0 = 0;

\tilde u(x,t;n_1,q_0) = \sum_{n=1}^{n_1} \Bigl[\sum_{k=0}^{q_0} \frac{(-\tilde\lambda_n^2 tA)^k}{k!}\Bigr] \tilde X_n(x),  \beta_0 \ne 0,   (104)

satisfies

\|\tilde U(x,t) - \tilde u(x,t;n_1,q_0)\|_2 < \epsilon/3,  t_0 \le t \le t_1,  0 \le x \le 1,   (105)
and by (85), (101) and (105) one concludes

\|u(x,t) - \tilde u(x,t;n_1,q_0)\|_2 < \epsilon,  t_0 \le t \le t_1,  0 \le x \le 1.   (106)
Summarizing, the following result has been established.

Theorem 4.1. With the hypotheses and the notation of Theorem 3.1 and assuming that f \ne 0, let \rho, \delta and \lambda_1 be defined by (89). Let \epsilon > 0, t_0 > 0, D(t_0,t_1) = \{(x,t);\ 0 \le x \le 1,\ t_0 \le t \le t_1\}, and let n_1 be chosen so that (82) holds. Take the first positive integer q_0 satisfying (103). Then \tilde u(x,t;n_1,q_0) defined by (104) is an approximation of the exact series solution u(x,t) defined by (47), (52), (53) satisfying (106).
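The assembly of the approximation in Theorem 4.1 is mechanical once approximate eigenvalues are available. The following Python sketch is illustrative only: the matrix A, the eigenvalue list lams and the coefficient vectors Ds are hypothetical inputs, assumed precomputed as in (100) and (52); the truncated Taylor sum plays the role of the matrix exponential as in (104).

```python
import numpy as np

def taylor_exp(M, q):
    """Truncated Taylor sum sum_{k=0}^{q} M^k / k!, the inner sum of (104)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, q + 1):
        term = term @ M / k
        out = out + term
    return out

def u_tilde(x, t, lams, Ds, A, q0):
    """Approximation (104), case with no zero eigenvalue:
    sum over n of [sum_{k<=q0} (-lam_n^2 t A)^k / k!] X_n(x),
    where X_n(x) = (sin(lam_n x) - lam_n cos(lam_n x)) D_n."""
    total = np.zeros(A.shape[0])
    for lam, D in zip(lams, Ds):
        Xn = (np.sin(lam * x) - lam * np.cos(lam * x)) * D
        total = total + taylor_exp(-lam**2 * t * A, q0) @ Xn
    return total
```

For a fixed bounded subdomain the needed q_0 is supplied by (103); in practice one can also simply increase q0 until the result stops changing.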
Acknowledgements

This work has been supported by Generalitat Valenciana grants GV-CCN-1005796 and GV-97-CB1263, and by the Spanish D.G.I.C.Y.T. grant PB96-1321-CO2-02.
References

[1] M.H. Alexander, D.E. Manolopoulos, A stable linear reference potential algorithm for solution of the quantum close-coupled equations in molecular scattering theory, J. Chem. Phys. 86 (1987) 2044-2050.
[2] T.M. Apostol, Mathematical Analysis, Addison-Wesley, Reading, MA, 1977.
[3] F.V. Atkinson, Discrete and Continuous Boundary Value Problems, Academic Press, New York, 1964.
[4] F.V. Atkinson, A.M. Krall, G.K. Leaf, A. Zettl, On the numerical computation of eigenvalues of Sturm-Liouville problems with matrix coefficients, Tech. Rep., Argonne National Laboratory, 1987.
[5] S.L. Campbell, C.D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, 1979.
[6] E.A. Coddington, N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1967.
[7] J. Crank, The Mathematics of Diffusion, 2nd ed., Oxford University Press, Oxford, 1995.
[8] N. Dunford, J. Schwartz, Linear Operators, Part I, Interscience, New York, 1957.
[9] G.B. Folland, Fourier Analysis and its Applications, Wadsworth & Brooks, Pacific Grove, CA, 1992.
[10] R. Godement, Cours d'Algebre, Hermann, Paris, 1967.
[11] G. Golub, C.F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, 1989.
[12] L. Greenberg, A Prufer Method for Calculating Eigenvalues of Self-Adjoint Systems of Ordinary Differential Equations, Parts 1 and 2, Tech. Rep. TR91-24, University of Maryland.
[13] T. Hueckel, M. Borsetto, A. Peano, Modelling of coupled thermo-elastoplastic-hydraulic response of clays subjected to nuclear waste heat, in: R.W. Lewis, E. Hinton, P. Bettess, B.A. Schrefler (Eds.), Numerical Methods in Transient and Coupled Problems, Wiley, New York, 1987, pp. 213-235.
[14] E.L. Ince, Ordinary Differential Equations, Dover, New York, 1927.
[15] L. Jodar, E. Ponsoda, Continuous numerical solution and error bounds for time dependent systems of partial differential equations: mixed problems, Comput. Math. Appl. 29 (8) (1995) 63-71.
[16] R.D. Levine, M. Shapiro, B. Johnson, Transition probabilities in molecular collisions: computational studies of rotational excitation, J. Chem. Phys. 52 (1) (1970) 1755-1766.
[17] J.V. Lill, T.G. Schmalz, J.C. Light, Imbedded matrix Green's functions in atomic and molecular scattering theory, J. Chem. Phys. 78 (7) (1983) 4456-4463.
[18] M. Marletta, Theory and Implementation of Algorithms for Sturm-Liouville Systems, Ph.D. Thesis, Royal Military College of Science, Cranfield, 1991.
[19] V.S. Melezhik, I.V. Puzynin, T.P. Puzynina, L.N. Somov, Numerical solution of a system of integro-differential equations arising in the quantum-mechanical three-body problem with Coulomb interaction, J. Comput. Phys. 54 (1984) 221-236.
[20] M.D. Mikhailov, M.N. Ozisik, Unified Analysis and Solutions of Heat and Mass Diffusion, Wiley, New York, 1984.
[21] C.B. Moler, C.F. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Rev. 20 (1978) 801-836.
[22] F. Mrugala, D. Secrest, The generalized log-derivative method for inelastic and reactive collisions, J. Chem. Phys. 78 (10) (1983) 5954-5961.
[23] E. Navarro, E. Ponsoda, L. Jodar, A matrix approach to the analytic-numerical solution of mixed partial differential systems, Comput. Math. Appl. 30 (1) (1995) 99-109.
[24] J.D. Pryce, Numerical Solution of Sturm-Liouville Problems, Clarendon Press, Oxford, 1993.
[25] J.D. Pryce, M. Marletta, Automatic solution of Sturm-Liouville problems using the Pruess method, J. Comput. Appl. Math. 39 (1992) 57-78.
[26] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and its Applications, Wiley, New York, 1971.
[27] W.T. Reid, Ordinary Differential Equations, Wiley, New York, 1971.
[28] M. Shapiro, G.G. Balint-Kurti, A new method for the exact calculation of vibrational-rotational energy levels of triatomic molecules, J. Chem. Phys. 71 (3) (1979) 1461-1469.
[29] R.B. Sidje, Expokit: a software package for computing matrix exponentials, ACM Trans. Math. Software 24 (1998) 130-156.
[30] I. Stakgold, Green's Functions and Boundary Value Problems, Wiley, New York, 1979.
1. Introduction

Coupled partial differential systems with coupled boundary value conditions are frequent in quantum mechanical scattering problems [1,19,27,28], chemical physics [16,17,22], the modelling of the coupled thermoelastoplastic response of clays subjected to nuclear waste heat [13], and coupled diffusion problems [7,20,30]. The solution of these problems has motivated the study of vector and matrix Sturm-Liouville problems [3,4,12,18]. In this paper we study systems of the type

u_t(x,t) - A u_{xx}(x,t) = 0,  0 < x < 1,  t > 0,   (1)

u(0,t) + u_x(0,t) = 0,  t > 0,   (2)
Bu(1,t) + Cu_x(1,t) = 0,  t > 0,   (3)

u(x,0) = f(x),  0 \le x \le 1,   (4)

where the unknown u = (u_1,\dots,u_m)^T and f(x) = (f_1,\dots,f_m)^T are m-dimensional vectors and A, B, C are m × m complex matrices, elements of C^{m×m}. Mixed problems of the above type but with Dirichlet conditions u(0,t) = 0, u(1,t) = 0 instead of (2), (3) have been treated in [15,23]. Here we assume that A is a positive stable matrix,

Re(z) > 0 for all eigenvalues z of A,   (5)

and that the pencil B + \lambda C is regular, i.e., the determinant det(B + \lambda C) = |B + \lambda C| is not identically zero. Conditions on the function f(x) will be determined below in order to specify existence and well-posedness conditions.
The organization of the paper is as follows. In Section 2, vector eigenvalue differential problems of the type

X''(x) + \lambda^2 X(x) = 0,  0 < x < 1,  \lambda > 0,
X(0) + X'(0) = 0,   (6)
BA^j X(1) + CA^j X'(1) = 0,  0 \le j \le p-1,

are treated. Sufficient conditions for the existence of an appropriate sequence of eigenvalues and eigenfunctions, as well as some invariant properties of the problem, are studied. In Section 3 an exact series solution of problem (1)-(4) is obtained using the results of Section 2 and the separation of variables technique. Section 4 deals with the construction of an analytic-numerical solution of the problem with a prefixed accuracy in a bounded subdomain. The approximation is expressed in terms of the data and approximate eigenvalues of the underlying eigenvalue problem of type (6).
Throughout this paper, the set of all eigenvalues of a matrix C in C^{m×m} is denoted by \sigma(C), and its 2-norm, denoted by \|C\|, is defined by [11, p. 56]

\|C\| = \sup_{z \ne 0} \|Cz\|_2 / \|z\|_2,

where for a vector y in C^m, \|y\|_2 is the usual Euclidean norm of y. Let us introduce the notation \alpha(C) = \max\{Re(\omega);\ \omega \in \sigma(C)\} and \beta(C) = \min\{Re(\omega);\ \omega \in \sigma(C)\}. By [11, p. 556] it follows that

\|e^{tC}\| \le e^{t\alpha(C)} \sum_{k=0}^{m-1} \frac{(\|C\|\sqrt m\, t)^k}{k!},  t \ge 0.   (7)
If B is a matrix in C^{n×m} we denote by B^\dagger its Moore-Penrose pseudoinverse. An account of properties, examples and applications of this concept may be found in [5,26]. In particular, the kernel of B, denoted by Ker B, coincides with the image of the matrix I - B^\dagger B, denoted by Im(I - B^\dagger B). We say that a subspace E of C^m is invariant by the matrix A \in C^{m×m} if A(E) \subset E. The property A(Ker G) \subset Ker G is equivalent to the condition GA(I - G^\dagger G) = 0, since Ker G = Im(I - G^\dagger G); see [5]. The Moore-Penrose pseudoinverse of a matrix can be efficiently computed with the MATLAB package. The set of all real numbers will be denoted by R. The determinant of a matrix C \in C^{m×m} is denoted by |C| and the conjugate transpose of C is denoted by C^*. If z = a + ib is a complex number, we denote by \bar z = a - ib its conjugate. Finally, we denote by C^D the Drazin inverse of the matrix C \in C^{m×m}. We recall that C^D can be computed as a polynomial in C, and by [5, p. 129] it follows that

z \ne 0,  z \in \sigma(C)  if and only if  z^{-1} \in \sigma(C^D).   (8)

If C is invertible then C^D = C^{-1}. An account of properties, examples and algorithms for the computation of the Drazin inverse may be found in Ch. 7 of [5].
2. On a class of vector eigenvalue differential problems

Vector Sturm-Liouville systems of the form

-(P(x)y')' + Q(x)y = \lambda W(x)y,  a \le x \le b,
A_1^* y(a) + A_2^* P(a)y'(a) = 0,
B_1^* y(b) + B_2^* P(b)y'(b) = 0,

where P, Q and W are symmetric m × m matrix functions of x with P and W positive definite for all x \in [a,b], y is an m-vector function of x, \lambda is a scalar parameter, and A_1, A_2, B_1 and B_2 are m × m matrices such that (A_1, A_2), (B_1, B_2) are full rank m × 2m matrices, with A_1^* A_2 - A_2^* A_1 = 0, B_1^* B_2 - B_2^* B_1 = 0, have been recently treated in [3,4,12,18], and arise in a natural way in quantum mechanical applications. The recent literature on quantum mechanical scattering problems related to these problems includes [1,16,17,22]. In this section we consider vector eigenvalue differential problems of a different nature, in the sense that we admit more than two boundary value conditions and work under different hypotheses. If p \ge 1, we consider the vector problem

X''(x) + \lambda^2 X(x) = 0,  0 < x < 1,  \lambda \ge 0,
X(0) + X'(0) = 0,   (9)
BA^j X(1) + CA^j X'(1) = 0,  0 \le j \le p-1,

where A, B, C are matrices in C^{m×m} such that the matrix pencil B + \lambda C is regular, i.e.,

|B + \lambda C| is not identically zero.   (10)
Under hypothesis (10), since the determinant |B + \lambda C| is a polynomial of degree at most m, the matrix B + \lambda C is singular for at most m different values of \lambda. Hence

there exist complex numbers \alpha_0 such that B + \alpha_0 C is invertible.   (11)

Assume that there exists an eigenvalue \beta_0 of the matrix (B + \alpha_0 C)^{-1}C such that

(1 + \alpha_0)\beta_0 \ne 1   (12)

and

\beta_0 / (1 - \beta_0(1 + \alpha_0)) is a real number.   (13)

The general solution of the vector equation X'' + \lambda^2 X = 0 is given by

X(x) = \sin(\lambda x)D_\lambda + \cos(\lambda x)E_\lambda,  D_\lambda, E_\lambda \in C^m,  \lambda > 0;
X(x) = D_0 x + E_0,  D_0, E_0 \in C^m,  \lambda = 0.
Condition X(0) + X'(0) = 0 produces

X(x) = (\sin(\lambda x) - \lambda\cos(\lambda x))D_\lambda,  \lambda > 0;  X(x) = D_0(x - 1),  \lambda = 0.   (14)

By imposing the boundary value conditions BA^j X(1) + CA^j X'(1) = 0, 0 \le j \le p-1, one gets that the vector D_\lambda must satisfy

[(\lambda^2 C + B)\sin(\lambda) + \lambda(C - B)\cos(\lambda)]A^j D_\lambda = 0,  0 \le j \le p-1,  \lambda > 0,   (15)

and

CD_0 = 0,  \lambda = 0.   (16)

In order to obtain nonzero solutions of problem (9) for \lambda > 0, the vector D_\lambda must be nonzero. By (15) one gets the condition

H_\lambda = (\lambda^2 C + B)\sin(\lambda) + \lambda\cos(\lambda)(C - B)  is singular.   (17)
(17)
Under hypothesis (11) we can write
H = sin()[(2 − 0 )C + B + 0 C] + cos()[(0 + 1)C − (B + 0 C)];
¿ 0;
(18)
(B + 0 C)−1 H = sin()[(2 − 0 )(B + 0 C)−1 C + I ] + cos()[(0 + 1)(B + 0 C)−1 C − I ]:
(19)
Under hypothesis (12), taking values of ¿ 0 satisfying
sin()(1 + (2 − 0 )0 ) + cos()((1 + 0 )0 − 1) = 0
(20)
by (19) and the spectral mapping theorem [8, p. 569], the matrix (B + 0 C)−1 H is singular. Hence
H is singular for values of ¿ 0 satisfying (20). Note that by (13), if ¿ 0 satises Eq. (20) one
gets sin() 6= 0 and Eq. (20) is equivalent to
0
; ¿0:
(21)
cot() = a2 + 1 + a; a =
1 − 0 (1 + 0 )
It is easy to show that Eq. (21) has an infinite sequence of solutions \{\lambda_k\} whose location depends on the parameter a in the following way.

Case 1 (a > 0):
k\pi < \lambda_k < (2k+1)\pi/2,  k \ge 1.

Case 2 (a = 0):
\lambda_0 = 0  and  k\pi < \lambda_k < (2k+1)\pi/2,  k \ge 1.

Case 3 (-4/(4 + \pi^2) < a < 0): Here we have infinitely many subcases. Let

J = \bigcup_{j \ge 0} \dot J_j,  \dot J_j = \Bigl( \frac{-4}{4 + (2j+1)^2\pi^2}, \frac{-4}{4 + (2j+3)^2\pi^2} \Bigr),  j \ge 0,

so that the intervals \dot J_j together with their left endpoints cover (-4/(4 + \pi^2), 0). Then, if

a = \frac{-4}{4 + (2j+1)^2\pi^2},

one gets the sequence \{\lambda_{k,j}\}_{k \ge 1} of solutions of (21) satisfying

0 < \lambda_{1,j} < \pi/2,  k\pi < \lambda_{k+1,j} < (2k+1)\pi/2,  1 \le k < j,

and

\lambda_{j+1,j} = (2j+1)\pi/2,  (2k+1)\pi/2 < \lambda_{k+1,j} < (k+1)\pi,  k > j.

If a \in \dot J_j, j \ge 0, then the solution sequence \{\lambda_{k,j}\}_{k \ge 1} of (21) is located as follows:

0 < \lambda_{1,j} < \pi/2,  k\pi < \lambda_{k+1,j} < (2k+1)\pi/2,  1 \le k \le j+1,

and

(2k+1)\pi/2 < \lambda_{k+1,j} < (k+1)\pi,  k \ge j+2.

Case 4 (a = -4/(4 + \pi^2)):
\lambda_1 = \pi/2,  (2k-1)\pi/2 < \lambda_k < k\pi,  k \ge 2.

Case 5 (a < -4/(4 + \pi^2)):
(2k-1)\pi/2 < \lambda_k < k\pi,  k \ge 1.
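Eq. (21) is a scalar transcendental equation, so once a bracket from the case analysis above is known, each eigenvalue can be approximated by bisection. A minimal Python sketch for Case 1 (a > 0; the bracket (k\pi, (2k+1)\pi/2) is taken from the list above):

```python
import math

def solve_eq21(a, k, iters=200):
    """Bisection for cot(lam) = (a*lam**2 + 1 + a)/lam on the Case 1
    bracket (k*pi, (2k+1)*pi/2), valid for a > 0.  h(lam) tends to
    +infinity at k*pi from the right, and is negative at the right
    endpoint, where cot vanishes while the right-hand side is positive."""
    def h(lam):
        return math.cos(lam) / math.sin(lam) - (a * lam * lam + 1 + a) / lam
    lo = k * math.pi + 1e-9
    hi = (2 * k + 1) * math.pi / 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same routine adapts to the other cases by substituting the corresponding brackets; only the sign change of h across the bracket is used.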
The following lemma provides information about the eigenvalues \beta_0 of the matrix (B + \alpha_0 C)^{-1}C verifying (13).

Lemma 2.1. Let \alpha_0 be a complex number satisfying condition (11) and let \beta_0 be a complex number. Assume that \beta_0 satisfies one of the following conditions:
(i) \beta_0 = 0;
(ii) \beta_0 \ne 0 and \beta_0 = 1/(\mu_0 + i\,Im(\alpha_0)), where \mu_0 is a real eigenvalue of the matrix ((B + \alpha_0 C)^{-1}C)^D - i\,Im(\alpha_0)I satisfying

\mu_0 \ne 1 + Re(\alpha_0).   (22)

Then 1 - \beta_0(1 + \alpha_0) \ne 0 and \beta_0/(1 - (1 + \alpha_0)\beta_0) is a real number.

Proof. If \beta_0 = 0 the result is immediate. Under hypothesis (ii) it follows that

1/\beta_0 = \mu_0 + i\,Im(\alpha_0),   (23)

and 1/\beta_0 \in \sigma(((B + \alpha_0 C)^{-1}C)^D). By (8) it follows that \beta_0 \in \sigma((B + \alpha_0 C)^{-1}C). By (22) one gets 1 - \beta_0(1 + \alpha_0) = 1 - (\alpha_0 + 1)/(\mu_0 + i\,Im(\alpha_0)) \ne 0, and by (23)

\frac{\beta_0}{1 - (\alpha_0 + 1)\beta_0} - \frac{\bar\beta_0}{1 - (\bar\alpha_0 + 1)\bar\beta_0} = \frac{1/\bar\beta_0 - 1/\beta_0 - (\bar\alpha_0 - \alpha_0)}{[1/\beta_0 - (\alpha_0 + 1)][1/\bar\beta_0 - (\bar\alpha_0 + 1)]} = 0.

Thus the result is established.
Remark 1. Note that if \alpha_0 \in R then \beta_0 is any real eigenvalue of the matrix (B + \alpha_0 C)^{-1}C.

Let F(\alpha_0, \beta_0) be the eigenvalue set of problem (9) and note that 0 \in F(\alpha_0, \beta_0) if and only if \beta_0 = 0 is an eigenvalue of C, that is, if C is a singular matrix. If \lambda > 0 satisfies (21), by (20) one gets

1 - \lambda\cot(\lambda) = -\beta_0(\lambda^2 - \alpha_0 + (\alpha_0 + 1)\lambda\cot(\lambda)),   (24)

\frac{1}{\sin(\lambda)}H_\lambda = (\lambda^2 - \alpha_0 + (\alpha_0 + 1)\lambda\cot(\lambda))C + (1 - \lambda\cot(\lambda))(B + \alpha_0 C),

\frac{1}{\sin(\lambda)}H_\lambda = (\lambda^2 - \alpha_0 + \lambda\cot(\lambda)(1 + \alpha_0))[C - \beta_0(B + \alpha_0 C)].   (25)

Note that if \lambda \in F(\alpha_0, \beta_0) with \lambda > 0, then \lambda^2 - \alpha_0 + (\alpha_0 + 1)\lambda\cot(\lambda) \ne 0, because otherwise by (24) we would have 1 - \lambda\cot(\lambda) = 0, and then \lambda^2 + 1 = 0, contradicting \lambda > 0. Hence (25) can be written in the equivalent form

\frac{1}{(\lambda^2 - \alpha_0 + \lambda\cot(\lambda)(\alpha_0 + 1))\sin(\lambda)}H_\lambda = C - \beta_0(B + \alpha_0 C).   (26)
Let G(\alpha_0, \beta_0) be the matrix in C^{mp×m} defined by stacking the blocks

G(\alpha_0, \beta_0) = [ C - \beta_0(B + \alpha_0 C) ; [C - \beta_0(B + \alpha_0 C)]A ; \dots ; [C - \beta_0(B + \alpha_0 C)]A^{p-1} ].   (27)

Condition (15) is equivalent to the condition

G(\alpha_0, \beta_0)D_\lambda = 0,  \lambda \in F(\alpha_0, \beta_0).   (28)

Eq. (28) admits nonzero vector solutions D_\lambda \in C^m if

rank G(\alpha_0, \beta_0) < m.   (29)

Thus, if D_\lambda \ne 0 satisfies (28), the vector functions

X_\lambda(x) = [\sin(\lambda x) - \lambda\cos(\lambda x)]D_\lambda   (30)

are eigenfunctions of problem (9). If C is a singular matrix, then from (16), \lambda = 0 is also an eigenvalue of problem (9), and if CD_0 = 0, D_0 \ne 0, the function

X_0(x) = (x - 1)D_0,  D_0 \in C^m,  CD_0 = 0,   (31)

is an eigenfunction of problem (9). Suppose that

(1 + \alpha_0)\beta_0 = 1.   (32)

Substituting this condition into (20) one gets

(\lambda^2 + 1)\beta_0\sin(\lambda) = 0,   (33)

and, since \beta_0 \ne 0, by (33) it follows that

\sin(\lambda) = 0.   (34)
By (17) and the spectral mapping theorem [8, p. 569], the matrix H_\lambda is singular if and only if C - B is singular, and Eq. (28) takes the form

(C - B)A^j D_\lambda = 0,  0 \le j \le p-1,  \lambda > 0.   (35)

By (34) the positive eigenvalue set of problem (9) in this case is F(\alpha_0, \beta_0) = \{k\pi;\ k \ge 1\}, and the corresponding eigenfunctions are

X_k(x) = (\sin(k\pi x) - k\pi\cos(k\pi x))D_k,  D_k \ne 0,

G(\alpha_0, \beta_0)D_k = [ C - B ; (C - B)A ; \dots ; (C - B)A^{p-1} ] D_k = 0.   (36)
Summarizing, the following result has been established.

Theorem 2.1. Let p \ge 1 be an integer, assume that the pencil B + \lambda C is regular and let \alpha_0 be defined by (11). Assume that \beta_0 is an eigenvalue of (B + \alpha_0 C)^{-1}C satisfying (13) and that the matrix G(\alpha_0, \beta_0) defined by (27) satisfies (29). Then problem (9) admits a sequence of real non-negative eigenvalues F(\alpha_0, \beta_0). If \lambda \in F(\alpha_0, \beta_0) is an eigenvalue, the associated eigenfunction set is given by (14), where D_\lambda is a nonzero m-dimensional vector lying in Ker G(\alpha_0, \beta_0). The explicit expression for D_\lambda is given by

D_\lambda = (I - G(\alpha_0, \beta_0)^\dagger G(\alpha_0, \beta_0))S_\lambda,   (37)

where S_\lambda is a nonzero arbitrary vector in C^m.

Proof. The proof is a consequence of the previous comments and of Theorem 2.3.2 of [26], which provides the general solution of G(\alpha_0, \beta_0)D_\lambda = 0 in the form (37).
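Formula (37) is directly computable with any library providing the Moore-Penrose pseudoinverse. A small Python/NumPy sketch (the matrix G below is a hypothetical stand-in for G(\alpha_0, \beta_0); its kernel is two-dimensional, so (29) holds):

```python
import numpy as np

# Hypothetical G(alpha0, beta0) with rank 1 < m = 3.
G = np.array([[1.0, 1.0, 0.0],
              [2.0, 2.0, 0.0]])
m = G.shape[1]
P = np.eye(m) - np.linalg.pinv(G) @ G      # projector onto Ker G
S = np.array([0.3, -1.2, 2.0])             # arbitrary nonzero vector
D = P @ S                                  # formula (37): D lies in Ker G
```

Because P is the orthogonal projector onto Ker G, letting S range over C^m produces every solution of G D = 0, exactly as Theorem 2.1 states.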
The following result shows that the eigenvalues and eigenfunctions of problem (9) are independent of the chosen number \alpha_0 verifying (11). Properties (13) or (32), and (29), are also invariant.

Theorem 2.2. Let \alpha_0 \ne \alpha_1 be complex numbers such that B + \alpha_0 C and B + \alpha_1 C are invertible matrices in C^{m×m}. Then the following properties hold:
(i) If \beta_0 \in \sigma((B + \alpha_0 C)^{-1}C) then 1 - (\alpha_0 - \alpha_1)\beta_0 \ne 0 and

\beta_1 = \frac{\beta_0}{1 - \beta_0(\alpha_0 - \alpha_1)} \in \sigma((B + \alpha_1 C)^{-1}C).

(ii) If \alpha_0, \alpha_1, \beta_0, \beta_1 are defined as in (i), then (\alpha_0, \beta_0) satisfies (13) if and only if (\alpha_1, \beta_1) satisfies (13). Furthermore, the eigenvalues of (9) are invariant when (\alpha_1, \beta_1) replaces (\alpha_0, \beta_0) in conditions (11)-(13).
(iii) Eigenfunctions of problem (9) corresponding to the pair (\alpha_0, \beta_0) \in R^2 coincide with those associated to (\alpha_1, \beta_1) by (i). Furthermore, Ker((B + \alpha_0 C)^{-1}C - \beta_0 I) = Ker((B + \alpha_1 C)^{-1}C - \beta_1 I), and the matrices G(\alpha_0, \beta_0), G(\alpha_1, \beta_1) have the same rank.
Proof. (i) If \beta_0 = 0 \in \sigma((B + \alpha_0 C)^{-1}C), then the matrix C is singular and hence \beta_1 = 0 also belongs to \sigma((B + \alpha_1 C)^{-1}C). If \beta_0 \ne 0, by the properties of determinants one gets

0 = |(B + \alpha_0 C)^{-1}C - \beta_0 I| = |C - \beta_0(B + \alpha_0 C)| = |C - \beta_0(B + \alpha_1 C + (\alpha_0 - \alpha_1)C)| = |[1 - \beta_0(\alpha_0 - \alpha_1)]C - \beta_0(B + \alpha_1 C)|.   (38)

Since B + \alpha_1 C is invertible, by the last equation it follows that 1 - \beta_0(\alpha_0 - \alpha_1) \ne 0, because otherwise 0 = |\beta_0(B + \alpha_1 C)|, contradicting the invertibility of B + \alpha_1 C. By (38) we can write

0 = \Bigl| C - \frac{\beta_0}{1 - \beta_0(\alpha_0 - \alpha_1)}(B + \alpha_1 C) \Bigr| = |B + \alpha_1 C| \cdot \Bigl| (B + \alpha_1 C)^{-1}C - \frac{\beta_0}{1 - \beta_0(\alpha_0 - \alpha_1)}I \Bigr|.

Hence

\frac{\beta_0}{1 - \beta_0(\alpha_0 - \alpha_1)} = \beta_1 \in \sigma((B + \alpha_1 C)^{-1}C).

(ii) Note that if \beta_0 = 0 then \beta_1 = 0, and Eq. (20) is the same after replacing \alpha_0 by \alpha_1. Thus F(\alpha_0, 0) = F(\alpha_1, 0). If \beta_0 \ne 0, by part (i), \beta_1 \ne 0 and

\frac{\beta_1}{1 - (\alpha_1 + 1)\beta_1} = \frac{1}{1/\beta_1 - (\alpha_1 + 1)} = \frac{1}{1/\beta_0 - (\alpha_0 - \alpha_1) - (\alpha_1 + 1)} = \frac{1}{1/\beta_0 - (\alpha_0 + 1)} = \frac{\beta_0}{1 - \beta_0(\alpha_0 + 1)},

with

\frac{1 - \alpha_1\beta_1}{1 - (\alpha_1 + 1)\beta_1} = 1 + \frac{\beta_1}{1 - (\alpha_1 + 1)\beta_1} = 1 + \frac{\beta_0}{1 - (\alpha_0 + 1)\beta_0} = \frac{1 - \alpha_0\beta_0}{1 - (\alpha_0 + 1)\beta_0}.

Hence the coefficients of Eq. (20) are the same as the corresponding coefficients when \alpha_0 and \beta_0 are replaced by \alpha_1 and \beta_1, respectively. This proves that F(\alpha_0, \beta_0) = F(\alpha_1, \beta_1), and these eigenvalue sets of problem (9) are invariant when (\alpha_1, \beta_1) replaces (\alpha_0, \beta_0).

(iii) By Theorem 2.1 and parts (i) and (ii) of this theorem, one gets that the eigenfunctions of problem (9) are invariant when (\alpha_1, \beta_1) replaces (\alpha_0, \beta_0). The vectors D_\lambda appearing in (28) for F(\alpha_0, \beta_0) are the same as those appearing for F(\alpha_1, \beta_1). In order to prove this we show that Ker G(\alpha_0, \beta_0) = Ker G(\alpha_1, \beta_1). First we prove that Ker((B + \alpha_0 C)^{-1}C - \beta_0 I) = Ker((B + \alpha_1 C)^{-1}C - \beta_1 I). Let y \ne 0 be a vector in C^m such that ((B + \alpha_0 C)^{-1}C - \beta_0 I)y = 0. Hence

0 = [C - \beta_0(B + \alpha_0 C)]y = [C - \beta_0(B + \alpha_1 C + (\alpha_0 - \alpha_1)C)]y = [(1 - \beta_0(\alpha_0 - \alpha_1))C - \beta_0(B + \alpha_1 C)]y,

0 = \Bigl[ C - \frac{\beta_0}{1 - \beta_0(\alpha_0 - \alpha_1)}(B + \alpha_1 C) \Bigr] y = [C - \beta_1(B + \alpha_1 C)]y,

and y \in Ker[(B + \alpha_1 C)^{-1}C - \beta_1 I]. By the definition of G(\alpha_0, \beta_0) given by (27) we have

C - \beta_0(B + \alpha_0 C) = (1 - \beta_0(\alpha_0 - \alpha_1))[C - \beta_1(B + \alpha_1 C)],
G(\alpha_0, \beta_0) = (1 - \beta_0(\alpha_0 - \alpha_1))G(\alpha_1, \beta_1).   (39)

Hence G(\alpha_0, \beta_0)D = 0 if and only if G(\alpha_1, \beta_1)D = 0, and the result is established.
3. Construction of an exact series solution

Let us seek solutions v(x,t) of the boundary value problem (1)-(4) under hypotheses (11)-(13). The separation of variables technique suggests

v_\lambda(x,t) = T_\lambda(t)X_\lambda(x),  T_\lambda(t) \in C^{m×m},  X_\lambda(x) \in C^m,   (40)

where

T_\lambda'(t) + \lambda^2 AT_\lambda(t) = 0,  t \ge 0,  \lambda \ge 0,   (41)

X_\lambda''(x) + \lambda^2 X_\lambda(x) = 0,  0 < x < 1,
X_\lambda(0) + X_\lambda'(0) = 0,   (42)
BX_\lambda(1) + CX_\lambda'(1) = 0.

The solution of (41) satisfying T_\lambda(0) = I is T_\lambda(t) = \exp(-\lambda^2 At). Although v_\lambda(x,t) defined by (40) satisfies (1) and (2),

\frac{\partial}{\partial t}(v_\lambda(x,t)) - A\frac{\partial^2}{\partial x^2}(v_\lambda(x,t)) = T_\lambda'(t)X_\lambda(x) - AT_\lambda(t)X_\lambda''(x) = -\lambda^2 AT_\lambda(t)X_\lambda(x) + AT_\lambda(t)\lambda^2 X_\lambda(x) = 0,

v_\lambda(0,t) + \frac{\partial}{\partial x}(v_\lambda(0,t)) = T_\lambda(t)(X_\lambda(0) + X_\lambda'(0)) = 0,

condition (3) is not guaranteed, because

Bv_\lambda(1,t) + C\frac{\partial}{\partial x}(v_\lambda(1,t)) = BT_\lambda(t)X_\lambda(1) + CT_\lambda(t)X_\lambda'(1) = B\exp(-\lambda^2 At)X_\lambda(1) + C\exp(-\lambda^2 At)X_\lambda'(1)   (43)

and the last expression need not vanish, because the matrices B and C do not in general commute with A. However, if X_\lambda(x) satisfies (42) together with the condition

BA^j X_\lambda(1) + CA^j X_\lambda'(1) = 0,  1 \le j \le p-1,   (44)

where p is the degree of the minimal polynomial of A (that is, problem (9) for this value of p), then v_\lambda(x,t) defined by (40) satisfies (3). In fact, for each t \ge 0 the matrix exponential T_\lambda(t) = \exp(-\lambda^2 At) can be expressed as a matrix polynomial in A [8, p. 557],

T_\lambda(t) = \exp(-\lambda^2 At) = b_0(t)I + b_1(t)A + \dots + b_{p-1}(t)A^{p-1},   (45)
where b_j(t), 0 \le j \le p-1, are scalars. Under hypothesis (44), by (43) and (45) one gets

Bv_\lambda(1,t) + C\frac{\partial}{\partial x}(v_\lambda(1,t)) = \sum_{j=0}^{p-1} b_j(t)\{BA^j X_\lambda(1) + CA^j X_\lambda'(1)\} = 0,  t \ge 0.   (46)

Assume the hypotheses and notation of Theorem 2.1, let F(\alpha_0, \beta_0) be the eigenvalue set of problem (9), and consider the candidate series solution of problem (1)-(4) of the form

U(x,t) = X_0(x) + \sum_{n \ge 1} e^{-\lambda_n^2 At}X_n(x)  if 0 \in F(\alpha_0, \beta_0);
U(x,t) = \sum_{n \ge 1} e^{-\lambda_n^2 At}X_n(x)  if 0 \notin F(\alpha_0, \beta_0),   (47)

where

X_n(x) = (\sin(\lambda_n x) - \lambda_n\cos(\lambda_n x))D_n,  \lambda_n > 0,  n \ge 1;  X_0(x) = (x - 1)D_0,  \lambda_0 = 0.   (48)
Now we seek appropriate vectors D_n and D_0 in C^m so that the initial condition (4) holds true. By imposing (4) on (47) one gets that these vectors must satisfy

f(x) = (x - 1)D_0 + \sum_{n \ge 1}\{\sin(\lambda_n x) - \lambda_n\cos(\lambda_n x)\}D_n,  0 \in F(\alpha_0, \beta_0);
f(x) = \sum_{n \ge 1}\{\sin(\lambda_n x) - \lambda_n\cos(\lambda_n x)\}D_n,  0 \notin F(\alpha_0, \beta_0).   (49)
Let f = (f_1, f_2, \dots, f_m)^T and consider, for 1 \le j \le m, the scalar regular Sturm-Liouville problem

X_j''(x) + \lambda^2 X_j(x) = 0,  0 < x < 1,  \lambda > 0,
X_j(0) + X_j'(0) = 0,   (50)
(1 - \alpha_0\beta_0)X_j(1) + \beta_0 X_j'(1) = 0.

It is easy to check that the eigenvalue set of problem (50) is F(\alpha_0, \beta_0) and that its set of eigenfunctions is given by (48), substituting the vectors D_n, D_0 by scalars d_{n,j}, d_{0,j}, respectively, 1 \le j \le m. In order to guarantee well-posedness let us assume that

f_j(x) is twice continuously differentiable in [0,1], with f_j(0) + f_j'(0) = 0 and (1 - \alpha_0\beta_0)f_j(1) + \beta_0 f_j'(1) = 0,   (51)
and let

d_{n,j} = \frac{\int_0^1 f_j(x)(\sin(\lambda_n x) - \lambda_n\cos(\lambda_n x))\,dx}{\int_0^1 (\sin(\lambda_n x) - \lambda_n\cos(\lambda_n x))^2\,dx},  n \ge 1,   (52)

d_{0,j} = \frac{\int_0^1 f_j(x)(x - 1)\,dx}{\int_0^1 (x - 1)^2\,dx},  \lambda_0 = 0.   (53)
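The coefficients (52) are ordinary quadratures once the eigenvalues are known. A Python sketch (illustrative; lam_n is an assumed, already computed eigenvalue, and the trapezoidal rule stands in for whatever quadrature one prefers):

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule, kept explicit for portability
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def coeff(fj, lam_n, npts=2001):
    """Quadrature sketch of (52): coefficient of f_j against the
    eigenfunction phi_n(x) = sin(lam_n x) - lam_n cos(lam_n x)."""
    x = np.linspace(0.0, 1.0, npts)
    phi = np.sin(lam_n * x) - lam_n * np.cos(lam_n * x)
    return trap(fj(x) * phi, x) / trap(phi * phi, x)
```

The denominator is the quantity I(\lambda_n) that reappears in the error analysis of Section 4, so it is worth returning it as well in a real implementation.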
Note that 0 \in F(\alpha_0, \beta_0) means that \beta_0 = 0, or that C is singular. By the convergence theorem for series of Sturm-Liouville functions [14, Ch. 11; 9, p. 90] one gets (49), and the series in (49) is uniformly convergent to f(x) in [0,1] for each j with 1 \le j \le m. Now we study conditions so that the vectors D_n = (d_{n,1}, d_{n,2}, \dots, d_{n,m})^T, D_0 = (d_{0,1}, d_{0,2}, \dots, d_{0,m})^T defined by (52)-(53) satisfy (28) and (31), respectively. Assume that

(C - \beta_0(B + \alpha_0 C))f(x) = 0,  0 \le x \le 1,   (54)

and

Ker(C - \beta_0(B + \alpha_0 C)) is invariant by the matrix A.   (55)

Taking into account that under hypothesis (10) we always have real values \alpha_0 satisfying (11), from Theorems 2.1 and 2.2, without loss of generality we may assume \alpha_0 \in R and that \beta_0 is a real eigenvalue of (B + \alpha_0 C)^{-1}C. Under hypotheses (54) and (55) one gets

G(\alpha_0, \beta_0)D_n = 0,  \lambda_n \in F(\alpha_0, \beta_0),  n \ge 0.   (56)

Indeed, each D_n is an integral of values of f(x) against a scalar weight, so (54) gives [C - \beta_0(B + \alpha_0 C)]D_n = 0, and (55) then yields [C - \beta_0(B + \alpha_0 C)]A^j D_n = 0 for 0 \le j \le p-1.
Finally, we prove that under hypothesis (5) the series (47), with coefficients defined by (52)-(53), is a solution of problem (1)-(4). By inequality (7) it is easy to prove that in any set

D(t_0) = \{(x,t);\ 0 \le x \le 1,\ t \ge t_0 > 0\},

the series (47), as well as those obtained from it by termwise partial differentiation twice with respect to x and once with respect to t, namely

\sum_{n \ge 1} \lambda_n^2 e^{-\lambda_n^2 At}X_n(x),  \sum_{n \ge 1} (-\lambda_n^2)Ae^{-\lambda_n^2 At}X_n(x),

are uniformly convergent in D(t_0). By the differentiation theorem for functional series [2, p. 403], the series defined by (47), (52), (53) is twice partially differentiable with respect to x and once with respect to t, and satisfies (1)-(4). Summarizing, by the convergence theorems for Sturm-Liouville series expansions [9,14], the following result has been established.
Theorem 3.1. With the hypotheses and the notation of Theorem 2.1, assume that f(x) satisfies (51) and (54), that A is a positive stable matrix, and that

[C - \beta_0(B + \alpha_0 C)]A\{I - [C - \beta_0(B + \alpha_0 C)]^\dagger [C - \beta_0(B + \alpha_0 C)]\} = 0.   (57)

Then U(x,t) defined by (47), (52), (53) is a solution of problem (1)-(4).

Proof. By the previous comments and the equivalence of conditions (57) and (55) the result is established.
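Condition (57) is easy to test numerically for given data. A hedged Python sketch (the matrices passed in are hypothetical placeholders; the tolerance is a pragmatic choice, not part of the theorem):

```python
import numpy as np

def invariance_holds(A, B, C, alpha0, beta0, tol=1e-10):
    """Numerical test of condition (57): with N = C - beta0*(B + alpha0*C),
    checks N A (I - pinv(N) N) = 0, i.e. that Ker N is invariant by A."""
    N = C - beta0 * (B + alpha0 * C)
    P = np.eye(A.shape[0]) - np.linalg.pinv(N) @ N   # projector onto Ker N
    return bool(np.linalg.norm(N @ A @ P) < tol)
```

Since I - N^\dagger N projects onto Ker N, the norm above vanishes exactly when A maps Ker N into itself, which is the equivalence of (57) and (55) used in the proof.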
Now we construct a series solution of problem (1)-(4) under weaker hypotheses on the function f(x) appearing in (4). Assume that, apart from hypothesis (11),

there exist k distinct real eigenvalues \beta_0(i) of (B + \alpha_0 C)^{-1}C,  1 \le i \le k.   (58)
Let R and R_i be the matrices defined by

R = \prod_{j=1}^{k} [(B + \alpha_0 C)^{-1}C - \beta_0(j)I],  R_i = \prod_{j=1, j \ne i}^{k} [(B + \alpha_0 C)^{-1}C - \beta_0(j)I],  1 \le i \le k.   (59)

If E = Ker R, then by the decomposition theorem [10, p. 536] we have

E = Ker[(B + \alpha_0 C)^{-1}C - \beta_0(1)I] \oplus \dots \oplus Ker[(B + \alpha_0 C)^{-1}C - \beta_0(k)I].   (60)

Note that the polynomials

Q_i(x) = \prod_{j=1, j \ne i}^{k} (x - \beta_0(j)),  1 \le i \le k,

are coprime, and by Bezout's theorem [10, p. 538] there exist numbers \gamma_1, \gamma_2, \dots, \gamma_k such that

1 = \sum_{i=1}^{k} \gamma_i Q_i(x) = Q(x).

Taking x = \beta_0(i) one gets

\gamma_i = \prod_{j=1, j \ne i}^{k} (\beta_0(i) - \beta_0(j))^{-1}.

Q(x) is the Lagrange interpolating polynomial and I = \sum_{i=1}^{k} \gamma_i R_i. Hence one gets the decomposition

f(x) = \sum_{i=1}^{k} \gamma_i R_i f(x) = \sum_{i=1}^{k} g_i(x),  g_i(x) = \gamma_i R_i f(x).   (61)

If Rf(x) = 0, 0 \le x \le 1, then g_i(x) is the projection of f(x) on the subspace Ker((B + \alpha_0 C)^{-1}C - \beta_0(i)I), since

[(B + \alpha_0 C)^{-1}C - \beta_0(i)I]g_i(x) = \gamma_i Rf(x).   (62)

Under the hypothesis

Rf(x) = 0,  0 \le x \le 1,   (63)

by (62) it follows that

[(B + \alpha_0 C)^{-1}C - \beta_0(i)I]g_i(x) = 0,  0 \le x \le 1.   (64)

Assume that g_i(x) defined by (61) satisfies:

g_i(x) is twice continuously differentiable in [0,1], with
g_i(0) + g_i'(0) = 0,  (1 - \alpha_0\beta_0(i))g_i(1) + \beta_0(i)g_i'(1) = 0,  1 \le i \le k,   (65)
and

[C - \beta_0(i)(B + \alpha_0 C)]A\{I - [C - \beta_0(i)(B + \alpha_0 C)]^\dagger [C - \beta_0(i)(B + \alpha_0 C)]\} = 0,  1 \le i \le k.   (66)

Under these conditions, (11) and the positive stability of the matrix A, a series solution u_i(x,t) of the problem

u_t(x,t) - Au_{xx}(x,t) = 0,  0 < x < 1,  t > 0,
u(0,t) + u_x(0,t) = 0,  t > 0,
Bu(1,t) + Cu_x(1,t) = 0,  t > 0,
u(x,0) = g_i(x),  0 \le x \le 1,   (P_i)

is given by Theorem 3.1. By (61), the function

U(x,t) = \sum_{i=1}^{k} u_i(x,t)   (67)

is a solution of problem (1)-(4). Summarizing, the following result has been established.

Theorem 3.2. Let A be a positive stable matrix, assume that the pencil B + \lambda C is regular, and let \alpha_0 be a real number satisfying (11). Suppose that the matrix (B + \alpha_0 C)^{-1}C has k different real eigenvalues \beta_0(1), \beta_0(2), \dots, \beta_0(k) such that condition (66) holds for 1 \le i \le k. Let R and R_i be the matrices defined by (59) and let E be the subspace defined by (60). Let f(x) be a twice continuously differentiable function on [0,1] satisfying (63) and

R_i f(0) + R_i f'(0) = 0,  (1 - \alpha_0\beta_0(i))R_i f(1) + \beta_0(i)R_i f'(1) = 0,  1 \le i \le k.   (68)

Then conditions (65) hold true, and problem (1)-(4) admits a solution given by (67), where u_i(x,t) is the solution of problem (P_i) constructed by Theorem 3.1.
Example 3.1. Consider problem (1)-(4) where

A = [ 2, -2, 1, 0 ; 0, 0, 1, 0 ; 0, -1, 2, 0 ; 1, 0, 0, 1 ],
B = [ 0, 0, 0, 1 ; 0, 0, 1, 0 ; 0, 1, 0, 0 ; 1, 0, 0, 0 ],
C = [ -2, 1, 1, 2 ; 0, 0, 2, 0 ; 0, 2, 0, 0 ; 0, 2, 0, 0 ]

(rows separated by semicolons). Since B is invertible, the pencil B + \lambda C is regular and condition (11) is satisfied with \alpha_0 = 0. Let

H = (B + \alpha_0 C)^{-1}C = B^{-1}C = [ 0, 2, 0, 0 ; 0, 2, 0, 0 ; 0, 0, 2, 0 ; -2, 1, 1, 2 ]

and note that \sigma(H) = \{0, 2\}. If \beta_0(1) = 0, \beta_0(2) = 2, then we have

\beta_0(1)(\alpha_0 + 1) - 1 = -1 \ne 0,  \beta_0(2)(\alpha_0 + 1) - 1 = 1 \ne 0.

In this case the hypotheses (12) and (13) hold true for \beta_0(i), i = 1, 2, and

R_1 = H - 2I = [ -2, 2, 0, 0 ; 0, 0, 0, 0 ; 0, 0, 0, 0 ; -2, 1, 1, 0 ],
R_2 = H,
R = R_1 R_2 = [ 0, 0, 0, 0 ; 0, 0, 0, 0 ; 0, 0, 0, 0 ; 0, -2, 2, 0 ].

Condition (63) for f = (f_1, f_2, f_3, f_4)^T takes the form

f_2(x) = f_3(x),  0 \le x \le 1,   (69)

and the projections g_i(x), i = 1, 2, are

g_1(x) = -\tfrac{1}{2}R_1 f(x) = (f_1(x) - f_2(x), 0, 0, f_1(x) - f_2(x))^T,
g_2(x) = \tfrac{1}{2}R_2 f(x) = (f_2(x), f_2(x), f_2(x), -f_1(x) + f_2(x) + f_4(x))^T.

In this case we have

R_1^\dagger = [ 0, 0, 0, -1/3 ; 1/2, 0, 0, -1/3 ; -1/2, 0, 0, 2/3 ; 0, 0, 0, 0 ],
R_2^\dagger = [ 1/16, 1/16, 1/8, -1/4 ; 1/4, 1/4, 0, 0 ; 0, 0, 1/2, 0 ; -1/16, -1/16, -1/8, 1/4 ],

and

R_1 A(I - R_1^\dagger R_1) = 0,  R_2 A(I - R_2^\dagger R_2) = 0,

and thus the subspaces Ker(H - 2I) and Ker H are invariant by the matrix A. Note that A is positive stable because \sigma(A) = \{1, 2\}. Condition (68) in this case takes the form

f(0) + f'(0) = 0,
f_1(1) = f_2(1),  f_2(1) + 2f_2'(1) = 0,   (70)
f_1(1) + 2f_1'(1) = f_4(1) + 2f_4'(1).

Thus, for functions f(x) twice continuously differentiable on [0,1] satisfying (69) and (70), Theorem 3.2 provides a solution of problem (1)-(4).
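The computations in Example 3.1 are easy to check numerically; the following NumPy sketch reproduces H, R_1, R_2 and the two invariance conditions:

```python
import numpy as np

A = np.array([[2., -2., 1., 0.],
              [0.,  0., 1., 0.],
              [0., -1., 2., 0.],
              [1.,  0., 0., 1.]])
B = np.array([[0., 0., 0., 1.],
              [0., 0., 1., 0.],
              [0., 1., 0., 0.],
              [1., 0., 0., 0.]])
C = np.array([[-2., 1., 1., 2.],
              [ 0., 0., 2., 0.],
              [ 0., 2., 0., 0.],
              [ 0., 2., 0., 0.]])
H = np.linalg.solve(B, C)       # H = B^{-1} C
R1 = H - 2.0 * np.eye(4)        # R_1 = H - 2I
R2 = H.copy()                   # R_2 = H

def defect(R):
    # size of R A (I - pinv(R) R); zero when Ker R is invariant by A
    P = np.eye(4) - np.linalg.pinv(R) @ R
    return np.linalg.norm(R @ A @ P)
```

The two defects vanish to rounding error, confirming condition (66) for both eigenvalues, and R = R_1 R_2 annihilates exactly the vectors with equal second and third components, which is condition (69).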
4. Analytic-numerical solutions with a priori error bounds

The series solution provided in Section 3 presents some computational drawbacks. Firstly, there is the infiniteness of the series. Secondly, the eigenvalues are not exactly computable, in spite of well-known efficient algorithms; see [18,24,25]. Finally, the computation of matrix exponentials is not an easy task [21,29]. In this section we address the following question: given an admissible error \epsilon > 0 and a domain D(t_0,t_1) = \{(x,t);\ 0 \le x \le 1,\ t_0 \le t \le t_1\} with 0 < t_0 < t_1, how to construct an approximation avoiding the above inconveniences and whose error with respect to the exact solution is less than \epsilon uniformly in D(t_0,t_1). It is sufficient to develop the approach when the exact series solution is given by (67) with k = 1.
With the notation of Section 3 we have $\|D_n\|^2 = \sum_{j=1}^m |D_{n,j}|^2$, and by Parseval's inequality [3, p. 223], [6] one gets

$$|D_{n,j}|^2 \le \int_0^1 |f_j(x)|^2\,dx,\qquad n\ge 0,\ 1\le j\le m, \tag{71}$$

$$\|D_n\|^2 \le \sum_{j=1}^m \int_0^1 |f_j(x)|^2\,dx = \int_0^1 \|f(x)\|_2^2\,dx = M,\qquad n\ge 0. \tag{72}$$

By (48) and (71) one gets

$$\|X_n(x)\|_2 \le (1+\lambda_n)M^{1/2},\qquad n\ge 0. \tag{73}$$
By (7), for t₀ ≤ t ≤ t₁ it follows that

$$\|e^{-\lambda_n^2At}\|_2 \le e^{-\alpha(A)t_0\lambda_n^2}\sum_{j=0}^{m-1}\frac{(\|A\|t_1\sqrt m)^j\,\lambda_n^{2j}}{j!}, \tag{74}$$

where α(A) is the constant appearing in (7). Let φ_k and ψ_k be the scalar functions defined for s > 0 by

$$\varphi_k(s) = e^{-s^2\alpha(A)t_0}s^k,\qquad \psi_k(s) = (k+2)\ln(s)-s^2\alpha(A)t_0,\qquad 0\le k\le 2m-1. \tag{75}$$
Note that

$$\psi_k'(s) < 0\quad\text{for } s > \left(\frac{k+2}{t_0\,\alpha(A)}\right)^{1/2} = s_k,\qquad 0\le k\le 2m-1. \tag{76}$$

Take $s_k'\ge s_k$ such that

$$(k+2)\ln(s)-s^2\alpha(A)t_0 < 0,\qquad s\ge s_k'; \tag{77}$$

then by (77) it follows that

$$\varphi_k(s) = s^k e^{-s^2\alpha(A)t_0} < s^{-2},\qquad s\ge s_k',\ 0\le k\le 2m-1. \tag{78}$$

Since $\lim_{n\to+\infty}\lambda_n = +\infty$ and $\lambda_n < \lambda_{n+1}$, let n₀ be the first positive integer so that

$$\lambda_{n_0} > s^\star = \max\{s_k';\ 0\le k\le 2m-1\}. \tag{79}$$
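As a concrete illustration, the integers $s_k'$ of (77) can be found by a direct search; the following is a minimal sketch (the function name and the integer search are ours, and α(A), t₀ are assumed to be given positive constants):

```python
import math

def s_threshold(k, alpha, t0):
    """Return an integer s' >= s_k with (k+2)*ln(s) - s**2*alpha*t0 < 0 for s >= s',
    so that phi_k(s) < s**-2 there (cf. (76)-(78))."""
    s_k = math.sqrt((k + 2) / (t0 * alpha))   # stationary-point bound from (76)
    sp = max(1.0, math.ceil(s_k))
    # psi_k is decreasing beyond s_k and tends to -infinity, so the loop terminates
    while (k + 2) * math.log(sp) - sp * sp * alpha * t0 >= 0:
        sp += 1.0
    return sp
```

Since ψ_k is decreasing beyond s_k, every s ≥ s′ returned by the search also satisfies (77).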
138
L. Jodar et al. / Journal of Computational and Applied Mathematics 104 (1999) 123–143
By (72)–(79) it follows that

$$\begin{aligned}
\|e^{-\lambda_n^2At}X_n(x)\|_2 &= \|e^{-\lambda_n^2At}(\sin(\lambda_nx)-\lambda_n\cos(\lambda_nx))D_n\|_2\\
&\le M^{1/2}\sum_{j=0}^{m-1}\frac{(\varphi_{2j}(\lambda_n)+\varphi_{2j+1}(\lambda_n))(\|A\|t_1\sqrt m)^j}{j!}\\
&\le \lambda_n^{-2}\,2M^{1/2}\sum_{j=0}^{m-1}\frac{(\|A\|t_1\sqrt m)^j}{j!} = \lambda_n^{-2}L,\qquad 0\le x\le 1,\ t_0\le t\le t_1,\ n\ge n_0,
\end{aligned} \tag{80}$$

where

$$L = 2M^{1/2}\sum_{j=0}^{m-1}\frac{(\|A\|t_1\sqrt m)^j}{j!}.$$

Hence, for $n_1\ge n_0$,

$$\sum_{n>n_1}\|e^{-\lambda_n^2At}X_n(x)\|_2 \le L\sum_{n>n_1}\lambda_n^{-2} \le \frac{4L}{\pi^2}\sum_{n>n_1}n^{-2},$$

because in each of the five cases quoted in Section 2 the eigenvalues λₙ satisfy

$$\lambda_n > n\pi-\pi \ge \frac{n\pi}{2},\qquad n\ge 2. \tag{81}$$

Since $\sum_{n\ge 1}n^{-2} = \pi^2/6$, taking $n_1 > n_0$ so that

$$\frac{\pi^2}{6}-\sum_{n=1}^{n_1}n^{-2} < \frac{\pi^2\varepsilon}{12L}, \tag{82}$$

by (80) and (82) one gets

$$\Big\|\sum_{n>n_1}e^{-\lambda_n^2At}X_n(x)\Big\|_2 < \frac{\varepsilon}{3},\qquad 0\le x\le 1,\ t_0\le t\le t_1. \tag{83}$$
Thus the finite sum

$$U(x,t;n_1) = \begin{cases} X_0(x)+\displaystyle\sum_{n=1}^{n_1}e^{-\lambda_n^2At}X_n(x), & \lambda_0=0,\\[4pt] \displaystyle\sum_{n=1}^{n_1}e^{-\lambda_n^2At}X_n(x), & \lambda_0\ne 0,\end{cases} \tag{84}$$

satisfies

$$\|U(x,t)-U(x,t;n_1)\|_2 < \frac{\varepsilon}{3},\qquad 0\le x\le 1,\ t_0\le t\le t_1, \tag{85}$$
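The selection of the truncation index n₁ can be automated. A minimal sketch, assuming the tail condition in the reconstructed form $\pi^2/6-\sum_{n\le n_1}n^{-2} < \pi^2\varepsilon/(12L)$ of (82) (the function name is ours):

```python
import math

def truncation_index(eps, L, n0):
    """Smallest n1 > n0 such that pi**2/6 - sum_{n<=n1} n**-2 < pi**2*eps/(12*L),
    which makes the truncation error of the series smaller than eps/3 (cf. (80)-(83))."""
    target = math.pi ** 2 * eps / (12.0 * L)
    partial, n = 0.0, 0
    while True:
        n += 1
        partial += 1.0 / n ** 2
        if n > n0 and math.pi ** 2 / 6.0 - partial < target:
            return n
```

The larger the constant L (i.e. the data and the time horizon t₁), the more terms must be retained for the same tolerance ε.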
but it has the imperfection of requiring the exact computation of the eigenvalues λ₁, λ₂, …, λ_{n₁}. Now we study the admissible tolerance when one approximates these eigenvalues λᵢ by λ̃ᵢ, 1 ≤ i ≤ n₁, in expression (84):

$$\tilde U(x,t;n_1) = \begin{cases} X_0(x)+\displaystyle\sum_{n=1}^{n_1}e^{-\tilde\lambda_n^2At}\tilde X_n(x), & \lambda_0=0,\\[4pt] \displaystyle\sum_{n=1}^{n_1}e^{-\tilde\lambda_n^2At}\tilde X_n(x), & \lambda_0\ne 0,\end{cases} \tag{86}$$

with

$$\tilde X_n(x) = \{\sin(\tilde\lambda_nx)-\tilde\lambda_n\cos(\tilde\lambda_nx)\}\tilde D_n,\qquad 0\le x\le 1,\ t_0\le t\le t_1.$$
Note that

$$\begin{aligned}
e^{-\tilde\lambda_n^2At}\tilde X_n(x)-e^{-\lambda_n^2At}X_n(x)
&= (e^{-\tilde\lambda_n^2At}-e^{-\lambda_n^2At})\{\sin(\tilde\lambda_nx)-\tilde\lambda_n\cos(\tilde\lambda_nx)\}\tilde D_n\\
&\quad + e^{-\lambda_n^2At}\{\sin(\tilde\lambda_nx)-\tilde\lambda_n\cos(\tilde\lambda_nx)-\sin(\lambda_nx)+\lambda_n\cos(\lambda_nx)\}\tilde D_n\\
&\quad + e^{-\lambda_n^2At}\{\sin(\lambda_nx)-\lambda_n\cos(\lambda_nx)\}(\tilde D_n-D_n).
\end{aligned} \tag{87}$$
Let I(λ) be defined for λ > 0 by

$$I(\lambda) = \int_0^1 (\sin(\lambda x)-\lambda\cos(\lambda x))^2\,dx = 1-\sin^2(\lambda)+(\lambda^2-1)\left(\frac12+\frac{\sin(2\lambda)}{4\lambda}\right) \tag{88}$$

and let ρ > 0, β > 0, δ₁ > 0 be chosen so that

$$\inf\{I(\lambda);\ \lambda=\lambda_n\ \text{or}\ \lambda=\tilde\lambda_n,\ 1\le n\le n_1\} \ge \rho^{-1},\qquad \max\{\lambda_n,\tilde\lambda_n;\ 1\le n\le n_1\}\le\beta,\qquad 0<\delta_1<\min(\lambda_1,\tilde\lambda_1). \tag{89}$$

Note that, depending on the five cases quoted in Section 2 and the interval where the eigenvalues are located, these constants ρ, β and δ₁ are always available. It is easy to show that

$$|\sin(\tilde\lambda_nx)-\tilde\lambda_n\cos(\tilde\lambda_nx)-\sin(\lambda_nx)+\lambda_n\cos(\lambda_nx)| \le |\lambda_n-\tilde\lambda_n|(2+\beta),\qquad |\sin(\tilde\lambda_nx)-\tilde\lambda_n\cos(\tilde\lambda_nx)| \le 1+\tilde\lambda_n,\qquad 0\le x\le 1. \tag{90}$$
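The closed form in (88) is easy to check numerically; a small sketch comparing it with a quadrature of the defining integral (all names are ours):

```python
import math

def I_closed(lam):
    """Closed form of I(lambda) from (88)."""
    return 1 - math.sin(lam) ** 2 + (lam ** 2 - 1) * (0.5 + math.sin(2 * lam) / (4 * lam))

def I_quadrature(lam, n=20000):
    """Composite midpoint rule for the integral defining I(lambda)."""
    h = 1.0 / n
    return h * sum((math.sin(lam * (i + 0.5) * h) - lam * math.cos(lam * (i + 0.5) * h)) ** 2
                   for i in range(n))
```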
By (52) one gets

$$D_{n,j}-\tilde D_{n,j} = \frac{\big(I(\tilde\lambda_n)-I(\lambda_n)\big)\int_0^1 f_j(x)\{\sin(\tilde\lambda_nx)-\tilde\lambda_n\cos(\tilde\lambda_nx)\}\,dx}{I(\lambda_n)I(\tilde\lambda_n)} + \frac{\int_0^1 f_j(x)\{\sin(\lambda_nx)-\lambda_n\cos(\lambda_nx)-\sin(\tilde\lambda_nx)+\tilde\lambda_n\cos(\tilde\lambda_nx)\}\,dx}{I(\lambda_n)}. \tag{91}$$
By the Cauchy–Schwarz inequality for integrals it follows that

$$\int_0^1 |f_j(x)\{\sin(\tilde\lambda_nx)-\tilde\lambda_n\cos(\tilde\lambda_nx)\}|\,dx \le \left(\int_0^1 |f_j(x)|^2\,dx\right)^{1/2}(I(\tilde\lambda_n))^{1/2} \tag{92}$$
and by (90) one gets

$$\int_0^1 |f_j(x)\{\sin(\lambda_nx)-\lambda_n\cos(\lambda_nx)-\sin(\tilde\lambda_nx)+\tilde\lambda_n\cos(\tilde\lambda_nx)\}|\,dx \le |\lambda_n-\tilde\lambda_n|(2+\beta)\left(\int_0^1|f_j(x)|^2\,dx\right)^{1/2}. \tag{93}$$
By (91)–(93) it follows that

$$|D_{n,j}-\tilde D_{n,j}| \le \left\{\frac{|I(\lambda_n)-I(\tilde\lambda_n)|}{(I(\tilde\lambda_n))^{1/2}}+|\lambda_n-\tilde\lambda_n|(2+\beta)\right\}\frac{\big(\int_0^1|f_j(x)|^2\,dx\big)^{1/2}}{I(\lambda_n)}.$$

Note that

$$I(\lambda_n)-I(\tilde\lambda_n) = \int_0^1\{\sin\lambda_nx-\lambda_n\cos\lambda_nx+\sin\tilde\lambda_nx-\tilde\lambda_n\cos\tilde\lambda_nx\}\{\sin\lambda_nx-\lambda_n\cos\lambda_nx-\sin\tilde\lambda_nx+\tilde\lambda_n\cos\tilde\lambda_nx\}\,dx,$$

and by (89), (90) one gets

$$|I(\lambda_n)-I(\tilde\lambda_n)| \le 2(1+\beta)(2+\beta)\,|\lambda_n-\tilde\lambda_n|,$$

$$|D_{n,j}-\tilde D_{n,j}| \le 4(\rho^{1/2}+1)\rho(1+\beta)^2\left(\int_0^1|f_j(x)|^2\,dx\right)^{1/2}|\lambda_n-\tilde\lambda_n|,\qquad 1\le n\le n_1,\ 1\le j\le m,$$

$$\|D_n-\tilde D_n\| \le 4(\rho^{1/2}+1)\rho(1+\beta)^2\left(\int_0^1\|f(x)\|_2^2\,dx\right)^{1/2}|\lambda_n-\tilde\lambda_n|,\qquad 1\le n\le n_1. \tag{94}$$
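The sensitivity of the coefficients to the eigenvalue perturbation can be observed numerically. The sketch below mirrors the normalized coefficient formula used in (91) by quadrature and can be checked against a bound of the reconstructed form (94); the routine names are ours, and f below is a sample datum, not one from the paper:

```python
import math

def g(lam, x):
    """Eigenfunction factor sin(lam*x) - lam*cos(lam*x)."""
    return math.sin(lam * x) - lam * math.cos(lam * x)

def I_num(lam, n=4000):
    """Midpoint quadrature of the normalization integral I(lam) (cf. (88))."""
    h = 1.0 / n
    return h * sum(g(lam, (i + 0.5) * h) ** 2 for i in range(n))

def D_coeff(lam, f, n=4000):
    """Fourier-type coefficient of f against g(lam, .), normalized by I(lam) (cf. (52))."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) * g(lam, (i + 0.5) * h) for i in range(n)) / I_num(lam, n)
```

Perturbing lam by |λₙ − λ̃ₙ| changes the coefficient by no more than the right-hand side of (94).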
By (90) we also have

$$\|\tilde D_n\| \le 2(1+\tilde\lambda_n)\int_0^1\|f(x)\|_2\,dx \le 2(1+\tilde\lambda_n)\left(\int_0^1\|f(x)\|_2^2\,dx\right)^{1/2},\qquad 1\le n\le n_1. \tag{95}$$
By (7) and (89) one gets

$$\|e^{-\lambda_n^2At}\|_2 \le e^{-t_0\alpha(A)\delta_1^2}\sum_{j=0}^{m-1}\frac{(\beta^2t_1\|A\|\sqrt m)^j}{j!},\qquad \|e^{-\tilde\lambda_n^2At}\|_2 \le e^{-t_0\alpha(A)\delta_1^2}\sum_{j=0}^{m-1}\frac{(\beta^2t_1\|A\|\sqrt m)^j}{j!},\qquad 1\le n\le n_1,\ t_0\le t\le t_1. \tag{96}$$
Let us write

$$e^{-t\lambda_n^2A}-e^{-t\tilde\lambda_n^2A} = e^{-t\tilde\lambda_n^2A}\big(e^{-t(\lambda_n^2-\tilde\lambda_n^2)A}-I\big).$$

By (7), (89) and the mean-value theorem, under the hypothesis |λₙ − λ̃ₙ| < 1 one gets

$$\|e^{-t\lambda_n^2A}-e^{-t\tilde\lambda_n^2A}\|_2 \le \|e^{-t\tilde\lambda_n^2A}\|_2\big(\exp(t\,|\lambda_n^2-\tilde\lambda_n^2|\,\|A\|)-1\big),$$

$$\|e^{-t\lambda_n^2A}-e^{-t\tilde\lambda_n^2A}\|_2 \le e^{-t_0\delta_1^2\alpha(A)}\left[\sum_{j=0}^{m-1}\frac{(t_1\|A\|\sqrt m\,\beta^2)^j}{j!}\right]2\beta\|A\|t_1e^{2\beta\|A\|t_1}\,|\lambda_n-\tilde\lambda_n|. \tag{97}$$
By (87) and (94)–(97), assuming that |λₙ − λ̃ₙ| < 1, 1 ≤ n ≤ n₁, t₀ ≤ t ≤ t₁, it follows that

$$\|e^{-t\tilde\lambda_n^2A}\tilde X_n(x)-e^{-t\lambda_n^2A}X_n(x)\|_2 \le S\,|\lambda_n-\tilde\lambda_n|,\qquad t_0\le t\le t_1,\ 1\le n\le n_1, \tag{98}$$

where

$$S = 4(1+\beta)^3(\rho^{1/2}+1)(\rho+1)\,e^{-t_0\delta_1^2\alpha(A)}\left[\sum_{j=0}^{m-1}\frac{(t_1\|A\|\sqrt m\,\beta^2)^j}{j!}\right]\big(\beta t_1\|A\|e^{2\beta\|A\|t_1}+2\big)\left(\int_0^1\|f(x)\|_2^2\,dx\right)^{1/2}. \tag{99}$$
Given ε > 0 and n₁, consider approximations λ̃ₙ of λₙ for 1 ≤ n ≤ n₁ so that

$$|\lambda_n-\tilde\lambda_n| < \min\Big\{1,\ \frac{\varepsilon}{3n_1S}\Big\},\qquad 1\le n\le n_1; \tag{100}$$

then by (84), (86), (98) and (99) it follows that

$$\|U(x,t;n_1)-\tilde U(x,t;n_1)\|_2 < \frac{\varepsilon}{3},\qquad t_0\le t\le t_1,\ 0\le x\le 1. \tag{101}$$

By Theorem 11.2.4 of [11, p. 550], for t₀ ≤ t ≤ t₁ one gets
$$\Big\|e^{-\tilde\lambda_n^2tA}-\sum_{k=0}^{q}\frac{(-\tilde\lambda_n^2tA)^k}{k!}\Big\|_2 \le \frac{(\tilde\lambda_n^2t_1\|A\|)^{q+1}}{(q+1)!}\,e^{\|A\|t_1\tilde\lambda_n^2},$$

and by (89)

$$\Big\|e^{-\tilde\lambda_n^2tA}-\sum_{k=0}^{q}\frac{(-\tilde\lambda_n^2tA)^k}{k!}\Big\|_2 \le \frac{(\beta^2t_1\|A\|)^{q+1}}{(q+1)!}\,e^{\beta^2\|A\|t_1},\qquad t_0\le t\le t_1,\ 1\le n\le n_1.$$

Note that by (89), (90) and (95) one gets

$$\|\tilde X_n(x)\|_2 \le 2(1+\beta)^2\left(\int_0^1\|f(x)\|_2^2\,dx\right)^{1/2},\qquad 1\le n\le n_1. \tag{102}$$

Since

$$\lim_{q\to+\infty}\frac{(\beta^2t_1\|A\|)^{q+1}}{(q+1)!} = 0,$$
take the first positive integer q₀ such that

$$\frac{(\beta^2t_1\|A\|)^{q_0+1}}{(q_0+1)!} < \frac{\varepsilon}{6n_1e^{\beta^2\|A\|t_1}(1+\beta)^2\big(\int_0^1\|f(x)\|_2^2\,dx\big)^{1/2}}. \tag{103}$$
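The first integer q₀ satisfying (103) can be found by accumulating the Taylor term recursively; a minimal sketch (the function name is ours, and ‖A‖ and the L²-norm of f are assumed precomputed):

```python
import math

def taylor_order(eps, beta, t1, norm_A, n1, norm_f):
    """First q0 with (beta**2*t1*norm_A)**(q0+1)/(q0+1)! below the bound in (103)."""
    z = beta ** 2 * t1 * norm_A
    bound = eps / (6.0 * n1 * math.exp(z) * (1 + beta) ** 2 * norm_f)
    q, term = 0, z              # term = z**(q+1)/(q+1)! for q = 0
    while term >= bound:
        q += 1
        term *= z / (q + 1)     # update to z**(q+1)/(q+1)!
    return q
```

The recursive update avoids overflow in the factorial and mirrors how the partial sums of the matrix exponential in (104) are built.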
Then by (86), (102) and (103) it follows that ũ(x, t; n₁, q₀) defined by

$$\tilde u(x,t;n_1,q_0) = \begin{cases} X_0(x)+\displaystyle\sum_{n=1}^{n_1}\sum_{k=0}^{q_0}\frac{(-\tilde\lambda_n^2tA)^k}{k!}\tilde X_n(x), & \lambda_0=0,\\[4pt] \displaystyle\sum_{n=1}^{n_1}\sum_{k=0}^{q_0}\frac{(-\tilde\lambda_n^2tA)^k}{k!}\tilde X_n(x), & \lambda_0\ne 0,\end{cases} \tag{104}$$

satisfies

$$\|\tilde U(x,t;n_1)-\tilde u(x,t;n_1,q_0)\|_2 < \frac{\varepsilon}{3},\qquad t_0\le t\le t_1,\ 0\le x\le 1, \tag{105}$$

and by (85), (101) and (105) one concludes

$$\|u(x,t)-\tilde u(x,t;n_1,q_0)\|_2 < \varepsilon,\qquad t_0\le t\le t_1,\ 0\le x\le 1. \tag{106}$$
Summarizing, the following result has been established.

Theorem 4.1. With the hypotheses and the notation of Theorem 3.1, and assuming that f ≠ 0, let ρ, β and δ₁ be defined by (89). Let ε > 0, t₀ > 0, D(t₀, t₁) = {(x, t); 0 ≤ x ≤ 1, t₀ ≤ t ≤ t₁}, and let n₁ be chosen so that (82) holds. Take the first positive integer q₀ satisfying (103). Then ũ(x, t; n₁, q₀) defined by (104) is an approximation of the exact series solution u(x, t) defined by (47), (52), (53), satisfying (106).
Acknowledgements
This work has been supported by Generalitat Valenciana grants GV-CCN-1005796 and GV-97-CB1263, and by the Spanish D.G.I.C.Y.T. grant PB96-1321-CO2-02.
References
[1] M.H. Alexander, D.E. Manolopoulos, A stable linear reference potential algorithm for solution of the quantum close-coupled equations in molecular scattering theory, J. Chem. Phys. 86 (1987) 2044–2050.
[2] T.M. Apostol, Mathematical Analysis, Addison-Wesley, Reading, MA, 1977.
[3] F.V. Atkinson, Discrete and Continuous Boundary Value Problems, Academic Press, New York, 1964.
[4] F.V. Atkinson, A.M. Krall, G.K. Leaf, A. Zettl, On the numerical computation of eigenvalues of Sturm–Liouville problems with matrix coefficients, Tech. Rep., Argonne National Laboratory, 1987.
[5] S.L. Campbell, C.D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman Pub. Co., London, 1979.
[6] E.A. Coddington, N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1967.
[7] J. Crank, The Mathematics of Diffusion, second ed., Oxford Univ. Press, Oxford, 1995.
[8] N. Dunford, J. Schwartz, Linear Operators, Part I, Interscience, New York, 1957.
[9] G.B. Folland, Fourier Analysis and its Applications, Wadsworth & Brooks, Pacific Grove, CA, 1992.
[10] R. Godement, Cours d'Algèbre, Hermann, Paris, 1967.
[11] G. Golub, C.F. Van Loan, Matrix Computations, The Johns Hopkins Univ. Press, Baltimore, 1989.
[12] L. Greenberg, A Prüfer Method for Calculating Eigenvalues of Self-Adjoint Systems of Ordinary Differential Equations, Parts 1 and 2, Tech. Rep. TR91-24, University of Maryland.
[13] T. Hueckel, M. Borsetto, A. Peano, Modelling of coupled thermo-elastoplastic-hydraulic response of clays subjected to nuclear waste heat, in: R.W. Lewis, E. Hinton, P. Bettess, B.A. Schrefler (Eds.), Numerical Methods in Transient and Coupled Problems, Wiley, New York, 1987, pp. 213–235.
[14] E.L. Ince, Ordinary Differential Equations, Dover, New York, 1927.
[15] L. Jodar, E. Ponsoda, Continuous numerical solution and error bounds for time dependent systems of partial differential equations: mixed problems, Comput. Math. Appl. 29 (8) (1995) 63–71.
[16] R.D. Levine, M. Shapiro, B. Johnson, Transition probabilities in molecular collisions: computational studies of rotational excitation, J. Chem. Phys. 52 (1) (1970) 1755–1766.
[17] J.V. Lill, T.G. Schmalz, J.C. Light, Imbedded matrix Green's functions in atomic and molecular scattering theory, J. Chem. Phys. 78 (7) (1983) 4456–4463.
[18] M. Marletta, Theory and Implementation of Algorithms for Sturm–Liouville Systems, Ph.D. Thesis, Royal Military College of Science, Cranfield, 1991.
[19] V.S. Melezhik, I.V. Puzynin, T.P. Puzynina, L.N. Somov, Numerical solution of a system of integro-differential equations arising in the quantum-mechanical three-body problem with Coulomb interaction, J. Comput. Phys. 54 (1984) 221–236.
[20] M.D. Mikhailov, M.N. Özişik, Unified Analysis and Solutions of Heat and Mass Diffusion, Wiley, New York, 1984.
[21] C.B. Moler, C.F. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Rev. 20 (1978) 801–836.
[22] F. Mrugala, D. Secrest, The generalized log-derivative method for inelastic and reactive collisions, J. Chem. Phys. 78 (10) (1983) 5954–5961.
[23] E. Navarro, E. Ponsoda, L. Jodar, A matrix approach to the analytic-numerical solution of mixed partial differential systems, Comput. Math. Appl. 30 (1) (1995) 99–109.
[24] J.D. Pryce, Numerical Solution of Sturm–Liouville Problems, Clarendon Press, Oxford, 1993.
[25] J.D. Pryce, M. Marletta, Automatic solution of Sturm–Liouville problems using the Pruess method, J. Comput. Appl. Math. 39 (1992) 57–78.
[26] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and its Applications, Wiley, New York, 1971.
[27] W.T. Reid, Ordinary Differential Equations, Wiley, New York, 1971.
[28] M. Shapiro, G.G. Balint-Kurti, A new method for the exact calculation of vibrational-rotational energy levels of triatomic molecules, J. Chem. Phys. 71 (3) (1979) 1461–1469.
[29] R.B. Sidje, Expokit: a software package for computing matrix exponentials, ACM Trans. Math. Software 24 (1998) 130–156.
[30] I. Stakgold, Green's Functions and Boundary Value Problems, Wiley, New York, 1979.