
Journal of Computational and Applied Mathematics 99 (1998) 105–117

Some applications of the Hermite matrix polynomials series expansions¹

Emilio Defez*, Lucas Jódar
Departamento de Matemática Aplicada, Universidad Politécnica de Valencia, Spain
Received 20 October 1997; received in revised form 5 May 1998

Abstract

This paper deals with Hermite matrix polynomials expansions of some relevant matrix functions appearing in the solution of differential systems. Properties of Hermite matrix polynomials such as the three-term recurrence formula permit an efficient computation of matrix functions, avoiding important computational drawbacks of other well-known methods. Results are applied to compute accurate approximations of certain differential systems in terms of Hermite matrix polynomials. © 1998 Elsevier Science B.V. All rights reserved.

AMS classification: 15A60; 33C25; 34A50; 41A10

Keywords: Hermite matrix polynomials; Matrix functions; Differential systems; Approximate solutions; Error bounds; Sylvester equation

1. Introduction and preliminaries


The evaluation of matrix functions is frequent in the solution of differential systems. So, the system

    Y' = AY,   Y(0) = y_0,    (1)

where A is a matrix and y_0 is a vector, arises in the semidiscretization of the heat equation [17]. The matrix differential problem

    Y'' + A^2 Y = 0,   Y(0) = P,   Y'(0) = Q,    (2)


* Corresponding author. E-mail: [email protected].
¹ This work has been partially supported by the Generalitat Valenciana grant GV-C-CN-1005796 and the Spanish D.G.I.C.Y.T. grant PB96-1321-CO2-02.

0377-0427/98/$ – see front matter © 1998 Elsevier Science B.V. All rights reserved.
PII: S0377-0427(98)00149-6


where A is a matrix and P and Q are vectors, arises from the semidiscretization of the wave equation [15]. The Sylvester matrix differential equation

    X' = AX + XB,   X(0) = C,    (3)

where A, B and C are matrices, appears in systems stability and control [4, p. 226]. The solutions of problems (1)–(3) can be expressed in terms of exp(At), cos(At), sin(At) and exp(Bt), and the computation of these matrix functions has motivated many and varied approaches. An excellent survey about the matrix exponential is [11], and the study of cos(At) is treated in [1, 14, 15]. Some of the drawbacks of the existing methods are:
(i) the computation of eigenvalues or eigenvectors [11, 16];
(ii) the computation of the inverses of matrices (see Padé methods, [5, p. 557]) [11, 14];
(iii) storage problems and expensive computational time [5, Chap. 1];
(iv) round-off accumulation errors [5, p. 551];
(v) difficulties for computing approximations of matrix functions with a prefixed accuracy [15, 16].
In this paper we propose a new method for computing the above matrix functions using Hermite matrix polynomials which avoids the quoted computational difficulties. Results are applied to construct approximations of problems (1)–(3) with a prefixed accuracy in a bounded domain.
If D_0 is the complex plane cut along the negative real axis and log(z) denotes the principal logarithm of z [13, p. 72], then z^{1/2} represents exp((1/2) log(z)). If A is a matrix in C^{r×r}, its 2-norm, denoted ‖A‖₂, is defined by ‖A‖₂ = sup_{x≠0} ‖Ax‖₂/‖x‖₂, where for a vector y in C^r, ‖y‖₂ denotes the usual Euclidean norm of y, ‖y‖₂ = (y^T y)^{1/2}. The set of all the eigenvalues of A is denoted by σ(A). If f(z) and g(z) are holomorphic functions of the complex variable z, defined in an open set Ω of the complex plane, and A is a matrix in C^{r×r} such that σ(A) ⊂ Ω, then from the properties of the matrix functional calculus [3, p. 558] it follows that f(A)g(A) = g(A)f(A). If A is a matrix with σ(A) ⊂ D_0, then A^{1/2} = √A denotes the image by z^{1/2} of the matrix functional calculus acting on the matrix A. We say that A is a positive stable matrix if Re(z) > 0 for all z ∈ σ(A).

For the sake of clarity in the presentation we recall some properties of the Hermite matrix polynomials which will be used below and which have been established in [7]; see also [8]. If A is a positive stable matrix in C^{r×r}, the nth Hermite matrix polynomial is defined by

    H_n(x; A) = n! \sum_{k=0}^{[n/2]} \frac{(-1)^k (\sqrt{2A})^{n-2k}}{k!\,(n-2k)!}\, x^{n-2k},    (4)

and satisfies the three-term recurrence relationship

    H_n(x; A) = x\sqrt{2A}\, H_{n-1}(x; A) - 2(n-1) H_{n-2}(x; A),   n ≥ 1,
    H_{-1}(x; A) = 0,   H_0(x; A) = I,    (5)

where I is the identity matrix in C^{r×r}. By [7] we also have

    e^{xt\sqrt{2A} - t^2 I} = \sum_{n\ge 0} \frac{H_n(x; A)}{n!}\, t^n,   |t| < +∞,    (6)
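As an illustration (not part of the original paper), the definition (4) and the recurrence (5) can be cross-checked numerically. The Python sketch below parametrizes the polynomials by B = √(2A) (so A = B²/2), which avoids computing a matrix square root; the test matrix and evaluation point are arbitrary choices.

```python
import numpy as np
from math import factorial

def hermite_recurrence(B, x, N):
    # H_0, ..., H_N of H_n(x; A) with A = B^2/2 (so sqrt(2A) = B),
    # generated by the three-term formula (5)
    r = B.shape[0]
    H = [np.eye(r), x * B]
    for n in range(2, N + 1):
        H.append(x * (B @ H[-1]) - 2 * (n - 1) * H[-2])
    return H[:N + 1]

def hermite_explicit(B, x, n):
    # H_n(x; A) computed directly from the finite sum (4)
    r = B.shape[0]
    S = np.zeros((r, r))
    for k in range(n // 2 + 1):
        S += ((-1) ** k * x ** (n - 2 * k)
              / (factorial(k) * factorial(n - 2 * k))) \
             * np.linalg.matrix_power(B, n - 2 * k)
    return factorial(n) * S
```

For a scalar B = (2) the recurrence reproduces the classical Hermite polynomials, e.g. H_4(x) = 16x⁴ − 48x² + 12.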


and if H_n(x) denotes the classical nth scalar Hermite polynomial, then one gets

    \|H_n(x; A)\|_2 \le \frac{1}{i^n}\, H_n\!\left( \frac{ia \|\sqrt{2A}\|_2}{2} \right),   |x| < a,  n ∈ N,    (7)

    \sum_{n\ge 0} \frac{1}{i^n}\, H_n\!\left( \frac{ia \|\sqrt{2A}\|_2}{2} \right) \frac{|t|^n}{n!} = \exp\!\left( a \|\sqrt{2A}\|_2\, |t| + |t|^2 \right),   |t| < ∞.    (8)
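The bound (7) can also be checked numerically. In the sketch below (our illustration, not from the paper), the right-hand side is evaluated in real arithmetic via h_n = 2y h_{n−1} + 2(n−1) h_{n−2} with y = a‖√(2A)‖₂/2, which follows from the classical Hermite recurrence at the purely imaginary argument iy after dividing by iⁿ; the test matrix is an arbitrary choice with A = B²/2 positive stable.

```python
import numpy as np

def check_bound_7(B, a, x, N):
    # Verifies ||H_n(x; A)||_2 <= H_n(i a ||B||_2 / 2) / i^n for n = 0..N,
    # where A = B^2/2 and B = sqrt(2A); (7) requires |x| < a.
    r = B.shape[0]
    y = a * np.linalg.norm(B, 2) / 2.0
    H = [np.eye(r), x * B]          # matrix polynomials via (5)
    h = [1.0, 2.0 * y]              # h_n = H_n(iy)/i^n, real and positive
    for n in range(2, N + 1):
        H.append(x * (B @ H[-1]) - 2 * (n - 1) * H[-2])
        h.append(2.0 * y * h[-1] + 2 * (n - 1) * h[-2])
    return all(np.linalg.norm(H[n], 2) <= h[n] * (1 + 1e-12) + 1e-12
               for n in range(N + 1))
```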

If A(k, n) and B(k, n) are matrices in C^{r×r} for n ≥ 0, k ≥ 0, then in an analogous way to the proof of Lemma 11 of [12, p. 57] it follows that

    \sum_{n\ge 0} \sum_{k\ge 0} A(k, n) = \sum_{n\ge 0} \sum_{k=0}^{[n/2]} A(k, n-2k),
    \sum_{n\ge 0} \sum_{k\ge 0} B(k, n) = \sum_{n\ge 0} \sum_{k=0}^{n} B(k, n-k).    (9)


If B is a matrix in C^{r×r} and n_0 is a positive integer, we denote by A(B, n_0), D(B, n_0) and E(B, n_0) the real numbers

    A(B, n_0) = \sum_{n=0}^{n_0} \sum_{k=0}^{[n/2]} \frac{\|B\|^{n-2k}}{k!\,(n-2k)!},   D(B, n_0) = \sum_{n=0}^{n_0} \sum_{k=0}^{n} \frac{\|B\|^{2(n-k)}}{k!\,(2(n-k))!},    (10)

    E(B, n_0) = \sum_{n=0}^{n_0} \sum_{k=0}^{n} \frac{\|B\|^{2(n-k)+1}}{k!\,(2(n-k)+1)!}.    (11)
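The truncated double sums (10) and (11) are straightforward to evaluate; as n_0 grows they approach exp(‖B‖+1), e·cosh(‖B‖) and e·sinh(‖B‖), respectively. A short illustrative Python sketch (ours), with the norm passed in as a number:

```python
from math import factorial, exp, cosh, sinh, e

def trunc_A(nrm, n0):
    # A(B, n0) of (10); depends on B only through nrm = ||B||
    return sum(nrm ** (n - 2 * k) / (factorial(k) * factorial(n - 2 * k))
               for n in range(n0 + 1) for k in range(n // 2 + 1))

def trunc_D(nrm, n0):
    # D(B, n0) of (10)
    return sum(nrm ** (2 * (n - k)) / (factorial(k) * factorial(2 * (n - k)))
               for n in range(n0 + 1) for k in range(n + 1))

def trunc_E(nrm, n0):
    # E(B, n0) of (11)
    return sum(nrm ** (2 * (n - k) + 1) / (factorial(k) * factorial(2 * (n - k) + 1))
               for n in range(n0 + 1) for k in range(n + 1))
```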

This paper is organized as follows. In Section 2 some new properties of Hermite matrix polynomials are established. Section 3 deals with the Hermite matrix polynomial series expansions of e^{At}, sin(At) and cos(At) of an arbitrary matrix, as well as with their finite series truncations with a prefixed accuracy in a bounded domain. Finally, in Section 4 analytic-numerical approximations of problems (1)–(3) are constructed in terms of Hermite matrix polynomial series expansions. Given an admissible error ε > 0 and a bounded domain D, an approximation in terms of Hermite matrix polynomials is constructed so that the error with respect to the exact solution is uniformly upper bounded by ε in D.

2. On Hermite matrix polynomials

Let A be a positive stable matrix. By (6) it follows that

    e^{xt\sqrt{2A}} = e^{t^2 I} \sum_{n\ge 0} \frac{H_n(x; A)}{n!}\, t^n.

Hence

    \sum_{n\ge 0} \frac{(\sqrt{2A}\, x)^n}{n!}\, t^n = \left( \sum_{n\ge 0} \frac{t^{2n}}{n!} \right) \left( \sum_{n\ge 0} \frac{H_n(x; A)}{n!}\, t^n \right)    (12)


and by (9) one gets

    \left( \sum_{n\ge 0} \frac{t^{2n}}{n!} \right) \left( \sum_{n\ge 0} \frac{H_n(x; A)}{n!}\, t^n \right) = \sum_{n\ge 0} \sum_{k=0}^{[n/2]} \frac{H_{n-2k}(x; A)}{k!\,(n-2k)!}\, t^n.    (13)

By identification of the coefficients of t^n in (12) and (13) one gets

    x^n I = (\sqrt{2A})^{-n} \sum_{k=0}^{[n/2]} \frac{n!}{k!\,(n-2k)!}\, H_{n-2k}(x; A),   -∞ < x < +∞.    (14)
Lemma 2.1. Let A be a positive stable matrix, K ≥ 2 and n ≥ 0 an integer. Then

    \|H_n(x; A)\|_2 \le \sqrt{n!}\, \sqrt{2^n}\, K^n e^{x^2},   |x| < \frac{K}{\|\sqrt{2A}\|_2}.    (15)
Proof. It is clear that for n = 0 the inequality (15) holds true. Suppose that (15) is true for k = 0, 1, ..., n. Taking norms in the three-term formula (5) and using the induction hypothesis one gets

    \|H_{n+1}(x; A)\|_2 \le |x|\, \|\sqrt{2A}\|_2\, \|H_n(x; A)\|_2 + 2n \|H_{n-1}(x; A)\|_2
      \le K \sqrt{n!}\, \sqrt{2^n}\, K^n e^{x^2} + 2n \sqrt{(n-1)!}\, \sqrt{2^{n-1}}\, K^{n-1} e^{x^2}
      = \sqrt{n!}\, \sqrt{2^n}\, K^{n-1} e^{x^2} \left( K^2 + \sqrt{2n} \right)
      = \sqrt{(n+1)!}\, \sqrt{2^n}\, K^{n-1} e^{x^2} \left( \frac{K^2}{\sqrt{n+1}} + \sqrt{\frac{2n}{n+1}} \right).    (16)

Since

    \frac{K^2}{\sqrt{n+1}} + \sqrt{\frac{2n}{n+1}} \le \frac{K^2 + 2}{\sqrt{2}} \le \sqrt{2}\, K^2,

by (16) one gets (15) for n + 1.

The following result, which may be regarded as a matrix version of the algorithm given in [10], provides an efficient procedure for computing finite matrix polynomial series expansions in terms of matrix polynomials satisfying a three-term formula. In particular, it is applicable to the Hermite matrix polynomials. Note the remarkable fact, from a computational point of view, that the evaluation of a matrix polynomial at x̂ is only expressed in terms of P_0(x̂).
Theorem 2.1. Let {P_n(x)}_{n≥0} be a sequence of matrix polynomials such that

    A_n P_n(x) = (xI - B_n) P_{n-1}(x) - C_n P_{n-2}(x),   n ≥ 1,

where A_n is an invertible matrix in C^{r×r} and the degree of P_n(x) is n. Let Q(x) be a matrix polynomial defined by

    Q(x) = \sum_{j=0}^{n} E_j P_j(x),

where E_j is a matrix in C^{r×r}. Let x̂ be any real number and consider the sequence of matrices defined by

    D_n = E_n,
    D_{n-1} = E_{n-1} + D_n A_n^{-1} (x̂ I - B_n),

and for j = n-2, ..., 0,

    D_j = E_j + D_{j+1} A_{j+1}^{-1} (x̂ I - B_{j+1}) - D_{j+2} A_{j+2}^{-1} C_{j+2}.

Then Q(x̂) = D_0 P_0(x̂).

Proof. See [10].
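For the Hermite matrix polynomials, (5) gives A_n = (√(2A))^{-1}, B_n = 0 and C_n = 2(n−1)(√(2A))^{-1}, so A_{j+2}^{-1} C_{j+2} = 2(j+1)I and the backward recurrence needs no matrix inversions at all. A Python sketch of this specialization (our illustration, under the parametrization B = √(2A)):

```python
import numpy as np

def hermite_clenshaw(E, B, xhat):
    # Evaluates Q(xhat) = sum_{j=0}^{n} E_j H_j(xhat; A), A = B^2/2,
    # by the backward recurrence of Theorem 2.1, which here reads
    #   D_j = E_j + xhat * D_{j+1} B - 2(j+1) D_{j+2},  D_{n+1} = D_{n+2} = 0,
    # with Q(xhat) = D_0 P_0(xhat) = D_0 since H_0 = I.
    n = len(E) - 1
    r = B.shape[0]
    Dj2 = np.zeros((r, r))          # D_{j+2}
    Dj1 = np.zeros((r, r))          # D_{j+1}
    for j in range(n, -1, -1):
        Dj = E[j] + xhat * (Dj1 @ B) - 2 * (j + 1) * Dj2
        Dj2, Dj1 = Dj1, Dj
    return Dj1
```

Only P_0 = I is ever evaluated explicitly, which is the computational fact noted above.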
3. Hermite matrix polynomials series expansions

We begin this section with the Hermite matrix polynomial series expansions of exp(Bt), sin(Bt) and cos(Bt) for matrices satisfying the spectral property

    |Re(z)| > |Im(z)|   for all z ∈ σ(B).    (17)

Theorem 3.1. Let B be a matrix in C^{r×r} satisfying (17). Then

    e^{Bx} = e \sum_{n\ge 0} \frac{H_n(x; \frac{1}{2}B^2)}{n!},   -∞ < x < +∞,    (18)

    \cos(Bx) = \frac{1}{e} \sum_{n\ge 0} \frac{(-1)^n}{(2n)!}\, H_{2n}\!\left(x; \tfrac{1}{2}B^2\right),   -∞ < x < +∞,    (19)

    \sin(Bx) = \frac{1}{e} \sum_{n\ge 0} \frac{(-1)^n}{(2n+1)!}\, H_{2n+1}\!\left(x; \tfrac{1}{2}B^2\right),   -∞ < x < +∞.    (20)

Let E(B; x, n), C(B; x, n) and S(B; x, n) be, respectively, the nth partial sums of the series (18)–(20). Given ε > 0 and c > 0, let a = c + ε and let n_0, n_1 and n_2 be the first positive integers satisfying

    A(Ba, n_0) > \exp(a\|B\|_2 + 1) - \frac{\varepsilon}{e},    (21)

    D(Ba, n_1) > e\left( \cosh(a\|B\|_2) - \varepsilon \right),    (22)

    E(Ba, n_2) > e\left( \sinh(a\|B\|_2) - \varepsilon \right),    (23)

where A, D and E are defined by (10) and (11). Then for |x| < c and n ≥ max{n_0, n_1, n_2} one gets

    \|e^{Bx} - E(B; x, n)\|_2 < \varepsilon,    (24)

    \|\cos(Bx) - C(B; x, n)\|_2 < \varepsilon,    (25)

    \|\sin(Bx) - S(B; x, n)\|_2 < \varepsilon.    (26)

Proof. Let A = \frac{1}{2}B^2. By the spectral mapping theorem [3, p. 569] and (17) it follows that

    \sigma(A) = \left\{ \tfrac{1}{2} b^2;\; b \in \sigma(B) \right\},   \operatorname{Re}\!\left( \tfrac{1}{2} b^2 \right) = \tfrac{1}{2}\left( (\operatorname{Re}(b))^2 - (\operatorname{Im}(b))^2 \right) > 0,   b ∈ σ(B).

Thus A is a positive stable matrix, and taking t = 1 in (6), with B = \sqrt{2A}, one gets

    e^{Bx} = \sum_{n\ge 0} \frac{e}{n!}\, H_n\!\left( x; \tfrac{1}{2}B^2 \right),   -∞ < x < +∞.
If n_0 ≥ 1 one gets

    \left\| e^{Bx} - \sum_{k=0}^{n_0} \frac{e}{k!}\, H_k\!\left(x; \tfrac{1}{2}B^2\right) \right\|_2 = \left\| e \sum_{k > n_0} \frac{H_k(x; \frac{1}{2}B^2)}{k!} \right\|_2 \le e \sum_{k > n_0} \frac{\|H_k(x; \frac{1}{2}B^2)\|_2}{k!}.    (27)

By (7), for ε > 0, c > 0, a = c + ε and |x| < a, it follows that

    e \sum_{k > n_0} \frac{\|H_k(x; \frac{1}{2}B^2)\|_2}{k!} \le e \sum_{k \ge 0} \frac{H_k\!\left( \frac{\|B\|_2 a i}{2} \right)}{k!\, i^k} - e \sum_{k=0}^{n_0} \frac{H_k\!\left( \frac{\|B\|_2 a i}{2} \right)}{k!\, i^k}.    (28)

By (8), (10), (27) and (28) one gets

    \left\| e^{Bx} - \sum_{k=0}^{n_0} \frac{e}{k!}\, H_k\!\left(x; \tfrac{1}{2}B^2\right) \right\|_2 \le e \exp(a\|B\|_2 + 1) - e \sum_{k=0}^{n_0} \frac{H_k\!\left( \frac{\|B\|_2 a i}{2} \right)}{k!\, i^k} = e \left( \exp(a\|B\|_2 + 1) - A(Ba, n_0) \right).    (29)

By [12, p. 57] one gets

    \lim_{n\to\infty} \sum_{k=0}^{n} \sum_{l=0}^{[k/2]} \frac{(\|B\|_2 a)^{k-2l}}{l!\,(k-2l)!} = \exp(a\|B\|_2 + 1).

Hence, taking the first positive integer n_0 satisfying (21), by (29) one gets

    \left\| e^{Bx} - \sum_{k=0}^{n} \frac{e}{k!}\, H_k\!\left(x; \tfrac{1}{2}B^2\right) \right\|_2 \le \varepsilon,   |x| < c,  n ≥ n_0.

Considering (14) for the positive stable matrix A = \frac{1}{2}B^2, it follows that

    x^{2n} I = B^{-2n} \sum_{k=0}^{n} \frac{(2n)!}{k!\,(2(n-k))!}\, H_{2(n-k)}\!\left(x; \tfrac{1}{2}B^2\right).


Taking into account the series expansion of cos(Bx) and (9), we can write

    \cos(Bx) = \sum_{n\ge 0} \frac{(-1)^n B^{2n} x^{2n}}{(2n)!} = \sum_{n\ge 0} \sum_{k=0}^{n} \frac{(-1)^n}{k!\,(2(n-k))!}\, H_{2(n-k)}\!\left(x; \tfrac{1}{2}B^2\right) = \sum_{n\ge 0} \sum_{k\ge 0} \frac{(-1)^{n+k}}{k!\,(2n)!}\, H_{2n}\!\left(x; \tfrac{1}{2}B^2\right).

Hence

    \cos(Bx) = \left( \sum_{k\ge 0} \frac{(-1)^k}{k!} \right) \left( \sum_{n\ge 0} \frac{(-1)^n}{(2n)!}\, H_{2n}\!\left(x; \tfrac{1}{2}B^2\right) \right) = \frac{1}{e} \sum_{n\ge 0} \frac{(-1)^n}{(2n)!}\, H_{2n}\!\left(x; \tfrac{1}{2}B^2\right),   -∞ < x < +∞.    (30)

By [12, p. 57] one gets

    \lim_{n\to\infty} \sum_{k=0}^{n} \sum_{l=0}^{k} \frac{(\|B\|_2 a)^{2(k-l)}}{l!\,(2(k-l))!} = e \cosh(a\|B\|_2).

Let us consider the n_1th partial sum of the series (30). Then, in an analogous way to the previous computations, for |x| < c one gets

    \|\cos(Bx) - C(B; x, n_1)\|_2 \le \cosh(a\|B\|_2) - \frac{1}{e}\, D(aB, n_1),   a = c + ε,

where D(aB, n_1) is given by (10). Taking the first integer n_1 ≥ 1 satisfying (22), one gets (25).
By (14) we also have

    x^{2n+1} I = B^{-(2n+1)} \sum_{k=0}^{n} \frac{(2n+1)!}{k!\,(2(n-k)+1)!}\, H_{2(n-k)+1}\!\left(x; \tfrac{1}{2}B^2\right),

and in an analogous way to the series expansion of cos(Bx), one gets (20). By [12, p. 57] one gets

    \lim_{n\to\infty} \sum_{k=0}^{n} \sum_{l=0}^{k} \frac{(\|B\|_2 a)^{2(k-l)+1}}{l!\,(2(k-l)+1)!} = e \sinh(a\|B\|_2).

Then, in an analogous way to the previous computations, one gets (26). Hence the result is established.
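As a numerical illustration of (18) (ours, not in the original paper), the sketch below sums the truncated series for a 2×2 triangular B satisfying (17), for which e^B is known in closed form:

```python
import numpy as np

def E_trunc(B, x, n):
    # E(B; x, n) = e * sum_{k=0}^{n} H_k(x; B^2/2) / k!, truncation of (18)
    r = B.shape[0]
    H0, H1 = np.eye(r), x * B
    if n == 0:
        return np.e * H0
    total, kfact = H0 + H1, 1.0
    for k in range(2, n + 1):
        H0, H1 = H1, x * (B @ H1) - 2 * (k - 1) * H0
        kfact *= k
        total = total + H1 / kfact
    return np.e * total

# B is triangular with spectrum {1, 2}, so (17) holds and
# e^B = [[e, e^2 - e], [0, e^2]] exactly
B = np.array([[1.0, 1.0], [0.0, 2.0]])
approx = E_trunc(B, 1.0, 30)
```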
Now we show that condition (17) can be removed in the construction of accurate Hermite matrix polynomial expansions of exp(At), cos(At) and sin(At) for an arbitrary matrix A in C^{r×r}.

Lemma 3.1. Let A be a matrix in C^{r×r} and let ω be any positive number such that

    ω > max{ |Re(z)| + |Im(z)| ; z ∈ σ(A) }.

Then the matrix B = A + ωI satisfies (17).

Proof. By the spectral mapping theorem, σ(A + ωI) = {z + ω ; z ∈ σ(A)}, and for z ∈ σ(A) one gets |Re(z + ω)| = |Re(z) + ω| ≥ ω − |Re(z)| > |Im(z)| = |Im(z + ω)|. Hence the result is established.


Corollary 3.1. Let A be a matrix in C^{r×r} and let ω be a number satisfying the condition of Lemma 3.1. Let ε > 0, c > 0, a = c + ε. With the notation of Theorem 3.1 it follows that:
(i) If n_0 satisfies (21) for B = A + ωI, then

    \|e^{Ax} - e^{-\omega x} E(A + \omega I; x, n)\|_2 < \varepsilon,   n ≥ n_0,  |x| < c.    (31)

(ii) Let n_1 and n_2 be positive integers satisfying (22) and (23), respectively, for ε′ = ε/2 and B = A + ωI; then for |x| < c and n ≥ max{n_1, n_2},

    \|\sin(Ax) - \{\cos(\omega x) S(A + \omega I; x, n) - \sin(\omega x) C(A + \omega I; x, n)\}\|_2 < \varepsilon,    (32)

    \|\cos(Ax) - \{\cos(\omega x) C(A + \omega I; x, n) + \sin(\omega x) S(A + \omega I; x, n)\}\|_2 < \varepsilon.    (33)

Proof. (i) The result is a consequence of Theorem 3.1 and the formula e^{Ax} = e^{-\omega x} e^{(A+\omega I)x}. The proof of part (ii) follows from the expressions

    \sin(Ax) = \sin[(A + \omega I)x] \cos(\omega x) - \cos[(A + \omega I)x] \sin(\omega x),
    \cos(Ax) = \cos[(A + \omega I)x] \cos(\omega x) + \sin[(A + \omega I)x] \sin(\omega x),

and Theorem 3.1.
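A small numerical illustration of the corollary (ours, not in the paper): the matrix A below has spectrum {i, −i}, so (17) fails, but ω = 2 > max{|Re z| + |Im z|} = 1 makes A + ωI admissible. Since A² = −I, the exact values cos A = cosh(1)I and sin A = sinh(1)A are available for checking.

```python
import numpy as np
from math import factorial

def trig_partial(B, x, n):
    # nth partial sums C(B; x, n), S(B; x, n) of the series (19), (20)
    r = B.shape[0]
    H = [np.eye(r), x * B]
    for k in range(2, 2 * n + 2):
        H.append(x * (B @ H[-1]) - 2 * (k - 1) * H[-2])
    C = sum((-1) ** k / factorial(2 * k) * H[2 * k] for k in range(n + 1)) / np.e
    S = sum((-1) ** k / factorial(2 * k + 1) * H[2 * k + 1] for k in range(n + 1)) / np.e
    return C, S

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
omega = 2.0
C, S = trig_partial(A + omega * np.eye(2), 1.0, 20)
cosA = np.cos(omega) * C + np.sin(omega) * S      # per (33)
sinA = np.cos(omega) * S - np.sin(omega) * C      # per (32)
```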
In the following example we illustrate the use of Hermite matrix polynomials for computing e^A, sin A and cos A for a matrix A satisfying (17). As has been proved in Corollary 3.1, condition (17) is not necessary, taking an appropriate value of ω in accordance with Lemma 3.1. It is interesting to point out that the algorithm of Theorem 2.1 has been adapted for the Hermite matrix polynomials using (5). Computations have been performed using Mathematica version 2.2.1.
Example 3.1. Consider the matrix

    A = \begin{pmatrix} 3 & -1 & 1 \\ 2 & 0 & 1 \\ 1 & -1 & 2 \end{pmatrix},

where σ(A) = {1, 2, 2}. Using the minimal theorem [3, p. 571] one gets the exact value of e^A:

    e^A = \begin{pmatrix} 2e^2 & -e^2 & e^2 \\ -e + 2e^2 & e - e^2 & e^2 \\ -e + e^2 & e - e^2 & e^2 \end{pmatrix}
        = \begin{pmatrix} 14.7781121978613 & -7.38905609893065 & 7.38905609893065 \\ 12.05983036940225 & -4.670774270471604 & 7.38905609893065 \\ 4.670774270471604 & -4.670774270471604 & 7.38905609893065 \end{pmatrix}.


Given an admissible error ε = 10^{-5}, from (21), taking n_0 = 30, the approximation E(A; 1, 30) provides the required accuracy:

    E(A; 1, 30) = e \sum_{n=0}^{30} \frac{1}{n!}\, H_n\!\left(1; \tfrac{1}{2}A^2\right)
        = \begin{pmatrix} 14.77811219786323 & -7.389056098931726 & 7.389056098931723 \\ 12.0598303694045 & -4.670774270472999 & 7.389056098931725 \\ 4.670774270472778 & -4.670774270472778 & 7.389056098931503 \end{pmatrix},

    \|e^A - E(A; 1, 30)\|_2 = 3.181762178061584 \times 10^{-12}.

Of course, in practice the number of terms required to obtain a prefixed accuracy is usually smaller than the one provided by (21), because the error bound given by Theorem 3.1 is valid for any matrix satisfying (17). So, for instance, taking n_0 = 19 one gets

    E(A; 1, 19) = \begin{pmatrix} 14.77810950722812 & -7.389054626492605 & 7.389054626492605 \\ 12.05982687088079 & -4.670771990145276 & 7.389054626492603 \\ 4.670772244388193 & -4.670772244388193 & 7.389054880735518 \end{pmatrix},

    \|e^A - E(A; 1, 19)\|_2 = 6.356409123149743 \times 10^{-6}.

In an analogous way we have

    \sin A = \begin{pmatrix} \sin(2) + \cos(2) & -\cos(2) & \cos(2) \\ -\sin(1) + \sin(2) + \cos(2) & \sin(1) - \cos(2) & \cos(2) \\ -\sin(1) + \sin(2) & \sin(1) - \sin(2) & \sin(2) \end{pmatrix}
        = \begin{pmatrix} 0.4931505902785393 & 0.4161468365471424 & -0.4161468365471424 \\ -0.3483203945293571 & 1.257617821355039 & -0.4161468365471424 \\ 0.06782644201778521 & -0.06782644201778521 & 0.909297426825682 \end{pmatrix}.

Taking n_0 = 8 one gets the Hermite matrix polynomial approximation

    S(A; 1, 8) = \frac{1}{e} \sum_{n=0}^{8} \frac{(-1)^n}{(2n+1)!}\, H_{2n+1}\!\left(1; \tfrac{1}{2}A^2\right)
        = \begin{pmatrix} 0.4931487870648995 & 0.4161486588860599 & -0.4161486588860598 \\ -0.3483221953348546 & 1.257619641285814 & -0.4161486588860596 \\ 0.0678264635512047 & -0.06782646355120475 & 0.90929744595096 \end{pmatrix},

with

    \|\sin A - S(A; 1, 8)\|_2 = 4.446422404096298 \times 10^{-6}.


The number of terms provided by (23) for the accuracy ε = 10^{-5} is n_0 = 19. Finally, approximating cos A by Hermite matrix polynomial series, we have

    \cos A = \begin{pmatrix} \cos(2) - \sin(2) & \sin(2) & -\sin(2) \\ -\cos(1) + \cos(2) - \sin(2) & \cos(1) + \sin(2) & -\sin(2) \\ -\cos(1) + \cos(2) & \cos(1) - \cos(2) & \cos(2) \end{pmatrix}
        = \begin{pmatrix} -1.325444263372824 & 0.909297426825682 & -0.909297426825682 \\ -1.865746569240964 & 1.449599732693821 & -0.909297426825682 \\ -0.956449142415282 & 0.956449142415282 & -0.4161468365471424 \end{pmatrix}.

Taking n_0 = 8 one gets the approximation

    C(A; 1, 8) = \frac{1}{e} \sum_{n=0}^{8} \frac{(-1)^n}{(2n)!}\, H_{2n}\!\left(1; \tfrac{1}{2}A^2\right)
        = \begin{pmatrix} -1.325448071525842 & 0.909299412639782 & -0.909299412639782 \\ -1.865751647963166 & 1.449602989077106 & -0.909299412639782 \\ -0.956452235323384 & 0.956452235323384 & -0.4161486588860598 \end{pmatrix},

with error

    \|\cos A - C(A; 1, 8)\|_2 = 9.1659488356359 \times 10^{-6}.

For a prefixed accuracy ε = 10^{-5}, the expression (22) gives n_0 = 14.
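Example 3.1 can be reproduced in a few lines (this sketch and its tolerance are ours; the paper reports ‖e^A − E(A; 1, 30)‖₂ ≈ 3.18 × 10⁻¹²):

```python
import numpy as np

def E_trunc(B, x, n):
    # truncation of (18): e * sum_{k<=n} H_k(x; B^2/2)/k!
    r = B.shape[0]
    H0, H1 = np.eye(r), x * B
    total, kfact = H0 + H1, 1.0
    for k in range(2, n + 1):
        H0, H1 = H1, x * (B @ H1) - 2 * (k - 1) * H0
        kfact *= k
        total = total + H1 / kfact
    return np.e * total

A = np.array([[3.0, -1.0, 1.0], [2.0, 0.0, 1.0], [1.0, -1.0, 2.0]])
e, e2 = np.e, np.e ** 2
expA = np.array([[2 * e2,      -e2,    e2],
                 [-e + 2 * e2, e - e2, e2],
                 [-e + e2,     e - e2, e2]])
err = np.linalg.norm(expA - E_trunc(A, 1.0, 30), 2)
```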
4. Applications

In this section we construct matrix polynomial approximations of problems (1)–(3), expressed in terms of Hermite matrix polynomials. It is well known that the solution of problem (1) is

    Y(x) = e^{Ax} y_0.

Let ω be as in Lemma 3.1, ε > 0, c > 0, a = c + ε. Let n_0 be the first positive integer such that

    A((A + \omega I)a, n_0) > \exp(a\|A + \omega I\|_2 + 1) - \frac{\varepsilon}{e(1 + \|y_0\|_2)}.

Then by Corollary 3.1 it follows that

    \|e^{Ax} y_0 - e^{-\omega x} E(A + \omega I; x, n) y_0\|_2 < \varepsilon,   |x| < c,  n ≥ n_0.

Thus

    \tilde{Y}_n(x) = e^{(1 - \omega x)} \left[ \sum_{k=0}^{n} \frac{H_k(x; \frac{1}{2}(A + \omega I)^2)}{k!} \right] y_0    (34)


is an approximate solution of problem (1), such that if Y(x) is the exact solution one gets

    \|Y(x) - \tilde{Y}_n(x)\|_2 < \varepsilon,   |x| < c,  n ≥ n_0.    (35)

Problem (2) can be solved considering the extended system

    Z' = \begin{pmatrix} 0 & I \\ -A^2 & 0 \end{pmatrix} Z,   Z = \begin{pmatrix} Y \\ Y' \end{pmatrix},   Z(0) = \begin{pmatrix} P \\ Q \end{pmatrix},

but such an approach increases the computational cost [1] and involves a lack of explicitness in terms of the two vector parameters, which is very interesting for studying boundary value problems associated to (2) using the shooting method [9].
Consider problem (2), where A is an invertible matrix in C^{r×r}. By [6] the pair {cos(Ax), sin(Ax)} is a fundamental set of solutions of the equation

    Y'' + A^2 Y = 0,   -∞ < x < +∞,

because Y_1(x) = cos(Ax), Y_2(x) = sin(Ax) satisfy

    W(0) = \begin{pmatrix} Y_1(0) & Y_2(0) \\ Y_1'(0) & Y_2'(0) \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & A \end{pmatrix},   which is invertible in C^{2r×2r}.

Hence, the unique solution of problem (2) is given by

    Y(x) = \cos(Ax) P + \sin(Ax) A^{-1} Q,   -∞ < x < +∞.    (36)

If ω is given by Lemma 3.1, the expression (36) and Corollary 3.1 suggest the approximation

    \tilde{Y}_n(x) = \{\cos(\omega x) C(A + \omega I; x, n) + \sin(\omega x) S(A + \omega I; x, n)\} P
        + \{\cos(\omega x) S(A + \omega I; x, n) - \sin(\omega x) C(A + \omega I; x, n)\} A^{-1} Q
        = C(A + \omega I; x, n) \left( \cos(\omega x) P - \sin(\omega x) A^{-1} Q \right) + S(A + \omega I; x, n) \left( \sin(\omega x) P + \cos(\omega x) A^{-1} Q \right),

that is,

    \tilde{Y}_n(x) = \left[ \sum_{k=0}^{n} \frac{(-1)^k H_{2k}(x; \frac{1}{2}(A + \omega I)^2)}{(2k)!} \right] \left( \cos(\omega x) \frac{P}{e} - \sin(\omega x) \frac{A^{-1} Q}{e} \right)
        + \left[ \sum_{k=0}^{n} \frac{(-1)^k H_{2k+1}(x; \frac{1}{2}(A + \omega I)^2)}{(2k+1)!} \right] \left( \sin(\omega x) \frac{P}{e} + \cos(\omega x) \frac{A^{-1} Q}{e} \right).    (37)

By Corollary 3.1, for n ≥ max{n_1, n_2}, where n_1 and n_2 are given by Corollary 3.1, one gets

    \|Y(x) - \tilde{Y}_n(x)\|_2 \le \varepsilon \left( \|P\|_2 + \|A^{-1}\|_2 \|Q\|_2 \right),   |x| < c.    (38)
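A compact illustration of (37) (ours, not from the paper): with A = [[0, 1], [−1, 0]] one has A² = −I, so cos(Ax) = cosh(x)I and sin(Ax) = sinh(x)A, and the exact solution of (2) reduces to Y(x) = cosh(x)P + sinh(x)Q; the shifted approximation reproduces it. The vectors P, Q and the values ω, x, n are arbitrary choices.

```python
import numpy as np
from math import factorial

def trig_partial(B, x, n):
    # nth partial sums C(B; x, n), S(B; x, n) of the series (19), (20)
    r = B.shape[0]
    H = [np.eye(r), x * B]
    for k in range(2, 2 * n + 2):
        H.append(x * (B @ H[-1]) - 2 * (k - 1) * H[-2])
    C = sum((-1) ** k / factorial(2 * k) * H[2 * k] for k in range(n + 1)) / np.e
    S = sum((-1) ** k / factorial(2 * k + 1) * H[2 * k + 1] for k in range(n + 1)) / np.e
    return C, S

A = np.array([[0.0, 1.0], [-1.0, 0.0]])      # invertible, A^2 = -I
P = np.array([1.0, 2.0])
Q = np.array([0.5, -1.0])
omega, x = 2.0, 0.5
C, S = trig_partial(A + omega * np.eye(2), x, 20)
AinvQ = np.linalg.solve(A, Q)
Yn = C @ (np.cos(omega * x) * P - np.sin(omega * x) * AinvQ) \
   + S @ (np.sin(omega * x) * P + np.cos(omega * x) * AinvQ)   # (37)
```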

We conclude this section with the construction of Hermite matrix polynomial approximations of the solution of problem (3). By [2, p. 195] the solution of (3) is given by

    X(t) = e^{At} C e^{Bt}.    (39)


Let ω > max{ω_A, ω_B}, where

    ω_A ≥ max{ |Re(z)| + |Im(z)| ; z ∈ σ(A) }   and   ω_B ≥ max{ |Re(z)| + |Im(z)| ; z ∈ σ(B) }.

Corollary 3.1 suggests the approximation

    \tilde{X}_n(t) = e^{-\omega t} E(A + \omega I; t, n)\, C\, e^{-\omega t} E(B + \omega I; t, n),    (40)

and note that

    X(t) - \tilde{X}_n(t) = \left( e^{At} - e^{-\omega t} E(A + \omega I; t, n) \right) C e^{Bt} + e^{-\omega t} E(A + \omega I; t, n)\, C \left( e^{Bt} - e^{-\omega t} E(B + \omega I; t, n) \right).    (41)

Consider the domain |t| < c and take

    K = max\{ c(\|B\|_2 + \omega),\; 2 \} + 1.    (42)

By (15) and (18) it follows that |t| < K / (\|B\|_2 + \omega) and

    \|E(B + \omega I; t, n)\|_2 \le e^{c^2 + 1} \sum_{j \ge 0} \sqrt{ \frac{(2K^2)^j}{j!} } = L.    (43)

Taking norms in (41) and using (43) one gets

    \|X(t) - \tilde{X}_n(t)\|_2 \le \|e^{At} - e^{-\omega t} E(A + \omega I; t, n)\|_2\, \|C\|_2\, e^{c\|B\|_2} + L \|C\|_2\, \|e^{Bt} - e^{-\omega t} E(B + \omega I; t, n)\|_2.    (44)

Taking the first positive integer n_0' such that

    A((A + \omega I)a, n_0') > \exp(a\|A + \omega I\|_2 + 1) - \frac{\varepsilon\, e^{-c\|B\|_2}}{2e(\|C\|_2 + 1)},

and the first positive integer n_1' such that

    A((B + \omega I)a, n_1') > \exp(a\|B + \omega I\|_2 + 1) - \frac{\varepsilon}{2eL(\|C\|_2 + 1)},

then for n ≥ max{n_0', n_1'} and |t| < c, by Corollary 3.1 and (39), (40), (43) it follows that

    \|X(t) - \tilde{X}_n(t)\|_2 < \varepsilon,   |t| < c.
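A quick check of (40) (our sketch, not from the paper): for diagonal A and B the exact solution X(t) = e^{At} C e^{Bt} of (3) is available entry-wise, and the shifted Hermite approximation matches it. The matrices and the values ω, t, n below are arbitrary choices.

```python
import numpy as np

def E_trunc(B, t, n):
    # E(B; t, n) = e * sum_{k<=n} H_k(t; B^2/2)/k!, truncation of (18)
    r = B.shape[0]
    H0, H1 = np.eye(r), t * B
    total, kfact = H0 + H1, 1.0
    for k in range(2, n + 1):
        H0, H1 = H1, t * (B @ H1) - 2 * (k - 1) * H0
        kfact *= k
        total = total + H1 / kfact
    return np.e * total

A = np.diag([1.0, -0.5])
B = np.diag([0.2, 1.0])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
omega, t, n = 2.0, 0.8, 30
I = np.eye(2)
Xn = (np.exp(-omega * t) * E_trunc(A + omega * I, t, n)) @ C \
   @ (np.exp(-omega * t) * E_trunc(B + omega * I, t, n))      # (40)
exact = np.diag(np.exp(np.diag(A) * t)) @ C @ np.diag(np.exp(np.diag(B) * t))
```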

References

[1] T. Apostol, Explicit formulas for solutions of the second-order matrix differential equation Y'' = AY, Amer. Math. Monthly 82 (1975) 159–162.
[2] R. Bellman, Introduction to Matrix Analysis, McGraw-Hill, New York, 1965.
[3] N. Dunford, J. Schwartz, Linear Operators, Part I, Interscience, New York, 1957.
[4] Z. Gajic, M. Qureshi, Lyapunov Matrix Equation in Systems Stability and Control, Academic Press, New York, 1995.
[5] G. Golub, C.F. Van Loan, Matrix Computations, Johns Hopkins Univ. Press, Baltimore, MD, 1989.
[6] L. Jódar, Explicit solutions for second order operator differential equations with two boundary value conditions, Linear Algebra Appl. 103 (1988) 73–86.
[7] L. Jódar, R. Company, Hermite matrix polynomials and second order matrix differential equations, J. Approx. Theory Appl. 12 (2) (1996) 20–30.
[8] L. Jódar, E. Defez, On Hermite matrix polynomials and Hermite matrix functions, J. Approx. Theory Appl. 14 (1) (1998) 36–48.
[9] H.B. Keller, Numerical Solution of Two Point Boundary Value Problems, SIAM Regional Conf. Series in Appl. Math., vol. 24, Philadelphia, 1976.
[10] A.G. Law, C.N. Zhang, A. Rezazadeh, L. Jódar, Evaluation of a rational function, Numer. Algorithms 3 (1992) 265–272.
[11] C.B. Moler, C.F. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Rev. 20 (1978) 801–836.
[12] E.D. Rainville, Special Functions, Chelsea, New York, 1960.
[13] S. Saks, A. Zygmund, Analytic Functions, Elsevier, Amsterdam, 1971.
[14] S.M. Serbin, Rational approximations of trigonometric matrices with applications to second-order systems of differential equations, Appl. Math. Comput. 5 (1979) 75–92.
[15] S.M. Serbin, S.A. Blalock, An algorithm for computing the matrix cosine, SIAM J. Sci. Statist. Comput. 1 (2) (1980) 198–204.
[16] S. Serbin, C. Serbin, A time-stepping procedure for X' = A_1 X + X A_2 + D, X(0) = C, IEEE Trans. Automat. Control 25 (1980) 1138–1141.
[17] R.S. Varga, On higher order stable implicit methods for solving parabolic partial differential equations, J. Math. Phys. 40 (1961) 220–231.
[18] R.C. Ward, Numerical computation of the matrix exponential with accuracy estimate, SIAM J. Numer. Anal. 14 (1977) 600–619.