
Electronic Journal of Linear Algebra ISSN 1081-3810
A publication of the International Linear Algebra Society
Volume 20, pp. 537-551, August 2010

ELA

http://math.technion.ac.il/iic/ela

LEAST SQUARES (P, Q)-ORTHOGONAL SYMMETRIC SOLUTIONS
OF THE MATRIX EQUATION AND ITS OPTIMAL
APPROXIMATION∗
LIN-LIN ZHAO† , GUO-LIANG CHEN† , AND QING-BING LIU‡

Abstract. In this paper, the relationship between (P, Q)-orthogonal symmetric matrices and symmetric matrices is derived. By applying the generalized singular value decomposition, the general expression of the least squares (P, Q)-orthogonal symmetric solutions of the matrix equation A^T XB = C is provided. Based on the projection theorem in inner product spaces, and by using the canonical correlation decomposition, an analytical expression of the optimal approximation to a given matrix within the least squares (P, Q)-orthogonal symmetric solution set of the matrix equation A^T XB = C is obtained. An explicit expression of the minimum-norm least squares (P, Q)-orthogonal symmetric solution of the matrix equation A^T XB = C is also derived. An algorithm for finding the optimal approximate solution is described, and some numerical results are given.


Key words. Matrix equation, Least squares solution, (P, Q)-orthogonal symmetric matrix,
Optimal approximate solution.

AMS subject classifications. 65F15, 65F20.

1. Introduction. Let R^{m×n} denote the set of all m × n real matrices, and let SR^{n×n} and OR^{n×n} denote the sets of n × n real symmetric matrices and n × n orthogonal matrices, respectively. I_n denotes the n × n identity matrix. The notations A^T and ||A|| stand for the transpose and the Frobenius norm of A, respectively. For A = (a_ij) ∈ R^{m×n} and B = (b_ij) ∈ R^{m×n}, A ∗ B = (a_ij b_ij) ∈ R^{m×n} denotes the Hadamard product of the matrices A and B. Let SOR^{n×n} = {P ∈ R^{n×n} | P^T = P, P^2 = I} denote the set of n × n generalized reflection matrices.
Definition 1.1. Given P, Q ∈ SOR^{n×n}, we say that X ∈ R^{n×n} is (P, Q)-orthogonal symmetric if

(P X Q)^T = P X Q.

We denote by SR^{n×n}(P, Q) the set of all (P, Q)-orthogonal symmetric matrices.
∗ Received by the editors June 3, 2009. Accepted for publication August 22, 2010. Handling
Editor: Peter Lancaster.
† Department of Mathematics, East China Normal University, Shanghai, 200241, China
(correspondence should be addressed to Guo-liang Chen, glchen@math.ecnu.edu.cn). Supported by NSFC
grants (10901056, 10971070, 11071079) and Shanghai Natural Science Foundation (09ZR1408700).

‡ Department of Mathematics, Zhejiang Wanli University, Ningbo 315100, China. Supported by
Foundation of Zhejiang Educational Committee (No. Y200906482) and Ningbo Natural Science
Foundation (2010A610097).
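The defining condition in Definition 1.1 is easy to check numerically. The following NumPy sketch uses hypothetical data (the matrices below are illustrative only); it relies on P^2 = Q^2 = I, so that every X = P S Q with S symmetric belongs to SR^{n×n}(P, Q):

```python
import numpy as np

# X is (P, Q)-orthogonal symmetric iff P X Q is symmetric (Definition 1.1).
def is_pq_orthogonal_symmetric(X, P, Q, tol=1e-12):
    Y = P @ X @ Q
    return np.allclose(Y, Y.T, atol=tol)

# Generalized reflection matrices: P^T = P, P^2 = I (hypothetical examples).
P = np.diag([1.0, -1.0, 1.0])
Q = np.diag([-1.0, 1.0, 1.0])

# Any X = P S Q with S symmetric lies in SR^{3x3}(P, Q), since P^2 = Q^2 = I.
S = np.array([[1.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 4.0]])
X = P @ S @ Q
print(is_pq_orthogonal_symmetric(X, P, Q))  # True
```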


In this paper, we consider the following problems.

Problem I. Given P, Q ∈ SOR^{n×n}, A ∈ R^{n×m}, B ∈ R^{n×l}, and C ∈ R^{m×l}, find X ∈ SR^{n×n}(P, Q) such that

(1.1)  ||A^T XB − C||^2 = min.

Problem II. Given P, Q ∈ SOR^{n×n}, A ∈ R^{n×m}, B ∈ R^{n×l}, and C ∈ R^{m×l}, find Y ∈ SR^{n×n} such that

(1.2)  ||(PA)^T Y (QB) − C||^2 = min.

Problem III. Let SE be the solution set of Problem I. Given X* ∈ R^{n×n}, find X̂ ∈ SE such that

||X̂ − X*||^2 = min_{X ∈ SE} ||X − X*||^2.

Problem IV. Let SE be the solution set of Problem I. Find X̃ ∈ SE such that

||X̃||^2 = min.
An inverse problem [2, 3, 6, 7] arising in structural modification of the dynamic behavior of a structure calls for the solution of certain linear matrix equations. The matrix equation

A^T XB = C

with X orthogonal-symmetric has been studied by Peng [15], who gives necessary and sufficient conditions for the existence of a solution and the general solution expression. In [16], the necessary and sufficient conditions for the solvability of the matrix equation

A^H XB = C

over the sets of reflexive and anti-reflexive matrices are given, and the general expressions for the reflexive and anti-reflexive solutions are obtained. Don [9], Magnus [12], and Chu [5] have discussed the matrix equation

BXA^T = T

where the solution matrices are known to have a given structure (e.g., symmetric, triangular, diagonal), either directly from the matrix equation or indirectly from the


equivalent vector equation. However, they did not consider the least squares solutions of the equation.

For the least squares problem, the (M, N)-symmetric Procrustes problem for the matrix equation AX = B has been treated in [13]. The least squares orthogonal-symmetric solutions of the matrix equation A^T XB = C, and the least squares symmetric and skew-symmetric solutions of the equation BXA^T = T, have been considered in [14] and [8], respectively. Recently, Qiu, Zhang, and Lu [17] have proposed an iterative method for the least squares problem of the matrix equation BXA^T = F.

Problem III, the optimal approximation problem under a matrix restriction, arises in the testing or recovery of linear systems with incomplete or inaccurate data. The optimal estimate X̂ is a matrix that both satisfies the given restriction and best approximates the given matrix X*.

In this paper, we discuss the least squares (P, Q)-orthogonal symmetric solutions of the matrix equation A^T XB = C and their optimal approximation. By using the generalized singular value decomposition (GSVD), the projection theorem, and the canonical correlation decomposition (CCD), we obtain the general expressions of the solutions of Problems I, II, III, and IV.

The paper is organized as follows. In Section 2, we give the general expressions of the solutions of Problems I and II. In Section 3, we discuss Problems III and IV. In Section 4, we give an algorithm for computing the solution of Problem III and some numerical examples.
2. The solutions of Problems I and II. In this section, we derive the general expressions for the solutions of Problems I and II.

Theorem 2.1. Problem I has a solution if and only if Problem II has a solution.

Proof. Suppose X is a solution of Problem I. Then (P X Q)^T = P X Q and

(2.1)  ||A^T XB − C||^2 = min.

Let Y = P X Q; then Y^T = Y. From (2.1), we get ||(PA)^T Y (QB) − C||^2 = min, that is, Y is a least squares symmetric solution of Problem II.

Conversely, if Y is a least squares symmetric solution of Problem II, then Y^T = Y and

(2.2)  ||(PA)^T Y (QB) − C||^2 = min.


Let X = P Y Q; then (P X Q)^T = P X Q. From (2.2), we get ||A^T XB − C||^2 = min, that is, X is a least squares (P, Q)-orthogonal symmetric solution of Problem I.
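The equivalence in Theorem 2.1 rests on P^2 = Q^2 = I: the substitution Y = P X Q leaves the residual unchanged. A quick numerical check with hypothetical random data:

```python
import numpy as np

# Theorem 2.1: with Y = P X Q and P^2 = Q^2 = I, the two residuals coincide,
# ||A^T X B - C|| = ||(P A)^T Y (Q B) - C||.
rng = np.random.default_rng(3)
n, m, l = 4, 3, 2
P = np.diag([1.0, -1.0, 1.0, -1.0])   # generalized reflection matrices
Q = np.diag([-1.0, 1.0, 1.0, 1.0])
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, l))
C = rng.standard_normal((m, l))
X = rng.standard_normal((n, n))
Y = P @ X @ Q                          # the substitution used in the proof

r1 = np.linalg.norm(A.T @ X @ B - C)
r2 = np.linalg.norm((P @ A).T @ Y @ (Q @ B) - C)
print(np.isclose(r1, r2))  # True
```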

Lemma 2.2. Let D1 = diag(a1, a2, ..., an) > 0, D2 = diag(b1, b2, ..., bn) > 0, and E = (e_ij) ∈ R^{n×n}. Then there exists a unique S ∈ SR^{n×n} such that

(2.3)  ||D1 S D2 − E||^2 = min,

and

(2.4)  S = φ ∗ (D1 E D2 + D2 E^T D1),

where φ = (φ_ij), φ_ij = 1/(a_i^2 b_j^2 + a_j^2 b_i^2), i, j = 1, 2, ..., n.

Proof. For any S = (s_ij) ∈ SR^{n×n} and E = (e_ij) ∈ R^{n×n}, we have

||D1 S D2 − E||^2 = Σ_{i=1}^{n} Σ_{j=1}^{n} (a_i s_ij b_j − e_ij)^2
= Σ_{i=1}^{n} (a_i s_ii b_i − e_ii)^2 + Σ_{1≤i<j≤n} [(a_i^2 b_j^2 + a_j^2 b_i^2) s_ij^2 − 2(a_i b_j e_ij + a_j b_i e_ji) s_ij + (e_ij^2 + e_ji^2)].

Each summand is a quadratic function of a single entry s_ij, so the minimum is attained uniquely at

s_ij = (a_i b_j e_ij + a_j b_i e_ji) / (a_i^2 b_j^2 + a_j^2 b_i^2),  i, j = 1, 2, ..., n,

which is exactly (2.4). The proof is completed.

Lemma 2.3. (GSVD) Given A ∈ R^{n×m}, B ∈ R^{n×l}, and P, Q ∈ SOR^{n×n}, the generalized singular value decomposition of the matrix pair [PA, QB] is

(2.5)  PA = W Σ_PA U^T,  QB = W Σ_QB V^T,

where U ∈ OR^{m×m}, V ∈ OR^{l×l}, W ∈ R^{n×n} is nonsingular,

Σ_PA = [ I_t  0     0
         0    D_PA  0
         0    0     0
         0    0     0 ]  (n × m),      Σ_QB = [ 0  0     0
                                                0  D_QB  0
                                                0  0     I_{r−s−t}
                                                0  0     0 ]  (n × l),

the rows of Σ_PA and Σ_QB are partitioned as (t, s, r − s − t, n − r), the columns of Σ_PA as (t, s, m − s − t), and the columns of Σ_QB as (l + t − r, s, r − s − t), with r = rank([PA, QB]), s = rank(PA) + rank(QB) − r, t = r − rank(QB), and

D_PA = diag(α1, α2, ..., αs) > 0, D_QB = diag(β1, β2, ..., βs) > 0 with 1 > α1 ≥ α2 ≥ ... ≥ αs > 0, 0 < β1 ≤ β2 ≤ ... ≤ βs < 1, and α_i^2 + β_i^2 = 1, i = 1, 2, ..., s.
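The closed-form minimizer of Lemma 2.2 can be checked numerically. The sketch below uses hypothetical data; it verifies that the Hadamard-product formula (2.4) gives a symmetric matrix that no symmetric perturbation can improve:

```python
import numpy as np

# Lemma 2.2: the unique symmetric minimizer of ||D1 S D2 - E||_F with
# D1 = diag(a) > 0, D2 = diag(b) > 0 is S = phi * (D1 E D2 + D2 E^T D1),
# where * is the Hadamard product and phi[i,j] = 1/(a_i^2 b_j^2 + a_j^2 b_i^2).
def sym_lstsq(a, b, E):
    A2 = np.outer(a**2, b**2)          # A2[i, j] = a_i^2 * b_j^2
    phi = 1.0 / (A2 + A2.T)
    D1, D2 = np.diag(a), np.diag(b)
    return phi * (D1 @ E @ D2 + D2 @ E.T @ D1)

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 1.5, 2.5])
E = rng.standard_normal((3, 3))
S = sym_lstsq(a, b, E)

def resid(S_):
    return np.linalg.norm(np.diag(a) @ S_ @ np.diag(b) - E)

# No symmetric perturbation lowers the residual at S.
ok = True
for _ in range(100):
    T = rng.standard_normal((3, 3))
    T = 1e-3 * (T + T.T)
    ok = ok and resid(S + T) >= resid(S)
print(np.allclose(S, S.T) and ok)  # True
```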


Let W = (W1, W2, W3, W4), W1 ∈ R^{n×t}, W2 ∈ R^{n×s}, W3 ∈ R^{n×(r−s−t)}, W4 ∈ R^{n×(n−r)}, and let

(2.6)  W^T Y W = (X̃_ij),  X̃_ij = W_i^T Y W_j,  i, j = 1, 2, 3, 4,

(2.7)  U^T C V = [ C̃11  C̃12  C̃13
                   C̃21  C̃22  C̃23
                   C̃31  C̃32  C̃33 ],

where the row partition of U^T C V is (t, s, m − s − t) and the column partition is (l + t − r, s, r − s − t).

Theorem 2.4. Given A ∈ R^{n×m}, B ∈ R^{n×l}, C ∈ R^{m×l}, and P, Q ∈ SOR^{n×n}. Let the GSVD of the matrix pair [PA, QB] be of form (2.5), and partition W^T Y W and U^T C V according to (2.6) and (2.7), respectively. Then the general solution of Problem II can be expressed as

(2.8)  Y = W^{−T} [ X̃11                C̃12 D_QB^{−1}      C̃13                X̃14
                    D_QB^{−1} C̃12^T    X̃22                D_PA^{−1} C̃23     X̃24
                    C̃13^T              C̃23^T D_PA^{−1}    X̃33                X̃34
                    X̃14^T              X̃24^T              X̃34^T              X̃44 ] W^{−1},

where

X̃22 = φ̃ ∗ (D_PA C̃22 D_QB + D_QB C̃22^T D_PA),  φ̃ = (φ̃_ij),  φ̃_ij = 1/(α_i^2 β_j^2 + α_j^2 β_i^2), i, j = 1, 2, ..., s,

the matrices X̃11, X̃33, X̃44 are arbitrary symmetric matrices, and X̃14, X̃24, X̃34 are arbitrary.
Proof. Suppose that Y is a least squares symmetric solution of Problem II. Then Y^T = Y, and so (W^T Y W)^T = W^T Y W, i.e., X̃_ij = X̃_ji^T, i, j = 1, 2, 3, 4.

Substituting PA and QB from (2.5) into (1.2), the orthogonal invariance of the Frobenius norm together with (2.6) and (2.7) gives

||(PA)^T Y (QB) − C||^2
= ||U Σ_PA^T W^T Y W Σ_QB V^T − C||^2
= ||Σ_PA^T (W^T Y W) Σ_QB − U^T C V||^2

= || [ 0   X̃12 D_QB        X̃13
       0   D_PA X̃22 D_QB   D_PA X̃23
       0   0                0 ]  −  [ C̃11  C̃12  C̃13
                                      C̃21  C̃22  C̃23
                                      C̃31  C̃32  C̃33 ] ||^2.

So the condition ||(PA)^T Y (QB) − C||^2 = min is equivalent to the following conditions:

||X̃12 D_QB − C̃12||^2 = min,  ||D_PA X̃22 D_QB − C̃22||^2 = min,
||X̃13 − C̃13||^2 = min,  ||D_PA X̃23 − C̃23||^2 = min.


Then X̃12 = C̃12 D_QB^{−1}, X̃13 = C̃13, X̃23 = D_PA^{−1} C̃23. From Lemma 2.2, X̃22 = φ̃ ∗ (D_PA C̃22 D_QB + D_QB C̃22^T D_PA) with φ̃ = (φ̃_ij), φ̃_ij = 1/(α_i^2 β_j^2 + α_j^2 β_i^2), i, j = 1, 2, ..., s. Then the general solution of Problem II can be expressed as (2.8). The proof is completed.
Theorem 2.5. Let A ∈ R^{n×m}, B ∈ R^{n×l}, C ∈ R^{m×l}, and P, Q ∈ SOR^{n×n}. If the GSVD of [PA, QB] is of form (2.5), and W^T Y W and U^T C V are partitioned as in (2.6) and (2.7), respectively, then the general solution of Problem I can be expressed as

(2.9)  X = P W^{−T} [ X̃11                C̃12 D_QB^{−1}      C̃13                X̃14
                      D_QB^{−1} C̃12^T    X̃22                D_PA^{−1} C̃23     X̃24
                      C̃13^T              C̃23^T D_PA^{−1}    X̃33                X̃34
                      X̃14^T              X̃24^T              X̃34^T              X̃44 ] W^{−1} Q,

where X̃11, X̃22, X̃33, X̃44, X̃14, X̃24, X̃34 are the same as in Theorem 2.4.
Proof. This follows directly from Theorems 2.1 and 2.4.
3. The solutions of Problems III and IV. In this section, we derive analytical expressions of the solutions of Problems III and IV. To this end, we first transform the least squares problem (1.1) for the matrix equation A^T XB = C into a consistent matrix equation by using the projection theorem.

Lemma 3.1. (Projection Theorem [18]) Let S be an inner product space and K a subspace of S. For given x ∈ S, if there exists y0 ∈ K such that ||x − y0|| ≤ ||x − y|| holds for all y ∈ K, then y0 is unique. Moreover, y0 is the unique minimizing vector in K if and only if (x − y0) ⊥ K.
Theorem 3.2. Suppose that the matrices P, Q, A, B, and C are given in Problem I, and the matrix X0 is a solution of Problem I. Let

(3.1)  C0 = A^T X0 B.

Then the (P, Q)-orthogonal symmetric solution set of the consistent matrix equation

(3.2)  A^T XB = C0

is the same as the solution set of Problem I.

Proof. Let

L = {Y | Y = A^T XB, X ∈ SR^{n×n}(P, Q)}.

Then L is a subspace of R^{m×l}. From (3.1), it is obvious that C0 ∈ L, and

||A^T X0 B − C|| = min_{X ∈ SR^{n×n}(P, Q)} ||A^T XB − C|| = min_{Y ∈ L} ||Y − C||.


Now, by Lemma 3.1, we have

(A^T X0 B − C) ⊥ L.

For all X ∈ SR^{n×n}(P, Q), we have

(A^T XB − A^T X0 B) ∈ L.

It then follows that

||A^T XB − C||^2 = ||A^T XB − A^T X0 B + A^T X0 B − C||^2 = ||A^T XB − A^T X0 B||^2 + ||A^T X0 B − C||^2.
Hence, the conclusion of this theorem holds.
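The Pythagorean identity behind Theorem 3.2 is the projection theorem of Lemma 3.1. A small NumPy sketch with hypothetical data, taking K to be the column space of a random matrix and computing the orthogonal projection with a least squares solve:

```python
import numpy as np

# Lemma 3.1 sketch: if y0 is the orthogonal projection of x onto a subspace K,
# then (x - y0) is perpendicular to K, and hence
# ||x - y||^2 = ||x - y0||^2 + ||y0 - y||^2 for every y in K.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 3))                   # K = column space of M
x = rng.standard_normal(6)
y0 = M @ np.linalg.lstsq(M, x, rcond=None)[0]     # projection of x onto K

y = M @ rng.standard_normal(3)                    # an arbitrary element of K
lhs = np.linalg.norm(x - y)**2
rhs = np.linalg.norm(x - y0)**2 + np.linalg.norm(y0 - y)**2
print(np.isclose(lhs, rhs))  # True
```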
From Theorem 3.2, we easily see that the optimal approximate (P, Q)-orthogonal symmetric solution X̂ of the consistent matrix equation (3.2) with respect to a given matrix X* is exactly the solution of Problem III. Thus, finding C0 is the crux of solving Problem III, and we need the following theorem.
Theorem 3.3. Suppose that the matrices P, Q, A, B, and C are given in Problem I. Let the GSVD of the matrix pair [PA, QB] be of form (2.5). Then the matrix C0 can be expressed as

(3.3)  C0 = U [ 0   C̃12             C̃13
                0   D_PA X̃22 D_QB   C̃23
                0   0               0 ] V^T,

where X̃22 is the same as in Theorem 2.4.
Proof. From Theorem 2.5, the least squares (P, Q)-orthogonal symmetric solution X0 of Problem I is given by (2.9). Substituting (2.9) and (2.5) into the equation C0 = A^T X0 B, a straightforward computation immediately yields (3.3).

Evidently, (3.3) shows that the matrix C0 in Theorem 3.3 depends only on the given matrices A, B, C, P, and Q, and is independent of the particular least squares (P, Q)-orthogonal symmetric solution X0 of Problem I. Furthermore, we can conclude that

||C0 − C||^2 = min_{X ∈ SR^{n×n}(P, Q)} ||A^T XB − C||^2.

From the equation above, the matrix equation A^T XB = C is consistent if and only if C0 = C.

To derive the solutions of Problems III and IV, we need the CCD of the matrix pair [PA, QB].


Lemma 3.4. Suppose that the matrices P, Q, A, B, and C are given in Problem I. Decompose the matrix pair [PA, QB] by the CCD (see [10]) as

(3.4)  PA = M (Π_PA, 0) E_PA,   QB = M (Π_QB, 0) E_QB,

where E_PA ∈ R^{m×m} and E_QB ∈ R^{l×l} are nonsingular matrices, M ∈ OR^{n×n}, and

Π_PA = [ I_s  0    0
         0    Λ_j  0
         0    0    0
         0    0    0
         0    ∆_j  0
         0    0    I_{t′} ],      Π_QB = [ I_h
                                           0 ]

are block matrices whose rows are partitioned as (s, j, h − s − j, n − h − j − t′, j, t′), with the diagonal matrices Λ_j and ∆_j given by

Λ_j = diag(λ1, ..., λj), 1 > λ1 ≥ ... ≥ λj > 0,
∆_j = diag(σ1, ..., σj), 0 < σ1 ≤ ... ≤ σj < 1, and Λ_j^2 + ∆_j^2 = I_j.

Here, g = rank(PA), h = rank(QB), s = rank(PA) + rank(QB) − rank([PA, QB]), j = rank(B^T PA) − s, t′ = rank(PA) − s − j, and g = s + j + t′.
Let M = (M1, M2, M3, M4, M5, M6), M1 ∈ R^{n×s}, M2 ∈ R^{n×j}, M3 ∈ R^{n×(h−s−j)}, M4 ∈ R^{n×(n−h−j−t′)}, M5 ∈ R^{n×j}, M6 ∈ R^{n×t′}. Partition M^T Y M and E_PA^{−T} C0 E_QB^{−1} into the following forms:

(3.5)  M^T Y M = (X_ij),  X_ij = M_i^T Y M_j,  i, j = 1, 2, ..., 6,

(3.6)  E_PA^{−T} C0 E_QB^{−1} = [ C11  C12  C13  C14
                                  C21  C22  C23  C24
                                  C31  C32  C33  C34
                                  C41  C42  C43  C44 ],

where the row partition is (s, j, t′, m − g) and the column partition is (s, j, h − s − j, l − h).

Theorem 3.5. Suppose that the matrices P, Q, A, B, and C are given in Problem I. Then the equation A^T XB = C has a solution X ∈ SR^{n×n}(P, Q) if and only if the equation (PA)^T Y (QB) = C has a solution Y ∈ SR^{n×n}, and in this case X = P Y Q.

Proof. Suppose X is a (P, Q)-orthogonal symmetric solution of the equation A^T XB = C, and let Y = P X Q. Then Y^T = Y and (PA)^T Y (QB) = C, that is, Y is a symmetric solution of the equation (PA)^T Y (QB) = C.

Conversely, if the equation (PA)^T Y (QB) = C has a solution Y ∈ SR^{n×n}, let X = P Y Q. Then (P X Q)^T = P X Q and A^T XB = C, that is, X is a (P, Q)-orthogonal symmetric solution of the equation A^T XB = C. The proof is completed.


Theorem 3.6. Suppose that the matrices P, Q, A, B are given in Problem I, and C0 is given by (3.3). Let the CCD of the matrix pair [PA, QB] be of form (3.4), and partition M^T Y M and E_PA^{−T} C0 E_QB^{−1} according to (3.5) and (3.6), respectively. Then the general (P, Q)-orthogonal symmetric solution of the equation A^T XB = C0 can be expressed as

(3.7)  X = P M [ C11     C12     C13     X14     X15     C31^T
                 C12^T   X22     X23     X24     X25     C32^T
                 C13^T   X23^T   X33     X34     X35     C33^T
                 X14^T   X24^T   X34^T   X44     X45     X46
                 X15^T   X25^T   X35^T   X45^T   X55     X56
                 C31     C32     C33     X46^T   X56^T   X66 ] M^T Q,

where

X15 = (C21^T − C12 Λ_j) ∆_j^{−1},  X25 = (C22^T − X22 Λ_j) ∆_j^{−1},  X35 = (C23^T − X23^T Λ_j) ∆_j^{−1},

the matrices X_ii, i = 2, 3, 4, 5, 6, are arbitrary symmetric matrices of suitable dimensions, and the other unknown X_ij are arbitrary.
Proof. From Theorem 3.5, we first consider the symmetric solutions of the equation (PA)^T Y (QB) = C0. Since Y^T = Y, the matrix M^T Y M is symmetric, i.e., X_ij = X_ji^T, i, j = 1, 2, ..., 6. Inserting PA and QB from (3.4) into the equation (PA)^T Y (QB) = C0, we get

E_PA^T (Π_PA, 0)^T M^T Y M (Π_QB, 0) E_QB = C0.

Since E_PA and E_QB are nonsingular,

(Π_PA, 0)^T M^T Y M (Π_QB, 0) = E_PA^{−T} C0 E_QB^{−1}.

According to (3.5) and (3.6), we have

[ X11                  X12                  X13                  0
  Λ_j X21 + ∆_j X51    Λ_j X22 + ∆_j X52    Λ_j X23 + ∆_j X53    0
  X61                  X62                  X63                  0
  0                    0                    0                    0 ]  =  [ C11  C12  C13  C14
                                                                           C21  C22  C23  C24
                                                                           C31  C32  C33  C34
                                                                           C41  C42  C43  C44 ].

From the above equation, we get

C11^T = C11,  C_i4 = 0,  C_4j = 0,  (i, j = 1, 2, 3, 4),

and

X11 = C11, X12 = C12, X13 = C13, X61 = C31, X62 = C32, X63 = C33,
Λ_j X21 + ∆_j X51 = C21, Λ_j X22 + ∆_j X52 = C22, Λ_j X23 + ∆_j X53 = C23.


After straightforward computation, and from Theorem 3.5, the (P, Q)-orthogonal symmetric solution of the equation A^T XB = C0 can be expressed as (3.7).

The following lemmas are important for deriving an analytical formula of the solution of Problem III.
Lemma 3.7. ([15]) Suppose that the matrices E ∈ R^{n×m}, K ∈ R^{m×n}, F ∈ R^{n×m}, H ∈ R^{m×n}, and D = diag(a1, a2, ..., an) > 0.

(1) There exists a unique G ∈ R^{n×m} such that

g(G) = ||G − E||^2 + ||G^T − K||^2 = min,

and

G = (1/2)(E + K^T).

(2) There exists a unique G ∈ R^{n×m} such that

g(G) = ||G − E||^2 + ||G^T − K||^2 + ||DG − F||^2 + ||G^T D − H||^2 = min,

and

G = (1/2) φ1 ∗ (E + K^T + DF + DH^T),

with φ1 = (φ_ij), φ_ij = 1/(1 + a_i^2), (i = 1, 2, ..., n; j = 1, 2, ..., m).

Lemma 3.8. ([15]) Suppose that D = diag(a1, a2, ..., an) > 0 and E, F, H ∈ R^{n×n}. Then there exists a unique G ∈ SR^{n×n} such that

g(G) = ||G − E||^2 + ||DG − F||^2 + ||G^T D − H||^2 = min,

and

G = (1/2) φ2 ∗ (E + E^T + D(F + H^T) + (F^T + H)D),

with φ2 = (φ_ij), φ_ij = 1/(1 + a_i^2 + a_j^2), (i, j = 1, 2, ..., n).
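Part (1) of Lemma 3.7 is easy to verify numerically: g is a strictly convex quadratic, so g(G + T) = g(G) + 2||T||^2 at the minimizer. A sketch with hypothetical data:

```python
import numpy as np

# Lemma 3.7(1): G = (E + K^T)/2 is the unique minimizer of
# g(G) = ||G - E||_F^2 + ||G^T - K||_F^2.
rng = np.random.default_rng(2)
E = rng.standard_normal((3, 4))
K = rng.standard_normal((4, 3))
G = 0.5 * (E + K.T)

def g(G_):
    return np.linalg.norm(G_ - E)**2 + np.linalg.norm(G_.T - K)**2

# No perturbation improves the objective: g(G + T) = g(G) + 2||T||_F^2.
ok = all(g(G + 1e-3 * rng.standard_normal((3, 4))) >= g(G) for _ in range(100))
print(ok)  # True
```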
Theorem 3.9. Given X* ∈ R^{n×n}, and let the matrices P, Q, A, B, C be the same as in Problem I. Partition the matrix M^T P X* Q M into the form

(3.8)  M^T P X* Q M = (X̄_ij)_{6×6}


compatibly with the row partitioning of Π_PA, where the matrix M is given in (3.4). Then there exists a unique solution X̂ of Problem III, and X̂ can be expressed as

(3.9)  X̂ = P M [ C11      C12      C13      X̂14      X̂15      C31^T
                  C12^T    X̂22     X̂23     X̂24      X̂25      C32^T
                  C13^T    X̂23^T   X̂33     X̂34      X̂35      C33^T
                  X̂14^T   X̂24^T   X̂34^T   X̂44      X̂45      X̂46
                  X̂15^T   X̂25^T   X̂35^T   X̂45^T    X̂55      X̂56
                  C31      C32      C33      X̂46^T    X̂56^T    X̂66 ] M^T Q,

where

X̂22 = (1/2) φ̂ ∗ [∆_j^2 (X̄22 + X̄22^T) ∆_j^2 + 2(Λ_j C22 ∆_j^2 + ∆_j^2 C22^T Λ_j) − ∆_j Λ_j (X̄52 + X̄25^T) ∆_j^2 − ∆_j^2 (X̄52^T + X̄25) Λ_j ∆_j],

X̂23 = (1/2) ∆_j^2 (X̄23 + X̄32^T) + Λ_j C23 − (1/2) Λ_j ∆_j (X̄35^T + X̄53),

X̂15 = (C21^T − C12 Λ_j) ∆_j^{−1},  X̂25 = (C22^T − X̂22 Λ_j) ∆_j^{−1},  X̂35 = (C23^T − X̂23^T Λ_j) ∆_j^{−1},

with φ̂ = (φ̂_ij), φ̂_ij = 1/(σ_i^2 σ_j^2 + λ_i^2 σ_j^2 + λ_j^2 σ_i^2), and the other unknown blocks X̂_ij = (1/2)(X̄_ij + X̄_ji^T).
Proof. It is easy to verify that the solution set SE is nonempty and is a closed
convex set. Therefore, there exists a unique solution for Problem III [17]. From
Theorems 3.2 and 3.3, we know that the solution set SE of Problem I is the same as
the (P, Q)-orthogonal symmetric solution set of the consistent equation (3.2). From
Theorem 3.6, we know that the (P, Q)-orthogonal symmetric solution of the consistent
equation (3.2) can be expressed as (3.7).
From the orthogonal invariance of the Frobenius norm together with (3.8) and
(3.7), we have
||X − X*||^2 = ||M^T P X Q M − M^T P X* Q M||^2

= || [ C11 − X̄11      C12 − X̄12      C13 − X̄13      X14 − X̄14      X15 − X̄15      C31^T − X̄16
       C12^T − X̄21    X22 − X̄22      X23 − X̄23      X24 − X̄24      X25 − X̄25      C32^T − X̄26
       C13^T − X̄31    X23^T − X̄32    X33 − X̄33      X34 − X̄34      X35 − X̄35      C33^T − X̄36
       X14^T − X̄41    X24^T − X̄42    X34^T − X̄43    X44 − X̄44      X45 − X̄45      X46 − X̄46
       X15^T − X̄51    X25^T − X̄52    X35^T − X̄53    X45^T − X̄54    X55 − X̄55      X56 − X̄56
       C31 − X̄61      C32 − X̄62      C33 − X̄63      X46^T − X̄64    X56^T − X̄65    X66 − X̄66 ] ||^2.

Hence,

||X − X*||^2 = min,  ∀X ∈ SE


if and only if

(3.10)  ||X_ii − X̄_ii||^2 = min, i = 3, 4, 5, 6,
        ||X_i4 − X̄_i4||^2 + ||X_i4^T − X̄_4i||^2 = min, i = 1, 2, 3,
        ||X_4j − X̄_4j||^2 + ||X_4j^T − X̄_j4||^2 = min, j = 5, 6,
        ||X56 − X̄56||^2 + ||X56^T − X̄65||^2 = min,

and

(3.11)  ||X22 − X̄22||^2 + ||(C22^T − X22 Λ_j) ∆_j^{−1} − X̄25||^2 + ||∆_j^{−1} (C22 − Λ_j X22) − X̄52||^2 = min,

and

(3.12)  ||X23 − X̄23||^2 + ||∆_j^{−1} (C23 − Λ_j X23) − X̄53||^2 + ||X23^T − X̄32||^2 + ||(C23^T − X23^T Λ_j) ∆_j^{−1} − X̄35||^2 = min.

By making use of Lemma 3.7(1) (together with its symmetric analogue for the diagonal blocks), we know that the solutions of (3.10) are of the form

X̂_ij = (1/2)(X̄_ij + X̄_ji^T).

By Lemma 3.8, we know that the solution of (3.11) is

X̂22 = (1/2) φ̂ ∗ [∆_j^2 (X̄22 + X̄22^T) ∆_j^2 + 2(Λ_j C22 ∆_j^2 + ∆_j^2 C22^T Λ_j) − ∆_j Λ_j (X̄52 + X̄25^T) ∆_j^2 − ∆_j^2 (X̄52^T + X̄25) Λ_j ∆_j],

with φ̂ = (φ̂_ij), φ̂_ij = 1/(σ_i^2 σ_j^2 + λ_i^2 σ_j^2 + λ_j^2 σ_i^2). From Lemma 3.7(2) and (3.12), we get

X̂23 = (1/2) ∆_j^2 (X̄23 + X̄32^T) + Λ_j C23 − (1/2) Λ_j ∆_j (X̄35^T + X̄53).

From Theorem 3.6, we immediately get X̂25 = (C22^T − X̂22 Λ_j) ∆_j^{−1} and X̂35 = (C23^T − X̂23^T Λ_j) ∆_j^{−1}. Then the proof is completed.
Setting X* = 0 in Theorem 3.9, we derive an analytical expression of the solution of Problem IV.
Theorem 3.10. Let the matrices P, Q, A, B, C be given in Problem I. Then there exists a unique solution X̃ of Problem IV, and X̃ can be expressed as

(3.13)  X̃ = P M [ C11        C12         C13        0   X̂15         C31^T
                   C12^T      X̂22        Λ_j C23    0   X̂25         C32^T
                   C13^T      C23^T Λ_j   0          0   C23^T ∆_j    C33^T
                   0          0           0          0   0            0
                   X̂15^T     X̂25^T      ∆_j C23    0   0            0
                   C31        C32         C33        0   0            0 ] M^T Q,


where X̂15, X̂25, X̂22 are the same as in Theorem 3.9 (with X̄_ij = 0).

Proof. The proof is similar to that of Theorem 3.9; it suffices to set X* = 0, and (3.13) then follows from (3.9).

4. Numerical examples. Based on Theorem 3.9, we formulate the following algorithm for finding the solution X̂ of Problem III.

Algorithm
Step 1. Input the matrices A, B, C, P, Q, and X*;
Step 2. Compute the GSVD of the matrix pair [PA, QB], and partition U^T C V according to (2.7);
Step 3. Compute C0 by (3.3);
Step 4. Compute the CCD of the matrix pair [PA, QB], and partition E_PA^{−T} C0 E_QB^{−1} according to (3.6);
Step 5. Compute X̂ by (3.9).
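The GSVD and CCD used above are not available in every standard numerical library. As a small-scale cross-check of Step 3, the least squares problem of Problem I can also be solved directly: write X = P S Q with S symmetric, expand over a basis of symmetric matrices, and use a dense least squares solve. The sketch below (function name and data are hypothetical; this is an O(n^2)-unknown brute-force check, not the analytical route of the Algorithm) returns a least squares (P, Q)-orthogonal symmetric solution X0, from which C0 = A^T X0 B:

```python
import numpy as np

def lstsq_pq_symmetric(A, B, C, P, Q):
    """Least squares (P,Q)-orthogonal symmetric solution of A^T X B ~ C,
    via X = P S Q with S symmetric and a dense lstsq over a symmetric basis."""
    n = A.shape[0]
    basis, cols = [], []
    for i in range(n):
        for k in range(i, n):
            S = np.zeros((n, n))
            S[i, k] = S[k, i] = 1.0            # basis element of SR^{n x n}
            basis.append(S)
            cols.append((A.T @ (P @ S @ Q) @ B).ravel())
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), C.ravel(), rcond=None)
    S = sum(c * Sb for c, Sb in zip(coef, basis))
    return P @ S @ Q

rng = np.random.default_rng(4)
n, m, l = 4, 3, 2
P = np.diag([1.0, -1.0, 1.0, 1.0])             # hypothetical reflections
Q = np.diag([1.0, 1.0, -1.0, 1.0])
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, l))
C = rng.standard_normal((m, l))
X0 = lstsq_pq_symmetric(A, B, C, P, Q)
Y0 = P @ X0 @ Q
print(np.allclose(Y0, Y0.T))  # True: X0 is (P,Q)-orthogonal symmetric
```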
Example 4.1. Given

A = [  0.7119   1.1908  −0.1567  −1.0565
       1.2902  −1.2025  −1.6041   1.4151
       0.6686  −0.0198   0.2573  −0.8051 ],      B = [  0.5287  −2.1707  0.6145
                                                        0.2193  −0.0592  0.5077
                                                       −0.9219  −1.0106  1.692 ],

C = [ −0.4326  −1.1465   0.3273
      −1.6656   1.1909   0.1746
       0.1253   1.1892  −0.1867
       0.2877  −0.0376   0.7258 ],      X* = [ −0.4326   0.2877   1.1892
                                               −1.6656  −1.1465  −0.0376
                                                0.1253   1.1909   0.3273 ],

and

P = [ −0.2105  −0.6612   0.7201
      −0.6612  −0.4463  −0.6031
       0.7201  −0.6031  −0.3432 ],      Q = [ 0.3306   0.0408   0.9429
                                              0.0408  −0.9987   0.0289
                                              0.9429   0.0289  −0.3318 ].
By using the Algorithm, we get the matrix C0 and the unique solution of Problem III as follows:

C0 = [ −0.0935  −0.6535   0.5605
       −0.1044   0.0193   0.2883
        0.1114   0.3393  −0.1869
        0.0095   0.1344  −0.4000 ],      X̂ = [ 0.0188   0.1046  0.1768
                                               0.0785  −0.0959  0.0694
                                               0.1123   0.2182  0.0267 ].



ˆ = C0 , and kX
ˆ − X ∗ k = 2.1273. So
It is easy to verify that (P XQ)T = P XQ, AT XB
the algorithm is feasible.
Example 4.2. Let

A = [  1     −1     1     1    1
       0      0     0     0    0
      −1.2   −0.4   0.8  −0.8  0
      −0.9   −0.6   0.6  −0.6  0
       1      0     1    −1    0
       0      0     0     0    0 ],      B = [  2     0     0    0
                                               −1     1    −0.8  0
                                                0     0     0    0
                                                1.2  −1     0    0
                                                0     0     0    0
                                                0     0     0    0 ],

C = [ 3.7168  −1.3336  −0.2051   0.0036
      3.9767  −5.8704   3.9192  −1.1718
      3.1742  −4.9043   2.9143  −0.9511
      5.9993  −5.2680   3.0619   0.6273
      2.2422  −2.2761   1.3852   0.5735 ],

X* = [ 0.5   1     0.36  1     1.5   1.2
       0.2   1     0.3   1.2   2     1
       1     1.5   0.1   1.4   1     1
       1.3   1     0.5   1.4   1.2   0.5
       1     2.3   1.2   0.4   2     1.5
       0.4   0.8   1.2   1.2   0.6   0 ],

and

P = diag(1, 1, −1, −1, 1, 1),   Q = diag(1, 1, −1, 1, 1, −1).

According to the Algorithm, we get

C0 = [ 3.3260  −1.4450  0.1227  0
       4.7817  −4.9037  4.1157  0
       3.5890  −4.0614  3.4539  0
       6.5907  −4.7240  2.9934  0
       2.4131  −2.0962  1.3866  0 ],

and

X̂ = [  0.1221   −1.7333   −0.6189     0.3631   −0.3802   0.4001
       −1.7333    1.0051  −23.6549   −34.8874    4.5926   1.0001
       −0.6189  −23.6549    0.1443    −1.2418    1.0103  −1.2000
       −0.3631   34.8874    1.2418     0.6237    0.0003   0.8498
       −0.3802    4.5926    1.0103    −0.0003    1.9557  −0.6000
       −0.4001   −1.0001    1.2000     0.8498    0.6000   0 ].


It is easy to verify that X̂ is the (P, Q)-orthogonal symmetric solution of the equation A^T XB = C0, and ||X̂ − X*|| = 43.7618.
REFERENCES

[1] J.P. Aubin. Applied Functional Analysis. John Wiley and Sons, 1979.
[2] M. Baruch. Optimization procedure to correct stiffness and flexibility matrices using vibration tests. AIAA Journal, 16:1208–1210, 1978.
[3] A. Berman and E.J. Nagy. Improvement of a large analytical model using test data. AIAA Journal, 21:1168–1173, 1983.
[4] J. Cai and G.L. Chen. An iterative method for solving a kind of constrained linear matrix equations system. Computational and Applied Mathematics, 28:309–328, 2009.
[5] K.W.E. Chu. Symmetric solutions of linear matrix equations by matrix decompositions. Linear Algebra and its Applications, 119:25–50, 1989.
[6] H. Dai. Linear matrix equations from an inverse problem of vibration theory. Linear Algebra and its Applications, 246:31–47, 1996.
[7] H. Dai. Correction of structural dynamic model. In Proceedings of the Third Conference on Inverse Eigenvalue Problems, Nanjing, China, pp. 47–51, 1992.
[8] Y.B. Deng, X.Y. Hu, and L. Zhang. Least squares solution of B^T XA = T over symmetric, skew-symmetric and positive semidefinite X. SIAM Journal on Matrix Analysis and Applications, 25:486–494, 2003.
[9] F.J.H. Don. On the symmetric solution of a linear matrix equation. Linear Algebra and its Applications, 93:1–7, 1987.
[10] G.H. Golub and H. Zha. Perturbation analysis of the canonical correlations of matrix pairs. Linear Algebra and its Applications, 210:3–28, 1994.
[11] A.P. Liao and Y. Lei. Least-squares solution with the minimum-norm for the matrix equation (AXB, GXH) = (C, D). Computers and Mathematics with Applications, 50:539–549, 2005.
[12] J.R. Magnus. L-structured matrices and linear matrix equations. Linear and Multilinear Algebra, 14:67–88, 1983.
[13] J. Peng, X.Y. Hu, and L. Zhang. The (M, N)-symmetric Procrustes problem. Applied Mathematics and Computation, 198:24–34, 2008.
[14] X.Y. Peng, X.Y. Hu, and L. Zhang. The orthogonal-symmetric or orthogonal-anti-symmetric least squares solutions of the matrix equation. Journal of Engineering Mathematics, 6:1048–1052, 2006.
[15] X.Y. Peng, X.Y. Hu, and L. Zhang. The orthogonal-symmetric solutions of the matrix equation and the optimal approximation. Numerical Mathematics: A Journal of Chinese Universities, 29:126–132, 2007.
[16] X.Y. Peng, X.Y. Hu, and L. Zhang. The reflexive and anti-reflexive solutions of the matrix equation A^H XB = C. Journal of Computational and Applied Mathematics, 186:638–645, 2007.
[17] Y.Y. Qiu, Z.Y. Zhang, and J.F. Lu. Matrix iterative solutions to the least squares problem of B^T XA = F with some linear constraints. Applied Mathematics and Computation, 185:284–300, 2007.
[18] R.S. Wang. Functional Analysis and Optimization Theory. Beijing University of Aeronautics and Astronautics Press, Beijing, 2003.
[19] M. Wei. Some new properties of the equality constrained and weighted least squares problem. Linear Algebra and its Applications, 320:145–165, 2000.
