TELKOMNIKA, Vol.14, No.3, September 2016, pp. 1134~1141
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v14i3.3785

 1134

Quasi-Newton Method for Absolute Value Equation Based on Upper Uniform Smoothing Approximation Function
Longquan Yong*, Shouheng Tuo
School of Mathematics and Computer Science, Shaanxi University of Technology,
Hanzhong 723001, China
*Corresponding author, e-mail: yonglongquan@126.com

Abstract
In general, the absolute value equation (AVE), Ax - |x| = b, is an NP-hard problem; finding all solutions of an AVE with multiple solutions is harder still. In this paper, an upper uniform smoothing approximation function of the absolute value function is proposed, and some of its properties are studied. The AVE Ax - |x| = b, where A is a square matrix whose singular values exceed one, is then transformed into a smooth optimization problem by means of this upper uniform smoothing approximation function and solved by a quasi-Newton method. Numerical results on a set of given AVE problems demonstrate that the algorithm is effective and outperforms the method based on the lower uniform smoothing approximation function.
Keywords: quasi-Newton method, absolute value equation, absolute value function, upper uniform
smoothing approximation function, singular value
Copyright © 2016 Universitas Ahmad Dahlan. All rights reserved.

1. Introduction
Consider the absolute value equation (AVE):

Ax − |x| = b,    (1)

where A ∈ R^{n×n}, x, b ∈ R^n, and |x| denotes the vector whose components are the absolute values of the components of x. Mangasarian et al. showed that AVE (1) is NP-hard in its general form; hence, how to solve the AVE directly attracts much attention [1].
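For instance (a small illustrative example, not taken from [1]), with A = 3I ∈ R^{2×2} and b = (2, 2)^T, the vector x = (1, 1)^T satisfies (1), since 3·1 − |1| = 2 in each component; here both singular values of A equal 3 > 1, so, as recalled in Lemma 3.1 below, this solution is unique.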
Based on a reformulation of the AVE (1) as a parameter-free piecewise-linear concave minimization problem on a polyhedral set, Mangasarian [2] proposed a finite computational algorithm that solves a finite succession of linear programs. In subsequent papers, Mangasarian [3, 4] proposed a semismooth Newton method for solving the AVE, which greatly reduces the computation time compared with the succession of linear programs (SLP) method. It was shown that the semismooth Newton iterates are well defined and bounded when the singular values of A exceed 1. However, global linear convergence of the method is guaranteed only under a condition more stringent than requiring the singular values of A to exceed 1. Mangasarian [5] formulated the NP-hard n-dimensional knapsack feasibility problem as an equivalent absolute value equation in an n-dimensional noninteger real variable space and proposed a finite succession of linear programs for solving the AVE (1).
A generalized Newton method with global and finite convergence was proposed for the AVE by Zhang et al. [6]. A smoothing Newton algorithm for the AVE (1) was presented by Caccetta [7]; the algorithm was proved to be globally convergent, with a quadratic convergence rate, under the condition that the singular values of A exceed 1, which is weaker than the condition used by Mangasarian [3, 4]. Recently, the AVE (1) has been investigated by Rohn [8-10]; in particular, Rohn proposed an algorithm for computing all solutions of the AVE. An iterative method, a minimum norm solution approach, and a Picard-HSS iteration method for the AVE are presented in [11-13].
A harmony search algorithm with chaos and a smoothing Newton method based on the lower uniform smoothing approximation function are proposed in [14, 15]. Yong applied the AVE to the solution of two-point boundary value problems for linear differential equations [16].
Compared with [15], the main contribution of the present work is the construction of an upper uniform smoothing approximation function of the absolute value function. After the absolute value function is replaced by this approximation, the non-smooth AVE is formulated as a system of smooth nonlinear equations and, further, as an unconstrained smooth optimization problem, which is solved by a quasi-Newton method.
The upper uniform smoothing approximation function of the absolute value function is given, and its properties are studied, in Section 2. In Section 3 the non-smooth AVE is formulated as a system of smooth nonlinear equations and then as an unconstrained smooth optimization problem; some lemmas on the AVE that will be used later are also given there. The quasi-Newton method for the AVE is proposed in Section 4. The effectiveness of the method is demonstrated in Section 5 by solving some randomly generated AVE problems with the singular values of A exceeding 1. Section 6 concludes the paper.

All vectors are column vectors unless transposed to a row vector. For x ∈ R^n, the 2-norm is denoted by ||x||, while |x| denotes the vector whose components are the absolute values of the components of x. The notation A ∈ R^{n×n} signifies a real n × n matrix; for such a matrix, A^T denotes the transpose of A. We write I for the identity matrix and e for the vector of all ones (I and e have dimensions suitable to the context), and X = diag{x_i} for the diagonal matrix whose diagonal elements are the coordinates x_i of x ∈ R^n.

2. Upper Uniform Smoothing Approximation Function of Absolute Value Function
Definition 2.1. A function f_µ : R^1 → R^1, µ > 0, is called a uniformly smoothing approximation function of a non-smooth function f : R^1 → R^1 if, for any t ∈ R^1, f_µ is continuously differentiable and there exists a constant κ such that

|f_µ(t) − f(t)| ≤ κµ,  ∀µ > 0,

where κ > 0 is a constant independent of t.
Obviously, the absolute value function φ(t) = |t| is non-smooth. In the following we give a uniformly smoothing approximation function of φ(t).

Theorem 2.1. Let

φ_µ(t) = sin µ · ln( e^{t/sin µ} + e^{−t/sin µ} ),  0 < µ < π/2.

Then:
(i) 0 < φ_µ(t) − φ(t) ≤ sin µ · ln 2;
(ii) |dφ_µ(t)/dt| < 1, and dφ_µ(t)/dt |_{t=0} = 0.

Proof. (i) We have

φ_µ(t) − φ(t) = sin µ · ln( e^{t/sin µ} + e^{−t/sin µ} ) − sin µ · ln e^{|t|/sin µ} = sin µ · ln( e^{(t−|t|)/sin µ} + e^{(−t−|t|)/sin µ} ).

Since |t| = max{t, −t}, we have t − |t| ≤ 0 and −t − |t| ≤ 0, and at least one of t − |t| and −t − |t| equals 0. Then

0 = sin µ · ln 1 < sin µ · ln( e^{(t−|t|)/sin µ} + e^{(−t−|t|)/sin µ} ) ≤ sin µ · ln(1 + 1) = sin µ · ln 2,

that is, 0 < φ_µ(t) − φ(t) ≤ sin µ · ln 2.

(ii) We compute

dφ_µ(t)/dt = ( e^{t/sin µ} − e^{−t/sin µ} ) / ( e^{t/sin µ} + e^{−t/sin µ} ) = ( e^{2t/sin µ} − 1 ) / ( e^{2t/sin µ} + 1 ).

Thus |dφ_µ(t)/dt| < 1, and

dφ_µ(t)/dt |_{t=0} = [ ( e^{t/sin µ} − e^{−t/sin µ} ) / ( e^{t/sin µ} + e^{−t/sin µ} ) ]_{t=0} = 0.

Figures 1-3 show φ_µ(t) and φ_µ(t) − |t| with µ = 0.6, µ = 0.2, and µ = 0.06.

Figure 1. (a) φ_µ(t) with µ = 0.6; (b) φ_µ(t) − |t| with µ = 0.6

Figure 2. (a) φ_µ(t) with µ = 0.2; (b) φ_µ(t) − |t| with µ = 0.2

Figure 3. (a) φ_µ(t) with µ = 0.06; (b) φ_µ(t) − |t| with µ = 0.06

From Figures 1-3, we can see that φ_µ(t) decreases as the parameter µ decreases, and lim_{µ→0+} φ_µ(t) = φ(t). Hence φ_µ(t) is an upper uniform smoothing approximation function of the absolute value function.
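The following Python snippet gives a minimal numerical check of Theorem 2.1, assuming numpy is available: it evaluates φ_µ(t) stably via log-sum-exp and verifies the bounds 0 < φ_µ(t) − |t| ≤ sin µ · ln 2 and |φ_µ'(t)| < 1 on a sample grid. The helper names phi_mu and phi_mu_prime are illustrative only.

```python
import numpy as np

def phi_mu(t, mu):
    """Upper smoothing approximation of |t| from Theorem 2.1:
    phi_mu(t) = sin(mu) * ln(exp(t/sin(mu)) + exp(-t/sin(mu))),
    evaluated stably via log-sum-exp."""
    s = np.sin(mu)
    return s * np.logaddexp(t / s, -t / s)

def phi_mu_prime(t, mu):
    """Derivative of phi_mu; it simplifies to tanh(t / sin(mu))."""
    return np.tanh(t / np.sin(mu))

if __name__ == "__main__":
    mu = 0.2
    t = np.linspace(-1.0, 1.0, 2001)
    gap = phi_mu(t, mu) - np.abs(t)
    bound = np.sin(mu) * np.log(2)
    # Theorem 2.1(i): 0 < phi_mu(t) - |t| <= sin(mu) * ln 2
    assert np.all(gap > 0.0) and np.all(gap <= bound + 1e-12)
    # Theorem 2.1(ii): |phi_mu'(t)| < 1 and phi_mu'(0) = 0
    assert np.all(np.abs(phi_mu_prime(t, mu)) < 1.0)
    assert phi_mu_prime(0.0, mu) == 0.0
    print("max gap", gap.max(), "<= bound", bound)
```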
3. Smoothing Approximation Function of AVE
Define H : R^n → R^n by:

H(x) := Ax − |x| − b.    (2)

H is a non-smooth function due to the non-differentiability of the absolute value function. Evidently, x is a solution of the AVE (1) if and only if H(x) = 0. In this section we give a smoothing function of H and study its properties. First, we state some lemmas.
Lemma 3.1 [1].
(a) The AVE (1) is uniquely solvable for any b ∈ R^n if the singular values of A exceed 1.
(b) The AVE (1) is uniquely solvable for any b ∈ R^n if ||A^{−1}|| < 1.

Lemma 3.2. For a matrix A ∈ R^{n×n}, the following conditions are equivalent:
(a) the singular values of A exceed 1;
(b) ||A^{−1}|| < 1.

Lemma 3.3 [15]. Suppose that A is nonsingular and ||A^{−1}B|| < 1. Then A + B is nonsingular.

Lemma 3.4 [15]. Let D = diag(d) with d_i ∈ [−1, 1], i = 1, 2, ..., n. Suppose that ||A^{−1}|| < 1. Then A + D is nonsingular.
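As a minimal numerical illustration of the equivalence in Lemma 3.2 (a sketch assuming numpy is available; the construction and names are illustrative), the snippet below builds a matrix with prescribed singular values greater than 1 and checks that ||A^{−1}||_2 < 1, with ||A^{−1}||_2 = 1 / σ_min(A).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Construct A = U * diag(sigma) * V^T with singular values in (1.1, 3).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 1.1 + 1.9 * rng.random(n)
A = U @ np.diag(sigma) @ V.T

min_sv = np.linalg.svd(A, compute_uv=False).min()
inv_norm = np.linalg.norm(np.linalg.inv(A), 2)   # spectral norm of A^{-1}

# Lemma 3.2: singular values of A exceed 1  <=>  ||A^{-1}|| < 1
print(min_sv > 1.0, inv_norm < 1.0, np.isclose(inv_norm, 1.0 / min_sv))
```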

Definition 3.1 [15]. A function H_µ : R^n → R^n, µ > 0, is called a uniformly smoothing approximation function of a non-smooth function H : R^n → R^n if, for any x ∈ R^n, H_µ is continuously differentiable and there exists a constant κ such that

||H_µ(x) − H(x)|| ≤ κµ,  ∀µ > 0,

where κ > 0 is a constant independent of x.


Denote φ(x) = ( φ(x_1), φ(x_2), ..., φ(x_n) )^T and φ_µ(x) = ( φ_µ(x_1), φ_µ(x_2), ..., φ_µ(x_n) )^T. For any 0 < µ < π/2, define H_µ : R^n → R^n by:

H_µ(x) := Ax − φ_µ(x) − b.    (3)

For any 0 < µ < π/2, the Jacobian of H_µ at x ∈ R^n is:

H_µ'(x) = A − diag( φ_µ'(x_1), φ_µ'(x_2), ..., φ_µ'(x_n) ),

where

φ_µ'(x_i) = ( e^{x_i/sin µ} − e^{−x_i/sin µ} ) / ( e^{x_i/sin µ} + e^{−x_i/sin µ} ),  i = 1, 2, ..., n.

Theorem 3.1. Suppose that ||A^{−1}|| < 1. Then H_µ'(x) is nonsingular for any 0 < µ < π/2.
Proof: Note that for any 0 < µ < π/2, by Theorem 2.1, |φ_µ'(x_i)| < 1, i = 1, 2, ..., n. Hence, by Lemma 3.4, H_µ'(x) is nonsingular.

Theorem 3.2. Let H(x) and H_µ(x) be defined as in (2) and (3), respectively. Then H_µ(x) is a uniformly smoothing approximation function of H(x).
Proof: For any 0 < µ < π/2, by Theorem 2.1,

||H_µ(x) − H(x)|| = ||φ_µ(x) − φ(x)|| = ( Σ_{i=1}^{n} |φ_µ(x_i) − φ(x_i)|² )^{1/2} ≤ √n · ln 2 · sin µ.

Since sin µ < µ for µ > 0, the condition of Definition 3.1 holds with κ = √n · ln 2.
Denote by x(µ) the solution of H_µ(x) = 0; then x(µ) converges to the solution of (1) as µ goes to zero.

For any 0 < µ < π/2, define θ : R^n → R and θ_µ : R^n → R by:

θ(x) = (1/2) ||H(x)||²,   θ_µ(x) = (1/2) ||H_µ(x)||².
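The smoothed residual H_µ, its Jacobian, the merit function θ_µ, and the chain-rule gradient ∇θ_µ(x) = [H_µ'(x)]^T H_µ(x) translate directly into code. The following sketch assumes the phi_mu and phi_mu_prime helpers from the Section 2 snippet; the function names are illustrative.

```python
import numpy as np
# assumes phi_mu(t, mu) and phi_mu_prime(t, mu) from the Section 2 sketch

def H_mu(x, A, b, mu):
    """Smoothed residual H_mu(x) = A x - phi_mu(x) - b, cf. (3)."""
    return A @ x - phi_mu(x, mu) - b

def H_mu_jac(x, A, mu):
    """Jacobian H_mu'(x) = A - diag(phi_mu'(x_1), ..., phi_mu'(x_n))."""
    return A - np.diag(phi_mu_prime(x, mu))

def theta_mu(x, A, b, mu):
    """Merit function theta_mu(x) = 0.5 * ||H_mu(x)||^2."""
    r = H_mu(x, A, b, mu)
    return 0.5 * float(r @ r)

def theta_mu_grad(x, A, b, mu):
    """Gradient of theta_mu: [H_mu'(x)]^T H_mu(x)."""
    return H_mu_jac(x, A, mu).T @ H_mu(x, A, b, mu)
```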

Theorem 3.3. Suppose that ||A^{−1}|| < 1. Then, for any 0 < µ < π/2 and x ∈ R^n, ∇θ_µ(x) = 0 implies that θ_µ(x) = 0.
Proof: For any 0 < µ < π/2 and x ∈ R^n, ∇θ_µ(x) = [H_µ'(x)]^T H_µ(x). By Theorem 3.1, H_µ'(x) is nonsingular. Hence, if ∇θ_µ(x) = 0, then H_µ(x) = 0, and therefore θ_µ(x) = 0.
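Theorem 3.3 guarantees that every stationary point of θ_µ is a zero of H_µ, so even an off-the-shelf quasi-Newton minimizer applied to θ_µ yields an approximate solution of the AVE. The sketch below is only an illustration along these lines (not the algorithm of Section 4): it applies scipy's BFGS to θ_µ with the gradient [H_µ'(x)]^T H_µ(x), using the helpers from the previous sketch, on a small hypothetical test problem with A = 3I, whose singular values all exceed 1.

```python
import numpy as np
from scipy.optimize import minimize
# assumes theta_mu and theta_mu_grad from the previous sketch

n = 10
A = 3.0 * np.eye(n)                  # all singular values equal 3 > 1
x_star = np.ones(n)                  # intended solution (hypothetical example)
b = A @ x_star - np.abs(x_star)      # so that A x* - |x*| = b

mu = 1e-3                            # small smoothing parameter, 0 < mu < pi/2
res = minimize(theta_mu, x0=np.zeros(n), jac=theta_mu_grad,
               args=(A, b, mu), method="BFGS", options={"gtol": 1e-10})

x = res.x
print("residual ||Ax - |x| - b|| =", np.linalg.norm(A @ x - np.abs(x) - b))
print("error    ||x - x*||       =", np.linalg.norm(x - x_star))
```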