
Operations Research Letters 28 (2001) 81–92
www.elsevier.com/locate/dsw

Biconvex programming approach to optimization
over the weakly efficient set of a multiple objective affine
fractional problem
Hoang Q. Tuyen, Le D. Muu ∗
Department of Optimization & Control, Hanoi Institute of Mathematics, Box 631, Bo Ho, Hanoi, Viet Nam
Received 1 April 1999; received in revised form 1 October 2000; accepted 1 November 2000

Abstract
We formulate the problem of optimizing a convex function over the weakly efficient set of a multicriteria affine fractional program as a special biconvex problem. We propose a decomposition algorithm for solving the latter problem. The proposed algorithm is a branch-and-bound procedure taking into account the affine fractionality of the criterion functions. © 2001 Published by Elsevier Science B.V.

Keywords: Affine fractional; Weakly efficient; Optimization over the weakly efficient set; Biconvex programming; Decomposition

1. Introduction and the problem statement
Consider the following multicriteria mathematical programming problem:

min{F(x) = (f_1(x), …, f_p(x)) | x ∈ X},   (VP)

where X ⊂ R^n is compact and each f_i (i = 1, …, p) is affine fractional on X.
Affine fractional functions are widely used as performance measures in some management situations, in production planning and scheduling, and in the analysis of financial enterprises [8,17]. Thus, multicriteria programming problems with affine fractional criterion functions are important and have wide applications.
We recall that a point x ∈ X is called weakly efficient for Problem (VP) if whenever F(y) < F(x) and y ∈ X then F(y) = F(x). By WE(F, X) we shall denote the set of all weakly efficient points. Here and subsequently, for two vectors a = (a_1, …, a_p) and b = (b_1, …, b_p), the notation a < b (resp. a ≤ b) means that


This paper is supported in part by the Basic Program in Natural Sciences.
Corresponding author.
E-mail address: ldmuu@thevinh.ncst.ac.vn (L.D. Muu).




a_i < b_i (resp. a_i ≤ b_i) for all i = 1, …, p. A special case of the multicriteria affine fractional problem (VP) is the multicriteria linear problem, where X is polyhedral and each f_i is linear. It is well known that even in the linear case the weakly efficient set is not necessarily convex. Therefore, the computational effort required to generate all of the weakly efficient points becomes unmanageable and seems to grow exponentially with problem size.
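As a concrete illustration of the definition of weak efficiency above, on a finite candidate set the weakly efficient points can be found by a brute-force dominance check: x is kept exactly when no candidate y satisfies F(y) < F(x) componentwise. The sketch below is not from the paper; the sample criteria and points are illustrative assumptions.

```python
# Brute-force filter for weakly efficient points of a finite candidate set.
# A point x is weakly efficient if no candidate y has F(y) < F(x) in EVERY
# component (strict componentwise dominance).

def weakly_efficient(points, criteria):
    """points: list of candidate vectors; criteria: list of functions f_i."""
    values = [[f(x) for f in criteria] for x in points]
    keep = []
    for i, Fx in enumerate(values):
        dominated = any(
            all(Fy[k] < Fx[k] for k in range(len(criteria)))
            for j, Fy in enumerate(values) if j != i
        )
        if not dominated:
            keep.append(points[i])
    return keep

# Illustrative bicriteria example: minimize both coordinates on a few points.
pts = [(0.0, 2.0), (1.0, 1.0), (2.0, 0.0), (2.0, 2.0)]
crit = [lambda x: x[0], lambda x: x[1]]
print(weakly_efficient(pts, crit))   # → [(0.0, 2.0), (1.0, 1.0), (2.0, 0.0)]
```

Note that (2.0, 2.0) is strictly dominated by (1.0, 1.0) in both criteria and is therefore removed; the other three points are mutually non-dominated.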
In some situations, however, a real-valued function, say f, is available which acts as a criterion function for measuring the importance of, or for discriminating among, the efficient alternatives. The problem of finding a most preferred weakly efficient point (with respect to f) can be written as

min{f(x) | x ∈ WE(F, X)}.   (P)

As an example of Problem (P), let us consider a domestic–foreign investment model which has p investment sources. Suppose that one can choose domestic or foreign investment from each investment source. Let x = (x_1, …, x_n) represent the vector of investment levels. Let (a^i)^T x and (b^i)^T x denote the domestic and foreign investment profits resulting from the ith investment source at the level x. The overall goal of the decision maker is to determine a minimum-cost feasible investment plan x, where the cost is given by the function f(x) := d^T x. However, for each investment source, the decision maker also wants to maintain a low domestic investment level relative to the foreign investment level at each investment project. This model leads to a minimization problem over the efficient set of a multiple objective linear fractional program where every criterion function f_i(x) is the ratio of the domestic and foreign investment profits, i.e., f_i(x) = (a^i)^T x / (b^i)^T x (i = 1, …, p). Note that for the multiple objective affine fractional program (VP), the weakly efficient set, in general, is much more easily handled than the efficient set. In fact, the weakly efficient set of (VP) is always compact while its efficient set may be neither open nor closed [8]. In response to this difficulty, in the above model, instead of minimizing the cost function f(x) over the efficient set of all feasible investment plans, the decision maker would minimize f(x) over the weakly efficient set rather than over the efficient set. Since the weakly efficient set contains the efficient set, the minimum taken on the weakly efficient set, in general, is not greater than that taken on the efficient set.
Problem (P) can be considered as a direct development of the problem of optimization over the weakly efficient set of a multiple objective linear program. The latter problem has been considered by several authors [2,3–6,20].

A main difficulty of Problem (P) arises from the fact that the weakly efficient set, in general, is neither convex nor given explicitly as the constraint set of an ordinary mathematical programming problem. Because WE(F, X) is rarely a convex set, Problem (P), even with f linear, may have local extrema which are not global. Such an example can easily be found (see e.g. Fig. 1 in [7]).
Recently, Problem (P) has been studied by Malivert [15]. It is shown in [15] that this problem can be handled by solving a sequence of convex-constrained penalized problems of the form

min{f(x) + t_k p_w(x) | x ∈ X},

where t_k > 0 and p_w is a certain penalty function representing the weakly efficient set, which is neither convex nor differentiable. So the penalized problems remain difficult global optimization problems.

This work is intended as an attempt to develop methods for globally solving Problem (P) when WE(F, X) is the weakly efficient set of a convex-constrained multicriteria affine fractional program, and f is a convex function.
Based upon a necessary and sufficient condition for weak efficiency established in [15], we equivalently formulate Problem (P) as a special case of convex programs with an additional biconvex constraint. We then propose a branch-and-bound algorithm for approximating a globally optimal solution to the latter problem. The bounding operation is based upon a Lagrangian duality which, taking into account the affine fractionality of the criterion functions, can be performed by standard methods of convex programming.


2. Biconvex programming formulation

Let A ⊂ R^p, B ⊂ R^n be two convex sets, and let g : A × B → R. The function g is said to be biconvex on A × B if g(·, y) is convex on A for each fixed y ∈ B, and g(x, ·) is convex on B for each fixed x ∈ A.

A mathematical programming problem involving minimizing a biconvex function under linear constraints was first introduced by Al-Khayyal and Falk [1] in 1983. In recent years, mathematical programming problems where the objective function and/or constraints are biconvex functions have been studied by several authors [10,11,19]. A solution approach based upon a primal–dual relaxation has been developed in [10] for this class.

In this section, we formulate the problem of optimizing a convex function over the weakly efficient set of a multicriteria affine fractional program as a special convex program with an additional biconvex constraint.
To be precise, we suppose that the criterion functions in the multicriteria programming problem (VP) are given by

F(x) = ((A_1 x + s_1)/(B_1 x + t_1), …, (A_p x + s_p)/(B_p x + t_p)),

where A_i, B_i are n-dimensional vectors and s_i, t_i are real numbers for all i = 1, …, p. As usual we assume that B_i x + t_i > 0 for all x ∈ X and all i = 1, …, p. It is well known that an affine fractional function is both pseudoconvex and pseudoconcave.

By definition, the weakly efficient set of Problem (VP) can be given by

WE(F, X) = {x ∈ X | ∄ y ∈ X : F(x) − F(y) > 0}.

Since X is compact, WE(F, X) is closed and connected [8,14,18]. Thus, an optimal solution of (P) always exists. The following theorem due to Malivert [15] will be useful for our purpose.
Theorem 2.1 (Malivert [15]). A vector x ∈ X is weakly efficient if and only if there exist real numbers λ_i ≥ 0, i = 1, …, p, not all zero, such that

∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i](y − x) ≥ 0  ∀y ∈ X.

By dividing each λ_i by ∑_{j=1}^p λ_j > 0 we may assume that ∑_{j=1}^p λ_j = 1. So if

Λ := {λ = (λ_1, …, λ_p) | λ ≥ 0, ∑_{j=1}^p λ_j = 1},

then

WE(F, X) = {x ∈ X | ∃λ ∈ Λ : ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i](y − x) ≥ 0 ∀y ∈ X}.

Thus (P) can be formulated as the following semi-infinite programming problem:

(IP)  min f(x)
      subject to x ∈ X, λ ∈ Λ,
      ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i](y − x) ≥ 0  ∀y ∈ X.


Define the function g : Λ × X → R by setting, for each (λ, x) ∈ Λ × X,

g(λ, x) := − min_{y∈X} ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] y,

and denote by C the (p × n)-matrix whose ith row is t_i A_i − s_i B_i (i = 1, …, p).

Proposition 2.1. (i) g is a continuous biconvex function on Λ × X.
(ii) g(λ, x) + λ^T Cx ≥ 0 for all (λ, x) ∈ Λ × X.
(iii) Problem (IP) is equivalent to the problem

(P̄)  min f(x)
     subject to g(λ, x) + λ^T Cx ≤ 0,
     x ∈ X, λ ∈ Λ.

Proof. (i) Since X is compact and the function ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] y is continuous, the continuity of g follows immediately from the maximum theorem [4].

Let 0 ≤ α ≤ 1 and λ^1, λ^2 ∈ Λ. Then αλ^1 + (1 − α)λ^2 ∈ Λ. For each fixed x ∈ X we have

g(αλ^1 + (1 − α)λ^2, x) = − min_{y∈X} ∑_{i=1}^p (αλ^1_i + (1 − α)λ^2_i)[(B_i x + t_i)A_i − (A_i x + s_i)B_i] y
                        = max_{y∈X} ∑_{i=1}^p (αλ^1_i + (1 − α)λ^2_i)[(A_i x + s_i)B_i − (B_i x + t_i)A_i] y.

For simplicity of notation let L^x_i(y) := [(A_i x + s_i)B_i − (B_i x + t_i)A_i] y. Then

g(λ, x) = max_{y∈X} ∑_{i=1}^p λ_i L^x_i(y).

Thus,

g(αλ^1 + (1 − α)λ^2, x) = max_{y∈X} ∑_{i=1}^p (αλ^1_i + (1 − α)λ^2_i) L^x_i(y)
                        = max_{y∈X} [α ∑_{i=1}^p λ^1_i L^x_i(y) + (1 − α) ∑_{i=1}^p λ^2_i L^x_i(y)]
                        ≤ α max_{y∈X} ∑_{i=1}^p λ^1_i L^x_i(y) + (1 − α) max_{y∈X} ∑_{i=1}^p λ^2_i L^x_i(y)
                        = α g(λ^1, x) + (1 − α) g(λ^2, x),

so g(λ, x) is a convex function with respect to λ.
To prove the convexity of g in x we take x^1, x^2 ∈ X. Then for each fixed λ ∈ Λ we have

g(λ, αx^1 + (1 − α)x^2) = max_{y∈X} ∑_{i=1}^p λ_i [(A_i(αx^1 + (1 − α)x^2) + s_i)B_i − (B_i(αx^1 + (1 − α)x^2) + t_i)A_i] y
    = max_{y∈X} [α ∑_{i=1}^p λ_i [(A_i x^1 + s_i)B_i − (B_i x^1 + t_i)A_i] y + (1 − α) ∑_{i=1}^p λ_i [(A_i x^2 + s_i)B_i − (B_i x^2 + t_i)A_i] y]
    ≤ α max_{y∈X} ∑_{i=1}^p λ_i [(A_i x^1 + s_i)B_i − (B_i x^1 + t_i)A_i] y + (1 − α) max_{y∈X} ∑_{i=1}^p λ_i [(A_i x^2 + s_i)B_i − (B_i x^2 + t_i)A_i] y
    = α g(λ, x^1) + (1 − α) g(λ, x^2),

so g(λ, x) is convex with respect to x.
(ii) It is clear that

min_{y∈X} ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] y ≤ ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] x  ∀x ∈ X.

Then, since

∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] x = ∑_{i=1}^p λ_i (t_i A_i − s_i B_i) x,

we have

min_{y∈X} ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] y ≤ ∑_{i=1}^p λ_i (t_i A_i − s_i B_i) x.

Thus by the definition of g(λ, x) and the matrix C we have g(λ, x) + λ^T Cx ≥ 0 for all (λ, x) ∈ Λ × X.
(iii) In view of Theorem 2.1, x ∈ WE(F, X) if and only if there exists λ ∈ Λ such that

∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i](y − x) ≥ 0  ∀y ∈ X.

These inequalities can be rewritten as

∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] x ≤ ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] y  ∀y ∈ X.

Since

∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] x = ∑_{i=1}^p λ_i (t_i A_i − s_i B_i) x,

we have

∑_{i=1}^p λ_i (t_i A_i − s_i B_i) x ≤ min_{y∈X} ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] y.

Using again the definition of g(λ, x) and the matrix C, we can write the last inequality as g(λ, x) + λ^T Cx ≤ 0.

Corollary 2.1. For each λ ∈ Λ the function g(λ, x) + λ^T Cx is affine on the set

X_λ := {x ∈ X | g(λ, x) + λ^T Cx ≤ 0}.

Proof. For simplicity of notation let

h(λ, x) := g(λ, x) + λ^T Cx.

By Proposition 2.1, h(λ, ·) is convex on X and h(λ, x) ≥ 0 for all (λ, x) ∈ Λ × X. Thus X_λ is convex, and

X_λ = {x ∈ X | g(λ, x) + λ^T Cx = 0}.

Let x, y ∈ X_λ and 0 ≤ t ≤ 1. Since X_λ is convex, tx + (1 − t)y ∈ X_λ. Then

0 = h(λ, tx + (1 − t)y) ≤ t h(λ, x) + (1 − t) h(λ, y) = 0.

Hence

h(λ, tx + (1 − t)y) = t h(λ, x) + (1 − t) h(λ, y).
Remark. To evaluate the function h at each (λ, x) we have to solve the problem

min_{y∈X} ∑_{i=1}^p λ_i [(B_i x + t_i)A_i − (A_i x + s_i)B_i] y.

This is a linear program if X is a polyhedral convex set given in the format traditional in linear programming.
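When X is polyhedral, the evaluation of h described in the remark above can be made concrete with an off-the-shelf LP solver. The sketch below is a minimal illustration, not the paper's implementation; it assumes NumPy and SciPy are available, and the problem data A, B, s, t and the polyhedron X = {y ≥ 0 : Gy ≤ g_ub} are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def h_value(lam, x, A, B, s, t, G, g_ub):
    """Evaluate h(lam, x) = g(lam, x) + lam^T C x, where the inner
    minimization over the polyhedron X = {y >= 0 : G y <= g_ub} is an LP."""
    # Cost vector of the inner LP: sum_i lam_i [(B_i x + t_i) A_i - (A_i x + s_i) B_i].
    c = sum(lam[i] * ((B[i] @ x + t[i]) * A[i] - (A[i] @ x + s[i]) * B[i])
            for i in range(len(lam)))
    res = linprog(c, A_ub=G, b_ub=g_ub, bounds=[(0, None)] * len(x))
    g_val = -res.fun                      # g(lam, x) = -min_y c^T y
    C = np.array([t[i] * A[i] - s[i] * B[i] for i in range(len(lam))])
    return g_val + lam @ (C @ x)

# Tiny illustrative data (p = 2 criteria, n = 2 variables), chosen so that
# B_i y + t_i > 0 on X = {y >= 0 : y_1 + y_2 <= 1}.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
s = np.array([1.0, 1.0]); t = np.array([1.0, 1.0])
G = np.array([[1.0, 1.0]]); g_ub = np.array([1.0])
lam = np.array([0.5, 0.5]); x = np.array([0.5, 0.5])
print(h_value(lam, x, A, B, s, t, G, g_ub))   # h >= 0 by Proposition 2.1(ii)
```

For this symmetric choice of data, h(λ, x) = 0 at x = (0.5, 0.5) with λ = (0.5, 0.5), which is exactly the certificate of weak efficiency; other pairs (λ, x) typically give h > 0.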
3. Solution method
In this section we shall describe a decomposition method for approximating a globally optimal solution of Problem (P̄) with f being a continuous convex function on X. As usual, for a given ε ≥ 0, we call a point x an ε-optimal solution to Problem (P) if x is feasible and f(x) − f* ≤ ε(|f(x)| + 1), where f* denotes the optimal value of (P).

In view of Proposition 2.1, minimizing a convex function over the weakly efficient set of a multicriteria affine fractional programming problem amounts to solving the biconvex program (P̄). This problem can be treated by existing methods of biconvex programming [10,19]. Below we propose a decomposition algorithm for solving Problem (P̄) which takes into account the specific structure of the biconvex constraint function and of the simplex appearing in this problem. The proposed algorithm is a branch-and-bound procedure using a Lagrangian bounding operation and a simplicial subdivision.
3.1. Lagrangian bounding operation
Define the function φ : Λ → R by setting

φ(λ) := min{f(x) | x ∈ X, g(λ, x) + λ^T Cx ≤ 0}.   (P_λ)

Problem (P̄) can then be rewritten as

min{φ(λ) | λ ∈ Λ}.   (MP)

Note that since X, Λ are closed, and f and g are continuous on X and Λ × X, respectively, the function φ, by the maximum theorem [4], is continuous on Λ.

In view of Corollary 2.1, the function g(λ, x) + λ^T Cx is affine on the feasible domain of Problem (P_λ). So in the important case when X is polyhedral, the feasible domain of Problem (P_λ) is polyhedral too.

The following proposition gives a relationship between Problems (P), (P̄) and (MP). The proof of this proposition is obvious from the results of the preceding section.

Proposition 3.1. A point (λ*, x*) is optimal to Problem (P̄) if and only if x* is optimal to (P), λ* is optimal to (MP), and f* = f(x*) = φ(λ*).
Note that, unlike general mathematical programming problems having nonconvex feasible domains, a feasible point of Problem (P̄) can be computed by solving a standard convex program. In fact, if λ ∈ Λ and x_λ is an optimal solution of the convex problem

min{g(λ, x) + λ^T Cx | x ∈ X},

then (λ, x_λ) is feasible for (P̄). Hence x_λ is feasible for (P). So upper bounds for f* can be computed by existing methods of convex programming. As the algorithm (to be described below) executes, more and more feasible points can be found, and thereby upper bounds for f* can be iteratively improved.
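The feasible-point computation just described can be sketched for the special case where X is a polytope given by an explicit vertex list (an illustrative assumption; the paper only needs X convex and compact). Then g(λ, ·) is a maximum of finitely many affine functions, one per vertex of X, and minimizing h(λ, ·) = g(λ, ·) + λ^T C(·) over X is a single epigraph-form LP. NumPy and SciPy are assumed available; the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def feasible_point(lam, A, B, s, t, V):
    """Minimize h(lam, x) = lam^T C x + g(lam, x) over the polytope
    X = conv(V), V a list of vertices (illustrative assumption). Since
    g(lam, .) is a max of finitely many affine functions (one per vertex
    of X), the minimization is one epigraph-form LP."""
    p, n = A.shape
    m = len(V)
    C = np.array([t[i] * A[i] - s[i] * B[i] for i in range(p)])
    lamC = lam @ C
    # Affine pieces q_j(x) = W_j . x + d_j of g(lam, .), one per vertex v_j.
    W = np.array([sum(lam[i] * ((B[i] @ v) * A[i] - (A[i] @ v) * B[i])
                      for i in range(p)) for v in V])
    d = np.array([sum(lam[i] * (s[i] * (B[i] @ v) - t[i] * (A[i] @ v))
                      for i in range(p)) for v in V])
    # Variables z = (weights w over vertices, epigraph variable tau).
    # Constraints: (lamC + W_j) . (sum_k w_k v_k) - tau <= -d_j for all j.
    Vmat = np.array(V)                               # m x n, rows are vertices
    A_ub = np.hstack([(lamC + W) @ Vmat.T, -np.ones((m, 1))])
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    res = linprog(np.r_[np.zeros(m), 1.0], A_ub=A_ub, b_ub=-d,
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    w, h_min = res.x[:m], res.x[m]
    return w @ Vmat, h_min                           # x_lam and min_x h(lam, x)

# Illustrative data: p = 2 criteria, n = 2 variables, X = unit triangle.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
s = np.array([1.0, 2.0]); t = np.array([1.0, 1.0])
V = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_lam, h_min = feasible_point(np.array([0.6, 0.4]), A, B, s, t, V)
print(x_lam, h_min)   # h_min is ~0, so (lam, x_lam) is feasible for the biconvex problem
```

Since h ≥ 0 everywhere and the minimum of h(λ, ·) over X is attained at value 0 (a solution of the corresponding variational inequality exists on a compact convex set), the returned pair (λ, x_λ) is feasible and f(x_λ) is an upper bound for f*.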
We now compute a lower bound for f* by using Lagrangian duality. It is well recognized [9] that the duality gap obtained by solving the Lagrangian dual is often reduced to zero in the limit by appropriate refinement of the partition sets. Let S be a fully dimensional subsimplex of Λ, and let V(S) denote the vertex set of S. Consider Problem (P̄) restricted to S, i.e.,

(P̄S)  f*(S) := min f(x)
      subject to g(λ, x) + λ^T Cx ≤ 0,
                 x ∈ X, λ ∈ S.

Let L(μ, λ, x) be the Lagrangian function of this problem. That is,

L(μ, λ, x) = f(x) + μ(g(λ, x) + λ^T Cx).   (1)

Define the function m(μ, λ) as

m(μ, λ) = min_{x∈X} {f(x) + μ(g(λ, x) + λ^T Cx)}.

From the well-known Lagrangian duality theorem we have

m(μ, λ) ≤ φ(λ)  ∀μ ≥ 0, ∀λ ∈ S.   (2)

Since, by Corollary 2.1, g(λ, x) + λ^T Cx is affine on the feasible domain of Problem (P_λ), we have

sup_{μ≥0} m(μ, λ) = φ(λ).   (3)

Let

u_S(μ) = min_{λ∈S} m(μ, λ).   (4)

From (2) it follows that

u_S(μ) = min_{λ∈S} m(μ, λ) ≤ min_{λ∈S} φ(λ) = f*(S)  ∀μ ≥ 0.

Hence

sup_{μ≥0} u_S(μ) ≤ f*(S).

Thus we can obtain a lower bound β(S) for f*(S) by setting

β(S) = sup_{μ≥0} u_S(μ).

Noting that

u_S(μ) = min_{λ∈S} min_{x∈X} {f(x) + μ(g(λ, x) + λ^T Cx)},

we have the following lemma.
Lemma 3.1. Let R_+ denote the set of nonnegative real numbers. Then the function u_S(μ) is concave on R_+, and

u_S(μ) = min_{λ∈V(S)} min_{x∈X} max_{y∈X} { f(x) + μ( λ^T Cx + ∑_{i=1}^p λ_i [(A_i x + s_i)B_i − (B_i x + t_i)A_i] y ) }.

Proof. Since for each fixed λ and x the function f(x) + μ(g(λ, x) + λ^T Cx) is linear in μ, the function

m(μ, λ) = min_{x∈X} {f(x) + μ(g(λ, x) + λ^T Cx)}

is concave in μ. Thus the function

u_S(μ) = min_{λ∈S} m(μ, λ)

is concave on R_+, because it is the minimum of a family of concave functions.

Using the definition of g(λ, x) we have

u_S(μ) = min_{λ∈S} min_{x∈X} { f(x) + μ( λ^T Cx + max_{y∈X} ∑_{i=1}^p λ_i [(A_i x + s_i)B_i − (B_i x + t_i)A_i] y ) }
       = min_{x∈X} { f(x) + min_{λ∈S} μ( λ^T Cx + max_{y∈X} ∑_{i=1}^p λ_i [(A_i x + s_i)B_i − (B_i x + t_i)A_i] y ) }
       = min_{x∈X} { f(x) + min_{λ∈S} max_{y∈X} μ( λ^T Cx + ∑_{i=1}^p λ_i [(A_i x + s_i)B_i − (B_i x + t_i)A_i] y ) }.

For simplicity of notation let

ℓ_μ(λ, x, y) := μ( λ^T Cx + ∑_{i=1}^p λ_i [(A_i x + s_i)B_i − (B_i x + t_i)A_i] y ).

Since X and S are compact and the function ℓ_μ(·, x, ·) is bilinear on S × X, by the well-known minimax theorem [16] one can interchange the min–max and max–min operations. Thus

u_S(μ) = min_{x∈X} { f(x) + max_{y∈X} min_{λ∈S} ℓ_μ(λ, x, y) }.

Observing that, for each fixed x and y, the function ℓ_μ(·, x, y) is linear in λ, we have

min_{λ∈S} ℓ_μ(λ, x, y) = min_{λ∈V(S)} ℓ_μ(λ, x, y).

For each λ^j ∈ V(S), let

u_j(μ) := min_{x∈X} max_{y∈X} {f(x) + ℓ_μ(λ^j, x, y)}.

Then

u_S(μ) = min_{λ^j∈V(S)} u_j(μ) = min_{λ^j∈V(S)} min_{x∈X} max_{y∈X} {f(x) + ℓ_μ(λ^j, x, y)},

which proves the lemma.
Remark. In view of Lemma 3.1, computing the lower bound β(S) amounts to maximizing the concave function u_S(μ) on R_+. To evaluate u_S(μ) at each μ ≥ 0 we have to solve standard minimax subproblems, one for each vertex of S.
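The structure of these minimax subproblems can be illustrated by brute force when X is replaced by a finite grid (an illustrative simplification, not the paper's method, which works on the continuous sets). Here `mu` is the Lagrange multiplier and `vertices_S` are the vertices of the subsimplex S; all problem data are invented.

```python
import numpy as np

def u_S(mu, f, vertices_S, X_grid, A, B, s, t):
    """Evaluate the Lagrangian bound function u_S(mu) of Lemma 3.1 by brute
    force: one min-max problem per vertex of S, with X replaced by a grid."""
    p = len(s)
    C = np.array([t[i] * A[i] - s[i] * B[i] for i in range(p)])
    def ell(lam, x, y):
        inner = lam @ C @ x + sum(
            lam[i] * ((A[i] @ x + s[i]) * (B[i] @ y) - (B[i] @ x + t[i]) * (A[i] @ y))
            for i in range(p))
        return mu * inner
    best = np.inf
    for lam in vertices_S:                    # one minimax problem per vertex
        val = min(max(f(x) + ell(lam, x, y) for y in X_grid) for x in X_grid)
        best = min(best, val)
    return best

# Illustrative data (p = 2, n = 2); X sampled on a small grid of the unit square.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
s = np.array([1.0, 1.0]); t = np.array([1.0, 1.0])
f = lambda x: x[0] + x[1]                     # a convex criterion
X_grid = [np.array([a, b]) for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)]
S_vertices = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # S = Λ for p = 2
print(u_S(2.0, f, S_vertices, X_grid, A, B, s, t))
```

Since μ ≥ 0 factors out of the inner maximum, each evaluated quantity is linear in μ for fixed λ and x, so even this discretized u_S remains concave in μ, matching the lemma.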
3.2. Simplicial bisection
At each iteration k of the algorithm to be described below, a subsimplex of the simplex Λ will be bisected into two subsimplices in such a way that, as the algorithm executes, the obtained lower and upper bounds tend to the same limit. We shall use the simplicial bisection via a longest edge. This bisection is widely used in global optimization (see e.g. [12,13]). It can be described as follows. Let S_k be a fully dimensional subsimplex of Λ to be bisected at iteration k. Let v^k and w^k be two vertices of S_k such that the edge joining these vertices is longest, and let u^k be another point on this edge; thus u^k = t_k v^k + (1 − t_k)w^k with 0 < t_k < 1. Bisect S_k into two subsimplices S_{k1} and S_{k2}, where S_{k1} and S_{k2} are obtained from S_k by replacing v^k and w^k, respectively, by u^k. It is well known [12,13] that S_k = S_{k1} ∪ S_{k2}, and that if {S_k} is an infinite sequence of nested simplices generated by this bisection process such that 0 < δ_0 ≤ t_k ≤ δ_1 < 1 for every k, then the sequence {S_k} shrinks to a singleton.
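The longest-edge bisection just described is a few lines of code; the sketch below is a minimal illustration with the midpoint rule t_k = 1/2 (any fixed t in (0, 1) would do), representing a simplex as a list of vertex arrays.

```python
import numpy as np

def bisect_longest_edge(S, t=0.5):
    """Split simplex S (list of vertex arrays) into two subsimplices through
    the point u = t*v + (1-t)*w on a longest edge (v, w), 0 < t < 1."""
    m = len(S)
    # Find a longest edge (indices i, j) of S.
    i, j = max(((a, b) for a in range(m) for b in range(a + 1, m)),
               key=lambda e: np.linalg.norm(S[e[0]] - S[e[1]]))
    u = t * S[i] + (1 - t) * S[j]          # point on the longest edge
    S1 = list(S); S1[i] = u                # replace v^k by u^k
    S2 = list(S); S2[j] = u                # replace w^k by u^k
    return S1, S2

# Bisect the standard simplex Λ for p = 3.
Lam = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
S1, S2 = bisect_longest_edge(Lam)
print(S1[0], S2[1])   # both children share the point u on the bisected edge
```

The two children cover S_k and overlap only in the cutting facet, which is exactly the property S_k = S_{k1} ∪ S_{k2} used in the convergence analysis.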
Now we are in a position to describe the algorithm.

Algorithm.
Initialization: Fix a tolerance ε ≥ 0. Set S_0 := Λ and solve the univariate convex program (concave maximization)

β(S_0) := sup{u_{S_0}(μ) | μ ≥ 0}

to obtain a lower bound for f*.
Choose λ ∈ Λ and compute an upper bound for f* by solving the convex program

φ(λ) := min{f(x) | g(λ, x) + λ^T Cx ≤ 0, x ∈ X}.

If x_λ is an optimal solution of this problem, then (λ, x_λ) is feasible for Problem (P̄) (hence x_λ ∈ WE(F, X)).
Let x^0 be the best feasible point known at this iteration and α_0 := f(x^0) (hence f* ≤ α_0).
Set β_0 := β(S_0), and

Γ_0 := {S_0} if α_0 − β_0 > ε(|α_0| + 1), Γ_0 := ∅ otherwise.

Let k ← 0 and go to Iteration k.

Iteration k (k = 0, 1, …).
Step k1 (selection):
(a) If Γ_k = ∅ then the algorithm terminates: x^k is an ε-optimal solution and α_k is an ε-optimal value for Problem (P).
(b) If Γ_k ≠ ∅, then select S_k ∈ Γ_k such that

β_k := β(S_k) = min{β(S) | S ∈ Γ_k}.

Choose λ^k ∈ S_k and compute φ(λ^k) to obtain a new upper bound for f*.
Step k2 (bisection): Bisect S_k into two simplices S_{k1} and S_{k2} by the simplicial bisection described above.
Step k3 (bounding): Compute β(S_{kj}) (j = 1, 2) by solving

β(S_{kj}) := sup{u_{S_{kj}}(μ) | μ ≥ 0}  (j = 1, 2).

Step k4 (updating): As φ(λ^k), β(S_{k1}) and β(S_{k2}) are computed, one or more new feasible points have been found. Let x^{k+1} be the currently best feasible point (with respect to f) among x^k and the newly generated feasible points. Set α_{k+1} := f(x^{k+1}) and

Γ_{k+1} ← {S ∈ (Γ_k \ {S_k}) ∪ {S_{k1}, S_{k2}} : α_{k+1} − β(S) > ε(|α_{k+1}| + 1)}.

Increase k by 1 and go to Iteration k.
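The overall branch-and-bound loop can be sketched generically. The toy below is not the paper's implementation: it minimizes an L-Lipschitz function phi over the standard simplex, using the best vertex value as the upper bound α, the elementary Lipschitz-type lower bound β(S) = (min vertex value) − L·diam(S) in place of the Lagrangian bound, and the longest-edge bisection and ε(|α| + 1) stopping rule from the text.

```python
import numpy as np

def branch_and_bound(phi, Lam, L, eps=1e-3, max_iter=300):
    """Toy simplicial branch-and-bound: minimize an L-Lipschitz function phi
    over the simplex with vertex list Lam. Lower bound: min vertex value
    minus L * diam(S); stopping rule: alpha - beta <= eps * (|alpha| + 1)."""
    def diam(S):
        return max(np.linalg.norm(a - b) for a in S for b in S)
    def bisect(S):
        m = len(S)
        i, j = max(((a, b) for a in range(m) for b in range(a + 1, m)),
                   key=lambda e: np.linalg.norm(S[e[0]] - S[e[1]]))
        u = 0.5 * (S[i] + S[j])
        S1, S2 = list(S), list(S)
        S1[i] = u; S2[j] = u
        return S1, S2
    def beta(S):
        return min(phi(v) for v in S) - L * diam(S)
    best_x = min(Lam, key=phi); alpha = phi(best_x)
    Gamma = [(beta(Lam), Lam)]
    for _ in range(max_iter):
        # Prune simplices that cannot improve the incumbent (Step k4's rule).
        Gamma = [(b, S) for (b, S) in Gamma if alpha - b > eps * (abs(alpha) + 1)]
        if not Gamma:
            return best_x, alpha                    # eps-optimal
        idx = min(range(len(Gamma)), key=lambda i: Gamma[i][0])
        b_k, S_k = Gamma.pop(idx)                   # lowest lower bound (Step k1)
        for child in bisect(S_k):                   # Steps k2 and k3
            for v in child:                         # new vertices are feasible points
                if phi(v) < alpha:
                    best_x, alpha = v, phi(v)
            Gamma.append((beta(child), child))
    return best_x, alpha

# Toy instance: phi(lam) = ||lam - c||^2 on the 2-simplex; the minimizer is c.
c = np.array([0.2, 0.3, 0.5])
phi = lambda lam: float(np.sum((lam - c) ** 2))
x, val = branch_and_bound(phi, [np.eye(3)[i] for i in range(3)], L=4.0)
print(x, val)   # x approaches c and val approaches 0 as the budget grows
```

The branching here, as in the paper's algorithm, takes place in the (low-dimensional) weight simplex, while the bounding subproblems would be solved in the decision space.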
Convergence Theorem. (i) If the algorithm terminates at iteration k, then x^k is a global ε-optimal solution to Problem (P).
(ii) If the algorithm does not terminate, then β_k ↗ f* and α_k ↘ f* as k → +∞, and any cluster point of the sequence {x^k} is a globally optimal solution to Problem (P).

Proof. (i) If the algorithm terminates at iteration k, then Γ_k = ∅. This implies that α_k − β_k ≤ ε(|α_k| + 1). Since β_k ≤ f* and α_k = f(x^k), it follows that f(x^k) − f* ≤ ε(|f(x^k)| + 1). Hence x^k is a global ε-optimal solution.
(ii) Since for every k we have S_k = S_{k1} ∪ S_{k2}, by the rule for computing the lower bound β(S_k) we have

β_k = β(S_k) ≤ β(S_{k+1}) = β_{k+1}  ∀k.

Also, since α_{k+1} is the currently smallest upper bound determined at Step k4, we have α_{k+1} ≤ α_k for every k. Thus both β* = lim β_k and α* = lim α_k exist and satisfy

β* ≤ f* ≤ α*.   (5)

Suppose that the algorithm does not terminate. Then it generates an infinite sequence of nested simplices that, for simplicity of notation, we also denote by {S_k}. Since the subdivision is the simplicial bisection, this sequence shrinks to a singleton, say λ* ∈ Λ. By the rule for computing the lower bound β_k we have

β_k = sup_{μ≥0} min_{λ∈S_k} m(μ, λ) ≥ min_{λ∈S_k} m(μ, λ)  ∀μ ≥ 0.

Noting that the sequence {S_k} tends to λ* as k → +∞, we obtain

β* = lim β_k ≥ m(μ, λ*)  ∀μ ≥ 0.   (6)


Since φ(λ^k) is an upper bound determined at Step k1 and α_{k+1} is the currently smallest upper bound obtained at Step k4, we have

α_{k+1} ≤ φ(λ^k)  ∀k.

Since λ^k → λ*, it follows from the continuity of φ that

α* = lim α_k ≤ lim φ(λ^k) = φ(λ*).   (7)

On the other hand, by the Lagrangian duality theorem for the convex problem determining φ(λ*) we have

sup_{μ≥0} m(μ, λ*) = φ(λ*).

Then from (6) and (7) it follows that

α* ≤ φ(λ*) ≤ β*,

which together with (5) implies β* = f* = α* = φ(λ*).

Let x* be any cluster point of the sequence {x^k}. By definition we have α_k = f(x^k). Since α_k → f*, by the continuity of f we have f* = f(x*). Since x^k ∈ WE(F, X) for all k and WE(F, X) is closed [8], x* ∈ WE(F, X). Hence x* is a globally optimal solution of Problem (P).
Remark. (1) When ε > 0 the algorithm must terminate after a finite number of iterations. Indeed, if the algorithm does not terminate at iteration k, then α_k − β_k > ε(|α_k| + 1). Since α_k − β_k → 0 as k → +∞, it follows that when ε > 0 the inequality α_k − β_k > ε(|α_k| + 1) cannot hold for infinitely many k. This implies that the algorithm must terminate after a finite number of iterations.
(2) Since u_S(μ) ≤ f*(S) for all μ ≥ 0, instead of computing β(S) = sup{u_S(μ) | μ ≥ 0}, we can take β(S) = sup{u_S(μ) | μ ≥ 0} − ε_k, where ε_k > 0 and ε_k ↘ 0 as k → +∞.
(3) Since the branching operation takes place in the criterion space, the proposed algorithm is expected to apply to problems where the number of criteria is relatively small. The number of decision variables may be larger.
Acknowledgements
The authors would like to express their gratitude to the referee for useful comments and remarks on an earlier version of this paper, which helped them greatly to improve the paper.
References
[1] F.A. Al-Khayyal, J.E. Falk, Jointly constrained biconvex programming, Math. Oper. Res. 8 (1983) 273–286.
[2] L.T.H. An, D.T. Pham, L.D. Muu, Numerical solution for optimization over the efficient set by DC optimization algorithms, Oper. Res. Lett. 19 (1996) 117–128.
[3] H. Benson, An algorithm for optimizing over the weakly-efficient set, European J. Oper. Res. 25 (1986) 192–199.
[4] C. Berge, Topological Spaces, MacMillan, New York, 1968.
[5] S. Bolintineanu, Optimality conditions for minimization over the (weakly or properly) efficient set, J. Math. Anal. Appl. 173 (1993) 523–541.
[6] S. Bolintineanu, M.E. Maghri, Pénalisation dans l'optimisation sur l'ensemble faiblement efficient, Rech. Opér. 31 (1997) 295–312.
[7] E.U. Choo, D.R. Atkins, Bicriteria linear fractional programming, J. Optim. Theory Appl. 36 (1982) 203–220.
[8] E.U. Choo, D.R. Atkins, Connectedness in multiple linear fractional programming, Management Sci. 29 (1983) 250–255.
[9] M. Dür, R. Horst, Lagrange duality and partitioning techniques in nonconvex global optimization, J. Optim. Theory Appl. 95 (1997) 347–369.
[10] C.A. Floudas, V. Visweswaran, A global optimization algorithm for certain classes of nonconvex NLPs, Part 1: theory, Comput. Chem. Eng. 14 (1990) 1397–1417.

[11] C.A. Floudas, V. Visweswaran, Primal-relaxed dual global optimization approach, J. Optim. Theory Appl. 78 (1993) 187–225.
[12] R. Horst, An algorithm for nonconvex programming problems, Math. Programming 10 (1976) 312–321.
[13] R. Horst, H. Tuy, Global Optimization (Deterministic Approaches), 3rd Edition, Springer, Berlin, 1996.
[14] D.T. Luc, Theory of Vector Optimization, Springer, Berlin, 1989.
[15] C. Malivert, Multicriteria fractional optimization, in: Proceedings of the Second Catalan Days on Applied Mathematics, Presses Universitaires de Perpignan, Paris, 1995, pp. 189–198.
[16] B. Ricceri, S. Simons (Eds.), Minimax Theory and Applications, Kluwer Academic Publishers, Dordrecht, 1998.
[17] S. Schaible, Fractional programming: applications and algorithms, European J. Oper. Res. 7 (1981) 111–120.
[18] R.E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application, Wiley, New York, 1986.
[19] V. Visweswaran, C.A. Floudas, A global optimization algorithm for certain classes of nonconvex NLPs, Part 2: applications of theory and test problems, Comput. Chem. Eng. 14 (1990) 1419–1434.
[20] S. Yamada, T. Tanino, M. Inuiguchi, An inner approximation method for optimization over the weakly efficient set, J. Global Optim. 16 (2000) 197–217.