
Operations Research Letters 26 (2000) 193–197
www.elsevier.com/locate/dsw

Worst-case analysis of the greedy algorithm for a generalization of the maximum p-facility location problem

M.I. Sviridenko^{a,b,*}

^a Sobolev Institute of Mathematics, Novosibirsk, Russia
^b Basic Research Institute in Computer Science, University of Aarhus, Denmark

Received 1 February 1999; received in revised form 1 December 1999

Abstract

In this work we consider the maximum p-facility location problem with k additional resource constraints. We prove that the simple greedy algorithm has performance guarantee $(1-e^{-(k+1)})/(k+1)$. In the case k = 0 our performance guarantee coincides with the bound due to [4]. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Approximation algorithm; Greedy algorithm; Worst-case analysis

1. Introduction


Let $I=\{1,\dots,n\}$ and $J=\{1,\dots,m\}$. In this work we consider the following optimization problem:

$$\max_{S\subseteq I}\{f(S)\colon |S|=P\},\qquad(1)$$

where $f(S)$ is a polynomially computable set function. Nemhauser et al. [4] consider this problem with nonnegative nondecreasing submodular functions $f(S)$ (a function is submodular if $f(S)+f(T)\ge f(S\cup T)+f(S\cap T)$ for all $S,T\subseteq I$). They prove that the simple greedy algorithm has performance guarantee $1-e^{-1}$. The main property used in their proof is that a nondecreasing set function $f(S)$ is submodular if and only if

$$f(T)\le f(S)+\sum_{p\in T\setminus S}(f(S\cup\{p\})-f(S))$$

for all $S,T\subseteq I$. Wolsey [5] studies problem (1) with the following objective function:

$$f(S)=\max_{x_{ij}\ge 0}\sum_{i\in S}\sum_{j\in J}f_{ij}x_{ij},\qquad(2)$$

subject to

$$\sum_{i\in S}x_{ij}\le A_j,\quad j\in J,\qquad(3)$$

$$\sum_{j\in J}x_{ij}\le B_i,\quad i\in S.\qquad(4)$$

Wolsey proves that function (2)-(4) is a submodular set function and, consequently, the greedy algorithm has performance guarantee $1-e^{-1}$.

* Correspondence address: Basic Research Institute in Computer Science, University of Aarhus, Denmark. E-mail addresses: sviri@brics.dk, svir@math.nsc.ru (M.I. Sviridenko).
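As a small illustration (not from the paper), the value $f(S)$ in (2)-(4) can be evaluated by feeding the LP to a generic solver. The sketch below assumes SciPy is available; the instance data are hypothetical:

```python
from scipy.optimize import linprog

def f_value(S, fmat, A, B):
    """f(S) = max sum f_ij x_ij  s.t.  sum_{i in S} x_ij <= A_j  (3),
    sum_{j} x_ij <= B_i  (4),  x_ij >= 0, solved as a linear program.
    Variables are ordered p-major: x[p*m + j] = x_{S[p], j}."""
    S = sorted(S)
    n, m = len(S), len(A)
    c = [-float(fmat[i][j]) for i in S for j in range(m)]  # negate to maximize
    A_ub, b_ub = [], []
    for j in range(m):                                     # rows for (3)
        A_ub.append([1.0 if q % m == j else 0.0 for q in range(n * m)])
        b_ub.append(A[j])
    for p, i in enumerate(S):                              # rows for (4)
        A_ub.append([1.0 if q // m == p else 0.0 for q in range(n * m)])
        b_ub.append(B[i])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return -res.fun
```

With the k extra resource constraints (9) of the next section, one would simply append the rows $\sum c_{ijl}x_{ij}\le C_l$ to `A_ub`.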



In this work we study a more general class of objective functions. We prove that the greedy algorithm gives a $(1-e^{-(k+1)})/(k+1)$-approximate solution for problem (1) with a nonnegative nondecreasing objective function satisfying the following inequality:

$$f(T)\le (k+1)f(S)+\sum_{p\in T\setminus S}(f(S\cup\{p\})-f(S))\qquad(5)$$

for all $S,T\subseteq I$. We introduce natural members of this class of functions and prove that function (2)-(4) with $k$ additional resource constraints:

$$f(S)=\max_{x_{ij}\ge 0}\sum_{i\in S}\sum_{j\in J}f_{ij}x_{ij},\qquad(6)$$

subject to

$$\sum_{i\in S}x_{ij}\le A_j,\quad j\in J,\qquad(7)$$

$$\sum_{j\in J}x_{ij}\le B_i,\quad i\in S,\qquad(8)$$

$$\sum_{i\in S}\sum_{j\in J}c_{ijl}x_{ij}\le C_l,\quad l=1,\dots,k,\qquad(9)$$

satisfies inequality (5). Let us denote function (6)-(9) by $f(S;A;C)$ (or $f(S)$ for short), where $A=(A_1,\dots,A_m)$ and $C=(C_1,\dots,C_k)$. We assume that all input numbers of the problem, i.e. $(f_{ij},c_{ijl},A_j,B_i,C_l)$, are nonnegative. Notice that we may consider only the feasible solutions $S$ of problem (6)-(9) satisfying the equality $\sum_{i\in S}x_{ij}=A_j$. This can be done by introducing a dummy facility $q$ such that $f_{qj}=0$, $B_q=m$, $c_{qjl}=0$ for all $j\in J$, $l=1,\dots,k$. We also assume that $q\in S$ for any feasible solution $S$ and $|S|=P+1$.

The value of the function $f(S)$ can be computed in polynomial time by any known polynomial algorithm for linear programming. If $k=1$ and all $B_i\ge m$ then problem (6)-(9) is the continuous multiple-choice knapsack problem and $f(S)$ can be computed by an algorithm with running time $O(m|S|)$ [1,6].

2. Algorithm and its analysis

We now describe the greedy algorithm for solving problem (1). Set $S^0=\{q\}$, $t=1$. At step $t$ we have a partial solution $S^{t-1}$. Find

$$\rho_t=\max_{i\in I\setminus S^{t-1}}(f(S^{t-1}\cup\{i\})-f(S^{t-1})).\qquad(10)$$

Let the maximum in (10) be attained at the index $i_t$. Set $S^t=S^{t-1}\cup\{i_t\}$. If $|S^t|<P+1$, set $t=t+1$ and do the next step; otherwise, stop.

In the proof of the performance guarantee we will use an inequality due to Wolsey [5]: if $P$ and $D$ are arbitrary positive integers and $\rho_i$, $i=1,\dots,P$ are arbitrary nonnegative reals with $\rho_1>0$ (notice that Wolsey uses slightly more general conditions), then

$$\frac{\sum_{i=1}^{P}\rho_i}{\min_{t=1,\dots,P}\bigl(\sum_{i=1}^{t-1}\rho_i+D\rho_t\bigr)}\ge 1-\Bigl(1-\frac{1}{D}\Bigr)^{P}\ge 1-e^{-P/D}.\qquad(11)$$
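As an illustration (not part of the original paper), the greedy step (10) can be sketched for a black-box set function; the coverage function below is a hypothetical example of such an $f$:

```python
def greedy_max(f, ground_set, P, start=()):
    """Greedy for max{f(S): |S| = P}: at each step add the element
    with the largest marginal gain f(S + {i}) - f(S), as in (10)."""
    S = set(start)
    for _ in range(P):
        S.add(max((i for i in ground_set if i not in S),
                  key=lambda i: f(S | {i}) - f(S)))
    return S

# Hypothetical instance: f(S) = number of clients covered by facilities in S.
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(*(coverage[i] for i in S))) if S else 0
```

Here `greedy_max(f, coverage, 2)` covers three clients; ties in the marginal gain are broken by Python's `max`, which keeps the first maximal candidate.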

Theorem 1. The worst-case ratio of the greedy algorithm for solving problem (1) with an objective function satisfying condition (5) is equal to $(1-e^{-(k+1)})/(k+1)$.

Proof. The proof is a straightforward modification of that of Nemhauser, Wolsey and Fisher. Let $S^t$, $t=0,\dots,P$ be the sets defined in the description of the algorithm and let $S^*$ be an optimal solution of problem (1). For all $t=0,\dots,P-1$ we have

$$f(S^*)\le (k+1)f(S^t)+\sum_{i\in S^*\setminus S^t}(f(S^t\cup\{i\})-f(S^t))\le (k+1)f(S^t)+P\rho_{t+1}.\qquad(12)$$

The last inequality follows from the facts that $f(S^t\cup\{i\})-f(S^t)\le\rho_{t+1}$ and $|S^*\setminus S^t|\le P$. Using inequalities (11), (12) and the fact that $f(S^t)=\sum_{i=1}^{t}\rho_i$ we obtain

$$\frac{f(S^P)}{f(S^*)}\ge\frac{\sum_{i=1}^{P}\rho_i}{\min_{t=1,\dots,P}\{(k+1)\sum_{i=1}^{t-1}\rho_i+P\rho_t\}}\ge\frac{1}{k+1}(1-e^{-(k+1)}).$$

We now give upper and lower bounds on the number $(k+1)/(1-e^{-(k+1)})$ for $k\ge 0$:

$$k+1<\frac{k+1}{1-e^{-(k+1)}}=k+\frac{ke^{-(k+1)}+1}{1-e^{-(k+1)}}=k+\frac{k+e^{k+1}}{e^{k+1}-1}\le k+\frac{e}{e-1}\approx k+1.58.$$


The last inequality is equivalent to the following one:

$$e^{k}\ge k(1-e^{-1})+1,$$

and therefore holds for all $k\ge 0$.

3. Properties of the function $f(S;A;C)$

We will prove the inequality

$$\frac{f(S;A;C_1-\Delta,C_2,\dots,C_k)}{f(S;A;C)}\ge 1-\frac{\Delta}{C_1}\qquad(13)$$

for all $\Delta\in[0,C_1]$. At first, consider the continuous knapsack problem:

$$G(B)=\max\sum_{i\in I}g_ix_i,$$
$$\sum_{i\in I}c_ix_i\le B,$$
$$0\le x_i\le b_i,\quad i\in I.$$

Without loss of generality, we assume that $g_1/c_1\ge g_2/c_2\ge\cdots\ge g_n/c_n$ and $\sum_{i\in I}c_ib_i>B$. If the input data violate the second assumption then, trivially, $x_i=b_i$ for all $i\in I$. The optimal solution of this problem has the following property: $x_i=b_i$ for $i=1,\dots,s-1$, $x_i=0$ for $i=s+1,\dots,n$ and $x_s=(B-\sum_{i=1}^{s-1}c_ib_i)/c_s$ for some index $s$. (See the proof of this property in the case when all $b_i=1$ in [3, Theorem 2.1]; the general case can be treated in the same way.) Using this property we obtain that for any $\Delta\in[0,B]$ the following inequality holds: $G(B-\Delta)/(B-\Delta)\ge G(B)/B$, and therefore $G(B-\Delta)/G(B)\ge 1-\Delta/B$. Moreover, we may assume that the optimal solution of the knapsack problem with budget $B-\Delta$ is not bigger in each coordinate than the optimal solution of the problem with budget $B$. Using the same argument we can now prove inequality (13). Let $(y_{ij})$, $i\in S$, $j\in J$ be an optimal solution of problem (6)-(9) corresponding to the function $f(S;A;C)$. Consider the following knapsack problem:

$$G(B)=\max\sum_{i\in S}\sum_{j\in J}f_{ij}x_{ij},$$
$$\sum_{i\in S}\sum_{j\in J}c_{ij1}x_{ij}\le B,$$
$$0\le x_{ij}\le y_{ij},\quad i\in S,\ j\in J.$$

The optimal solution of this problem with $B=C_1-\Delta$ is a feasible solution of problem (6)-(9) corresponding to the function $f(S;A;C_1-\Delta,\dots,C_k)$. Since $f(S;A;C)=G(C_1)$ we have

$$\frac{f(S;A;C_1-\Delta,C_2,\dots,C_k)}{f(S;A;C)}\ge\frac{G(C_1-\Delta)}{G(C_1)}\ge 1-\frac{\Delta}{C_1}.$$

The next theorem is the main statement of this paper. It generalizes the property of the capacitated maximum p-facility location problem proved by Wolsey [5].

Theorem 2. The function $f(S;A;C)$ satisfies the following inequality for all $S,T\subseteq I$:

$$f(T)\le (k+1)f(S)+\sum_{p\in T\setminus S}(f(S\cup\{p\})-f(S)),$$

where $k$ is the number of constraints (9).
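Before the proof, the structural property of the continuous knapsack optimum behind inequality (13) can be checked numerically; the sketch and instance below are a hypothetical illustration, not from the paper:

```python
def knapsack_cont(g, c, b, B):
    """Continuous knapsack max sum g_i x_i s.t. sum c_i x_i <= B, 0 <= x_i <= b_i:
    fill items in nonincreasing order of g_i/c_i until the budget is spent,
    so at most one item (index s in the text) is fractional."""
    x = [0.0] * len(g)
    for i in sorted(range(len(g)), key=lambda i: g[i] / c[i], reverse=True):
        x[i] = min(b[i], B / c[i])
        B -= c[i] * x[i]
    return x

def G(B, g=(6.0, 4.0, 1.0), c=(2.0, 2.0, 1.0), b=(1.0, 1.0, 1.0)):
    return sum(gi * xi for gi, xi in zip(g, knapsack_cont(g, c, b, B)))
```

For this instance G(4) = 10 and G(3) = 8, so G(B - Delta)/G(B) = 0.8 >= 1 - Delta/B = 0.75 with B = 4, Delta = 1, as the text predicts.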

Proof. We will prove an equivalent inequality:

$$(k+1)f(S;A;C)+\sum_{p\in T}f(S\cup\{p\};A;C)\ge f(T;A;C)+|T|f(S;A;C).$$

Let $x_{pj}(T)$, $p\in T$, $j\in J$ be an optimal solution of problem (6)-(9) corresponding to the function $f(T;A;C)$. Let $\Delta_{pl}=\sum_{j\in J}c_{pjl}x_{pj}(T)$, $\Delta_p=(\Delta_{p1},\dots,\Delta_{pk})$ and $X_p=(x_{p1}(T),\dots,x_{pm}(T))$; then

$$f(S\cup\{p\};A;C)\ge f(S;A-X_p;C-\Delta_p)+\sum_{j\in J}f_{pj}x_{pj}(T).\qquad(14)$$

Inequality (14) holds since from the optimal solution $(y_{ij})$, $i\in S$, $j\in J$ of problem (6)-(9) corresponding to the function $f(S;A-X_p;C-\Delta_p)$ one can obtain a feasible solution $(x_{ij})$, $i\in S\cup\{p\}$, $j\in J$ of problem (6)-(9) corresponding to the function $f(S\cup\{p\};A;C)$ with objective value equal to the value of the right-hand side of (14). This solution is defined as follows: $x_{ij}=y_{ij}$ for $i\in S$, $j\in J$, and $x_{pj}=x_{pj}(T)$ for $j\in J$.

Summing (14) over $p\in T$ and adding $(k+1)f(S)$ we have

$$(k+1)f(S)+\sum_{p\in T}f(S\cup\{p\})\ge (k+1)f(S)+f(T)+\sum_{p\in T}f(S;A-X_p;C-\Delta_p).$$


Therefore, for proving the statement of the theorem it is sufficient to prove the inequality

$$(k+1)f(S)+\sum_{p\in T}f(S;A-X_p;C-\Delta_p)\ge |T|f(S).$$

Using inequality (13) we obtain

$$f(S;A-X_p;C-\Delta_p)\ge\Bigl(1-\frac{\Delta_{pk}}{C_k}\Bigr)f(S;A-X_p;C_1-\Delta_{p1},\dots,C_{k-1}-\Delta_{p,k-1},C_k).$$

Summing these inequalities over $p\in T$ and using the facts that $\sum_{p\in T}\Delta_{pl}\le C_l$ and $f(S;A-X_p;C_1-\Delta_{p1},\dots,C_{k-1}-\Delta_{p,k-1},C_k)\le f(S;A;C)$ we have

$$f(S;A;C)+\sum_{p\in T}f(S;A-X_p;C-\Delta_p)\ge\sum_{p\in T}f(S;A-X_p;C_1-\Delta_{p1},\dots,C_{k-1}-\Delta_{p,k-1},C_k).$$

Applying the same argument $k$ times we obtain

$$kf(S)+\sum_{p\in T}f(S;A-X_p;C-\Delta_p)\ge\sum_{p\in T}f(S;A-X_p;C).$$

Hence, it is sufficient to prove the following inequality:

$$f(S;A;C)+\sum_{p\in T}f(S;A-X_p;C)\ge |T|f(S;A;C).\qquad(15)$$

Let $\|z\|$ be the rectilinear norm of a vector $z\in\mathbb{R}^n_+$, i.e. $\|z\|=\sum_{i=1}^{n}z_i$. At first we prove the following proposition.

Proposition 1. If $z\in\mathbb{R}^n_+$, $\Delta\in\mathbb{R}^k_+$ and $\|z\|=\|\Delta\|=a$, then we can find vectors $z^1,\dots,z^k\in\mathbb{R}^n_+$ such that $z^i\le z$ ($a\le b$ if $a_i\le b_i$ for each coordinate $i$), $\|z^i\|=a-\Delta_i$ for all $i=1,\dots,k$, and $\sum_{i=1}^{k}z^i=(k-1)z$.

Proof. We partition the vector $z$ into $k$ vectors $y^1,\dots,y^k$ with nonnegative components such that $\|y^i\|=\Delta_i$ and $\sum_{i=1}^{k}y^i=z$. Then we define $z^i=z-y^i$. It is easy to see that all properties of the claim hold for the vectors $z^i$.

By using Proposition 1, we will find feasible solutions of the problems on the left-hand side of (15) such that the inequality holds; therefore the inequality will hold for the optimal solutions too. Let $(x_{ij}(S))$ be an optimal solution of problem (6)-(9) corresponding to the function $f(S;A;C)$. Recall that $X_p=(x_{p1}(T),\dots,x_{pm}(T))$. Define the vectors

$$x_j(S)=\begin{pmatrix}x_{1j}(S)\\ \vdots\\ x_{|S|j}(S)\end{pmatrix}\quad\text{and}\quad x_j(T)=\begin{pmatrix}x_{1j}(T)\\ \vdots\\ x_{|T|j}(T)\end{pmatrix}.$$

By the assumptions made in the introduction, $\|x_j(S)\|=\|x_j(T)\|=A_j\ge 0$. We construct (by Proposition 1) vectors $x^{jp}$, $p=1,\dots,|T|$ such that $\|x^{jp}\|=A_j-x_{pj}(T)$, $x^{jp}\le x_j(S)$ and $\sum_{p\in T}x^{jp}=(|T|-1)x_j(S)$. Therefore, the matrix $(x^{1p},\dots,x^{mp})=(x^{p}_{ij})$, $i\in S$, $j\in J$ is a feasible solution of problem (6)-(9) corresponding to the function $f(S;A-X_p;C)$ and

$$\sum_{p\in T}\sum_{i\in S}\sum_{j\in J}f_{ij}x^{p}_{ij}=(|T|-1)\sum_{i\in S}\sum_{j\in J}f_{ij}x_{ij}(S).$$

Consequently,

$$\sum_{p\in T}f(S;A-X_p;C)\ge(|T|-1)f(S;A;C),$$

which is exactly inequality (15).

4. Discussion

We have given the performance guarantee of the greedy algorithm for a generalization of the maximum capacitated p-facility location problem.

There are a number of interesting questions raised by this work. Clearly, the analysis of Section 2 can be applied to set functions satisfying the following condition: for all $S,T\subseteq I$

$$f(T)\le (k+1)f(S)+L\sum_{p\in T\setminus S}(f(S\cup\{p\})-f(S)),\qquad(16)$$

where $L>0$ is some constant. Are there natural members of this class of functions? Does there exist a characterization of the set functions satisfying conditions (5) or (16)? Are other results from the theory of submodular set functions generalizable to these classes of set functions?
Another open question concerns the hardness of approximation of the considered problems. Feige [2] proves that $1-e^{-1}$ is the best possible performance guarantee for Max p-Cover unless P = NP. Max p-Cover is a special case of problem (1) with functions (2)-(4), i.e. the case k = 0. We conjecture that the result of Feige can be extended to the case of general integer $k\ge 0$.
References
[1] M.E. Dyer, An O(n) algorithm for the multiple choice knapsack problem, Math. Programming 29 (1984) 57-63.
[2] U. Feige, A threshold of ln n for approximating set cover, J. Assoc. Comput. Mach. 45 (1998) 634-652.
[3] S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations, Wiley, New York, 1990.
[4] G.L. Nemhauser, L.A. Wolsey, M.L. Fisher, An analysis of approximations for maximizing submodular set functions I, Math. Programming 14 (1978) 265-294.
[5] L.A. Wolsey, Maximising real-valued submodular functions: primal and dual heuristics for location problems, Math. Oper. Res. 7 (1982) 410-425.
[6] E. Zemel, An O(n) algorithm for the linear multiple choice knapsack problem and related problems, Inform. Process. Lett. 18 (1984) 123-128.