
Operations Research Letters 27 (2000) 149–152
www.elsevier.com/locate/dsw

Why the logarithmic barrier function in convex and
linear programming?
Jean B. Lasserre ∗
LAAS-CNRS, 7 Av. du Colonel Roche, 31077 Toulouse cedex 4, France
Received 1 December 1999; received in revised form 1 September 2000; accepted 6 September 2000
∗ Fax: +33-561-336936. E-mail address: lasserre@laas.fr (J.B. Lasserre).

Abstract

We provide a simple interpretation of the use of the logarithmic barrier function in convex and linear programming. © 2000 Elsevier Science B.V. All rights reserved.
MSC: 90C05; 90C25
Keywords: Convex programming; Linear programming; Interior point methods; Logarithmic barrier function

1. Introduction
The logarithmic barrier function (LBF) in convex and linear programming has become more and more popular in view of its good performance. It is, in fact, a particular choice among many others in the class of interior penalty functions. However, apart from its a posteriori numerical efficiency and the so-called "self-concordance" property (see, e.g. [2,4,5]), there has so far been no clue as to where the LBF comes from.
We shall demonstrate here that, surprisingly enough, this function can be viewed as an (a priori) "naive" approximation in the interior of the feasible set. Given a measurable function f : R^n → R and a Borel set Ω ⊂ R^n with non-empty interior, the basic ingredient is the well-known approximation
\[
\sup_{x\in\Omega} f(x) \;\approx\; \frac{1}{p}\,\ln \int_{\Omega} e^{p f(x)}\,dx
\tag{1.1}
\]
(for large p) of the global maximum of f over the set Ω (see e.g. Hiriart-Urruty [6] and references therein).
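As a quick numerical illustration of (1.1) (not taken from the paper: a minimal Python sketch with an arbitrary test function and a plain Riemann sum for the integral), the snippet below compares the log-integral value with the true supremum, which is 0 for f(x) = −(x−1)² over Ω = [−1, 2]:

```python
import numpy as np

def log_integral_sup(f, a, b, p, n=200_000):
    """Approximate sup_{x in [a,b]} f(x) by (1/p) * ln( int_a^b exp(p*f(x)) dx ),
    computing the integral with a plain Riemann sum."""
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    vals = p * f(x)
    shift = vals.max()                              # log-sum-exp shift for numerical stability
    log_integral = shift + np.log(np.exp(vals - shift).sum() * dx)
    return log_integral / p

f = lambda x: -(x - 1.0) ** 2                       # its supremum over [-1, 2] is 0, at x = 1
for p in (1, 10, 100, 1000):
    print(p, log_integral_sup(f, -1.0, 2.0, p))
# the values approach the true supremum 0, roughly at rate (ln p) / p
```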
If we consider a convex program min_x {f_0(x) | f_i(x) ≤ 0} with all f_k : R^n → R convex, k = 0, 1, ..., m, then applying the approximation scheme (1.1) to the Lagrangian
\[
H(\lambda,x) := f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x),
\]
in
\[
\sup_{\lambda\ge 0} H(\lambda,x) \;=\; f_0(x) + \sup_{\lambda\ge 0}\left[\sum_{i=1}^{m} \lambda_i f_i(x)\right]
\]
(which is nothing less than f_0 on the feasible set) yields the LBF with controlling parameter μ := p^{-1} (see also [8] for other approximation schemes of convex functions).
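As a toy check of the remark that sup_{λ≥0} H(λ, x) is nothing less than f_0 on the feasible set (a minimal sketch with a made-up one-constraint example; the grid bound lam_max is only there to make the unbounded supremum at infeasible points visible numerically):

```python
import numpy as np

# toy one-constraint instance (made up): f0(x) = x^2, constraint f1(x) = x - 1 <= 0
f0 = lambda x: x ** 2
f1 = lambda x: x - 1.0

def sup_lagrangian(x, lam_max):
    """sup of the Lagrangian f0(x) + lam*f1(x) over 0 <= lam <= lam_max, by a grid search."""
    lam = np.linspace(0.0, lam_max, 10_001)
    return (f0(x) + lam * f1(x)).max()

for lam_max in (10.0, 100.0, 1000.0):
    print(sup_lagrangian(0.5, lam_max),    # feasible x = 0.5: the sup equals f0(x) = 0.25
          sup_lagrangian(2.0, lam_max))    # infeasible x = 2.0: the sup grows with lam_max
```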
The approximation (1.1) could be considered as "naive" since the exact value 0 = sup_{λ_i ≥ 0} λ_i f_i(x) is replaced by p^{-1} ln ∫_0^∞ e^{p λ_i f_i(x)} dλ_i. In addition, so far there has not been any convincing numerical report on the efficiency of (1.1) when used directly in global optimization (see e.g. [6]). In fact, if the LBF had been presented this way, one might have suspected that it would not yield an efficient method.
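For a rough idea of the size of this replacement (a two-line check with an arbitrary value a < 0 standing in for f_i(x); it uses the closed form ∫_0^∞ e^{pλa} dλ = −1/(pa)):

```python
import numpy as np

a = -0.5                                   # stands in for f_i(x) < 0 at some fixed interior point
for p in (10, 100, 1000, 10_000):
    naive = np.log(-1.0 / (p * a)) / p     # p^{-1} * ln( int_0^inf exp(p*lam*a) dlam )
    print(p, naive)
# the quantity replacing the exact value 0 vanishes, but only at rate (ln p) / p
```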
However, it is indeed well suited. For instance, in linear programming, at the (unique) minimizer of the primal LBF one retrieves, as multipliers, the minimizer of the dual LBF (see e.g. [4]). In fact, we show that if we use the approximation (1.1) in Fenchel duality, one retrieves the dual LBF and vice versa.

The correspondence between Laplace and Fenchel transforms via exponentials and logarithms in the Cramer transform has already been used to establish nice parallels between optimization and probability (see e.g. [1,3]) via a change of algebra. The interested reader is referred to [1,7] and the many references therein.

2. The logarithmic barrier function
We first consider the general convex programming problem and then specialize to linear programming.
2.1. Convex programming


Consider the general convex programming problem (P):
\[
(P)\;\mapsto\;\min\{f_0(x) \mid f_i(x)\le 0,\; i=1,2,\dots,m\},
\tag{2.1}
\]
where f_i : R^n → R are convex functions, i = 0, 1, ..., m.

Let h : R^m → R ∪ {−∞, +∞} be the optimal value of the parametrized problem:
\[
y\;\mapsto\; h(y) := \inf_x\{f_0(x)\mid f_i(x)\le y_i,\; i=1,2,\dots,m\}.
\tag{2.2}
\]
It is assumed that Slater's condition holds true, that is, there is some x_0 such that
\[
f_i(x_0) < 0,\qquad i=1,2,\dots,m.
\tag{2.3}
\]

Following the notation in [2], the LBF associated with problem (P) is just
\[
x\;\mapsto\;\varphi(x,\mu) := f_0(x) - \mu\sum_{i=1}^{m}\ln\bigl(-f_i(x)\bigr),
\tag{2.4}
\]
where μ > 0 is the barrier parameter. Of course, φ(x, μ) is defined only on the set of points in the interior of the feasible set of (P). Most of today's interior point methods are based on this function.

Whereas φ(x, μ) could be viewed as just a particular choice among many in the family of "interior" penalty functions, there has been so far no explanation of where it comes from. The purpose of this note is to show that, surprisingly enough, φ is in fact an approximation of h(0) in an apparently very naive way.

Duality in convex programming implies that
\[
\sup_{\lambda\ge 0}\;\inf_x\left\{f_0(x)+\sum_{i=1}^{m}\lambda_i f_i(x)\right\}
\;=\;
\inf_x\;\max_{\lambda\ge 0}\left\{f_0(x)+\sum_{i=1}^{m}\lambda_i f_i(x)\right\},
\tag{2.5}
\]
the "interesting" part being the left-hand side of (2.5), since the right-hand side is just a rephrasing of (2.1). But in fact, it is this right-hand side which, when approximated via (1.1), yields the LBF.

Indeed, whenever x is feasible for (P), one has
\[
f_0(x) \;=\; \max_{\lambda\ge 0}\left\{f_0(x)+\sum_{i=1}^{m}\lambda_i f_i(x)\right\}
\;=\; f_0(x) + \max_{\lambda\ge 0}\left\{\sum_{i=1}^{m}\lambda_i f_i(x)\right\},
\]
or, equivalently,
\[
f_0(x) \;=\; f_0(x) + \sum_{i=1}^{m}\left[\max_{\lambda_i\ge 0}\lambda_i f_i(x)\right],
\]
and we show below that in fact
\[
\varphi(x,\mu) \;=\; -m\mu\ln\mu + f_0(x) + \mu\sum_{i=1}^{m}\ln\int_0^{\infty} e^{\lambda f_i(x)/\mu}\,d\lambda.
\tag{2.6}
\]
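As a sanity check of (2.4) and (2.6) (a minimal numerical sketch with made-up problem data, not taken from the paper), the snippet below evaluates φ(x, μ) both directly and through the right-hand side of (2.6), with each integral ∫_0^∞ e^{λ f_i(x)/μ} dλ computed by a truncated Riemann sum:

```python
import numpy as np

def phi_direct(x, mu, f0, fs):
    """phi(x, mu) = f0(x) - mu * sum_i ln(-f_i(x)); defined where every f_i(x) < 0."""
    return f0(x) - mu * sum(np.log(-fi(x)) for fi in fs)

def phi_via_integrals(x, mu, f0, fs, lam_max=50.0, n=500_000):
    """Right-hand side of (2.6): -m*mu*ln(mu) + f0(x) + mu * sum_i ln( int_0^inf e^{lam*f_i(x)/mu} dlam ),
    with each integral approximated by a Riemann sum truncated at lam_max."""
    lam = np.linspace(0.0, lam_max, n, endpoint=False)
    dlam = lam_max / n
    m = len(fs)
    total = -m * mu * np.log(mu) + f0(x)
    for fi in fs:
        integral = np.exp(lam * fi(x) / mu).sum() * dlam   # exact value is -mu / f_i(x)
        total += mu * np.log(integral)
    return total

# a small convex program (made up): minimize x1^2 + x2^2  s.t.  x1 + x2 - 1 <= 0  and  -x1 <= 0
f0 = lambda x: x[0] ** 2 + x[1] ** 2
fs = [lambda x: x[0] + x[1] - 1.0, lambda x: -x[0]]

x = np.array([0.2, 0.3])     # strictly feasible: f1(x) = -0.5, f2(x) = -0.2
mu = 0.1
print(phi_direct(x, mu, f0, fs))
print(phi_via_integrals(x, mu, f0, fs))
# the two values agree up to quadrature error, illustrating identity (2.6)
```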

In other words, the term max_{λ_i ≥ 0} λ_i f_i(x), which is exactly equal to 0, is "approximated" by
\[
\ln\left[\int_0^{\infty}\bigl(e^{\lambda f_i(x)}\bigr)^{p}\,d\lambda\right]^{1/p},
\]
which for large p (or small μ) is indeed an approximation. Developing yields
\[
\int_0^{\infty} e^{(f_i(x)/\mu)\lambda}\,d\lambda \;=\; -\frac{\mu}{f_i(x)},
\]
whenever f_i(x) < 0 (which is the case for an interior point x), so that
\[
\mu\ln\int_0^{\infty} e^{(f_i(x)/\mu)\lambda}\,d\lambda
\;=\; \mu\ln\bigl(-\mu/f_i(x)\bigr)
\;=\; \mu\ln\mu - \mu\ln\bigl(-f_i(x)\bigr),
\]
and (2.6) follows. Therefore, since −mμ ln μ is a constant, minimizing φ reduces to minimizing
\[
f_0(x) + \mu\sum_{i=1}^{m}\ln\int_0^{\infty} e^{\mu^{-1}\lambda_i f_i(x)}\,d\lambda_i
\;\approx\;
f_0(x) + \max_{\lambda\ge 0}\left\{\sum_{i=1}^{m}\lambda_i f_i(x)\right\}.
\tag{2.7}
\]

In fact, this is more than just a coincidence, and we now illustrate in linear programming how the above approximation is well suited for duality via Laplace and Legendre–Fenchel transforms.

2.2. Linear programming

We now restrict ourselves to the linear programming case and show how, using this approximation, the LBF of the dual can be obtained. Consider the LP problem
\[
(P)\;\mapsto\;\min\{c'x \mid Ax \ge b,\; x\ge 0\},
\tag{2.8}
\]
where A is an (m, n) matrix. Again we assume that Slater's condition holds at some point x_0, i.e., there is some x_0 > 0 such that Ax_0 > b.

Let h : R^m → R ∪ {−∞, +∞} be defined as
\[
y\;\mapsto\; h(y) := \min\{c'x \mid Ax\ge y,\; x\ge 0\}.
\]
It is well known that h is convex and that its Fenchel transform h*,
\[
\lambda\;\mapsto\; h^*(\lambda) := \sup_y\{\lambda' y - h(y)\}
\;=\; \sup_y\Bigl\{\lambda' y + \max_{x\ge 0}\{-c'x \mid Ax\ge y\}\Bigr\},
\tag{2.9}
\]
satisfies (h*)*(b) = h(b). In (2.9) let us make the approximation
\[
-h(y) \;\approx\; \ln\left[\int_{Ax\ge y,\;x\ge 0} e^{-pc'x}\,dx\right]^{1/p}
\]
and the same approximation for the "sup_y". It yields
\[
h^*(\lambda) \;\approx\; \ln\left[\int_y \left[e^{\lambda' y}\left(\int_{Ax\ge y,\;x\ge 0} e^{-pc'x}\,dx\right)^{1/p}\right]^{p} dy\right]^{1/p},
\]
or again
\[
h^*(\lambda) \;\approx\; \ln\left[\int_y e^{p\lambda' y}\int_{Ax\ge y,\;x\ge 0} e^{-pc'x}\,dx\,dy\right]^{1/p}.
\]
Interchanging the order of integration, we obtain
\[
h^*(\lambda) \;\approx\; \ln\left[\int_{x\ge 0} e^{-pc'x}\left(\int_{-\infty}^{Ax} e^{p\lambda' y}\,dy\right)dx\right]^{1/p}
\;=\; \ln\left[\Bigl(p^{m}\prod_{i=1}^{m}\lambda_i\Bigr)^{-1}\int_{x\ge 0} e^{-pc'x}\,e^{p\lambda' Ax}\,dx\right]^{1/p}
\]
\[
\;=\; \ln\left[p^{-m}\prod_{i=1}^{m}\lambda_i^{-1}\int_{x\ge 0} e^{p(A'\lambda-c)'x}\,dx\right]^{1/p}
\;=\; \ln\left[p^{-(m+n)}\prod_{i=1}^{m}\lambda_i^{-1}\prod_{i=1}^{n}(c-A'\lambda)_i^{-1}\right]^{1/p}
\]
\[
\;=\; -\frac{m+n}{p}\ln p \;-\; \frac{1}{p}\sum_{i=1}^{n}\ln(c-A'\lambda)_i \;-\; \frac{1}{p}\sum_{i=1}^{m}\ln(\lambda_i),
\tag{2.10}
\]
with everything well defined provided λ > 0 and A'λ < c, which is precisely the interior of the feasible set of the dual.
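To make (2.10) concrete (a minimal numerical sketch with made-up LP data): for h(y) = min{c′x | Ax ≥ y, x ≥ 0}, LP duality gives h(y) = max{y′λ | A′λ ≤ c, λ ≥ 0}, so h is the support function of the dual feasible set and its conjugate h* vanishes on that set; the right-hand side of (2.10) should therefore tend to 0 at an interior dual point as p grows:

```python
import numpy as np

# tiny LP data (made up, for illustration): minimize c'x  s.t.  A x >= b, x >= 0
c = np.array([1.0, 1.0])                 # n = 2
A = np.array([[1.0, 2.0]])               # m = 1
lam = np.array([0.3])                    # interior dual point: lam > 0 and A'lam < c

def h_star_approx(lam, p):
    """Right-hand side of (2.10): the log-barrier approximation of h*(lam)."""
    m, n = A.shape
    slack = c - A.T @ lam                # c - A'lam, componentwise positive here
    return (-(m + n) / p) * np.log(p) - np.log(slack).sum() / p - np.log(lam).sum() / p

# for this LP the exact conjugate value is h*(lam) = 0 at any interior dual point
for p in (10, 100, 1000, 10_000):
    print(p, h_star_approx(lam, p))
# the approximation tends to the exact value 0 as p grows
```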

Hence, if we now express h(b) via
\[
h(b) \;=\; (h^*)^*(b) \;=\; \sup_{\lambda}\{b'\lambda - h^*(\lambda)\}
\]
and use instead the above approximation (2.10) of h*(λ), one gets
\[
h(b) \;\approx\; \sup_{\lambda}\left\{b'\lambda + \frac{1}{p}\sum_{i=1}^{n}\ln(c-A'\lambda)_i + \frac{1}{p}\sum_{i=1}^{m}\ln(\lambda_i)\right\},
\tag{2.11}
\]
since we can remove the constant (m+n)p^{-1} ln(p^{-1}) (note also that the latter constant vanishes as p → ∞). We recognize in (2.11) the LBF associated with the dual (D) of (P).

In fact, from the above we have also shown that
\[
-\frac{1}{p}\sum_{i=1}^{n}\ln(c-A'\lambda)_i \;-\; \frac{1}{p}\sum_{i=1}^{m}\ln(\lambda_i)
\]
approximates the p^{-1} ln of the Laplace transform of e^{−ph(·)}. Thus, as h(b) is the Fenchel transform at b of h*(·), maximizing the LBF in (2.11) amounts to approximating the Cramer transform of e^{−ph(·)} at b. The controlling parameter μ that appears in the LBF comes from the approximation of "sup f" by "μ ln ∫ e^{f/μ}".
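Finally, a small numerical sketch of this last observation (made-up data again, with the same A and c as in the previous sketch; the dual variable is one-dimensional, so a brute-force grid search suffices): maximizing the dual LBF in (2.11) over the interior of the dual feasible set should approach h(b), which for the tiny LP below equals 0.5.

```python
import numpy as np

# tiny LP (made up): minimize c'x  s.t.  Ax >= b, x >= 0; its optimal value is 0.5
c = np.array([1.0, 1.0])
A = np.array([[1.0, 2.0]])
b = np.array([1.0])

def dual_lbf(lams, p):
    """(2.11) objective b'lam + (1/p)*sum_j ln(c - A'lam)_j + (1/p)*sum_i ln(lam_i),
    evaluated at an array of scalar dual variables (the dual of this LP is one-dimensional)."""
    slack1 = c[0] - A[0, 0] * lams          # first component of c - A'lam
    slack2 = c[1] - A[0, 1] * lams          # second component of c - A'lam
    return b[0] * lams + (np.log(slack1) + np.log(slack2) + np.log(lams)) / p

# interior of the dual feasible set: 0 < lam and A'lam < c, i.e. 0 < lam < 0.5 here
grid = np.linspace(1e-6, 0.5 - 1e-6, 200_000)
for p in (10, 100, 1000, 10_000):
    print(p, dual_lbf(grid, p).max())
# the maxima increase toward the LP optimal value 0.5 as p grows, as (2.11) suggests
```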

References
[1] F. Baccelli, G. Cohen, G.J. Olsder, J.-P. Quadrat, Synchronization and Linearity, Wiley, New York, 1992.
[2] R. Cominetti, J. San Martín, Asymptotic analysis of the exponential penalty trajectory in linear programming, Math. Progr. 67 (1994) 169–187.
[3] P. Del Moral, G. Salut, Maslov optimization theory, LAAS Report 94461, 1994.
[4] D. den Hertog, Interior Point Approach to Linear, Quadratic and Convex Programming, Kluwer, Dordrecht, 1994.
[5] O. Güler, L. Tunçel, Characterization of the barrier parameter of homogeneous convex cones, Math. Progr. 81 (1998) 55–76.
[6] J.B. Hiriart-Urruty, Conditions for global optimality, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Kluwer, Dordrecht, 1994.
[7] V.P. Maslov, Méthodes Opératorielles, Éditions Mir, Moscow, 1973 (French translation, 1987).
[8] A. Seeger, A new representation formula for convex functions on separable normed spaces, Dépt. de Mathématiques, Université d'Avignon, April 1996.