b. If $F$ is not a convex vector function or $D$ is not convex, there may be no current method for finding $M(\delta, F)$.
2.7.2 Transformations and auxiliary optimization problems associated to P_1, P_2, and P_3

We consider the transformation given by
$\theta = (\theta_1, \dots, \theta_q): R_+^q \to R_+^q$.
Relative to $P_1$, $P_2$, and $P_3$, we consider the following transformed problems (2.18), (2.19), and (2.20), respectively.
$(PS_1)$    $\min_{x \in D} \Big( \sum_{k=1}^{q} \lambda_k \theta_k(f_k(x)) + \sum_{k=1}^{q} \mu_k \theta_k(u_k(g_k(x))) \Big)$    (2.18)
$(PS_2)$    $\min_{x \in D} \sum_{k=1}^{q} \lambda_k \theta_k(f_k(x))$    (2.19)
$(PS_3)$    $\min_{x \in D} \sum_{k=1}^{q} \mu_k \theta_k(u_k(g_k(x, s)))$    (2.20)

with $u = (u_1, \dots, u_q): R_+^q \to R_+^q$.
We now define for problem (2.18) an auxiliary optimization problem $AP_1(q, \lambda, \mu, \theta)$ as follows, for $q \in Z_+ \setminus \{0\}$ and $(\lambda, \mu) \in \Delta_{++}^1$, where
$\Delta_{++}^1 = \Big\{ (\lambda, \mu) \in R_{++}^q \times R_{++}^q : \sum_{i=1}^{q} \lambda_i + \sum_{i=1}^{q} \mu_i = 1 \Big\}$.
Find
$M_1(q, \lambda, \mu, \theta) = \arg\min_{x \in D} \psi_1(x; q, \lambda, \mu, \theta)$,
the set of optimal solutions for (2.18), where $\psi_1(\cdot\,; q, \lambda, \mu, \theta): D \to R_+$ is given by
$\psi_1(x; q, \lambda, \mu, \theta) = \sum_{k=1}^{q} \lambda_k \theta_k^q(f_k(x)) + \sum_{k=1}^{q} \mu_k \theta_k^q(u_k(g_k(x)))$.    (2.21)

For $q = \infty$, relation (2.21) is replaced by
$\psi_1(x; \infty, \lambda, \mu, \theta) = \max\big\{ \max_k \lambda_k \theta_k(f_k(x)),\ \max_k \mu_k \theta_k(u_k(g_k(x))) \big\}$.    (2.22)
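For concreteness, the scalarization (2.21) and its Chebyshev limit (2.22) can be evaluated directly. The following is a minimal sketch; the function name `psi1` and the representation of $\theta$, $f$, $u \circ g$ as lists of callables are illustrative assumptions, not notation from the text:

```python
import numpy as np

def psi1(x, f, ug, lam, mu, theta, q):
    """Evaluate psi_1(x; q, lam, mu, theta): the weighted q-th-power
    scalarization for finite q, or its weighted Chebyshev form for q = inf."""
    tf = np.array([theta[k](f[k](x)) for k in range(len(f))])    # theta_k(f_k(x))
    tg = np.array([theta[k](ug[k](x)) for k in range(len(ug))])  # theta_k(u_k(g_k(x)))
    if np.isinf(q):
        # q = infinity: maximum of the weighted transformed values, as in (2.22)
        return max((lam * tf).max(), (mu * tg).max())
    # finite q: sum of weighted q-th powers, as in (2.21)
    return float((lam * tf ** q).sum() + (mu * tg ** q).sum())
```

With $q = 1$ and $\theta_k$ the identity, this reduces to an ordinary weighted sum of the transformed objective and constraint terms.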
Following the lines of White [207, 209], Bowman [19], and Karlin [82], relative to problem (2.18) we obtain the following results:
Theorem 2.5
(1a) If $q \neq \infty$, then $M_1(q, \lambda, \mu, \theta) \subseteq E_1$ if $(\lambda, \mu) \in \Delta_{++}^1$;
(2a) If a certain uniform dominance condition holds (Bowman [19]), then $M_1(\infty, \lambda, \mu, \theta) \subseteq E_1$ for $(\lambda, \mu) \in \Delta_{++}^1$;
(3a) If $q \neq \infty$, $\{\theta_k^q(f_k(\cdot))\}$, $k = 1, \dots, q$, are all convex on $D$ and $D$ is convex, then $E_1 \subseteq \bigcup_{(\lambda,\mu) \in \Delta_{++}^1} M_1(q, \lambda, \mu, \theta)$;
(4a) $E_1 \subseteq \bigcup_{(\lambda,\mu) \in \Delta_{++}^1} M_1(\infty, \lambda, \mu, \theta)$;
(5a) If $D$ is finite, then there exists $\bar q \in Z_+ \setminus \{0\}$ such that $E_1 = \bigcup_{(\lambda,\mu) \in \Delta_{++}^1} M_1(q, \lambda, \mu, \theta)$ for all $q \geq \bar q$.
For problem (2.19) we define an auxiliary optimization problem $AP_2(q, \lambda, \theta)$ as follows, for $q \in Z_+ \setminus \{0\}$ and $\lambda \in \Delta_{++}^2$, where
$\Delta_{++}^2 = \Big\{ \lambda \in R_{++}^q : \sum_{i=1}^{q} \lambda_i = 1 \Big\}$.
Find
$M_2(q, \lambda, \theta) = \arg\min_{x \in D} \psi_2(x; q, \lambda, \theta)$,
where $\psi_2(\cdot\,; q, \lambda, \theta): D \to R_+$ is given by
$\psi_2(x; q, \lambda, \theta) = \sum_{k=1}^{q} \lambda_k \theta_k^q(f_k(x))$.    (2.23)

For $q = \infty$, equation (2.23) is replaced by
$\psi_2(x; \infty, \lambda, \theta) = \max_k \lambda_k \theta_k(f_k(x))$.    (2.24)

Following the lines of White [207, 209], Bowman [19], and Karlin [82], relative to problem (2.19) we obtain the following results:
Theorem 2.6
(1a) If $q \neq \infty$, then $M_2(q, \lambda, \theta) \subseteq E_2$ if $\lambda \in \Delta_{++}^2$;
(2a) If a certain uniform dominance condition holds (Bowman [19]), then $M_2(\infty, \lambda, \theta) \subseteq E_2$ if $\lambda \in \Delta_{++}^2$;
(3a) If $q \neq \infty$, $\{\theta_k^q(f_k(\cdot))\}$, $k = 1, \dots, q$, are all convex on $D$ and $D$ is convex, then $E_2 \subseteq \bigcup_{\lambda \in \Delta_{++}^2} M_2(q, \lambda, \theta)$;
(4a) $E_2 \subseteq \bigcup_{\lambda \in \Delta_{++}^2} M_2(\infty, \lambda, \theta)$;
(5a) If $D$ is finite, then there exists $\bar q \in Z_+ \setminus \{0\}$ such that $E_2 = \bigcup_{\lambda \in \Delta_{++}^2} M_2(q, \lambda, \theta)$ for all $q \geq \bar q$.
For problem (2.20) we define an auxiliary optimization problem $AP_3(q, \mu, \theta)$ as follows, for $q \in Z_+ \setminus \{0\}$ and $\mu \in \Delta_{++}^2$. Find
$M_3(q, \mu, \theta) = \arg\min_{x \in D} \psi_3(x; q, \mu, \theta)$,
where $\psi_3(\cdot\,; q, \mu, \theta): D \to R_+$ is given by
$\psi_3(x; q, \mu, \theta) = \sum_{k=1}^{q} \mu_k \theta_k^q(u_k(g_k(x, s)))$.    (2.25)

For $q = \infty$, equation (2.25) is replaced by
$\psi_3(x; \infty, \mu, \theta) = \max_k \mu_k \theta_k(u_k(g_k(x, s)))$.    (2.26)

Finally, for problem (2.20) we obtain the following results:
Theorem 2.7
(1a) If $q \neq \infty$, then $M_3(q, \mu, \theta) \subseteq E_3$ if $\mu \in \Delta_{++}^2$;
(2a) If a certain uniform dominance condition holds (Bowman [19]), then $M_3(\infty, \mu, \theta) \subseteq E_3$ if $\mu \in \Delta_{++}^2$;
(3a) If $q \neq \infty$, $\{\theta_k^q(u_k(g_k(\cdot)))\}$, $k = 1, \dots, q$, are all convex on $D$ and $D$ is convex, then $E_3 \subseteq \bigcup_{\mu \in \Delta_{++}^2} M_3(q, \mu, \theta)$;
(4a) $E_3 \subseteq \bigcup_{\mu \in \Delta_{++}^2} M_3(\infty, \mu, \theta)$;
(5a) If $D$ is finite, then there exists $\bar q \in Z_+ \setminus \{0\}$ such that $E_3 = \bigcup_{\mu \in \Delta_{++}^2} M_3(q, \mu, \theta)$ for all $q \geq \bar q$.
Remark 2.7 Some proofs of these results are given in Sudradjat and Preda [189].
2.7.3 Non-convex auxiliary optimization problem

The problem is to find points in $D$ which are in $E$, or close to points in $E$. We also wish to use convexity and concavity properties. For these to be meaningful, we need appropriate convex sets within which to embed our analysis. $R^n$ is too large, because we have stipulated merely that $F(x) \in R_+^q$ for all $x \in D$, and not for all $x \in R^n$. When $D$ is convex, all that we need state is that $F$ is defined on $D$, with $F(x) \in R_+^q$ for all $x \in D$. In the following we use the lines given by White [149].
(a) Case of concave $\{f_k(\cdot)\}$, $\{u_k(g_k(\cdot))\}$, $q \neq \infty$

We assume that the $\{f_k\}$ are all concave on $D$, and look at the choice of $\{\theta_k(\cdot)\}$ and $q$ and associated algorithms for the auxiliary optimization problems $AP_1(q, \lambda, \mu, \theta)$, $AP_2(q, \lambda, \theta)$ and $AP_3(q, \mu, \theta)$. Generally, even if $\theta_k(f_k(\cdot))$ is concave on $D$, it is not necessarily true that $\theta_k^q(f_k(\cdot))$ is concave on $D$. We need to choose $\{\theta_k(f_k(\cdot))\}$ so that, at least for some $q$, $\{\theta_k^q(f_k(\cdot))\}$ are all concave on $D$. The following form of $\{\theta_k(\cdot)\}$ provides an instance which does what is required, namely
$\theta_k(t) = \log(t + a_k)$, $k = 1, \dots, q$,
over the range $t \geq 1 - a_k$, $k = 1, \dots, q$, where $\{a_k\} \subseteq R$.
Lemma 2.1 If $\{f_k(\cdot)\}$ are concave on $D$, then $\{\theta_k^q(f_k(\cdot))\}$ are all concave on $D$ for all $q \in Z_+ \setminus \{0\}$ with $q \leq \bar q = \min(\bar q_1, \bar q_2)$, where
$\bar q_1 = 1 + \min_{k=1,\dots,q} \min_{x \in D} [\log(f_k(x) + a_k)]$, $\quad \bar q_2 = 1 + \min_{k=1,\dots,q} \min_{x \in D} [\log(u_k(g_k(x)) + a_k)]$,    (2.27)
provided that
$a_k \geq 1 - \min_{x \in D} [f_k(x)]$, $k = 1, \dots, q$.    (2.28)
Proof: For $z \in R$, $z \geq 1 - a_k$, $k = 1, \dots, q$,
$\dfrac{d^2 \theta_k^q(z)}{dz^2} = \dfrac{q \log^{q-2}(z + a_k)\,[\,q - 1 - \log(z + a_k)\,]}{(z + a_k)^2}$.
Thus for any given $z \geq 1 - a_k$, $\dfrac{d^2 \theta_k^q(z)}{dz^2} \leq 0$ if $q \leq 1 + \log(z + a_k)$. Replacing $z$ by $f_k(x)$ and $u_k(g_k(x))$, we see that $\theta_k^q(\cdot)$ is concave on $D$ provided that (2.27) and (2.28) hold. □
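The concavity range used in this proof can be checked numerically. Below is a small sketch (the finite-difference check is my illustrative choice, not part of the text) with $\theta_k(t) = \log(t + a_k)$, $a_k = 0$ and $q = 3$, for which $\theta_k^q$ is concave exactly where $\log(z + a_k) \geq q - 1$, i.e. $z \geq e^2 \approx 7.39$:

```python
import math

def d2_theta_q(z, a, q, h=1e-4):
    """Central second difference of z -> log(z + a)**q at the point z."""
    g = lambda t: math.log(t + a) ** q
    return (g(z + h) - 2.0 * g(z) + g(z - h)) / h ** 2

a, q = 0.0, 3
# log(8) > q - 1 = 2, so theta^q curves downward here (concave side)
concave_side = d2_theta_q(8.0, a, q)
# log(5) < 2, so theta^q still curves upward here (convex side)
convex_side = d2_theta_q(5.0, a, q)
```

The sign change of the second difference across $z = e^{q-1} - a_k$ matches the condition $q \leq 1 + \log(z + a_k)$ in the proof.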
(b) Case of convex $\{f_k(\cdot)\}$, $\{u_k(g_k(\cdot))\}$ and finite $D$

If $\{\theta_k(\cdot)\}$ are all convex on $R_+$, and $\{f_k(\cdot)\}$ and $\{u_k(g_k(\cdot))\}$ are all convex on $D$, then $\{\theta_k^q(f_k(\cdot))\}$ are all convex on $D$. This applies, for example, when $\theta_k(t) = t$ for all $t \in R_+$, $1 \leq k \leq q$.
In this case the auxiliary optimization problem $AP_1(q, \lambda, \mu, \theta)$ becomes one of minimizing a convex function over a finite set $D$, and likewise for $AP_2(q, \lambda, \theta)$ and $AP_3(q, \mu, \theta)$. This also holds in the case $q = \infty$. Let us now assume that $\psi(x)$ is $\psi_1(x; q, \lambda, \mu, \theta)$, $\psi_2(x; q, \lambda, \theta)$ or $\psi_3(x; q, \mu, \theta)$, respectively, and that $\psi(x)$ is convex on $D$. Then consider the following algorithm, dropping the qualifier $\{q, \lambda, \mu, \theta\}$, $\{q, \lambda, \theta\}$ or $\{q, \mu, \theta\}$, respectively, for ease of exposition:
$\partial\psi(x^t)$ is the subdifferential of $\psi$ at $x = x^t$, and $\partial\psi(x^t) \neq \emptyset$ (Rockafellar [150]); $S^t$ is any finite non-empty subset of $\partial\psi(x^t)$ obtained by some specified method; $x_1$ is the first component of $x$.
Algorithm 2.1
(i) Select $\varepsilon \in R_+ \setminus \{0\}$.
(ii) Set $D^1 = D$.
(iii) Set $t = 1$.
(iv) Assume that we have derived $D^t$.
(v) Find $x^t \in \arg\min_{x \in D^t} x_1$ and then $S^t$.
(vi) Set $D^{t+1} = \{x \in D^t : y(x - x^t) \leq -\varepsilon \ \forall\, y \in S^t\}$.
(vii) If $D^{t+1} = \emptyset$, set $x^A \in \arg\min_{x \in \{x^1, \dots, x^t\}} \psi(x)$, and stop.
(viii) If $D^{t+1} \neq \emptyset$, replace $t$ by $t + 1$ and go to step (v).
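Over a finite $D$ the procedure can be sketched directly. The following Python sketch is an illustrative assumption (function names are mine, and $S^t$ is taken as a singleton); step (v) is implemented, as in Remark 2.8, by picking the feasible point with smallest first component:

```python
import numpy as np

def algorithm_2_1(D, psi, subgrad, eps):
    """Sketch of Algorithm 2.1 on a finite feasible set D.

    D: list of points (tuples); psi: convex objective; subgrad: returns one
    subgradient of psi at a point (so S^t is a singleton); eps > 0.
    Returns x^A with psi* <= psi(x^A) <= psi* + eps (cf. Theorem 2.8)."""
    Dt = list(D)                              # step (ii): D^1 = D
    visited = []
    while Dt:                                 # step (viii) loops back here
        xt = min(Dt, key=lambda p: p[0])      # step (v): minimize x_1 over D^t
        visited.append(xt)
        y = subgrad(xt)                       # S^t = {y}
        # step (vi): keep only the points satisfying the subgradient cut
        Dt = [p for p in Dt
              if np.dot(y, np.asarray(p) - np.asarray(xt)) <= -eps]
    return min(visited, key=psi)              # step (vii): best visited point
```

For instance, with $\psi(x) = x_1^2 + x_2^2$ over a four-point $D$ and $\varepsilon = 0.5$, the loop visits only two points and returns one within $\varepsilon$ of the optimum.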
We have the following theorem.
Theorem 2.8
(i) Algorithm 2.1 terminates in a finite number of iterations.
(ii) If $\psi^*$ is the minimal value of $\psi$ on $D$, then $\psi^* \leq \psi(x^A) \leq \psi^* + \varepsilon$, where $\psi \in \{\psi_1, \psi_2, \psi_3\}$.
Proof: (i) Let us suppose that the algorithm is not finite. Because $D$ is finite, there exists a set $\{t_r\} \subseteq Z_+ \setminus \{0\}$ such that $x^{t_r} = \bar x$ for all $r \geq 1$, where $\bar x$ is some member of $D$, and $x^{t_{r+1}} \in D^{t_r + 1}$ for all $r \geq 1$. Then
$y(x^{t_{r+1}} - x^{t_r}) \leq -\varepsilon$ for all $y \in S^{t_r}$, $r \geq 1$,
that is, $y(\bar x - \bar x) = 0 \leq -\varepsilon$. This is not possible.
(ii) Let $D^{t+1} = \emptyset$ and $Y^s = D^s \setminus D^{s+1}$, $1 \leq s \leq t$; then $D = \bigcup_{s=1}^{t} Y^s$. Let $x \in Y^s$. Then $x \notin D^{s+1}$ and $y(x - x^s) \geq -\varepsilon$ for some $y \in \partial\psi(x^s)$. Because $\psi \in \{\psi_1, \psi_2, \psi_3\}$ are convex, $\psi(x) - \psi(x^s) \geq y(x - x^s) \geq -\varepsilon$. Hence $\psi^* \leq \psi(x^A) \leq \psi^* + \varepsilon$. □
Remark 2.8 Step (v) really only requires finding a feasible solution in $D^t$. The use of $x_1$ as an objective function is merely to facilitate this.
Remark 2.9 Step (v) is a subproblem of minimizing a linear function over the solutions defined by a polytope, say $Z^t$, generated by the subgradient constraints, which are also in $D$. When all functions are differentiable, $\partial\psi(x^t)$ is a singleton, the gradient vector of $\psi$ at $x = x^t$. In the general case $\partial\psi(x^t)$, and hence $D^t$, may be found in terms of the subdifferentials of the functions $\{\theta_k(f_k(\cdot))\}$ (Rockafellar [150]).
Remark 2.10 If $D$ is the vertex set of a polytope $\bar D$, then any solution from $D^t$ is also a vertex of $\bar D \cap Z^t$. Thus a vertex search subalgorithm such as that of Murty [124] can be developed. Other procedures for enumerating the vertices of a polytope may be adaptable for this problem (Matheiss and Rubin [116]).
Remark 2.11 If $D$ is the integral set of a polytope $\bar D$, the subproblem in step (v) is an integer linear programming problem, for which a range of algorithms exists. Whatever method is used to solve the auxiliary problem, the convexity of $\psi$ is helpful in providing lower bounds, because if $\{x^s\}$ is any set of solutions generated, then
$\psi^* \geq \min_{x \in D} \max_{s,\; y^s \in \partial\psi(x^s)} \big[\psi(x^s) + y^s(x - x^s)\big]$.
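When the candidate set (or a finite sample of it) is explicitly available, this cutting-plane lower bound is cheap to evaluate. A minimal sketch follows; the function name and the list-based representation of points and subgradients are my assumptions:

```python
import numpy as np

def cut_lower_bound(Dbar, psi, solutions, subgrads):
    """Lower bound psi* >= min_x max_s [psi(x^s) + y^s (x - x^s)],
    built from visited solutions x^s and subgradients y^s of psi at x^s."""
    def support(x):
        # pointwise maximum of the supporting affine minorants at each x^s
        return max(psi(xs) + float(np.dot(ys, np.asarray(x) - np.asarray(xs)))
                   for xs, ys in zip(solutions, subgrads))
    return min(support(x) for x in Dbar)
```

For $\psi(x) = x^2$ with cuts taken at $x = -1$ and $x = 1$, the bound over a small grid is $-1$, which indeed lies below $\psi^* = 0$.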
Remark 2.12 If $D$ is a subset of a polytope $\bar D$, we note that
$\psi^* \geq \min_{x \in \bar D} \psi(x)$.    (2.29)
The determination of the right-hand side of (2.29) is a convex programming problem. Lower bounds may be useful in determining how close the best solution to date is to an optimal solution, so that computations may be terminated early if wished.
(c) Case of a mixture of concave and convex $\{f_k(\cdot)\}$, $\{u_k(g_k(\cdot))\}$, $q \neq \infty$
From a global optimization point of view this is the hardest problem of all. Consider
$M_1$, $M_2$, $\bar M_1$ and $\bar M_2$, non-empty subsets of $\{1, \dots, q\}$ such that $M_1 \cup M_2 = \bar M_1 \cup \bar M_2 = \{1, \dots, q\}$, $M_1 \cap M_2 = \emptyset$, and $\bar M_1 \cap \bar M_2 = \emptyset$; let $\{\theta_k^q(f_k(\cdot))\}$ be concave on $D$ for $k \in M_1$ and convex on $D$ for $k \in M_2$; also let $\{\theta_k^q(u_k(g_k(\cdot)))\}$ be concave on $D$ for $k \in \bar M_1$ and convex on $D$ for $k \in \bar M_2$. Define
$\psi^1(\cdot\,; q, \lambda, \mu, \theta) = \sum_{k \in M_1} \lambda_k \theta_k^q(f_k(\cdot)) + \sum_{k \in \bar M_1} \mu_k \theta_k^q(u_k(g_k(\cdot)))$,
$\psi^2(\cdot\,; q, \lambda, \mu, \theta) = \sum_{k \in M_2} \lambda_k \theta_k^q(f_k(\cdot)) + \sum_{k \in \bar M_2} \mu_k \theta_k^q(u_k(g_k(\cdot)))$.
Then, dropping the qualifiers $\{q, \lambda, \mu, \theta\}$ for ease of exposition, we have $\psi(\cdot) = \psi^1(\cdot) + \psi^2(\cdot)$, where $\psi^1(\cdot)$ and $\psi^2(\cdot)$ are respectively concave and convex on $D$.
The following algorithm is an extension of an algorithm of White [206]. $S_2^t$ is any finite subset of the subdifferential $\partial\psi^2(x^t)$ of $\psi^2(\cdot)$ at $x = x^t$, obtained by some specified method.
Algorithm 2.2
(i) Select $\varepsilon \in R_+ \setminus \{0\}$.
(ii) Set $D^1 = D$.
(iii) Set $t = 1$.
(iv) Assume that we have derived $D^t$.
(v) Find $x^t \in \arg\min_{x \in D^t} \psi^1(x)$ and then $S_2^t$.
(vi) Set $D^{t+1} = \{x \in D^t : y(x - x^t) \leq -\varepsilon \ \forall\, y \in S_2^t\}$.
(vii) If $D^{t+1} = \emptyset$, set $x^B \in \arg\min_{x \in \{x^1, \dots, x^t\}} \psi(x)$, and stop.
(viii) If $D^{t+1} \neq \emptyset$, replace $t$ by $t + 1$ and go to step (v).

We have the following theorem, where $\partial\psi^2(x)$ is the subdifferential of $\psi^2(\cdot)$ at the point $x$.
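A finite-$D$ sketch of Algorithm 2.2 parallels that of Algorithm 2.1: step (v) minimizes the concave part $\psi^1$ exactly over $D^t$, while the cuts in step (vi) use subgradients of the convex part $\psi^2$ only. Function names and the singleton choice of $S_2^t$ are illustrative assumptions:

```python
import numpy as np

def algorithm_2_2(D, psi1, psi2, subgrad2, eps):
    """Sketch of Algorithm 2.2: minimize psi = psi1 + psi2 over a finite D,
    with psi1 concave and psi2 convex on D; returns x^B with
    psi* <= psi(x^B) <= psi* + eps (cf. Theorem 2.9)."""
    Dt = list(D)                                  # step (ii): D^1 = D
    visited = []
    while Dt:
        xt = min(Dt, key=psi1)                    # step (v): minimize psi^1 over D^t
        visited.append(xt)
        y = subgrad2(xt)                          # S_2^t = {y}, y in d(psi^2)(x^t)
        # step (vi): cut using the subgradient of the convex part only
        Dt = [p for p in Dt
              if np.dot(y, np.asarray(p) - np.asarray(xt)) <= -eps]
    # step (vii): best visited point for the full objective
    return min(visited, key=lambda p: psi1(p) + psi2(p))
```

Because $\psi^1$ is minimized exactly at each pass, the concave part never needs subgradient information; only $\psi^2$ contributes cuts, which is what makes the concave–convex split workable.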
Theorem 2.9
(i) If $\bigcup_{x \in D} \partial\psi^2(x)$ is compact, then Algorithm 2.2 terminates in a finite number of iterations.
(ii) If $\psi^*$ is the minimal value of $\psi(\cdot)$ on $D$, then $\psi^* \leq \psi(x^B) \leq \psi^* + \varepsilon$.
Proof: (i) Let us suppose that the algorithm is not finite. Because of the assumptions about the subdifferentials and the compactness of $D$, there exist $y \in R^n$ and a set $\{t_r\} \subseteq Z_+ \setminus \{0\}$ such that
$y(x^{t_{r+1}} - x^{t_r}) \leq -\varepsilon/2$ for all $r \geq 1$.
Thus $y(x^{t_{r+1}} - x^{t_1}) \leq -r\varepsilon/2$ for all $r \geq 1$, which is not possible, since the left-hand side is bounded on the compact set $D$.
(ii) Let $D^{t+1} = \emptyset$ and $Y^s = D^s \setminus D^{s+1}$, $s = 1, \dots, t$; then $D = \bigcup_{s=1}^{t} Y^s$. Let $x \in Y^s$. Then $x \notin D^{s+1}$ and $y(x - x^s) \geq -\varepsilon$ for some $y \in \partial\psi^2(x^s)$. Because $\psi^2(\cdot)$ is convex, $\psi^2(x) - \psi^2(x^s) \geq y(x - x^s) \geq -\varepsilon$. Also, $\psi^1(x) \geq \psi^1(x^s)$. Thus $\psi^* \leq \psi(x^B) \leq \psi^* + \varepsilon$. □
Remark 2.13 Step (v) is a subproblem involving the minimization of a concave function over $D^t$. When $D$ is a polytope, so is $D^t$, and algorithms exist for solving such problems (e.g. Glover and Klingman [62]; Falk and Hoffman [49]; Carino [23]). We note that in this case step (vi) adds cutting constraints which exclude the current solution, and for which the dual simplex method is useful (Hadley [65]).
Remark 2.14 If $D$ is the integral set of a polytope $\bar D$, then the subproblem takes an integer programming form, for which algorithms exist.
Remark 2.15 If $D$ is the set of vertices of a polytope $\bar D$, then except in special cases (e.g. when the vertices are integral, as in the assignment problem) some new algorithm is required.
CHAPTER 3

STOCHASTIC DOMINANCE

3.1 Introduction.