5.6 The modified subgradient method
In this section, we briefly present the modified subgradient method suggested by Gasimov [57], which can be applied to a large class of nonconvex and nonsmooth constrained optimization problems. This method is based on the construction of dual problems by using sharp Lagrangian functions and has some advantages; see Azimov and Gasimov [6], Gasimov [58], and Rockafellar [150]. Some of them are the following:
- The zero duality gap property is proved for a sufficiently large class of problems;
- The value of the dual function strongly increases at each iteration;
- The method does not use any penalty parameters;
- The presented method has a natural stopping criterion.
Now, we give the general principles of the modified subgradient method. Let X be any topological linear space, S ⊂ X be a certain subset of X, Y be a real normed space and Y* be its dual. Consider the primal mathematical programming problem defined as

(P)   Inf P = inf f(x)  subject to  g(x) = 0,  x ∈ S,

where f is a real-valued function defined on S and g is a mapping of S into Y. For every x ∈ X and y ∈ Y let

Φ(x, y) = f(x), if x ∈ S and g(x) = y;  Φ(x, y) = +∞, otherwise.    (5.93)

We define the augmented Lagrange function associated with problem (P) in the following form (see Azimov and Gasimov [6] and Rockafellar and Wets [153]):
L(x, u, c) = inf_{y ∈ Y} { Φ(x, y) + c‖y‖ − ⟨u, y⟩ }

for x ∈ X, u ∈ Y* and c ≥ 0. By using (5.93) we concretize the augmented Lagrangian associated with (P):
L(x, u, c) = f(x) + c‖g(x)‖ − ⟨u, g(x)⟩,    (5.94)

where x ∈ S, u ∈ Y* and c ≥ 0.
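For a concrete feel, the sharp Lagrangian (5.94) is easy to evaluate directly. The sketch below uses a small hypothetical instance of our own choosing (f, g and the test points are not from the text): minimize f(x) = x1² + x2² subject to the single equality constraint g(x) = x1 + x2 − 1 = 0, so here Y = R and ⟨u, y⟩ = u·y.

```python
# Sharp augmented Lagrangian (5.94) on a toy instance (our own choice of f, g):
# minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return x[0] + x[1] - 1.0  # scalar constraint, so Y = R and <u, y> = u*y

def L(x, u, c):
    # L(x, u, c) = f(x) + c*||g(x)|| - <u, g(x)>
    return f(x) + c * abs(g(x)) - u * g(x)

# At a feasible point the norm and inner-product terms vanish, so L equals f:
print(L((0.5, 0.5), u=3.0, c=7.0))   # 0.5, equal to f(0.5, 0.5)
# At an infeasible point the c-term penalizes the constraint violation:
print(L((0.0, 0.0), u=1.0, c=2.0))   # 0 + 2*1 - 1*(-1) = 3.0
```

Note that for feasible x the value of L is independent of u and c, which is what makes g(x_k) = 0 a natural stopping criterion later on.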
It is easy to show that

Inf P = inf_{x ∈ S} sup_{(u,c) ∈ Y* × R_+} L(x, u, c).
The dual function H is defined as

H(u, c) = inf_{x ∈ S} L(x, u, c)    (5.95)

for u ∈ Y* and c ≥ 0. Then, a dual problem of (P) is given by

(P*)   Sup P* = sup_{(u,c) ∈ Y* × R_+} H(u, c).
Any element (ū, c̄) ∈ Y* × R_+ with H(ū, c̄) = Sup P* is termed a solution of (P*).
Proofs of the following three theorems can be found in Gasimov [58].
Theorem 5.3. Suppose in (P) that f and g are continuous, S is compact and a feasible solution exists. Then Sup P* = Inf P and there exists a solution to (P*). Furthermore, in this case, the function H in (P*) is concave and finite everywhere on Y* × R_+, so this maximization problem is effectively unconstrained.
Theorem 5.4. Let Inf P = Sup P* and suppose that for some (ū, c̄) ∈ Y* × R_+,

L(x̄, ū, c̄) = inf_{x ∈ S} { f(x) + c̄‖g(x)‖ − ⟨ū, g(x)⟩ }.    (5.96)

Then x̄ is a solution to (P) and (ū, c̄) is a solution to (P*) if and only if

g(x̄) = 0.    (5.97)

When the assumptions of the theorems mentioned above are satisfied, maximizing the dual function H by using the subgradient method will give us the optimal value of the primal problem.
It will be convenient to introduce the following set:

S(u, c) = { x̄ ∈ S : x̄ minimizes f(x) + c‖g(x)‖ − ⟨u, g(x)⟩ over x ∈ S }.
Theorem 5.5. Let S be a nonempty compact set in R^n and let f and g be continuous, so that for any (u, c) ∈ R^m × R_+ the set S(u, c) is not empty. If x̄ ∈ S(u, c), then (−g(x̄), ‖g(x̄)‖) is a subgradient of H at (u, c).

Now we are able to present the algorithm of the modified subgradient method.
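Theorem 5.5 can be checked numerically. The sketch below uses a toy instance of our own choosing, with S discretized to a finite grid so that the infimum in (5.95) is attained exactly; since H is concave, the pair (−g(x̄), ‖g(x̄)‖) should satisfy the subgradient inequality H(u, c) ≤ H(u₀, c₀) + ⟨−g(x̄), u − u₀⟩ + ‖g(x̄)‖(c − c₀) at every test point.

```python
# Numerical check of Theorem 5.5 on a toy instance (our own choice):
# f(x) = x1^2 + x2^2, g(x) = x1 + x2 - 1, S = a finite grid in [-2, 2]^2.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return x[0] + x[1] - 1.0

grid = [round(-2.0 + 0.1 * i, 10) for i in range(41)]
S = [(a, b) for a in grid for b in grid]

def L(x, u, c):
    return f(x) + c * abs(g(x)) - u * g(x)

def H(u, c):
    # dual function (5.95); the infimum is an exact minimum on the finite S
    return min(L(x, u, c) for x in S)

u0, c0 = 0.0, 0.0
x_bar = min(S, key=lambda x: L(x, u0, c0))   # some x_bar in S(u0, c0)
sg = (-g(x_bar), abs(g(x_bar)))              # candidate subgradient of H

# Subgradient inequality of the concave function H at (u0, c0):
for (u, c) in [(0.5, 1.0), (-1.0, 0.5), (2.0, 3.0)]:
    assert H(u, c) <= H(u0, c0) + sg[0] * (u - u0) + sg[1] * (c - c0) + 1e-12
```

Because the discretized S is itself a nonempty compact set on which f and g are continuous, Theorem 5.5 applies to the discretized problem exactly, so the assertions are guaranteed by the theory rather than by luck.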
Algorithm.

Initialization Step. Choose a vector (u_1, c_1) with c_1 ≥ 0, let k = 1, and go to the main step.

Main Step.

Step 1. Given (u_k, c_k), solve the following subproblem:

min f(x) + c_k‖g(x)‖ − ⟨u_k, g(x)⟩  subject to  x ∈ S.

Let x_k be any solution. If g(x_k) = 0, then stop; (u_k, c_k) is a solution to the dual problem (P*) and x_k is a solution to the primal problem (P). Otherwise, go to Step 2.

Step 2. Let

u_{k+1} = u_k − s_k g(x_k),
c_{k+1} = c_k + (s_k + ε_k)‖g(x_k)‖,    (5.76)

where s_k and ε_k are positive scalar stepsizes; replace k by k + 1 and repeat Step 1. One of the stepsize formulas which can be used is

s_k = α_k (H̄_k − H(u_k, c_k)) / (5‖g(x_k)‖²),

where H̄_k is an approximation to the optimal dual value, 0 < α_k ≤ 2, and 0 < ε_k ≤ s_k.
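As a concrete sketch, the steps above can be run on the same toy instance used earlier (our own choice of f, g and a finite grid standing in for S; fixed stepsizes are used instead of the α_k-formula, which is admissible since any positive s_k and ε_k improve the dual value):

```python
# Modified subgradient method (sketch) on a toy instance (our own choice):
# minimize f(x) = x1^2 + x2^2 s.t. g(x) = x1 + x2 - 1 = 0, S = a finite grid.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return x[0] + x[1] - 1.0

grid = [round(-2.0 + 0.1 * i, 10) for i in range(41)]
S = [(a, b) for a in grid for b in grid]

def step1(u, c):
    # Step 1: minimize the sharp Lagrangian over S (brute force on the grid)
    return min(S, key=lambda x: f(x) + c * abs(g(x)) - u * g(x))

u, c = 0.0, 0.0            # Initialization Step: (u_1, c_1) with c_1 >= 0
s, eps = 0.5, 0.5          # any positive scalar stepsizes will do
for k in range(100):
    xk = step1(u, c)
    if abs(g(xk)) < 1e-9:  # natural stopping criterion: g(x_k) = 0
        break
    u = u - s * g(xk)                 # u_{k+1} = u_k - s_k g(x_k)
    c = c + (s + eps) * abs(g(xk))    # c_{k+1} = c_k + (s_k + eps_k)||g(x_k)||

print(xk, f(xk))   # reaches the constrained minimizer (0.5, 0.5) with f = 0.5
```

Note that no penalty parameter has to be driven to infinity by hand: c_k grows only as long as the current subproblem solution is infeasible, and the iteration stops as soon as g(x_k) = 0.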
The following theorem shows that, in contrast with the subgradient methods developed for dual problems formulated by using ordinary Lagrangians, the new iterate improves the cost for all values of the stepsizes s_k and ε_k.
Theorem 5.6. Suppose that the pair (u_k, c_k) ∈ R^m × R_+ is not a solution to the dual problem and x_k ∈ S(u_k, c_k). Then for a new iterate (u_{k+1}, c_{k+1}) calculated from (5.76), for all positive scalar stepsizes s_k and ε_k we have

0 < H(u_{k+1}, c_{k+1}) − H(u_k, c_k) ≤ (2s_k + ε_k)‖g(x_k)‖².
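The bound of Theorem 5.6 can be observed numerically on the earlier toy instance (again, the problem and stepsizes are our own illustrative choices): one update of (5.76) strictly increases the dual value, and the increase stays below (2s_k + ε_k)‖g(x_k)‖².

```python
# Check Theorem 5.6 for one iterate on the toy instance (our own choice):
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return x[0] + x[1] - 1.0

grid = [round(-2.0 + 0.1 * i, 10) for i in range(41)]
S = [(a, b) for a in grid for b in grid]

def L(x, u, c):
    return f(x) + c * abs(g(x)) - u * g(x)

def H(u, c):
    return min(L(x, u, c) for x in S)

u0, c0, s, eps = 0.0, 0.0, 0.5, 0.5
x0 = min(S, key=lambda x: L(x, u0, c0))   # x_k in S(u_k, c_k)
u1 = u0 - s * g(x0)                       # update (5.76)
c1 = c0 + (s + eps) * abs(g(x0))

gain = H(u1, c1) - H(u0, c0)
bound = (2 * s + eps) * g(x0) ** 2        # (2 s_k + eps_k) * ||g(x_k)||^2
assert 0 < gain <= bound                  # here gain = 0.5, bound = 1.5
```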
5.7 Defuzzification and solution of the defuzzified problem