
PDI&PDE-constrained optimization problems with
curvilinear functional quotients as objective vectors

Ariana Pitea, Constantin Udrişte and Ştefan Mititelu

Abstract. In this work we introduce and study the multitime multi-objective fractional variational problem of minimizing a vector of quotients of path independent curvilinear integral functionals (MFP) subject to certain partial differential equations (PDE) and/or partial differential inequations (PDI), using a geometrical language. The paper is organized as follows: §1 formulates a PDI&PDE-constrained optimization problem. §2 states and proves necessary conditions for the optimality of the problem (MP) of minimizing a vector of path independent curvilinear integral functionals constrained by PDIs and PDEs. §3 analyzes necessary efficiency conditions for the problem (MFP), and §4 studies different types of duality.

M.S.C. 2000: 49J40, 49K20, 58E17, 65K10, 53C65.
Key words: PDI&PDE constraints, multi-objective fractional variational problem, Pareto optimality, quasiinvexity, duality.

1  PDI&PDE-constrained optimization problem

Let (T, h) and (M, g) be Riemannian manifolds of dimensions p and n, respectively.
The local coordinates on T and M will be written t = (tα ) and x = (xi ), respectively.

Let J 1 (T, M ) be the first order jet bundle associated to T and M .
To develop our theory, we recall the following relations between two vectors $v=(v^j)$ and $w=(w^j)$, $j=\overline{1,a}$:

$$v=w \iff v^j=w^j;\qquad v\leqq w \iff v^j\le w^j;\qquad v\le w \iff v\leqq w,\ v\ne w,\qquad j=\overline{1,a}.$$

Throughout, $x_\gamma(t)=\dfrac{\partial x}{\partial t^\gamma}(t)$, $\gamma=\overline{1,p}$, are partial velocities and $\gamma_{t_0,t_1}$ is a piecewise $C^1$-class curve joining the points $t_0$ and $t_1$. The closed Lagrange 1-form densities $f_\alpha^\ell$ and $k_\alpha^\ell$ will be used to define certain quotients of curvilinear integral functionals. Also, we accept that the Lagrange matrix density

$$g=(g_a^b)\colon J^1(T,M)\to\mathbb{R}^{ms},\qquad a=\overline{1,s},\quad b=\overline{1,m},\quad m<n,$$

of $C^\infty$-class defines the partial differential inequations (PDI) (of evolution)

$$(1.1)\qquad g(t,x(t),x_\gamma(t))\leqq 0,\qquad t\in\Omega_{t_0,t_1},$$

and the Lagrange matrix density

$$h=(h_a^b)\colon J^1(T,M)\to\mathbb{R}^{qs},\qquad a=\overline{1,s},\quad b=\overline{1,q},\quad q<n,$$

defines the partial differential equations (PDE) (of evolution)

$$(1.2)\qquad h(t,x(t),x_\gamma(t))=0,\qquad t\in\Omega_{t_0,t_1}.$$

The purpose of this work is to study the multitime multi-objective fractional variational problem of minimizing a vector of quotients of path independent curvilinear functionals

$$\left(\frac{\displaystyle\int_{\gamma_{t_0,t_1}}f_\alpha^1(t,x(t),x_\gamma(t))\,dt^\alpha}{\displaystyle\int_{\gamma_{t_0,t_1}}k_\alpha^1(t,x(t),x_\gamma(t))\,dt^\alpha},\ \dots,\ \frac{\displaystyle\int_{\gamma_{t_0,t_1}}f_\alpha^r(t,x(t),x_\gamma(t))\,dt^\alpha}{\displaystyle\int_{\gamma_{t_0,t_1}}k_\alpha^r(t,x(t),x_\gamma(t))\,dt^\alpha}\right),$$

knowing that the function $x(t)$ satisfies the boundary conditions $x(t_0)=x_0$, $x(t_1)=x_1$, or $x(t)|_{\partial\Omega_{t_0,t_1}}=$ given, the partial differential inequations of evolution (1.1), and the partial differential equations of evolution (1.2). Such a problem is called a PDI&PDE-constrained optimization problem (see also [2]-[8], [10], [11], [19], [20]).
Introducing the notations

$$F^\ell(x(\cdot))=\int_{\gamma_{t_0,t_1}}f_\alpha^\ell(t,x(t),x_\gamma(t))\,dt^\alpha,\qquad K^\ell(x(\cdot))=\int_{\gamma_{t_0,t_1}}k_\alpha^\ell(t,x(t),x_\gamma(t))\,dt^\alpha,$$

the above-mentioned extremizing problem can be written


$$(MFP)\qquad\begin{cases}\min\limits_{x(\cdot)}\ \left(\dfrac{F^1(x(\cdot))}{K^1(x(\cdot))},\dots,\dfrac{F^r(x(\cdot))}{K^r(x(\cdot))}\right)\\[2mm] \text{subject to}\\ x(t_0)=x_0,\quad x(t_1)=x_1,\\ g(t,x(t),x_\gamma(t))\leqq 0,\quad t\in\Omega_{t_0,t_1},\\ h(t,x(t),x_\gamma(t))=0,\quad t\in\Omega_{t_0,t_1}.\end{cases}$$
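As a toy numerical sketch (our own illustration, not from the paper), the curvilinear objectives can be approximated by discretizing a curve $\gamma_{t_0,t_1}$ in $T=\mathbb{R}^2$ and summing $f_\alpha(t)\,\Delta t^\alpha$. The densities `f` and `k` below are hypothetical choices; `f` is a closed 1-form, so its integral depends only on the endpoints of the curve, which is the path independence that the problem (MFP) relies on.

```python
import numpy as np

def line_integral(form, curve):
    # Midpoint-rule approximation of the curvilinear integral
    # int_gamma f_alpha(t) dt^alpha along a polygonal curve.
    total = 0.0
    for a, b in zip(curve[:-1], curve[1:]):
        mid = 0.5 * (a + b)
        total += np.dot(form(mid), b - a)   # f_alpha(t) * Delta t^alpha
    return total

# Hypothetical densities: f = (t^2, t^1) is a closed 1-form,
# k = (1, 1) plays the role of the denominator density.
f = lambda t: np.array([t[1], t[0]])
k = lambda t: np.array([1.0, 1.0])

s = np.linspace(0.0, 1.0, 2001)
straight = np.stack([s, s], axis=1)   # diagonal curve from (0,0) to (1,1)
bent = np.stack([s, s**3], axis=1)    # a different curve, same endpoints

F1 = line_integral(f, straight)
F2 = line_integral(f, bent)
K1 = line_integral(k, straight)
print(F1, F2, F1 / K1)   # F1 and F2 both close to 1.0, quotient close to 0.5
```

Because `f` is closed, the two curves give (numerically) the same value of $F$, so the quotient objective is well defined by the endpoints alone.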

Let $C^\infty(\Omega_{t_0,t_1},M)$ be the space of all functions $x\colon\Omega_{t_0,t_1}\to M$ of $C^\infty$-class, with the norm

$$\|x\|=\|x\|_\infty+\sum_{\alpha=1}^{p}\|x_\alpha\|_\infty.$$

The set

$$\mathcal{F}(\Omega_{t_0,t_1})=\{x\in C^\infty(\Omega_{t_0,t_1},M)\mid x(t_0)=x_0,\ x(t_1)=x_1,\ g(t,x(t),x_\gamma(t))\leqq 0,\ h(t,x(t),x_\gamma(t))=0,\ t\in\Omega_{t_0,t_1}\}$$

is called the set of all feasible solutions of the problem (MFP).
Partial differential inequations/equations mathematically represent a multitude of natural phenomena, and, in turn, applications in science and engineering ubiquitously give rise to problems formulated as PDI&PDE-constrained optimization. The areas of research that strongly motivate PDI&PDE-constrained optimization include: shape optimization in fluid mechanics and medicine, material inversion in geophysics, data assimilation in regional weather prediction modelling, structural optimization, and optimal control of processes. PDI&PDE-constrained optimization problems are generally infinite dimensional in nature, large and complex. As a result, this class of optimization problems presents significant reasoning and computational challenges, many of which have been studied in recent years in Germany, the USA, Romania, etc. As computing power grows and optimization techniques become more advanced, one wonders whether there are enough commonalities among PDI&PDE-constrained optimization problems from different fields to develop reasoning methods and algorithms applicable to more than a single application. This question has been the topic of many papers, conferences and recent scientific grants.

The basic optimization problems of path independent curvilinear integrals with PDE constraints or with isoperimetric constraints, expressed by multiple integrals or path independent curvilinear integrals, were stated for the first time in our works [12]-[18]. The papers [15], [17], [18] focus on the multitime maximum principle in multitime optimal control problems.

2  Necessary conditions of optimality

In order to obtain necessary conditions for the optimality of the problem (MFP), we start with a vector of path independent curvilinear functionals,

$$F(x(\cdot))=\int_{\gamma_{t_0,t_1}}f_\alpha(t,x(t),x_\gamma(t))\,dt^\alpha=\left(\int_{\gamma_{t_0,t_1}}f_\alpha^1(t,x(t),x_\gamma(t))\,dt^\alpha,\ \dots,\ \int_{\gamma_{t_0,t_1}}f_\alpha^r(t,x(t),x_\gamma(t))\,dt^\alpha\right)=(F^1(x(\cdot)),\dots,F^r(x(\cdot))),$$



and we formulate a simplified PDI&PDE-constrained minimum problem

$$(MP)\qquad\begin{cases}\min\limits_{x(\cdot)}\ F(x(\cdot))\\ \text{subject to}\\ x(t_0)=x_0,\quad x(t_1)=x_1,\\ g(t,x(t),x_\gamma(t))\leqq 0,\quad t\in\Omega_{t_0,t_1},\\ h(t,x(t),x_\gamma(t))=0,\quad t\in\Omega_{t_0,t_1}.\end{cases}$$

We are interested in finding necessary conditions for optimality, respectively efficiency conditions, for the problem (MP) in the domain $\mathcal{F}(\Omega_{t_0,t_1})$.

Definition 2.1. A feasible solution $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is called an efficient point for the program (MP) if and only if for any feasible solution $x(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$, the inequality $F(x(\cdot))\leqq F(x^\circ(\cdot))$ implies the equality $F(x(\cdot))=F(x^\circ(\cdot))$.
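Definition 2.1 can be checked mechanically on a finite sample of objective vectors. The sketch below (Python, with toy data of our own) implements the componentwise order $\leqq$ and tests whether a candidate value vector is only $\leqq$-dominated by vectors equal to it:

```python
def leqq(v, w):
    # Componentwise order v <= w used in Definition 2.1.
    return all(vi <= wi for vi, wi in zip(v, w))

def is_efficient(candidate, values):
    # Efficient: F(x) <= F(x°) componentwise forces F(x) = F(x°).
    Fc = values[candidate]
    return all(v == Fc for v in values if leqq(v, Fc))

# Toy objective vectors F(x) over a finite feasible sample.
values = [(1.0, 2.0), (2.0, 1.0), (2.0, 2.0), (1.5, 1.5)]
print([is_efficient(i, values) for i in range(len(values))])
# -> [True, True, False, True]: only (2.0, 2.0) is dominated
```

The incomparable vectors (1.0, 2.0), (2.0, 1.0) and (1.5, 1.5) are all efficient, illustrating that Pareto efficiency is in general not a single optimum.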
To analyze the previous problem, we start with the case of a single functional. Let $s=(s_\alpha)\colon J^1(\Omega_{t_0,t_1},M)\to\mathbb{R}^p$ be a closed Lagrange 1-form density of $C^\infty$-class which produces the action $S(x(\cdot))=\displaystyle\int_{\gamma_{t_0,t_1}}s_\beta(t,x(t),x_\gamma(t))\,dt^\beta$. Consider the following PDI&PDE-constrained variational problem

$$(SP)\qquad\begin{cases}\min\limits_{x(\cdot)}\ S(x(\cdot))\\ \text{subject to}\\ g(t,x(t),x_\gamma(t))\leqq 0,\quad t\in\Omega_{t_0,t_1},\\ h(t,x(t),x_\gamma(t))=0,\quad t\in\Omega_{t_0,t_1}.\end{cases}$$

We define the auxiliary Lagrange density 1-form $L=(L_\alpha)$ as

$$L_\alpha(t,x(t),x_\gamma(t),\lambda,\mu(t),\nu(t))=\lambda s_\alpha(t,x(t),x_\gamma(t))+\langle\mu_\alpha(t),g(t,x(t),x_\gamma(t))\rangle+\langle\nu_\alpha(t),h(t,x(t),x_\gamma(t))\rangle,\qquad\alpha=\overline{1,p},$$

where $\lambda$ is a real number and $\mu(t)=(\mu_\alpha(t))=(\mu_{\alpha b}^{a}(t))$, $\nu(t)=(\nu_\alpha(t))=(\nu_{\alpha b}^{a}(t))$ are Lagrange multipliers subject to the condition that the 1-form $L=(L_\alpha)$ is closed. Extending the results in [15], [18], the necessary conditions for the optimality of a feasible solution $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ in the problem (SP) are

$$\begin{cases}\dfrac{\partial L_\alpha}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))-D_\gamma\dfrac{\partial L_\alpha}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))=0,\quad\alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle\mu_\alpha(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \mu_\alpha(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}.\end{cases}$$

Definition 2.2. If $\lambda\ne 0$, the optimal feasible solution $x^\circ(\cdot)$ of the problem (SP) is called normal.

Without loss of generality, if $x^\circ(\cdot)$ is an optimal normal solution of the problem (SP), we can assume that $\lambda=1$.

The following theorem describes the previous necessary optimality conditions in the language of [4], [15], [18], [19].


Theorem 2.3. Let $s=(s_\alpha)$ be a closed 1-form of $C^\infty$-class. If $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is a normal optimal solution of the problem (SP), then there exist the multipliers $\lambda$, $\mu(t)$, $\nu(t)$ satisfying the following conditions:

$$(VC)\qquad\begin{cases}\lambda\dfrac{\partial s_\alpha}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\mu_\alpha(t),\dfrac{\partial g}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu_\alpha(t),\dfrac{\partial h}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left[\lambda\dfrac{\partial s_\alpha}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\mu_\alpha(t),\dfrac{\partial g}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu_\alpha(t),\dfrac{\partial h}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right]=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle\mu_\alpha(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \mu_\alpha(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] [\lambda=1].\end{cases}$$
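The system (VC) is the variational analogue of the classical Karush-Kuhn-Tucker conditions. As a finite-dimensional sketch (our own toy problem, not taken from the paper), the following verifies stationarity, complementary slackness and multiplier nonnegativity at a known optimum with the normality choice λ = 1:

```python
# Toy problem: minimize s(x) = x1^2 + x2^2
# subject to g(x) = 1 - x1 <= 0 and h(x) = x2 = 0; optimum x° = (1, 0).
x = (1.0, 0.0)

grad_s = (2 * x[0], 2 * x[1])   # gradient of s at x°
grad_g = (-1.0, 0.0)            # gradient of g
grad_h = (0.0, 1.0)             # gradient of h

lam, mu, nu = 1.0, 2.0, 0.0     # λ = 1 (normal), µ and ν solve stationarity

# Stationarity: λ∇s + µ∇g + ν∇h = 0 (analogue of the Euler-Lagrange line).
stationarity = tuple(lam * a + mu * b + nu * c
                     for a, b, c in zip(grad_s, grad_g, grad_h))

g_val = 1.0 - x[0]              # the inequality constraint is active: g(x°) = 0
print(stationarity, mu * g_val, mu >= 0)   # (0.0, 0.0) 0.0 True
```

The active constraint carries a positive multiplier, while the inactive case would force µ = 0 through complementary slackness ⟨µ, g⟩ = 0.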

Now we turn back to the vector problem (MP). To develop our theory further, we need the result in the following lemma (for the single-time case, see [3]).

Lemma 2.4. The function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is an efficient solution of the problem (MP) if and only if $x^\circ(\cdot)$ is an optimal solution of each scalar problem $P_\ell(x^\circ(\cdot))$, $\ell=\overline{1,r}$, where

$$P_\ell(x^\circ)\qquad\begin{cases}\min\limits_{x(\cdot)}\ F^\ell(x(\cdot))\\ \text{subject to}\\ x(t_0)=x_0,\quad x(t_1)=x_1,\\ g(t,x(t),x_\gamma(t))\leqq 0,\quad t\in\Omega_{t_0,t_1},\\ h(t,x(t),x_\gamma(t))=0,\quad t\in\Omega_{t_0,t_1},\\ F^j(x(\cdot))\le F^j(x^\circ(\cdot)),\quad j=\overline{1,r},\ j\ne\ell.\end{cases}$$

Proof. In order to prove the direct implication, we suppose that the function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is an efficient solution of the problem (MP) and that there is $k\in\{1,\dots,r\}$ such that $x^\circ(\cdot)$ is not an optimal solution of the scalar problem $P_k(x^\circ(\cdot))$. Then there exists a function $y(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ such that

$$F^j(y(\cdot))\le F^j(x^\circ(\cdot)),\quad j=\overline{1,r},\ j\ne k;\qquad F^k(y(\cdot))<F^k(x^\circ(\cdot)).$$

These relations contradict the efficiency of the function $x^\circ(\cdot)$ for the problem (MP). Consequently, the point $x^\circ(\cdot)$ is an optimal solution for each program $P_\ell(x^\circ(\cdot))$, $\ell=\overline{1,r}$.

Conversely, let us consider that the function $x^\circ(\cdot)$ is an optimal solution of all problems $P_\ell(x^\circ(\cdot))$, $\ell=\overline{1,r}$. Suppose that $x^\circ(\cdot)$ is not an efficient solution of the problem (MP). Then there exists a function $y(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ such that $F^j(y(\cdot))\le F^j(x^\circ(\cdot))$, $j=\overline{1,r}$, and there is $k\in\{1,\dots,r\}$ such that $F^k(y(\cdot))<F^k(x^\circ(\cdot))$. This contradicts the assumption that the function $x^\circ(\cdot)$ minimizes the functional $F^k(x(\cdot))$ on the set of all feasible solutions of problem $P_k(x^\circ(\cdot))$. Therefore, the function $x^\circ(\cdot)$ is an efficient solution of the problem (MP). $\square$


Lemma 2.5. Let $\ell$ be fixed between 1 and $r$. If the function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is a [normal] optimal solution of the scalar problem $P_\ell(x^\circ(\cdot))$, then there exist the real vectors $(\lambda_{j\ell})$, $j=\overline{1,r}$, and the matrix functions $\mu_\ell$, $\nu_\ell$, such that the following conditions are satisfied:

$$\begin{cases}\lambda_{j\ell}\dfrac{\partial f_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\mu_{\ell\alpha}(t),\dfrac{\partial g}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu_{\ell\alpha}(t),\dfrac{\partial h}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left[\lambda_{j\ell}\dfrac{\partial f_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\mu_{\ell\alpha}(t),\dfrac{\partial g}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu_{\ell\alpha}(t),\dfrac{\partial h}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right]=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle\mu_{\ell\alpha}(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \mu_{\ell\alpha}(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \lambda_{j\ell}\ge 0\quad[\lambda_{\ell\ell}=1].\end{cases}$$

For a proof, see [9].
Definition 2.6. The function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is called a normal efficient solution of the problem (MP) if it is a normal optimal solution for at least one of the problems $P_\ell(x^\circ(\cdot))$, $\ell=\overline{1,r}$.

The main result of this section follows.
Theorem 2.7. If $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is a normal efficient solution of the problem (MP), then there exist a vector $\lambda^\circ\in\mathbb{R}^r$ and the smooth matrix functions $\mu^\circ(t)=(\mu^\circ_\alpha(t))$, $\nu^\circ(t)=(\nu^\circ_\alpha(t))$ which satisfy the following conditions:

$$(MV)\qquad\begin{cases}\left\langle\lambda^\circ,\dfrac{\partial f_\alpha}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\mu^\circ_\alpha(t),\dfrac{\partial g}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu^\circ_\alpha(t),\dfrac{\partial h}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left[\left\langle\lambda^\circ,\dfrac{\partial f_\alpha}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\mu^\circ_\alpha(t),\dfrac{\partial g}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu^\circ_\alpha(t),\dfrac{\partial h}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right]=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle\mu^\circ_\alpha(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \mu^\circ_\alpha(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \lambda^\circ\geqq 0,\quad\langle e,\lambda^\circ\rangle=1,\ e=(1,\dots,1)\in\mathbb{R}^r.\end{cases}$$

Proof. If the function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is a [normal] efficient solution of the problem (MP), then, according to Lemma 2.4, the point $x^\circ(\cdot)$ is a normal optimal solution of each scalar problem $P_\ell(x^\circ(\cdot))$, $\ell=\overline{1,r}$. According to Lemma 2.5, there exist the matrix $(\lambda_{j\ell})$, $j,\ell=\overline{1,r}$, and the functions $\mu_{\ell\alpha}$, $\nu_{\ell\alpha}$, satisfying the following conditions:



$$(FV)_\ell\qquad\begin{cases}\lambda_{j\ell}\dfrac{\partial f_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\mu_{\ell\alpha}(t),\dfrac{\partial g}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu_{\ell\alpha}(t),\dfrac{\partial h}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left[\lambda_{j\ell}\dfrac{\partial f_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\mu_{\ell\alpha}(t),\dfrac{\partial g}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu_{\ell\alpha}(t),\dfrac{\partial h}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right]=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle\mu_{\ell\alpha}(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \mu_{\ell\alpha}(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \lambda_{j\ell}\ge 0\quad[\lambda_{\ell\ell}=1].\end{cases}$$
Making the sum of all relations $(FV)_\ell$ from $\ell=1$ to $\ell=r$ and denoting

$$\Lambda_j=\sum_{\ell=1}^{r}\lambda_{j\ell},\qquad M_\alpha(t)=\sum_{\ell=1}^{r}\mu_{\ell\alpha}(t),\qquad N_\alpha(t)=\sum_{\ell=1}^{r}\nu_{\ell\alpha}(t),$$

the following relations are obtained:

$$(FV)\qquad\begin{cases}\Lambda_j\dfrac{\partial f_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle M_\alpha(t),\dfrac{\partial g}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle N_\alpha(t),\dfrac{\partial h}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left[\Lambda_j\dfrac{\partial f_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle M_\alpha(t),\dfrac{\partial g}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle N_\alpha(t),\dfrac{\partial h}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right]=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle M_\alpha(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] M_\alpha(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \Lambda_j\ge 0,\quad j=\overline{1,r}.\end{cases}$$
We divide the relations (FV) by $S=\displaystyle\sum_{j=1}^{r}\Lambda_j\ge 1$ and we denote

$$\Lambda^\circ_j=\frac{\Lambda_j}{S},\qquad M^\circ_\alpha(t)=\frac{M_\alpha(t)}{S},\qquad N^\circ_\alpha(t)=\frac{N_\alpha(t)}{S}.$$

Thus we obtain the relations from the statement. $\square$
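Numerically, the summation over $\ell$ followed by division by $S$ is a convex-combination normalization of the multipliers. A sketch with toy numbers of our own ($r = 3$; the diagonal entries equal 1 by normality):

```python
import numpy as np

# lam[j, l] plays the role of λ_{jℓ}: column ℓ collects the multipliers of
# the scalar problem P_ℓ; the diagonal λ_{ℓℓ} = 1 reflects normality.
lam = np.array([[1.0, 0.2, 0.0],
                [0.5, 1.0, 0.3],
                [0.0, 0.4, 1.0]])

Lambda = lam.sum(axis=1)     # Λ_j = Σ_ℓ λ_{jℓ}
S = Lambda.sum()             # S = Σ_j Λ_j >= 1 since each λ_{ℓℓ} = 1
Lambda0 = Lambda / S         # Λ°_j = Λ_j / S

print(S, Lambda0, Lambda0.sum())   # weights are nonnegative and sum to 1
```

The rescaled weights satisfy exactly the final line of (MV): Λ° ≧ 0 and ⟨e, Λ°⟩ = 1.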

3  Necessary efficiency conditions for the problem (MFP)

Consider $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ a feasible solution of the problem (MFP) and, for each index $j$ between 1 and $r$, introduce the real number

$$R^\circ_j(x^\circ(\cdot))=\frac{F^j(x^\circ(\cdot))}{K^j(x^\circ(\cdot))}.$$


For a fixed index $\ell$, we consider the following pair of extremizing problems:

$$(FPR)_\ell\qquad\begin{cases}\min\limits_{x(\cdot)}\ \dfrac{F^\ell(x(\cdot))}{K^\ell(x(\cdot))}\\ \text{subject to}\\ x(t_0)=x_0,\quad x(t_1)=x_1,\\ g(t,x(t),x_\gamma(t))\leqq 0,\quad t\in\Omega_{t_0,t_1},\\ h(t,x(t),x_\gamma(t))=0,\quad t\in\Omega_{t_0,t_1},\\ F^j(x(\cdot))-R^\circ_j(x^\circ(\cdot))K^j(x(\cdot))\leqq 0,\quad j=\overline{1,r},\ j\ne\ell,\end{cases}$$

$$(SPR)_\ell\qquad\begin{cases}\min\limits_{x(\cdot)}\ F^\ell(x(\cdot))-R^\circ_\ell(x^\circ(\cdot))K^\ell(x(\cdot))\\ \text{subject to}\\ x(t_0)=x_0,\quad x(t_1)=x_1,\\ g(t,x(t),x_\gamma(t))\leqq 0,\quad t\in\Omega_{t_0,t_1},\\ h(t,x(t),x_\gamma(t))=0,\quad t\in\Omega_{t_0,t_1},\\ F^j(x(\cdot))-R^\circ_j(x^\circ(\cdot))K^j(x(\cdot))\leqq 0,\quad j=\overline{1,r},\ j\ne\ell.\end{cases}$$

Following the statements of Jagannathan [3], we have:

Lemma 3.1. The function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is optimal in $(FPR)_\ell$ if and only if it is optimal in $(SPR)_\ell$, $\ell=\overline{1,r}$.
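Lemma 3.1 has a simple numerical reading on a finite feasible sample (toy data of our own, scalar case): the minimizer of the quotient $F/K$ also minimizes the parametric objective $F-R^\circ K$, where $R^\circ$ is the optimal ratio, and the optimal parametric value is 0.

```python
# Toy values of F and K over a finite feasible sample (K > 0).
F = [3.0, 4.0, 6.0]
K = [2.0, 4.0, 3.0]

ratios = [f / k for f, k in zip(F, K)]
i_star = min(range(len(F)), key=lambda i: ratios[i])    # minimizer of F/K
R0 = ratios[i_star]                                     # optimal ratio R°

parametric = [f - R0 * k for f, k in zip(F, K)]         # parametric objective
j_star = min(range(len(F)), key=lambda j: parametric[j])

print(i_star, j_star, parametric[i_star])   # same minimizer; optimal value 0.0
```

This is the same fractional-to-parametric reduction that turns each problem $(FPR)_\ell$ into the linearized problem $(SPR)_\ell$.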
Using Lemma 2.4 and Lemma 3.1, we can formulate:

Theorem 3.2. The function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is an efficient solution of the problem (MFP) if and only if it is an optimal solution of each problem $(SPR)_\ell$, $\ell=\overline{1,r}$.

Proof. We prove the two implications in turn.

Necessity. Suppose that the function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is efficient for the problem (MFP). Then it is optimal for each problem $(FPR)_\ell$, $\ell=\overline{1,r}$, according to Lemma 2.4, and hence, according to Lemma 3.1, it is optimal for each problem $(SPR)_\ell$, $\ell=\overline{1,r}$.

Sufficiency. Suppose that the function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is optimal for each problem $(SPR)_\ell$, $\ell=\overline{1,r}$. Then, according to Lemma 3.1, it is optimal for each problem $(FPR)_\ell$, $\ell=\overline{1,r}$, and therefore, according to Lemma 2.4, it is efficient for the problem (MFP). $\square$

Remark. The function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ is a normal efficient solution of the problem (MFP) if it is a normal optimal solution for at least one of the scalar problems $(FPR)_\ell$, $\ell=\overline{1,r}$.
Consider $\lambda=(\lambda^1,\dots,\lambda^r)\in\mathbb{R}^r$ and the matrix functions $\mu\colon\Omega_{t_0,t_1}\to\mathbb{R}^{msp}$, $\nu\colon\Omega_{t_0,t_1}\to\mathbb{R}^{qsp}$ such that the auxiliary Lagrange 1-form $L=(L_\alpha)$,

$$L_\alpha(t,x(t),x_\gamma(t),\lambda,\mu(t),\nu(t))=\lambda^j\left(f_\alpha^j(t,x(t),x_\gamma(t))-R^\circ_j(x^\circ(\cdot))\,k_\alpha^j(t,x(t),x_\gamma(t))\right)+\langle\mu_\alpha(t),g(t,x(t),x_\gamma(t))\rangle+\langle\nu_\alpha(t),h(t,x(t),x_\gamma(t))\rangle,\qquad\alpha=\overline{1,p},$$


be closed. Having in mind the background introduced above, we can state the main
results of this section. First of all, we shall introduce our necessary efficiency conditions.
Theorem 3.3 (Necessary efficiency conditions). Let the function $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ be a normal efficient solution of the problem (MFP). Then there exist $\Lambda^{1\circ},\Lambda^{2\circ}\in\mathbb{R}^r$ and the smooth functions $M^\circ\colon\Omega_{t_0,t_1}\to\mathbb{R}^{msp}$, $N^\circ\colon\Omega_{t_0,t_1}\to\mathbb{R}^{qsp}$, such that

$$(MFV)\qquad\begin{cases}\Lambda^{1\circ}_j\dfrac{\partial f_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))-\Lambda^{2\circ}_j\dfrac{\partial k_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle M^\circ_\alpha(t),\dfrac{\partial g}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle N^\circ_\alpha(t),\dfrac{\partial h}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left\{\Lambda^{1\circ}_j\dfrac{\partial f_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))-\Lambda^{2\circ}_j\dfrac{\partial k_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle M^\circ_\alpha(t),\dfrac{\partial g}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle N^\circ_\alpha(t),\dfrac{\partial h}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right\}=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle M^\circ_\alpha(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] M^\circ_\alpha(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \Lambda^{1\circ}\geqq 0,\quad\langle e,\Lambda^{1\circ}\rangle=1,\ e=(1,\dots,1)\in\mathbb{R}^r.\end{cases}$$
Proof. There are $\lambda^1_{j\ell}$, $\lambda^2_{j\ell}$, $j=\overline{1,r}$, and the functions $\mu_{\ell\alpha}(t)$, $\nu_{\ell\alpha}(t)$, such that

$$(FV)_\ell\qquad\begin{cases}\lambda^1_{j\ell}\dfrac{\partial f_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))-\lambda^2_{j\ell}\dfrac{\partial k_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\mu_{\ell\alpha}(t),\dfrac{\partial g}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu_{\ell\alpha}(t),\dfrac{\partial h}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left\{\lambda^1_{j\ell}\dfrac{\partial f_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))-\lambda^2_{j\ell}\dfrac{\partial k_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\mu_{\ell\alpha}(t),\dfrac{\partial g}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\nu_{\ell\alpha}(t),\dfrac{\partial h}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right\}=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle\mu_{\ell\alpha}(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \mu_{\ell\alpha}(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \lambda_{j\ell}\ge 0\quad[\lambda_{\ell\ell}=1].\end{cases}$$
We make the sum of all relations $(FV)_\ell$ over $\ell=\overline{1,r}$ and, denoting

$$\bar\Lambda^1_j=\sum_{\ell=1}^{r}\lambda^1_{j\ell},\qquad\bar\Lambda^2_j=\sum_{\ell=1}^{r}\lambda^2_{j\ell},\qquad\bar M_\alpha(t)=\sum_{\ell=1}^{r}\mu_{\ell\alpha}(t),\qquad\bar N_\alpha(t)=\sum_{\ell=1}^{r}\nu_{\ell\alpha}(t),$$

we obtain


$$(FV)\qquad\begin{cases}\bar\Lambda^1_j\dfrac{\partial f_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))-\bar\Lambda^2_j\dfrac{\partial k_\alpha^j}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\bar M_\alpha(t),\dfrac{\partial g}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\bar N_\alpha(t),\dfrac{\partial h}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left\{\bar\Lambda^1_j\dfrac{\partial f_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))-\bar\Lambda^2_j\dfrac{\partial k_\alpha^j}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))+\left\langle\bar M_\alpha(t),\dfrac{\partial g}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle\bar N_\alpha(t),\dfrac{\partial h}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right\}=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p}\quad\text{(Euler--Lagrange PDEs)}\\[1mm] \langle\bar M_\alpha(t),g(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \bar M_\alpha(t)\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \bar\Lambda^1_j\ge 0.\end{cases}$$

Dividing the relations (FV) by $S=\displaystyle\sum_{j=1}^{r}\bar\Lambda^1_j\ge 1$ and denoting

$$\Lambda^{1\circ}_j=\frac{\bar\Lambda^1_j}{S},\qquad\Lambda^{2\circ}_j=\frac{\bar\Lambda^2_j}{S},\qquad M^\circ_\alpha(t)=\frac{\bar M_\alpha(t)}{S},\qquad N^\circ_\alpha(t)=\frac{\bar N_\alpha(t)}{S},$$

the relations (FV) take the form (MFV). $\square$

4  A dual program theory

Let $\rho$ be a real number and $b\colon C^\infty(\Omega_{t_0,t_1},M)\times C^\infty(\Omega_{t_0,t_1},M)\to[0,\infty)$ a functional. Let $a=(a_\alpha)$ be a closed Lagrange 1-form. We associate to it the path independent curvilinear functional $A(x(\cdot))=\displaystyle\int_{\gamma_{t_0,t_1}}a_\alpha(t,x(t),x_\gamma(t))\,dt^\alpha$. The definition of quasiinvexity (see also [2], [7], [10], [11], [20]) helps us to state the results included in this section.
Definition 4.1. The functional $A$ is called [strictly] $(\rho,b)$-quasiinvex at the point $x^\circ(\cdot)$ if there is a vector function $\eta\colon J^1(\Omega_{t_0,t_1},M)\times J^1(\Omega_{t_0,t_1},M)\to\mathbb{R}^n$, vanishing at the point $(t,x^\circ(t),x^\circ_\gamma(t),x^\circ(t),x^\circ_\gamma(t))$, and the functional $\theta\colon C^\infty(\Omega_{t_0,t_1},M)\times C^\infty(\Omega_{t_0,t_1},M)\to\mathbb{R}^n$, such that for any $x(\cdot)$ $[x(\cdot)\ne x^\circ(\cdot)]$, the following implication holds:

$$\big(A(x(\cdot))\leqq A(x^\circ(\cdot))\big)\Longrightarrow b(x(\cdot),x^\circ(\cdot))\int_{\gamma_{t_0,t_1}}\left[\left\langle\eta(t,x(t),x_\gamma(t),x^\circ(t),x^\circ_\gamma(t)),\dfrac{\partial a_\alpha}{\partial x}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle+\left\langle D_\gamma\eta(t,x(t),x_\gamma(t),x^\circ(t),x^\circ_\gamma(t)),\dfrac{\partial a_\alpha}{\partial x_\gamma}(t,x^\circ(t),x^\circ_\gamma(t))\right\rangle\right]dt^\alpha\ \leqq\,[<]\ -\rho\,b(x(\cdot),x^\circ(\cdot))\,\|\theta(x(\cdot),x^\circ(\cdot))\|^2.$$

We associate to (MFP) the following dual problem:

$$(MFD)\qquad\begin{cases}\max\ \delta(y(\cdot),y_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu(\cdot),\nu(\cdot))\\ \text{subject to}\\ \Lambda^{1\circ}_\ell\dfrac{\partial f_\alpha^\ell}{\partial y}(t,y(t),y_\gamma(t))-\Lambda^{2\circ}_\ell\dfrac{\partial k_\alpha^\ell}{\partial y}(t,y(t),y_\gamma(t))+\left\langle\mu_\alpha(t),\dfrac{\partial g}{\partial y}(t,y(t),y_\gamma(t))\right\rangle+\left\langle\nu_\alpha(t),\dfrac{\partial h}{\partial y}(t,y(t),y_\gamma(t))\right\rangle\\[1mm] \quad-D_\gamma\left\{\Lambda^{1\circ}_\ell\dfrac{\partial f_\alpha^\ell}{\partial y_\gamma}(t,y(t),y_\gamma(t))-\Lambda^{2\circ}_\ell\dfrac{\partial k_\alpha^\ell}{\partial y_\gamma}(t,y(t),y_\gamma(t))+\left\langle\mu_\alpha(t),\dfrac{\partial g}{\partial y_\gamma}(t,y(t),y_\gamma(t))\right\rangle+\left\langle\nu_\alpha(t),\dfrac{\partial h}{\partial y_\gamma}(t,y(t),y_\gamma(t))\right\rangle\right\}=0,\\[1mm] \quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \langle\mu_\alpha(t),g(t,y(t),y_\gamma(t))\rangle+\langle\nu_\alpha(t),h(t,y(t),y_\gamma(t))\rangle\geqq 0,\quad t\in\Omega_{t_0,t_1},\ \alpha=\overline{1,p},\\[1mm] \Lambda^{1\circ}\geqq 0,\quad\langle e,\Lambda^{1\circ}\rangle=1,\ e=(1,\dots,1)\in\mathbb{R}^r.\end{cases}$$

To formulate our original results, we use the minimizing functional vector $\pi(x(\cdot))$ of the problem (MFP) at the point $x(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ and the maximizing functional vector $\delta(y(\cdot),y_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu(\cdot),\nu(\cdot))$ of the dual problem (MFD) at $(y(\cdot),y_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu(\cdot),\nu(\cdot))\in\Delta$, where $\Delta$ is the domain of the problem (MFD).
Theorem 4.2 (Weak duality). Let $x^\circ(\cdot)$ be a feasible solution of the problem (MFP) and $y(\cdot)$ be a normal efficient solution of the dual problem (MFD). Assume that the following conditions are fulfilled:

a) $\Lambda^{1\circ}_\ell>0$, $\Lambda^{2\circ}_\ell>0$, $\ell=\overline{1,r}$, and $\Lambda^{1\circ}_\ell F^\ell(y(\cdot))-\Lambda^{2\circ}_\ell K^\ell(y(\cdot))=0$;

b) for any $\ell=\overline{1,r}$, the functional $F^\ell(x(\cdot))$ is $(\rho'_\ell,b)$-quasiinvex at the point $y(\cdot)$ and $-K^\ell(x(\cdot))$ is $(\rho''_\ell,b)$-quasiinvex at the point $y(\cdot)$, with respect to $\eta$ and $\theta$;

c) the functional $\displaystyle\int_{\gamma_{t_0,t_1}}[\langle\mu_\alpha(t),g(t,x(t),x_\gamma(t))\rangle+\langle\nu_\alpha(t),h(t,x(t),x_\gamma(t))\rangle]\,dt^\alpha$ is $(\rho''',b)$-quasiinvex at $y(\cdot)$ with respect to $\eta$ and $\theta$;

d) one of the functionals of b), c) is strictly quasiinvex;

e) $\rho'_\ell\Lambda^{1\circ}_\ell+\rho''_\ell\Lambda^{2\circ}_\ell+\rho'''\geqq 0$.

Then the inequality $\pi(x^\circ(\cdot))\le\delta(y(\cdot),y_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu(\cdot),\nu(\cdot))$ is false.

The proof will be given in a further paper (see also [9]).
Theorem 4.3 (Direct duality). Let $x^\circ(\cdot)\in\mathcal{F}(\Omega_{t_0,t_1})$ be a normal efficient solution of (MFP) and suppose that the hypotheses of Theorem 4.2 are satisfied. Then there are the vectors $\Lambda^{1\circ},\Lambda^{2\circ}\in\mathbb{R}^r$ and the smooth functions $\mu^\circ\colon\Omega_{t_0,t_1}\to\mathbb{R}^{msp}$, $\nu^\circ\colon\Omega_{t_0,t_1}\to\mathbb{R}^{qsp}$ such that $(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$ is an efficient solution of the dual (MFD) and $\pi(x^\circ(\cdot))=\delta(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$.
Proof. We take into account that the point $x^\circ(\cdot)$ is a normal efficient solution of the problem (MFP). Therefore, according to Theorem 3.3, there are $\Lambda^{1\circ},\Lambda^{2\circ}\in\mathbb{R}^r$ and the smooth functions $\mu^\circ\colon\Omega_{t_0,t_1}\to\mathbb{R}^{msp}$, $\nu^\circ\colon\Omega_{t_0,t_1}\to\mathbb{R}^{qsp}$ satisfying the relations (MFV). Also, $\langle\nu^\circ_\alpha(t),h(t,x^\circ(t),x^\circ_\gamma(t))\rangle=0$, $\alpha=\overline{1,p}$. Hence $(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))\in\Delta$ and $\pi(x^\circ(\cdot))=\delta(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$. Therefore, the statements are proved. $\square$
Theorem 4.4 (Converse duality). Let $(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$ be an efficient solution of the dual problem (MFD). Suppose the following conditions are fulfilled:

a) $\bar x(\cdot)$ is a normal efficient solution of the primal problem (MFP);

b) for any $\ell=\overline{1,r}$, $F^\ell(x^\circ(\cdot))>0$, $K^\ell(x^\circ(\cdot))>0$, $\Lambda^{1\circ}_\ell F^\ell(x^\circ(\cdot))-\Lambda^{2\circ}_\ell K^\ell(x^\circ(\cdot))=0$;

c) for any $\ell=\overline{1,r}$, $F^\ell(x(\cdot))$ is $(\rho'_\ell,b)$-quasiinvex at the point $x^\circ(\cdot)$ and $-K^\ell(x(\cdot))$ is $(\rho''_\ell,b)$-quasiinvex at the point $x^\circ(\cdot)$, with respect to $\eta$ and $\theta$;

d) the functional $\displaystyle\int_{\gamma_{t_0,t_1}}[\langle\mu_\alpha(t),g(t,x(t),x_\gamma(t))\rangle+\langle\nu_\alpha(t),h(t,x(t),x_\gamma(t))\rangle]\,dt^\alpha$ is $(\rho''',b)$-quasiinvex at the point $x^\circ(\cdot)$ with respect to $\eta$ and $\theta$;

e) one of the functionals of c), d) is strictly $(\rho'_\ell,b)$-, $(\rho''_\ell,b)$- or $(\rho''',b)$-quasiinvex with respect to $\eta$ and $\theta$, respectively;

f) $\rho'_\ell\Lambda^{1\circ}_\ell+\rho''_\ell\Lambda^{2\circ}_\ell+\rho'''\geqq 0$.

Then $\bar x(\cdot)=x^\circ(\cdot)$ and, moreover, $\pi(x^\circ(\cdot))=\delta(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$.
Proof. Let us suppose that $\bar x(\cdot)\ne x^\circ(\cdot)$. According to Theorem 3.3, there are the vectors $\bar\Lambda^{1\circ},\bar\Lambda^{2\circ}\in\mathbb{R}^r$ and the functions $\bar\mu\colon\Omega_{t_0,t_1}\to\mathbb{R}^{msp}$, $\bar\nu\colon\Omega_{t_0,t_1}\to\mathbb{R}^{qsp}$, satisfying the conditions (MFV). It follows that $\langle\bar\nu_\alpha(t),h(t,\bar x(t),\bar x_\gamma(t))\rangle=0$, $\alpha=\overline{1,p}$, and therefore $(\bar x,\bar x_\gamma,\bar\Lambda^{1\circ},\bar\Lambda^{2\circ},\bar\mu,\bar\nu)\in\Delta$. Moreover, $\pi(\bar x)=\delta(\bar x,\bar x_\gamma,\bar\Lambda^{1\circ},\bar\Lambda^{2\circ},\bar\mu,\bar\nu)$. According to Theorem 4.2, we have $\pi(\bar x(\cdot))\not\le\delta(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$. Consequently, $\delta(\bar x(\cdot),\bar x_\gamma(\cdot),\bar\Lambda^{1\circ},\bar\Lambda^{2\circ},\bar\mu(\cdot),\bar\nu(\cdot))\not\le\delta(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$, which contradicts the maximal efficiency of the point $(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$. Hence, $\bar x(\cdot)=x^\circ(\cdot)$ and $\pi(x^\circ(\cdot))=\delta(x^\circ(\cdot),x^\circ_\gamma(\cdot),\Lambda^{1\circ},\Lambda^{2\circ},\mu^\circ(\cdot),\nu^\circ(\cdot))$. $\square$
Remark 4.5. To make a computer aided study of PDI&PDE-constrained optimization problems, we can perform symbolic computations via the Maple software package (see also [1], [16]).

Acknowledgements. Partially supported by PRIN 2008, Growth, sustainable development and technological diffusion: Dynamical Models and Econometrics, University Mediterranea of Reggio Calabria and University of Milan, Italy, and by the Academy of Romanian Scientists, Splaiul Independentei 54, 050094 Bucharest, Romania.


References
[1] M.T. Calapso and C. Udrişte, Isothermic surfaces as solutions of Calapso PDE, Balkan J. Geom. Appl. 13 (1) (2008), 20-26.
[2] M. Ferrara and Şt. Mititelu, Mond-Weir duality in vector programming with generalized invex functions on differentiable manifolds, Balkan J. Geom. Appl. 11 (1) (2006), 80-87.
[3] R. Jagannathan, Duality for nonlinear fractional programming, Z. Oper. Res., 17
(1973), 1-3.
[4] P. Kanniappan, Necessary conditions for optimality of nondifferentiable convex
multiobjective programming, JOTA 40, (2) (1983), 167-174.
[5] Şt. Mititelu and I.M. Stancu-Minasian, Invexity at a point: generalization and classifications, Bull. Austral. Math. Soc. 48 (1993), 117-126.
[6] Şt. Mititelu, Efficiency conditions for multiobjective fractional variational programs, Appl. Sciences (APPS) 10 (2008), 162-175 (electronic).
[7] Şt. Mititelu and C. Udrişte, Vector fractional programming with quasiinvexity on Riemannian manifolds, 12th WSEAS CSCC Multiconference, Heraklion, Crete Island, Greece, July 22-25, 2008; New Aspects of Computers, Part III, pp. 1107-1112.
[8] B. Mond and I. Husain, Sufficient optimality criteria and duality for variational
problems with generalised invexity, J. Austral. Math. Soc., Series B, 31 (1989),
106-121.
[9] A. Pitea, Integral Geometry and PDE Constrained Optimization Problems, Ph.D. Thesis, University "Politehnica" of Bucharest, 2008.
[10] V. Preda, On Mond-Weir duality for variational problems, Rev. Roumaine Math.
Appl. 28 (2) (1993), 155-164.
[11] V. Preda and S. Gramatovici, Some sufficient optimality conditions for a class of multiobjective variational problems, An. Univ. Bucureşti, Matematică-Informatică 61 (1) (2002), 33-43.
[12] C. Udrişte and I. Ţevy, Multi-time Euler-Lagrange-Hamilton theory, WSEAS Trans. Math. 6 (6) (2007), 701-709.
[13] C. Udrişte and I. Ţevy, Multi-time Euler-Lagrange dynamics, Proc. 5th WSEAS Int. Conf. Systems Theory and Sci. Comp., Athens, Greece, August 24-26 (2007), 66-71.
[14] C. Udrişte, P. Popescu and M. Popescu, Generalized multi-time Lagrangians and Hamiltonians, WSEAS Trans. Math. 7 (2008), 66-72.
[15] C. Udrişte and L. Matei, Teorii Lagrange-Hamilton (in Romanian), Monographs and Textbooks 8, Geometry Balkan Press, 2008.
[16] C. Udrişte, Tzitzeica theory - opportunity for reflection in Mathematics, Balkan J. Geom. Appl. 10 (1) (2005), 110-120.
[17] C. Udrişte, Multitime controllability, observability and Bang-Bang principle, JOTA 139 (1) (2008), 141-157.
[18] C. Udrişte, Simplified multitime maximum principle, Balkan J. Geom. Appl., 2009.
[19] F.A. Valentine, The problem of Lagrange with differentiable inequality as added
side conditions, in ”Contributions to the Calculus of Variations, 1933-37”, Univ.
of Chicago Press, 1937, 407-448.


[20] T. Weir and B. Mond, Generalized convexity and duality in multiobjective programming, Bull. Austral. Math. Soc. 39 (1989), 287-299.
Authors’ addresses:
Ariana Pitea and Constantin Udrişte,
University Politehnica of Bucharest, Faculty of Applied Sciences,
313 Splaiul Independenţei, Bucharest, RO-060042, Romania.
E-mail: apitea@mathem.pub.ro, udriste@mathem.pub.ro
Ştefan Mititelu, Technical University of Civil Engineering,
Department of Mathematics and Informatics,
124 Lacul Tei Bvd., Bucharest, RO-020396, Romania.
E-mail: st mititelu@yahoo.com