Splitting Mathematical Programming Models for Portfolio Selection

is the maximizer of $\hat\Lambda(x, s, \mu_\upsilon)$ over $(x, s) \in Z$. Our result then follows from standard sufficient conditions for problem (4.6)–(4.10); see, e.g., Rockafellar [150, Theorem 28.1].

We can also develop duality relations for our problem. With the Lagrangian (4.11) we can associate the dual function

$$D(u_\upsilon) = \max_{x \in X} L(x, u_\upsilon).$$

We are allowed to write the maximization operation here because the set $X$ is compact and $L(x, u_\upsilon)$ is continuous. The dual problem has the form

$$\min \; D(u_\upsilon) \quad \text{subject to } u_\upsilon \in U_\upsilon. \tag{4.18}$$

The set $U_\upsilon$ is a closed convex cone and $D(\cdot)$ is a convex functional, so (4.18) is a convex optimization problem. □

Theorem 4.5. Assume that problem (4.11)–(4.13) has an optimal solution. Then problem (4.18) has an optimal solution and the optimal values of both problems coincide. Furthermore, the set of optimal solutions of (4.18) is the set of functions $\hat u \in U_\upsilon$ satisfying the optimality conditions of Theorem 4.4 for an optimal solution $\hat x$ of (4.11)–(4.13).

Proof. The theorem is a consequence of Theorem 4.4 and general duality relations in convex nonlinear programming; see Beale [10, Theorem 2.165]. Note that all constraints of our problem are linear or convex polyhedral, and therefore we do not need any constraint qualification conditions here. ■
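Because $u$ enters the Lagrangian linearly, $D(u_\upsilon)$ is a pointwise maximum of functions affine in $u_\upsilon$, which is why it is convex. The following minimal numeric sketch illustrates this. It assumes, as in the surrounding development, a Lagrangian of the form $f(x) + \mathbb{E}[u(Rx)] - \mathbb{E}[u(Y)]$, parametrizes $u$ by multipliers $\mu \ge 0$ via $u(\eta) = -\sum_i \mu_i (y_i - \eta)_+$, and approximates the maximum over $X$ by a finite grid; all data, the choice $f(x) = \mathbb{E}[Rx]$, and a single dominance block are illustrative assumptions, not from the text.

```python
# Sketch: the dual function D(u) = max_{x in X} L(x, u) is a pointwise
# maximum of functions affine in u, hence convex.  We parametrize u by
# mu >= 0 via u(eta) = -sum_i mu_i (y_i - eta)_+ and approximate the
# max over the simplex X by a finite grid.  Illustrative data only.
import numpy as np

rng = np.random.default_rng(2)
n, T, m = 3, 6, 4
r = rng.uniform(0.95, 1.2, (T, n))        # scenario returns of R
p = np.full(T, 1.0 / T)                   # scenario probabilities
y = np.sort(rng.uniform(0.8, 0.9, m))     # benchmark realizations y_i
pi = np.full(m, 1.0 / m)                  # benchmark probabilities

# finite grid over the simplex X: vertices plus random mixtures
X_grid = np.vstack([np.eye(n), rng.dirichlet(np.ones(n), 50)])

def L(x, mu):                             # L(x,u) = f(x) + E u(Rx) - E u(Y)
    ret = r @ x                           # realizations of R x
    u_ret = -np.maximum(y[None, :] - ret[:, None], 0.0) @ mu
    u_y = -np.maximum(y[None, :] - y[:, None], 0.0) @ mu
    return p @ ret + p @ u_ret - pi @ u_y # here f(x) = E[R x] (assumption)

def D(mu):                                # dual function on the finite grid
    return max(L(x, mu) for x in X_grid)

mu1, mu2 = rng.uniform(0, 1, m), rng.uniform(0, 1, m)
mid = D(0.5 * (mu1 + mu2))                # convexity: D(mid) <= (D(mu1)+D(mu2))/2
```

Since each $L(x, \cdot)$ is affine in $\mu$, the midpoint inequality holds exactly on the grid, mirroring the convexity of $D(\cdot)$ claimed above.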

4.4 Splitting

Let us now consider the special form of problem (4.11)–(4.13) with

$$f(x) = \sum_{k=1}^{\upsilon} w_k \, \mathbb{E}[R^k x],$$

with weights $w_k$, $k = 1, \dots, \upsilon$. Recall that the random returns $R_j^k$, $j = 1, \dots, n$, $k = 1, \dots, \upsilon$, have discrete distributions with realizations $r_{jt}^k$, $t = 1, \dots, T$, attained with probabilities $p_t^k$.

In order to facilitate the numerical solution of problem (4.11)–(4.13), it is convenient to consider its split-variable form:

$$\max \; \mathbb{E}\Big[ \sum_{k=1}^{\upsilon} w_k R^k x \Big] \tag{4.19}$$

subject to

$$R^k x \ge V^k, \quad k = 1, \dots, \upsilon, \ \text{a.s.}, \tag{4.20}$$

$$V^k \succeq_{(2)} Y^k, \quad k = 1, \dots, \upsilon, \tag{4.21}$$

$$x \in X. \tag{4.22}$$

In this problem $V^k$ is a random variable having realizations $v_t^k$ attained with probabilities $p_t^k$, $t = 1, \dots, T$, $k = 1, \dots, \upsilon$, and relation (4.20) is understood almost surely. In the case of finitely many realizations it simply means that

$$\sum_{j=1}^{n} r_{jt}^k x_j \ge v_t^k, \quad t = 1, \dots, T, \ k = 1, \dots, \upsilon. \tag{4.23}$$

We shall consider two groups of Lagrange multipliers: a utility function $u_\upsilon \in U_\upsilon$, and nonnegative vectors $\theta^k \in \mathbb{R}^T$. The utility functions $u_k(\cdot)$ will correspond to the dominance constraints (4.21), as in the preceding section. The multipliers $p_t^k \theta_t^k$, $t = 1, \dots, T$, $k = 1, \dots, \upsilon$, will correspond to the inequalities (4.23). The Lagrangian takes on the form

$$L(x, V_\upsilon, u_\upsilon, \theta_\upsilon) = \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} w_k p_t^k \sum_{j=1}^{n} r_{jt}^k x_j + \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} p_t^k \theta_t^k \Big( \sum_{j=1}^{n} r_{jt}^k x_j - v_t^k \Big) + \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} p_t^k u_k(v_t^k) - \sum_{k=1}^{\upsilon} \sum_{l=1}^{m_k} \pi_l^k u_k(y_l^k), \tag{4.24}$$

where the random variable $V^k$ is identified with its realizations $v_1^k, \dots, v_T^k$. Thus we put $V^k = (v_1^k, v_2^k, \dots, v_T^k)$ and $V_\upsilon = (V^1, \dots, V^\upsilon)$. The optimality conditions can be formulated as follows.
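To make the split-variable formulation concrete, here is a minimal numeric sketch that assembles (4.19)–(4.23) as a linear program, with the dominance constraint (4.21) replaced by the finitely many shortfall inequalities of Proposition 4.1 and linearized with auxiliary shortfall variables $s_{it}^k \ge y_i^k - v_t^k$ (a standard device, added here as an assumption). All data, dimensions, and the use of `scipy.optimize.linprog` are illustrative.

```python
# Split-variable problem (4.19)-(4.23) as an LP.  The dominance constraint
# (4.21) is written, via Proposition 4.1, as
#     sum_t p_t^k (y_i^k - v_t^k)_+  <=  sum_l pi_l^k (y_i^k - y_l^k)_+
# and linearized with shortfall variables s_{it}^k >= y_i^k - v_t^k, s >= 0.
# All data below are illustrative, not from the text.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, T, K, m = 3, 5, 2, 4                # assets, scenarios, blocks (upsilon), benchmark points
w = np.array([0.6, 0.4])               # block weights w_k
r = rng.uniform(0.95, 1.2, (K, T, n))  # returns r_{jt}^k
p = np.full((K, T), 1.0 / T)           # scenario probabilities p_t^k
y = np.sort(rng.uniform(0.80, 0.90, (K, m)))  # benchmark realizations y_l^k
pi = np.full((K, m), 1.0 / m)          # benchmark probabilities pi_l^k

# variable vector: [ x (n) | v^1 (T) | s^1 (m*T) | ... | v^K (T) | s^K (m*T) ]
blk = T + m * T
N = n + K * blk
c = np.zeros(N)                        # linprog minimizes, so negate (4.19)
c[:n] = -sum(w[k] * (p[k] @ r[k]) for k in range(K))

A_ub, b_ub = [], []
for k in range(K):
    v0 = n + k * blk                   # offset of v^k
    s0 = v0 + T                        # offset of s^k, stored row-major s[i, t]
    for t in range(T):                 # (4.23):  v_t^k - sum_j r_{jt}^k x_j <= 0
        row = np.zeros(N)
        row[:n] = -r[k, t]
        row[v0 + t] = 1.0
        A_ub.append(row); b_ub.append(0.0)
    for i in range(m):
        for t in range(T):             # shortfall:  y_i^k - v_t^k - s_{it}^k <= 0
            row = np.zeros(N)
            row[v0 + t] = -1.0
            row[s0 + i * T + t] = -1.0
            A_ub.append(row); b_ub.append(-y[k, i])
        row = np.zeros(N)              # dominance:  sum_t p_t^k s_{it}^k <= rhs
        row[s0 + i * T: s0 + (i + 1) * T] = p[k]
        A_ub.append(row)
        b_ub.append(float(pi[k] @ np.maximum(y[k, i] - y[k], 0.0)))

A_eq = np.zeros((1, N)); A_eq[0, :n] = 1.0      # X = simplex: sum_j x_j = 1
bounds = [(0, None)] * n               # x >= 0; v free; s >= 0
for k in range(K):
    bounds += [(None, None)] * T + [(0, None)] * (m * T)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
x = res.x[:n]                          # optimal portfolio weights
```

The data are chosen so that the benchmark is clearly dominated (all $y_l^k$ lie below every realized return), which guarantees feasibility of the sketch.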
Theorem 4.6. If $(\hat x, \hat V_\upsilon)$ is an optimal solution of (4.19)–(4.22), then there exist $\hat u_\upsilon \in U_\upsilon$ and a nonnegative vector $\hat\theta_\upsilon \in \mathbb{R}^{T\upsilon}$ such that

$$L(\hat x, \hat V_\upsilon, \hat u_\upsilon, \hat\theta_\upsilon) = \max_{(x, V_\upsilon) \in X \times \mathbb{R}^{T\upsilon}} L(x, V_\upsilon, \hat u_\upsilon, \hat\theta_\upsilon), \tag{4.25}$$

$$\sum_{t=1}^{T} p_t^k \hat u_k(\hat v_t^k) - \sum_{l=1}^{m_k} \pi_l^k \hat u_k(y_l^k) = 0 \quad \text{for all } k = 1, \dots, \upsilon, \tag{4.26}$$

$$\hat\theta_t^k \Big( \hat v_t^k - \sum_{j=1}^{n} r_{jt}^k \hat x_j \Big) = 0, \quad t = 1, \dots, T, \ k = 1, \dots, \upsilon. \tag{4.27}$$

Conversely, if for some function $\hat u_\upsilon \in U_\upsilon$ and some nonnegative vector $\hat\theta_\upsilon \in \mathbb{R}^{T\upsilon}$ an optimal solution $(\hat x, \hat V_\upsilon)$ of (4.25) satisfies (4.20)–(4.22) and (4.26)–(4.27), then $(\hat x, \hat V_\upsilon)$ is an optimal solution of (4.19)–(4.22).

Proof. By Proposition 4.1, the dominance constraints (4.21) are equivalent to finitely many inequalities

$$\mathbb{E}[(y_i^k - V^k)_+] \le \mathbb{E}[(y_i^k - Y^k)_+], \quad i = 1, \dots, m_k, \ k = 1, \dots, \upsilon.$$

Problem (4.19)–(4.22) thus takes on the form:

$$\max \; \mathbb{E}\Big[ \sum_{k=1}^{\upsilon} w_k R^k x \Big]$$

subject to

$$\sum_{j=1}^{n} r_{jt}^k x_j \ge v_t^k, \quad t = 1, \dots, T, \ k = 1, \dots, \upsilon,$$

$$\mathbb{E}[(y_i^k - V^k)_+] \le \mathbb{E}[(y_i^k - Y^k)_+], \quad i = 1, \dots, m_k, \ k = 1, \dots, \upsilon,$$

$$x \in X.$$

Let us introduce Lagrange multipliers $\mu_i^k$, $i = 1, \dots, m_k$, $k = 1, \dots, \upsilon$, associated with the dominance constraints. The standard Lagrangian takes on the form:

$$\Lambda(x, V_\upsilon, \mu_\upsilon, \theta_\upsilon) = \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} w_k p_t^k \sum_{j=1}^{n} r_{jt}^k x_j + \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} p_t^k \theta_t^k \Big( \sum_{j=1}^{n} r_{jt}^k x_j - v_t^k \Big) - \sum_{k=1}^{\upsilon} \sum_{i=1}^{m_k} \mu_i^k \sum_{t=1}^{T} p_t^k (y_i^k - v_t^k)_+ + \sum_{k=1}^{\upsilon} \sum_{i=1}^{m_k} \mu_i^k \sum_{l=1}^{m_k} \pi_l^k (y_i^k - y_l^k)_+.$$

Rearranging the last two sums, exactly as in the proof of Theorem 4.4, we obtain the following key relation. For every $\mu^k \ge 0$, setting

$$u_{\mu^k}(\eta) = -\sum_{i=1}^{m_k} \mu_i^k (y_i^k - \eta)_+,$$

we have

$$\Lambda(x, V_\upsilon, \mu_\upsilon, \theta_\upsilon) = L(x, V_\upsilon, u_{\mu_\upsilon}, \theta_\upsilon),$$

where $u_{\mu_\upsilon} = (u_{\mu^1}, \dots, u_{\mu^\upsilon})$. The remaining part of the proof is the same as that of Theorem 4.4. ■
The dual function associated with the split-variable problem has the form

$$D(u_\upsilon, \theta_\upsilon) = \sup_{x \in X, \; V_\upsilon \in \mathbb{R}^{T\upsilon}} L(x, V_\upsilon, u_\upsilon, \theta_\upsilon),$$

and the dual problem is, as usual,

$$\min \; D(u_\upsilon, \theta_\upsilon) \quad \text{subject to } u_\upsilon \in U_\upsilon, \ \theta_\upsilon \ge 0. \tag{4.28}$$

The corresponding duality theorem is an immediate consequence of Theorem 4.5 and standard duality relations in convex programming. Note that all constraints of problem (4.19)–(4.22) are linear or convex polyhedral, and therefore we do not need additional constraint qualification conditions here. □

Theorem 4.7. Assume that (4.19)–(4.22) has an optimal solution. Then the dual problem (4.28) has an optimal solution and the optimal values of both problems coincide. Furthermore, the set of optimal solutions of (4.28) is the set of functions $\hat u_\upsilon \in U_\upsilon$ and vectors $\hat\theta_\upsilon \ge 0$ satisfying (4.25)–(4.27) for an optimal solution $(\hat x, \hat V_\upsilon)$ of (4.19)–(4.22).

We can analyze the structure of the dual function in more detail:

$$D(u_\upsilon, \theta_\upsilon) = \sup_{x \in X, \; V_\upsilon \in \mathbb{R}^{T\upsilon}} \Big\{ \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} w_k p_t^k \sum_{j=1}^{n} r_{jt}^k x_j + \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} p_t^k \theta_t^k \sum_{j=1}^{n} r_{jt}^k x_j + \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} p_t^k \big[ u_k(v_t^k) - \theta_t^k v_t^k \big] \Big\} - \sum_{k=1}^{\upsilon} \sum_{l=1}^{m_k} \pi_l^k u_k(y_l^k)$$

$$= \max_{1 \le j \le n} \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} (w_k + \theta_t^k) p_t^k r_{jt}^k + \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} p_t^k \sup_{v_t^k} \big[ u_k(v_t^k) - \theta_t^k v_t^k \big] - \sum_{k=1}^{\upsilon} \sum_{l=1}^{m_k} \pi_l^k u_k(y_l^k).$$

In the last equation we have used the fact that $X$ is a simplex, and therefore the maximum of a linear form over $X$ is attained at one of its vertices. It follows that the dual function can be expressed as the sum

$$D(u_\upsilon, \theta_\upsilon) = D_1(\theta_\upsilon) + \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} p_t^k D_t^k(u_k, \theta_t^k) + \sum_{k=1}^{\upsilon} D_{T+1}^k(u_k), \tag{4.29}$$

with

$$D_1(\theta_\upsilon) = \max_{1 \le j \le n} \sum_{k=1}^{\upsilon} \sum_{t=1}^{T} (w_k + \theta_t^k) p_t^k r_{jt}^k, \tag{4.30}$$

$$D_t^k(u_k, \theta_t^k) = \sup_{v_t^k} \big[ u_k(v_t^k) - \theta_t^k v_t^k \big], \quad t = 1, \dots, T, \ k = 1, \dots, \upsilon, \tag{4.31}$$

and

$$D_{T+1}^k(u_k) = -\sum_{l=1}^{m_k} \pi_l^k u_k(y_l^k), \quad k = 1, \dots, \upsilon. \tag{4.32}$$

If the set $X$ is a general convex polyhedron, the calculation of $D_1$ involves a linear programming problem with $n$ variables.
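The decomposition (4.29)–(4.32) makes the dual function cheap to evaluate term by term. The following sketch computes $D_1$, $D_t^k$, and $D_{T+1}^k$ for a piecewise linear $u_k(\eta) = -\sum_i \mu_i^k (y_i^k - \eta)_+$, using the break-point formula (4.33) for the supremum in (4.31); all data, and the particular parametrization of $u_k$ by multipliers $\mu^k$, are illustrative assumptions.

```python
# Numeric sketch of the decomposition (4.29)-(4.32) of the dual function.
# u_k is piecewise linear, u_k(eta) = -sum_i mu_i^k (y_i^k - eta)_+, so that
# the supremum in (4.31) is attained at a break point, formula (4.33).
# Illustrative data only.
import numpy as np

rng = np.random.default_rng(1)
n, T, K, m = 3, 5, 2, 4
w = np.array([0.6, 0.4])                   # block weights w_k
r = rng.uniform(0.95, 1.2, (K, T, n))      # returns r_{jt}^k
p = np.full((K, T), 1.0 / T)               # scenario probabilities p_t^k
y = np.sort(rng.uniform(0.8, 0.9, (K, m))) # benchmark realizations y_l^k
pi = np.full((K, m), 1.0 / m)              # benchmark probabilities pi_l^k
mu = rng.uniform(0.1, 1.0, (K, m))         # multipliers generating u_k
theta = rng.uniform(0.0, 0.05, (K, T))     # theta_t^k >= 0, kept small so that
                                           # theta_t^k <= u_k'(y_1^k-) = sum_i mu_i^k

def u(k, eta):                             # u_k(eta) = -sum_i mu_i^k (y_i^k - eta)_+
    return float(-np.sum(mu[k] * np.maximum(y[k] - eta, 0.0)))

# D_1(theta): max over the vertices of the simplex X, i.e. single assets j  (4.30)
D1 = max(sum((w[k] + theta[k, t]) * p[k, t] * r[k, t, j]
             for k in range(K) for t in range(T)) for j in range(n))
# D_t^k(u_k, theta_t^k): sup over v, attained at a break point by (4.33)
Dt = np.array([[max(u(k, y[k, l]) - theta[k, t] * y[k, l] for l in range(m))
                for t in range(T)] for k in range(K)])
# D_{T+1}^k(u_k) = -sum_l pi_l^k u_k(y_l^k)   (4.32)
Dend = np.array([-sum(pi[k, l] * u(k, y[k, l]) for l in range(m)) for k in range(K)])

D = D1 + float((p * Dt).sum()) + float(Dend.sum())   # the sum (4.29)
```

Each term only involves maxima over finitely many points, which is what the decomposition approach of the next section exploits.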
To determine the domain of the dual function, observe that if $u_k'(y_1^k-) < \theta_t^k$, then

$$\lim_{v_t^k \to -\infty} \big[ u_k(v_t^k) - \theta_t^k v_t^k \big] = +\infty,$$

and thus the supremum in (4.31) is equal to $+\infty$. On the other hand, if $u_k'(y_1^k-) \ge \theta_t^k$, then the function $u_k(v_t^k) - \theta_t^k v_t^k$ has a nonnegative slope for $v_t^k \le y_1^k$ and a nonpositive slope $-\theta_t^k$ for $v_t^k \ge y_{m_k}^k$. It is piecewise linear, and it achieves its maximum at one of the break points. Therefore

$$\operatorname{dom} D_t^k = \big\{ (u_k, \theta_t^k) \in U \times \mathbb{R}_+ : u_k'(y_1^k-) \ge \theta_t^k \big\}.$$

At any point of the domain,

$$D_t^k(u_k, \theta_t^k) = \max_{1 \le l \le m_k} \big[ u_k(y_l^k) - \theta_t^k y_l^k \big]. \tag{4.33}$$

The domain of $D_1$ is the entire space $\mathbb{R}^{T\upsilon}$.
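The domain condition can be checked numerically. In the sketch below, with $u(\eta) = -\sum_i \mu_i (y_i - \eta)_+$, the left slope of $u$ below $y_1$ is $\sum_i \mu_i$; taking $\theta_t$ on either side of this value shows the supremum in (4.31) switching between finite (attained at a break point, as in (4.33)) and $+\infty$. The data are illustrative.

```python
# Sketch of the domain condition for D_t^k: with u(eta) = -sum_i mu_i (y_i-eta)_+
# the slope of u below y_1 is sum_i mu_i, so u(v) - theta*v is unbounded above
# as v -> -inf exactly when theta exceeds sum_i mu_i.  Illustrative data only.
import numpy as np

y = np.array([0.8, 0.9, 1.0, 1.1])        # break points y_1 < ... < y_m
mu = np.array([0.5, 0.3, 0.2, 0.4])       # multipliers; u'(y_1-) = mu.sum() = 1.4

def phi(v, theta):                        # the function maximized in (4.31)
    return float(-np.sum(mu * np.maximum(y - v, 0.0)) - theta * v)

v_far = [-10.0, -100.0, -1000.0]
inside = [phi(v, 1.0) for v in v_far]     # theta = 1.0 < 1.4: bounded above
outside = [phi(v, 2.0) for v in v_far]    # theta = 2.0 > 1.4: diverges to +inf

# inside the domain, the supremum is attained at a break point, formula (4.33)
D_t = max(phi(v, 1.0) for v in y)
```

The `outside` values grow without bound as $v \to -\infty$, while the `inside` values decrease, so the supremum there is finite and reduces to the finite maximum (4.33) over break points.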

4.5 Decomposition