4.2 Qualitative Theory of Linear Programming

The conditions $x^* \in \Omega_p$, $\lambda^* \in \Omega_d$ in Theorem 3 can be replaced by the weaker conditions $x^* \ge 0$, $\lambda^* \ge 0$ provided we strengthen (4.15) as in the following result, whose proof is left as an exercise.

Theorem 4: (Saddle point) $x^* \ge 0$ is optimal for the primal if and only if there exists $\lambda^* \ge 0$ such that
$$L(x, \lambda^*) \le L(x^*, \lambda^*) \le L(x^*, \lambda) \quad \text{for all } x \ge 0 \text{ and all } \lambda \ge 0, \tag{4.17}$$
where $L : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ is defined by
$$L(x, \lambda) = c'x - \lambda'(Ax - b). \tag{4.18}$$

Exercise 4: Prove Theorem 4.

Remark: The function $L$ is called the Lagrangian. A pair $(x^*, \lambda^*)$ satisfying (4.17) is said to form a saddle point of $L$ over the set $\{x \mid x \in \mathbb{R}^n, x \ge 0\} \times \{\lambda \mid \lambda \in \mathbb{R}^m, \lambda \ge 0\}$.
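The saddle-point condition (4.17) is easy to check numerically on a small example. The sketch below is only an illustration: the data are hypothetical, the primal is taken in the inequality form referred to as (4.10) (maximize $c'x$ subject to $Ax \le b$, $x \ge 0$), and it assumes a recent SciPy whose HiGHS-based `linprog` exposes the dual variables through `res.ineqlin.marginals`. It solves the primal, recovers $\lambda^*$, and samples random $x \ge 0$, $\lambda \ge 0$ to confirm both inequalities in (4.17).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance of (4.10): maximize c'x subject to Ax <= b, x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# linprog minimizes, so solve min -c'x subject to Ax <= b, x >= 0.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c), method="highs")
x_star = res.x
# Dual variables of the maximization problem; the sign flip accounts for
# linprog's minimization convention (assumes SciPy >= 1.7 with HiGHS).
lam_star = -res.ineqlin.marginals

def L(x, lam):
    """Lagrangian (4.18): L(x, lam) = c'x - lam'(Ax - b)."""
    return c @ x - lam @ (A @ x - b)

saddle = L(x_star, lam_star)                  # equals the optimal value c'x_star
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(0.0, 10.0, size=len(c))   # arbitrary x >= 0
    lam = rng.uniform(0.0, 10.0, size=len(b)) # arbitrary lam >= 0
    assert L(x, lam_star) <= saddle + 1e-8    # left inequality in (4.17)
    assert L(x_star, lam) >= saddle - 1e-8    # right inequality in (4.17)
print("x* =", x_star, " lam* =", lam_star, " saddle value =", saddle)
```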

4.2.2 Results for problem (4.9).

It is possible to derive analogous results for LPs of the form (4.9). We state these results as exercises, indicating how to use the results already obtained. We begin with a pair of LPs:

Maximize $c_1 x_1 + \ldots + c_n x_n$
subject to $a_{i1} x_1 + \ldots + a_{in} x_n = b_i, \; 1 \le i \le m$, and $x_j \ge 0, \; 1 \le j \le n$. (4.19)

Minimize $\lambda_1 b_1 + \ldots + \lambda_m b_m$
subject to $\lambda_1 a_{1j} + \ldots + \lambda_m a_{mj} \ge c_j, \; 1 \le j \le n$. (4.20)

Note that in (4.20) the $\lambda_i$ are unrestricted in sign. Again (4.19) is called the primal and (4.20) the dual. We let $\Omega_p$, $\Omega_d$ denote the sets of all $x$, $\lambda$ satisfying the constraints of (4.19), (4.20) respectively.

Exercise 5: Prove Theorems 1 and 2 with $\Omega_p$ and $\Omega_d$ interpreted as above. (Hint: Replace (4.19) by the equivalent LP: maximize $c'x$ subject to $Ax \le b$, $-Ax \le -b$, $x \ge 0$. This is now of the form (4.10). Apply Theorems 1 and 2.)

Exercise 6: Show that $x^* \in \Omega_p$ is optimal if and only if there exists $\lambda^* \in \Omega_d$ such that $x^*_j > 0$ implies $\sum_{i=1}^{m} \lambda^*_i a_{ij} = c_j$.

Exercise 7: Show that $x^* \ge 0$ is optimal if and only if there exists $\lambda^* \in \mathbb{R}^m$ such that
$$L(x, \lambda^*) \le L(x^*, \lambda^*) \le L(x^*, \lambda) \quad \text{for all } x \ge 0 \text{ and all } \lambda \in \mathbb{R}^m,$$
where $L$ is defined in (4.18). Note that, unlike (4.17), $\lambda$ is not restricted in sign.

Exercise 8: Formulate a dual for (4.7), and obtain the result analogous to Exercise 5.

4.2.3 Sensitivity analysis.

We investigate how the maximum value of (4.10) or (4.19) changes as the vectors $b$ and $c$ change. The matrix $A$ remains fixed. Let $\Omega_p$ and $\Omega_d$ be the sets of feasible solutions for the pair (4.10), (4.11) or for the pair (4.19), (4.20). We write $\Omega_p(b)$ and $\Omega_d(c)$ to denote the explicit dependence on $b$ and $c$ respectively. Let $B = \{b \in \mathbb{R}^m \mid \Omega_p(b) \ne \emptyset\}$ and $C = \{c \in \mathbb{R}^n \mid \Omega_d(c) \ne \emptyset\}$, and for $(b, c) \in B \times C$ define
$$M(b, c) = \max\{c'x \mid x \in \Omega_p(b)\} = \min\{\lambda'b \mid \lambda \in \Omega_d(c)\}. \tag{4.21}$$

For $1 \le i \le m$, $\varepsilon \in \mathbb{R}$, $b \in \mathbb{R}^m$ let $b(i, \varepsilon) = (b_1, b_2, \ldots, b_{i-1}, b_i + \varepsilon, b_{i+1}, \ldots, b_m)'$, and for $1 \le j \le n$, $\varepsilon \in \mathbb{R}$, $c \in \mathbb{R}^n$ let $c(j, \varepsilon) = (c_1, c_2, \ldots, c_{j-1}, c_j + \varepsilon, c_{j+1}, \ldots, c_n)'$. We define in the usual way the right- and left-hand partial derivatives of $M$ at a point $(\hat b, \hat c) \in B \times C$ as follows:
$$\frac{\partial M^+}{\partial b_i}(\hat b, \hat c) = \lim_{\varepsilon \to 0,\ \varepsilon > 0} \frac{1}{\varepsilon}\{M(\hat b(i, \varepsilon), \hat c) - M(\hat b, \hat c)\}, \qquad \frac{\partial M^-}{\partial b_i}(\hat b, \hat c) = \lim_{\varepsilon \to 0,\ \varepsilon > 0} \frac{1}{\varepsilon}\{M(\hat b, \hat c) - M(\hat b(i, -\varepsilon), \hat c)\},$$
$$\frac{\partial M^+}{\partial c_j}(\hat b, \hat c) = \lim_{\varepsilon \to 0,\ \varepsilon > 0} \frac{1}{\varepsilon}\{M(\hat b, \hat c(j, \varepsilon)) - M(\hat b, \hat c)\}, \qquad \frac{\partial M^-}{\partial c_j}(\hat b, \hat c) = \lim_{\varepsilon \to 0,\ \varepsilon > 0} \frac{1}{\varepsilon}\{M(\hat b, \hat c) - M(\hat b, \hat c(j, -\varepsilon))\}.$$

Let $\mathring B$, $\mathring C$ denote the interiors of $B$, $C$ respectively.

Theorem 5: At each $(\hat b, \hat c) \in \mathring B \times \mathring C$, the partial derivatives above exist. Furthermore, if $\hat x \in \Omega_p(\hat b)$, $\hat \lambda \in \Omega_d(\hat c)$ are optimal, then
$$\frac{\partial M^+}{\partial b_i}(\hat b, \hat c) \le \hat\lambda_i \le \frac{\partial M^-}{\partial b_i}(\hat b, \hat c), \quad 1 \le i \le m, \tag{4.22}$$
$$\frac{\partial M^+}{\partial c_j}(\hat b, \hat c) \ge \hat x_j \ge \frac{\partial M^-}{\partial c_j}(\hat b, \hat c), \quad 1 \le j \le n. \tag{4.23}$$

Proof: We first show (4.22) and (4.23), assuming that the partial derivatives exist. By strong duality $M(\hat b, \hat c) = \hat\lambda'\hat b$, and by weak duality $M(\hat b(i, \varepsilon), \hat c) \le \hat\lambda'\hat b(i, \varepsilon)$, so that
$$\frac{1}{\varepsilon}\{M(\hat b(i, \varepsilon), \hat c) - M(\hat b, \hat c)\} \le \frac{1}{\varepsilon}\,\hat\lambda'\{\hat b(i, \varepsilon) - \hat b\} = \hat\lambda_i, \quad \text{for } \varepsilon > 0,$$
$$\frac{1}{\varepsilon}\{M(\hat b, \hat c) - M(\hat b(i, -\varepsilon), \hat c)\} \ge \frac{1}{\varepsilon}\,\hat\lambda'\{\hat b - \hat b(i, -\varepsilon)\} = \hat\lambda_i, \quad \text{for } \varepsilon > 0.$$
Taking limits as $\varepsilon \to 0$, $\varepsilon > 0$, gives (4.22). On the other hand, $M(\hat b, \hat c) = \hat c'\hat x$, and $M(\hat b, \hat c(j, \varepsilon)) \ge \hat c(j, \varepsilon)'\hat x$ since $\hat x$ remains feasible, so that
$$\frac{1}{\varepsilon}\{M(\hat b, \hat c(j, \varepsilon)) - M(\hat b, \hat c)\} \ge \frac{1}{\varepsilon}\{\hat c(j, \varepsilon) - \hat c\}'\hat x = \hat x_j, \quad \text{for } \varepsilon > 0,$$
$$\frac{1}{\varepsilon}\{M(\hat b, \hat c) - M(\hat b, \hat c(j, -\varepsilon))\} \le \frac{1}{\varepsilon}\{\hat c - \hat c(j, -\varepsilon)\}'\hat x = \hat x_j, \quad \text{for } \varepsilon > 0,$$
which give (4.23) as $\varepsilon \to 0$, $\varepsilon > 0$. Finally, the existence of the right- and left-hand partial derivatives follows from Exercises 8 and 9 below. ♦
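The inequalities (4.22) admit a quick numerical sanity check. The sketch below (hypothetical data, and again assuming SciPy's HiGHS-based `linprog` with its `ineqlin.marginals` dual values) treats $M(\cdot, \hat c)$ as a function of $b$, estimates the one-sided derivatives by finite differences, and compares them with the optimal dual variables $\hat\lambda$; when $M$ is differentiable in $b_i$ the two one-sided estimates and $\hat\lambda_i$ should agree, illustrating the shadow-price interpretation mentioned in Remark 3 below.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for an LP of the form (4.10): max c'x s.t. Ax <= b, x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b_hat = np.array([4.0, 6.0])

def solve(b):
    """Return the maximum value M(b, c) and the optimal dual vector."""
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c), method="highs")
    return -res.fun, -res.ineqlin.marginals  # sign flips: linprog minimizes

M_hat, lam_hat = solve(b_hat)
eps = 1e-3
for i in range(len(b_hat)):
    e = np.zeros_like(b_hat)
    e[i] = eps
    d_plus = (solve(b_hat + e)[0] - M_hat) / eps   # estimate of dM+/db_i
    d_minus = (M_hat - solve(b_hat - e)[0]) / eps  # estimate of dM-/db_i
    # (4.22): dM+/db_i <= lam_i <= dM-/db_i; equality when M is differentiable.
    print(f"i={i}: dM+ ~ {d_plus:.4f}, lam_i = {lam_hat[i]:.4f}, dM- ~ {d_minus:.4f}")
```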
We recall some fundamental definitions from convex analysis.

Definition: $X \subset \mathbb{R}^n$ is said to be convex if $x, y \in X$ and $0 \le \theta \le 1$ imply $\theta x + (1 - \theta)y \in X$.

Definition: Let $X \subset \mathbb{R}^n$ and $f : X \to \mathbb{R}$. (i) $f$ is said to be convex if $X$ is convex, and $x, y \in X$, $0 \le \theta \le 1$ imply $f(\theta x + (1 - \theta)y) \le \theta f(x) + (1 - \theta)f(y)$. (ii) $f$ is said to be concave if $-f$ is convex, i.e., $x, y \in X$, $0 \le \theta \le 1$ imply $f(\theta x + (1 - \theta)y) \ge \theta f(x) + (1 - \theta)f(y)$.

Exercise 8: (a) Show that $\Omega_p$, $\Omega_d$, and the sets $B \subset \mathbb{R}^m$, $C \subset \mathbb{R}^n$ defined above are convex sets. (b) Show that for fixed $c \in C$, $M(\cdot, c) : B \to \mathbb{R}$ is concave, and for fixed $b \in B$, $M(b, \cdot) : C \to \mathbb{R}$ is convex.

Exercise 9: Let $X \subset \mathbb{R}^n$ and $f : X \to \mathbb{R}$ be convex. Show that at each point $\hat x$ in the interior of $X$, the left- and right-hand partial derivatives of $f$ exist. (Hint: First show that for $\varepsilon_2 > \varepsilon_1 > 0 > \delta_1 > \delta_2$,
$$\frac{1}{\varepsilon_2}\{f(\hat x(i, \varepsilon_2)) - f(\hat x)\} \ge \frac{1}{\varepsilon_1}\{f(\hat x(i, \varepsilon_1)) - f(\hat x)\} \ge \frac{1}{\delta_1}\{f(\hat x(i, \delta_1)) - f(\hat x)\} \ge \frac{1}{\delta_2}\{f(\hat x(i, \delta_2)) - f(\hat x)\}.$$
The result then follows immediately.)

Remark 1: Clearly if $\partial M / \partial b_i(\hat b, \hat c)$ exists, then we have equality in (4.22), and this result compares with (3.14).

Remark 2: We can also show without difficulty that $M(\cdot, c)$ and $M(b, \cdot)$ are piecewise linear (more accurately, linear plus constant) functions on $B$ and $C$ respectively. This is useful in some computational problems.

Remark 3: The variables of the dual problem are called Lagrange variables, dual variables, or shadow prices. The reason behind the last name will be clear in Section 4.
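Remark 2 can also be seen empirically: sampling $M(\cdot, c)$ along one coordinate of $b$ traces out a concave, piecewise-linear curve. The short sketch below (same hypothetical data and SciPy assumptions as in the earlier sketches) tabulates $M$ as $b_1$ varies and checks that the second differences are nonpositive, as the concavity claimed in Exercise 8(b) requires.

```python
import numpy as np
from scipy.optimize import linprog

# Same hypothetical data as in the previous sketches.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b2 = 6.0

def M_of_b1(t):
    """M((t, b2), c): maximum of c'x over Ax <= (t, b2), x >= 0."""
    res = linprog(-c, A_ub=A, b_ub=np.array([t, b2]),
                  bounds=[(0, None)] * len(c), method="highs")
    return -res.fun

ts = np.linspace(0.0, 10.0, 21)
vals = np.array([M_of_b1(t) for t in ts])
print(np.round(vals, 3))                  # piecewise linear: 3t up to t = 6, then constant at 18
second_diffs = vals[2:] - 2.0 * vals[1:-1] + vals[:-2]
print("max second difference:", second_diffs.max())  # <= 0 up to solver tolerance: M(., c) is concave
```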

4.3 The Simplex Algorithm