4.3 The Simplex Algorithm

Again, if {A_j | j ∈ I_{z_θ}} is linearly independent, then we let z* = z_θ and we are done. Otherwise we repeat the procedure above with z_θ. Clearly, in a finite number of steps we will find an optimal decision z* which is also a vertex. ♦

At this point we abandon the geometric term "vertex" and move to established LP terminology.

Definition: z is said to be a basic feasible solution if z ∈ Ω_p and {A_j | j ∈ I_z} is linearly independent. The set I_z is then called the basis at z, the x_j, j ∈ I_z, are called the basic variables at z, and the x_j, j ∉ I_z, are called the non-basic variables at z.

Definition: A basic feasible solution z is said to be non-degenerate if I_z has m elements.

Notation: Let z be a non-degenerate basic feasible solution, and let j_1 < j_2 < ... < j_m constitute I_z. Let D_z denote the m × m non-singular matrix D_z = [A_{j_1} | A_{j_2} | ... | A_{j_m}], let c_z denote the m-dimensional column vector c_z = (c_{j_1}, ..., c_{j_m})', and define λ_z by λ'_z = c'_z D_z^{-1}. We call λ_z the shadow-price vector at z.

Lemma 3: Let z be a non-degenerate basic feasible solution. Then z is optimal if and only if

λ'_z A_j ≥ c_j , for all j ∉ I_z .   (4.25)

Proof: By Exercise 6 of Section 2.2, z is optimal iff there exists λ such that

λ' A_j = c_j , for j ∈ I_z ,   (4.26)
λ' A_j ≥ c_j , for j ∉ I_z .   (4.27)

But since z is non-degenerate, (4.26) holds iff λ = λ_z, and then (4.27) is the same as (4.25). ♦
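As a concrete illustration of Lemma 3 (not from the text), the following Python sketch computes the shadow-price vector λ'_z = c'_z D_z^{-1} for a small hypothetical standard-form LP and checks the optimality condition (4.25). The data A, b, c and the function name are illustrative assumptions.

```python
import numpy as np

# A hypothetical standard-form LP (illustrative, not from the text):
# maximize c'x subject to Ax = b, x >= 0, with m = 2, n = 4.
A = np.array([[1., 1., 1., 0.],
              [1., 3., 0., 1.]])
b = np.array([4., 6.])
c = np.array([3., 5., 0., 0.])

def shadow_prices(A, b, c, basis):
    """For a basis I_z (list of column indices), return the basic
    variables, the shadow-price vector lambda_z' = c_z' D_z^{-1}, the
    reduced costs c_j - lambda_z' A_j for non-basic j, and the
    optimality flag of condition (4.25) in Lemma 3."""
    D = A[:, basis]                       # D_z = [A_{j1} | ... | A_{jm}]
    x_B = np.linalg.solve(D, b)           # values of the basic variables
    assert np.all(x_B >= 0), "not a feasible basis"
    lam = np.linalg.solve(D.T, c[basis])  # solves lambda' D_z = c_z'
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    reduced = {j: c[j] - lam @ A[:, j] for j in nonbasic}
    return x_B, lam, reduced, all(r <= 1e-9 for r in reduced.values())

x_B, lam, reduced, optimal = shadow_prices(A, b, c, basis=[0, 1])
# Here lambda_z = (2, 1) and both reduced costs are negative, so the
# basic feasible solution z = (3, 1, 0, 0) is optimal by Lemma 3.
```

Solving λ' D_z = c'_z by a linear solve, rather than forming D_z^{-1} explicitly, is the standard numerically safer choice.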

4.3.2 The Simplex Algorithm.

The algorithm is divided into two parts. In Phase I we determine whether Ω_p is empty or not, and if not, we obtain a basic feasible solution. Phase II starts with a basic feasible solution and determines whether it is optimal or not, and if not, obtains another basic feasible solution with a higher value. Iterating on this procedure, in a finite number of steps either we obtain an optimal solution or we discover that no optimum exists, i.e., sup {c'x | x ∈ Ω_p} = +∞. We shall discuss Phase II first. We make the following simplifying assumption, on which we comment later.

Assumption of non-degeneracy. Every basic feasible solution is non-degenerate.

Phase II:

Step 1. Let z^0 be a basic feasible solution obtained from Phase I or by any other means. Set k = 0 and go to Step 2.

Step 2. Calculate D_{z^k}^{-1}, c_{z^k}, and the shadow-price vector λ'_{z^k} = c'_{z^k} D_{z^k}^{-1}. For each j ∉ I_{z^k} calculate c_j − λ'_{z^k} A_j. If all these numbers are ≤ 0, stop, because z^k is optimal by Lemma 3. Otherwise pick any ĵ ∉ I_{z^k} such that c_ĵ − λ'_{z^k} A_ĵ > 0 and go to Step 3.

Step 3. Let I_{z^k} consist of j_1 < j_2 < ... < j_m. Compute the vector γ^k = (γ^k_{j_1}, ..., γ^k_{j_m})' = D_{z^k}^{-1} A_ĵ. If γ^k ≤ 0, stop, because by Lemma 4 below there is no finite optimum. Otherwise go to Step 4.

Step 4. Compute θ = min { z^k_j / γ^k_j | j ∈ I_{z^k}, γ^k_j > 0 }. Evidently 0 < θ < ∞. Define z^{k+1} by

z^{k+1}_j = z^k_j − θ γ^k_j ,  j ∈ I_{z^k} ,
z^{k+1}_j = θ ,               j = ĵ ,
z^{k+1}_j = 0 ,               j ≠ ĵ and j ∉ I_{z^k} .   (4.28)

By Lemma 5 below, z^{k+1} is a basic feasible solution with c'z^{k+1} > c'z^k. Set k = k + 1 and return to Step 2.

Lemma 4: If γ^k ≤ 0, then sup {c'x | x ∈ Ω_p} = ∞.

Proof: Write z for z^k, and define z_θ by

z_θj = z_j − θ γ^k_j ,  j ∈ I_z ,
z_θj = θ ,              j = ĵ ,
z_θj = 0 ,              j ∉ I_z and j ≠ ĵ .   (4.29)

First of all, since γ^k ≤ 0, it follows that z_θ ≥ 0 for θ ≥ 0. Next,

A z_θ = A z − θ Σ_{j ∈ I_z} γ^k_j A_j + θ A_ĵ = A z

by definition of γ^k. Hence z_θ ∈ Ω_p for θ ≥ 0.
Finally,

c'z_θ = c'z − θ c'_{z^k} γ^k + θ c_ĵ
      = c'z + θ { c_ĵ − c'_{z^k} D_{z^k}^{-1} A_ĵ }
      = c'z + θ { c_ĵ − λ'_{z^k} A_ĵ } .   (4.30)

But from Step 2, {c_ĵ − λ'_{z^k} A_ĵ} > 0, so that c'z_θ → ∞ as θ → ∞. ♦

Lemma 5: z^{k+1} is a basic feasible solution and c'z^{k+1} > c'z^k.

Proof: Let j̃ ∈ I_{z^k} be such that γ^k_{j̃} > 0 and z^k_{j̃} = θ γ^k_{j̃}. Then from (4.28) we see that z^{k+1}_{j̃} = 0, hence

I_{z^{k+1}} ⊂ (I_{z^k} − {j̃}) ∪ {ĵ} ,   (4.31)

so that it is enough to prove that A_ĵ is linearly independent of {A_j | j ∈ I_{z^k}, j ≠ j̃}. But if this is not the case, we must have γ^k_{j̃} = 0, giving a contradiction. Finally, comparing (4.28) and (4.29), we see from (4.30) that

c'z^{k+1} − c'z^k = θ { c_ĵ − λ'_{z^k} A_ĵ } ,

which is positive by Step 2. ♦

Corollary 2: In a finite number of steps Phase II will obtain an optimal solution or will determine that sup {c'x | x ∈ Ω_p} = ∞.

Corollary 3: Suppose Phase II terminates at an optimal basic feasible solution z*. Then λ_{z*} is an optimal solution of the dual of (4.24).

Exercise 2: Prove Corollaries 2 and 3.

Remark 1: By the non-degeneracy assumption, I_{z^{k+1}} has m elements, so that in (4.31) we must have equality. We see then that D_{z^{k+1}} is obtained from D_{z^k} by replacing the column A_{j̃} by the column A_ĵ. More precisely, if

D_{z^k} = [A_{j_1} | ... | A_{j_{i−1}} | A_{j̃} | A_{j_{i+1}} | ... | A_{j_m}]

and if j_k < ĵ < j_{k+1}, then

D_{z^{k+1}} = [A_{j_1} | ... | A_{j_{i−1}} | A_{j_{i+1}} | ... | A_{j_k} | A_ĵ | A_{j_{k+1}} | ... | A_{j_m}] .

Let E be the matrix E = [A_{j_1} | ... | A_{j_{i−1}} | A_ĵ | A_{j_{i+1}} | ... | A_{j_m}]. Then D_{z^{k+1}}^{-1} = P E^{-1}, where the matrix P permutes the columns of D_{z^{k+1}} in such a way that E = D_{z^{k+1}} P. Next, if

A_ĵ = Σ_{ℓ=1}^{m} γ_{j_ℓ} A_{j_ℓ} ,

it is easy to check that E^{-1} = M D_{z^k}^{-1}, where M coincides with the m × m identity matrix except in its ith column, which is

(−γ_{j_1}/γ_{j̃}, ..., −γ_{j_{i−1}}/γ_{j̃}, 1/γ_{j̃}, −γ_{j_{i+1}}/γ_{j̃}, ..., −γ_{j_m}/γ_{j̃})' .

Then D_{z^{k+1}}^{-1} = P M D_{z^k}^{-1}, so that these inverses can be easily computed.

Remark 2: The similarity between Step 2 of Phase II and Step 2 of the algorithm in 3.3.4 is striking. The basic variables at z^k correspond to the variables w^k, and the non-basic variables correspond to u^k. For each j ∉ I_{z^k} we can interpret the number c_j − λ'_{z^k} A_j as the net increase in the objective value per unit increase in the jth component of z^k. This net increase is due to the direct increase c_j minus the indirect decrease λ'_{z^k} A_j due to the compensating changes in the basic variables necessary to maintain feasibility. The analogous quantity in 3.3.4 is (∂f^0/∂u_j)(x^k) − λ^{k}' (∂f/∂u_j)(x^k).

Remark 3: By eliminating any dependent equations in (4.24) we can guarantee that the matrix A has rank m. Hence at any degenerate basic feasible solution z^k we can always find Ī_{z^k} ⊃ I_{z^k} such that Ī_{z^k} has m elements and {A_j | j ∈ Ī_{z^k}} is a linearly independent set. We can apply Phase II using Ī_{z^k} instead of I_{z^k}. But then in Step 4 it may turn out that θ = 0, so that z^{k+1} = z^k. The reason for this is that Ī_{z^k} is not unique, so that we have to try various alternatives for Ī_{z^k} until we find one for which θ > 0. In this way the non-degeneracy assumption can be eliminated. For details see Canon, et al. [1970].

We now describe how to obtain an initial basic feasible solution.

Phase I:

Step 1. By multiplying some of the equality constraints in (4.24) by −1 if necessary, we can assume that b ≥ 0. Replace the LP (4.24) by the LP (4.32) involving the variables x and y:

Maximize − Σ_{i=1}^{m} y_i
subject to a_{i1} x_1 + ... + a_{in} x_n + y_i = b_i , 1 ≤ i ≤ m ,
x_j ≥ 0 , 1 ≤ j ≤ n ;  y_i ≥ 0 , 1 ≤ i ≤ m .   (4.32)

Go to Step 2.

Step 2. Note that (x, y) = (0, b) is a basic feasible solution of (4.32). Apply Phase II to (4.32) starting with this solution.
Phase II must terminate in an optimal basic feasible solution (x*, y*), since the value of the objective function in (4.32) lies between − Σ_{i=1}^{m} b_i and 0. Go to Step 3.

Step 3. If y* = 0, then x* is a basic feasible solution for (4.24). If y* ≠ 0, then by Exercise 3 below, (4.24) has no feasible solution.

Exercise 3: Show that (4.24) has a feasible solution iff y* = 0.
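The two phases can be sketched in code. The following Python sketch (illustrative, written under the chapter's non-degeneracy assumption; the function names and the example LP are ours, not the text's) implements Steps 1–4 of Phase II as a dense revised simplex, and Phase I via the artificial-variable problem (4.32):

```python
import numpy as np

def phase2(A, b, c, basis, tol=1e-9):
    """Steps 1-4 of Phase II for: maximize c'x over Ax = b, x >= 0,
    starting from the basic feasible solution given by `basis`.
    Returns (x, basis) at an optimum, or None if sup = +infinity."""
    m, n = A.shape
    basis = list(basis)
    while True:
        D = A[:, basis]
        x_B = np.linalg.solve(D, b)                  # basic variables of z^k
        lam = np.linalg.solve(D.T, c[basis])         # shadow prices (Step 2)
        # Step 2: find any non-basic j with reduced cost c_j - lam' A_j > 0
        j_hat = next((j for j in range(n) if j not in basis
                      and c[j] - lam @ A[:, j] > tol), None)
        if j_hat is None:                            # optimal (Lemma 3)
            x = np.zeros(n)
            x[basis] = x_B
            return x, basis
        gamma = np.linalg.solve(D, A[:, j_hat])      # Step 3: gamma^k
        if np.all(gamma <= tol):
            return None                              # unbounded (Lemma 4)
        # Step 4: ratio test, theta = min z_j / gamma_j over gamma_j > 0
        _, i_out = min((x_B[i] / gamma[i], i) for i in range(m)
                       if gamma[i] > tol)
        basis[i_out] = j_hat                         # j~ leaves, j^ enters

def phase1(A, b, tol=1e-9):
    """Phase I (problem (4.32)): maximize -sum(y_i) over Ax + y = b,
    x >= 0, y >= 0.  Returns a feasible basis for (4.24), or None if
    there is no feasible solution (Exercise 3)."""
    m, n = A.shape
    s = np.where(b < 0, -1.0, 1.0)                   # Step 1: make b >= 0
    A1 = np.hstack([A * s[:, None], np.eye(m)])
    c1 = np.concatenate([np.zeros(n), -np.ones(m)])
    # (x, y) = (0, b) is a basic feasible solution of (4.32); the Phase I
    # objective lies in [-sum(b), 0], so phase2 cannot report "unbounded".
    x1, basis = phase2(A1, b * s, c1, basis=list(range(n, n + m)))
    if x1[n:].sum() > tol:
        return None                                  # y* != 0: infeasible
    # Under non-degeneracy, y* = 0 means no artificial column stays basic.
    return basis

# Hypothetical example: maximize 3x1 + 5x2 s.t. x1 + x2 <= 4, x1 + 3x2 <= 6,
# put in standard form with slack variables x3, x4.
A = np.array([[1., 1., 1., 0.], [1., 3., 0., 1.]])
b = np.array([4., 6.])
c = np.array([3., 5., 0., 0.])
feasible_basis = phase1(A, b)
x, basis = phase2(A, b, c, feasible_basis)           # x = (3, 1, 0, 0)
```

For clarity this sketch re-factors D_{z^k} from scratch at every iteration; the product-form update D_{z^{k+1}}^{-1} = P M D_{z^k}^{-1} of Remark 1 would avoid that cost.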

4.4 LP Theory of a Firm in a Competitive Economy