
∂L/∂S_t = β^t ∂π/∂S_t − µ_1t ∂f/∂S_t − µ_2t ∂h/∂S_t ≤ 0;  S_t ≥ 0;  S_t · ∂L/∂S_t = 0   [ ]

∂L/∂µ_1t = S_t + f(S_t, g_t) − S_{t+1} = 0   [ ]

∂L/∂µ_2t = h(S_t, g_t) − b ≥ 0;  µ_2t ≥ 0;  µ_2t · ∂L/∂µ_2t = 0   [ ]

Equations [ ] and [ ] are written as inequalities because of the non-negativity restrictions on g_t and S_t, which allow for the possibility of a boundary solution. Equations [ ], [ ] and [ ] represent complementary slackness conditions of non-linear programming. Equation [ ], for example, requires that the present value of the marginal benefit of water be no greater than its aggregate marginal imputed costs. Complementary slackness means that if the optimal solution calls for the active use of groundwater (g_t > 0), the marginal net benefit must be exactly equal to the aggregate imputed marginal costs, as would be the situation in the classical optimization problem. In equation [ ], complementary slackness requires that when the inequality constraint is satisfied at the solution as a strict inequality, the corresponding dual variable µ_2t equals zero at the solution. Equations [ ] to [ ] can be used to solve for g_t, S_t, µ_1t and µ_2t, i.e., the optimal rate of groundwater extraction at any time t, the level of the remaining groundwater stock, and the shadow value associated with each constraint.
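The complementary slackness logic described above can be checked numerically. The following is a minimal sketch for a one-period problem with a quadratic net-benefit function; the function `solve` and all parameter values are illustrative assumptions, not taken from the text:

```python
# Illustrative complementary slackness check for a one-period
# groundwater problem (hypothetical functional form and numbers):
#   maximize  pi(g) = a*g - 0.5*b*g**2   subject to  0 <= g <= cap
# Kuhn-Tucker stationarity:  a - b*g - mu = 0,
# with mu >= 0 and mu*(cap - g) = 0 (complementary slackness).

def solve(a, b, cap):
    """Return (g, mu) satisfying the Kuhn-Tucker conditions."""
    g_unconstrained = a / b          # interior candidate: pi'(g) = 0
    if g_unconstrained <= cap:       # constraint slack -> mu = 0
        return g_unconstrained, 0.0
    # Constraint binds: g = cap, and mu absorbs the excess marginal benefit.
    return cap, a - b * cap

# Case 1: interior solution -- marginal benefit exhausted, mu = 0.
g1, mu1 = solve(a=4.0, b=2.0, cap=5.0)
assert g1 == 2.0 and mu1 == 0.0

# Case 2: boundary solution -- g = cap, shadow value mu > 0.
g2, mu2 = solve(a=4.0, b=2.0, cap=1.0)
assert g2 == 1.0 and mu2 == 2.0

# Complementary slackness mu*(cap - g) = 0 holds in both cases.
for (g, mu, cap) in [(g1, mu1, 5.0), (g2, mu2, 1.0)]:
    assert mu * (cap - g) == 0.0
```

In the binding case, µ measures how much the marginal benefit of water at the cap exceeds the marginal cost, which is exactly the shadow-value interpretation given above.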

3. Optimal Control Theory

3.1 Basic Model

Optimal control theory is a dynamic optimization technique developed by Pontryagin et al., which deals with optimization problems subject to constraints in the form of equations of motion. This method is an improvement over the classical calculus of variations developed by the Bernoulli brothers in the 1690s, which was applied in economics in the late 1920s and early 1930s (Ramsey, 1928; Hotelling, 1931). Compared to the calculus of variations, optimal control theory can afford insights into a problem that might be less readily apparent through the calculus of variations, and it also applies to problems for which the latter is not convenient (Kamien and Schwartz; Hoy et al.). In optimal control problems, variables are divided into three categories: the time variable, state variables and control variables. State variables describe the condition of the system, such as the groundwater stock in the aquifer. The rate of change of a state variable may depend on the value of that variable, on time, or on other variables which can be controlled at any time by the operator of the system. These other variables are called control variables, for instance the groundwater extraction rate. Chiang stated that it is the control variable that gives optimal control theory its name and occupies the center of the stage in this approach. The movement of the state variables is governed by first-order differential equations, or equations of motion. A simple optimal control problem for groundwater resource allocation can be formulated as follows (see Syaukat):

Maximize_{g_t}  Π = ∫_0^T β^t π(S_t, g_t, t) dt   [ ]

subject to

Ṡ = f(S_t, g_t, t)   [ ]

S(0) = S_0, externally determined   [ ]

Equation [ ] represents the objective function of the problem, which is stated as a maximization of the present value of net benefits, Π, generated from the use of the groundwater resource over a planning period [0, T]; π(·)
is the annual net benefit function to the operator; S_t is the state variable of the system, which is the stock of groundwater in the aquifer at time t; g_t is the control variable, which is the groundwater extraction rate at time t; f(·) is the equation of motion, or differential equation for the state variable; Ṡ = ∂S/∂t represents the rate of change of the stock variable (groundwater stock) with respect to time; S_0 and β remain as defined before. The problem is to choose the path of the control variable g_t over [0, T] to yield the largest possible Π. To solve the above problem, we define a new function called a Hamiltonian (after the Irish mathematician William Rowan Hamilton, 1805-1865). The Hamiltonian function associated with equations [ ] to [ ] is

H(S_t, g_t, λ_t, t) = β^t π(S_t, g_t, t) + λ_t f(S_t, g_t, t)   [ ]

where S_t, g_t, f(·) and π(·) have the same meanings as before, H(·) is the present-value Hamiltonian function, and λ_t is a costate variable, which tells how the maximum value varies as the constraint varies. λ_t represents the shadow value of the groundwater resource: the extra value obtainable when a unit of groundwater resource is added. The maximum principle is applied to solve problem [ ]. An optimal solution to the above problem is a triplet, S_t*, g_t* and λ_t*, and must satisfy the following conditions:

(a) The control variable g_t maximizes H(S_t, g_t, λ_t, t); that is,

∂H/∂g_t = β^t ∂π/∂g_t + λ_t ∂f/∂g_t = 0   [ ]

(b) The state and costate variables satisfy a pair of differential equations:

Ṡ_t = ∂H/∂λ_t = f(S_t, g_t, t)   [ ]

λ̇_t = −∂H/∂S_t = −β^t ∂π/∂S_t − λ_t ∂f/∂S_t   [ ]

(c) The boundary conditions explained in equation [ ] are satisfied.

These three equations, [ ], [ ] and [ ], characterize the optimal solution to the problem. From the above three equations (one algebraic equation and two differential equations), a specific solution can be derived.
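Conditions (a) and (b) can be solved numerically once functional forms are chosen. The sketch below assumes an illustrative net-benefit function π(S, g) = a·g − ½b·g² − c·(S̄ − S)·g (pumping cost rises as the stock falls) and equation of motion f = R − g (recharge minus extraction); all parameter values are invented for the example, and the terminal condition λ(T) = 0 (free terminal stock) is an assumption. The state-costate system is integrated by Euler steps and the unknown initial costate λ(0) is found by shooting:

```python
# Numerical sketch of the maximum principle for a toy groundwater model.
# Assumed (illustrative) forms:
#   pi(S, g) = a*g - 0.5*b*g**2 - c*(SBAR - S)*g
#   f(S, g)  = R - g
# Conditions (a)-(b) then give:
#   g_t      = (a - c*(SBAR - S_t) - beta**(-t) * lam_t) / b   [from dH/dg = 0]
#   dS/dt    = R - g_t
#   dlam/dt  = -beta**t * c * g_t
# with lam(T) = 0 (free terminal stock); we shoot on lam(0).

A, B, C = 4.0, 1.0, 0.05          # net-benefit parameters (assumed)
R, SBAR, S0 = 1.0, 100.0, 100.0   # recharge, full stock, initial stock
BETA, T, DT = 0.95, 20.0, 0.01    # discount factor, horizon, Euler step

def simulate(lam0):
    """Euler-integrate the state/costate system; return (lam(T), S path, g path)."""
    S, lam, t = S0, lam0, 0.0
    S_path, g_path = [], []
    for _ in range(int(round(T / DT))):
        g = max(0.0, (A - C * (SBAR - S) - BETA ** (-t) * lam) / B)
        S += DT * (R - g)                 # equation of motion
        lam += DT * (-BETA ** t * C * g)  # costate equation
        t += DT
        S_path.append(S)
        g_path.append(g)
    return lam, S_path, g_path

# Bisection on lam(0): lam(T) < 0 at lam0 = 0, lam(T) > 0 for large lam0.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if simulate(mid)[0] < 0.0:
        lo = mid
    else:
        hi = mid
lam0 = 0.5 * (lo + hi)
lam_T, S_path, g_path = simulate(lam0)

assert abs(lam_T) < 1e-6               # transversality lam(T) = 0 met
assert min(S_path) > 0.0               # stock stays positive for these parameters
assert all(g >= 0.0 for g in g_path)   # extraction is admissible
```

The solved λ(0) is the shadow value of the initial stock under these assumed forms; richer specifications would only change the right-hand sides of the two differential equations.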
Once the optimal control variable path g_t* is found, the equation of motion makes it possible to construct the related optimal state variable path S_t* and the associated costate variable path λ_t*.

3.2 Extended Model

In most cases, the problem of optimal control includes not only equality constraints but also inequality constraints. In this situation, there are no significant differences in the maximum principle from those presented in the preceding section. However, when inequality constraints are present, the Hamiltonian function needs to be extended to include the constraints, forming a Lagrangian function. For simplicity, let us consider the above problem with an additional inequality constraint [ ]:

Maximize_{g_t}  Π = ∫_0^T β^t π(S_t, g_t, t) dt   [ ]

subject to

Ṡ = f(S_t, g_t, t)   [ ]

h(S_t, g_t, t) ≥ 0   [ ]

S(0) = S_0, externally determined   [ ]

where h(·) is an inequality constraint (a constant greater than or equal to zero) included in the problem. The Lagrangian function is the Hamiltonian function [ ] plus the constraint [ ] multiplied by its associated Lagrange multiplier µ_t. The Lagrangian function is presented as

L(S_t, g_t, λ_t, µ_t, t) = H(S_t, g_t, λ_t, t) + µ_t h(S_t, g_t, t)
                         = β^t π(S_t, g_t, t) + λ_t f(S_t, g_t, t) + µ_t h(S_t, g_t, t)   [ ]

As in [ ], the Lagrangian function is an extended version of the Hamiltonian function [ ]. In this situation, the extended version of the optimal control problem [ ] can be solved by using a modified version of the non-linear programming technique; thus, the Kuhn-Tucker conditions can be applied to solve this problem. The Kuhn-Tucker conditions for the above problem are as follows. (Notice that in the previous example, equation [ ], the inequality constraint is required to be greater than or equal to b, where b ≥ 0. In this problem, for simplicity, the constraint is written directly as greater than or equal to 0, and not b.)
(a) At any time t, the optimal control variable g_t* maximizes the Hamiltonian [ ] subject to the condition that g_t belongs to the set of admissible controls defined by [ ]:

∂L*/∂g_t = β^t ∂π/∂g_t + λ_t ∂f/∂g_t + µ_t ∂h/∂g_t = 0   [ ]

µ_t ≥ 0;  h(S_t, g_t, t) ≥ 0;  µ_t h(S_t, g_t, t) = 0   [ ]

where the asterisk on L indicates that the derivatives are evaluated at (S_t*, g_t*). The multiplier µ_t is piecewise-continuous and continuous at each point of continuity of g_t*.

(b) The state variable satisfies the differential equation

Ṡ_t = ∂L*/∂λ_t = f(S_t, g_t, t)   [ ]

(c) The costate variable is continuous and has a piecewise-continuous derivative satisfying

λ̇_t = −∂L*/∂S_t = −β^t ∂π/∂S_t − λ_t ∂f/∂S_t − µ_t ∂h/∂S_t   [ ]

(d) The boundary conditions [ ] are satisfied.

Similarly, specific solutions for g_t, S_t, λ_t and µ_t can be found from the above conditions.
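Condition (a) can be illustrated at a single instant. The sketch below uses assumed forms (not from the text): a quadratic net benefit π(g) = a·g − ½b·g², equation of motion f = R − g, and an upper-bound constraint written as h(g) = gmax − g ≥ 0 with multiplier µ_t. Given the current costate λ_t, the Hamiltonian is maximized over the admissible set [0, gmax], and µ_t is recovered from the stationarity of the Lagrangian:

```python
# Sketch of Kuhn-Tucker condition (a) for the extended model at one instant.
# Assumed illustrative forms:
#   H = beta**t * (a*g - 0.5*b*g**2) + lam * (R - g)
#   h(g) = gmax - g >= 0, with multiplier mu (dh/dg = -1)
# Lagrangian stationarity:  beta**t*(a - b*g) - lam - mu = 0.

def constrained_control(a, b, beta, t, lam, gmax):
    """Maximize H over the admissible controls [0, gmax]; return (g, mu)."""
    g_interior = (a - beta ** (-t) * lam) / b   # dH/dg = 0 candidate
    g = min(max(g_interior, 0.0), gmax)         # project onto [0, gmax]
    # mu is whatever remains in the stationarity condition when the bound binds:
    mu = beta ** t * (a - b * g) - lam
    if g < gmax:
        mu = 0.0    # constraint slack -> multiplier vanishes
    return g, mu

g, mu = constrained_control(a=4.0, b=1.0, beta=0.95, t=0.0, lam=1.0, gmax=2.0)
# The interior candidate is 3.0 > gmax, so the bound binds and mu > 0.
assert g == 2.0 and mu > 0.0
assert mu * (2.0 - g) == 0.0   # complementary slackness from condition (a)
```

When the bound does not bind, µ_t = 0 and the condition collapses to the basic-model condition ∂H/∂g_t = 0, which is exactly the "no significant difference" point made above.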

4. Dynamic Programming