Chapter 3

Foundations of the Calculus of Variations and Optimal Control

In this chapter, we treat time as a continuum and derive optimality conditions for the extremization of certain functionals. We consider both variational calculus problems that are not expressed as optimal control problems and optimal control problems themselves. In this chapter, we rely on the classical notion of the variation of a functional. This classical perspective is the fastest way to obtain useful results that allow simple example problems to be solved that bolster one's understanding of continuous-time dynamic optimization. Later in this book we will employ the more modern perspective of infinite-dimensional mathematical programming to derive the same results. The infinite-dimensional mathematical programming perspective will bring with it the benefit of shedding light on how nonlinear programming algorithms for finite-dimensional problems may be generalized and effectively applied to function spaces. In this chapter, however, we will employ the notion of a variation to derive optimality conditions in a fashion very similar to that employed by the variational-calculus pioneers.

The following is a list of the principal topics covered in this chapter:

Section 3.1: The Calculus of Variations. A formal definition of a variation is provided, along with a statement of a typical fixed-endpoint calculus of variations problem. The necessary conditions known as the Euler-Lagrange equations are derived. Other optimality conditions are also presented.

Section 3.2: Calculus of Variations Examples. Illustrative applications of the optimality conditions derived in Section 3.1 are presented.

Section 3.3: Continuous-Time Optimal Control. A canonical optimal control problem is presented. The notion of a variation is employed to derive necessary conditions for that problem, including the Pontryagin minimum principle. Sufficiency is also discussed.

Section 3.4: Optimal Control Examples. Illustrative applications of the optimality conditions derived in Section 3.3 are presented.

Section 3.5: The Linear-Quadratic Optimal Control Problem. The linear-quadratic optimal control problem is presented; its optimality conditions are derived and employed to solve an example problem.
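As a brief preview of the notation developed in Section 3.1 (stated here for orientation only; the derivation and the precise smoothness assumptions appear there), the typical fixed-endpoint problem seeks

$$\min_{x} \; J(x) = \int_{t_0}^{t_f} f\!\left( x, \frac{dx}{dt}, t \right) dt \qquad \text{subject to} \qquad x(t_0) = x_0, \quad x(t_f) = x_f,$$

and its smooth extremals satisfy the Euler-Lagrange equation

$$\frac{\partial f}{\partial x} - \frac{d}{dt}\, \frac{\partial f}{\partial \left( dx/dt \right)} = 0.$$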


3.1 The Calculus of Variations

In this section, we take a historical approach in the spirit of the earliest investigations of the calculus of variations, assume all operations performed are valid, and avoid the formal proof of most propositions. As already indicated, our main interest is in necessary and sufficient conditions for optimality.

3.1.1 The Space $\mathcal{C}^1\left[t_0, t_f\right]$

The space of once continuously differentiable scalar functions relative to the real interval $\left[t_0, t_f\right]$ is denoted by $\mathcal{C}^1\left[t_0, t_f\right]$.
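One standard norm for this space, recorded here for completeness (this is the usual $\mathcal{C}^1$ sup-norm from elementary functional analysis, not a definition particular to this book), is

$$\left\| x \right\|_{\mathcal{C}^1} = \max_{t \in [t_0, t_f]} \left| x(t) \right| + \max_{t \in [t_0, t_f]} \left| \frac{dx(t)}{dt} \right|,$$

under which $\mathcal{C}^1\left[t_0, t_f\right]$ is a complete normed linear (Banach) space.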