Nonlinear Programming SECOND EDITION
Dimitri P. Bertsekas Massachusetts Institute of Technology
WWW site for book Information and Orders http://world.std.com/~athenasc/index.html
Athena Scientific, Belmont, Massachusetts
Contents

1. Unconstrained Optimization
   1.1. Optimality Conditions
      1.1.1. Variational Ideas
      1.1.2. Main Optimality Conditions
   1.2. Gradient Methods - Convergence
      1.2.1. Descent Directions and Stepsize Rules
      1.2.2. Convergence Results
   1.3. Gradient Methods - Rate of Convergence
      1.3.1. The Local Analysis Approach
      1.3.2. The Role of the Condition Number
      1.3.3. Convergence Rate Results
   1.4. Newton's Method and Variations
   1.5. Least Squares Problems
      1.5.1. The Gauss-Newton Method
      1.5.2. Incremental Gradient Methods*
      1.5.3. Incremental Forms of the Gauss-Newton Method*
   1.6. Conjugate Direction Methods
   1.7. Quasi-Newton Methods
   1.8. Nonderivative Methods
      1.8.1. Coordinate Descent
      1.8.2. Direct Search Methods
   1.9. Discrete-Time Optimal Control Problems*
   1.10. Some Practical Guidelines
   1.11. Notes and Sources
2. Optimization Over a Convex Set
   2.1. Optimality Conditions
   2.2. Feasible Directions and the Conditional Gradient Method
      2.2.1. Descent Directions and Stepsize Rules
      2.2.2. The Conditional Gradient Method
   2.3. Gradient Projection Methods
      2.3.1. Feasible Directions and Stepsize Rules Based on Projection
      2.3.2. Convergence Analysis*
   2.4. Two-Metric Projection Methods
   2.5. Manifold Suboptimization Methods
   2.6. Affine Scaling for Linear Programming
   2.7. Block Coordinate Descent Methods*
   2.8. Notes and Sources

3. Lagrange Multiplier Theory
   3.1. Necessary Conditions for Equality Constraints
      3.1.1. The Penalty Approach
      3.1.2. The Elimination Approach
      3.1.3. The Lagrangian Function
   3.2. Sufficient Conditions and Sensitivity Analysis
      3.2.1. The Augmented Lagrangian Approach
      3.2.2. The Feasible Direction Approach
      3.2.3. Sensitivity*
   3.3. Inequality Constraints
      3.3.1. Karush-Kuhn-Tucker Optimality Conditions
      3.3.2. Conversion to the Equality Case*
      3.3.3. Second Order Sufficiency Conditions and Sensitivity*
      3.3.4. Sufficiency Conditions and Lagrangian Minimization*
      3.3.5. Fritz John Optimality Conditions*
      3.3.6. Refinements*
   3.4. Linear Constraints and Duality*
      3.4.1. Convex Cost Functions and Linear Constraints
      3.4.2. Duality Theory: A Simple Form for Linear Constraints
   3.5. Notes and Sources

4. Lagrange Multiplier Algorithms
   4.1. Barrier and Interior Point Methods
      4.1.1. Linear Programming and the Logarithmic Barrier*
   4.2. Penalty and Augmented Lagrangian Methods
      4.2.1. The Quadratic Penalty Function Method
      4.2.2. Multiplier Methods - Main Ideas
      4.2.3. Convergence Analysis of Multiplier Methods*
      4.2.4. Duality and Second Order Multiplier Methods*
      4.2.5. The Exponential Method of Multipliers*
   4.3. Exact Penalties - Sequential Quadratic Programming*
      4.3.1. Nondifferentiable Exact Penalty Functions
      4.3.2. Differentiable Exact Penalty Functions
   4.4. Lagrangian and Primal-Dual Interior Point Methods*
      4.4.1. First-Order Methods
      4.4.2. Newton-Like Methods for Equality Constraints
      4.4.3. Global Convergence
      4.4.4. Primal-Dual Interior Point Methods
      4.4.5. Comparison of Various Methods
   4.5. Notes and Sources

5. Duality and Convex Programming
   5.1. The Dual Problem
      5.1.1. Lagrange Multipliers
      5.1.2. The Weak Duality Theorem
      5.1.3. Characterization of Primal and Dual Optimal Solutions
      5.1.4. The Case of an Infeasible or Unbounded Primal Problem
      5.1.5. Treatment of Equality Constraints
      5.1.6. Separable Problems and Their Geometry
      5.1.7. Additional Issues About Duality
   5.2. Convex Cost - Linear Constraints*
      5.2.1. Proofs of Duality Theorems
   5.3. Convex Cost - Convex Constraints
   5.4. Conjugate Functions and Fenchel Duality*
      5.4.1. Monotropic Programming Duality
      5.4.2. Network Optimization
      5.4.3. Games and the Minimax Theorem
      5.4.4. The Primal Function
      5.4.5. A Dual View of Penalty Methods
      5.4.6. The Proximal and Entropy Minimization Algorithms
   5.5. Discrete Optimization and Duality
      5.5.1. Examples of Discrete Optimization Problems
      5.5.2. Branch-and-Bound
      5.5.3. Lagrangian Relaxation
   5.6. Notes and Sources

6. Dual Methods
   6.1. Dual Derivatives and Subgradients*
   6.2. Dual Ascent Methods for Differentiable Dual Problems*
      6.2.1. Coordinate Ascent for Quadratic Programming
      6.2.2. Decomposition and Primal Strict Convexity
      6.2.3. Partitioning and Dual Strict Concavity
   6.3. Nondifferentiable Optimization Methods*
      6.3.1. Subgradient Methods
      6.3.2. Approximate and Incremental Subgradient Methods
      6.3.3. Cutting Plane Methods
      6.3.4. Ascent and Approximate Ascent Methods
   6.4. Decomposition Methods*
      6.4.1. Lagrangian Relaxation of the Coupling Constraints
      6.4.2. Decomposition by Right-Hand Side Allocation
   6.5. Notes and Sources