Copyright © IFAC System, Structure and Control, Oaxaca, Mexico, 8-10 December 2004


MODEL PREDICTIVE CONTROL FOR DISCRETE TIME POLYNOMIAL CONTROL SYSTEMS: A CONVEX APPROACH

Tobias Raff*, Rolf Findeisen*, Christian Ebenbauer*, Frank Allgöwer*

* Institute for Systems Theory in Engineering (IST), University of Stuttgart, Germany, {raff,findeise,ce,allgower}@ist.uni-stuttgart.de

Abstract: In this paper it is shown that model predictive control (MPC) for discrete time polynomial control systems can be formulated as a convex optimization problem. The proposed approach is based on recently developed numerical algorithms for the optimization problem of minimizing polynomial functions subject to polynomial inequality constraints. The key advantage of applying this approach to MPC is that a global minimum of a nonconvex polynomial optimization problem can be obtained via semidefinite programming. Copyright © 2004 IFAC

Keywords: Predictive Control, Polynomial Optimization, Polynomial Control Systems

1. INTRODUCTION

Model predictive control (MPC) has become a popular feedback strategy, especially for control systems subject to input and state constraints (Allgöwer et al., 1999; Mayne et al., 2000; Rawlings, 2000). The basic principle of MPC is to determine the applied control input by solving an open-loop optimal control problem at each time instant. For discrete time linear control systems with linear constraints and a quadratic cost function, the MPC open-loop optimal control problem is a convex quadratic optimization problem. The key advantage in this case is that any local minimum is also a global minimum. Furthermore, efficient numerical algorithms exist which are guaranteed to converge to the global minimum (Boyd and Vandenberghe, 2004). When the control system is nonlinear, the MPC open-loop optimal control problem is, in general, a nonconvex optimization problem. Nonconvex optimization problems are often hard to solve, and typically only a local minimum rather than a global minimum is obtained. To avoid this problem in MPC, suboptimal model predictive controllers have been developed which require only a feasible solution instead of an optimal solution (Mayne et al., 2000; Scokaert et al., 1999).

In this paper, a convex approach for MPC of discrete time polynomial control systems is studied. Polynomial control systems are systems whose vector fields are polynomial functions. This class includes linear control systems, and many nonlinear control problems can be formulated as or transformed into polynomial control systems. For this system class, the MPC open-loop optimal control problem with a polynomial cost function is a polynomial optimization problem, i.e. the problem of minimizing a polynomial function subject to polynomial inequality constraints. One approach to solving such problems is to reduce the original optimization problem to a semidefinite program via the theory of moments (Lasserre, 2001) or the theory of nonnegative polynomials (Parrilo and Sturmfels, 2000). The key advantage of applying this approach to MPC is that a global minimum of the nonconvex MPC open-loop optimal control problem can be obtained. Furthermore, this approach allows nonconvex state and input constraints to be introduced in the MPC setup. This is useful, for example, in the control of singular systems or systems with quantized inputs.
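To give a flavor of the nonnegative-polynomials route mentioned above, the following sketch bounds a hypothetical univariate quartic from below by maximizing gamma such that p(x) - gamma admits a sum-of-squares (Gram matrix) representation. The example polynomial, the monomial basis z = [1, x, x^2], and the use of the cvxpy modeling package are illustrative assumptions, not taken from the paper.

```python
import cvxpy as cp

# Hypothetical quartic p(x) = x^4 - 3x^3 + 2x^2 + 1; monomial basis z = [1, x, x^2].
# Maximize gamma such that p(x) - gamma = z^T Q z with Q >= 0 (sum of squares),
# which is a semidefinite program and yields a certified global lower bound on p.
Q = cp.Variable((3, 3), symmetric=True)
gamma = cp.Variable()

constraints = [
    Q >> 0,                        # Gram matrix is positive semidefinite
    Q[0, 0] == 1 - gamma,          # constant term
    2 * Q[0, 1] == 0,              # coefficient of x
    Q[1, 1] + 2 * Q[0, 2] == 2,    # coefficient of x^2
    2 * Q[1, 2] == -3,             # coefficient of x^3
    Q[2, 2] == 1,                  # coefficient of x^4
]
prob = cp.Problem(cp.Maximize(gamma), constraints)
prob.solve()
print("certified global lower bound:", gamma.value)
```

For univariate polynomials, nonnegativity is equivalent to being a sum of squares, so the bound is tight; in the multivariate case the same construction gives a lower bound (a relaxation), which is the starting point of the techniques reviewed in Section 4.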

The remainder of the paper is organized as follows: In Section 2, the necessary background of discrete time MPC is presented. In Section 3, the MPC open-loop optimal control problem for discrete time polynomial control systems is formulated as a polynomial optimization problem. In Section 4, the basic techniques of polynomial optimization are reviewed. In Section 5, the approach is illustrated with a simple example. Finally, conclusions are stated in Section 6.

Throughout the paper, the following notation is used. A polynomial p(x) in x = [x_1, ..., x_n] is a finite linear combination of monomials, i.e. p(x) = Σ_α c_α x^α = Σ_α c_α x_1^{α_1} ··· x_n^{α_n}, where c_α ∈ R and α = [α_1, ..., α_n], α_i ∈ N_0. The degree of a monomial x^α is d = Σ_{i=1}^n α_i, and the degree of a polynomial is the largest degree of its monomials. The set of all polynomials in x = [x_1, ..., x_n] with real coefficients is written as R[x]. A polynomial vector field f: R^n → R^n, f(x) = [f_1(x), ..., f_n(x)]^T, is a vector field whose entries f_i(x) ∈ R[x] are polynomial functions in x. A polynomial p(x) ∈ R[x] is called positive definite if p(0) = 0 and p(x) > 0 for all x ∈ R^n \ {0}, and positive semidefinite if p(x) ≥ 0 for all x ∈ R^n. Let R_+ denote the set of nonnegative real numbers; then K is the class of functions from R_+ to R_+ that are zero at zero, strictly increasing, and continuous.
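As a small illustration of this notation, the sketch below stores a polynomial p(x) as a map from exponent tuples α to coefficients c_α and evaluates it monomial by monomial; the dictionary encoding and the example polynomial are illustrative choices only.

```python
import numpy as np

# p(x) = x1^2 + x2^2 - 1.5*x1*x2, stored as {alpha: c_alpha} with alpha = (a_1, ..., a_n).
p = {(2, 0): 1.0, (0, 2): 1.0, (1, 1): -1.5}

def evaluate(poly, x):
    """Evaluate sum_alpha c_alpha * x_1^{a_1} * ... * x_n^{a_n} at the point x."""
    x = np.asarray(x, dtype=float)
    return sum(c * np.prod(x ** np.array(a)) for a, c in poly.items())

def degree(poly):
    """Degree of the polynomial: the largest monomial degree a_1 + ... + a_n."""
    return max(sum(a) for a, c in poly.items() if c != 0.0)

print(evaluate(p, [1.0, 2.0]))   # 2.0
print(degree(p))                 # 2
```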

2. MODEL PREDICTIVE CONTROL

2.1 Setup of MPC for Polynomial Control Systems

Consider the stabilization problem for the class of discrete time polynomial systems described by the following difference equations

x_{k+1} = f(x_k, u_k),    (1)

where x_k ∈ R^n is the state and u_k ∈ R^m is the input. Furthermore, f: R^n × R^m → R^n is a polynomial vector field with f(0,0) = 0. The set X is a closed subset of R^n and the set U a compact subset of R^m, both containing the origin. In their simplest form, X and U are given by

X := {x_k ∈ R^n | x_min ≤ x_k ≤ x_max},
U := {u_k ∈ R^m | u_min ≤ u_k ≤ u_max},    (2)

where x_min, x_max and u_min, u_max are the componentwise lower and upper bounds on the states and inputs.

The control input is determined by solving at each time instant k the MPC open-loop optimal control problem

min_v J(v, x_k)    (3)
subject to  x_{l+1|k} = f(x_{l|k}, u_{l|k}),  x_{k|k} = x_k,
            x_{l|k} ∈ X,  u_{l|k} ∈ U,  l = k, ..., k+N-1,
            x_{N|k} ∈ Ω,

with

J(v, x_k) = E(x_{N|k}) + Σ_{l=k}^{k+N-1} F(x_{l|k}, u_{l|k}),

where v denotes the stacked control vector v = [u_{k|k}, ..., u_{k+N-1|k}]^T, N the length of the prediction and control horizon, and x_{l|k} the predicted state at time instant l, obtained by applying the input sequence u_{k|k}, ..., u_{l-1|k} to the system (1) from the initial condition x_k. Furthermore, the stage cost F: R^n × R^m → R is assumed to be a polynomial map that satisfies F(0,0) = 0 and is lower bounded by a class K function α_F, i.e. α_F(||x_k||) ≤ F(x_k, u_k) for all (x_k, u_k) ∈ X × U. The terminal penalty term E: R^n → R is assumed to be a polynomial map with E(0) = 0, and Ω := {x_{N|k} ∈ R^n | x_{N|k}^T P x_{N|k} ≤ α} is the terminal constraint set. The optimal solution of the MPC open-loop optimal control problem (3) is denoted by v* = [u*_{k|k}, ..., u*_{k+N-1|k}]^T. The first element of v* is applied to the system (1), i.e. u_k = u*_{k|k}. Hence, MPC is a feedback law, since the MPC open-loop optimal control problem (3) is solved at each time instant k using the actual state x_k.
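To make the structure of problem (3) concrete, the following sketch sets it up for a hypothetical scalar polynomial system and hands it to a generic local nonlinear-programming solver. The dynamics, cost weights, horizon, and use of scipy are illustrative assumptions only; the point of the paper is that this nonconvex problem can instead be solved to global optimality via the semidefinite programming techniques reviewed in Section 4.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar polynomial system x_{k+1} = f(x_k, u_k); illustration only.
def f(x, u):
    return x - 0.5 * x**3 + u

N = 5                      # prediction and control horizon
x_max, u_max = 2.0, 1.0    # box constraints as in (2)
alpha = 0.1                # terminal set: x_{N|k}^2 <= alpha

def predict(v, x0):
    """Predicted states x_{k|k}, ..., x_{N|k} under the input sequence v."""
    xs = [x0]
    for u in v:
        xs.append(f(xs[-1], u))
    return np.array(xs)

def J(v, x0):
    """Cost (3) with illustrative quadratic stage cost F and terminal penalty E."""
    xs = predict(v, x0)
    return np.sum(xs[:-1]**2) + np.sum(np.asarray(v)**2) + 10.0 * xs[-1]**2

def solve_ocp(x0):
    cons = [
        {"type": "ineq", "fun": lambda v: x_max - np.abs(predict(v, x0)[1:])},  # x in X
        {"type": "ineq", "fun": lambda v: alpha - predict(v, x0)[-1]**2},       # terminal set
    ]
    res = minimize(J, np.zeros(N), args=(x0,), method="SLSQP",
                   bounds=[(-u_max, u_max)] * N, constraints=cons)
    return res.x   # v*; in MPC only the first element u*_{k|k} is applied

print(solve_ocp(1.0))
```

With polynomial f, F, and E and the box and quadratic constraints above, (3) is exactly a polynomial optimization problem, which is what makes the convex reformulation of the following sections applicable.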

For closed-loop stability, the terminal penalty E, the terminal region Ω, and a local control law k(x_k) are assumed to satisfy the following conditions:

(a) E(x_k) > 0, ∀x_k ∈ R^n \ {0},
(b) Ω ⊆ X, 0 ∈ Ω,
(c) k(x_k) ∈ U, ∀x_k ∈ Ω,
(d) f(x_k, k(x_k)) ∈ Ω, ∀x_k ∈ Ω,
(e) E(f(x_k, k(x_k))) − E(x_k) ≤ −F(x_k, k(x_k)), ∀x_k ∈ Ω.

Assumptions (a) and (e) enforce that E is a local control Lyapunov function in Ω. Assumptions (b) and (c) ensure that the state and input constraints are satisfied in the terminal region Ω, as well as feasibility of the MPC open-loop optimal control problem (3) at the next time instant k+1. Furthermore, assumption (d) ensures that the terminal region Ω is positively invariant for the closed-loop system. A number of methods to determine a terminal penalty term and a terminal region exist (Allgöwer et al., 1999; Mayne et al., 2000). An example is the use of a local stabilizing linear control law to determine a suitable terminal penalty E and a terminal region Ω.
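The linear-control-law recipe mentioned above can be sketched as follows: linearize (1) at the origin, stabilize the linearization with a linear feedback, and take E(x) = x^T P x, where P solves a discrete Lyapunov equation for the closed loop; Ω is then a sublevel set {x | x^T P x ≤ α}. The matrices below are illustrative assumptions, and α must in general be shrunk until conditions (b)-(e) hold for the nonlinear system.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Illustrative linearization x_{k+1} ~ A x_k + B u_k of (1) at the origin.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # assuming a quadratic stage cost F(x, u) = x'Qx + u'Ru
R = np.array([[1.0]])

# Local stabilizing linear control law u = -K x (LQR for the linearization).
P_are = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P_are @ B, B.T @ P_are @ A)
A_cl = A - B @ K

# Terminal penalty E(x) = x'Px chosen so that the decrease condition (e) holds
# for the linearized closed loop: A_cl' P A_cl - P = -(Q + K'RK).
P = solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)

alpha = 0.5              # terminal region Omega = {x | x'Px <= alpha}, to be validated
print(P)
```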
