2009 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June 10-12, 2009

WeB08.4

Iterative Learning Control with Saturation Constraints

Sandipan Mishra, Ufuk Topcu, and Masayoshi Tomizuka

Abstract— We consider the problem of synthesizing iterative learning control schemes for linear systems with saturation constraints. The problem of minimizing the tracking error is formulated as a constrained convex optimization problem, namely a linearly constrained quadratic program. Due to the lack of information regarding the disturbances in the process, descent directions cannot be determined without running experiments. This in turn places strict limits on the number of iterations that any iterative optimization scheme may employ. Motivated by this fact, we implement an interior point algorithm, specifically the barrier method. The method is demonstrated on a prototype wafer stage testbed and its performance is compared to other existing methods.

I. INTRODUCTION

Iterative learning control (ILC) is a feedforward control design technique for repetitive processes. ILC algorithms use information from earlier trials (executions) of the repetitive process to improve the performance in the current trial. The key design issue in ILC is the efficient utilization of this information to improve the performance of the closed-loop system using as few trials as possible. Due to its simple design, analysis, and implementation, ILC has been employed in many applications, including industrial robotics [1], injection molding systems [2], rapid thermal processing [3], micro-scale robotic deposition [4], and rehabilitation robotics [5].

For analysis and design, three alternative formulations of the ILC problem have been proposed in the literature. The first is based on the frequency response of the plant and the learning filter [6]. An alternative approach to ILC stability and design, based on 2-dimensional (2-D) system theory, has been developed in [7]. Finally, the ILC design problem has been formulated using a lifted-domain representation in [8]. This formulation enables analysis and design for time-varying linear plants, at the expense of a computational complexity that grows with the cycle length.

Analogous to iterative optimization schemes, ILC algorithms use experimental data collected during the trials of the underlying repetitive process to minimize an objective function. Therefore, there exist interesting parallels between ILC design and iterative optimization algorithms. These similarities have already been explored by Hätönen [9], Owens et al. [10], and Norrlöf [11]. In more recent work [12], robustness and monotonicity of optimization-based ILC schemes have been investigated. On the other hand, this analogy has not been fully exploited, especially for systems with constraints (such as saturation constraints). Driessen et al. [13] proposed a learning scheme for multi-input multi-output square systems with bounded input constraints. Xu et al. [14] used a composite-energy-function-based ILC algorithm that removes the global Lipschitz condition required by [13]. However, both approaches require that the optimum of the unconstrained ILC problem also be optimal for the corresponding constrained problem.

In this paper, we propose an ILC algorithm for linear systems with linear constraints, removing the requirement that the optimum of the unconstrained problem lie in the constraint set. Using the lifted system representation, we formulate the ILC problem as a quadratic program (an optimization problem with a convex quadratic objective and affine constraints [15]). This formulation captures many ILC design problems, including those with truncated ℓ2 and ℓ∞ objective functions and input saturation, rate, and state constraints. An important benefit of this framework is the availability of efficient computational tools [15], as well as the possibility of using system-theoretic interpretations of the underlying optimization problem to reduce its computational complexity. We use a barrier method [15], [16] to solve the proposed ILC problem. The implementation utilizes experimentally collected data to compute the current search direction, which enhances the robustness of the algorithm against modeling uncertainties. We demonstrate the method on a prototype wafer stage with actuator saturation bounds.

S. Mishra is with the Department of Mechanical Science and Engineering at the University of Illinois, Urbana-Champaign ([email protected]). U. Topcu is with Control and Dynamical Systems at the California Institute of Technology ([email protected]). M. Tomizuka is with the Department of Mechanical Engineering at the University of California, Berkeley ([email protected]).

978-1-4244-4524-0/09/$25.00 ©2009 AACC

II. ILC PROBLEM SETUP FOR A FEEDBACK-CONTROLLED REPETITIVE PROCESS

Consider the stable closed-loop system in Figure 1, which executes a repetitive process with N time samples, starting from rest at the beginning of each trial. P is a discrete-time linear time-invariant (LTI) plant controlled by the LTI feedback controller C (designed to stabilize the closed-loop system and reject slowly varying disturbances). The output of the plant for each trial is denoted by yk(j), where the time index j ranges from 0 to N − 1 and the subscript k denotes the trial index (here and throughout). The reference signal to be tracked by the output yk (at trial k) of the plant is r. Further, there is a trial-independent disturbance d. In addition to the control effort uk, a feedforward control signal uf,k is also injected at the input of the plant. The relationships between


Authorized licensed use limited to: CALIFORNIA INSTITUTE OF TECHNOLOGY. Downloaded on August 22, 2009 at 19:40 from IEEE Xplore. Restrictions apply.

the inputs r, uf,k, and d and the outputs yk and ek are

yk = (I + PC)^{-1} P [C r + (uf,k + d)]    (1)
ek = r − yk,    (2)

where the time index j is dropped for notational simplicity.

Fig. 1. Block diagram of the closed-loop system (for trial k).

The ILC design problem aims to obtain the optimal feedforward control effort uf^0(j) by an iterative adjustment based on the data (such as the tracking error) from the previous cycle. In its most general form, the ILC update law is

uf,k+1 = F(uf,k, ek).    (3)

The design of F : Uf × E → Uf is the central problem in ILC, where Uf is the space of admissible feedforward control efforts and E is the space of measured errors ek(·). The learning update law F is a map that refines the a priori knowledge captured by uf,k into the a posteriori knowledge captured by uf,k+1 using the experimental data (such as ek), based on the intent of the designer, while respecting the physical constraints captured by Uf.

The repetitive nature of the process results in a 2-D system [7] with the within-trial time evolution and the trial-to-trial evolution. In order to simplify both the notation and the analysis, we use the lifted ILC formulation [17], [11]. The lifted formulation provides a method for analyzing linear discrete-time repetitive processes: it exploits the finite length of each trial to reduce the ILC problem to a finite-dimensional design problem. We consider a single-input single-output LTI discrete-time repetitive process described by

yk(j) = Gr(q^{-1}) r(j) + Gu(q^{-1}) uf,k(j) + Gd(q^{-1}) d(j)
ek(j) = r(j) − yk(j),    (4)

where Gr, Gu, and Gd are the transfer functions from r, uf, and d to y, respectively, and q^{-1} denotes the unit delay. Using the convolution form of these equations, we get the time-domain expression

yr→k(j) = sum_{i=−∞}^{j} r(i) gr(j − i),    (5)

where yr→k represents the component of yk due to r and gr are the impulse response coefficients of Gr. If the system starts from rest, then yk(0) ≡ 0 for all k, and the infinite sum in (5) reduces to the finite sum

yr→k(j) = sum_{i=0}^{j} r(i) gr(j − i)    (6)

⇒ yk(j) = sum_{i=0}^{j} r(i) gr(j − i) + sum_{i=0}^{j} uf,k(i) gu(j − i) + sum_{i=0}^{j} d(i) gd(j − i),

where gu and gd are the impulse response coefficients of the transfer functions Gu and Gd, respectively. For a signal v defined on the finite interval [0, N − 1], let the lifted signal v be v = [v(0) v(1) . . . v(N − 1)]^T. Then the expressions in (4) can be unified into

yk = Gr r + Gu uf,k + Gd d
ek = r − yk,    (7)

where Gr, Gu, Gd ∈ R^{N×N}. If the closed-loop system is LTI, then

Gr = [ gr(0)        0         ...   0
       gr(1)        gr(0)     ...   0
       ...          ...       ...   0
       gr(N − 1)    gr(N − 2) ...   gr(0) ].    (8)

Similarly, the matrices Gd and Gu can be generated from the corresponding impulse responses. Note that Gr, Gu, and Gd are Toeplitz due to the time-invariance of the underlying linear system and lower triangular due to causality. Further, these matrices can be well-approximated by banded matrices, since the impulse response coefficients become small because of the exponential stability of the closed-loop system [18, p. 141]. This bandedness of the system matrices (Gu, Gr, etc.) offers computational benefits in solving the optimization problems, as discussed in Section IV.

A. ILC as an Optimization Problem

Let us now assume that there are no nonrepetitive events and zero measurement noise (ψk = 0, nk = 0), go back to the lifted ILC problem with a cost function (objective function) J : R^N → R+, and consider the optimization

min_{uf ∈ R^N} J(uf)  subject to  ek = (I − Gr) r − Gu uf,k − Gd d.    (9)

Typical examples of the cost function are (1/2) e^T e (minimizing the 2-norm of the tracking error) or (1/2)(e^T e + ρu uf^T uf) (minimizing a weighted sum of the 2-norms of the tracking error and the feedforward input). It is critical to note that while r and uf,k are known, d is unknown. Therefore, it is not possible to determine the solution of this optimization problem analytically (or in simulation).

III. ILC WITH INPUT SATURATION

We now consider the ILC problem for the closed-loop system with input saturation, as shown in Figure 2. The saturation function sat : R → R is defined as

sat(u, umax) := { umax,    if umax ≤ u
                  u,       if −umax ≤ u ≤ umax
                  −umax,   if u ≤ −umax.
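Both the saturation function above and the lifted Toeplitz matrices of (8) are straightforward to realize numerically. The following sketch is illustrative only (not from the paper); `lift`, the bandwidth parameter `K`, and the sample impulse response are our own choices:

```python
import numpy as np

def lift(g, N, K=None):
    """Lower-triangular Toeplitz (lifted) matrix built from impulse-response
    coefficients g(0..N-1), as in (8).  If a bandwidth K is given, coefficients
    g(j - i) with j - i >= K are dropped, giving the banded approximation
    justified by the exponential stability of the closed loop."""
    g = np.asarray(g, dtype=float)[:N]
    G = np.zeros((N, N))
    for j in range(N):
        for i in range(j + 1):              # causality: only i <= j contributes
            if K is None or (j - i) < K:
                G[j, i] = g[j - i]
    return G

# The saturation function sat(u, umax) is exactly a clip to [-umax, umax].
sat = lambda u, umax: np.clip(u, -umax, umax)

# Hypothetical decaying impulse response of a stable closed loop.
g = 0.5 ** np.arange(8)
G = lift(g, N=8)                            # full lifted matrix
G_banded = lift(g, N=8, K=3)                # banded approximation
```

The banded variant is what makes the O(N K^2) Newton-step solves of Section IV possible.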


Fig. 2. Block diagram of the closed-loop system with input saturation.

Imposed on uk(j), the saturation translates to the following constraint:

umax ≥ |uk(j)| = | (C/(1 + PC)) r(j) + (1/(1 + PC)) uf,k(j) − (PC/(1 + PC)) d(j) |.    (10)

In the lifted ILC framework, this inequality constraint can be added to the optimization problem as shown in (11):

min_{uf ∈ R^N} J(uf)
subject to  ek = (I − Gr) r − Gu uf,k − Gd d,
            [ Ã ; −Ã ] uf ⪯ [ b1 ; b2 ],    (11)

where Ã ∈ R^{N×N} is the lifted form of the transfer function 1/(1 + PC), while b1, b2 ∈ R^N are the lifted forms of the signals

b1(j) = umax − (C/(1 + PC)) r(j) + (PC/(1 + PC)) d(j),
b2(j) = umax + (C/(1 + PC)) r(j) − (PC/(1 + PC)) d(j).

We now focus on the case where the cost function is of the form J(uf) = (1/2) e^T e, for which the above optimization problem becomes an affinely constrained quadratic program [15]. Extensions to more general quadratic cost functions (and affine cost functions), as well as other types of affine equality and inequality constraints, are straightforward, but we do not consider them in the current paper.
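The stacked constraint of (11) can be assembled as below. Note that b1 and b2 depend on the unknown disturbance d, so this assembly is only possible in simulation; the function and variable names are ours, not the paper's:

```python
import numpy as np

def stack_constraints(A_tilde, v_r, v_d, u_max):
    """Form A and b for the constraint A uf <= b in (11).
    v_r and v_d are the lifted signals (C/(1+PC)) r and (PC/(1+PC)) d.
    In experiments v_d is unknown, so b cannot actually be formed; the
    barrier method of Section IV sidesteps this by using measured data."""
    A = np.vstack([A_tilde, -A_tilde])
    b1 = u_max - v_r + v_d          # upper bound:   u_k(j) <= u_max
    b2 = u_max + v_r - v_d          # lower bound:  -u_k(j) <= u_max
    return A, np.concatenate([b1, b2])

# Hypothetical small instance (A_tilde = I only for illustration).
N = 4
A_tilde = np.eye(N)
v_r = 0.3 * np.ones(N)
v_d = 0.1 * np.ones(N)
A, b = stack_constraints(A_tilde, v_r, v_d, u_max=2.0)
# uf = 0 is strictly feasible here since |v_r - v_d| < u_max.
assert np.all(A @ np.zeros(N) < b)
```

A strictly feasible starting point such as this is exactly what the barrier method below requires.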

IV. QUADRATIC PROGRAMMING

For notational simplicity, let us write the problem in (11) in the form

min_u (1/2) e^T e  subject to  A u ⪯ b,  e = w − G u,    (12)

where

A := [ Ã ; −Ã ],  b := [ b1 ; b2 ],

w := (I − Gr) r − Gd d, G := Gu, and u := uf for brevity. The problem in (12) is an affinely constrained least-squares problem and is equivalent to the quadratic program (QP) [15]

min_u (1/2) u^T G^T G u − w^T G u  subject to  A u ⪯ b.    (13)

When analytical expressions for the cost and constraint functions are available, QPs can be efficiently solved using off-the-shelf optimization software such as Mosek [19] and SeDuMi [20]. However, the cost function in (13) cannot be evaluated analytically because it contains w, which is unknown (but constant) and cannot be measured. This prohibits the use of off-the-shelf optimization solvers for (13). On the other hand, when w appears in the expressions to be evaluated in the iterations of the optimization scheme specifically in the form G u − w, the expression can still be evaluated because, in fact, G u − w = −e, and e can be measured at each iteration (i.e., when the process is run with the current candidate for the optimizing value of u in (13)). For example, although the gradient of the cost function in (13) contains w, it can still be evaluated from the knowledge of e, since it equals G^T G u − G^T w = G^T (G u − w) = −G^T e. We make extensive use of this observation and its variants in the implementations of optimization schemes for (13) discussed next.

Affinely constrained quadratic programs are among the most studied problems in numerical convex optimization. In fact, they constitute a class of problems in the family of so-called convex conic programming problems for which computationally efficient optimization schemes exist. To name a few, active set methods, gradient projection methods, and penalty and barrier methods are all suitable for affinely constrained QPs [15], [16]. Among these, we implemented a variant of the active set method [16, §10.3] and a straightforward barrier method [15, §11.3]. We next discuss the implementation of the barrier method, because it outperformed the active set implementation, especially in the number of iterations (an especially important performance metric here, since each iteration of the optimization algorithm requires a run of the process), as shown on an example in Section V.

A. Barrier method

The primal barrier method, a special type of interior point method [15], is an iterative algorithm for solving inequality-constrained convex optimization problems. Applied to the QP in (13), the barrier method repeatedly solves

min_u  u^T G^T G u − 2 w^T G u + κ φ(u),    (14)

where κ > 0 and φ is the logarithmic barrier function

φ(u) := − sum_{i=1}^{m} log(bi − ai^T u),

where ai^T and bi are the i-th row of A and the i-th entry of b, respectively, and φ(u) := ∞ whenever A u ⪯ b is violated. The problem in (14) is an unconstrained convex optimization problem and can be solved by any unconstrained optimization solver, e.g., the Newton method [15]. Note that as κ approaches zero, the solution of (14) approaches the solution of (13). In the barrier method, a sequence of problems of the form (14) is solved with an unconstrained optimization solver, each starting from the previous problem's optimal point, for a decreasing sequence of κ. In a typical application of the barrier method, κ is decreased by a constant factor µ > 1 (typically 10) until sufficient accuracy is reached. In fact, it can be shown that the solution


of (14) is no more than κm suboptimal with respect to the solution of (13) [15], and this bound provides a stopping criterion for the barrier method. For the examples in this paper we adapted the following barrier method from [15], stated here for completeness.

Barrier method
  given strictly feasible u, κ(0) > 0, µ > 1, and ε > 0; set κ ← κ(0).
  repeat until κm < ε:
    Starting at u, compute u* := argmin_u f0(u) + κ φ(u).
    Set u ← u* and κ ← κ/µ.

In the above implementation, u* can be computed using any solver for unconstrained optimization problems. Accounting for the fact that each iteration of the implementation requires an experiment, we use the Newton method, primarily because of its fast convergence (the Newton method converges quadratically, in contrast to the linear convergence rate of computationally lighter first-order methods). Furthermore, one can exploit the structure of the problem in (13) to reduce the computational cost of the Newton method. Each Newton step requires solving the linear system

(2 G^T G + κ A^T D A) ∆u = − ∂/∂u ( u^T G^T G u − 2 w^T G u + κ φ(u) )    (15)

for ∆u, which determines the change in u, where D is a diagonal matrix with Dii = (bi − ai^T u)^{−2} (A^T D A is the Hessian of φ). In general, solving these linear equations takes O(N^3) flops. On the other hand, 2 G^T G + κ A^T D A is a banded matrix, and therefore solving (15) takes only O(N K^2) flops if the bandwidth is K [15].

Remark 4.1: A single iteration (i.e., each Newton step) of the barrier method is computationally more demanding than a single iteration of the active set method, roughly because the active set method uses only first-order information (the gradient) whereas the Newton step requires second-order information (the gradient and the Hessian).
On the other hand, the structure and the system-theoretic interpretations of the problem (such as the bandedness noted above) enable significant reductions in computational effort in barrier-method-based techniques for these QPs. Exploiting the system-theoretic interpretations of ILC problems is the subject of ongoing research. See [21] for a recent discussion of barrier-method implementations for real-time model predictive control problems.

V. SIMULATION AND EXPERIMENTAL EVALUATION

A. Prototype Single-Degree-of-Freedom Setup

The prototype single-degree-of-freedom setup consists of a wafer stage and a countermass, shown in Figure 3. The wafer stage and countermass are driven by linear motors. In order to minimize Coulomb friction effects, the wafer stage is mounted on air bushings. The wafer stage is modeled as a simple second-order system given by P(s) = 11.79/(5.3 s^2 + 7.2 s). The peak

Fig. 3. Schematic of the wafer stage mechanical structure.

Fig. 4. Block diagram of the plant and actuator: the actuator output up(j) and the disturbance d(j) drive the plant kw/(mw s^2 + bw s).

motor current, however, is limited by the maximum amplifier output of 2 V. Ignoring motor dynamics, the overall plant-actuator model is shown in Figure 4. The overall closed-loop system with the plant, feedback controller, and external disturbances is shown in Figure 2. The measurable signals in this setup are the actuator input uk(j), the actuator output up,k(j), and the plant output yk(j). The external signals are of two types: the reference signal r(j) is known, while the disturbances d(j) and nk(j) are unknown. The feedback controller is a simple PID controller, designed based on stability margins and bandwidth requirements for the closed-loop tracking performance. The discrete-time controller

C(z) = 30000 z^{-1} ( 1 + (Ts/2) · 1/(1 − z^{-1}) + 0.012 · (1 − z^{-1})/Ts )    (16)

gives the desired closed-loop bandwidth of 100 Hz, with sampling period Ts = 400 µs. The reference trajectory (position and velocity) to be tracked is a typical forward and reverse scan trajectory, as shown in Figure 5. The goal of the ILC law is to determine the ideal feedforward control effort uf* that minimizes the 2-norm of the error profile over the scan, while guaranteeing that the saturation bounds on the control input to the plant (based on the actuator model) are not violated. This problem fits the constrained QP framework developed in Sections II-A and III. However, it is not possible to compute the descent direction without explicitly including


the disturbance terms in the optimization. Therefore, we need to run the experiment at each step in order to determine the descent direction. The transfer functions required for the optimization are G = P/(1 + P C) and A = 1/(1 + P C). The corresponding lifted matrices G and A are obtained from the impulse responses of these transfer functions in (8).
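As a concrete sketch of this step (not the authors' code), one can discretize the stage model, form the closed-loop transfer functions G = P/(1 + PC) and A = 1/(1 + PC) by polynomial arithmetic, and lift their impulse responses as in (8); the proportional controller below is a hypothetical stand-in for the PID in (16):

```python
import numpy as np
from scipy import signal
from scipy.linalg import toeplitz

Ts = 400e-6                      # sampling period from the paper
N = 64                           # short horizon, for illustration only

# ZOH discretization of the stage model P(s) = 11.79 / (5.3 s^2 + 7.2 s).
numd, dend, _ = signal.cont2discrete(([11.79], [5.3, 7.2, 0.0]), Ts, method='zoh')
numd = np.squeeze(numd)

# Hypothetical proportional controller C(z) = 100 (stand-in for the PID).
numc, denc = np.array([100.0]), np.array([1.0])

# Closed loop: G = P/(1+PC) and A = 1/(1+PC) share denominator dp*dc + np*nc.
den_cl = np.polyadd(np.polymul(dend, denc), np.polymul(numd, numc))
gG = signal.dimpulse((np.polymul(numd, denc), den_cl, Ts), n=N)[1][0].ravel()
gA = signal.dimpulse((np.polymul(dend, denc), den_cl, Ts), n=N)[1][0].ravel()

# Lifted (lower-triangular Toeplitz) matrices, as in (8).
G_lift = toeplitz(gG, np.zeros(N))
A_lift = toeplitz(gA, np.zeros(N))
```

The impulse responses fill the first columns of the lifted matrices; causality makes everything above the diagonal zero.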

Fig. 5. Plot of the reference position r(t) versus time t.

B. Simulation results

Simulation of the repetitive scan process consisted of two steps repeated in sequence: (1) computation of the ILC effort through an optimization step, and (2) simulation of a single wafer-stage scan using the ILC effort generated in step (1). The optimization step used data (error information) from the previous wafer-stage scan simulation to compute a new ILC effort. This process was repeated until the stopping criterion of the optimization algorithm was satisfied. The scan simulation used the model of the plant and feedback controller described in the previous section, and an unknown but fixed disturbance was injected at the input of the plant. The actuator saturation model was also included in the simulation. At the end of each scan, the trajectory-following error was recorded and subsequently used in the optimization step to compute a new ILC effort. Figure 6 shows the 2-norm of the tracking error ||ek|| versus the iteration index k for the barrier method (top figure) and the active set method (bottom figure). Practically, both methods converged to the same value of ||e||2. However, the barrier method satisfies the stopping criterion in far fewer iterations than the active set implementation. Therefore, we used only the barrier method for the experimental implementation.
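The simulate-then-update loop above, combined with the barrier/Newton iteration of Section IV and the measured-error gradient −2 G^T e, can be sketched as follows. This is a toy stand-in, not the authors' implementation; `run_trial` abstracts one scan (simulated or experimental), and the G = I instance is only for checking the mechanics:

```python
import numpy as np

def barrier_ilc(run_trial, G, A, b, u0, kappa0=1.0, mu=10.0, eps=1e-6,
                newton_iters=5):
    """Sketch of the barrier-method ILC loop.  run_trial(u) executes one
    trial with feedforward u and returns the measured error e = w - G u,
    so the unknown disturbance term w is never evaluated directly: the
    gradient of the quadratic part of (14) is 2 G^T G u - 2 G^T w = -2 G^T e."""
    u, kappa = u0.copy(), kappa0
    m = b.size
    while kappa * m > eps:                      # suboptimality bound kappa*m
        for _ in range(newton_iters):           # inner Newton iterations
            e = run_trial(u)                    # one run of the process
            s = 1.0 / (b - A @ u)               # barrier slacks (u strictly feasible)
            grad = -2.0 * G.T @ e + kappa * A.T @ s
            H = 2.0 * G.T @ G + kappa * A.T @ np.diag(s**2) @ A
            u = u - np.linalg.solve(H, grad)    # Newton step, as in (15)
        kappa /= mu
    return u

# Toy instance: G = I, unknown w, loose saturation bounds.
rng = np.random.default_rng(0)
N = 5
G = np.eye(N)
w = rng.normal(size=N)                          # plays the role of (I - Gr) r - Gd d
A = np.vstack([np.eye(N), -np.eye(N)])
b = 10.0 * np.ones(2 * N)
u = barrier_ilc(lambda u: w - G @ u, G, A, b, np.zeros(N))
assert np.linalg.norm(w - G @ u) < 1e-3         # error driven essentially to zero
```

A production implementation would replace the dense `np.linalg.solve` with a banded solver to obtain the O(N K^2) cost noted in Section IV, and would add a line search to guard feasibility of the Newton step.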

Fig. 6. Trajectory-following error ||ek||2 vs. number of iterations k for the barrier method (top figure) and the active set method (bottom figure).

C. Experimental Results

In this section, we present an experimental evaluation of the performance of the proposed interior-point-optimization-based ILC scheme. The ILC scheme was implemented on the system described in Section V-A. The trajectory-following error plots for iterations 1, 5, 15, and 20 are shown in Figure 7. The decay of the 2-norm of the trajectory error across iterations is shown in Figure 9. We observe that after 20 iterations, the peak following error is under 85 µm (Figure 8), while the 2-norm ||e||2 is 6 µm. It is important to note that the convergence of the error norm is not monotonic. This can be attributed to the fact that the cost function also includes the barrier-function term in addition to the 2-norm of the error. Figure 10 shows the feedforward control effort from ILC after 20 iterations. We observe that during the initial acceleration phase of the scan trajectory, the ILC effort acts against the feedback control effort in order to ensure that the saturation constraints are not violated. Then, during the mid-acceleration phase, the ILC effort aids the feedback control effort to make sure that the desired velocity is maintained. During the constant-velocity phase, the ILC effort is small. Figure 11 shows the total control effort going into the actuator. We note that this signal always lies within the saturation bounds of ±2 V. Therefore, the saturation constraints of the optimization problem are not violated.

Fig. 7. Trajectory-following error for iterations 1, 5, 15, and 20.


Fig. 8. Detail plot of the trajectory-following error for iteration 20.

Fig. 9. Trajectory-following error 2-norm ||e||2 against iteration number.

Fig. 10. ILC feedforward control effort for iteration 20.

Fig. 11. Total control effort for iteration 20.

VI. CONCLUSIONS

We considered the problem of synthesizing iterative learning control schemes for linear systems with saturation constraints. The problem of minimizing the tracking error was formulated as a constrained convex optimization problem, namely a linearly constrained quadratic program. Due to the lack of information regarding the disturbances in the process, descent directions cannot be determined without running experiments. This in turn places strict limits on the number of iterations that any iterative optimization scheme may employ. Motivated by this fact, we implemented an interior point algorithm, specifically the barrier method. The method was demonstrated on a prototype wafer stage testbed and its performance was compared to other existing methods.

REFERENCES

[1] K. Moore, M. Dahleh, and S. Bhattacharyya, "Learning control for robotics," in Proc. 1988 International Conf. on Communications and Control, Baton Rouge, LA, 1988, pp. 976–987.
[2] H. Havlicsek and A. Alleyne, "Nonlinear control of an electrohydraulic injection molding machine via iterative adaptive learning," IEEE/ASME Trans. on Mechatronics, vol. 4, no. 3, pp. 312–323, 1999.
[3] Y. Chen, J.-X. Xu, T. H. Lee, and S. Yamamoto, "An iterative learning control in rapid thermal processing," in Proc. IASTED Int. Conf. on Modeling, Simulation and Optimization, Singapore, 1997, pp. 189–192.
[4] D. Bristow, M. Tharayil, and A. Alleyne, "A survey of iterative learning control," IEEE Control Systems Magazine, vol. 26, no. 3, pp. 96–114, 2006.
[5] A. Duschau-Wicke, J. von Zitzewitz, R. Banz, and R. Riener, "Iterative learning synchronization of robotic rehabilitation tasks," in Proc. International Conf. on Rehabilitation Robotics, 2007, pp. 335–340.
[6] M. Norrlöf and S. Gunnarsson, "Time and frequency domain convergence properties in iterative learning control," International Journal of Control, vol. 75, pp. 1114–1126, 2002.
[7] N. Amann, D. H. Owens, and E. Rogers, "2D systems theory applied to learning control systems," in Proc. 33rd Conf. on Decision and Control, Lake Buena Vista, FL, 1994, pp. 985–986.
[8] B. Dijkstra and O. H. Bosgra, "Extrapolation of optimal lifted system ILC solution, with application to a waferstage," in Proc. American Control Conference, 2002.
[9] J. Hätönen, Issues of Algebra and Optimality in Iterative Learning Control. University of Oulu Press, 2004.
[10] D. H. Owens and J. Hätönen, "Iterative learning control – an optimization paradigm," Annual Reviews in Control, vol. 29, no. 1, pp. 57–70, 2005.
[11] S. Gunnarsson and M. Norrlöf, "On the design of ILC algorithms using optimization," Automatica, vol. 37, no. 12, pp. 2011–2016, 2001.
[12] D. H. Owens, J. J. Hätönen, and S. Daley, "Robust monotone gradient-based discrete-time iterative learning control," International Journal of Robust and Nonlinear Control, June 2008.
[13] B. J. Driessen, N. Sadegh, and K. S. Kwok, "Multi-input square iterative learning control with input rate limits and bounds," IEEE Trans. on Systems, Man, and Cybernetics, vol. 32, no. 4, pp. 545–550, 2002.
[14] J.-X. Xu, Y. Tan, and T.-H. Lee, "Iterative learning control design based on composite energy function with input saturation," Automatica, vol. 40, no. 8, pp. 1371–1377, 2004.
[15] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge Univ. Press, 2004.
[16] R. Fletcher, Practical Methods of Optimization, 2nd ed. New York: John Wiley & Sons, 1991.
[17] P. Khargonekar, K. Poolla, and A. Tannenbaum, "Robust control of linear time-invariant plants using periodic compensation," IEEE Trans. on Automatic Control, vol. 30, no. 11, pp. 1088–1096, 1985.
[18] A. V. Oppenheim, A. S. Willsky, and I. T. Young, Signals and Systems. Prentice-Hall, 1983.
[19] The MOSEK Optimization Toolbox for MATLAB Manual, Version 5.0, Denmark; available at http://www.mosek.com.
[20] J. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods and Software, vol. 11–12, pp. 625–653, 1999.
[21] Y. Wang and S. Boyd, "Fast model predictive control using online optimization," in Proc. IFAC World Congress, 2008, pp. 6974–6979.

