Proceedings of the 2007 American Control Conference Marriott Marquis Hotel at Times Square New York City, USA, July 11-13, 2007

FrB17.3

Iterative Learning Control by Two-Dimensional System Theory Applied to a Motion System

Wojciech Paszke, Roel Merry, and René van de Molengraft

Abstract— For systems that repeatedly perform a given task, iterative learning control (ILC) makes it possible to update the control signal to the system during successive trials in order to improve the tracking performance. Iterative learning control has an inherent 2-D system structure, since there are two independent variables, i.e. time and trials. In this paper, the 2-D structure is exploited in a method that yields, in a one-step synthesis, both a stabilizing feedback controller in the time domain and an ILC controller which guarantees convergence in the trial domain. A norm-bounded uncertainty model is added to guarantee robust controller performance. The controller synthesis can be performed by means of linear matrix inequalities. The effectiveness of the theoretical results will be illustrated using a motion system.

W. Paszke is with the Institute of Control and Computation Engineering, University of Zielona Góra, Zielona Góra, Poland. [email protected]
R.J.E. Merry and M.J.G. van de Molengraft are with the Control Systems Technology Group, Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands. [email protected], [email protected]

I. INTRODUCTION

Iterative learning control (ILC), first introduced by Arimoto [1], derives a high-performance feedforward signal for systems that perform a given task repeatedly. The basic idea of ILC is to improve the tracking performance of the system during successive trials by updating the control input. Typically, the update is based on the tracking error as

$$u_{k+1}(t) = u_k(t) + F\Big(e_k(t), \tfrac{d}{dt}e_k(t)\Big),$$

where $u_k(t)$ and $e_k(t)$ are the feedforward control input and the tracking error of the $k$-th trial, respectively, and $F$ is a linear filter which performs a filtering operation on the tracking error. Since the time and trial directions in ILC are decoupled, ILC is often applied by separately designing a feedback and a learning controller. The feedback controller stabilizes the system in the time domain and suppresses unknown disturbances. The learning controller is designed to guarantee convergence in the trial domain. In most cases the design of the learning filter is based on the inverse of a closed-loop model [2]. Since ILC derives a feedforward signal, it does not affect the stability of the system in the time domain. The objective of stabilization of the system in the time domain and convergence of the ILC scheme in the trial domain can be written as a convergence condition on the tracking error as

$$\lim_{k\to\infty} \|e_k\| = 0.$$



The tracking error is $e_k = r - y_k$, where $y_k$ is the output of the system and $r$ is the desired output trajectory.

This paper proposes a different approach: a one-step synthesis is presented that yields both a stabilizing feedback controller and a learning feedforward controller. The presented design method makes use of 2-D system theory [3] and is formulated in terms of linear matrix inequalities (LMIs). The learning process of ILC can be cast into a 2-D framework because information propagates in two independent directions, i.e. the time and trial directions [4], [5], [6], [2]. An advantage of the 2-D framework is that the stability of the system in the time domain and the convergence of the learning scheme in the trial domain can be analyzed simultaneously. Furthermore, conditions for the existence and the derivation of both the stabilizing feedback controller and the learning controller can be expressed within this framework. Traditionally, stability tests for 2-D systems are based on computing the zeros of a 2-D characteristic polynomial, which in general poses a numerically complex or even infeasible problem [3]. As an alternative, Lyapunov theory within the framework of state-space models is used, which reduces the computational complexity [7], [8]. In this way, stability conditions can be formulated in terms of LMIs, which can be solved with established, effective numerical algorithms.

This paper is organized as follows. In Section II, the problem statement is addressed in more detail. The combined design procedure for the feedback and learning controller for ILC and the corresponding stability conditions are discussed in Section III. The design procedure makes use of LMI-based methods for the design of control schemes for 2-D systems, and optimization techniques for computing the controllers are used to optimize the convergence properties. Section III also deals with the integration of a state observer in the design and with robustness against norm-bounded system uncertainties. The proposed design procedure is applied to a motion system, which is discussed in Section IV. The obtained results are shown in Section V. Finally, conclusions are given in Section VI.

Throughout this paper, the null matrix and the identity matrix with appropriate dimensions are denoted by $0$ and $I$, respectively. Moreover, $\mathrm{sym}(X)$ is used to denote $X + X^T$. All matrix inequalities are considered in the sense of Löwner, i.e. the notation $X \succeq Y$ (respectively $X \succ Y$) means that the matrix $X - Y$ is positive semi-definite (respectively positive definite). The symbol $(\star)$ replaces terms that are induced by symmetry.
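To make the trial-wise update above concrete, a minimal Python sketch is given below. It is illustrative material, not code from the paper; the gains `kp`, `kd` and the toy signals are hypothetical. It applies a simple proportional-plus-derivative learning filter, one possible choice of $F$, to the error recorded in one trial to obtain the feedforward signal for the next trial.

```python
import numpy as np

def ilc_update(u_k, e_k, dt, kp=0.5, kd=0.01):
    """Generic ILC update u_{k+1} = u_k + F(e_k, de_k/dt) with a PD-type filter F.

    u_k, e_k : feedforward input and tracking error sampled over one trial
    kp, kd   : illustrative proportional and derivative learning gains
    """
    de_k = np.gradient(e_k, dt)           # numerical time derivative of the error
    return u_k + kp * e_k + kd * de_k     # feedforward signal for trial k+1

# toy usage with made-up data: zero initial feedforward and a placeholder error
dt = 0.0005
t = np.arange(0.0, 1.0, dt)
u0 = np.zeros_like(t)
e0 = np.sin(2 * np.pi * t)
u1 = ilc_update(u0, e0, dt)
```

Any stable linear filter could play the role of $F$; the contribution of this paper is a systematic, simultaneous choice of the feedback and learning gains, developed in the sections below.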


II. PRELIMINARIES AND PROBLEM STATEMENT

Let us consider the following discrete-time linear system

$$x_k(p+1) = A x_k(p) + B u_k(p), \qquad y_k(p) = C x_k(p), \tag{1}$$

where, on trial $k$, $x_k(p) \in \mathbb{R}^n$ is the state vector, $y_k(p) \in \mathbb{R}^m$ is the output vector and $u_k(p) \in \mathbb{R}^l$ is the control input vector. For the tracking control problem the system needs to be stabilized and, furthermore, the output of the system $y_k$ needs to track the desired output trajectory $r$. The tracking error is given by

$$e_k(p) = r(p) - y_k(p). \tag{2}$$

Note that the desired output trajectory is the same on every trial and therefore does not depend on $k$. ILC adjusts the input of the current trial $u_k(p)$ to a new input $u_{k+1}(p)$ for the next trial. Therefore, a general iterative control rule can be defined in the following form

$$u_{k+1}(p) = u_k(p) + \Delta u_k(p), \tag{3}$$

where $\Delta u_k(p)$ denotes the modification of the control input.

The problem to be addressed is stated as follows. Design a control input $u_k(p)$ that makes the output of the closed-loop system $y_k(p)$ track a given reference $r(p)$ as accurately as possible by updating the control input through successive trials $k$.

To solve the considered control problem, 2-D system theory will be used. For modeling the ILC scheme, 2-D state-space models can be used. The most common 2-D state-space model is the Roesser model [9], which is defined by the following equation

$$\begin{bmatrix} x^h(i+1,j) \\ x^v(i,j+1) \end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} \begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix}. \tag{4}$$

In this model, $i$ and $j$ are the positive integer valued horizontal and vertical coordinates, $x^h(i,j) \in \mathbb{R}^n$ is the horizontal state sub-vector, and $x^v(i,j) \in \mathbb{R}^m$ is the vertical state sub-vector. The matrices $G_{11}$, $G_{12}$, $G_{21}$, $G_{22}$ are known constant matrices with appropriate dimensions. In this case, the boundary conditions are given by

$$X_h(0) = \{x^h(0,j)\ \forall j : j \ge 0\}, \qquad X_v(0) = \{x^v(i,0)\ \forall i : i \ge 0\}. \tag{5}$$

Lemma 1: A 2-D system represented by the Roesser model (4) is asymptotically stable if there exists a block-diagonal matrix $P \succ 0$, $P = \mathrm{diag}(P_1, P_2)$, satisfying [7], [3]

$$\Phi^T P \Phi - P \prec 0, \qquad \forall x(i,j) \ne 0, \tag{6}$$

where

$$\Phi = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}, \qquad x(i,j) = \begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix}. \tag{7}$$

Lemma 1 describes a sufficient condition for asymptotic stability. A solution to the inequality (6) can be found by employing computationally efficient algorithms for convex optimization based on LMIs.

III. MAIN RESULTS

The controller design method is introduced in three parts. The 2-D system representation of ILC is discussed in more detail in Section III-A. The addition of a state observer to the controller design is treated in Section III-B. Robustness issues are the subject of Section III-C.

A. 2-D system representation of ILC

The derivation of a representation of the stability problem as a 2-D linear system is as follows. First, based on (1), (2) and (3) it can be derived that

$$e_{k+1}(p) - e_k(p) = -CA\, x_{k+1}(p-1) + CA\, x_k(p-1) - CB\, \Delta u_k(p-1).$$

For the system to be described in the form of the Roesser model (4), define the vector

$$\eta_k(p) = x_{k+1}(p-1) - x_k(p-1), \tag{8}$$

to write

$$e_{k+1}(p) - e_k(p) = -CA\, \eta_k(p) - CB\, \Delta u_k(p-1).$$

Combining (3), (8) and (1) yields

$$\eta_k(p+1) = A \eta_k(p) + B\, \Delta u_k(p-1).$$

Let now the modification of the control input, which is used to update the control input at trial $k+1$, be given by

$$\Delta u_k(p) = -K_1 \eta_k(p+1) + K_2 e_k(p+1), \tag{9}$$

where $K_1$ and $K_2$ are the matrices to be designed. The control law defined by (3) and (9) consists of a state feedback control action on the current trial (i.e. on trial $k+1$) combined with a feedforward action based on the tracking error of the previous trial (i.e. on trial $k$), which is available for use. The block diagram of the control setup for the proposed ILC scheme is depicted in Fig. 1.

Fig. 1. The block diagram of the proposed ILC scheme (memory of the previous trial, the plant $x_k(p+1) = A x_k(p) + B u_k(p)$ with output matrix $C$, and the gains $K_1$ and $K_2$ generating $u_{k+1}(p)$ from $r(p)$, $y_k(p)$ and $e_k(p)$).

With the control law of (3) and (9) the following model for the ILC scheme is obtained

$$\begin{bmatrix} \eta_k(p+1) \\ e_{k+1}(p) \end{bmatrix} = \begin{bmatrix} A - BK_1 & BK_2 \\ -CA + CBK_1 & I - CBK_2 \end{bmatrix} \begin{bmatrix} \eta_k(p) \\ e_k(p) \end{bmatrix}. \tag{10}$$

The boundary conditions are given by

$$\eta_k(0) = x_{k+1}(0) - x_k(0) = x_0 - x_0 = 0, \qquad k = 0, 1, \ldots,$$
$$e_0(p) = r(p) - y_0(p) = r(p) - CA^p x_0, \qquad p = 0, 1, \ldots, N.$$
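To connect the ILC model (10) with Lemma 1 numerically, the following sketch (not part of the paper; the system matrices and gains are placeholders) assembles the block matrix of (10) and searches for a block-diagonal Lyapunov matrix $P = \mathrm{diag}(P_1, P_2)$ satisfying (6), here with cvxpy and the SCS solver.

```python
import numpy as np
import cvxpy as cp

# placeholder system and gains (illustrative values only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
K1 = np.array([[2.0, 3.0]])   # hypothetical state-feedback gain
K2 = np.array([[4.0]])        # hypothetical learning gain

n, m = A.shape[0], C.shape[0]

# block matrix of the 2-D ILC model (10)
Phi = np.block([
    [A - B @ K1,           B @ K2],
    [-C @ A + C @ B @ K1,  np.eye(m) - C @ B @ K2],
])

# Lemma 1: find block-diagonal P = diag(P1, P2) > 0 with Phi' P Phi - P < 0
P1 = cp.Variable((n, n), symmetric=True)
P2 = cp.Variable((m, m), symmetric=True)
P = cp.bmat([[P1, np.zeros((n, m))], [np.zeros((m, n)), P2]])
eps = 1e-6
cons = [P1 >> eps * np.eye(n), P2 >> eps * np.eye(m),
        Phi.T @ P @ Phi - P << -eps * np.eye(n + m)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print("Lemma 1 certificate found:", prob.status == cp.OPTIMAL)
```

Feasibility of this semidefinite program certifies stability of the learning scheme for the particular gains plugged in; the theorem below instead synthesizes the gains directly.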


It can be seen that equation (10) has the structure of the Roesser model (4): the tracking error vector $e_k(p)$ plays the role of the vertical state vector and the trial state vector $\eta_k(p)$ plays the role of the horizontal state vector. Since stability of the model (10) guarantees stability along the direction of the learning trials in addition to stability of the closed-loop control system, the following theorem provides the LMI condition for stability and an algorithm for designing the control law.

Theorem 1: Let a system of the form described by (1) be subjected to a control law of the form (9). The resulting closed-loop system is stable along the direction of the learning trials, in addition to the stability of the closed-loop control system, if there exist matrices $W_1 \succ 0$, $W_2 \succ 0$, $N_1$ and $N_2$ of compatible dimensions such that the following LMI is feasible

$$\begin{bmatrix} -W_1 & (\star) & (\star) & (\star) \\ 0 & -W_2 & (\star) & (\star) \\ W_1A^T - N_1^TB^T & -W_1A^TC^T + N_1^TB^TC^T & -W_1 & (\star) \\ N_2^TB^T & W_2 - N_2^TB^TC^T & 0 & -W_2 \end{bmatrix} \prec 0. \tag{11}$$

If the above condition holds, the learning matrices $K_1$ and $K_2$ are given by

$$K_1 = N_1 W_1^{-1}, \qquad K_2 = N_2 W_2^{-1}. \tag{12}$$

Proof: Assume that there exist matrices $W_1 \succ 0$, $W_2 \succ 0$, $N_1$ and $N_2$ such that the LMI (11) is feasible. Next, set $W_1 = P_1^{-1}$, $W_2 = P_2^{-1}$, $N_1 = K_1 P_1^{-1}$, $N_2 = K_2 P_2^{-1}$ and pre- and post-multiply both sides by $\mathrm{diag}(P_1, P_2, P_1, P_2)$ to obtain

$$\begin{bmatrix} -P_1 & (\star) & (\star) & (\star) \\ 0 & -P_2 & (\star) & (\star) \\ (A - BK_1)^T P_1 & (-CA + CBK_1)^T P_2 & -P_1 & (\star) \\ K_2^TB^TP_1 & P_2 - K_2^TB^TC^TP_2 & 0 & -P_2 \end{bmatrix} \prec 0. \tag{13}$$

Make the changes of variables $G_{11} = A - BK_1$, $G_{12} = BK_2$, $G_{21} = -CA + CBK_1$, $G_{22} = I - CBK_2$ to see that (13) can be rewritten as

$$\begin{bmatrix} -P & P\Phi \\ \Phi^T P & -P \end{bmatrix} \prec 0,$$

where $P = \mathrm{diag}(P_1, P_2)$ and $\Phi$ is defined in (7). Finally, apply the Schur complement formula to the above inequality to find that it is equivalent to the LMI (6).

In many practical applications, especially in motion systems, the term $CB \approx 0$. Hence $CBK_2 \approx 0$, which results in very slow or no convergence. To overcome this problem, the design procedure is cast into a convex optimization problem which maximizes the gain $K_2$.

Remark 1: In the case of SISO systems (i.e. single-input, single-output systems), the matrix $K_2$ becomes a scalar. Then, for convergence in the trial domain, $I - CBK_2 \prec I$ must hold. Using LMI optimization, the value of $K_2$ can be maximized by constraining $N_2$ and $W_2$. A large $K_2$ leads to fast convergence and therefore the control error is minimized.

To proceed, introduce the scalar variables $\gamma_1 > 0$ and $\gamma_2 > 0$. Next, impose the following constraint on $N_2$ (recall that $N_2$ is a scalar)

$$-N_2 \prec -\gamma_1^{-1} I,$$

which is equivalent to the LMI

$$\begin{bmatrix} -N_2 & I \\ I & -\gamma_1 I \end{bmatrix} \prec 0. \tag{14}$$

Similarly, assume that $W_2 \prec \gamma_2 I$, which can be rewritten as

$$\begin{bmatrix} -W_2 & W_2 \\ W_2 & -\gamma_2 I \end{bmatrix} \prec 0. \tag{15}$$

The convex optimization problem can now be formulated as

$$\min_{W_1 \succ 0,\, W_2 \succ 0,\, \gamma_1 > 0,\, \gamma_2 > 0,\, N_1,\, N_2} \ \gamma_1 + \gamma_2 \tag{16}$$

subject to (11), (14) and (15).

Remark 2: The optimization procedure may place the closed-loop poles, i.e. the eigenvalues of $A - BK_1$, at arbitrarily high frequencies, which causes problems for practical implementation. To overcome this difficulty, a pole placement technique can be applied; see [10], [11] for further details.
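A possible numerical implementation of Theorem 1 combined with the constraints (14), (15) and the objective (16) is sketched below. It assumes a SISO system (so that $N_2$, $W_2$, $\gamma_1$, $\gamma_2$ are scalars), writes out the full symmetric form of the LMI (11) instead of the $(\star)$ shorthand, and uses cvxpy with the SCS solver rather than the SeDuMi/YALMIP tools used later in the paper; it is an illustration, not the authors' code.

```python
import numpy as np
import cvxpy as cp

def ilc_synthesis(A, B, C, eps=1e-7):
    """One-step synthesis of K1, K2 from Theorem 1 with the SISO
    convergence-speed constraints (14)-(16)."""
    n, l, m = A.shape[0], B.shape[1], C.shape[0]
    assert l == 1 and m == 1, "this sketch assumes a SISO system"

    W1 = cp.Variable((n, n), symmetric=True)
    W2 = cp.Variable((m, m), symmetric=True)
    N1 = cp.Variable((l, n))
    N2 = cp.Variable((l, m))
    g1 = cp.Variable(nonneg=True)
    g2 = cp.Variable(nonneg=True)

    Zn, Zm, I_m = np.zeros((n, m)), np.zeros((m, n)), np.eye(m)
    # full symmetric version of the LMI (11)
    lmi11 = cp.bmat([
        [-W1,                  Zn,                           A @ W1 - B @ N1,          B @ N2],
        [Zm,                  -W2,                          -C @ A @ W1 + C @ B @ N1,  W2 - C @ B @ N2],
        [(A @ W1 - B @ N1).T, (-C @ A @ W1 + C @ B @ N1).T, -W1,                       Zn],
        [(B @ N2).T,          (W2 - C @ B @ N2).T,           Zm,                      -W2],
    ])
    lmi14 = cp.bmat([[-N2, I_m], [I_m, -g1 * I_m]])      # constraint (14)
    lmi15 = cp.bmat([[-W2, W2], [W2, -g2 * I_m]])        # constraint (15)

    # small margins replace the strict inequalities (usual numerical workaround)
    cons = [W1 >> eps * np.eye(n), W2 >> eps * I_m,
            lmi11 << -eps * np.eye(2 * (n + m)),
            lmi14 << -eps * np.eye(2 * m),
            lmi15 << -eps * np.eye(2 * m)]
    prob = cp.Problem(cp.Minimize(g1 + g2), cons)        # objective (16)
    prob.solve(solver=cp.SCS)

    K1 = N1.value @ np.linalg.inv(W1.value)              # recover the gains via (12)
    K2 = N2.value @ np.linalg.inv(W2.value)
    return K1, K2
```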

B. Observer-based ILC scheme

The control law designed in the previous subsection assumes that all state variables are available for measurement. In practical applications this is not always the case. If the complete state vector is not available for measurement, the state can be reconstructed using a state observer. A commonly used state observer is

$$\hat{x}_k(p+1) = A\hat{x}_k(p) + Bu_k(p) + L\big(y_k(p) - C\hat{x}_k(p)\big), \tag{17}$$

where $\hat{x}_k$ denotes the estimated state vector on trial $k$ and $L$ is the observer matrix to be found. Let the observer error be $\tilde{x}_k(p) = x_k(p) - \hat{x}_k(p)$. Then, using (1) and (17), it follows that

$$\tilde{x}_k(p+1) = (A - LC)\,\tilde{x}_k(p), \tag{18}$$

where the matrix $L$ is chosen such that the eigenvalues of $(A - LC)$ are located inside the unit circle. The observer matrix $L$ can be derived as follows.

Theorem 2: Suppose that the closed-loop system, defined by (1), (3) and (9), is stable and observable. Then the observer error converges to zero if there exist matrices $R \succ 0$ and $M$ such that the following LMI holds

$$\begin{bmatrix} -R & A^TR - C^TM^T \\ RA - MC & -R \end{bmatrix} \prec 0. \tag{19}$$

If the above LMI is feasible, the observer gain matrix $L$ is given by $L = R^{-1}M$.

Proof: Define a Lyapunov function candidate for the error system (18) as $V(k,p) = \tilde{x}_k^T(p)\, S\, \tilde{x}_k(p)$, where $S \succ 0$ is the matrix to be found. It then follows from Lyapunov's second method that the observer error system (18) is asymptotically stable (i.e. the observer error $\tilde{x}_k$ on a given trial $k$ approaches zero) if the following matrix inequality holds

$$(A - LC)^T S (A - LC) - S \prec 0.$$

Apply the Schur complement formula and make use of the change of variables $R = S$ and $M = SL$ to find that the above inequality is equivalent to the LMI (19).
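The observer LMI (19) can be solved in the same way; a brief sketch with placeholder matrices (again illustrative only, not the authors' code):

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # placeholder system matrices
C = np.array([[1.0, 0.0]])
n, m = A.shape[0], C.shape[0]

R = cp.Variable((n, n), symmetric=True)
M = cp.Variable((n, m))
eps = 1e-6

# LMI (19): [[-R, A'R - C'M'], [RA - MC, -R]] < 0 with R > 0
lmi = cp.bmat([[-R,            A.T @ R - C.T @ M.T],
               [R @ A - M @ C, -R]])
prob = cp.Problem(cp.Minimize(0),
                  [R >> eps * np.eye(n), lmi << -eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)

L = np.linalg.solve(R.value, M.value)    # observer gain L = R^{-1} M
print("observer poles:", np.linalg.eigvals(A - L @ C))
```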


Remark 3: The observer poles must be dominant over the controller poles in order to provide accurate estimates of the states for the controller. On the other hand, the whole system can become unstable when the observer is too fast in comparison with the observed system. This means that the observer poles cannot be placed in an arbitrary region of the unit circle. To place the poles in specific regions of the interior of the unit circle, the pole placement technique can be used again [10], [11].

In the update of the control input (9), the true system states can be replaced by the estimated states of the observer as

$$\Delta u_k(p) = -K_1 \hat{\eta}_k(p+1) + K_2 e_k(p+1),$$

where $\hat{\eta}_k$ is the estimate of the vector $\eta_k$ on trial $k$.

C. Robustness analysis

The analysis is extended to the case where there is uncertainty in the system state-space model. Here, only the case is considered where (1) is subjected to norm-bounded uncertainties, i.e. it is assumed that the uncertainty is modeled as an additive perturbation (denoted by $\Delta A$ and $\Delta B$) to the nominal system matrices $A$ and $B$,

$$[\Delta A \ \ \Delta B] = HF[E_1 \ \ E_2], \tag{20}$$

where $H$, $E_1$, $E_2$ are known constant matrices of compatible dimensions and $F$ is an unknown matrix that satisfies

$$F^TF \preceq I. \tag{21}$$

For an ILC process with norm-bounded additive model uncertainties, the following 2-D model is obtained

$$\begin{bmatrix} \eta_k(p+1) \\ e_{k+1}(p) \end{bmatrix} = \left( \begin{bmatrix} \Delta A - \Delta BK_1 & \Delta BK_2 \\ -C\Delta A + C\Delta BK_1 & -C\Delta BK_2 \end{bmatrix} + \begin{bmatrix} A - BK_1 & BK_2 \\ -CA + CBK_1 & I - CBK_2 \end{bmatrix} \right) \begin{bmatrix} \eta_k(p) \\ e_k(p) \end{bmatrix}. \tag{22}$$

Lemma 2: Let $\Sigma_1$, $\Sigma_2$ be real matrices of appropriate dimensions. Then, for any matrix $F$ satisfying $F^TF \preceq I$ and any scalar $\epsilon > 0$, the following inequality holds [12]

$$\Sigma_1 F \Sigma_2 + \Sigma_2^T F^T \Sigma_1^T \preceq \epsilon^{-1}\Sigma_1\Sigma_1^T + \epsilon\,\Sigma_2^T\Sigma_2. \tag{23}$$

Theorem 3: Let a system of the form (22), with the uncertainty structure modeled by (20) and (21), be subjected to a control law of the form (9). If there exist matrices $W_1 \succ 0$, $W_2 \succ 0$, $N_1$ and $N_2$ of compatible dimensions and a scalar $\epsilon > 0$ such that the following LMI holds

$$\begin{bmatrix} -W_1 + \epsilon HH^T & (\star) & (\star) & (\star) & (\star) \\ -\epsilon CHH^T & -W_2 + \epsilon CHH^TC^T & (\star) & (\star) & (\star) \\ W_1A^T - N_1^TB^T & -W_1A^TC^T + N_1^TB^TC^T & -W_1 & (\star) & (\star) \\ N_2^TB^T & W_2 - N_2^TB^TC^T & 0 & -W_2 & (\star) \\ 0 & 0 & E_1W_1 - E_2N_1 & E_2N_2 & -\epsilon I \end{bmatrix} \prec 0, \tag{24}$$

then the resulting ILC process is robustly stable along the direction of the learning trials, in addition to the stability of the closed-loop control system, and the required controller matrices in (9) are given by (12).

Proof: Application of Theorem 1 proves that the ILC process modeled by (22) is stable if

$$\begin{bmatrix} -W_1 & (\star) & (\star) & (\star) \\ 0 & -W_2 & (\star) & (\star) \\ W_1A^T - N_1^TB^T & -W_1A^TC^T + N_1^TB^TC^T & -W_1 & (\star) \\ N_2^TB^T & W_2 - N_2^TB^TC^T & 0 & -W_2 \end{bmatrix} + \mathrm{sym}\!\left( \begin{bmatrix} 0 \\ 0 \\ W_1E_1^T - N_1^TE_2^T \\ N_2^TE_2^T \end{bmatrix} F^T \begin{bmatrix} H^T & -H^TC^T & 0 & 0 \end{bmatrix} \right) \prec 0.$$

Invoke Lemma 2 to obtain

$$\begin{bmatrix} -W_1 & (\star) & (\star) & (\star) \\ 0 & -W_2 & (\star) & (\star) \\ W_1A^T - N_1^TB^T & -W_1A^TC^T + N_1^TB^TC^T & -W_1 & (\star) \\ N_2^TB^T & W_2 - N_2^TB^TC^T & 0 & -W_2 \end{bmatrix} + \epsilon^{-1} \begin{bmatrix} 0 \\ 0 \\ W_1E_1^T - N_1^TE_2^T \\ N_2^TE_2^T \end{bmatrix} \begin{bmatrix} 0 & 0 & E_1W_1 - E_2N_1 & E_2N_2 \end{bmatrix} + \epsilon \begin{bmatrix} H \\ -CH \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} H^T & -H^TC^T & 0 & 0 \end{bmatrix} \prec 0.$$

Finally, apply the Schur complement formula to obtain the inequality (24).

Remark 4: Using the above transformations, it follows immediately that the LMI condition (19) of the observer design becomes

$$\begin{bmatrix} -R + \epsilon E_1^TE_1 & A^TR - C^TM^T & 0 \\ RA - MC & -R & RH \\ 0 & H^TR & -\epsilon I \end{bmatrix} \prec 0. \tag{25}$$
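Besides the LMI certificate (24), robustness can be spot-checked by sampling the uncertainty. For a scalar $F \in [-1, 1]$ (which satisfies (21) and matches the uncertainty description used later in the example of Section V), the sketch below evaluates the spectral radius of the uncertain 2-D matrix of (22) on a grid. A spectral radius below 1 for every sample is necessary for the stability certified by Lemma 1, so this is a sanity check rather than a proof; the code is illustrative and not from the paper.

```python
import numpy as np

def robust_spectral_radius_check(A, B, C, K1, K2, H, E1, E2, n_grid=41):
    """Grid a scalar F in [-1, 1] (so F'F <= I) and evaluate the spectral
    radius of the uncertain 2-D ILC matrix of (22) for each sample.
    E1: (1, n), E2: (1, l); use a zero matrix for E2 if B is not uncertain."""
    m = C.shape[0]
    worst = 0.0
    for F in np.linspace(-1.0, 1.0, n_grid):
        dA = F * (H @ E1)                     # Delta A = H F E1
        dB = F * (H @ E2)                     # Delta B = H F E2
        Ak, Bk = A + dA, B + dB               # perturbed system matrices
        Phi = np.block([
            [Ak - Bk @ K1,           Bk @ K2],
            [-C @ Ak + C @ Bk @ K1,  np.eye(m) - C @ Bk @ K2],
        ])
        worst = max(worst, np.max(np.abs(np.linalg.eigvals(Phi))))
    return worst   # < 1 for every sample indicates (sampled) robust stability
```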


IV. THE FLEXIBLE SHAFT SYSTEM

The controller design method will be applied to the flexible shaft system shown in Fig. 2. The dynamical properties of this system are comparable to those of many motion systems such as printers, pick-and-place robots, etc. The flexible shaft system consists of two rotating masses, which are connected through a flexible shaft. The first mass is excited by a motor and the positions of both masses are measured using incremental encoders. The positions can be measured with a resolution of $3.1416\cdot10^{-3}$ rad (2000 increments per revolution).

Fig. 2. Flexible shaft system.

A schematic representation of the flexible shaft system is shown in Fig. 3. The system can be approximated by two masses that are connected by a spring and a damper.

Fig. 3. Schematic representation of the flexible shaft system (masses $m_1$ and $m_2$, spring stiffness $k$, damping $d$, input force $F$, positions $x_1$ and $x_2$).

The force on the first mass, exerted by the motor, is denoted by $F$; the positions of the two masses are denoted by $x_1$ and $x_2$, respectively. The equations of motion for the system of Fig. 3 can be written as

$$m_1\ddot{x}_1 = F - k(x_1 - x_2) - d(\dot{x}_1 - \dot{x}_2),$$
$$m_2\ddot{x}_2 = k(x_1 - x_2) + d(\dot{x}_1 - \dot{x}_2).$$

A continuous-time state-space model with input $u = [F]$ and state $x = [x_1\ \dot{x}_1\ x_2\ \dot{x}_2]^T$ can be derived as

$$\dot{x}(t) = A_c x(t) + B_c u(t), \qquad y(t) = C_c x(t) + D_c u(t), \tag{26}$$

where

$$A_c = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\tfrac{k}{m_1} & -\tfrac{d}{m_1} & \tfrac{k}{m_1} & \tfrac{d}{m_1} \\ 0 & 0 & 0 & 1 \\ \tfrac{k}{m_2} & \tfrac{d}{m_2} & -\tfrac{k}{m_2} & -\tfrac{d}{m_2} \end{bmatrix}, \qquad B_c = \begin{bmatrix} 0 \\ \tfrac{1}{m_1} \\ 0 \\ 0 \end{bmatrix},$$
$$C_c = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}, \qquad D_c = 0.$$

The measured frequency response function (FRF) of the input $u$ to the position of the second mass $x_2$ is shown in Fig. 4. At low frequencies a double integrator character can be seen, i.e. the system behaves like a single mass. At 52 Hz a resonance peak is present, which is caused by the low stiffness of the flexible shaft that connects the two masses.

Based on the measured FRF data, a system model of the form (26) is identified. The parameters of the identified model equal $m_1 = m_2 = 1.85\cdot10^{-4}$ kgm$^2$, $k = 9.95$ Nm/rad and $d = 0.00037$ Nms/rad. The FRF of the identified model is also shown in Fig. 4. The model is used as system model for the simulations and for the design of the controller.

Fig. 4. Measured (solid) and identified (dashed) FRFs of the flexible shaft system.
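For reference, the mapping from the identified parameters to the model (26), and the zero-order-hold discretization used at the start of Section V, might be carried out as follows. This is an illustrative sketch assuming scipy; the numerical values are the identified parameters quoted above.

```python
import numpy as np
from scipy.signal import cont2discrete

# identified parameters from the FRF fit
m1 = m2 = 1.85e-4        # kg m^2
k, d = 9.95, 0.00037     # Nm/rad, Nms/rad

# continuous-time model (26) with state x = [x1, x1dot, x2, x2dot]
Ac = np.array([[0.0,     1.0,    0.0,    0.0],
               [-k/m1,  -d/m1,   k/m1,   d/m1],
               [0.0,     0.0,    0.0,    1.0],
               [k/m2,    d/m2,  -k/m2,  -d/m2]])
Bc = np.array([[0.0], [1.0/m1], [0.0], [0.0]])
Cc = np.array([[0.0, 0.0, 1.0, 0.0]])   # output is the position x2
Dc = np.array([[0.0]])

# zero-order-hold discretization with a 0.5 ms step, as in Section V
Ad, Bd, Cd, Dd, _ = cont2discrete((Ac, Bc, Cc, Dc), dt=0.5e-3, method='zoh')
print(np.round(Ad, 4))
```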

V. RESULTS

In this section, the theoretical findings are validated by means of simulations using the flexible shaft model. First, the continuous-time state-space model (26) is replaced by its discrete-time counterpart. The matrices of the discrete-time model are computed by assuming a zero-order-hold method with a discrete step size of 0.5 ms as

$$A = \begin{bmatrix} 0.9933 & 0.0005 & 0.0067 & 0.0000 \\ -26.7448 & 0.9923 & 26.7448 & 0.0077 \\ 0.0067 & 0.0000 & 0.9933 & 0.0005 \\ 26.7448 & 0.0077 & -26.7448 & 0.9923 \end{bmatrix}, \qquad B = \begin{bmatrix} 0.0007 \\ 2.6953 \\ 0.0000 \\ 0.0074 \end{bmatrix},$$
$$C = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}, \qquad D = 0.$$

A ten percent variation of the model parameters $k$ and $d$ in the continuous-time model (26) is considered. The corresponding uncertainty matrices equal

$$H = \begin{bmatrix} 0 \\ -1 \\ 0 \\ 1 \end{bmatrix}, \qquad E_1 = \begin{bmatrix} 0.4 & 0.0001 & 0.4 & 0.0001 \end{bmatrix}, \qquad E_2 = 0.$$

Solving the LMI (24) (using SeDuMi [13] and YALMIP [14] for the LMI computations) yields for the controller matrices of (9)

$$K_1 = \begin{bmatrix} 155.4751 & 0.1179 & -51.9784 & 0.7667 \end{bmatrix}, \qquad K_2 = 72.1736.$$

Furthermore, the solution to the LMI (25) gives

$$L = 10^3 \cdot \begin{bmatrix} 0.0126 & 2.6852 & 0.0010 & 1.0980 \end{bmatrix}^T.$$

The poles of the closed-loop system equal

$$\rho(A - BK_1) = \{\, 0.8459 + 0.4401i,\ \ 0.8459 - 0.4401i,\ \ 0.9256 + 0.0556i,\ \ 0.9256 - 0.0556i \,\}.$$

All closed-loop poles are located within the unit circle, so the state-feedback controlled system is stable.
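The reported pole locations can be reproduced approximately from the (rounded) matrices and gains listed above; a quick numerical check, not taken from the paper:

```python
import numpy as np

# discrete-time matrices and state-feedback gain as quoted above (rounded values,
# so the computed poles may differ slightly from the reported ones)
A = np.array([[0.9933, 0.0005, 0.0067, 0.0000],
              [-26.7448, 0.9923, 26.7448, 0.0077],
              [0.0067, 0.0000, 0.9933, 0.0005],
              [26.7448, 0.0077, -26.7448, 0.9923]])
B = np.array([[0.0007], [2.6953], [0.0000], [0.0074]])
K1 = np.array([[155.4751, 0.1179, -51.9784, 0.7667]])

poles = np.linalg.eigvals(A - B @ K1)
print(poles)
print("all inside unit circle:", np.all(np.abs(poles) < 1.0))
```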

The reference signal $r$ is depicted in Fig. 5. The reference signal consists of 10 revolutions in the positive direction, a return path, 10 revolutions in the negative direction and a return to the start position.

Fig. 5. Reference signal r.

With the model of the system and the designed controller, an ILC simulation with 10 trials is performed. At each trial, the RMS value of the tracking error $e$ is calculated as

$$\mathrm{RMS}(e) = \sqrt{\frac{1}{N}\sum_{p=1}^{N} e(p)^2}.$$

In Fig. 6 the RMS value of the tracking error is shown as a function of the trial number. The error converges exponentially to $3.8320\cdot10^{-4}$ rad at trial 10.

Fig. 6. RMS value of the error.

The control input $u$ of trial 10 is shown in Fig. 7. The largest input is required at the time instants where the system accelerates or decelerates. This corresponds to the low-frequency mass behavior of the system.

Fig. 7. Control input u of trial 10.
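The trial-domain simulation behind Fig. 6 can be sketched as follows. This is illustrative code, not the authors'; the reference passed to `run_ilc` would be the motion profile of Fig. 5, and the initialization of the first trial with zero stored data is an assumption.

```python
import numpy as np

def run_ilc(A, B, C, K1, K2, r, n_trials=10):
    """Simulate the ILC law (3), (9):
    u_{k+1}(p) = u_k(p) - K1 (x_{k+1}(p) - x_k(p)) + K2 e_k(p+1)."""
    C, K1 = np.atleast_2d(C), np.atleast_2d(K1)
    k2 = float(np.ravel(K2)[0])           # treat K2 as a scalar (SISO)
    n, N = A.shape[0], len(r)
    u_prev = np.zeros(N)                  # u_k; the first trial starts from zero feedforward
    x_prev = np.zeros((N + 1, n))         # stored states of the previous trial
    e_prev = np.zeros(N + 1)              # stored error of the previous trial (padded)
    rms = []
    for _ in range(n_trials):
        x = np.zeros((N + 1, n))
        y = np.zeros(N)
        u = np.zeros(N)
        for p in range(N):
            y[p] = (C @ x[p]).item()
            # control update (3) with (9), using data stored from the previous trial
            u[p] = u_prev[p] - (K1 @ (x[p] - x_prev[p])).item() + k2 * e_prev[p + 1]
            x[p + 1] = A @ x[p] + B.ravel() * u[p]
        e = r - y
        rms.append(np.sqrt(np.mean(e ** 2)))      # RMS(e) as defined above
        u_prev, x_prev = u, x
        e_prev = np.concatenate([e, [0.0]])       # e_k(p+1), padded at the final sample
    return np.array(rms)
```

Calling `run_ilc` with the discrete-time matrices and the gains $K_1$, $K_2$ listed above, and with the actual reference profile, would return the per-trial RMS values corresponding to the curve in Fig. 6.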

VI. CONCLUSIONS

In this paper, 2-D system theory and LMI techniques are exploited to develop a computationally efficient method which simultaneously derives a stabilizing feedback controller as well as a learning controller for ILC in a one-step synthesis. The controller design is written as an LMI condition which guarantees stability in the time domain and convergence in the trial domain. LMI optimization and pole placement techniques are used to obtain the optimal controller parameters, which stabilize the system and result in a converging ILC scheme. Norm-bounded model uncertainties are added to the analysis to provide some degree of robustness of the derived controller. The simulations performed using the flexible shaft system show that the designed controller stabilizes the system and results in an exponential convergence of the tracking error to $\mathrm{RMS}(e) = 3.8320\cdot10^{-4}$ rad in 10 trials.

The method presented in this paper results in a learning controller which consists of a single gain. The structure of the learning filter can be expanded to further improve the performance of the ILC scheme. This will be the subject of further research.

REFERENCES

[1] S. Arimoto, S. Kawamura, and F. Miyazaki, "Bettering operation of robots by learning," Journal of Robotic Systems, vol. 1, no. 2, pp. 123–140, 1984.
[2] D. A. Bristow, M. Tharayil, and A. Alleyne, "A survey of iterative learning control," IEEE Control Systems Magazine, vol. 26, no. 3, pp. 96–114, 2006.
[3] T. Kaczorek, Two-dimensional Linear Systems, ser. Lecture Notes in Control and Information Sciences, vol. 68. Berlin, Germany: Springer-Verlag, 1985.
[4] Z. Geng, R. Carroll, and J. Xie, "Two-dimensional model and algorithm analysis for a class of iterative learning control," International Journal of Control, vol. 52, no. 4, pp. 833–862, 1990.
[5] Y. Fang and T. Chow, "2-D analysis for iterative learning controller for discrete-time systems with variable initial conditions," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 50, no. 5, pp. 722–727, 2003.
[6] J. E. Kurek and M. B. Zaremba, "Iterative learning control synthesis based on 2-D system theory," IEEE Transactions on Automatic Control, vol. 38, no. 1, pp. 121–125, 1993.
[7] K. Gałkowski, J. Lam, S. Xu, and Z. Lin, "LMI approach to state-feedback stabilization of multidimensional systems," International Journal of Control, vol. 76, no. 14, pp. 1428–1436, 2003.
[8] W.-S. Lu, "On a Lyapunov approach to stability analysis of 2-D digital filters," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 41, no. 10, pp. 665–669, 1994.
[9] R. P. Roesser, "A discrete state-space model for linear image processing," IEEE Transactions on Automatic Control, vol. 20, no. 1, pp. 1–10, 1975.
[10] M. Chilali, P. Gahinet, and P. Apkarian, "Robust pole placement in LMI regions," IEEE Transactions on Automatic Control, vol. 44, no. 12, pp. 2257–2270, 1999.
[11] C. Scherer and S. Weiland, DISC Course Linear Matrix Inequalities in Control, Delft University of Technology, Delft, The Netherlands, 2002, available at http://www.cs.ele.tue.nl/sweiland/lmi.htm.
[12] P. P. Khargonekar, I. R. Petersen, and K. Zhou, "Robust stabilization of uncertain linear systems: Quadratic stabilizability and H∞ control theory," IEEE Transactions on Automatic Control, vol. 35, no. 3, pp. 356–361, 1990.
[13] J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods and Software, vol. 11, no. 1, pp. 625–653, 1999, special issue on Interior Point Methods (CD supplement with software).
[14] J. Löfberg, YALMIP 3, 2004, available at http://control.ee.ethz.ch/~joloef/yalmip.msql.