Preprints of the 18th IFAC World Congress Milano (Italy) August 28 - September 2, 2011

Optimal Computational Resource Allocation for Control Task under Fixed Priority Scheduling

Giulio Mancuso ∗   Enrico Bini ∗   Gabriele Pannocchia ∗∗

∗ Scuola Superiore Sant'Anna, Italy (e-mail: [email protected], [email protected])
∗∗ Dept. of Chem. Eng. (DICCISM) – Univ. of Pisa, Italy (e-mail: [email protected])

Abstract: In this paper a new real-time control system co-design method is presented. Given several plants controlled by Linear Quadratic Regulator (LQR) algorithms running on the same real-time platform, a method for the optimal selection of the sampling time of each regulator is proposed. The method finds the solution that minimizes an appropriate overall cost function, which takes into account the performance of each subsystem, subject to a constraint on the computational resource. To deal with this problem, a new state-space approach for modeling systems with any value of computational delay is proposed. The structure of the optimization problem is then exploited so that its solution can be found by solving a minimal set of nonlinear equations. A simple example with three subsystems is presented to highlight the main features of the proposed method.

Keywords: Linear systems, real-time control systems, resource allocation, optimal sampling time selection, systems with time delays.

1. INTRODUCTION

At present, complex systems in common use range from dishwashers and smart-phones to cars and airplanes. Such systems have tens of control algorithms, mostly embedded in a single real-time operating system. Each task shares limited resources such as network bandwidth, memory, computational resources and so on. In a competitive market, effective system design requires either the maximization of performance under given resource constraints or the reduction of the resource requirements of a device for given performance targets.

In real-time control systems, the design of control tasks is very arduous, because not only does the plant-controller interaction have to be taken into account, but also the limit on the resources and, consequently, how the control tasks interact with them. In this paper we focus on optimal computational resource utilization. That is, given a complex system composed of several plants to be controlled, with as many control tasks mapped onto a real-time OS, we propose an optimal selection of the controller sampling times that maximizes the overall performance using limited computational resources.

2. PROBLEM DEFINITION AND RELATED WORK

Our target is to control m continuous-time plants via a Linear Quadratic Regulation (LQR) control algorithm. Since all the algorithms are implemented on a digital platform, control inputs are piecewise constant and subject to output delay. More specifically, for the i-th system we solve

    \min J_i = \int_0^{\infty} \bigl( x_i'(t) Q_i x_i(t) + u_i'(t) R_i u_i(t) \bigr) dt
    s.t.  ẋ_i(t) = A_i x_i(t) + B_i u_i(t)
          u_i(t) = u_i(kT_i − ∆_i),   ∀t ∈ [kT_i, (k+1)T_i), ∀k ∈ N,

where T_i is the sampling period and ∆_i is the time elapsed between sampling and actuation. All the control algorithms share the same computational resource, so that each control task competes for it, affecting the overall performance of all other control systems. Our aim is to find the optimal control sequence and the optimal resource allocation such that an appropriate objective function, which takes into account the performance of the overall system, is minimized subject to computational resource constraints:

    min F(J_1, J_2, ..., J_m)   s.t. "Resource Constraints",    (1)

where the exact meaning and definition of "Resource Constraints" will be clarified later on. As can be noticed in (1), there are two main ingredients in real-time control co-design: an objective function and resource constraints. Several works have addressed real-time control co-design. The first one is from Lehoczky et al. (1996), where the authors did not take into account the computational delay; the objective function to be minimized is a sum of approximations of the real performance costs, constrained by the utilization bound. The first work that considered delayed action is from Kim (1998), where a more sophisticated approximation of each cost, and their sum, is used as the objective function. A more recent work is the one by Bini and Cervin (2008), where the objective function is essentially the same as in the previous work, but the approximation of the delay accounts for how control tasks interfere with each other. However, all these works are based on a simple sum of approximated performance costs, which in some cases may not be enough to ensure sufficient performance in each subsystem. In complex devices, systems with very different dynamics may share the same computational resource. Different dynamics can lead to very different cost functions, and minimizing their weighted sum can lead to good performance for some subsystems and poor performance for others.


3. TASK MODEL AND REAL-TIME SCHEDULING

We consider a system composed of m controllers. Each controller is implemented by a software task τ_i running on a processor. Task τ_i is characterized by the following features:
• a computation time C_i, which represents the time required by the processor to compute the control law;
• a period T_i, which is the sampling period of the controller implemented by the task τ_i.
Another typical feature of the task τ_i is its utilization U_i = C_i / T_i, which represents the fraction of time that task τ_i requires to execute. These tasks are scheduled on the processor by a Fixed Priority (FP) scheduler (Liu and Layland, 1973). Without loss of generality, we assume the tasks to be sorted by decreasing priority, i.e. τ_1 has the highest priority and τ_m the lowest one. Under FP scheduling, the highest priority task τ_1 executes whenever it is ready, and τ_2 can execute only when τ_1 does not. In general, τ_i can execute only when all the higher priority tasks {τ_1, ..., τ_{i−1}} do not. Hence the amount of delay experienced by a task increases as its priority decreases.

The delay ∆_i from the activation to the completion of task τ_i cannot be determined analytically. First of all, it varies from activation to activation. The worst-case and best-case response-time analyses (by Joseph and Pandya (1986) and Redell and Sanfridson (2002), respectively) compute the maximum and minimum possible task delay among all the possible periodic task activations. However, to the best of our knowledge, there is no clear indication of which expression provides the best approximation of the task delay in the context of control systems. Hence, we follow the suggestion by Bini and Cervin (2008), and we define the task delay ∆_i as

    ∆_i = C_i / (1 − \sum_{j=1}^{i−1} U_j).    (2)

This approximation follows from the assumption that any higher priority task τ_j ∈ {τ_1, ..., τ_{i−1}} executes for U_j t in any interval of length t. In (Bini and Cervin, 2008), it is shown that this approximation provides an effective cost reduction in delay-sensitive systems.
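As an illustration only, the following Python sketch evaluates the delay approximation (2) for tasks listed by decreasing priority; the function name task_delays is ours, and the numbers in the example are the execution times and the basic-design periods of Table 1.

```python
# Minimal sketch of Eq. (2), assuming C (execution times) and T (periods)
# are given in decreasing task-priority order.
import numpy as np

def task_delays(C, T):
    """Approximate sampling-to-actuation delay of each task under FP scheduling."""
    C = np.asarray(C, dtype=float)
    T = np.asarray(T, dtype=float)
    U = C / T                                        # utilizations U_i = C_i / T_i
    return np.array([C[i] / (1.0 - U[:i].sum())      # Delta_i = C_i / (1 - sum_{j<i} U_j)
                     for i in range(len(C))])

# Example with three tasks (highest priority first)
print(task_delays([0.002, 0.002, 0.0015], [0.009, 0.008, 0.005]))
```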

4. OPTIMAL SAMPLED-DATA CONTROL OF LINEAR CONTINUOUS-TIME SYSTEMS

4.1 Introduction

Let the following linear time-invariant system be given:

    ẋ(t) = A x(t) + B u(t)
    y(t) = C x(t),                                   (3)

with A ∈ R^{n×n}, B ∈ R^{n×m} and C ∈ R^{p×n}. We assume that (A, B) is controllable and that the state is measurable. We want to find the control input u(·) such that the following cost function is minimized:

    J = \int_0^{\infty} \bigl( x(t)' Q x(t) + u(t)' R u(t) \bigr) dt,

with Q and R positive definite matrices of appropriate dimensions. The solution of this problem is given by u(t) = −R^{−1} B' S x(t), with S the solution of the Algebraic Riccati Equation (ARE):

    A'S + SA + Q − S B R^{−1} B' S = 0.

If we call x_0 the initial state of (3), the final value of the cost function will be J = x_0' S x_0.
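For illustration, a minimal Python/SciPy sketch of the continuous-time LQR solution described above; the plant matrices used here are arbitrary placeholders, not one of the plants studied later in the paper.

```python
# Continuous-time LQR: solve the ARE and form the optimal gain.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -0.5]])              # placeholder plant
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.5]])

S = solve_continuous_are(A, B, Q, R)     # A'S + S A + Q - S B R^{-1} B' S = 0
K = np.linalg.solve(R, B.T @ S)          # u(t) = -R^{-1} B' S x(t) = -K x(t)
x0 = np.array([1.0, 0.0])
J = x0 @ S @ x0                          # optimal cost J = x0' S x0
print(K, J)
```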

However, if the control is realized by a digital system, the input signal is kept constant for a time T, usually referred to as the sampling time, i.e. u(t) = u(kT), ∀t ∈ [kT, (k+1)T), ∀k ∈ N. Then the state evolves according to

    x(t) = Φ(t − kT) x(kT) + Γ(t − kT) u(kT),   t ∈ [kT, (k+1)T),

where we set

    Φ(t) := e^{At},   Γ(t) := \int_0^t e^{A(t−s)} ds \, B.    (4)

If we let x_k := x(kT), u_k := u(kT) and y_k := y(kT), then (3) can be written as

    x_{k+1} = Φ(T) x_k + Γ(T) u_k
    y_k = C x_k,                                     (5)

and the cost function is transformed into

    J = \sum_{k=0}^{\infty} \begin{bmatrix} x_k \\ u_k \end{bmatrix}' \begin{bmatrix} \hat Q & \hat M \\ \hat M' & \hat R \end{bmatrix} \begin{bmatrix} x_k \\ u_k \end{bmatrix},

where

    \begin{bmatrix} \hat Q & \hat M \\ \hat M' & \hat R \end{bmatrix}
    = \int_0^T e^{\begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix}' s} \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix} e^{\begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix} s} ds
    = \int_0^T \begin{bmatrix} Φ(s) & Γ(s) \\ 0 & I \end{bmatrix}' \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix} \begin{bmatrix} Φ(s) & Γ(s) \\ 0 & I \end{bmatrix} ds.    (6)

The matrix in (6) is obviously a function of the sampling period T. The value of the cost function will then be

    J = x_0' \hat S(T) x_0,    (7)

where \hat S(T) = \hat S is the positive definite solution of the Discrete Algebraic Riccati Equation (DARE)

    \hat S = Φ' \hat S Φ + \hat Q − (Φ' \hat S Γ + \hat M)(Γ' \hat S Γ + \hat R)^{−1}(Γ' \hat S Φ + \hat M').

The optimal control is given by

    u_k = −(Γ' \hat S Γ + \hat R)^{−1}(Γ' \hat S Φ + \hat M') x_k = K(T) x_k.    (8)
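A sketch of how the delay-free sampled-data LQR of this subsection could be computed numerically: Φ(T) and Γ(T) from a matrix exponential, the weights (6) by a simple quadrature, and the gain of (8) from the DARE. The helper names (phi_gamma, sampled_lqr) and the quadrature step count are choices made here, not prescribed by the paper.

```python
# Sketch of the delay-free sampled-data LQR: Phi(T), Gamma(T), the weights of
# Eq. (6) by a midpoint-rule quadrature, and the gain of Eq. (8) via the DARE.
import numpy as np
from scipy.linalg import expm, solve_discrete_are

def phi_gamma(A, B, t):
    """Phi(t) = e^{A t}, Gamma(t) = int_0^t e^{A(t-s)} ds B, via an augmented exponential."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    E = expm(M * t)
    return E[:n, :n], E[:n, n:]

def sampled_lqr(A, B, Q, R, T, steps=200):
    n, m = B.shape
    W = np.block([[Q, np.zeros((n, m))],
                  [np.zeros((m, n)), R]])
    H = np.zeros((n + m, n + m))
    ds = T / steps
    for k in range(steps):                          # midpoint rule for the integral in (6)
        Phi_s, Gam_s = phi_gamma(A, B, (k + 0.5) * ds)
        G = np.block([[Phi_s, Gam_s],
                      [np.zeros((m, n)), np.eye(m)]])
        H += G.T @ W @ G * ds
    Qh, Mh, Rh = H[:n, :n], H[:n, n:], H[n:, n:]
    Phi, Gam = phi_gamma(A, B, T)
    Sh = solve_discrete_are(Phi, Gam, Qh, Rh, s=Mh)                      # DARE with cross term
    K = np.linalg.solve(Gam.T @ Sh @ Gam + Rh, Gam.T @ Sh @ Phi + Mh.T)  # u_k = -K x_k, Eq. (8)
    return Phi, Gam, Qh, Mh, Rh, Sh, K
```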

4.2 Optimal Control with Delayed Action

In practice, a delay ∆ ∈ R+ between sampling and actuation always exists, due to the implementation of the control algorithm in a digital system. As pointed out in the previous section, the delay can be greater than the sampling period. Moreover, it does not necessarily depend only on the execution time of the control task, but also on how the task is scheduled on the resource. In this paper we extend the results in (Furuta, 1985), where the case ∆ ≤ T is considered, by also taking into account any value of delay (∆ > T). Indeed, we will show that the final cost of the LQR problem can be written as the sum of two terms, one depending only on the sampling time and one depending only on the delay.

Suppose we sample the state with a fixed sampling period T > 0 and, given a constant sampling-actuation delay ∆ > 0, we define the strictly positive integer h as

    h := ⌈∆ / T⌉,    (9)

which means that (h − 1)T ≤ ∆ ≤ hT. In the classic literature (Franklin et al., 1994; Åström and Wittenmark, 1997) such a system can be modeled as an augmented system that in state-space form becomes

    \hat x_{k+1} = \hat Φ \hat x_k + \hat Γ u_k,    (10)

where

    \hat x_k := [ x_k'  u_{k−h}'  u_{k−(h−1)}'  ⋯  u_{k−2}'  u_{k−1}' ]'

    \hat Φ := \begin{bmatrix}
        Φ(T) & Γ(T) − Γ(hT − ∆) & Γ(hT − ∆) & 0 & ⋯ & 0 \\
        0 & 0 & I & 0 & ⋯ & 0 \\
        0 & 0 & 0 & I & ⋯ & 0 \\
        ⋮ & ⋮ & ⋮ & ⋮ & ⋱ & ⋮ \\
        0 & 0 & 0 & 0 & ⋯ & I \\
        0 & 0 & 0 & 0 & ⋯ & 0
    \end{bmatrix},
    \quad \hat Γ := [ 0 \; 0 \; 0 \; ⋯ \; 0 \; I ]',

and the cost function becomes

    J = J_∆ + \sum_{k=0}^{\infty} \int_{kT+∆}^{(k+1)T+∆} \begin{bmatrix} x(t) \\ u_k \end{bmatrix}' \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix} \begin{bmatrix} x(t) \\ u_k \end{bmatrix} dt,    (11)

where

    J_∆ = \int_0^{∆} \begin{bmatrix} x(t) \\ u(t) \end{bmatrix}' \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix} \begin{bmatrix} x(t) \\ u(t) \end{bmatrix} dt.    (12)

An optimal state feedback control law for the above system can be found by considering the augmented system (10), as detailed by the following results.

Theorem 1. Solving the DLQR problem for system (10) with cost function (11) is equivalent to solving the DLQR problem for the following system

    z_{k+1} = Φ(T) z_k + Γ(T) u_k,    (13)

with cost function

    J = \sum_{k=0}^{\infty} \begin{bmatrix} z_k \\ u_k \end{bmatrix}' \begin{bmatrix} \hat Q & \hat M \\ \hat M' & \hat R \end{bmatrix} \begin{bmatrix} z_k \\ u_k \end{bmatrix},

where z_k ∈ R^n is defined as

    z_k := \underbrace{[ F_x \; F_h \; F_{h−1} \; ⋯ \; F_2 \; F_1 ]}_{F} \begin{bmatrix} x_k \\ u_{k−h} \\ u_{k−(h−1)} \\ ⋮ \\ u_{k−2} \\ u_{k−1} \end{bmatrix},

and

    F := [ F_x \; F_h \; F_{h−1} \; ⋯ \; F_2 \; F_1 ]
    F_x := Φ(∆)
    F_h := Γ(∆) − Γ((h − 1)T)
    F_{h−j} := Γ((h − j)T) − Γ((h − j − 1)T),   j = 1, ..., h − 1.

Proof. If we examine the state evolution for kT + ∆ ≤ t ≤ (k + 1)T + ∆, performing a change of variables inside the integrals, we find

    x(t) = Φ(t − kT − ∆) x(kT + ∆) + Γ(t − kT − ∆) u_k.    (14)

By defining

    z_k := x(kT + ∆),    (15)

and consequently z_{k+1} := x((k + 1)T + ∆), using (14) we find (13). In the same way, the cost function of (13) is obtained by substituting (14) into (11). □

Corollary 2. The optimal feedback is given by

    u_k = K z_k = K F_x x_k + \sum_{j=1}^{h} K F_j u_{k−j},    (16)

where K is the optimal DLQR gain without delay (8).

Remark 3. By solving the equivalent DLQR problem we reduce the size of the system used for control computation from (n + hm) states to n, independently of the delay. The equivalent system (13) is controllable iff the original discrete system (5) is controllable as well.

Fig. 1. Initial condition of the system: input u(t) versus time t.

Corollary 4. The final cost of the equivalent problem is

    J = J_∆ + z_0' \hat S z_0
      = J_∆ + \begin{bmatrix} x_0 \\ u_{−h}(0) \\ u_{−(h−1)}(0) \\ ⋮ \\ u_{−2}(0) \\ u_{−1}(0) \end{bmatrix}' F' \hat S F \begin{bmatrix} x_0 \\ u_{−h}(0) \\ u_{−(h−1)}(0) \\ ⋮ \\ u_{−2}(0) \\ u_{−1}(0) \end{bmatrix},    (17)

where \hat S is the solution of the DARE of the discrete-time system without delay and J_∆ is given by (12).

If u_{−h}(0) = u_{−(h−1)}(0) = ⋯ = u_{−1}(0) = u(0), as shown in Figure 1, we can define

    \hat F := \Bigl[ F_x \;\; \bigl( F_h + \sum_{j=1}^{h−1} F_{h−j} \bigr) \Bigr] = [ Φ(∆) \;\; Γ(∆) ],

and the cost J_∆ becomes

    J_∆ = \int_0^{∆} \begin{bmatrix} x(t) \\ u(t) \end{bmatrix}' \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix} \begin{bmatrix} x(t) \\ u(t) \end{bmatrix} dt
        = \begin{bmatrix} x_0 \\ u(0) \end{bmatrix}' \begin{bmatrix} \hat Q_∆ & \hat M_∆ \\ \hat M_∆' & \hat R_∆ \end{bmatrix} \begin{bmatrix} x_0 \\ u(0) \end{bmatrix},

with

    \begin{bmatrix} \hat Q_∆ & \hat M_∆ \\ \hat M_∆' & \hat R_∆ \end{bmatrix} = \int_0^{∆} \begin{bmatrix} Φ(s) & Γ(s) \\ 0 & I \end{bmatrix}' \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix} \begin{bmatrix} Φ(s) & Γ(s) \\ 0 & I \end{bmatrix} ds.

Finally, the total cost function (17) can be rewritten as

    J = J_∆ + z_0' \hat S z_0
      = \begin{bmatrix} x_0 \\ u(0) \end{bmatrix}' \begin{bmatrix} \hat Q_∆ & \hat M_∆ \\ \hat M_∆' & \hat R_∆ \end{bmatrix} \begin{bmatrix} x_0 \\ u(0) \end{bmatrix} + \begin{bmatrix} x_0 \\ u(0) \end{bmatrix}' \hat F' \hat S \hat F \begin{bmatrix} x_0 \\ u(0) \end{bmatrix}
      = \begin{bmatrix} x_0 \\ u(0) \end{bmatrix}' Π(T, ∆) \begin{bmatrix} x_0 \\ u(0) \end{bmatrix},

where

    Π(T, ∆) := \hat S_∆(T, ∆) + \begin{bmatrix} \hat Q_∆ & \hat M_∆ \\ \hat M_∆' & \hat R_∆ \end{bmatrix},
    \quad \hat S_∆(T, ∆) := \hat F(∆)' \hat S(T) \hat F(∆).

Remark 5. In the final cost (17), the effects of the sampling time and of the delay are decoupled into different terms. The sampling time is only taken into account in the solution of the DARE, \hat S(T), unlike the delay, which affects the overall cost only through the matrix \hat F(∆) and the term J_∆(∆). It is easy to note from (15) that if the original state x ∈ X, then the variable z ∈ X as well. This means that, for constrained systems, a constraint on the state variable x(k) can be seamlessly translated into a constraint on the variable z(k). The idea behind the change of variables is to shift the sampling instant so as to synchronize it with the control action. Obviously, this approach can be applied thanks to the synchronization of the state sampling and the control action; the method fails for systems with time-varying delay or sampling. Besides its current use in the design of sampled-data LQR for continuous-time systems, the proposed model for arbitrary delay handling could also prove useful to link existing results (Åström and Wittenmark, 1997) in the input-output domain with the state-space approach. However, this goes beyond the scope of the present paper.
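Continuing the numerical sketch, the decoupled cost matrix Π(T, ∆) can be assembled from Ŝ(T), F̂(∆) and the J_∆ weight. The code below reuses phi_gamma and sampled_lqr from the Section 4.1 sketch, and the quadrature is again a rough placeholder rather than the paper's implementation.

```python
# Pi(T, Delta) = F_hat(Delta)' S_hat(T) F_hat(Delta) + [Q_Delta M_Delta; M_Delta' R_Delta]
import numpy as np

def cost_matrix(A, B, Q, R, T, delta, steps=200):
    n, m = B.shape
    W = np.block([[Q, np.zeros((n, m))],
                  [np.zeros((m, n)), R]])
    # weight of J_Delta: integral over [0, Delta] (midpoint rule)
    Jd = np.zeros((n + m, n + m))
    ds = delta / steps
    for k in range(steps):
        Phi_s, Gam_s = phi_gamma(A, B, (k + 0.5) * ds)
        G = np.block([[Phi_s, Gam_s],
                      [np.zeros((m, n)), np.eye(m)]])
        Jd += G.T @ W @ G * ds
    Phi_d, Gam_d = phi_gamma(A, B, delta)        # F_hat(Delta) = [Phi(Delta) Gamma(Delta)]
    F_hat = np.hstack([Phi_d, Gam_d])
    S_hat = sampled_lqr(A, B, Q, R, T)[5]        # S_hat(T): the delay enters only via F_hat and Jd
    return F_hat.T @ S_hat @ F_hat + Jd
```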

5. OPTIMAL RESOURCE ALLOCATION METHOD

5.1 Objective Function of the Overall System

In this section we introduce the overall objective function F(J_1, J_2, ..., J_m) to be minimized, which takes into account the performance of the overall system. Let us start by considering the cost of a single control system. It has been shown that the cost function of the system, taking into account both delay and sampling period, is J = ξ_0' Π(T, ∆) ξ_0 with ξ_0 = [x_0' u(0)']'. Since we are examining a design problem, it would not make sense to have a cost function that depends on the initial state and control. To remove the dependency, we use the result

    λ_min(Π(T, ∆)) ≤ ξ_0' Π(T, ∆) ξ_0 / (ξ_0' ξ_0) ≤ λ_max(Π(T, ∆)),

where λ_min(Π(T, ∆)) and λ_max(Π(T, ∆)) denote respectively the minimum and maximum eigenvalue of Π(T, ∆). Although the maximum and minimum eigenvalues are suitable candidates to represent the single cost function, λ_max(Π(T, ∆)) could be too pessimistic and λ_min(Π(T, ∆)) too optimistic. For this reason a normalized stochastic measure of performance has been introduced. Given the continuous-time system, and assuming that the initial condition ξ_0 is a zero-mean, uncorrelated, identically distributed random variable with unit covariance, a normalized stochastic measure of performance (Diduch and Doraiswami, 1986, 1987) is defined as

    \hat J = trace( Π(T, ∆) ).    (18)

The positive scalar \hat J can be viewed as an average cost performance. However, as noted before, we consider the case in which the same resource is shared among systems with very different dynamics (e.g. a car, where the same unit could control the power train, suspension, spark ignition and so on). Thus, the final costs associated with different subsystems may vary significantly, and it would be inappropriate to directly sum the values \hat J of each system. To overcome this problem we define the normalized measure for the i-th subsystem as

    f_i(T_i, ∆_i) := trace( Π_i(T_i, ∆_i) ) / trace( Π_i(0, 0) ) = trace( Π_i(T_i, ∆_i) ) / trace( S_i ),    (19)

where S_i is the solution of the continuous-time ARE. It is easy to note that f_i(0, 0) = 1 and f_i(T_i, ∆_i) ≥ 1, because any sampled-data LQR is suboptimal with respect to the continuous-time LQR (corresponding to f_i(0, 0)). Since we do not want to penalize any subsystem, we minimize the worst performance over all subsystems, which can be translated into the objective function

    F(T, ∆) = max_{i=1,...,m} { f_i(T_i, ∆_i) },    (20)

where T := (T_1, T_2, ..., T_m) and ∆ := (∆_1, ∆_2, ..., ∆_m). Later we will assume that

    ∂f_i/∂T_i > 0,   ∂f_i/∂∆_i > 0,    (21)

which means that the performance of the subsystems always degrades as T and ∆ increase. It has been proved (Schlueter and Athans, 1971) that ∂f_i/∂T_i > 0 holds when the system has no imaginary poles. Otherwise there always exists a countably infinite number of sampling periods, called pathological sampling periods, at which the system loses complete controllability and the cost can tend to infinity, as shown in Figure 2. However, it has been shown (Diduch and Doraiswami, 1986, 1987) that the condition always holds if T is considered in a neighborhood of the origin. We can also observe that the first and second derivatives of f_i exist almost everywhere, except at a countably infinite number of points (Schlueter and Athans, 1971; Furuta, 1985). The condition ∂f_i/∂∆_i > 0 is true in general, but there are particular cases in which this assumption may not be satisfied; more details can be found in (Skogestad and Postlethwaite, 2007). This information can be useful when numerical algorithms exploit it to solve the optimization problems.

Fig. 2. Cost as a function of sampling time and delay: an example of pathological sampling.

5.2 The Period Assignment Problem

In this section we investigate how the periods of the controllers can be assigned so that the overall cost (20) is minimized. When m control tasks execute on the same physical processor they are subject to a resource constraint, which can be formulated as a utilization upper bound as follows:

    \sum_{i=1}^{m} C_i / T_i ≤ α.    (22)

The value of α indicates the amount of overall resource that is dedicated to the control tasks. If α = 1, then the processor is fully dedicated to the controllers. If α = m(\sqrt[m]{2} − 1), that is the Liu and Layland bound (Liu and Layland, 1973), then all controllers are guaranteed to have a delay ∆_i not greater than the period T_i. A smaller value of α implies a higher cost. If we model the delay ∆_i as in (2), then the problem of minimizing (20) can be written as

    min_{T,∆}  max_{i=1,...,m} { f_i(T_i, ∆_i) }
    s.t.  g(T) ≤ 0
          h_i(T_i, ∆_i) = 0,   i = 1, ..., m,    (23)

where

    g(T) := \sum_{i=1}^{m} C_i / T_i − α    (24)

    h_i(T_i, ∆_i) := ∆_i − C_i / (1 − \sum_{j=1}^{i−1} C_j / T_j).    (25)
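A small sketch of the quantities entering problem (23): the normalized measure (19) and the min-max objective (20). It reuses cost_matrix from the Section 4.2 sketch; plants is a hypothetical list of (A, B, Q, R) tuples, one per subsystem, ordered by decreasing task priority.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def normalized_cost(plant, T, delta):
    A, B, Q, R = plant
    S = solve_continuous_are(A, B, Q, R)          # trace(Pi_i(0, 0)) = trace(S_i)
    return np.trace(cost_matrix(A, B, Q, R, T, delta)) / np.trace(S)   # f_i, Eq. (19)

def worst_case_cost(plants, T, Delta):
    # objective (20): worst normalized cost over all subsystems
    return max(normalized_cost(p, Ti, Di) for p, Ti, Di in zip(plants, T, Delta))
```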


It can be noticed that, for any given value of α < 1, a set of values (T̂, ∆̂) can always be found such that problem (23) is feasible: the smaller α, the larger the periods T_i needed to satisfy (24). Constraint (25), instead, gives a relation between sampling times and delays.

We will now find the solution of the optimization problem (23) by rewriting it in a more suitable way. If we introduce the new variable z ∈ R and define

    s' := [ T_1 ... T_m  ∆_1 ... ∆_m  z ]
    c' := [ 0  0  ...  0  1 ]
    \hat f_i(s) := f_i(T_i, ∆_i) − z
    \hat g(s) := g(T)
    \hat h_i(s) := h_i(T_i, ∆_i),

the original problem (23) becomes

    min_s  f_0(s) = c' s
    s.t.   \hat f_i(s) ≤ 0,   i = 1, ..., m
           \hat g(s) ≤ 0
           \hat h_i(s) = 0,   i = 1, ..., m.    (26)

From the KKT conditions we know that the solution s* of problem (26) must satisfy

    ∇f_0(s*) + \sum_{i=1}^{m} λ_i ∇\hat f_i(s*) + \sum_{i=1}^{m} µ_i ∇\hat h_i(s*) + β ∇\hat g(s*) = 0,    (27)

where all the Lagrange multipliers are scalars that satisfy

    λ_i ≥ 0,   β ≥ 0,   λ_i \hat f_i(s*) = 0,   β \hat g(s*) = 0.    (28)

Lemma 6. If ∂f_i/∂∆_i > 0 and ∂f_i/∂T_i > 0, the solution (T*, ∆*) of (26) lies at a point where all the constraints are active.

Proof. We split (27) into a system of "linear" equations in the unknown Lagrangian variables, divided into three different sets:

    \sum_{i=1}^{m} λ_i = 1    (29)

    λ_i ∂f_i/∂∆_i + µ_i = 0,   i = 1, ..., m    (30)

    λ_1 ∂f_1/∂T_1 + µ_2 ∂h_2/∂T_1 + ⋯ + µ_m ∂h_m/∂T_1 + β ∂g/∂T_1 = 0
    ⋮
    λ_{m−1} ∂f_{m−1}/∂T_{m−1} + µ_m ∂h_m/∂T_{m−1} + β ∂g/∂T_{m−1} = 0
    λ_m ∂f_m/∂T_m + β ∂g/∂T_m = 0.    (31)

Since by hypothesis ∂f_i/∂∆_i > 0 and ∂f_i/∂T_i > 0 for all i = 1, ..., m, from equation (30) we find that

    λ_i ∂f_i/∂∆_i = −µ_i ≥ 0,   i = 1, ..., m,    (32)

which means

    λ_i = 0 ⟺ µ_i = 0,   λ_i > 0 ⟺ µ_i < 0,

and it is easy to note from (22), (25) and (28) that

    λ_i ∂f_i/∂T_i ≥ 0;   β ∂g/∂T_i ≤ 0;   \sum_j µ_j ∂h_j/∂T_i ≤ 0.    (33)

Now, exploiting (32) and (33), we observe that

    if β = 0 then λ_i = 0, µ_i = 0   ∀i = 1, ..., m    (34)
    if β ≠ 0 then λ_i ≠ 0, µ_i ≠ 0   ∀i = 1, ..., m.    (35)

However, (34) is not possible because it would not satisfy (29). Hence, all Lagrange multipliers (β, λ_i, µ_i) are nonzero, i.e. all constraints of (26) are active at the optimal solution. □

Remark 7. Since the cost function f_0(s) of (26) does not admit a zero gradient, the solution cannot lie in the interior of the feasible region.

Equation (25) gives a direct relation between ∆_i and {T_j}_{j=1}^{i−1}, which means that we can express ∆_i as a function of T and remove the constraint (25); we denote such a relation as ∆_i(T). It can also be noted that the last sampling period only affects the performance of the last controller (Bini and Cervin, 2008). Intuitively, this can be explained by noting that a lower priority task cannot affect the performance of higher priority tasks under Fixed Priority scheduling.

Theorem 8. (Solution of the optimization problem). If ∂f_i/∂∆_i > 0 and ∂f_i/∂T_i > 0, the solution of problem (23) can be found by solving the set of (m − 1) nonlinear equations

    f_i(T_i, ∆_i(T)) − f_{i+1}(T_{i+1}, ∆_{i+1}(T)) = 0,   i = 1, ..., m − 1    (36)

in the (m − 1) variables T_1, T_2, ..., T_{m−1}.

Proof. From Lemma 6, all constraints in (26) are active at the optimal point. Thus, the solution of (26), or of its equivalent problem (23), can be found by solving the m − 1 nonlinear equations (36) together with g(T) = \sum_{i=1}^{m} C_i/T_i − α = 0, in the m variables T_i, i = 1, ..., m. Moreover, we use

    T_m = C_m / ( α − \sum_{i=1}^{m−1} C_i / T_i ),

and we complete the proof by removing the last equation g(T) = 0. □
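A sketch of how Theorem 8 could be used numerically: the (m − 1) equations (36) are solved for T_1, ..., T_{m−1}, with T_m eliminated through the active utilization constraint (24). SciPy's fsolve plays here the role of the Octave "fsolve" solver mentioned in Section 6; normalized_cost is the helper sketched above, and assign_periods is a name chosen for illustration only.

```python
import numpy as np
from scipy.optimize import fsolve

def assign_periods(plants, C, alpha, T_guess):
    C = np.asarray(C, dtype=float)
    m = len(C)

    def full_T(T_head):
        # T_m from the active utilization constraint (24): sum_i C_i/T_i = alpha
        Tm = C[-1] / (alpha - (C[:-1] / T_head).sum())
        return np.append(T_head, Tm)

    def residuals(T_head):
        T = full_T(T_head)
        U = C / T
        D = [C[i] / (1.0 - U[:i].sum()) for i in range(m)]      # Delta_i(T), Eqs. (2)/(25)
        f = [normalized_cost(plants[i], T[i], D[i]) for i in range(m)]
        return [f[i] - f[i + 1] for i in range(m - 1)]          # Eq. (36)

    T_head = fsolve(residuals, np.asarray(T_guess, dtype=float)[:m - 1])
    return full_T(T_head)
```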

6. SIMULATION RESULTS

The proposed optimal design method has been tested using three different plants, each one featuring different dynamics. The state space matrices of three systems are reported below:  −2.5 0 0 0  1  0  0 0 0 0 0 1 0 A1 = 0 0 1 4.8 , B1 = 0 , C1 = 0 0 0 −4.8 1 1 26.25  0 1.00 0   0  1 0 0 2.67 0 1.82 , C 0 = 0 0 A2 = 00 −0.18 2 0 0 1.00 , B2 = 0 01 0 −0.45 31.18 0 4.55 00 h0 0 i h i h1i 0 0 6 0 5 0 0 0.0085 0 A3 = 10 , B3 = 10 , C3 = 0 , 0 −0.01 −1.4545


3.6365

0

whereas the LQR matrices used in each controller are:

    Q_1 = diag(12, 3, 5, 2),    R_1 = 3
    Q_2 = diag(2, 0.3, 15, 1),  R_2 = 5
    Q_3 = diag(0.4, 6, 3.5),    R_3 = 0.1.

The execution times of the control algorithms were set to be proportional to the state dimension. All optimization algorithms were implemented using Octave and its nonlinear solver "fsolve". The behavior of each subsystem is shown in Figure 3, where it is easy to note that the cost increases as the sampling time and the delay increase. Although the dynamics of the systems are very different, the design achieves satisfactory behavior for all systems.
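For example, with the task parameters of Table 1 (α = 0.7, C = [0.0020, 0.0020, 0.0015] s), the period assignment sketched in Section 5 could be invoked as follows; plants stands for the three (A_i, B_i, Q_i, R_i) tuples of the case study, and the initial guess is arbitrary.

```python
C = [0.0020, 0.0020, 0.0015]          # execution times from Table 1
T_opt = assign_periods(plants, C, alpha=0.7, T_guess=[0.05, 0.05, 0.05])
print(T_opt)                           # optimized periods T_1, T_2, T_3
```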

Table 1. Real-time control system parameters with α = 0.7 and optimized results (times in seconds)

    System   C_i      Basic design               Optimized design
                      ∆_i      T_i      f_i      ∆_i      T_i      f_i
    1        0.0020   0.0020   0.0090   1.0042   0.0020   0.1537   1.0527
    2        0.0020   0.0026   0.0080   1.0272   0.0020   0.0926   1.0527
    3        0.0015   0.0028   0.0050   1.3361   0.0016   0.0022   1.0527

Fig. 3. Costs f_i of each subsystem as a function of sampling time and delay.

In Table 1, the real-time control system parameters are shown and compared for a basic design (as detailed next) and for the proposed method. In particular, in the basic design we started from a nominal sampling time for each task and then increased the sampling times until the utilization constraint was fulfilled. We notice that such a simple design approach is only possible when the number of control subsystems is very limited; the proposed methodology, on the other hand, has general applicability. We also notice that, using the proposed method, the achieved sub-optimality of the discrete-time LQR systems with respect to the corresponding continuous-time LQR is 5.3% in all subsystems, while in the basic approach the worst-case sub-optimality is 33.6%.

7. CONCLUSIONS

In this paper an optimal technique for real-time control system co-design has been presented. The framework consists of several control tasks that share the same computational resource, scheduled by a real-time operating system. First, we presented an extension of the work by Furuta (1985) for modeling systems with any value of delay. Such a model facilitates the design of optimal controllers by decoupling the effects of delay and sampling time in the final cost. The approach offers a further advantage in the design phase: it keeps the dimension of the system used for control computation independent of the delay, unlike the classical approach. We have also proposed an objective function that appropriately takes into account the performance of the overall system and of each control task. The structure of the problem has been exploited by rewriting its solution as the solution of an appropriate set of nonlinear equations. We have demonstrated our method using an example with three subsystems having very different dynamics.

ACKNOWLEDGEMENTS

We thank Dr. E.C. Kerrigan for useful comments on this paper.

REFERENCES

Åström, K.J. and Wittenmark, B. (1997). Computer-Controlled Systems: Theory and Design. Prentice Hall, third edition.
Bini, E. and Cervin, A. (2008). Delay-aware period assignment in control systems. In Proceedings of the IEEE Real-Time Systems Symposium, 291–300. IEEE Computer Society.
Diduch, C. and Doraiswami, R. (1987). Sample period effects in optimally designed digital control systems. IEEE Transactions on Automatic Control, 32(9), 838–841.
Diduch, C.P. and Doraiswami, R. (1986). The effect of sample period on optimally designed servomechanism control systems. Optimal Control Applications and Methods, 7(4), 355–363.
Franklin, G., Emami-Naeini, A., and Powell, J.D. (1994). Feedback Control of Dynamic Systems. Addison-Wesley Longman Publishing Co., third edition.
Furuta, K. (1985). Sampled-data optimal control of continuous systems for quadratic criterion function taking account of delayed control action. International Journal of Control, 41(4), 1051–1060.
Joseph, M. and Pandya, P.K. (1986). Finding response times in a real-time system. The Computer Journal, 29(5), 390–395.
Kim, B.K. (1998). Task scheduling with feedback latency for real-time control systems. In Fifth International Conference on Real-Time Computing Systems and Applications, Proceedings, 37–41.
Lehoczky, J.P., Sha, L., and Shin, K.G. (1996). On task schedulability in real-time control systems. In Proceedings of the IEEE Real-Time Systems Symposium, 13–21.
Liu, C.L. and Layland, J.W. (1973). Scheduling algorithms for multiprogramming in a hard real-time environment. Journal of the Association for Computing Machinery, 20(1), 46–61.
Redell, O. and Sanfridson, M. (2002). Exact best-case response time analysis of fixed priority scheduled tasks. In Proceedings of the 14th Euromicro Conference on Real-Time Systems, 165–172. Wien, Austria.
Schlueter, R.A. and Athans, M. (1971). On the behaviour of optimal linear sampled-data regulators. International Journal of Control, 13(2), 343–361.
Skogestad, S. and Postlethwaite, I. (2007). Multivariable Feedback Control: Analysis and Design. Wiley, second edition.