UNIVERSITY OF MINNESOTA
This is to certify that I have examined this bound copy of a doctoral thesis by Raktim Bhattacharya and have found that it is complete and satisfactory in all respects, and that any and all revisions required by the final examining committee have been made.
Prof. Gary J. Balas
Name of Faculty Advisor
Signature of Faculty Advisor
Date
GRADUATE SCHOOL
Transformation of Linear Control Algorithms into Operationally Optimal Real-Time Tasks
A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY
Raktim Bhattacharya
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
Prof. Gary J. Balas, Advisor
January 2003
© Raktim Bhattacharya, January 2003
Acknowledgements

First of all, I would like to thank my advisor, Professor Gary J. Balas, for his guidance, patience and support. The work presented in this thesis would not have been possible without his encouragement and the long discussions with him that enabled me to define this research topic. I would also like to thank Prof. Bruce Francis and Prof. Tryphon Georgiou for sharing with me their insight into sampled-data control systems and multi-rate systems.
I am grateful to my colleagues in “room 15” for making my stay at the University of Minnesota memorable. I would like to thank Volkan Nalbantoglu, Jeff Barker, Jack Ryan, Jong-Yeob Shin, Andrés Marcos, Paul Blue, Tamás Keviczky, Subhabrata Ganguli, Richard Russel, Nachiket Bapat, Praveen Vijayaraghavan, Deenadayalan Naranswami, Brett Otteson and Laurent Wenger for the discussions, coffee breaks, movies, musical interludes, softball, football, food and above all their friendship.
I would like to express my sincere gratitude to Prof. T. R. Bose and Prof. K. S. Roy for developing my interest in Physics and Mathematics, Prof. M. K. Laha and Prof. P. K. Sinha for encouraging me to pursue higher studies, and Mr. Abhijit Sircar for developing my love for Computer Science and algorithms.
I dedicate this thesis to my dear mother Bharati Bhattacharya without whose constant encouragement and love I would not have been able to clear even grade one, to my father Ranendra Kumar Bhattacharya who exposed me to the wonderful world of science and mathematics at an early age and instilled in me the intellectual curiosity which has led me to this endeavour, and to Shabari for her support and love that brought out the best in me.
Transformation of Linear Control Algorithms into Operationally Optimal Real-Time Tasks by Raktim Bhattacharya In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy Department of Aerospace Engineering & Mechanics University of Minnesota (Defended December 20th, 2002)
Abstract

Today, the role of a control algorithm is evolving from static designs, synthesised off-line, to dynamic algorithms that adapt in real-time to changes in the controlled system and its environment. The hardware realisation of such control algorithms will result in complex, dynamic real-time systems. These real-time systems are expected to adapt to the time varying nature of the environment they interact with. Therefore, it is expected that there will be transient overloads on the processor implementing the control algorithm. In the presence of transient overloads, it may not be possible for the processor to allocate adequate computational time to each sub-task so their deadlines can be met. This thesis focusses on transforming linear control algorithms into anytime control algorithms that can compromise on the quality of the controller output to conserve system resources. This provides flexibility to the scheduling algorithm to guarantee on-time completion of all the real-time tasks within their respective deadlines. The thesis also presents a new model of computation, where a linear controller is transformed into a multi-rate modal system. From the point of view of real-time scheduling, this translates to the decomposition of a real-time task with a given deadline into a set of sub-tasks with reduced computational overheads and varied deadlines. It is shown that implementing a controller as a multi-rate system reduces the required computational resources by a significant amount. The decomposition of a single-rate
controller to a multi-rate controller provides flexibility to the scheduling algorithm to adapt to the changes of the environment.
Contents

Chapter 1  Introduction  1
1.1  What is Control?  1
1.2  History of Control  1
1.3  Change in the Role of Control  3
1.4  Control Systems as Real-Time Systems  5
1.4.1  Strictness of Deadlines  5
1.4.2  Reliability  5
1.4.3  Size and Degree of Coordination  6
1.4.4  Environment  6
1.4.5  Fault Tolerance  7
1.5  Prior Art  8
1.6  Contribution of the Thesis  10
1.7  Organisation of the Thesis  10

Chapter 2  Anytime Control Algorithms: Model Reduction Approach  13
2.1  Introduction  13
2.2  Complexity Analysis via FLOP Count  16
2.3  Variation of CPU Time for Linear Controllers  17
2.3.1  Balanced Truncation  17
2.3.2  Residualisation  18
2.3.3  Variation of Run-Time  19
2.4  Controller Switching Algorithm with Minimal CPU Overhead  20
2.4.1  Switching Algorithm for Balanced Systems  20
2.4.1.1  Switching from higher to lower order controller  20
2.4.1.2  Switching from reduced to full order controller  21
2.4.1.3  Generalisation of Algorithm  22
2.4.1.4  Analysis of the Switching Algorithm  22
2.4.2  Switching Algorithm for Residualised Systems  24
2.4.2.1  Switching from higher to lower order controller  24
2.4.2.2  Switching from reduced to full order controller  25
2.4.2.3  Generalisation of the Switching Algorithm  26
2.4.2.4  Analysis of the Switching Algorithm  28
2.5  Proof of Stability  31
2.5.1  Balanced Realisation Reduction  31
2.5.2  State Reduction via Residualisation  34
2.6  Example  37
2.6.1  Balanced Realisation Reduction  37
2.6.2  State Reduction via Residualisation  39
2.7  Comparison with Anytime Algorithms  41

Chapter 3  A Computational Model for Reduction of Execution Time of Linear Control Algorithms  46
3.1  Introduction  46
3.2  Fundamentals of Digital Multi-Rate Systems  48
3.2.1  Sampler S  48
3.2.2  Zero-Order Hold H  49
3.2.3  Lifting Operator  50
3.2.3.1  Lifting of Discrete-Time Signals  50
3.2.3.2  Lifting Discrete-Time Systems  51
3.2.4  M-Fold Decimator  52
3.2.5  M-Fold Expander  52
3.2.6  Type 1 Polyphase Decomposition of Digital Filters  53
3.2.7  Lifting Operator and Blocking Mechanism in Digital Signal Processing  54
3.3  Transformation of LTI Controllers to Multi-Rate Systems  58
3.3.1  Scheduling Algorithm for Uniform Reduction in CPU Overhead  59
3.3.2  Formal Definition of the Transformation  60
3.3.3  Frequency Response of Ψ̂ = T(K)  63
3.3.3.1  Transfer Function for Ψ̂1  63
3.3.3.2  Transfer Function for Ψ̂2  65
3.3.3.3  Transfer Function for Ψ̂3  66
3.3.3.4  Transfer Function for Ψ̂4  66
3.3.4  Reduction in Computational Overhead  69
3.4  Example  70

Chapter 4  Optimal Multi-Rate Decomposition of LTI Controllers  78
4.1  Introduction  78
4.2  Background on Real-Time Systems and Scheduling Theory  79
4.2.1  Real-Time Tasks  80
4.2.1.1  Periodicity of LTI Controllers  80
4.2.1.2  Computational Time of LTI Controllers  81
4.2.2  Processor Utilisation Factor  81
4.2.3  Task Scheduling  82
4.2.3.1  Classical Uniprocessor Scheduling Algorithm  82
4.2.4  Scheduling Algorithms and Real-Time Control Systems  83
4.3  Background in Robust Control  84
4.3.1  Linear Fractional Transformations  84
4.3.2  Linear Feedback Control  85
4.3.3  Structured Singular Value  86
4.3.4  Robust Stability and Performance  88
4.3.4.1  Robust Stability  88
4.3.4.2  Robust Performance  88
4.4  Lifting Operator, L_M  89
4.5  Problem Definition  91
4.6  Formulation of the Optimisation Problem  94
4.7  Solving the Optimisation Problem  95
4.7.1  Solution to the Nonlinear Programming Problem  95
4.7.1.1  Lifted Closed Loop System  95
4.7.1.2  Modification of Optimisation Problem  99
4.7.1.3  Practical Difficulties  99
4.7.2  Solution by a Search Algorithm  100
4.7.2.1  Algorithm 1: Lifted Discrete-Time System  100
4.7.2.2  Algorithm 2: Approximated Sampled-Data System  102
4.8  Example  104
4.8.1  Frequency Response  105
4.8.2  Robust Performance  106
4.8.3  Time Domain Analysis  108
4.9  Effect of Scheduling Algorithm on Closed-Loop Response  108
4.9.1  Order of Execution of Tasks and Choice of Input  108
4.9.2  Preemptive vs Non-Preemptive Scheduling  109

Chapter 5  Summary  112
5.1  Anytime Control Algorithms  112
5.2  Computationally Efficient Digital Implementation of Linear Control Algorithms  113
5.3  Optimal Multi-rate Decomposition of Linear Control Algorithms  114
5.4  Future Work  115
5.4.1  Imprecise Computation  115
5.4.2  Approximate Solutions of Ordinary Differential Equations  115
5.4.3  Variable Sampling-Rate  116
5.4.4  Multi-Rate Controller Design  116
List of Figures

2.1  Closed-loop System with Disturbance z  23
2.2  Closed-loop System with Disturbance z  23
2.3  Closed-loop System with Decaying Disturbance Δy_c  29
2.4  Frequency Response of (γ_ref, V_ref) → (γ, V) for full order controller (solid), K_p ∈ K_bal (dash-dot) and K_p ∈ K_resid (dash-dash).  43
2.5  Closed-loop time response with full order controller (thick solid), K_p ∈ K_bal (thin solid) and K_p ∈ K_resid (dashed).  44
2.6  Closed-loop time response with switching of controllers, full order controller (thick solid), K_p ∈ K_bal (thin solid) and K_p ∈ K_resid (dashed).  45
3.1  A sampler.  48
3.2  A zero-order hold.  49
3.3  Lifting of discrete-time LTI system.  51
3.4  An M-fold decimator.  52
3.5  M-fold expander.  53
3.6  Representation of digital block filtering.  55
3.7  Lifting of discrete-time LTI system.  56
3.8  Representation of digital block filtering with delay chain.  57
3.9  Transformation of LTI controllers to multi-rate systems.  58
3.10  Ψ̂1 as a linear periodically time varying system.  64
3.11  Ψ̂1 as a linear time invariant system.  64
3.12  Ψ̂2 as an LTI system.  65
3.13  Frequency response of F̂_avg (solid), first-order LTI system with cut-off 2π/(Mh) (dashed).  68
3.14  Ψ̂4 as an LTI system.  68
3.15  Maximum singular value plot  71
3.16  Maximum singular value plot  72
3.17  Closed-loop system in lifted I/O space.  72
3.18  Frequency response of closed loop system  75
3.19  Response of P̂_i, i = 0 (thick solid), 1 (solid), 2 (dash-dot), 3 (dot), 4 (dash-dash) to velocity step command.  76
3.20  Response of P̂_i, i = 0 (thick solid), 4 (solid) to velocity step command with gust.  77
4.1  Plant  85
4.2  Lower linear fractional transformation  85
4.3  Upper linear fractional transformation  86
4.4  Output feedback and LFT F_l(P, K)  86
4.5  The general problem with structured uncertainty  87
4.6  Multi-rate sampled-data control system  96
4.7  Lifted plant-controller interconnection  97
4.8  Frequency domain tracking response in the lifted input/output space.  110
4.9  Response to a velocity step command of 20 ft/s.  111
List of Tables

3.1  State update pattern of dual-rate linear controllers  59
3.2  Scheduling algorithm for uniform reduction in CPU overhead.  60
3.3  Natural frequencies of the controller designed in reference [GB01].  70
Chapter 1
Introduction

1.1 What is Control?
The early machines developed by mankind were operated by humans and were fairly simple. The ability to exercise restraint or a directing influence over a machine to achieve a desired purpose is probably the first notion of control that man learnt. Soon, man was able to create machinery that was too complicated, or too tedious, to control manually. Thus, there was a need to have machines control machines, or to create machines that performed complicated tasks without human interference. This gave rise to the design of automatic control systems and made complex machines easier to operate. The invention of control systems is a major intellectual and engineering accomplishment that is still evolving and growing.
1.2 History of Control
Control is an engineering discipline and its progress has been influenced by the practical problems that needed to be solved during various phases of human history. The earliest applications of feedback control can probably be found in the attempts made by the Greeks and the Arabs to keep accurate track of time.
The industrial revolution in Europe initiated the development of self-driven machines and gave birth to the requirement for automatic control systems. This led to the invention of several feedback control devices such as temperature regulators for
furnaces, float valve regulators and pressure regulators for steam boilers, the centrifugal flyball governor for regulating the speed of the rotary steam engine, etc. The industrial revolution also led to the invention of accurate measuring devices, or sensors, that increased the reliability of feedback control systems.
The design of control systems during the industrial revolution in Europe was primarily done by trial-and-error, together with engineering intuition. Thus it was more of an art than a science. In the mid-1800s, mathematics was first used to analyse the stability of feedback control systems.
Mathematical analysis of control systems was initially carried out in terms of differential equations. The first person to use differential equations to analyse a control system and investigate its stability properties was the British astronomer G. B. Airy [Air40]. J. C. Maxwell analysed the flyball governor in Watt's steam engine, using linearised differential equations of motion, to find the characteristic equation of the system [Max68]. He showed that the system is stable if the roots of the characteristic equation have negative real parts. The work of E. J. Routh [Rou77], I. A. Vishnegradsky [Vis77] and Lyapunov [Lya07] further developed the stability theory of control systems.
The development of the telephone and mass communication systems, and the world wars, gave rise to frequency domain analysis of control systems. H. S. Black, in reference [Bla34], demonstrated the use of negative feedback. Frequency domain analysis of control systems was further developed by H. Nyquist [Nyq32] and H. W. Bode [Bod40].
During the world wars, control systems were designed for the guidance and navigation of ships and aircraft. The problem of accurately pointing guns aboard ships and airplanes was solved using the theory of servomechanisms [Haz34]. A. C. Hall, of the M.I.T. Radiation Laboratory, used frequency domain techniques to design a control system for an airborne radar [Hal66]. Most of the research in control theory during the 1940s came out of this laboratory. N. B. Nichols developed the Nichols chart [JNP47] for the design of feedback systems, and W. R. Evans developed the root locus technique [Eva48], while working in the M.I.T. Radiation Laboratory.
With the advent of the space age, the attention of the controls community moved from frequency domain design techniques to time domain design techniques. The space age saw major independent developments on several fronts in the theory of communication and control. During this phase, the controls community witnessed the development of optimal control theory, estimation theory and nonlinear control theory.
The invention of the microprocessor opened up a new area of control and led to the development of digital control theory. Some of the early contributors to the development of digital control theory are J. R. Ragazzini, G. Franklin, L. A. Zadeh [RZ52, RF58], E. I. Jury [Jur52] and B. C. Kuo [Kuo63]. Soon digital computers were used in industrial process control [AW84].
Over the past twenty years, several branches of control have emerged, including adaptive, nonlinear, geometric, hybrid, fuzzy and neural control frameworks. Control theory today provides a rich set of methodologies for the analysis and synthesis of complex control systems. Modern control techniques provide a systematic framework to design multi-input multi-output control systems with multiple objectives. They also provide an explicit framework for representing uncertainty in the model used to design the control system. The uncertainty model enables a control designer to describe the plant as a set of systems, i.e. the possible descriptions of the system as it changes over time.
For a detailed description of the history of feedback control, the reader is directed to reference [Lew92].
1.3 Change in the Role of Control
Today, the role of a control algorithm is evolving from static designs, synthesised off-line, to dynamic algorithms that adapt in real-time to changes in the controlled system and its environment. The paradigm for control system design and implementation is also shifting from a centralised, single processor framework to a decentralised, distributed processor implementation framework. Distribution and decentralisation of
services and components is driven by the falling cost of hardware, increasing computational power, increasingly complex control algorithms and the development of new, low cost micro sensors and actuators. A distributed, modular hardware architecture offers the potential benefit of being highly reconfigurable, fault tolerant and inexpensive. Modularity can also accelerate the development time of products, since groups can work in parallel on individual system components. These benefits come with a price: the need for sophisticated, reliable software to manage the distributed collection of components and tasks.
Communication within a distributed, decentralised environment becomes a significant issue. Hardware components and software processes operate in synchronous and asynchronous modes. These processes have to communicate with one another via a well defined protocol to effectively control the system. Software tools needed for a distributed, real-time control architecture include real-time execution, adaptive task scheduling, task synchronisation, communication protocols and adaptive resource management. Hence, software and its interaction with the controlled system will play a significantly larger role in the control of real-time systems.
This drive towards distributed, dynamic control systems requires the establishment of tighter ties between the controls and computer science communities for these systems to be successful. In 1999, DARPA initiated the Software Enabled Control (SEC) program, in part, to address these issues.
A central theme of the DARPA SEC program is to develop software based control technologies that use dynamic information about the controlled system to adapt, in real-time, to changes in its sub-systems and operating environment. This software should be amenable to reconfiguration, integrate and coordinate subsystem operations and enable large-scale distribution of control. The Open Controls Platform (OCP) [PMC01], being developed by Boeing St. Louis, Honeywell Laboratories and the Georgia Institute of Technology under the DARPA SEC program, provides a software infrastructure that enables control engineers to work seamlessly, in real-time and simulation-time, within a distributed control environment. The OCP software is built upon RT-CORBA and is an extension to the Bold Stroke software architecture
developed by Boeing St. Louis to support aircraft avionic system integration. The OCP is middleware consisting of a set of services that allow multiple processes, running on one or more machines, to interact across a network.
1.4 Control Systems as Real-Time Systems
Most control algorithms today are implemented in digital computers. The popularity of digital computers is due to the versatility of implementing control algorithms in software and the drop in the cost of computing. Increasingly complex control systems are being designed for commercial systems because the implementation can be entirely software based. Therefore, control systems today are essentially a composite of computational tasks. Since most control systems interact with the real world, the constituent computational elements of the control system need to execute in real-time. Therefore, control algorithms are realised as real-time computational systems during implementation.
1.4.1 Strictness of Deadlines
Computational tasks occurring in a real-time system have timing constraints, or deadlines, which need to be satisfied for the real-time system to be functionally useful. Real-time systems can be classified into three categories based on the nature of the deadlines they face. They are hard real-time systems if the consequences of not executing a task before its deadline are catastrophic. Flight control or control of a nuclear plant are examples of hard real-time systems. Real-time systems are firm if the consequences of missed deadlines are not severe. Online banking and airline reservation systems are firm real-time systems. A real-time system is categorised as a soft real-time system if the utility of the system degrades with time after the deadline expires. Real-time video streaming and telephone switching systems are examples of soft real-time systems.

1.4.2 Reliability
Real-time systems, like some flight control avionics, operate under stringent reliability requirements. They are hard real-time systems and failure to meet the deadlines of
the constituent tasks may result in catastrophic consequences. An off-line analysis is usually conducted to ensure that the deadlines of all the tasks are met. Such an analysis is made subject to certain assumptions on the workload and failure conditions.

1.4.3 Size and Degree of Coordination
The hardware realisation of traditional control applications results in small real-time systems. The associated real-time tasks are independent of each other and the analysis of such systems for reliability is fairly simple. In most cases, the entire system code can be loaded into memory or, if there are well-defined phases, each phase is loaded just prior to its beginning.
However, with the increasing role of information based systems (pg. 18, [Mur02]), the level of interaction and cooperation between sub-tasks is on the rise. In recent times we are faced with situations in which pervasive computing, sensing and communication are common. Control system engineers are facing the challenge of controlling large-scale systems and networks, which results in large, complex real-time systems and complicates the notion of reliability.

1.4.4 Environment
The environment in which a real-time system operates plays an important role in the design of the system. Many environments are well defined and deterministic. These environments give rise to small, static real-time systems in which all deadlines can be guaranteed a priori. The approach taken in relatively small, static real-time systems, however, does not scale to larger, more complicated and less controllable environments.
Owing to the benefits of modular design, many complex real-time systems are built by interconnecting sub-components. The behaviour of the overall system is determined not only by the composite behaviour of the sub-systems but also by the interconnection structure between these components.
Recently, networked control systems [Bus01, ZBP01] have gained popularity and are
an active area of research. In this framework, the sub-components of the control system communicate via a communication network. The environment in which such control systems operate is affected by network induced delays, packet loss, multiple transmissions of a packet resulting in duplication of signals, etc. Clearly, determining the reliability of a real-time control system in such an environment is not trivial.
Therefore, it is expected that future real-time control systems will be large, complex, distributed and dynamic. They will contain many types of timing constraints and precedence constraints, and will need to operate in fault-prone, highly non-deterministic environments. Such real-time systems are defined as dynamic real-time systems.

1.4.5 Fault Tolerance
Fault tolerance is defined as a system's ability to deliver the expected service even in the presence of faults. A real-time system may fail to function properly either because of errors in its hardware, software, or both, or because it fails to respond in time to meet the timing requirements demanded by the environment it interacts with.
There are different fault tolerance techniques in real-time systems, namely:

• N-Modular Redundancy (NMR) - In this approach, N identical processors concurrently execute the same task and the results produced by these processors are voted on by another processor. This is a general technique that can tolerate most hardware faults.

• N-Version Programming (NVP) - This approach is capable of tolerating both software and hardware faults. It is based on the principle of design diversity; i.e., a task is coded in multiple versions by different teams of programmers.

• Recovery Blocks - This scheme uses multiple alternates to perform the same task. The alternates are categorised as primary and secondary. The primary task executes first. Once it completes its execution, an acceptance test, or verification test, is performed on its outcome. If the result of the primary task fails the test, the secondary task executes after undoing the effects of the primary task (i.e., rolling back to the state at which the primary task was invoked), and so on. This continues until an acceptable result is obtained, all alternates are exhausted, or the deadline of the task is missed. This differs from NVP in executing the different versions of the task serially, as opposed to the parallel execution of versions in the NVP approach.

• Imprecise Computations - This approach avoids timing faults during transient overloads by gracefully degrading the quality of the result via imprecise computations [LLS+91]. The imprecise computation model provides scheduling flexibility by trading off result quality to meet task deadlines. In this approach, a task is divided into a mandatory part and an optional part. The mandatory part must be completed before the task's deadline for an acceptable quality of result. The optional part, which can be skipped, if necessary, to conserve system resources, refines the result. A task is said to have produced a precise result if it has executed both its mandatory and its optional parts before its deadline; otherwise it is said to have produced an imprecise result. From the point of view of controls, if the controller task can be decomposed into mandatory and optional tasks, the mandatory task would be the task that guarantees robust stability and the optional task would be the task that guarantees robust performance.
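The imprecise computation model described above can be sketched in code. The split into a mandatory stabilising part and an optional refining part follows the idea in the text, but the function names, the lambda "controller parts" and the simple remaining-budget check are illustrative assumptions, not an interface defined in this thesis:

```python
import time

def imprecise_controller_step(x, u_in, budget_s, mandatory_part, optional_part):
    """Always run the mandatory part; run the optional refinement only if
    enough of the time budget remains (illustrative sketch)."""
    start = time.monotonic()
    # Mandatory part: coarse output intended to guarantee stability.
    u_out = mandatory_part(x, u_in)
    elapsed = time.monotonic() - start
    # Optional part: refine the result only if the deadline allows it.
    if elapsed < 0.5 * budget_s:
        u_out = optional_part(x, u_in, u_out)
        precise = True          # both parts ran: precise result
    else:
        precise = False         # optional part skipped: imprecise result
    return u_out, precise

# Toy usage with scalar "dynamics": a mandatory gain and a small
# optional correction (both hypothetical stand-ins).
u, precise = imprecise_controller_step(
    x=1.0, u_in=0.2, budget_s=1.0,
    mandatory_part=lambda x, u: -2.0 * x,
    optional_part=lambda x, u, u0: u0 + 0.1 * u)
```

Under a transient overload the scheduler would shrink `budget_s`, causing the optional part to be skipped and a coarser, but still stabilising, output to be returned.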
The overview of the functional characteristics of a real-time system presented in this section is available in more detail in reference [MM01].
1.5 Prior Art
As mentioned earlier, the role of a control algorithm is evolving from static designs, synthesised off-line, to dynamic algorithms that adapt in real-time to changes in the controlled system and its environment. The hardware realisation of such control algorithms will result in complex, dynamic real-time systems.
These real-time systems are expected to adapt to the time varying nature of the environment they interact with. Therefore, it is expected that there will be transient overloads on the processor implementing the control algorithm. In the presence of transient overloads, it may not be possible for the processor to allocate adequate computational time to each sub-task so their deadlines can be met.
In the controls literature, control algorithms that are based on online optimisation, like receding horizon control [BGW90, Soe92, Pri99, JYH99], can be tuned to vary the computational time they require. The optimisation problem for these algorithms is essentially the minimisation of a cost functional subject to state and control constraints. The computational time for such algorithms can be varied dynamically by varying the number of decision variables, the complexity of the system model, the accuracy of the numerical algorithms and the optimality of the control action [BBKP02]. Also, since the process of minimisation is iterative in nature, it can be pre-empted to obtain the current best solution. However, determination of the minimum time that should be allotted to the optimisation algorithm to generate a stabilising solution may not be trivial. Therefore, any control algorithm that is based on online optimisation is potentially suitable for dynamic real-time systems.
However, for control algorithms that are linear and synthesised off-line, as shown in eqn. (1.1) and eqn. (1.2),

Continuous Time:
    ẋ_c = A x_c + B u_i
    u_o = C x_c + D u_i                          (1.1)

Discrete Time:
    x_c^{k+1} = Â x_c^k + B̂ u_i^k
    u_o^k = Ĉ x_c^k + D̂ u_i^k                   (1.2)
the computational time required to compute the output of a linear system is fixed. Computation of the controller output involves matrix-vector multiplications and vector-vector additions. The number of FLOPs required to perform these computations is constant and hence will require a fixed amount of CPU time. For computational time less than the required amount, the controller algorithm will not be able to compute a valid output. Therefore, for such algorithms to be suitable for dynamic real-time systems, it is necessary to implement them using the imprecise computation model. We define anytime control algorithms as control algorithms implemented using the imprecise computation model.
1.6 Contribution of the Thesis
This thesis focuses on transforming linear control algorithms, as defined in eqn. (1.1) and eqn. (1.2), into anytime control algorithms. These algorithms trade the quality of the controller output to conserve system resources. This provides flexibility to the scheduling algorithm to guarantee on-time completion of all the real-time tasks within their respective deadlines.
The thesis also presents a new model of computation, where a linear controller is transformed into a multi-rate modal system. From the point of view of real-time scheduling, this translates to the decomposition of a real-time task with a given deadline into a set of sub-tasks with reduced computational overheads and varied deadlines. It can be shown that implementing a controller as a multi-rate system significantly reduces the required computational resources. Therefore, the decomposition of a single-rate controller into a multi-rate controller also provides flexibility to the scheduling algorithm to adapt to changes in the environment.
1.7 Organisation of the Thesis
In chapter two, we use model reduction techniques to generate a set of reduced order controllers. For discrete-time linear systems, the computational time reduces monotonically with the number of states in the controller. It is obvious that the reduced order controllers must ensure closed-loop stability and that the price for reducing computational time is degraded performance. From the implementation point of view, the available time will dictate the order of controller that can be implemented. The time to compute the output of the lowest order controller that guarantees robust stability is the minimum time that must be allotted to the algorithm for the functioning of the control system. Thus the controllers are switched from higher to lower order to accommodate shortage in CPU time. We propose a switching algorithm
that smoothly switches between controllers of different order to accommodate changes in available CPU time. Note that the controllers have to be switched smoothly to enable bumpless transfer with minimal CPU overhead. Construction of the switching algorithm is one of the main challenges of this research because it must have minimal CPU overhead to be feasible.

In chapter three, we present an algorithm that transforms linear time invariant controllers into periodically time varying systems, which can be digitally implemented in a computationally efficient manner. Controllers designed in continuous time are discretised when implemented in digital computers. The sampling frequency of the discrete-time controller is typically chosen to be ten times faster than the cutoff frequency of the closed-loop system. If the natural frequencies of the controller are sparsely spaced, then the states corresponding to the slow modes are updated at a rate faster than necessary. This results in unnecessary computation. The computational overhead, or execution time, can be reduced if the states corresponding to the slow modes of the controller are updated at a slower rate. The simplest transformation would be to decompose the controller into two subsystems (or computational tasks), one containing the fast modes and the other containing the slow modes. Reduction in computational overhead can then be achieved by simply operating the two subsystems at different rates. With such a scheme, however, there will be a periodic increase in computational requirement at time instants when both subsystems need to be updated, which is not desirable. One of the salient features of the proposed algorithm is the distribution over time of the computation required to update the slowly varying states, to achieve a uniform reduction in the computational overhead.
From the point of view of real-time tasks, this algorithm decomposes the original task into two tasks, each with reduced run-time but different periodicity.
In chapter four, we transform the LTI controller into a multi-rate system and schedule the computational tasks, associated with the multi-rate sub-systems of the controller, using real-time scheduling algorithms. The scheduling algorithm and the sampling rates of the computational tasks determine the computational overhead of implementing the controller. It turns out that smaller sampling rates result in lower utilisation of computational resources. However, lower sampling rates for the controller will cause degradation in robust performance and may introduce instability. Clearly, there is a tradeoff between robust performance of the closed loop and the utilisation of computational resources. Therefore, a systematic method of decomposing an LTI controller into a multi-rate system, one that achieves reduction in the utilisation of computational resources and guarantees robust performance for the controller, is necessary. If the controller is decomposed into modal form, then the problem can be posed as a nonlinear programming problem with the sampling rates of the modal systems as the parameters of optimisation. The constraints on the parameters are the constraints of robust performance for the closed-loop system. If performance can be compromised, then the robust performance constraint can be relaxed to robust stability. The cost function for this optimisation is defined in terms of the number of states in the modal sub-systems and their sampling rates.
Chapter 2
Anytime Control Algorithms: Model Reduction Approach

2.1 Introduction
Anytime algorithms are algorithms that trade performance for computation time. They are capable of providing results at any point in their execution. The quality, accuracy or performance of the algorithm improves with increased processing time. Therefore, for anytime algorithms, the binary notion of correctness associated with traditional computational procedures is replaced by a set of quality-measured answers. In this chapter we propose a methodology that transforms linear time invariant control algorithms into anytime algorithms with the help of model reduction and bumpless switching of linear systems.
In recent times, advances in digital technology have led to the design of complex computational systems. These systems usually interact with an environment that demands more out of some algorithms and less out of others, at different times in their operational life. Therefore it is not feasible for all the algorithms in the system to perform accurate computation at all times. Anytime algorithms provide a technique for allocating computational resources to the most useful algorithms and enable optimal usage of hardware resources.
Anytime algorithms differ from conventional computational procedures in several ways [Zil96]. Specifically, anytime algorithms return a result with a measure of its quality. This may be a best guess or a group of possible answers. They also contain information about the output quality given a certain amount of time and information about the input it receives. Anytime algorithms can be interrupted and the solution available at that point of execution can be returned. They can also continue executing past the deadline they are given. This allows systems that use such algorithms to change the computational time, allocated to an anytime algorithm, during run-time. Anytime algorithms always improve the output quality as they are given more time. The improvement in the solution however is larger in the early stages of computation and diminishes over time.
Anytime algorithms first emerged in the area of artificial intelligence. Early applications of such algorithms can be found in medical diagnosis and mobile robot navigation. The term anytime algorithm was coined by Dean and Boddy in the late 1980s in the context of their work on time-dependent planning [BD94, BD89]. They used this idea to solve a path planning problem involving a robot assigned to deliver packages to a set of locations. Horvitz introduced a similar idea, called flexible computation, to solve time-critical decision problems in [Hor90]. In 1991, Jane Liu et al. [LLS+ 91] introduced a similar idea, termed imprecise computation, and applied it to real-time systems. They showed that imprecise computation techniques provide scheduling flexibility by trading off result quality to meet computational deadlines. Ever since then, the concept of imprecise computation has been applied to solve several diverse problems [YAT02, MLFL94, HC95, LKL01]. The idea of anytime algorithms is also similar to the notion of rationality in automated reasoning and search investigated by Russel et al. in [RW89, RW91], by Doyle in [Doy90] and by D'Ambrosio in [D'A89].
Real-time computational tasks that are anytime algorithms prove to be useful in the design of real-time systems. The property of anytime algorithms to trade computational time for decision quality results in optimal performance of real-time systems [DB88, Hor87, LNLK87]. Zilberstein in [Zil93] defines a real-time task to be operationally rational if it optimises the allocation of resources to its performance components so as to maximise its overall expected utility. Such tasks are able to vary processing time according to "time-pressure." This capability can be achieved if traditional algorithms, whose expected run-time is normally fixed, are replaced by more flexible computational modules, namely anytime algorithms.
Currently, control algorithms are implemented as digital control systems, which in essence are real-time systems with a dedicated application. The control algorithms are typically traditional computational procedures with fixed run-times. In order to make these tasks operationally rational, it is necessary to transform them into anytime algorithms. This serves as the motivation for the research work presented in this chapter. We will term control algorithms that behave as anytime algorithms anytime control algorithms.
In the controls literature, control algorithms that are based on online optimisation, like receding horizon control [BGW90, Soe92, Pri99, JYH99], can be tuned to vary the computational time they require. The optimisation problem for these algorithms is essentially the minimisation of a cost functional subject to state and control constraints. The computational time for such algorithms can be varied dynamically by varying the number of decision variables, the complexity of the system model, the accuracy of the numerical algorithms and the optimality of the control action [BBKP02]. Also, since the process of minimisation is iterative in nature, it can be pre-empted to obtain the current best solution. However, determination of the minimum time that should be allotted to the optimisation algorithm to generate a stabilising solution may not be trivial. Therefore, any control algorithm that is based on online optimisation can potentially qualify as an anytime control algorithm.
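The pre-emptible character of iterative minimisation can be sketched as follows. The gradient-descent routine, its cost function and all names below are hypothetical stand-ins for the receding horizon optimisation, chosen only to show how an iteration budget trades solution quality for time:

```python
def anytime_minimise(grad, x0, step, max_iters, budget_iters):
    """Hypothetical pre-emptible minimisation: plain gradient descent
    that is cut off after `budget_iters` steps and returns the current
    best iterate.  The budget plays the role of the CPU time allotted
    to an optimisation-based controller."""
    x = x0
    for _ in range(min(max_iters, budget_iters)):
        x = x - step * grad(x)     # one iteration; pre-emptible here
    return x

# Quadratic cost J(x) = (x - 3)^2 with gradient 2(x - 3); minimiser x* = 3.
grad = lambda x: 2.0 * (x - 3.0)
coarse = anytime_minimise(grad, 0.0, 0.1, 1000, 5)     # small budget
refined = anytime_minimise(grad, 0.0, 0.1, 1000, 200)  # large budget
```

A larger budget yields an iterate closer to the minimiser; interrupting early still returns a usable (if suboptimal) answer.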
However, for control algorithms that are linear and synthesised off-line, as shown in eqn. (1.1) and eqn. (1.2), the computational time required to compute the output of the linear system is fixed. Computation of the controller output involves matrix-vector multiplications and vector-vector additions. The number of FLOPs required to perform these computations is constant and hence will require a fixed amount of CPU time. The computational time can, however, be reduced by reducing the order of the controller. It is obvious that the reduced order controllers must ensure closed-loop stability and that the price for reducing computational time is degraded performance. Thus we see that it is possible to vary the computational time of linear controllers, at the cost of quality of result, by executing a reduced order controller. From the implementation point of view, the available time will dictate the order of controller that can be implemented. The time to compute the output of the lowest order controller that guarantees robust stability is the minimum time that must be allotted to the algorithm for the functioning of the control system. Thus the controllers are switched from higher to lower order to accommodate shortage in CPU time.
Note 2.1 Note that the switching has to be done smoothly to enable bumpless transfer with minimal CPU overhead.

The remainder of the chapter is organised as follows. First we present the fact that the required CPU time of linear controllers can be varied by reducing the order of the controller. Model reduction techniques are used to generate a set of controllers of decreasing order, each rendering closed-loop stability and requiring decreasing computational time. Obviously, the price for the reduction in computational time is a reduction in achievable closed-loop performance. We propose a switching algorithm that smoothly switches between controllers of different order to accommodate changes in available CPU time. Construction of the switching algorithm is the main challenge of this research, as it should have minimal CPU overhead for feasibility. Finally, we implement this algorithm on a B737-100 TSRV (Transport System Research Vehicle) linear longitudinal motion model and present the simulation results.
2.2 Complexity Analysis via FLOP Count
The computational overhead of a numerical algorithm is often measured in terms of the number of Floating Point Operations, or FLOPs. In this chapter we define a FLOP as one addition, subtraction, multiplication or division of two floating-point numbers. FLOP counts in the early days of computers gave a good estimate of the computation time of an algorithm. This, however, is not true anymore. Features like cache boundaries and locality of reference of code and data can dramatically affect the speed of computation of numerical algorithms. However, for the purpose of analysis of the switching algorithm presented in this chapter, we will assume that the FLOP count of the algorithm gives us a good estimate of its computational overhead.
The numerical algorithms in this chapter are mostly matrix-vector multiplications and vector-vector additions. Therefore, for a matrix A ∈ R^{m×n} and a vector x ∈ R^n, the FLOP count for the computation Ax is 2mn. Similarly, the computation x + y, where y ∈ R^n, has a FLOP count of n. We will use these definitions of FLOP counts to analyse the CPU overhead of the switching algorithms presented in this chapter.
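These conventions can be captured in a small helper. The breakdown into n states, m inputs and p outputs is an assumption made here for illustration; the counting rules (2mn for a matrix-vector product, n for a vector addition) are the ones defined above:

```python
def flops_controller_step(n, m, p):
    """FLOP count for one update of x+ = A x + B u, y = C x with
    n states, m inputs, p outputs, using this chapter's conventions:
    A x costs 2*n*n, B u costs 2*n*m, C x costs 2*p*n, and the
    vector-vector addition A x + B u of length n costs n FLOPs."""
    state_update = 2 * n * n + 2 * n * m + n   # A x + B u, plus the add
    output = 2 * p * n                         # C x
    return state_update + output
```

For example, a 4-state, 2-input, 1-output controller costs 2·16 + 2·8 + 4 + 2·4 = 60 FLOPs per step, and the count shrinks monotonically with the number of states, which is the basis for trading order against run-time.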
2.3 Variation of CPU Time for Linear Controllers
Digital implementation of linear state-space control equations of the form

    x_c^{k+1} = A x_c^k + B u_c^k
    y_c^k = C x_c^k                              (2.1)

involves a fixed number of scalar multiplications and additions. These computations require a fixed amount of time (T_c) to complete. It is clear that if the available CPU time (T_cpu) is less than T_c, the controller will fail to compute an output. It also cannot make use of extra CPU time, if available. Thus, for dynamic scheduling, we require control algorithms that can produce an output within a range of time, i.e. T_c ∈ [T_min, T_max]. In this chapter we use linear model reduction techniques, namely balanced truncation and residualisation, to generate a set of linear time invariant (LTI) systems with varying run-times.
2.3.1 Balanced Truncation
If the controller K defined by eqn. (2.1) is a balanced realisation [Moo81, Enn84, Glo84], then we can partition the state space as

    x_c = [w; z]                                 (2.2)
where z represents the weakly controllable and observable states. The controller dynamics in eqn. (2.1) can be written as

    [w^{k+1}; z^{k+1}] = [A_ww  A_wz; A_zw  A_zz] [w^k; z^k] + [B_w; B_z] u_c^k
    y_c^k = [C_w  C_z] [w^k; z^k]                (2.3)
Since the states z are weakly observable and controllable, they do not contribute significantly, in terms of Hankel singular values, to the controller output y_c. A reduced order model of the controller K can be obtained by ignoring these states. The computational time depends on the number of states rejected, i.e. the size of z. The dynamics of the reduced order controller can be written as

    w^{k+1} = A_ww w^k + B_w u_c^k
    y_c^k = C_w w^k                              (2.4)

2.3.2 Residualisation
In this approach, an LTI system K is decomposed into two systems K_s and K_f such that

    K(s) = K_s(s) + K_f(s)

The system K_s contains the slow modes and K_f the fast modes of K. Model reduction is achieved by assuming that the states of K_f reach their equilibrium much faster than those of K_s; hence, these faster states can be approximated by their steady-state contributions. This implies that the poles of K_f are large compared to those of K_s and are in the left half plane (i.e. stable). Therefore, the transfer function of the reduced order LTI system K_red can be written as

    K_red(s) = K_s(s) + K_f(0)

where K_f(0) is the steady-state contribution of K_f(s).
If the dynamics of the LTI system K, in discrete time, is given by

    [x_f^{k+1}; x_s^{k+1}] = [A_f  0; 0  A_s] [x_f^k; x_s^k] + [B_f; B_s] u_c^k
    y_c^k = [C_f  C_s] [x_f^k; x_s^k]

then the dynamics of the reduced order system is

    x_s^{k+1} = A_s x_s^k + B_s u_c^k
    y_c^k = C_s x_s^k + D_ss u_c^k               (2.5)

where D_ss = C_f (I − A_f)^{-1} B_f is the steady-state contribution of K_f.

2.3.3 Variation of Run-Time
If the controller K has order n, then by model reduction it is possible to generate a set of p controllers K = {K_i}, 1 ≤ i ≤ p, each with order (n − i + 1). The computational time of controller K_i ∈ K, denoted by T_c(K_i), decreases with increasing i, i.e. T_max = T_c(K_1) > . . . > T_c(K_p) = T_min. As we reduce the order of the controller, the performance of the closed-loop system may also degrade. However, there is a limit beyond which the stability of the closed-loop system is compromised. Thus the choice of p should be such that each K_i ∈ K guarantees acceptable performance and closed-loop stability.
Once the set K has been constructed and the execution time of each K_i is known, it is possible to accommodate changes in T_cpu by selecting the best controller that can be executed within the allotted time. The best controller is simply the highest order controller with T_c(K_i) ≤ T_cpu. Thus we can accommodate changes in T_cpu by switching between controllers of different orders. However, the switching has to be smooth to prevent impulse-like effects in the response of the overall system. A switching scheme with minimal CPU overhead is presented in the next section.
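The selection rule can be sketched as follows; the controller orders and execution times below are made-up numbers, not measurements:

```python
def select_controller(controllers, t_cpu):
    """Pick the highest-order controller whose execution time fits the
    allotted CPU time.  `controllers` is a list of (order, exec_time)
    pairs sorted by decreasing order, so execution times decrease down
    the list; returns None if even the lowest-order controller does
    not fit the budget."""
    for order, t_c in controllers:
        if t_c <= t_cpu:
            return order
    return None

# Illustrative set K = {K1, ..., K4} with hypothetical times in ms.
K = [(8, 4.0), (6, 2.5), (4, 1.5), (2, 0.8)]
```

With a 3.0 ms budget this rule skips the 8th-order controller and selects the 6th-order one; if the budget falls below the cheapest member, no controller in K is feasible, which is why p must be chosen so that K_p still stabilises the loop.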
2.4 Controller Switching Algorithm with Minimal CPU Overhead
Switching theory of controllers is a well-established area [KCMN94]. Unfortunately, many of these ideas cannot be applied directly to this problem, as they require simultaneous operation of controllers prior to switching or require significant CPU time. This runs counter to the motivation of the posed problem, where switching is done to accommodate changes in T_cpu. We make use of the fact that the states of the controllers K_i ∈ K are subsets of the states of the controller K to construct the proposed switching algorithm.
There are two cases of switching that might occur. We either switch from a higher to a lower order controller, or from a lower to a higher order controller. The switching algorithm is different for the two cases. We first present the switching scheme assuming K = {K1 , Kp }, i.e. we switch between the highest and the lowest order controller, then we extend it to the case where K = {K1 , K2 , ..., Kp }.
Since we consider two model reduction techniques to construct the set K, namely balanced truncation and residualisation, we will denote by K_bal the set of controllers obtained using balanced truncation and by K_resid the set of controllers obtained using residualisation.
2.4.1 Switching Algorithm for Balanced Systems

2.4.1.1 Switching from higher to lower order controller
If K_1, K_p ∈ K_bal, then by simply switching from the full order controller (K_1) in eqn. (2.3) to the reduced order controller (K_p) in eqn. (2.4), there will be an undesirable impulse effect in y_c^k if the z states of the controller dynamics are simply truncated. For smoothness of y_c^k at the time of switching, it is required that the output and dynamics of both controllers are identical at that time. This can be achieved by modifying the dynamics of the reduced order controller as

    w^{k+1} = A_ww w^k + A_wz z^k + B_w u_c^k
    z^{k+1} = z^k
    y_c^k = C_w w^k + C_z z^k                    (2.6)

Thus, in the modified reduced order controller, w and y_c evolve with z held constant, which preserves continuity of the controller output at the time of switching.

2.4.1.2 Switching from reduced to full order controller
Switching from a lower order controller to a higher order controller is more complicated. While K_p is active, the w states have evolved with z held constant. In general, if the dynamics of z is suddenly added to the system, there could be large transients in the z trajectory and consequently in the closed-loop response. To minimise, and possibly avoid, this undesirable effect, we switch to the higher order controller in two steps.
First z is decayed to zero like a second-order system, with K_p still active. This computation is feasible since we have enough computational time to implement K_1. If k_0 is the time when K_1 switched to K_p, then the controller dynamics during this phase is given by eqn. (2.7):

    w^{k+1} = A_ww w^k + A_wz z^k + B_w u_c^k
    x_1^{k+1} = x_2^k
    x_2^{k+1} = −λ² x_1^k − 2λ x_2^k
    z^k = z^{k_0} x_1^k
    y_c^k = C_w w^k + C_z z^k                    (2.7)

where x_1, x_2 ∈ R are the states of the second-order filter and are initialised as x_1 = 1, x_2 = 0. Once the states z have decayed to zero, we switch to the higher order controller K_1.
2.4.1.3 Generalisation of Algorithm
In this section the switching scheme is generalised for switching between any two controllers of different order. For the purpose of discussion, let us denote wi and zi as the w and z states of controller Ki .
In general, for a smooth switching from K_i to K_j, where 1 ≤ i < j ≤ p, the vector z_j would include the states from w_i whose dynamics are ignored in K_j, along with z_i. Thus the ignored states from w_i are stacked on top of z_i to form z_j.

If the switching occurs from K_j to K_i, where 1 ≤ i < j ≤ p, we need to decay the states in z_j whose dynamics are present in K_i, prior to switching. Once these states decay to zero, they are stacked below w_j to form w_i. The remaining states in z_j become z_i and remain constant.
2.4.1.4 Analysis of the Switching Algorithm
When switching from a higher order system to a lower order system, 1. Ideally we would want to fade the z states to zero so that eqn.(2.6) transforms into eqn.(2.4). Unfortunately, the fading out process would require computation, which is expensive.
2. The term Awz z k in eqn.(2.6) is a vector that need not be computed at every time step. Since z is held constant, Awz z k is a constant vector that needs to be computed only once. Extra computation is required only to add this constant vector. This CPU overhead is a small percentage of the total, especially when the size of z is large. The same argument holds for adding Cz z k to yck . Therefore, the addition of terms Awz z k and Cz z k to eqn.(2.6) results in smooth switching from K1 to Kp and requires minimal CPU overhead.
3. The constant z vector in eqn. (2.6) acts as a disturbance at the input and output of the controller. The worst-case effect of this piece-wise constant disturbance on the exogenous output and plant input of the closed-loop system can be determined as follows. Let us represent the closed-loop system by Fig. 2.1, where r, e, y_c, y and u are the exogenous input, exogenous output, controller output, plant output and plant input respectively.
Figure 2.1: Closed-loop System with Disturbance z

The disturbance z acts on the system as shown in Fig. 2.2. The worst-case effect of z on e and of z on u is then the ∞-norm of the corresponding transfer functions. Since ||z||_2 is bounded by ||r||_2, a more accurate bound can be obtained by scaling z by ||G_rz||_∞, where G_rz is the transfer function from r to z.
Figure 2.2: Closed-loop System with Disturbance z

4. The pure removal of the z dynamics will cause a change in the "derivative" of the controller output and may cause transients to appear in the closed-loop response. These transients could be removed with the help of a filter that smoothens the "derivative" of the controller output at switching instances. This, however, will require more computation.
When switching from a lower order system to a higher order system,

1. The time taken to decay the z states to zero depends on λ. If the z states are decayed too fast, transients will appear in the plant input and consequently in the closed-loop response. On the other hand, if z is decayed too slowly, there will be a large delay in the activation of K_1. Therefore, there is a tradeoff between the transients due to the switching and the delay in the activation of the higher order controller.
2. Switching back the dynamics of z, with initial condition z = 0, will result in transients appearing in the trajectory of z. Since z is weakly controllable, we expect these transients to be small. Moreover, since z is weakly observable, we expect the effect of these transients on y_c^k to be small. The transients in z^k and their effect on y_c^k, however, will increase from K_1 to K_p as the observability and controllability of the states z increase. These transients can be removed with a low-pass filter. The implementation of the filter will, however, add computational overhead to the algorithm.
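The kind of low-pass filtering mentioned above can be sketched minimally, assuming a simple first-order filter; the structure and the value of α are illustrative choices, not prescribed by this chapter:

```python
def lowpass_smooth(samples, alpha):
    """Hypothetical first-order low-pass y+ = y + alpha*(u - y), the
    kind of cheap filter that could suppress switching transients in
    the controller output; alpha in (0, 1] trades smoothing against
    lag, and each step adds only a few FLOPs per output channel."""
    y = samples[0]
    out = [y]
    for u in samples[1:]:
        y = y + alpha * (u - y)
        out.append(y)
    return out

# A step-like transient at the switching instant, heavily smoothed.
raw = [0.0] * 5 + [1.0] * 15
smooth = lowpass_smooth(raw, 0.3)
```

The unit jump at the switching instant is reduced to a step of α per sample, at the cost of the filter state and update, which is exactly the computational overhead the text warns about.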
2.4.2 Switching Algorithm for Residualised Systems

2.4.2.1 Switching from higher to lower order controller
If K_1, K_p ∈ K_resid then, at the time of switching, the full order controller K_1 and the reduced order controller K_p in eqn. (2.5) will, in general, have different outputs. This difference will cause an impulse-like behaviour in the closed-loop response, which is undesirable. Therefore, it is necessary to modify the output of K_p so that the outputs of both systems are identical at the time of switching. This can be achieved by adding the difference between the outputs of K_1 and K_p, at the time of switching, to the output of K_p.
Therefore, if k_0 is the time of switching, the error between the outputs of the two controllers is

    Δy_c^{k_0} = y_c^{k_0}(K_1) − y_c^{k_0}(K_p)

where y_c^{k_0}(K_i) is the output of controller K_i at time k_0. Note that the only additional computation involved is the subtraction of two vectors. The output of K_1 is already known from the previous time-step and the output of K_p is computed in the present time-step. This subtraction is also done only once, at the time of switching.
The definition of K_p modifies to

    x_s^{k+1} = A_s x_s^k + B_s u_c^k
    x_f^{k+1} = x_f^k
    y_c^k = C_s x_s^k + D_ss u_c^k + Δy_c^{k_0}  (2.8)

The difference in the controller output is a result of the difference between x_f and its steady-state value, denoted by x̄_f, at the time of switching. This can be shown as follows:

    Δy_c^{k_0} = (C_f x_f^{k_0} + C_s x_s^{k_0}) − (C_s x_s^{k_0} + D_ss u_c^{k_0})
               = C_f x_f^{k_0} − D_ss u_c^{k_0}
               = C_f x_f^{k_0} − C_f (I − A_f)^{-1} B_f u_c^{k_0}
               = C_f (x_f^{k_0} − x̄_f^{k_0})    (2.9)

Therefore, if x_f = x̄_f at the time of switching, then there will be no difference between the outputs of K_1 and K_p.
2.4.2.2 Switching from reduced to full order controller

When we switch from K_p to K_1, the outputs of both systems at the time of switching must be identical. If we denote k_1 as the time when K_p switches to K_1, then the difference in the controller output at k_1 is

    Δy_c^{k_1} = (C_s x_s^{k_1} + D_ss u_c^{k_1} + Δy_c^{k_0}) − (C_s x_s^{k_1} + C_f x_f^{k_1})
               = (D_ss u_c^{k_1} − C_f x_f^{k_1}) + Δy_c^{k_0}

The difference in the controller output can be made zero by first decaying Δy_c^{k_0} to zero, prior to switching. The difference in the controller output then becomes

    Δy_c^{k_1} = D_ss u_c^{k_1} − C_f x_f^{k_1}

This difference can be reduced to zero by initialising x_f to its steady-state value, x_f^{k_1} = (I − A_f)^{-1} B_f u_c^{k_1}, where k_1 is the time when Δy_c^{k_0} reaches zero. The dynamics of the controller while Δy_c^{k_0} is decaying to zero is given by

    x_s^{k+1} = A_s x_s^k + B_s u_c^k
    x_f^{k+1} = x_f^k
    x_1^{k+1} = x_2^k
    x_2^{k+1} = −λ² x_1^k − 2λ x_2^k
    y_c^k = C_s x_s^k + D_ss u_c^k + x_1^k Δy_c^{k_0}     (2.10)

In eqn. (2.10), the states x_1, x_2 ∈ R are the states of the second order filter with initial conditions x_1 = 1, x_2 = 0 at time k = k_0. The filter is used to smoothly decay the contribution of Δy_c^{k_0} to the output of the controller. Once this contribution reaches zero, x_f is initialised to its steady-state value, based on the current input, and the controller switches to K_1.
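The fading of Δy_c^{k_0} in eqn. (2.10) can be simulated in isolation; the offset magnitude and λ below are illustrative numbers:

```python
def switch_up_offset(delta_yc0, lam, steps):
    """Sketch of the fade-out around eqn. (2.10): the stored output
    offset Delta_yc^{k0} is scaled by the state x1 of the same
    second-order filter, so its contribution x1^k * Delta_yc^{k0} to
    y_c decays to zero, after which x_f can be re-initialised to its
    steady state and K1 activated."""
    x1, x2 = 1.0, 0.0
    contributions = []
    for _ in range(steps):
        contributions.append(x1 * delta_yc0)   # term x1^k * Delta_yc^{k0}
        x1, x2 = x2, -lam * lam * x1 - 2.0 * lam * x2
    return contributions

c = switch_up_offset(0.4, 0.5, 50)
```

The full offset enters at the first step and fades to a negligible level, illustrating the delay (set by λ) that must elapse before the switch to K_1 can complete.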
2.4.2.3 Generalisation of the Switching Algorithm
In this section we generalise the switching scheme for any two systems K_i, K_j ∈ K_resid. For the purpose of discussion, let us denote x_fi, x_si as the x_f and x_s states of controller K_i, and Δy_ci as the Δy_c of K_i; the superscript of Δy_c has been dropped for notational convenience. We will also assume that the controllers in K_resid are arranged in decreasing order of controller order; that is, if i < j, the order of controller K_i is higher than that of controller K_j. In general, K_i is defined as

    K_i :  x_si^{k+1} = A_si x_si^k + B_si u_c^k
           x_fi^{k+1} = x_fi^k
           y_ci^k = C_si x_si^k + D_ssi u_c^k + Δy_ci

Switching from K_i to K_j

When switching from a higher order controller to a lower order controller, i.e. from K_i to K_j, where 1 ≤ i < j ≤ p, some states from x_si will be residualised. Let those states be x̂_sj. Therefore,

    x_si ≡ [x_sj; x̂_sj],   x_fj ≡ [x̂_sj; x_fi]

and the difference in the output of the two controllers is

    Δy_cj = (y_ci^k − C_sj x_sj^k − D_ssj u_c^k)|_{k=k_0}

where k_0 is the time K_i switches to K_j.

Switching from K_j to K_i

When switching from a lower order controller to a higher order controller, i.e. from K_j to K_i, i < j, a subset of x_fj will start evolving and affect the output of K_i. Let those states be x̂_fi. Therefore,
    x_fj ≡ [x̂_fi; x_fi],   x_si ≡ [x_sj; x̂_fi]

The difference in the controller output, Δy_ci, can be written as

    Δy_ci = (C_sj x_sj^k + D_ssj u_c^k + Δy_cj) − (C_si x_si^k + D_ssi u_c^k)
          = C_sj x_sj^k + D_ssj u_c^k + Δy_cj − C_sj x_sj^k − Ĉ_fi x̂_fi^k − D_ssi u_c^k
          = (D_ssj − D_ssi) u_c^k − Ĉ_fi x̂_fi^k + Δy_cj
If the controller K is in modal form, i.e,
An · · · 0 0 Bn . .. .. .. .. . . . K := 0 · · · A2 0 B2 0 ··· 0 A B 1 1 Cn · · · C1 C2 0 then the A matrix of K is block diagonal and Dssj can be simplifed as, Dssj = Cfj (I − Afj )−1 Bfj #" # " i (I − Aˆ )−1 h ˆf B 0 f i i = Cˆfi Cfi 0 (I − Afi )−1 Bfi ˆf + Cf (I − Af )−1 Bf = Cˆfi (I − Aˆfi )−1 B i i i i ˆ ss + Dss = D i i
Therefore

∆y_ci = ( D_ssj − D_ssi ) u_c^k − Ĉ_fi x̂_fi^k + ∆y_cj
      = D̂_ssi u_c^k − Ĉ_fi x̂_fi^k + ∆y_cj                          (2.11)
Hence, when switching from Kj to Ki, the vector ∆y_cj first needs to be decayed to zero. Then the states x̂_fi are initialised to their steady-state values based on the current input, i.e. x̂_fi = −Â_fi^{-1} B̂_fi u, and the controller Kj switches to Ki.

2.4.2.4 Analysis of the Switching Algorithm
When switching from a higher order system to a lower order system:

1. The vector ∆y_c^{k0} in eqn. (2.8) can be treated as a piecewise-constant disturbance added at the output of Kp, as shown in Fig. 2.3, and its worst-case effect on e and u can be quantified by the ∞-norm of the corresponding transfer functions.

2. Since we are equating the controller output over two consecutive time-steps, the controller output is constant over this interval. This happens every time the controller switches from a higher order system to a lower order system.

Figure 2.3: Closed-loop System with Decaying Disturbance ∆y_c
3. When residualisation is used as a model-reduction tool to generate K1 and Kp, the dynamics of x_s and x_f are decoupled. Therefore, the sudden elimination of the x_f dynamics does not affect the dynamics of x_s. It will, however, cause a sudden change in the "derivative" of y_c^k, which will cause transients to appear in the closed-loop response. These transients can be removed with a low-pass filter, at the cost of additional computation.
When switching from a lower order system to a higher order system:

1. The activation of K1 is delayed by the time ∆y_c^{k0} takes to reach zero. It is therefore desirable that the decay rate of ∆y_c^{k0}, determined by λ in eqn. (2.10), be fast. At the same time, λ must be chosen so that the transients in e and u caused by the changing ∆y_c^{k0} are small. The transients in e and u will be small for a slowly decaying ∆y_c^{k0}, but this delays the switching time of K1. There is therefore a tradeoff between the transients in e and u and the delay in the activation of K1.
2. If K is in modal form then it can be written as

K = Γ_1 + Γ_2 + ··· + Γ_n

where Γ_i is the i-th modal system. If the modes of Γ_i are faster than the modes of Γ_j for i > j, then Kp in discrete time is defined as

K_p(z) = Σ_{i=1}^{p} Γ_i(z) + Σ_{i=p+1}^{n} Γ_i(1)

where Γ_i(1) is the steady-state contribution of Γ_i(z). Therefore x_f is the state vector of Σ_{i=p+1}^{n} Γ_i(z). The corresponding matrix (−A_f^{-1} B_f) can therefore be written as

A_f^{-1} B_f = [ A_{p+1}^{-1} B_{p+1}
                 A_{p+2}^{-1} B_{p+2}
                 ...
                 A_n^{-1} B_n ]                                      (2.12)
where A_i, B_i are the A and B matrices of the modal system Γ_i. Therefore, if A_i^{-1} B_i is computed off-line, the computational overhead in initialising x_f is only that required to multiply the matrix in eqn. (2.12) with the vector u_c. This computation is feasible since it is equivalent to the computation B_f u_c.
3. The storage of A−1 i Bi for i ∈ {1, 2, · · · , n}, increases the memory overhead of the implementation.
4. Switching back the dynamics of xf from its steady-state will cause transients to appear in the output of the controller. These transients can be removed with the help of a low-pass filter. However, implementation of the filter will add computational overhead.
5. Since D_ss u_c is equal to C_f x_f at the time the controller switches to K1, the output of the controller is identical over two consecutive time-steps. This happens every time the controller switches from a lower order system to a higher order system.
6. Depending on the dimensions of x_f, u_c and y_c, the computational overhead required to smoothly decay ∆y_c^{k0} may exceed the computational overhead of K1, which may be undesirable. If n_sf, n_ss, n_u, n_y are the dimensions of x_f, x_s, u_c and y_c respectively, then the FLOP count of K1 is given by

F_1 = 2 n_ss (n_ss + n_u) + 2 n_sf (n_sf + n_u) + 2 n_y (n_ss + n_sf)

The FLOP count for the decay process is

F_2 = 2 n_ss (n_ss + n_u) + 2 n_y (n_ss + n_u + 1) + 4

Therefore, if the computation of the decay process is to be feasible,

F_1 − F_2 ≥ 0  ⇒  n_sf^2 + n_sf (n_u + n_y) − (n_y n_u + n_y + 2) ≥ 0

Since n_sf ∈ Z*, the minimum value of n_sf is given by

min n_sf = ⌈ (1/2) ( √( (n_y + n_u)^2 + 4 (n_y n_u + n_y + 2) ) − (n_u + n_y) ) ⌉

2.5 Proof of Stability
The effect of the switching algorithm on the stability of the closed loop system is analysed in this section. In the following analysis we assume that the closed-loop system as well as the controller are stable systems.
For the purpose of proving stability of the switching logic, we assume that the closed-loop system starts with the highest order controller, switches to the lowest order controller, and switches back to the highest order controller. This sequence may occur repeatedly. Since we do not expect Tcpu to fluctuate rapidly, it is reasonable to assume that there are finitely many switches in finite time.
2.5.1 Balanced Realisation Reduction
Let the plant dynamics be given by

x_p^{k+1} = A_p x_p^k + B_p u_p^k
y_p^k = C_p x_p^k                                                    (2.13)

where x_p, u_p and y_p denote the plant states, input and output. If the inter-connection of the plant and the controller in eqn. (2.3) is such that y_p = u_c and y_c = u_p, then the dynamics of the closed loop with the full order controller K are

[ x_p ]^{k+1}   [ A_p     B_p C_w  B_p C_z ] [ x_p ]^k
[ w   ]       = [ B_w C_p A_ww     A_wz    ] [ w   ]                 (2.14)
[ z   ]         [ B_z C_p A_zw     A_zz    ] [ z   ]
If we denote v^T = { x_p  w }, the closed-loop dynamics with the full order controller can be written as

f_0:  [ v ]^{k+1}   [ A_11  A_12 ] [ v ]^k
      [ z ]       = [ A_21  A_22 ] [ z ]                             (2.15)
where A_ij is defined from the partition of the A matrix in eqn. (2.14). The closed loop with the controller from eqn. (2.6) and eqn. (2.7) is

f_1:  [ v ]^{k+1}   [ A_11  A_12 ] [ v ]^k
      [ z ]       = [ 0     I    ] [ z ]                             (2.16)
For the purpose of proving stability, let us assume that z decays as z^{k+1} = A_zz z^k. Therefore, the closed-loop system while z is decaying is given by

f_2:  [ v ]^{k+1}   [ A_11  A_12 ] [ v ]^k
      [ z ]       = [ 0     A_22 ] [ z ]                             (2.17)
Thus we have three stable systems f_0, f_1, f_2 and a switching sequence that switches between them. We use the multiple Lyapunov function approach [Bra98] to prove stability of this switched linear system.
The switching sequence S for this switched linear system is S = x0 , (f0 , T0 ), (f1 , T1 ), (f2 , T2 ), (f0 , T3 ), . . .
(2.18)
which means that this hybrid system starts at time T_0, with initial condition x_0 = {v_0 z_0}^T and dynamics given by f_0. At time T_1, the system switches to f_1, and so on. Thus the system f_i is active in the time intervals I(i) given by
I(i) ∈ {[Ti , Ti+1 ) , [Ti+3 , Ti+4 ) , . . . , [Ti+3j , Ti+3j+1 ) , . . .}
(2.19)
where j ∈ Z∗ , Z∗ is the set of non-negative integers. Define E(i) as the set of times when system fi is switched to, i.e. E(i) = {Ti , Ti+3 , . . . , Ti+3j , . . . }, j ∈ Z∗
(2.20)
Definition (2.2 in [Bra98]): Given a strictly increasing sequence of times T = {t_k}, k ∈ Z*, we say that V_i(x^k) is a Lyapunov-like function for system f_i and trajectory x^k = {v^k z^k}^T over T if:

1. V_i is a positive definite, continuous function about the origin (zero).
2. V_i(x^{k+1}) ≤ V_i(x^k), for all k ∈ I(i).
3. V_i is monotonically non-increasing on E(i).

Denoting x_S^k as the state trajectory of the closed-loop system under switching sequence S, Branicky in [Bra98] shows that if, for S, V_i is Lyapunov-like for f_i and x_S^k over I(i) for all i, then the system is stable in the sense of Lyapunov. We now show that our switched linear system satisfies this.
Since f_0, f_1, f_2 are stable systems, there exist positive definite matrices P_0, P_1, P_2 such that

V_i(x) = x^T P_i x;  i = 0, 1, 2                                     (2.21)

are candidate Lyapunov functions for f_0, f_1, f_2, with

V_i(x^{k+1}) < V_i(x^k);  i = 0, 1, 2                                (2.22)
To prove that V_i is monotonically non-increasing on E(i), let x_r denote the system states at the switching times T_r. Since f_0, f_1 and f_2 are exponentially stable autonomous systems and the switching does not cause any impulsive jumps in the state trajectories, we can claim that

||x_r||_2 ≥ ||x_{r+1}||_2;  r ∈ Z*
⇒ ||x_r||_2 ≥ ||x_{r+3j}||_2;  r ∈ Z*, j ∈ Z+                        (2.23)

where Z+ denotes the positive integers. Therefore V_i in eqn. (2.21) is non-increasing on E(i). This completes the proof of the stability of the switched linear system in context.
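The argument above can be illustrated numerically. The following Python sketch uses scalar v and z; the matrix entries are hypothetical, chosen only so that f_0, f_1, f_2 have the structure of eqns (2.15)-(2.17), and the state norm at the switching instants is checked to be non-increasing:

```python
import math

# Closed-loop maps of eqns (2.15)-(2.17) for scalar v and z (illustrative values).
A11, A12, A21, A22, Azz = 0.5, 0.1, 0.1, 0.8, 0.6
f0 = [[A11, A12], [A21, A22]]   # full order controller active
f1 = [[A11, A12], [0.0, 1.0]]   # z frozen (lower order controller active)
f2 = [[A11, A12], [0.0, Azz]]   # z decaying before switching back

def step(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x = [1.0, 1.0]
norms = []
for A in (f0, f1, f2, f0):      # switching sequence S of eqn (2.18)
    norms.append(math.hypot(x[0], x[1]))
    for _ in range(30):         # dwell time between switches
        x = step(A, x)

# Multiple-Lyapunov condition: the norm at switching instants is non-increasing.
assert all(a >= b for a, b in zip(norms, norms[1:]))
```

This is only a numerical check for one particular stable choice of matrices, not a substitute for the proof.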
2.5.2 State Reduction via Residualisation
When K_1, K_p ∈ Kresid, the closed-loop system switches between three systems f_0, f_1 and f_2, constructed with the controllers defined in equations (2.5), (2.8) and (2.10). If the plant-actuator model in Fig. 2.1 is represented as

[ x_p^{k+1} ]   [ A_p  B_1   B_2  ] [ x_p^k ]
[ e^k       ] = [ C_1  D_11  D_12 ] [ r^k   ]
[ y^k       ]   [ C_2  D_21  D_22 ] [ y_c^k ]

then the output of the plant, in the absence of the external input r^k and assuming D_22 = 0, can be written as y^k = C_2 x_p^k. For the purpose of proving stability, let us assume that ∆y_c^k in eqn. (2.10) decays as ∆y_c^{k+1} = A_d ∆y_c^k and treat it as another state of the closed-loop system, where A_d is a matrix with eigenvalues inside the unit circle. Therefore, the closed-loop systems f_0, f_1 and f_2, with ∆y_c as an adjoined state, can be written as
f_0:  [ x_p  ]^{k+1}   [ A_p      B_2 C_s  B_2 C_f  0 ] [ x_p  ]^k
      [ x_s  ]       = [ B_s C_2  A_s      0        0 ] [ x_s  ]
      [ x_f  ]         [ B_f C_2  0        A_f      0 ] [ x_f  ]     (2.24)
      [ ∆y_c ]         [ 0        0        0        0 ] [ ∆y_c ]
f_1:  [ x_p  ]^{k+1}   [ A_p      B_2 C_s  B_2 C_f  B_2 ] [ x_p  ]^k
      [ x_s  ]       = [ B_s C_2  A_s      0        0   ] [ x_s  ]
      [ x_f  ]         [ 0        0        I        0   ] [ x_f  ]   (2.25)
      [ ∆y_c ]         [ 0        0        0        I   ] [ ∆y_c ]

f_2:  [ x_p  ]^{k+1}   [ A_p      B_2 C_s  B_2 C_f  B_2 ] [ x_p  ]^k
      [ x_s  ]       = [ B_s C_2  A_s      0        0   ] [ x_s  ]
      [ x_f  ]         [ 0        0        I        0   ] [ x_f  ]   (2.26)
      [ ∆y_c ]         [ 0        0        0        A_d ] [ ∆y_c ]
As in section 2.5.1, let us assume that the switching sequence S for this switched linear system is S = {x0 , (f0 , T0 ), (f1 , T1 ), (f2 , T2 ), (f0 , T3 ), . . . }. The systems f0 , f1 and f2 are stable systems, with discontinuous jumps in xf and ∆yc at times {T0 , T3 , · · · } and {T1 , T4 , · · · } respectively.
We will prove the stability of the switched linear system, with discontinuous jumps in xf and ∆yc , using multiple Lyapunov functions. Let us define Vi to be the Lyapunov function for closed-loop system fi . Therefore, as in section 2.5.1, we have to prove Vi is non-increasing in E(i). In the proof we represent time by k. Therefore, the set E(i) in eqn.(2.20), is defined as E(i) = {ki , ki+3 , . . . , ki+3j , . . . }, j ∈ Z∗
(2.27)
Since the three systems f_0, f_1 and f_2 are stable and there are no discontinuous jumps in x_p and x_s, the following is true:

|| { x_p ; x_s } ||_2^{k+1} ≤ || { x_p ; x_s } ||_2^k,  for all k    (2.28)
If x_f is initialised at time k_3j as

x_f^{k_3j} = (I − A_f)^{-1} B_f C_2 x_p^{k_3j}

then

|| x_f^{k_3j} ||_2 ≤ || (I − A_f)^{-1} B_f C_2 ||_∞ || x_p^{k_3j} ||_2 = γ_1 || x_p^{k_3j} ||_2

Since f_0 is stable,

γ_1 || x_p^{k_3j} ||_2 ≥ || x_f^k ||_2,  k_3j ≤ k < k_3j+1

Therefore, the Lyapunov function for f_0 can be defined as

V_0(x_p^k, x_s^k) = || x_p^k ||_2^2 + || x_s^k ||_2^2 + γ_1 || x_p^{k_3j} ||_2,  k_3j ≤ k < k_3j+1, j ∈ Z+        (2.29)
Therefore, from eqn.(2.28), we can conclude that V0 (xkp , xks ) is non-increasing in E(0).
To show that V_1 is non-increasing on E(1), consider the following. Since x_f^k is constant for k ∈ [k_3j+1, k_3j+2), f_0 is stable, and there are no discontinuous jumps in x_f when switching from f_0 to f_1, the following is true:

γ_1 || x_p^{k_3j} ||_2 ≥ || x_f^k ||_2,  k_3j+1 ≤ k < k_3j+2

The discontinuous jump in ∆y_c^k is given by

∆y_c^k = C_f x_f^k − C_f (I − A_f)^{-1} B_f C_2 x_p^k
⇒ || ∆y_c^k ||_2 ≤ || C_f ||_∞ || x_f^k ||_2 + || C_f (I − A_f)^{-1} B_f C_2 ||_∞ || x_p^k ||_2
                 ≤ γ_1 || C_f ||_∞ || x_p^{k_3j} ||_2 + || C_f (I − A_f)^{-1} B_f C_2 ||_∞ || x_p^k ||_2
                 = γ_2 || x_p^{k_3j} ||_2 + γ_3 || x_p^k ||_2

Therefore, the Lyapunov function for f_1 can be defined as

V_1(x_p^k, x_s^k) = (1 + γ_3) || x_p^k ||_2^2 + || x_s^k ||_2^2 + γ_2 || x_p^{k_3j} ||_2,  k_3j+1 ≤ k < k_3j+2, j ∈ Z+

and from eqn. (2.28) we can conclude that V_1(x_p^k, x_s^k) is non-increasing on E(1).
When f_2 is active, x_f^k is constant and ∆y_c^k decays to zero. The bounds on x_f^k and ∆y_c^k for f_1 are also valid for f_2. Therefore, the Lyapunov function defined for f_1 can also serve as the Lyapunov function for f_2, and it can be shown similarly that it is non-increasing on E(2).
This completes the stability proof of the switched linear system in context.
2.6 Example
The switching scheme is next applied to a B737-100 TSRV (Transport System Research Vehicle) linear longitudinal motion model. The aircraft model has four states: longitudinal velocity V (ft/s), angle-of-attack α (rad), pitch rate q (rad/s) and pitch angle θ (rad); two control inputs: thrust T (lb) and elevator deflection δe (deg). The elevator actuator and the engine are modelled as 16/(s + 16) and 20/(s2 + 12s + 20) respectively. The control objective is to achieve decoupled tracking response of V and γ reference signals. The controller was designed using H∞ theory and has 18 states. Details of the controller design can be obtained from [GB01].
2.6.1 Balanced Realisation Reduction

Here we study the effect of the switching algorithm on the B737 model with controllers K1 and Kp obtained using balanced truncation. The lowest order controller that guarantees stability of the closed loop with acceptable performance has 13 states. Therefore, for this example we will be switching between an 18-state controller (K1) and a 13-state controller (Kp).
In Fig. 2.4 (page 43) we present the frequency responses of the four transfer functions

[ γ ]   [ G_γγref  G_γVref ] [ γ_ref ]
[ V ] = [ G_Vγref  G_VVref ] [ V_ref ]                               (2.30)
Let us represent the closed-loop system in eqn. (2.30), with controller Ki, as Pi. From the frequency-magnitude plots in Fig. 2.4, there is little difference in the performance of P1 and Pp. The two controllers achieve the desired decoupling between the γ and V responses to reference inputs. The induced norms of P1 and Pp are 1.12 and 1.27 respectively. Therefore, there is no significant degradation in performance when Kp is used as the controller.
In the time domain, the closed-loop response to a step command of 20 ft/s in V with the full order controller is shown in Fig. 2.5 (page 44). Note that there is no γ response and the control actions are smooth. The full order controller achieves the desired decoupled response to V and γ reference signals. The closed-loop response to the same V command with the lower order controller (without the constant z states) is also shown in Fig. 2.5. We observe degradation in performance, as the velocity command induces a γ response with a peak amplitude of approximately 0.7 degrees.
In terms of computational time, the 18-state controller takes 1033 MATLAB flops and the 13-state controller takes 648 flops. The overhead of adding constant vectors in Kp is 13 flops (2%). Therefore, by implementing the lower order controller we can accommodate an approximately 37% drop in Tcpu in this example, and the CPU overhead for smooth switching is minimal.
The closed-loop response with the switching algorithm incorporated is shown in Fig. 2.6 (page 45). In the plots, the simulation starts with the full order controller. At 30 sec, Tcpu falls to 67% and the controller switches to Kp. The lower order controller tracks the reference signal until 70 sec; note the coupled response in γ and V tracking during this time. At 70 sec, Tcpu returns to 100% and the z states begin to decay. Once the z states have decayed to zero, the full order controller is switched back in; this happens at 112.23 sec. Thus the dynamics of the controller change at t = 30, 70 and 112.23 seconds. We observe from the trajectories that there are no impulsive transients at these times due to switching. The oscillations in γ(t) in Fig. 2.6 are due to the lower order controller operating at that time. The dotted vertical lines in the state trajectories denote the times when Tcpu changes.
In the simulation for this example, we chose λ = 0.2. With this value of λ, it takes 42.23 seconds for the z states to decay from their value at t = 30 s to zero. Depending on the application, this may be too long to recover the higher order controller, and λ has to be chosen accordingly. Since we are dealing with a transport aircraft model, this blending time may be acceptable. We see that there are small transients in [γ V δe T], occurring between 70 and 112.23 seconds, because of the decay of z. A faster decay of z would cause transients of greater magnitude in the closed-loop response. Also note that the sudden elimination of the z dynamics at 30 s results in small transients in the elevator angle at 30 s. The addition of the z dynamics at 112.23 s does not induce any transients in the closed-loop response.
Thus, the simulation plots in Fig. 2.6 (page 45) show that the proposed transformation of the designed controller K into a switched linear system enables the control algorithm to execute in an environment of dynamically scheduled CPU time.
2.6.2 State Reduction via Residualisation

The effect of the switching algorithm on the B737 model, with controllers K1 and Kp obtained using residualisation, is presented in this section. The lowest order controller obtained using residualisation that guarantees stability of the closed-loop system has 10 states. Therefore, in this case we will be switching between an 18-state controller (K1) and a 10-state controller (Kp). Note that Kp ∈ Kresid has fewer states than Kp ∈ Kbal.
Let us represent the closed-loop system in eqn. (2.30), with controller Ki, as Pi. The frequency responses of the transfer functions defined by eqn. (2.30), with K1, Kp ∈ Kresid, are shown in Fig. 2.4 (page 43). From these plots we see that Kp ∈ Kresid causes differences from P1 at high frequency. The induced norm of the closed-loop system Pp is 4.29, which is quite large compared to 1.12, the induced norm of P1.
The closed-loop response to a step command of 20 ft/s in V, with Kp ∈ Kresid, is shown in Fig. 2.5 (page 44). We observe degradation in performance, as the velocity step command induces a significantly larger γ response and causes oscillatory control action in both thrust and elevator, compared with K1.
With the switching algorithm implemented, the time response to the same velocity step command is shown in Fig. 2.6 (page 45). As before, the controller starts with K1 and switches to Kp at 30 s. The computational resources return to 100% at 70 s and ∆y_c starts to decay. It reaches zero at 82.4 s and the controller switches back to K1 with appropriately initialised x_f. The savings in computational overhead for this example are approximately 59%.
We see that there are noticeable transients, especially in the elevator angle, at the times when the controller dynamics change. The transient at 30 s is due to the sudden disappearance of the x_f dynamics, the transient at 70 s is induced by the decay of ∆y_c, and the transient at 82.4 s is due to x_f beginning to evolve from steady state under the action of u_c.
The value λ = 0.4 was chosen to decay ∆y_c in this implementation. A larger value of λ would further reduce the transients at 70 s, although K1 would then take effect later. Note that minimising this transient does not require any additional computation. However, reduction of the transients at 30 s and 82.4 s would require appropriately designed filters, which add computational overhead.
Comparing the two switching algorithms, based on the two model-reduction techniques, we observe the following:

1. Kp ∈ Kresid has 10 states and Kp ∈ Kbal has 13 states. Therefore, for this example, the switching algorithm based on residualisation can accommodate a larger reduction in CPU time.
2. Both techniques induce transients whenever state dynamics are discarded from or added to the controller. The transients due to the z states are small compared to those induced by the x_f states.
3. The transients due to the decay of the z states or ∆y_c are governed by the choice of λ. A larger value of λ reduces the transients due to the decaying process, but it delays the activation of the higher order system. Therefore, in both switching schemes there is a tradeoff between the magnitude of the transients and the delay in activating the higher order control algorithm.
2.7 Comparison with Anytime Algorithms

In this chapter we have presented a method to transform linear controllers into algorithms that can accommodate changes in the CPU time allotted to compute the controller output. The behaviour of the transformed controller is similar to that of anytime algorithms, where accuracy is traded off against computation time. The differences and similarities between the transformed controller algorithm and anytime algorithms can be determined as follows. An anytime algorithm has, in general, the following properties:

1. Anytime algorithms return a result with a measure of quality. The measure of quality for an anytime control algorithm could be defined as the infinity norm of the transfer function from disturbance to error. If the performance of the controller degrades from higher to lower order, a larger value of this metric indicates a poorer quality of result.
2. Anytime algorithms can be pre-empted at any time. Anytime control algorithms, as defined in this chapter, cannot be pre-empted once they start. The available CPU time must be known prior to execution of the algorithm. Once the available time is known, the complexity of the algorithm is chosen so that its run-time is less than the allotted CPU time. To obtain a valid result, the algorithm must execute to completion.
3. Increasing CPU time at run-time increases the quality of the result. The anytime control algorithm, as defined, cannot make use of extra CPU time that becomes available during run-time. This feature would require the algorithm to switch to the higher order controller while it is executing a lower order controller. Such a transition may be possible, but has not been considered in this research.
4. Anytime algorithms always improve output quality as they are given more time. There is an upper limit on the performance of the anytime control algorithm, equal to that of the highest order controller. This property of anytime algorithms is due to the iterative nature of their computation. Since we achieve anytime behaviour of control algorithms by switching between controllers of different order, this feature is not as pronounced as it is in iterative algorithms.

Therefore, we see that the anytime control algorithm proposed in this chapter exhibits the most salient feature of anytime algorithms, namely the tradeoff between quality of output and computation time. It has limitations, however, with respect to some of the other anytime properties.
Figure 2.4: Frequency Response of (γref, Vref) → (γ, V) for full order controller (solid), Kp ∈ Kbal (dash-dot) and Kp ∈ Kresid (dash-dash).

Figure 2.5: Closed-loop time response with full order controller (thick solid), Kp ∈ Kbal (thin solid) and Kp ∈ Kresid (dashed).

Figure 2.6: Closed-loop time response with switching of controllers, full order controller (thick solid), Kp ∈ Kbal (thin solid) and Kp ∈ Kresid (dashed).
Chapter 3

A Computational Model for Reduction of Execution Time of Linear Control Algorithms

3.1 Introduction
Most control algorithms today are implemented on digital computers. This popularity is due to the versatility of implementing control algorithms in software and the falling cost of computing. Increasingly complex control systems are now being designed because implementation can be entirely software-based. Control systems today are therefore essentially composites of computational tasks. Since most control systems interact with the real world, the constituent computational elements of the control system need to execute in real-time. Control algorithms are thus realised as real-time computational systems during implementation.
For economic reasons, the hardware used to realise a control system usually executes several computational tasks, each with its own deadline. Several scheduling algorithms that guarantee on-time completion of these computational tasks have been proposed and studied over the past three decades. In 1973, Liu and Layland [LL73] published a seminal paper addressing scheduling algorithms for multiprogramming in a hard real-time system. Since then, a vast amount of work has been done by both the operations research and computer science communities. Ramamritham and Stankovic [RS94a] summarise the current state of real-time scheduling algorithms.
The execution time, or run-time, of the computational tasks that constitute a real-time control system plays a vital role in any real-time scheduling algorithm. Depending on the task set and the tasks' run-times, a scheduling algorithm may or may not be feasible; a scheduling algorithm is said to be feasible for a task set if it guarantees on-time completion of all the tasks. Therefore, if the run-times of the computational tasks can be reduced, more tasks can be feasibly scheduled on hardware with a given clock speed or, dually, a given task set can be feasibly realised on hardware with a slower clock. In addition, reduction in the run-times of the scheduled tasks increases the guaranteeability of tasks with timing constraints. The motivation for the research presented in this chapter stems from these facts.
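The feasibility point can be made concrete with the classical Liu-Layland test cited above: n periodic tasks are guaranteed schedulable under rate-monotonic priorities if their total utilisation does not exceed n(2^{1/n} − 1). A small Python check (the task set is hypothetical):

```python
# Sufficient rate-monotonic schedulability test of Liu & Layland [LL73]:
# a task set is schedulable if total utilisation U <= n * (2**(1/n) - 1).
def rm_bound(n):
    return n * (2 ** (1.0 / n) - 1)

# (period, worst-case run-time) pairs -- a hypothetical task set
tasks = [(10.0, 2.0), (20.0, 4.0), (50.0, 5.0)]
U = sum(c / p for p, c in tasks)        # total utilisation = 0.5

assert U <= rm_bound(len(tasks))        # 0.5 <= 3*(2**(1/3)-1), about 0.7798
# Reducing any run-time c lowers U, leaving room for additional tasks.
```

This illustrates why shrinking task run-times, as pursued in this chapter, directly enlarges the set of feasibly schedulable tasks.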
In this chapter we present an algorithm that transforms linear time-invariant controllers into periodically time-varying systems that can be digitally implemented in a computationally efficient manner. Controllers designed in continuous time are discretised when implemented on digital computers. The sampling frequency of the discrete-time controller is typically chosen to be ten times the cutoff frequency of the closed-loop system. If the natural frequencies of the controller are sparsely spaced, then the states corresponding to the slow modes are updated at a rate higher than necessary. This results in unnecessary computation.
The computational overhead, or execution time, can be reduced if the states corresponding to the slow modes of the controller are updated at a slower rate. The simplest transformation would be to decompose the controller into two subsystems (or computational tasks), one containing the fast modes and the other the slow modes. Reduction in computational overhead can then be achieved by simply operating the two subsystems at different rates. With such a scheme, however, there will be a periodic increase in computational demand at time instants when both subsystems need to be updated, which is not desirable. One of the salient features of the proposed algorithm is the distribution over time of the computation required to update the slowly varying states, achieving a uniform reduction in computational overhead. From the point of view of real-time tasks, the algorithm decomposes the original task into two tasks, each with reduced run-time but different periodicity.
The chapter is organised as follows. First, some fundamental techniques used in multi-rate systems and digital signal processing are presented. This is followed by details of the transformation of a linear time-invariant (LTI) controller into a multi-rate system and the scheduling policy used to update the states to reduce the computational overhead. A framework to analyse the effect of such a transformation on closed-loop performance and stability is then developed; the theoretical framework is built on multi-rate filter bank theory and the lifting technique used in the analysis of multi-rate control systems. Finally, the approach is applied to a B737-100 TSRV (Transport System Research Vehicle) linear longitudinal motion model and simulation results are presented.
3.2 Fundamentals of Digital Multi-Rate Systems

This section presents some fundamentals used in analysing digital multi-rate systems. The content of this section is based on references [Vai93, CF95].

3.2.1 Sampler S
Let us denote by S the sampler that transforms a continuous-time signal into a discrete-time signal. The block shown in Fig. 3.1 takes the continuous-time signal v(t) and transforms it into the discrete-time signal w(k).

Figure 3.1: A sampler.
Denoting by V(jω) and W(e^{−jωh}) the Fourier transforms of v(t) and w(k), the following relationship holds (pg. 50 of [CF95]):

W(e^{−jωh}) = (1/h) V_e(jω)                                          (3.1)

where h is the sampling time and

V_e(jω) := Σ_{k=−∞}^{∞} V(jω + jkω_s);  ω_s = 2π/h

If v(t) is a band-limited signal, i.e. V(jω) = 0 for |ω| > ω_s/2, then

W(e^{−jωh}) = (1/h) V(jω)  for |ω| < ω_s/2                           (3.2)

3.2.2 Zero-Order Hold H
A zero-order hold is a device H that transforms a discrete-time signal into a continuous-time signal.

Figure 3.2: A zero-order hold.

The transformation is achieved by a simple causal reconstruction:

v(t) = w(k);  t_k ≤ t < t_{k+1}

This means that the reconstructed signal is piecewise constant, continuous from the right, and equal to the sampled signal at the sampling times. The reconstructed value is thus held constant until the next sampling instant. The system H has the impulse response

H(t) = (1/h) 1(t) − (1/h) 1(t − h)

The transfer function is therefore (pg. 52 in [CF95])

H(jω) = (1 − e^{−jωh}) / (jωh)                                       (3.3)

The Fourier transforms of v(t) and w(k) in Fig. 3.2 are related by

V(jω) = h H(jω) W(e^{−jωh})                                          (3.4)

3.2.3 Lifting Operator
Lifting is a commonly used technique in the analysis of multi-rate systems [PPKT83, AY86]. An inter-connection of multi-rate systems can be analysed, using tools developed for single-rate systems, by lifting the faster systems to match the rate of the slowest system. The lifting operator is defined both in continuous time (Ch. 10 in [CF95]) and in discrete time (Ch. 8 in [CF95]). Discrete-time lifting techniques are used in this chapter; the basics are described in the following sections.

3.2.3.1 Lifting of Discrete-Time Signals
Suppose v(k) = {v(0), v(1), ...} is a discrete-time signal. If we rewrite the signal as

{ [v(0); v(1); ...; v(M−1)],  [v(M); v(M+1); ...; v(2M−1)],  ... } ≡ { v̄(0), v̄(1), ... }      (3.5)

then the mapping v(k) ↦ v̄(k) is denoted by v̄(k) = L_M v(k). The operator L_M is called the lifting operator and the signal v̄(k) is called the lifted signal. The subscript in L_M denotes the factor by which the dimension of the signal is increased, or lifted. The inverse operation, i.e. the mapping v̄(k) ↦ v(k), is the reconstruction of v(k) from v̄(k); this is denoted by v(k) = L_M^{-1} v̄(k). In general we will drop the subscript from L; the factor by which the dimension of the signal is lifted will be clear from context.
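Concretely, L_M is just a blocking (reshape) of the sequence into vectors of M consecutive samples. A minimal pure-Python sketch (the names `lift` and `unlift` are ours, not from the text):

```python
import math

def lift(v, M):
    """L_M of eqn (3.5): block a sequence into vectors of M consecutive samples."""
    assert len(v) % M == 0
    return [v[i:i + M] for i in range(0, len(v), M)]

def unlift(vb):
    """L_M^{-1}: flatten the lifted sequence back to the original one."""
    return [x for block in vb for x in block]

def l2_norm(seq):
    return math.sqrt(sum(x * x for x in seq))

v = [1.0, -2.0, 0.5, 3.0, 0.0, -1.0]
vb = lift(v, 3)                # [[1.0, -2.0, 0.5], [3.0, 0.0, -1.0]]
assert unlift(vb) == v         # L^{-1} L = identity
assert abs(l2_norm(v) - l2_norm(unlift(vb))) < 1e-12   # ||Lv||_2 = ||v||_2
```

The last assertion checks the norm-preservation property discussed next.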
The operator L, as a system, is non-causal and time-varying; however, it is norm-preserving (pg. 204 in [CF95]). Therefore

||L v||_2 = ||v||_2

The inverse operator L^{-1} is causal but time-varying.

3.2.3.2 Lifting Discrete-Time Systems
Consider the system shown in Fig. 3.3, where Ĝ is a discrete-time, finite-dimensional LTI system with underlying period h/M.

Figure 3.3: Lifting of a discrete-time LTI system.

Lifting the input and output signals so that the lifted signals correspond to the base period h results in the lifted system Ḡ, defined as Ḡ ≡ L_M Ĝ L_M^{-1}. If

Ĝ = [ Â  B̂ ; Ĉ  D̂ ]

then the lifted system can be written as
Ḡ = [ Â^M         | Â^{M−1} B̂    Â^{M−2} B̂    ···  B̂
      Ĉ           | D̂            0             ···  0
      Ĉ Â         | Ĉ B̂         D̂             ···  0
      ...         | ...          ...                ...
      Ĉ Â^{M−1}   | Ĉ Â^{M−2} B̂  Ĉ Â^{M−3} B̂  ···  D̂ ]            (3.6)

If Â is stable then Â^M is also stable. Since lifting preserves signal norms, the norms of the two transfer functions Ĝ and Ḡ satisfy (pg. 206 in [CF95]):

||Ĝ||_2^2 = ||Ḡ||_2^2 / M                                           (3.7)
||Ḡ||_∞ = ||Ĝ||_∞                                                   (3.8)
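The block structure of eqn. (3.6) can be checked by simulation. In the Python sketch below, for a scalar (one-state, SISO) Ĝ with M = 3 (all numerical values hypothetical), stepping Ĝ sample by sample and stepping the lifted system block by block produce the same output sequence:

```python
import random

# Scalar (one-state, SISO) system Ghat with fast period h/M:
#   x(k+1) = a*x(k) + b*u(k),  y(k) = c*x(k) + d*u(k)
a, b, c, d = 0.5, 1.0, 2.0, 0.3
M = 3  # lifting factor

def simulate(u):
    """Run Ghat sample by sample over the fast-rate input u."""
    x, y = 0.0, []
    for uk in u:
        y.append(c * x + d * uk)
        x = a * x + b * uk
    return y

def simulate_lifted(u):
    """Run the lifted system of eqn (3.6): one state update per block of M inputs."""
    x, y = 0.0, []
    for n in range(0, len(u), M):
        blk = u[n:n + M]
        for r in range(M):          # r-th row of the lifted C/D blocks
            yr = c * a ** r * x     # C-block entry: C A^r
            for s in range(r + 1):  # D-block is block lower triangular
                yr += (d if s == r else c * a ** (r - s - 1) * b) * blk[s]
            y.append(yr)
        # A-block: A^M; B-block: [A^{M-1}B, A^{M-2}B, ..., B]
        x = a ** M * x + sum(a ** (M - 1 - s) * b * blk[s] for s in range(M))
    return y

random.seed(0)
u = [random.uniform(-1.0, 1.0) for _ in range(4 * M)]
assert all(abs(p - q) < 1e-12 for p, q in zip(simulate(u), simulate_lifted(u)))
```

The scalar case keeps the arithmetic transparent; for matrix-valued Â, B̂, Ĉ, D̂ the same block pattern applies with matrix products.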
3.2.4 M-Fold Decimator
An $M$-fold decimator, represented by $\downarrow M$ in Fig. 3.4, is a device that takes an input sequence $v(k)$ and returns
\[
v_D(k) = v(kM) \tag{3.9}
\]
where $M$ is an integer. Only those samples of $v(k)$ which occur at times equal to multiples of $M$ are retained by the decimator.

Figure 3.4: An $M$-fold decimator, mapping $v(0), v(1), v(2), \ldots$ to $v(0), v(M), v(2M), \ldots$

In terms of z-transforms, $v$ and $v_D$ are related as
\[
V_D(z) = \frac{1}{M}\sum_{n=0}^{M-1} V\!\left(z^{1/M}\, e^{-j2\pi n/M}\right) \tag{3.10}
\]
Representing $e^{-j2\pi/M}$ as $W$, the $M$-th root of unity, we can rewrite eqn. (3.10) as
\[
V_D(z) = \frac{1}{M}\sum_{n=0}^{M-1} V\!\left(z^{1/M} W^{n}\right) \tag{3.11}
\]
$M$-fold decimators are linear time-varying systems that down-sample the signal at their input. From eqn. (3.11) we see that decimation causes aliasing by adding multiple copies of the compressed spectrum. In general, to avoid aliasing in down-sampling by a factor of $M$, the bandwidth of the admitted signal should be limited to $\omega_N/M$, where $\omega_N = \omega_s/2$ is the Nyquist frequency. This is achieved by passing the signal through a low-pass filter prior to down-sampling.

3.2.5 M-Fold Expander
An $M$-fold expander, represented by $\uparrow M$ in Fig. 3.5, is a device that takes an input sequence $v(k)$ and produces the output
\[
v_E(k) = \begin{cases} v(k/M) & k = nM,\; n \in \{0, 1, \cdots\} \\ 0 & k \neq nM,\; n \in \{0, 1, \cdots\} \end{cases}
\]
where $M$ is a positive integer.
Figure 3.5: An $M$-fold expander, mapping $v(0), v(M), v(2M), \ldots$ to $v(0), 0, \ldots, 0, v(M), 0, \ldots, 0, v(2M), \ldots$
In terms of z-transforms, $v_E$ and $v$ are related as
\[
V_E(z) = V(z^M) \tag{3.12}
\]
Thus the Fourier transform of the output of the expander is a frequency-scaled version of the Fourier transform of the input. Since Fourier transforms repeat every $2\pi$, the Fourier transform of the output will contain $M$ copies of the compressed spectrum of the input. These multiple copies in the spectrum of $V_E(z)$ are called images; hence the expander is said to cause an imaging effect.
The zero-valued samples at $k \neq nM$ and the imaging effect created by the expander can be eliminated by passing the output of the expander through a lowpass filter with gain $M$ and cutoff $\omega_N/M$. The function of this lowpass filter, called the interpolation/expansion filter, in the frequency domain is to remove the images created by the expander, while in the time domain the convolution of $v_E$ with the impulse response of the filter fills in the zero-valued samples with interpolated values. Depending on the nature of the reconstruction of $v_E$, the interpolation filter can be designed to linearly interpolate the intermediate values or to hold the values at $k = nM$ constant for $M$ time-steps in the future.
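On arrays, the decimator of eqn. (3.9) and the expander defined above reduce to slicing; the following is a brief illustrative sketch, not from the thesis:

```python
import numpy as np

def decimate(v, M):
    """M-fold decimator (eqn. 3.9): keep every M-th sample."""
    return np.asarray(v)[::M]

def expand(v, M):
    """M-fold expander: insert M-1 zeros after each sample."""
    v = np.asarray(v, dtype=float)
    vE = np.zeros(M * len(v))
    vE[::M] = v
    return vE

v = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
assert np.array_equal(decimate(v, 2), [1.0, 3.0, 5.0])
assert np.array_equal(expand(np.array([1.0, 2.0]), 3),
                      [1.0, 0.0, 0.0, 2.0, 0.0, 0.0])
# down-sampling an expanded signal recovers the retained samples exactly
assert np.array_equal(decimate(expand(v, 3), 3), v)
```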
3.2.6 Type 1 Polyphase Decomposition of Digital Filters
To explain the basic idea behind the Type 1 polyphase decomposition of a digital filter, as defined on page 121 of [Vai93], consider a digital filter
\[
\hat H(z) = \sum_{n=-\infty}^{\infty} h(n)\, z^{-n}
\]
By separating the odd and even terms we can write
\[
\hat H(z) = \sum_{n=-\infty}^{\infty} h(2n)\, z^{-2n} + z^{-1}\sum_{n=-\infty}^{\infty} h(2n+1)\, z^{-2n}
\]
Extending this idea further to $M$ terms, we can decompose $\hat H(z)$ as
\[
\hat H(z) = \sum_{n=-\infty}^{\infty} h(nM)\, z^{-nM} + z^{-1}\sum_{n=-\infty}^{\infty} h(nM+1)\, z^{-nM} + \cdots + z^{-(M-1)}\sum_{n=-\infty}^{\infty} h(nM+M-1)\, z^{-nM}
\]
This can be compactly written as
\[
\hat H(z) = \sum_{l=0}^{M-1} z^{-l}\, \hat H_l(z^M) \tag{3.13}
\]
where
\[
\hat H_l(z) = \sum_{n=-\infty}^{\infty} h_l(n)\, z^{-n} \tag{3.14}
\]
with
\[
h_l(n) \equiv h(Mn+l), \qquad 0 \le l \le M-1
\]
The filters $\hat H_l(z)$ are called the polyphase components of the filter $\hat H(z)$.
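Equation (3.13) can be checked numerically for a finite impulse response by evaluating both sides at a point on the unit circle; the helper names below are illustrative, not from [Vai93]:

```python
import numpy as np

def polyphase_components(h, M):
    """Type 1 polyphase split of an FIR response: h_l(n) = h(M*n + l),
    l = 0..M-1 (eqn. 3.14). Pads h to a multiple of M with zeros."""
    h = np.asarray(h, dtype=float)
    h = np.concatenate([h, np.zeros((-len(h)) % M)])
    return [h[l::M] for l in range(M)]

def eval_tf(h, z):
    """Evaluate H(z) = sum_n h(n) z^{-n} at a complex point z."""
    return np.sum(h * z ** (-np.arange(len(h))))

# check eqn. (3.13): H(z) = sum_l z^{-l} H_l(z^M)
h = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
M = 3
comps = polyphase_components(h, M)
z = np.exp(1j * 0.7)                       # a point on the unit circle
lhs = eval_tf(h, z)
rhs = sum(z ** (-l) * eval_tf(hl, z ** M) for l, hl in enumerate(comps))
assert np.allclose(lhs, rhs)
```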
3.2.7 Lifting Operator and Blocking Mechanism in Digital Signal Processing
Figure 3.6(a) represents a digital filter and Fig. 3.6(b) represents the same implementation as a block digital filter. The signals $y_B(k)$ and $u_B(k)$ are the blocked versions of the signals $y(k)$ and $u(k)$, and the block filter $\underline{\hat H}(z)$ is the blocked version of the filter $\hat H(z)$, defined by (p. 431, [Vai93]),
\[
\underline{\hat H}(z) =
\begin{bmatrix}
\hat H_0(z) & \hat H_1(z) & \cdots & \hat H_{M-1}(z)\\
z^{-1}\hat H_{M-1}(z) & \hat H_0(z) & \cdots & \hat H_{M-2}(z)\\
\vdots & \vdots & \ddots & \vdots\\
z^{-1}\hat H_1(z) & z^{-1}\hat H_2(z) & \cdots & \hat H_0(z)
\end{bmatrix} \tag{3.15}
\]
where $\hat H_l(z)$ are the polyphase components of $\hat H(z)$ as defined in eqn. (3.14).
The blocking mechanism in digital signal processing is identical to the lifting operator used in digital control theory. Therefore the mapping of $y(k)$ to $y_B(k)$ in Fig. 3.6(b) is identical to the mapping of $v(k)$ to $\underline{v}(k)$ defined in eqn. (3.5). Similarly, the mapping of $\underline{v}(k)$ to $v(k)$ is identical to the mapping of $u_B(k)$ to $u(k)$.
Figure 3.6: Representation of digital block filtering.
The block-filtering setup in Fig. 3.6(b) can be represented using $L$ and $L^{-1}$ as shown in Fig. 3.7. Figure 3.7(a) represents a linear discrete-time system $\hat G$, with input $y(k)$ and output $u(k)$. Introducing $LL^{-1} = I$ in both the input and output channels results in Fig. 3.7(b). By absorbing $L^{-1}$ at the input and $L$ at the output of $\hat G$, we transform it to $\underline{\hat G} = L\hat G L^{-1}$. Figure 3.7(c) is identical to Fig. 3.6(b) if we replace $\underline{\hat G}$ by $\underline{\hat H}(z)$, $L$ by the blocking mechanism and $L^{-1}$ by the unblocking mechanism. Thus we see that the blocking mechanism in digital signal processing is equivalent to the lifting operator $L$ in control theory, the $L^{-1}$ operator is equivalent to the unblocking mechanism, and the system $\underline{\hat G}(z)$ represents the blocked version of the system $\hat G(z)$.
It is interesting to note that even though the decimators and the expanders in Fig. 3.6(b) produce aliasing components $U(zW^k)$, $Y(z)$ is free from aliasing.

Figure 3.7: Lifting of discrete-time LTI system.

A filter bank of the form shown in Fig. 3.6(b) is alias free if, and only if, $\underline{\hat H}(z)$ is pseudo-circulant [VM88].
A matrix is said to be circulant (p. 249, [Vai93]) if every row is obtained by a single right-shift of the previous row, with the rightmost element, which spills over in the process, circulated back to become the leftmost element. A pseudo-circulant matrix is a circulant matrix with the additional feature that the elements below the main diagonal are multiplied by $z^{-1}$. An example of a $3 \times 3$ pseudo-circulant matrix is
\[
P(z) =
\begin{bmatrix}
P_0(z) & P_1(z) & P_2(z)\\
z^{-1}P_2(z) & P_0(z) & P_1(z)\\
z^{-1}P_1(z) & z^{-1}P_2(z) & P_0(z)
\end{bmatrix}
\]
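The shift-and-spill rule can be written down directly; the following sketch (an illustration, not from [Vai93]) builds an $M \times M$ pseudo-circulant matrix from scalar component functions $P_l(z)$ and reproduces the $3 \times 3$ example above:

```python
import numpy as np

def pseudo_circulant(P, z):
    """Evaluate at z the pseudo-circulant matrix whose first row is
    [P_0(z), P_1(z), ..., P_{M-1}(z)]: each row is the previous row
    right-shifted, and entries that spill below the main diagonal
    pick up a factor z^{-1}."""
    M = len(P)
    out = np.empty((M, M), dtype=complex)
    for i in range(M):
        for j in range(M):
            val = P[(j - i) % M](z)          # circulant shift pattern
            out[i, j] = val if j >= i else val / z
    return out

# 3 x 3 example with constant components P_0 = 1, P_1 = 2, P_2 = 3 at z = 2
Pz = pseudo_circulant([lambda z: 1, lambda z: 2, lambda z: 3], 2.0)
expected = np.array([[1.0, 2.0, 3.0],
                     [3/2, 1.0, 2.0],
                     [2/2, 3/2, 1.0]])
assert np.allclose(Pz, expected)
```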
From the definition of $\underline{\hat H}(z)$ in eqn. (3.15), we see that $\underline{\hat H}(z)$ is pseudo-circulant. Hence the input to output mapping in Fig. 3.6(b) is alias free.
To determine the relationship between the Fourier transforms of $u(k)$ and $y(k)$, let us redraw Fig. 3.6(b) with delay chains as shown in Fig. 3.8. The relationship between the Fourier transforms of $v(k)$ and $u(k)$ in Fig. 3.8 is given by (p. 253, [Vai93]),
\[
U(z) = \frac{1}{M}\sum_{s=0}^{M-1} z^{-(M-1-s)} \left( \sum_{l=0}^{M-1} z^{-l}\, \underline{\hat H}_{s,l}(z^M) \right) V(z) \tag{3.16}
\]
Figure 3.8: Representation of digital block filtering with delay chain.
Substituting $V(z) = z^{M-1} Y(z)$ in eqn. (3.16), we get
\[
U(z) = \frac{1}{M}\sum_{s=0}^{M-1} z^{s} \left( \sum_{l=0}^{M-1} z^{-l}\, \underline{\hat H}_{s,l}(z^M) \right) Y(z) \tag{3.17}
\]
where $\underline{\hat H}_{s,l}(z)$ are the elements of $\underline{\hat H}(z)$. Equation (3.17) can be written more compactly as
\[
U(z) = \frac{1}{M}
\begin{bmatrix} 1 & z^{-1} & z^{-2} & \cdots & z^{-(M-1)} \end{bmatrix}
\underline{\hat H}(z^M)
\begin{bmatrix} 1 \\ z \\ z^2 \\ \vdots \\ z^{M-1} \end{bmatrix}
Y(z) \tag{3.18}
\]
For the lifted system $\underline{\hat G}(z)$, the relationship between the Fourier transforms of $u(k)$ and $y(k)$ in Fig. 3.7(c) is also given by eqn. (3.18), with $\underline{\hat H}(z^M)$ replaced by $\underline{\hat G}(z^M)$, where
\[
\underline{\hat G}(z^M) = \underline{\hat D} + \underline{\hat C}\,(z^M I - \underline{\hat A})^{-1}\underline{\hat B}
\]
with $(\underline{\hat A}, \underline{\hat B}, \underline{\hat C}, \underline{\hat D})$ defined by the matrix partitions in eqn. (3.6).
3.3 Transformation of LTI Controllers to Multi-Rate Systems
In this section we present a computationally efficient way of implementing a discrete-time LTI controller. The computational overhead is measured in terms of the number of states being updated at a given time step. The proposed algorithm updates the states of the controller at rates based on their natural frequencies, which results in a multi-rate system. Since all the states of the controller are not updated at the same time, the computational requirements for digital implementation are reduced. The effect of such a transformation on system behaviour is analysed using tools from multi-rate filter banks and lifting techniques.
Let us assume that the linear controller, denoted by $K$, has the following form,
\[
K := \begin{cases} \dot x_c(t) = A_c x_c(t) + B_c y(t) \\ u(t) = C_c x_c(t) \end{cases}
\]
where $u(t)$ denotes the controller output, $y(t)$ the plant output fed into the controller, and $x_c(t)$ the controller states. Let us also decompose the controller $K$ into two subsystems $K_f$ and $K_s$, such that $K(s) = K_f(s) + K_s(s)$, where $K_f$ and $K_s$ contain the fast and slow modes, respectively. The system decomposition is shown in Fig. 3.9.
Figure 3.9: Transformation of LTI controllers to multi-rate systems.
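The decomposition $K(s) = K_f(s) + K_s(s)$ can be sketched via an eigenvector transformation. The following split by natural frequency is an illustrative sketch, not the thesis's procedure: it uses a complex-diagonal realisation rather than a true real modal form, and the threshold `w_split` is a hypothetical parameter:

```python
import numpy as np

def fast_slow_split(A, B, C, w_split):
    """Split K = (A, B, C) into Kf + Ks by natural frequency |lambda|.
    Assumes A is diagonalizable; returns complex-diagonal realisations."""
    lam, V = np.linalg.eig(A)
    Bt = np.linalg.solve(V, B)          # input matrix in modal coordinates
    Ct = C @ V                          # output matrix in modal coordinates
    fast = np.abs(lam) >= w_split
    sub = lambda idx: (np.diag(lam[idx]), Bt[idx, :], Ct[:, idx])
    return sub(fast), sub(~fast)

def tf(A, B, C, s):
    """Transfer function C (sI - A)^{-1} B at a complex frequency s."""
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

# sanity check: Kf(s) + Ks(s) == K(s) for a simple diagonal example
A = np.diag([-50.0, -30.0, -1.0, -0.1])
B = np.ones((4, 1)); C = np.ones((1, 4))
(Af, Bf, Cf), (As, Bs, Cs) = fast_slow_split(A, B, C, w_split=10.0)
s = 1j * 2.0
assert np.allclose(tf(A, B, C, s), tf(Af, Bf, Cf, s) + tf(As, Bs, Cs, s))
```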
In a typical digital implementation of $K$, the sampling rate $\omega_s$ is chosen to be ten times faster than the cutoff frequency of the closed-loop system, and all the states of the controller are updated at this rate. If the modes of $K$ are sparsely distributed in frequency, then this update rate is more than sufficient for the states of $K_s$. We identify this as unnecessary computation, which can be avoided if the states of $K_s$ are updated at a slower rate.
Let us assume that $h = 1/\omega_s$ is the time step used to discretise the original controller $K$. Let us also assume that $K_s$ is such that it is sufficient to update its states every $M$ time steps, where $M$ is a positive integer. For digital implementation, $K_f$ and $K_s$ are transformed to discrete-time systems $\hat K_f$ and $\hat K_s$, with time-steps $h$ and $hM$, respectively. The states of $\hat K_f$ are therefore updated at every time step, denoted by $k$, and the states of $\hat K_s$ are updated at every time step $k = nM$.
From table (3.1), we see that the computational requirement at time steps $k \neq nM$, $n \in \mathbb{Z}^*$, is lower than that at time steps $k = nM$, because at $k \neq nM$ only the states of $\hat K_f$ are being updated. At time steps $k = nM$, all the states of the controller are updated, and hence the computational requirement at these time steps equals the overhead encountered in conventional digital implementation, where all states of the controller are updated at every time step.

Time Step      | 0          | 1    | 2    | ... | M-1  | M          | M+1  | ...
System Updated | K̂f, K̂s   | K̂f  | K̂f  | ... | K̂f  | K̂f, K̂s   | K̂f  | ...

Table 3.1: State update pattern of dual-rate linear controllers.
Therefore, by simply implementing $K$ as a multi-rate system, we can reduce the computational overhead at time-steps $k \neq nM$. However, the periodic increase in the CPU requirement is undesirable, and we wish to reduce the computational requirement uniformly at all time steps.
The key idea presented in this chapter is the scheduling algorithm for updating the states of $\hat K_s$, so that the computational overhead is reduced at all time-steps.

3.3.1 Scheduling Algorithm for Uniform Reduction in CPU Overhead
Uniform reduction in computational overhead is achieved by distributing the computation required to update the states of $\hat K_s$ over time. If the state space of $\hat K_s$ can be partitioned into $M$ subsets, then the distribution of computation can be achieved by updating these subsets one after another. With such an update policy, all the states of $\hat K_s$ are updated every $M$ time steps, as required. Since the subsets contain fewer states than $\hat K_s$, updating them along with the states of $\hat K_f$ will reduce the spikes in computational overhead at times $k = nM$, but will at the same time increase the computational requirements at times $k \neq nM$. Clearly, the uniformity of the CPU overhead depends on the uniformity of the number of states in each partition.
The process of updating partial states of $\hat K_s$ can be achieved quite easily if $\hat K_s$ is decomposed into modal form. Let us assume that $\hat K_s$ has $M$ distinct eigenvalues. Modal decomposition of $\hat K_s$ will then yield $M$ sub-systems, denoted by $\hat K_s = \{\hat K_{s_0}, \hat K_{s_1}, \ldots, \hat K_{s_{M-1}}\}$. The computation required to update the states of $\hat K_s$ is spread over time by updating these modal sub-systems one after another. In this manner, all the states of $\hat K_s$ are updated in $M$ time-steps.

From table (3.2), we see that at any given time-step $k$, the systems $\hat K_f$ and $\hat K_{s_i}$ are being updated, where $i$ is the remainder of the integer division $k/M$.

Note 3.1 Note that this round-robin scheduling of the $\hat K_{s_i}$ has transformed the controller into a periodically time-varying system.

Let us denote the controller with dynamics shown in table (3.2) as $\hat\Psi$, and the transformation of $K$ to $\hat\Psi$ as $\hat\Psi = T(K)$.

Time Step      | 0            | 1            | ... | M-1                | M            | M+1          | ...
System Updated | K̂f, K̂s0    | K̂f, K̂s1    | ... | K̂f, K̂s(M-1)      | K̂f, K̂s0    | K̂f, K̂s1    | ...

Table 3.2: Scheduling algorithm for uniform reduction in CPU overhead.
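The schedule of table (3.2) can be sketched in a few lines. This illustrative implementation (class name and matrices are hypothetical, not from the thesis) updates $\hat K_f$ every step and one modal subsystem of $\hat K_s$ per step, forming the output from the immediately-updated states, i.e. the variant of eqn. (3.25):

```python
import numpy as np

class MultiRateController:
    """Round-robin schedule of table 3.2: the fast subsystem is updated
    every step; modal subsystem i of the slow part is updated only when
    i == k mod M."""
    def __init__(self, fast, slow_modes):
        self.Af, self.Bf, self.Cf = fast
        self.slow = slow_modes            # list of (Ai, Bi, Ci) triples
        self.xf = np.zeros(self.Af.shape[0])
        self.xs = [np.zeros(Ai.shape[0]) for Ai, _, _ in slow_modes]
        self.k = 0

    def step(self, y):
        self.xf = self.Af @ self.xf + self.Bf @ y
        i = self.k % len(self.slow)       # round-robin index mu(k, M)
        Ai, Bi, _ = self.slow[i]
        self.xs[i] = Ai @ self.xs[i] + Bi @ y
        u = self.Cf @ self.xf + sum(Ci @ x for (_, _, Ci), x
                                    in zip(self.slow, self.xs))
        self.k += 1
        return u

# one fast state, three scalar slow modes; only one slow mode per step
fast = (np.array([[0.5]]), np.array([[1.0]]), np.array([[1.0]]))
modes = [(np.array([[0.9]]), np.array([[1.0]]), np.array([[1.0]]))
         for _ in range(3)]
ctrl = MultiRateController(fast, modes)
ctrl.step(np.array([1.0]))
assert ctrl.xs[0][0] != 0.0 and ctrl.xs[1][0] == 0.0
```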
3.3.2 Formal Definition of the Transformation

In this section we formally define the transformation $T(K)$ that achieves a uniform reduction of computational overhead. Before we formally define $T(K)$, some definitions are necessary.
Definition 3.1 Let $\pi(K)$ denote the maximum number of distinct eigenvalues of $K$, where $K$ is an LTI system.

Definition 3.2 Let $\phi(K, h)$ denote the continuous to discrete time transformation of an LTI system $K$, with discretisation time-step $h$. The discrete time system is denoted by $\hat K = \phi(K, h)$. If $K \equiv (A, B, C)$ then $\hat K \equiv (\hat A, \hat B, \hat C)$. We assume that $\phi$ is such that for $\hat K = \phi(K, h)$, $\pi(\hat K) = \pi(K)$.

Definition 3.3 Define $\mu(i, j)$ as
\[
\mu(i, j) = i - j \lfloor i/j \rfloor ; \qquad i, j \in \mathbb{Z},\ j \neq 0
\]
This function simply returns the remainder of the integer division $i/j$.

Definition 3.4 Define $\mathcal{M}(K)$ such that $\mathcal{M}(K)$ decomposes an LTI system $K$ into modal form and generates a set of modal sub-systems $\{K_i\}$, where $i \in \mathbb{Z}^*$, $i < \pi(K)$. Each of the modal systems $K_i$ has an associated
• state vector $x_i$,
• output vector $u_i$,
• input vector $y$, which is common to all the modal sub-systems,
• and dynamics defined by $(A_i, B_i, C_i)$.
It is assumed that $K$ has no direct feedthrough; therefore $K_i$ also does not have any direct feedthrough.

At this point we are ready to formally define the transformation of a given continuous-time linear time invariant controller $K$ to a multi-rate, linear periodically time varying, discrete-time system $\hat\Psi$, as follows.
Definition 3.5 $T(K)$ is the transformation from $K \mapsto \hat\Psi$ defined by the following steps:

1. Decompose $K$ into $K_s$ and $K_f$, where $K_f$ and $K_s$ contain the fast and the slow modes of $K$, respectively.

2. Obtain $\hat K_f = \phi(K_f, h)$, where $h$ is the time-period of the base clock.

3. Obtain $\hat K_s = \phi(K_s, h\pi(K_s))$.

4. Obtain $\{\hat K_{s_i}\} = \mathcal{M}(\hat K_s)$, $i = \{0, 1, \ldots, \pi(\hat K_s) - 1\}$.

5. Define the dynamics of $\hat\Psi$ as
\[
\begin{aligned}
x_f^{k+1} &= \hat A_f x_f^k + \hat B_f y^k \\
x_{s_i}^{k+1} &= \begin{cases} \hat A_{s_i} x_{s_i}^k + \hat B_{s_i} y_s^k & \text{if } i = \mu(k, \pi(\hat K_s)) \\ x_{s_i}^k & \text{if } i \neq \mu(k, \pi(\hat K_s)) \end{cases}
\end{aligned} \tag{3.19}
\]
In eqn. (3.19), the controller input $y_s^k$ can be one of the following three:
\[
y_s^k = y^{\acute k} ; \qquad \acute k = \lfloor k/\pi(\hat K_s) \rfloor \tag{3.20}
\]
\[
y_s^k = y^k \tag{3.21}
\]
\[
y_s^k = \left( y^k + y^{k-1} + \cdots + y^{k-\pi(\hat K_s)+1} \right)/\pi(\hat K_s) ; \qquad y^k = 0 \text{ for } k < 0 \tag{3.22}
\]
The first definition of $y_s^k$ means that the states of the $\hat K_{s_i}$ are updated using the same value of the plant output. The second definition uses the most recent plant output to update $x_{s_i}$, and the last definition uses the running average of the plant output over $\pi(\hat K_s)$ time-steps to update the states of the $\hat K_{s_i}$.

6. Define the output of $\hat\Psi$ as
\[
u^k = u_f^k + u_s^k \tag{3.23}
\]
where $u_f^k = \hat C_f x_f^k$ and $u_s^k$ is one of the following:
\[
u_s^k = \sum_{i=0}^{\pi(\hat K_s)-1} \hat C_{s_i} x_{s_i}^{\acute k} ; \qquad \acute k = \lfloor k/\pi(\hat K_s) \rfloor \tag{3.24}
\]
\[
u_s^k = \sum_{i=0}^{\pi(\hat K_s)-1} \hat C_{s_i} x_{s_i}^{k} \tag{3.25}
\]

The difference between the two definitions of $u_s^k$ is that in eqn. (3.24) the updated states of the modal systems do not affect the controller output until all the modal systems have been updated, whereas in eqn. (3.25) the updated states of the $\hat K_{s_i}$ affect $u_s^k$ immediately.
Since the input and output of $\hat\Psi$ have each been defined in more than one way, the definition of $\hat\Psi$ as a dynamical system depends on the combination of input and output definitions. In this chapter, we only analyse the frequency domain characteristics of $\hat\Psi$ as defined by the following four combinations. We define

• $\hat\Psi_1$ to be the dynamics of $\hat\Psi$ with $y_s^k$ given by eqn. (3.20) and $u_s^k$ given by eqn. (3.24),
• $\hat\Psi_2$ to be the dynamics of $\hat\Psi$ with $y_s^k$ given by eqn. (3.21) and $u_s^k$ given by eqn. (3.24),
• $\hat\Psi_3$ to be the dynamics of $\hat\Psi$ with $y_s^k$ given by eqn. (3.21) and $u_s^k$ given by eqn. (3.25),
• $\hat\Psi_4$ to be the dynamics of $\hat\Psi$ with $y_s^k$ given by eqn. (3.22) and $u_s^k$ given by eqn. (3.24).
3.3.3 Frequency Response of Ψ̂ = T(K)

From a control system point of view, it is important to analyse the dynamics of $\hat\Psi = T(K)$ in the frequency domain. To determine the relationship between the norms $\|y(k)\|_2$ and $\|u(k)\|_2$, it is necessary to represent the mapping $y(k) \mapsto u(k)$ in terms of LTI systems. In the following subsections, we transform the multi-rate systems $\hat\Psi_i$, defined in section 3.3.2, to LTI systems [MB75] and define the transfer function for $y(k) \mapsto u(k)$.

3.3.3.1 Transfer Function for Ψ̂₁
From the definition of $\hat\Psi_1$, we see that all the modal systems are updated using the same value of the plant output, which is obtained every $M$ time-steps, where $M = \pi(\hat K_s)$. Also, the states of the $\hat K_{s_i}$ do not affect $u_s^k$ until all the modal systems have been updated. Therefore, from a system point of view, $\hat\Psi_1$ is identical to the dual-rate system shown in table (3.1); it differs only from the point of view of computation. The dynamics of $\hat\Psi_1$ can therefore be represented by Fig. 3.10.

Figure 3.10 represents the mapping $y(k) \mapsto u(k)$ in terms of samplers and holds. The subscripts $s$ and $f$ on the samplers and holds denote slow and fast sampling and hold, respectively. The signal $y(k)$ is sampled at the rate required by $\hat K_f$, and the filter $F$ is the anti-aliasing filter for the slow sampler $S_s$.
^
Kf y(k)
+ F
Hf
^
Ss
Ks
Sf
Hs
u(k)
us (k)
ˆ 1 as a linear periodically time varying system. Figure 3.10: Ψ
The multi-rate system in Fig. 3.10 can be transformed into a single rate system with the help of lifting operators, as shown in Fig. 3.11.
Figure 3.11: Ψ̂₁ as a linear time invariant system.
Assuming (pp. 211–213, [CF95])
\[
S_f H_s = L^{-1}\begin{bmatrix} I \\ I \\ \vdots \\ I \end{bmatrix}, \qquad
S_s H_f = \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix} L, \qquad
S_s = S_s H_f S_f,
\]
and $\hat G = SGH$, the transfer function between $\underline{y}(k)$ and $\underline{u}(k)$, denoted by $\underline{\hat\Psi}_1$, can therefore be written as
\[
\begin{aligned}
\underline{\hat\Psi}_1 &= L\hat K_f L^{-1} + L S_f H_s \hat K_s S_s F H_f L^{-1} \\
&= L\hat K_f L^{-1} + L(S_f H_s)\hat K_s (S_s H_f)(S_f F H_f)L^{-1} \\
&= L\hat K_f L^{-1} + LL^{-1}\begin{bmatrix} I \\ I \\ \vdots \\ I \end{bmatrix}\hat K_s \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix} L(S_f F H_f)L^{-1}
\end{aligned}
\]
or
\[
\underline{\hat\Psi}_1 = \underline{\hat K}_f + \begin{bmatrix} I \\ I \\ \vdots \\ I \end{bmatrix}\hat K_s \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix}\underline{\hat F}_f \tag{3.26}
\]
where $\underline{\hat K}_f = L\hat K_f L^{-1}$ and $\underline{\hat F}_f = L(S_f F H_f)L^{-1}$.

3.3.3.2 Transfer Function for Ψ̂₂
If the most recent value of $y^k$ is used to update $x_{s_i}$, and $x_{s_i}$ does not affect $u_s^k$ until all the $\hat K_{s_i}$ have been updated, then $\hat\Psi_2$ is defined by the interconnection shown in Fig. 3.12.

Figure 3.12: Ψ̂₂ as an LTI system.

Therefore,
\[
\underline{\hat\Psi}_2 = L\hat K_f L^{-1} + \underline{\hat K}_s L(S_f F H_f)L^{-1}
\]
or
\[
\underline{\hat\Psi}_2 = \underline{\hat K}_f + \underline{\hat K}_s \underline{\hat F}_f \tag{3.27}
\]
where $\underline{\hat K}_s$ is given by
\[
\underline{\hat K}_s =
\left[\begin{array}{cccc|cccc}
\hat A_{s_0} & 0 & \cdots & 0 & \hat B_{s_0} & 0 & \cdots & 0\\
0 & \hat A_{s_1} & \cdots & 0 & 0 & \hat B_{s_1} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \hat A_{s_{M-1}} & 0 & 0 & \cdots & \hat B_{s_{M-1}}\\ \hline
\hat C_{s_0} & \hat C_{s_1} & \cdots & \hat C_{s_{M-1}} & 0 & 0 & \cdots & 0\\
\hat C_{s_0} & \hat C_{s_1} & \cdots & \hat C_{s_{M-1}} & 0 & 0 & \cdots & 0\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
\hat C_{s_0} & \hat C_{s_1} & \cdots & \hat C_{s_{M-1}} & 0 & 0 & \cdots & 0
\end{array}\right] \tag{3.28}
\]

3.3.3.3 Transfer Function for Ψ̂₃
If the most recent value of $y^k$ is used to update $x_{s_i}$, and $x_{s_i}$ affects $u_s^k$ immediately, then $\hat\Psi_3$ is also defined by the interconnection shown in Fig. 3.12, but with $\underline{\hat K}_s$ defined as
\[
\underline{\hat K}_s =
\left[\begin{array}{ccccc|ccccc}
\hat A_{s_0} & 0 & 0 & \cdots & 0 & \hat B_{s_0} & 0 & 0 & \cdots & 0\\
0 & \hat A_{s_1} & 0 & \cdots & 0 & 0 & \hat B_{s_1} & 0 & \cdots & 0\\
0 & 0 & \hat A_{s_2} & \cdots & 0 & 0 & 0 & \hat B_{s_2} & \cdots & 0\\
\vdots & & & \ddots & \vdots & \vdots & & & \ddots & \vdots\\
0 & 0 & 0 & \cdots & \hat A_{s_{M-1}} & 0 & 0 & 0 & \cdots & \hat B_{s_{M-1}}\\ \hline
\hat C_{s_0} & \hat C_{s_1} & \hat C_{s_2} & \cdots & \hat C_{s_{M-1}} & 0 & 0 & 0 & \cdots & 0\\
\hat C_{s_0}\hat A_{s_0} & \hat C_{s_1} & \hat C_{s_2} & \cdots & \hat C_{s_{M-1}} & \hat C_{s_0}\hat B_{s_0} & 0 & 0 & \cdots & 0\\
\hat C_{s_0}\hat A_{s_0} & \hat C_{s_1}\hat A_{s_1} & \hat C_{s_2} & \cdots & \hat C_{s_{M-1}} & \hat C_{s_0}\hat B_{s_0} & \hat C_{s_1}\hat B_{s_1} & 0 & \cdots & 0\\
\vdots & & & & \vdots & \vdots & & & & \vdots\\
\hat C_{s_0}\hat A_{s_0} & \hat C_{s_1}\hat A_{s_1} & \hat C_{s_2}\hat A_{s_2} & \cdots & \hat C_{s_{M-1}} & \hat C_{s_0}\hat B_{s_0} & \hat C_{s_1}\hat B_{s_1} & \hat C_{s_2}\hat B_{s_2} & \cdots & 0
\end{array}\right] \tag{3.29}
\]
Therefore,
\[
\underline{\hat\Psi}_3 = \underline{\hat K}_f + \underline{\hat K}_s \underline{\hat F}_f \tag{3.30}
\]

3.3.3.4 Transfer Function for Ψ̂₄
The definition of $\hat\Psi_4$ requires the most recent running average of the plant output over $M = \pi(\hat K_s)$ time-steps. The computation of the running average is assumed to be a separate process, as is often the case, and hence its overhead is not accounted for in the controller implementation.
The sequence $y_s^k$ as defined by eqn. (3.22) can be written as
\[
y_s(k) = \frac{y(k) + y(k-1) + \cdots + y(k-M+1)}{M}
\]
Therefore, in terms of z-transforms,
\[
\begin{aligned}
Y_s(z) &= y_s(0) + y_s(1)/z + \cdots + y_s(k)/z^k + \cdots \\
&= \frac{y(0)+0+\cdots+0}{M} + \frac{y(1)+y(0)+0+\cdots+0}{Mz} + \frac{y(2)+y(1)+y(0)+0+\cdots+0}{Mz^2} + \cdots \\
&= \frac{y(0)+y(1)/z+\cdots}{M} + \frac{y(0)+y(1)/z+\cdots}{Mz} + \cdots + \frac{y(0)+y(1)/z+\cdots}{Mz^{M-1}} \\
&= \left(\frac{1 + 1/z + \cdots + 1/z^{M-1}}{M}\right) Y(z)
\end{aligned}
\]
Therefore the running average can be computed by a digital filter defined by
\[
\hat F_{avg} = \frac{1}{M}\left( I_{n_y} + z^{-1}I_{n_y} + z^{-2}I_{n_y} + \cdots + z^{-(M-1)}I_{n_y} \right) \tag{3.31}
\]
where $I_{n_y}$ is an $n_y \times n_y$ identity matrix and $n_y$ is the dimension of the signal $y(k)$. Figure 3.13 shows the frequency response of $\hat F_{avg}$ with sampling interval $h = 0.01$. The dashed line in Fig. 3.13 is a first order LTI system with cut-off $2\pi/(Mh)$. Comparing the two plots, we see that $\hat F_{avg}$ acts like a lowpass filter with the desired cutoff of $2\pi/(Mh)$.
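The filter of eqn. (3.31) is just a length-$M$ moving average; the following scalar ($n_y = 1$) sketch, not taken from the thesis, uses the zero initial condition of eqn. (3.22) and checks the lowpass behaviour numerically:

```python
import numpy as np

def f_avg(y, M):
    """Running average of the last M samples, with y(k) = 0 for k < 0
    (eqns. 3.22 and 3.31, scalar case)."""
    ypad = np.concatenate([np.zeros(M - 1), np.asarray(y, dtype=float)])
    return np.array([ypad[k:k + M].mean() for k in range(len(y))])

y = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(f_avg(y, 2), [0.5, 1.5, 2.5, 3.5])

# frequency response magnitude |F_avg(e^{jw})|: unity at DC, smaller at
# higher frequencies, i.e. lowpass behaviour
H = lambda M, w: abs(np.exp(-1j * w * np.arange(M)).sum()) / M
assert np.isclose(H(4, 0.0), 1.0) and H(4, np.pi / 2) < 1.0
```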
The dynamics of $\hat\Psi_4$ in the lifted input-output space is shown in Fig. 3.14, and the transfer function is given by
\[
\underline{\hat\Psi}_4 = \underline{\hat K}_f + \underline{\hat K}_s \underline{\hat F}_{avg} \tag{3.32}
\]
Figure 3.13: Frequency response of F̂avg (solid) and of a first-order LTI system with cut-off 2π/(Mh) (dashed).
where $\underline{\hat K}_s$ is given by eqn. (3.28) and $\underline{\hat F}_{avg}$ is given by
\[
\underline{\hat F}_{avg} = \frac{1}{M}
\begin{bmatrix}
I_{n_y} & z^{-M}I_{n_y} & z^{-M}I_{n_y} & \cdots & z^{-M}I_{n_y}\\
I_{n_y} & I_{n_y} & z^{-M}I_{n_y} & \cdots & z^{-M}I_{n_y}\\
I_{n_y} & I_{n_y} & I_{n_y} & \cdots & z^{-M}I_{n_y}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
I_{n_y} & I_{n_y} & I_{n_y} & \cdots & I_{n_y}
\end{bmatrix}
\]

Figure 3.14: Ψ̂₄ as an LTI system.
3.3.4 Reduction in Computational Overhead
The reduction in computational overhead achieved by the transformation $T(K)$ can be determined as follows. Let $\eta(K)$ denote the number of states of an LTI system $K$, and $T_{cpu}(n)$, $n \in \mathbb{Z}^*$, the computational time, or overhead, required to update $n$ states of a discrete-time LTI system. Let us also assume that $T_{cpu}(n_1) \ge T_{cpu}(n_2) \ge 0$ if $n_1 \ge n_2 \ge 0$.
The number of states being updated at any time step $k$ is therefore
\[
\eta(\hat K_f) + \eta(\hat K_{s_i})
\]
where $i = \mu(k, \pi(\hat K_s))$. The reduction in computational overhead at any time step $k$ can therefore be defined as
\[
\Delta T_{cpu}(k) = T_{cpu}(\eta(\hat K)) - T_{cpu}(\eta(\hat K_f) + \eta(\hat K_{s_i})) ; \qquad i = \mu(k, \pi(\hat K_s)) \tag{3.33}
\]
where $T_{cpu}(\eta(\hat K))$ denotes the computational time required if all the states of the controller $K$ are updated at every time step. From the definition of $\Delta T_{cpu}$ in eqn. (3.33), we see that it depends on the number of states of $\hat K_{s_i}$. Therefore $\Delta T_{cpu}$ is a periodic function of time, the minimum value of which determines the savings in terms of CPU overhead. The lower bound on $\Delta T_{cpu}$ can be determined as follows.
If the maximum multiplicity of the eigenvalues of $\hat K_s$ is $p$, then
\[
\eta(\hat K_{s_i}) \le p, \qquad \forall i \in \{0, 1, \ldots, \pi(\hat K_s) - 1\},
\]
and therefore the minimum value of $\Delta T_{cpu}(k)$ is given by
\[
\Delta T_{cpu,min} = \min_k \Delta T_{cpu}(k) = T_{cpu}(\eta(\hat K)) - T_{cpu}(\eta(\hat K_f) + p)
\]
Since $\Delta T_{cpu,min}$ satisfies
\[
\Delta T_{cpu,min} > 0 \quad \text{if } p < \eta(\hat K_s),
\]
we can conclude that the transformation $T(K)$ achieves a uniform reduction in the computational overhead by a nonzero amount, quantified by $\Delta T_{cpu,min}$.
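The per-step state count, and hence the bound above, can be checked with a few lines; the partition sizes below are a hypothetical example (not the thesis's controller) consistent with $\eta(\hat K_{s_i}) \le p = 2$:

```python
def states_updated(k, n_f, modal_sizes):
    """Number of states updated at step k under the round-robin schedule:
    all states of Kf plus modal subsystem i = k mod M of Ks."""
    M = len(modal_sizes)
    return n_f + modal_sizes[k % M]

# hypothetical 18-state controller: 2 fast states, 11 modal subsystems
# of the slow part with at most p = 2 states each (16 slow states total)
modal = [2, 1, 1, 2, 2, 1, 2, 2, 1, 1, 1]
assert sum(modal) == 16
peak = max(states_updated(k, 2, modal) for k in range(len(modal)))
assert peak == 4    # at most eta(Kf) + p = 2 + 2 states per time step
```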
3.4 Example
In this section we study the effect of the transformation $T(K)$ on a controller designed for a B737-100 TSRV (Transport System Research Vehicle) linear longitudinal motion model. The aircraft model has four states: longitudinal velocity $V$ (ft/s), angle-of-attack $\alpha$ (rad), pitch rate $q$ (rad/s) and pitch angle $\theta$ (rad); and two control inputs: thrust $T$ (lb) and elevator deflection $\delta_e$ (deg). The elevator actuator and the engine are modelled as $16/(s + 16)$ and $20/(s^2 + 12s + 20)$, respectively. The control objective is to achieve decoupled responses of $V$ and the flight-path angle $\gamma$ to reference signals. The controller was designed using $H_\infty$ theory and has 18 states. Details of the controller design can be obtained from reference [GB01].
The sampling rate of 100 Hz is sufficiently fast to implement this controller in a digital computer. From table (3.3), we see that the natural frequencies of the controller vary from $7.01 \times 10^{-6}$ Hz to 27.04 Hz. Clearly, updating all the states of this system at 100 Hz will result in unnecessary computation.
No. | Natural Frequency (Hz)     No. | Natural Frequency (Hz)
 1  | 27.04                      10  | 0.32
 2  | 27.04                      11  | 0.16
 3  | 12.77                      12  | 0.13
 4  | 12.77                      13  | 0.13
 5  | 2.67                       14  | 2.55 × 10⁻²
 6  | 0.80                       15  | 2.55 × 10⁻²
 7  | 0.50                       16  | 1.25 × 10⁻²
 8  | 0.50                       17  | 1.25 × 10⁻³
 9  | 0.32                       18  | 7.01 × 10⁻⁶

Table 3.3: Natural frequencies of the controller designed in reference [GB01].
For this example, we decompose the controller $K$ by assigning the fastest two modes to $K_f$ and the rest to $K_s$. Modal decomposition of $K_s$ then yields eleven modal sub-systems; therefore $M = \pi(K_s) = 11$.

Note 3.2 Note that the maximum multiplicity of the eigenvalues is two; hence we can expect a substantial reduction in the computational overhead.

From section 3.3.2, we can generate the systems $\hat\Psi_1$, $\hat\Psi_2$, $\hat\Psi_3$ and $\hat\Psi_4$ for this system. The largest singular value plots of their corresponding transfer functions, in the lifted input-output space, are shown in Figs. 3.15 and 3.16.
Figure 3.15: Maximum singular value plot of Ψ̂ᵢ, i = 1 (dash-dot), 2 (dot), 3 (solid), and σ̄(K̂) (dashed).

The figures reveal that $\hat\Psi_i$, $i = 1, 2, 3, 4$, are close to the original system $\hat K$ at low frequencies in terms of the largest singular value plots. There is, however, distortion at higher frequencies. Systems $\hat\Psi_i$, $i = 1, 2, 3$, cause almost identical distortions at these frequencies. It is interesting to note that $\hat\Psi_4$ causes the least distortion.
To study the frequency domain effect of the transformation $T(K)$ on the closed-loop system, we need to define the closed-loop system in the lifted input-output space. This is shown in Fig. 3.17.

Note 3.3 A discrete-time representation of the plant is used for performance analysis. The discrete-time plant, sampled at 100 Hz, is sufficiently close to the continuous-time plant and hence can be used for performance analysis [KA92].

Figure 3.16: Maximum singular value plot of Ψ̂₄ (solid) and σ̄(K̂) (dashed).

Figure 3.17: Closed-loop system in lifted I/O space.

The closed-loop performance with the controllers $\hat\Psi_i$ is compared with that of the original controller $\hat K$. For the purpose of discussion, let us represent the closed-loop system with $\hat K$ as $\hat P_0$, and the closed-loop system with $\hat\Psi_i$ as $\hat P_i$. Figure 3.18 plots the maximum singular values of the four closed-loop transfer functions $\gamma_{ref} \to \gamma$, $\gamma_{ref} \to V$, $V_{ref} \to \gamma$ and $V_{ref} \to V$, for the four definitions of $T(K)$.
Note 3.4 Note that we plot singular values because these transfer functions are multivariable systems in the lifted input-output space.
From the plots in Fig. 3.18, we see that the systems $\hat P_i$ do not differ from $\hat P_0$ significantly in terms of the largest singular values. At low frequencies, the tracking response of $\hat P_i$ is identical to that of $\hat P_0$, and the four transformations achieve satisfactory decoupling of the $\gamma$ and $V$ responses. There is, however, deviation at around 1 rad/s in the four transfer functions.
To investigate the robustness of the transformed controllers, we analysed the nominal performance and robust stability of the four closed-loop systems. From our analysis we observed that none of the four transformed controllers could achieve the desired robust performance. This is expected, since we did not consider robustness when we decomposed $K$ into $K_f$ and $K_s$.
To study the effect of $T(K)$ in the time domain, the step responses of the closed-loop system to a velocity command are shown in Fig. 3.19. The step responses of the systems $\hat P_i$, $i = 1, 2, 3, 4$, are quite close to that of $\hat P_0$, except that $\hat\Psi_3$ generates oscillatory elevator output for a velocity step command, which is not desirable. We also observed, from plots not included in this chapter, that the closed-loop responses to a step command in $\gamma$, for the different $\hat P_i$, $i = 1, 2, 3, 4$, are also close to that of $\hat P_0$. Therefore, in the time domain as well, there is no significant change in the behaviour of the closed-loop system due to the transformation.
Note that the oscillatory elevator output of $\hat\Psi_3$ is not observed in the singular value plot. This is because frequency domain analysis of discrete-time systems is restricted to the Nyquist frequency of the system. The influence of the updated states of the $\hat K_{s_i}$ on the output of $\hat\Psi_3$ occurs at a frequency higher than the Nyquist frequency of the lifted system, and hence is not observable from the singular value plot of $\hat\Psi_3$.
Since the averaging filter in $\hat\Psi_4$ is an FIR filter, we were interested in analysing the closed-loop response to a velocity step command in the presence of high frequency disturbances, such as a gust. The gust model used is the NASA Dryden gust model; details of the gust model and its implementation are available in [GB01]. From the response in Fig. 3.20, we see that the time responses of $\hat P_0$ and $\hat P_4$ are not significantly different. Therefore, we can conclude that the FIR filter at the input of $K_s$ does not degrade the high-frequency attenuation properties of the closed-loop system.
From the plots of the largest singular values of the $\hat P_i$ and the step responses, we observe that the transformation $T(K)$ causes some degradation in the response of the closed-loop system to reference commands. The degradation in tracking response, however, is not significant for this example.
The reduction in the computational overhead needed to implement the transformed controller is quite substantial. In a conventional implementation, where all the states of the controller are updated at every time step and the $A$ matrix of the controller is assumed dense, the number of floating point operations required for this example is 1000 MATLAB flops. If the controller is transformed into modal form, the $A$ matrix of the controller is block-diagonal and the flop count in that case is 720. With the transformed controller $T(K)$, a maximum of only 132 MATLAB flops is required. Therefore, for this example, it is possible to reduce the required computational overhead by 81.66% if $A$ is block-diagonal and by 86.8% if $A$ is dense. This reduction in computational overhead is quite significant.
Figure 3.18: Frequency response of the closed loop system with Ψ̂ᵢ, i = 0 (thick solid), 1 (solid), 2 (dot), 3 (dash-dot), 4 (dash-dash); panels show γref → γ, γref → V, Vref → γ and Vref → V.
Figure 3.19: Response of P̂ᵢ, i = 0 (thick solid), 1 (solid), 2 (dash-dot), 3 (dot), 4 (dash-dash), to a velocity step command; panels show γ (deg), V (ft/s), thrust (lbs) and δe (deg). All four closed-loop systems respond similarly to a velocity step command. The velocity and thrust trajectories are indistinguishable. The elevator commands for P₁, P₂ and P₃ are similar; the elevator command for P₄ is oscillatory. The γ responses to the velocity command differ, but are small.
Figure 3.20: Response of P̂ᵢ, i = 0 (thick solid), 4 (solid), to a velocity step command with gust. The figure plots the time response to the velocity step command in the presence of high-frequency disturbances, such as a gust. The solid line is the response of the closed-loop system with the single-rate controller; the dashed line is the response of the closed-loop system with the dual-rate controller with the FIR filter at the input of the controller. The FIR filter implements a running average of the sensor output over M time-steps. The time responses of both closed-loop systems are similar; therefore, the averaging of the sensor output does not degrade the high-frequency attenuation properties of the closed-loop system.
Chapter 4

Optimal Multi-Rate Decomposition of LTI Controllers

4.1 Introduction
This chapter extends the previous chapter, where we explored innovative models of computation to efficiently implement digital controllers. There, we decomposed a linear time-invariant controller into fast and slow systems, Kf and Ks. The system Ks was further decomposed into modal form, and the control algorithm was implemented by updating each of the modal sub-systems of Ks along with Kf. The controller was therefore implemented as a dual-rate system, and the computational tasks associated with the modal sub-systems of Ks were scheduled by a round-robin scheduling algorithm.
In this chapter, we transform the LTI controller into a multi-rate system and schedule the computational tasks, associated with the multi-rate sub-systems of the controller, using real-time scheduling algorithms. The scheduling algorithm and the sampling rate of the computational tasks determine the computational overhead of implementing the controller.
It turns out that lower sampling rates result in lower utilisation of computational resources. However, lower controller sampling rates cause degradation in robust performance and possibly instability. Clearly, there is a tradeoff between robust performance of the closed loop and the utilisation of computational resources. Therefore, a systematic method of decomposing an LTI controller into a multi-rate system, one that reduces the utilisation of computational resources while guaranteeing robust performance, is necessary.
If the controller is decomposed into modal form, then the problem can be posed as a nonlinear programming problem with the sampling rates of the modal systems as the parameters of optimisation. The constraints on the parameters are the robust-performance constraints for the closed-loop system. If performance can be compromised, the robust performance constraint can be relaxed to robust stability. The cost function for this optimisation is defined in terms of the number of states in the modal sub-systems and their sampling rates.
The chapter is organised as follows. We first present some background on real-time scheduling theory and robust control. A formal definition of the problem is presented next. This is followed by the formulation of the optimisation problem and methods to determine its solution. The chapter concludes with the application of the proposed algorithm to a realistic flight control problem, followed by a section that summarises the chapter.
4.2 Background on Real-Time Systems and Scheduling Theory
Real-time systems are defined as those systems in which the functionality depends not only on the logical result of the underlying computation, but also on the time at which the results are produced. Control algorithms implemented in computational hardware are examples of such systems. The underlying computation is normally carried out by several sequential computer programs, which are defined as real-time tasks. The fundamental components of a real-time system are the computational tasks and the scheduling algorithm that allocates hardware resources, such as CPU time, to these tasks and guarantees their on-time completion.
4.2.1 Real-Time Tasks
Computations occurring in a real-time system that have timing constraints are called real-time tasks. These tasks must finish their execution before a certain time called the deadline. For our purpose, we define real-time tasks as follows.

Definition 4.1 Real-time computational tasks are denoted by T. These tasks are essentially implementations of discrete-time linear time-invariant (LTI) controllers, achieved by multiplying a matrix with a vector as shown:

\[ \begin{pmatrix} x_{k+1} \\ y_k \end{pmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{pmatrix} x_k \\ u_k \end{pmatrix} \qquad (4.1) \]

where x ∈ R^{ns}, u ∈ R^{nu} and y ∈ R^{ny} are the states, input and output of the linear system.

Real-time tasks are invoked by each occurrence of a particular event. An event is a stimulus generated by a process that is either external (e.g., interrupts from a device) or internal to the system (e.g., clock ticks). Tasks are periodic if the time interval between two successive invocations is constant. Tasks are sporadic if they are invoked repeatedly with some maximum frequency; the time interval between successive invocations of a sporadic task is therefore of some minimal length. Tasks with irregular periodicities are defined as aperiodic tasks; they may or may not have a minimum periodicity as sporadic tasks do. We assume control algorithms, implemented in a digital computer, to be periodic tasks invoked at a fixed rate defined by their sampling interval.

4.2.1.1 Periodicity of LTI Controllers
Definition 4.2 Define the periodicity of task T to be the time interval between two consecutive invocations of the task.

Since the tasks defined in eqn.(4.1) are discrete-time LTI control algorithms, their execution is invoked every h seconds, the sampling time of the controller. Therefore, the sampling time of the LTI controller is the periodicity of the task T.
4.2.1.2 Computational Time of LTI Controllers
Definition 4.3 The computational overhead of task T is denoted by α(T). We define α(T) to be the FLOP count of the computation performed by T. A FLOP is defined as one addition, subtraction, multiplication or division of two floating-point numbers. For a matrix M ∈ R^{m×n} and a vector v ∈ R^n, the FLOP count for the computation Mv is 2mn.

Therefore, if T is defined as in eqn.(4.1), then α(T) is given by

\[ \alpha(T) = 2(n_s + n_y)(n_s + n_u) \qquad (4.2) \]

where ns, nu and ny are the dimensions of the state, input and output of the discrete-time LTI system implemented by computational task T.

Note 4.1 Measuring computational time by FLOP count is not very accurate, especially for algorithms implemented on modern computers. The speed of computation is greatly affected by factors such as locality of the code and data segments and cache boundaries. However, for our analysis, we assume that the computational time of an algorithm is solely determined by its FLOP count.

4.2.2 Processor Utilisation Factor
For a task set {Ti}, with computational times α(Ti)/ν and periodicities hi, to be schedulable by any scheduling algorithm, the processor utilisation factor has to be less than one. The processor utilisation factor, as defined in [LL73], is the fraction of processor time spent in the execution of the task set. In other words, the utilisation factor equals one minus the fraction of idle processor time. The variable ν represents the number of FLOPs/sec of the processor.

Definition 4.4 For a set of tasks T1, T2, ..., Tn, with periodicities h1, h2, ..., hn respectively, the processor utilisation factor is given by

\[ U = \frac{1}{\nu} \sum_{i=1}^{n} \frac{\alpha(T_i)}{h_i} \qquad (4.3) \]

where ν is the number of FLOPs/sec of the processor used to compute the tasks Ti.
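As a concrete sketch of eqns.(4.2)-(4.3), the FLOP counts α(Ti) and the utilisation U can be computed directly; the task dimensions and the processor speed ν below are hypothetical examples, not values from this thesis.

```python
# Sketch of eqns (4.2)-(4.3): FLOP counts and processor utilisation.
# Task dimensions and the processor speed nu_proc are hypothetical.

def flops(ns, nu, ny):
    """alpha(T) = 2(ns + ny)(ns + nu) FLOPs per invocation, eqn (4.2)."""
    return 2 * (ns + ny) * (ns + nu)

def utilisation(tasks, nu_proc):
    """U = (1/nu) * sum_i alpha(T_i)/h_i, eqn (4.3).
    tasks: list of (ns, nu, ny, h) tuples, with h in seconds."""
    return sum(flops(ns, nu, ny) / h for (ns, nu, ny, h) in tasks) / nu_proc

# Two hypothetical controller tasks: (states, inputs, outputs, period)
tasks = [(4, 1, 1, 0.01), (2, 1, 1, 0.04)]
nu_proc = 1.0e6            # hypothetical processor speed, FLOPs/sec

U = utilisation(tasks, nu_proc)
print(f"U = {U:.6f}")      # fraction of processor time used by the task set
```

Note how the slower task (period 0.04 s) contributes far less to U than the faster one, which is the effect the multi-rate decomposition later in this chapter exploits.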
4.2.3 Task Scheduling
Guaranteeing on-time completion of real-time tasks is of great importance, since failure to meet task deadlines can lead to severe consequences. Several task-scheduling paradigms exist in the real-time scheduling literature [RS94b], but we only consider the two classical uniprocessor scheduling algorithms developed by Liu and Layland.

4.2.3.1 Classical Uniprocessor Scheduling Algorithms
In 1973, Liu and Layland proposed in their seminal paper [LL73] two optimal priority-based algorithms for scheduling real-time tasks on a single processor.
Their first algorithm, called the Rate Monotonic (RM) algorithm, assigns priorities based on the periodicities of the tasks: tasks with smaller periodicities are given higher priority. They showed that this scheme is optimal among all scheduling algorithms based on static priority assignment. It is the most popular scheme used in real-time systems, because the priority of a task is determined only once and does not have to be re-evaluated. The RM algorithm is applicable to periodic tasks. Since the priorities of the tasks are computed offline, such scheduling is also called static scheduling. Clearly, this approach cannot be used for tasks that change their execution characteristics over time.
Their second algorithm is called Earliest-Deadline-First (EDF), which sets the priorities of the tasks based on their deadlines: a task with a closer deadline is given higher priority. This scheduling scheme can be applied to periodic, sporadic and aperiodic tasks. Since the priority of a task may change over time under this scheme, EDF scheduling is also called dynamic scheduling. In contrast to the RM algorithm, the EDF algorithm is less popular because of the associated run-time overhead required to determine the priorities of the tasks.
The advantage of either of these two scheduling algorithms is that, for periodic tasks, there exist schedulability bounds on resource utilisation by the tasks. For the RM algorithm, a set of n tasks can be scheduled to meet its execution deadlines on a single processor provided the utilisation is no greater than n(2^{1/n} − 1). For large n, the utilisation bound is

\[ \lim_{n \to \infty} n(2^{1/n} - 1) = \ln(2) \approx 70\% \]

This bound on processor utilisation is obtained by assuming worst-case execution times of the tasks. Better bounds, based on a more accurate characterisation of the RM algorithm, can be found in [LSD89]. If the periods of the tasks are harmonics of the smallest period, the bound is 1.0. In the case of EDF, the bound is always 1.0. Therefore, the following are true.

Lemma 4.1 A task set {Ti}, with computational times {α(Ti)/ν} and periodicities {hi}, i = {1, 2, ..., n}, is schedulable by the rate-monotonic static scheduling algorithm if the processor utilisation U is not more than 70%, i.e.,

\[ U = \frac{1}{\nu} \sum_{i=1}^{n} \frac{\alpha(T_i)}{h_i} \le \ln(2) \]

Furthermore, if the periodicities {hi} are harmonics of the smallest, then the task set is schedulable by the same algorithm if the processor utilisation is not more than 100%, i.e.,

\[ U = \frac{1}{\nu h_0} \sum_{i=1}^{n} \frac{\alpha(T_i)}{\lambda_i} \le 1 \]

where hi = λi h0, λi ∈ Z^+, i = {1, 2, ..., n} and λ1 = 1, i.e., h1 = h0.

Lemma 4.2 A task set {Ti}, with computational times {α(Ti)/ν} and periodicities {hi}, i = {1, 2, ..., n}, is schedulable by the earliest-deadline-first scheduling algorithm if the processor utilisation U is not more than 100%, i.e.,

\[ U = \frac{1}{\nu} \sum_{i=1}^{n} \frac{\alpha(T_i)}{h_i} \le 1 \]

4.2.4 Scheduling Algorithms and Real-Time Control Systems
Real-time control systems, such as avionics control, operate under stringent reliability requirements. These systems involve real-time tasks that must complete their execution within their deadlines; failure to satisfy their timing constraints can result in catastrophic consequences. An offline analysis is conducted to ensure that all the tasks meet their deadlines. These analyses are subject to certain assumptions on the workload and the environment.
In such cases, it is assumed that the environment with which the real-time system interacts is well-defined and deterministic. Such real-time systems are therefore static systems, and the deadlines can be guaranteed a priori, since the characteristics of the tasks are known a priori. In these environments, it is assumed that the scheduling algorithm operates in a resource-sufficient environment and that the processor is not overloaded.
In real-time systems such as the inner-loop controller of an unstable plant, the tasks do not change their execution characteristics over time, and tasks are neither added to nor deleted from the task list. In such cases, a table of the execution order of the tasks can be determined offline. To optimise the computational resources, the EDF algorithm can be used to schedule the tasks. If the tasks have periodicities that are harmonics of the smallest, then the RM algorithm can also be used to determine the table of execution order.
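The RM and EDF utilisation tests above are easy to mechanise. The sketch below (with a hypothetical set of per-task utilisations) checks the Liu-Layland RM bound n(2^{1/n} − 1) and the unit EDF bound:

```python
import math

def rm_schedulable(utils):
    """Sufficient RM test: total utilisation <= n(2^(1/n) - 1) (Liu & Layland)."""
    n = len(utils)
    return sum(utils) <= n * (2.0 ** (1.0 / n) - 1.0)

def edf_schedulable(utils):
    """Necessary and sufficient EDF test on a uniprocessor: utilisation <= 1."""
    return sum(utils) <= 1.0

# Hypothetical per-task utilisations alpha(T_i)/(nu * h_i)
utils = [0.25, 0.20, 0.15]
print(rm_schedulable(utils), edf_schedulable(utils))

# For large n the RM bound tends to ln(2), roughly 69.3%
assert abs(100 * (2.0 ** (1.0 / 100) - 1.0) - math.log(2)) < 3e-3
```

A task set with total utilisation between n(2^{1/n} − 1) and 1 (e.g. 0.9 for three tasks) fails the sufficient RM test but still passes the EDF test, which is why EDF is adopted later in this chapter.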
4.3 Background in Robust Control
In this section we define nominal performance, robust stability and robust performance of a closed-loop system. The material presented in this section is available in more detail in refs. [ZDG96], [DFT92] and [BDG+94].

4.3.1 Linear Fractional Transformations
Let P be a matrix partitioned as

\[ P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \in \mathbb{R}^{(m_1+m_2) \times (p_1+p_2)} \]

Then the linear fractional transformations (LFTs) are defined as the following maps (conformal representations):

\[ F_l(P, \bullet) : \mathbb{R}^{p_2 \times m_2} \longrightarrow \mathbb{R}^{m_1 \times p_1}, \qquad F_u(P, \bullet) : \mathbb{R}^{p_1 \times m_1} \longrightarrow \mathbb{R}^{m_2 \times p_2} \]

[Figure 4.1: Plant P with exogenous input w, control input u, objective output z and measured output y.]

Definition 4.5 Define the lower fractional transformation as

\[ F_l(P, \Delta_l) := P_{11} + P_{12} \Delta_l (I - P_{22} \Delta_l)^{-1} P_{21} \qquad (4.4) \]

[Figure 4.2: Lower linear fractional transformation.]

Definition 4.6 Define the upper fractional transformation as

\[ F_u(P, \Delta_u) := P_{22} + P_{21} \Delta_u (I - P_{11} \Delta_u)^{-1} P_{12} \qquad (4.5) \]

Clearly, the existence of the inverses in the definitions is necessary for the LFTs to be well defined.

4.3.2 Linear Feedback Control
It is possible to influence the behaviour of a dynamical system through its control input. By using the information in the measured output, the control input can be manipulated to give the system a desired behaviour. This is the concept of dynamic output feedback control: the measured output is fed back to the control system, which uses this information to define a control function, and the control function is then used as the control input. Figure 4.4 illustrates this concept, where P is the plant and K is the control system.

[Figure 4.3: Upper linear fractional transformation.]

[Figure 4.4: Output feedback and the LFT Fl(P, K).]

4.3.3 Structured Singular Value
Consider the interconnected linear system shown in Fig. 4.5. The system ∆ is a block-diagonal perturbation that models the uncertainty, due to modelling errors, parameter variation, actuator errors, sensor noise, etc., in describing the system P. The exogenous input w = (w1 w2)^T and the objective output z = (z1 z2)^T corresponding to this structure are defined as:

w1 the perturbation input corresponding to system perturbations,
w2 the performance input corresponding to external disturbances,
z1 the perturbation output corresponding to system perturbations,
z2 the performance output corresponding to the output that has to be controlled.

[Figure 4.5: The general problem with structured uncertainty.]

The perturbation matrix ∆ belongs to a set of block-diagonal matrices Δ̃. There are two types of blocks in the matrix ∆: repeated scalar blocks and full blocks. Two non-negative integers s and f represent the number of repeated scalar blocks and the number of full blocks, respectively. To bookkeep their dimensions, positive integers r1, ..., rs and m1, ..., mf are used. The i-th repeated scalar block has dimension ri × ri and the j-th full block has dimension mj × mj. Formally, Δ̃ ⊂ C^{n×n} can be defined as

\[ \tilde{\Delta} = \{\mathrm{diag}[\delta_1 I_{r_1}, \ldots, \delta_s I_{r_s}, \Delta_1, \ldots, \Delta_f] : \delta_i \in \mathbb{C},\ \Delta_j \in \mathbb{C}^{m_j \times m_j}\} \qquad (4.6) \]

For consistency among all the dimensions,

\[ \sum_{i=1}^{s} r_i + \sum_{j=1}^{f} m_j = n \]

The blocks δi I_{ri} denote either repeated real blocks (which correspond to real parameter variation) or complex scalar blocks (which can be used to approximate the repeated real scalar blocks by considering real parameters as complex), and the blocks ∆j denote full complex blocks (which can be used to represent unmodelled dynamics or to define a performance block to measure robust performance).

The bounded subsets of Δ̃ are denoted by

\[ B\tilde{\Delta} = \{\Delta \in \tilde{\Delta} : \bar{\sigma}(\Delta) \le 1\}, \qquad B^{\circ}\tilde{\Delta} = \{\Delta \in \tilde{\Delta} : \bar{\sigma}(\Delta) < 1\} \]

where the symbol "◦" denotes the open ball.

Definition 4.7 For a given G = Fl(P, K) and Δ̃, the structured singular value μ_Δ̃(G) is defined by

\[ \mu_{\tilde{\Delta}}(G) = \frac{1}{\min\{\bar{\sigma}(\Delta) : \Delta \in \tilde{\Delta},\ \det(I - G\Delta) = 0\}} \qquad (4.7) \]

unless no ∆ ∈ Δ̃ makes I − G∆ singular, in which case μ_Δ̃(G) := 0.

4.3.4 Robust Stability and Performance
Let G represent the transfer matrix of the closed-loop plant, G := Fl(P, K), and refer to the partitioning of G as

\[ \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} \qquad (4.8) \]

4.3.4.1 Robust Stability
For a given K, the system Fu(G, ∆) with G = Fl(P, K) is well-posed and stable for all ∆ ∈ BΔ̃ if and only if

\[ \sup_{\omega \in \mathbb{R}} \mu_{\tilde{\Delta}}[G_{11}(j\omega)] < 1 \qquad (4.9) \]
4.3.4.2 Robust Performance
To guarantee that performance is achieved for all perturbed plants, a feedback loop is closed between the performance error z2 and the exogenous input w2 with transfer matrix ∆0, the so-called performance perturbation. Defining Δ̃a as the augmented perturbation set

\[ \tilde{\Delta}_a = \{\mathrm{diag}[\Delta_0, \Delta] : \Delta_0 \in \mathbb{C}^{n_{w_2} \times n_{z_2}},\ \Delta \in \tilde{\Delta}\} \]

where n_{z2} and n_{w2} are the dimensions of z2 and w2 respectively, robust performance is defined as follows.

For a given K, the system Fu(G, ∆) with G = Fl(P, K) is well-posed, stable and satisfies performance, ||Fu(G, ∆)||∞ < 1, for all ∆ ∈ BΔ̃a if and only if

\[ \sup_{\omega \in \mathbb{R}} \mu_{\tilde{\Delta}_a}[G(j\omega)] < 1 \qquad (4.10) \]

This implies that if

\[ \sup_{\omega \in \mathbb{R}} \mu_{\tilde{\Delta}_a}[G(j\omega)] = \gamma_0 > 1 \]

then there is a perturbation ∆, with σ̄(∆) = 1/γ0, for which performance is not satisfied, i.e., for which ||Fu(G, ∆)||∞ = γ0 > 1.

To consider feedback perturbations to P which are themselves dynamical systems with the block-diagonal structure of the set Δ̃, define N(Δ̃a) as

\[ N(\tilde{\Delta}_a) = \{\Delta(\cdot) \in \mathcal{RH}_\infty : \Delta(s_0) \in \tilde{\Delta}_a,\ \forall s_0 \in \mathbb{C}_+\} \]

The robustness theorems are easily generalised by replacing BΔ̃a by N(Δ̃a).
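For constant matrices, the LFT formulas (4.4)-(4.5) can be evaluated directly. The sketch below uses arbitrary example blocks (not from this thesis) to compute Fl(P, K) and Fu(P, ∆):

```python
import numpy as np

def lower_lft(P11, P12, P21, P22, K):
    """F_l(P, K) = P11 + P12 K (I - P22 K)^{-1} P21, eqn (4.4)."""
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

def upper_lft(P11, P12, P21, P22, D):
    """F_u(P, D) = P22 + P21 D (I - P11 D)^{-1} P12, eqn (4.5)."""
    I = np.eye(P11.shape[0])
    return P22 + P21 @ D @ np.linalg.solve(I - P11 @ D, P12)

# Arbitrary 1x1 blocks for illustration
P11 = np.array([[1.0]]); P12 = np.array([[2.0]])
P21 = np.array([[3.0]]); P22 = np.array([[0.5]])
K   = np.array([[0.4]])

G = lower_lft(P11, P12, P21, P22, K)
# 1 + 2*0.4*(1 - 0.5*0.4)^{-1}*3 = 4
print(G)
```

Note the well-posedness requirement of the definitions shows up here as the solvability of (I − P22 K): if that matrix is singular, the LFT does not exist.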
4.4 Lifting Operator, L_M
In section 3.2.3, page 50, we introduced the lifting operator L_M. Recall that for a sequence v(k),

\[ v(k) = v(0), v(1), v(2), \ldots \]

the sequence \underline{v}(k),

\[ \underline{v}(k) = \begin{pmatrix} v(0) \\ v(1) \\ \vdots \\ v(M-1) \end{pmatrix}, \begin{pmatrix} v(M) \\ v(M+1) \\ \vdots \\ v(2M-1) \end{pmatrix}, \ldots \]

are related as \underline{v}(k) = L_M v(k). In this section, we present some of the properties of the operator L_M.

Lemma 4.3 For two positive integers m, n ∈ Z^+,

\[ L_{(mn)} = L_m L_n = L_n L_m \qquad (4.11) \]

Proof Assume m = 2, n = 3. Then

\[ L_2 v(k) = \begin{pmatrix} v(0) \\ v(1) \end{pmatrix}, \begin{pmatrix} v(2) \\ v(3) \end{pmatrix}, \begin{pmatrix} v(4) \\ v(5) \end{pmatrix}, \ldots \]

\[ L_3 L_2 v(k) = \begin{pmatrix} v(0) \\ v(1) \\ v(2) \\ v(3) \\ v(4) \\ v(5) \end{pmatrix}, \ldots = L_6 v(k) \]

Similarly,

\[ L_3 v(k) = \begin{pmatrix} v(0) \\ v(1) \\ v(2) \end{pmatrix}, \begin{pmatrix} v(3) \\ v(4) \\ v(5) \end{pmatrix}, \ldots \]

\[ L_2 L_3 v(k) = \begin{pmatrix} v(0) \\ v(1) \\ v(2) \\ v(3) \\ v(4) \\ v(5) \end{pmatrix}, \ldots = L_6 v(k) \]

Lemma 4.4 For two positive, non-zero integers m, n ∈ Z^+,

\[ L_{(mn)}^{-1} = L_m^{-1} L_n^{-1} = L_n^{-1} L_m^{-1} \qquad (4.12) \]

Proof From eqn.(4.11), we have

\[ L_{(mn)} = L_m L_n \;\Rightarrow\; I = L_{(mn)}^{-1} L_m L_n \;\Rightarrow\; L_{(mn)}^{-1} = L_n^{-1} L_m^{-1} \]

Similarly,

\[ L_{(mn)} = L_n L_m \;\Rightarrow\; I = L_{(mn)}^{-1} L_n L_m \;\Rightarrow\; L_{(mn)}^{-1} = L_m^{-1} L_n^{-1} \]
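The composition property of Lemma 4.3 can be checked on a finite sequence; the sketch below implements lifting as simple list blocking (a finite-horizon stand-in for the operator):

```python
def lift(seq, m):
    """L_m: group a sequence into blocks of length m (lifting)."""
    assert len(seq) % m == 0
    return [tuple(seq[k:k + m]) for k in range(0, len(seq), m)]

def flatten(blocks):
    """L_m^{-1}: undo the blocking."""
    return [x for b in blocks for x in b]

v = list(range(12))

# L_3 (L_2 v): lift by 2, then lift the resulting block sequence by 3 ...
l3l2 = lift(lift(v, 2), 3)
# ... and compare with L_6 v after flattening the nested blocks
assert [tuple(flatten(b)) for b in l3l2] == lift(v, 6)   # L_3 L_2 = L_6
assert flatten(lift(v, 4)) == v                          # L_m^{-1} L_m = I
print("lifting identities verified")
```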
4.5 Problem Definition
Consider the closed-loop system shown in Fig. 4.4, where both the plant P and the controller K are continuous-time systems. Suppose that, for digital implementation, the controller K is transformed into a discrete-time system K̂ with sampling interval h.
From a control-theoretic point of view, the sampling interval h has to be small enough for the closed-loop system to be robustly stable. Large values of h will cause the controller to perform poorly and may even render the closed-loop system unstable. Therefore, from the point of view of the control system, robust stability imposes an upper bound on the sampling interval h.
When control algorithms are implemented in digital computers, the smallest value of the sampling interval h is restricted by the base clock of the hardware. In addition, h is restricted by the computational complexity of the control algorithm: a smaller h demands a shorter execution time, so that the controller output is available every h seconds. Therefore, to execute a more complex algorithm in a shorter time, computational hardware with more FLOPs/sec is required, and such hardware is typically more expensive. Thus, from the point of view of economical digital implementation, larger values of h are desired. We therefore observe that there are conflicting requirements on h from control theory and from real-time task scheduling theory.
In order to transform the computational task associated with the digital implementation of K into an operationally rational real-time task, we decompose K into modal form, with K1, K2, ..., Kn as the modal sub-systems of K. If the controller is implemented as a summation of its modal sub-systems, i.e. K = K1 + K2 + ... + Kn, then it is possible to discretise each Ki, i = 1, 2, ..., n, with a different sampling interval. Let hi be the sampling interval of the modal sub-system Ki. The computational task associated with the digital implementation of K is thereby decomposed into several sub-tasks that execute at different rates. It can be shown that the implementation of the controller as a multi-rate system requires less computational resources than the single-rate implementation.
Lemma 4.5 Let Ti be the computational task that implements Ki with periodicity hi. If hi = h0 ∀i ∈ {1, 2, ..., n}, then the controller K is implemented as a single-rate system with sampling rate 1/h0. The CPU utilisation is

\[ U_{\text{single-rate}} = \frac{1}{\nu} \sum_{i=1}^{n} \frac{\alpha(T_i)}{h_0} = \frac{1}{\nu h_0} \sum_{i=1}^{n} \alpha(T_i) \qquad (4.13) \]

In the case where hi ≥ h0 ∀i ∈ S ⊆ {1, 2, ..., n}, i.e. some of the modal systems have a larger periodicity than the single-rate controller, the CPU utilisation is

\[ U_{\text{multi-rate}} = \frac{1}{\nu} \sum_{i=1}^{n} \frac{\alpha(T_i)}{h_i} \qquad (4.14) \]

Since hi ≥ h0 ∀i ∈ S ⊆ {1, 2, ..., n}, it is obvious that U_multi-rate ≤ U_single-rate.

If the set of tasks Ti, associated with the modal sub-systems of the controller K, is implemented on a single processor, a scheduling algorithm has to be adopted that allocates CPU time to each task Ti and guarantees their completion before their respective deadlines. Here we assume that the deadlines of execution are equal to the periodicities of the individual tasks. Since we are interested in determining the minimum computation required to implement the control algorithm K, we would like to adopt a scheduling algorithm with minimal CPU idle time. The EDF scheduling algorithm, which allocates CPU time to the task with the earliest deadline, achieves 100% processor utilisation, i.e. there is no CPU idle time. Hence, we will use this algorithm to schedule the tasks Ti.

For a set of tasks to be schedulable, the utilisation of the processor has to be less than or equal to unity, i.e.

\[ \frac{1}{\nu} \sum_{i=1}^{n} \frac{\alpha(T_i)}{h_i} \le 1 \]

The required FLOPs/sec to schedule the tasks Ti on a uniprocessor with zero CPU idle time can then be written as

\[ \nu_{\text{req}} = \sum_{i=1}^{n} \frac{\alpha(T_i)}{h_i} \qquad (4.15) \]

From eqn.(4.15) we observe that larger values of hi result in lower values of νreq. But values of hi that are too large will cause the closed-loop system to become unstable. Therefore, it is of interest to determine hi, i = {1, 2, ..., n}, for which νreq is minimised while the closed-loop system achieves robust performance.
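The saving promised by Lemma 4.5 and eqn.(4.15) can be illustrated numerically; the modal sub-system dimensions and rate multipliers λi below are hypothetical examples:

```python
def alpha(ns, nu, ny):
    """FLOPs per invocation of one modal task, eqn (4.2)."""
    return 2 * (ns + ny) * (ns + nu)

def nu_req(alphas, periods):
    """nu_req = sum_i alpha(T_i)/h_i, eqn (4.15): FLOPs/sec for zero idle time."""
    return sum(a / h for a, h in zip(alphas, periods))

h0 = 0.01                                        # base sampling interval (s)
alphas = [alpha(2, 1, 1), alpha(2, 1, 1), alpha(4, 1, 1)]

single = nu_req(alphas, [h0] * 3)                # all tasks at h0, eqn (4.13)
multi  = nu_req(alphas, [h0, 2 * h0, 4 * h0])    # hypothetical lambda = (1, 2, 4)

print(single, multi)
assert multi <= single                           # U_multi-rate <= U_single-rate
```

Here the slower modal sub-systems cut the required FLOPs/sec by more than half, which is precisely the quantity the optimisation of the next section tries to minimise.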
4.6 Formulation of the Optimisation Problem
Formally, the problem of interest can be defined as the optimisation problem

\[ \min_{\vec{h} \in \mathbb{R}^n} \nu_{\text{req}} \qquad (4.16) \]

subject to

\[ g(P, \bar{K}, \vec{h}) < 1 \qquad (4.17) \]

where

\[ \vec{h} \triangleq \begin{pmatrix} h_1 \\ h_2 \\ \vdots \\ h_n \end{pmatrix} \qquad (4.18) \]

\[ \bar{K} = \sum_{i=1}^{n} S_i\, \phi(K_i, h_i)\, H_i \qquad (4.19) \]

and g(P, K̄, h⃗) is a nonlinear constraint on h⃗ that guarantees robust performance for the closed-loop system.

In eqn.(4.19), Si and Hi represent the sampler and hold for the sampling interval hi. The function φ(Ki, hi), as defined on page 61, denotes the transformation of the continuous-time system Ki to a discrete-time system via the step-invariant transformation. There are several methods to transform a continuous-time system to a discrete-time system, and φ(·,·) can be defined appropriately. Different definitions of φ(·,·) will yield different values for the ∞-norm defined in eqn.(4.17). In the research presented in this chapter, we only consider the step-invariant transformation.

The optimisation problem defined by equations (4.16), (4.17), (4.18) and (4.19) is a nonlinear programming problem with parameters hi, i = 1, 2, ..., n. Let us represent this optimisation problem as O.
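The step-invariant (zero-order-hold) transformation φ(K, h) used in eqn.(4.19) can be sketched for a state-space system x' = Ax + Bu via the matrix exponential of an augmented matrix. The example system below is hypothetical, and the exponential is computed with a plain Taylor series, which is adequate for small ||A||h:

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small norms)."""
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def step_invariant(A, B, h):
    """phi(K, h): ZOH discretisation.  exp([[A, B], [0, 0]] h) packs Ad and Bd."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    E = expm_taylor(M * h)
    return E[:n, :n], E[:n, n:]        # Ad = e^{Ah}, Bd = (int_0^h e^{As} ds) B

# Hypothetical scalar example: x' = -x + u, h = 0.1
Ad, Bd = step_invariant(np.array([[-1.0]]), np.array([[1.0]]), 0.1)
# Analytic values: Ad = e^{-0.1}, Bd = 1 - e^{-0.1}
assert abs(Ad[0, 0] - np.exp(-0.1)) < 1e-12
assert abs(Bd[0, 0] - (1 - np.exp(-0.1))) < 1e-12
```

In practice a robust routine such as scipy's matrix exponential would replace the Taylor series; the augmented-matrix construction is the standard way to obtain both Ad and Bd in one call.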
4.7 Solving the Optimisation Problem
The closed-loop system with controller K̄ is a multi-rate sampled-data system. The closed-loop system is time-varying, and conventional linear system analysis tools cannot be used directly. Evaluating the constraint function g(P, K̄, h⃗) is not trivial for multi-rate sampled-data systems. Therefore, the optimisation problem as posed cannot be solved easily, and certain modifications are necessary.
To make the closed-loop system amenable to analysis, the sampled-data system is first transformed to a multi-rate discrete-time system by discretising the plant with an arbitrarily small sample time [KA92]. Lifting techniques are then used to transform the multi-rate discrete-time system into a linear time-invariant discrete-time system.
4.7.1 Solution to the Nonlinear Programming Problem
To be able to lift a multi-rate system to a single-rate system, it is necessary that the different sampling intervals in the system be integer multiples of some base interval, say h0. This restricts hi to take on values that are integer multiples of h0, where h0 can be arbitrarily small. This leads to a change of variables in the optimisation problem O.
Assumption 4.1 Let hi = λi h0, where λi ∈ Z^+, i = {1, 2, ..., n}, and h0 is the base interval. The new parameters of optimisation are therefore λi, i = {1, 2, ..., n}.

Note 4.2 The periodic system then has periodicity M, where M is the least common multiple of λi, i = {1, 2, ..., n}.

4.7.1.1 Lifted Closed-Loop System
When the modal sub-systems of the controller K are discretised using different sampling intervals, the closed-loop system in Fig. 4.4 is transformed into a multi-rate sampled-data system as shown in Fig. 4.6.

[Figure 4.6: Multi-rate sampled-data control system.]
The system in Fig. 4.6 can be transformed into a linear time-invariant system by first discretising P with sampling interval h0 to obtain the discrete-time system P̂, i.e.

\[ \hat{P} = \phi(P, h_0) \qquad (4.20) \]

The discrete-time plant P̂ is then lifted M times to obtain another discrete-time plant \underline{\hat{P}}, i.e.

\[ \underline{\hat{P}} = L_M \hat{P} L_M^{-1} \qquad (4.21) \]

Note that \underline{\hat{P}} has the sampling interval h0/M. Next, obtain the discrete-time modal controller sub-systems as

\[ \hat{K}_i = \phi(K_i, \lambda_i h_0) \qquad (4.22) \]

The closed-loop system with the lifted plant \underline{\hat{P}} and the multi-rate modal controller sub-systems K̂i is shown in Fig. 4.7. The elements Si and Hi represent the sampler and hold circuits associated with the sampling interval hi, i = {0, 1, 2, ..., n}. The system Fi represents the anti-aliasing filter for the sampler Si, i = {1, 2, ..., n}. The different line types of the lines connecting the blocks distinguish between continuous-time signals and discrete-time signals sampled at intervals hi.

[Figure 4.7: Lifted plant-controller interconnection.]
From Fig. 4.7, the transfer function from y(k) to u(k), denoted by K̂, can be written as

\[ \hat{K} = \sum_{i=1}^{n} L_M\, S_0 H_i\, \hat{K}_i\, S_i F_i H_0\, L_M^{-1} \qquad (4.23) \]

Assuming (pp. 211-213, [CF95])

\[ S_0 H_i = L_{\lambda_i}^{-1} \begin{pmatrix} I \\ I \\ \vdots \\ I \end{pmatrix}_{\lambda_i\ \text{times}} \qquad (4.24) \]

\[ S_i = S_i H_0 S_0 \qquad (4.25) \]

\[ S_i H_0 = [\,I\ \overbrace{0\ \ldots\ 0}^{(\lambda_i - 1)\ \text{times}}\,] L_{\lambda_i} = [I\ 0\ \ldots\ 0]_{\lambda_i} L_{\lambda_i} \qquad (4.26) \]

and replacing L_M = L_{M/λi} L_{λi} and L_M^{-1} = L_{λi}^{-1} L_{M/λi}^{-1} from eqn.(4.11) and eqn.(4.12), the transfer function in eqn.(4.23) can be written as

\[ \hat{K} = \sum_{i=1}^{n} L_M L_{\lambda_i}^{-1} \begin{pmatrix} I \\ \vdots \\ I \end{pmatrix}_{\lambda_i} \hat{K}_i\, [I\ 0\ \ldots\ 0]_{\lambda_i}\, L_{\lambda_i} S_0 F_i H_0\, L_M^{-1} = \sum_{i=1}^{n} L_{M/\lambda_i} \begin{pmatrix} I \\ \vdots \\ I \end{pmatrix}_{\lambda_i} \hat{K}_i\, [I\ 0\ \ldots\ 0]_{\lambda_i}\, L_{\lambda_i} S_0 F_i H_0 L_{\lambda_i}^{-1}\, L_{M/\lambda_i}^{-1} \]

The system S0 Fi H0 is the discrete-time system obtained by discretising Fi with sampling interval h0, i.e.

\[ \hat{F}_i = \phi(F_i, h_0) \qquad (4.27) \]

and the system L_{λi} S0 Fi H0 L_{λi}^{-1} = L_{λi} F̂i L_{λi}^{-1} is the lifted discrete-time system \underline{\hat{F}}_i as defined by eqn.(3.6) on page 51, i.e.

\[ \underline{\hat{F}}_i = L_{\lambda_i} \hat{F}_i L_{\lambda_i}^{-1} \qquad (4.28) \]

The transfer function then becomes

\[ \hat{K} = \sum_{i=1}^{n} L_{M/\lambda_i} \begin{pmatrix} I \\ \vdots \\ I \end{pmatrix}_{\lambda_i} \hat{K}_i\, [I\ 0\ \ldots\ 0]_{\lambda_i}\, \underline{\hat{F}}_i\, L_{M/\lambda_i}^{-1} \]

If

\[ \hat{\Gamma}_i = \begin{pmatrix} I \\ \vdots \\ I \end{pmatrix}_{\lambda_i} \hat{K}_i\, [I\ 0\ \ldots\ 0]_{\lambda_i}\, \underline{\hat{F}}_i \qquad (4.29) \]

then

\[ \hat{K} = \sum_{i=1}^{n} L_{M/\lambda_i} \hat{\Gamma}_i L_{M/\lambda_i}^{-1} = \sum_{i=1}^{n} \underline{\hat{\Gamma}}_i \]

Therefore, in the lifted input-output space, the controller dynamics with multi-rate modal sub-systems can be written as

\[ \hat{K} = \sum_{i=1}^{n} \underline{\hat{\Gamma}}_i \qquad (4.30) \]

where \underline{\hat{\Gamma}}_i is given by eqn.(4.29). With reference to Fig. 4.7, the closed-loop system Ĝ in the lifted input-output space can therefore be given as

\[ \hat{G} = F_l(\underline{\hat{P}}, \hat{K}) \qquad (4.31) \]

4.7.1.2 Modification of the Optimisation Problem
The optimisation problem O is therefore transformed to the following problem:

\[ \min_{\vec{\lambda} \in \mathbb{Z}^{+n}} \nu_{\text{req}} = \frac{1}{h_0} \sum_{i=1}^{n} \frac{\alpha(T_i)}{\lambda_i} \qquad (4.32) \]

subject to

\[ \sup_{\omega \in \mathbb{R}} \mu_{\tilde{\Delta}_a}[\hat{G}(e^{j\omega h_0/M})] < 1 \qquad (4.33) \]

where

\[ \vec{\lambda} \triangleq \begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n \end{pmatrix} \qquad (4.34) \]

and μ_Δ̃a[Ĝ(e^{jωh0/M})] is the structured singular value of the lifted discrete-time system Ĝ(e^{jωh0/M}).

4.7.1.3 Practical Difficulties
The nonlinear programming problem posed by eqns.(4.32, 4.33) can be solved in theory. However, state-of-the-art nonlinear programming software available on the market uses techniques based on the shooting method to search for an optimal direction. A candidate solution for λ⃗ could be large enough to destabilise the closed-loop system. Therefore, implementing the optimisation problem as a computer program may be challenging.
Since lifting techniques are used to transform the multi-rate system to a single-rate system, a candidate solution for λ⃗ can cause the plant model to be lifted by a large integer. This may allow numerical errors to affect the computation of the structured singular value, which is required to determine the constraints imposed by robust performance.
At the same time, the parameter λ⃗ takes on integer values; hence the optimisation is essentially a nonlinear integer programming problem, which can only be approximated by a standard nonlinear programming problem.

4.7.2 Solution by a Search Algorithm
Instead of solving the optimisation posed by eqns.(4.32, 4.33) as a nonlinear programming problem, we obtain the solution via a search algorithm, as discussed in the following sections.

4.7.2.1 Algorithm 1: Lifted Discrete-Time System
Assuming that the modal systems are arranged in order of decreasing natural frequencies, i.e., Ki has faster modes than Kj for i < j, the search algorithm begins with N = 1 and λi = N, i = {1, 2, ..., n}, which means that all the modal systems are sampled at h0. The variable N is incremented by one until the constraint in eqn.(4.33) is violated. The largest value of N for which the constraint is not violated is assigned as the maximum value of λ1. This completes one iteration. In the second iteration, N once again starts from 1, with λi = N, i = {2, ..., n}, and the maximum value of λ2 is determined. This procedure is carried out for all the remaining modal sub-systems, and the maximum value of every λi is determined.

Note 4.3 From the cost function in eqn.(4.32), we see that the largest possible values of λi, i = 1, 2, ..., n that satisfy eqn.(4.33) will yield the minimum value of νreq.

Algorithm 4.1 The search algorithm can be formally defined as

    P̂ = φ(P, h0)
    for i = 1, 2, ..., n
        for j = 1, 2, ..., i − 1
            K̂j = φ(Kj, λj_max h0)
        end
        N = 1
        flag = 0
        while flag == 0
            for j = i, i+1, ..., n
                λj = N
                K̂j = φ(Kj, λj h0)
            end
            M = least common multiple of λ1, λ2, ..., λn
            P̂_L = L_M P̂ L_M^{−1}
            K̂ = Σ_{l=1}^{n} Γ̂_l              from eqn.(4.30)
            Ĝ = Fl(P̂_L, K̂)                   from eqn.(4.31)
            if sup_{ω∈R} μ_Δ̃a[Ĝ(e^{jωh0/M})] < 1
                N = N + 1
            else
                λi_max = N − 1
                flag = 1
            end
        end
    end
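The loop structure of Algorithm 4.1 can be sketched in code. The robust-performance test (the μ condition of eqn.(4.33)) is abstracted as a caller-supplied predicate `rp_ok(lams)`, and the surrogate predicate below is purely hypothetical, standing in for the μ computation:

```python
def search_lambdas(n, rp_ok, n_max=64):
    """Greedy search of Algorithm 4.1: fix lambda_1..lambda_{i-1} at their
    maxima, set lambda_i..lambda_n = N, and grow N until the robust-performance
    test fails; the last passing N becomes lambda_i_max."""
    lams = [1] * n
    for i in range(n):
        N = 1
        while N <= n_max and rp_ok(lams[:i] + [N] * (n - i)):
            N += 1
        lams[i] = max(N - 1, 1)     # largest N that passed (clamped to 1)
    return lams

# Purely hypothetical surrogate for the mu-test of eqn (4.33): slowing the
# faster modes (small index i) is penalised more heavily.
def rp_ok(lams):
    mu = lams[0] / 8 + lams[1] / 16 + lams[2] / 32
    return mu < 1

print(search_lambdas(3, rp_ok))     # with this surrogate: [4, 5, 5]
```

The returned multipliers are non-decreasing, reflecting the assumption that the slower modal sub-systems tolerate slower sampling.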
This algorithm relies on the lifting technique to transform a multi-rate sampled-data system to a discrete-time linear time-invariant system. Since the plant model is lifted by the least common multiple of the λi, this number could be quite large. Lifting a linear system by a large integer may cause numerical problems that affect the accuracy of the structured singular value in eqn.(4.33). Therefore, this algorithm may not be very reliable for determining the largest values of λi, especially if the states of some modal sub-system are weakly controllable or observable.

4.7.2.2 Algorithm 2: Approximated Sampled-Data System
The lifting technique is used to transform the plant P̂ and the modal systems K̂i, for which the λi have been determined, into a single-rate system. If, instead of discretising those modal systems, they are approximated by continuous-time systems, then the approximated continuous-time modal systems and the continuous-time plant can be combined into an augmented continuous-time plant, Paug. The remaining modal systems are sampled at the same rate; collectively they represent a single-rate discrete-time system, Kd. The induced norm of the sampled-data system defined by Paug and Kd can be easily computed with the MATLAB command sdhfnorm (pg. 58, [BDG+94]). The second algorithm presented makes use of this idea to determine the induced norm of the multi-rate sampled-data system.
Definition 4.8 Let Prs be the transfer function from \([w_1^T\; u^T]^T\) to \([z_1^T\; y^T]^T\) in Fig. 4.5, defined by the state-space partition

\[
\begin{bmatrix} z_1 \\ y \end{bmatrix} =
\begin{bmatrix}
A & B_1 & B_3 \\
C_1 & 0 & 0 \\
C_3 & 0 & 0
\end{bmatrix}_{rs}
\begin{bmatrix} w_1 \\ u \end{bmatrix}
\tag{4.35}
\]
Define P´rs to be Prs with duplicate y and u channels. Therefore P´rs is defined as

\[
\begin{bmatrix} z_1 \\ y \\ y \end{bmatrix} =
\begin{bmatrix}
A & B_1 & B_3 & B_3 \\
C_1 & 0 & 0 & 0 \\
C_3 & 0 & 0 & 0 \\
C_3 & 0 & 0 & 0
\end{bmatrix}_{rs}
\begin{bmatrix} w_1 \\ u_1 \\ u_2 \end{bmatrix}
\tag{4.36}
\]
The input variable u1 is the summation of the outputs of the approximated, continuous-time, modal sub-systems of the controller. The input variable u2 is the summation of the outputs of the discrete-time modal sub-systems of the controller, held constant between sampling instants.
Definition 4.9 Let Pnp be the transfer function from \([w_2^T\; u^T]^T\) to \([z_2^T\; y^T]^T\) in Fig. 4.5, defined by the state-space partition

\[
\begin{bmatrix} z_2 \\ y \end{bmatrix} =
\begin{bmatrix}
A & B_2 & B_3 \\
C_2 & 0 & 0 \\
C_3 & 0 & 0
\end{bmatrix}_{np}
\begin{bmatrix} w_2 \\ u \end{bmatrix}
\tag{4.37}
\]
Define P´np to be Pnp with duplicate y and u channels. Therefore P´np is defined as

\[
\begin{bmatrix} z_2 \\ y \\ y \end{bmatrix} =
\begin{bmatrix}
A & B_2 & B_3 & B_3 \\
C_2 & 0 & 0 & 0 \\
C_3 & 0 & 0 & 0 \\
C_3 & 0 & 0 & 0
\end{bmatrix}_{np}
\begin{bmatrix} w_2 \\ u_1 \\ u_2 \end{bmatrix}
\tag{4.38}
\]
The input variable u1 is the summation of the outputs of the approximated, continuous-time, modal sub-systems of the controller. The input variable u2 is the summation of the outputs of the discrete-time modal sub-systems of the controller, held constant between sampling instants.

The approximation of the discrete-time system by a continuous-time system can be achieved in the following ways:

\[ K(e^{sh}) \approx K\!\left(\frac{1}{1 - sh}\right) \qquad \text{(Backward difference)} \tag{4.39} \]

\[ K(e^{sh}) \approx K(1 + sh) \qquad \text{(Forward difference)} \tag{4.40} \]

\[ K(e^{sh}) \approx K\!\left(\frac{1 + sh/2}{1 - sh/2}\right) \qquad \text{(Trapezoidal or Tustin's method)} \tag{4.41} \]

\[ K(e^{sh}) \approx K(s)\,\frac{1 - sh/2}{1 + sh/2} \qquad \text{(Continuous time with delay } h/2\text{)} \tag{4.42} \]
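To illustrate how well such a rational substitution tracks e^{sh} at low frequency, the sketch below compares the Tustin map of eqn.(4.41) against the exact exponential on the imaginary axis (plain Python, no control toolbox assumed; the frequencies chosen are arbitrary):

```python
import cmath

def tustin(s, h):
    """First-order rational approximation of e^{sh} (Tustin / bilinear map)."""
    return (1 + s * h / 2) / (1 - s * h / 2)

h = 0.01  # base sampling interval, as in the example of section 4.8
for omega in (1.0, 10.0, 100.0):
    s = 1j * omega
    exact = cmath.exp(s * h)
    approx = tustin(s, h)
    print(f"omega={omega:6.1f} rad/s  |error|={abs(approx - exact):.2e}")
```

The error grows roughly as (ωh)³/12, which is why all such approximations are trustworthy only well below the sampling frequency.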
We use eqn.(4.42) in the algorithm presented next.

Algorithm 4.2 The algorithm using sdhfnorm is quite similar to algorithm 4.1 and assumes that the modal systems are arranged in the order of decreasing natural frequencies, i.e., Ki has faster modes than Kj for i < j. It is defined as

    for i = 1, 2, . . . , n
        K1(s) = 0
        for j = 1, 2, . . . , i − 1
            K1(s) = K1(s) + Kj(s) (1 − λjmax s h0/2)/(1 + λjmax s h0/2)
        end
        N = 1
        flag = 0
        while flag == 0
            Kd(z) = 0
            for j = i, i + 1, . . . , n
                Kd(z) = Kd(z) + φ(Kj, N h0)(z)
            end
            Paug = Fl(Prs, K1)
            [gamu, gaml] = sdhfnorm(Paug, Kd, N h0)
            if gamu ≥ 1
                λimax = N − 1
                flag = 1
            else
                N = N + 1
            end
        end
    end
Note 4.4 Note that in algorithm 4.2 we have simplified the robust performance constraint to robust stability, since sdhfnorm only returns the induced norm. Once λirs, the value of λi for robust stability, is obtained, robust performance can be verified. If the constraint for robust performance is violated, then the same procedure can be carried out, with Pnp and λirs as upper bounds on λi, to obtain λimax which satisfies robust performance.
4.8 Example

In this section we study the effect on the closed-loop system of the multi-rate decomposition of a controller designed for the B737-100 TSRV (Transport System Research Vehicle) linear longitudinal motion model.
The natural frequencies of the controller designed in [GB01] are shown in table(3.4) on page 70. Modal decomposition of the controller yields twelve sub-systems. The values of λimax, i = {1, 2, . . . , 12}, obtained using algorithm 4.2 are

\[
\vec{\lambda}_{max} = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 13 & 20 & 20 & 20 & 20 & 113 \end{bmatrix}^T
\tag{4.43}
\]

The smallest interval h0 is chosen to be 0.01 s, which is the sampling interval of the controller implemented as a single-rate system. Therefore the sampling intervals of the different modal systems are 0.01λimax, i = {1, 2, . . . , 12}. These values of λimax decompose the single-rate controller into sub-systems that run at four different rates. The value of νreq for these values of λimax is 2.2412 × 10^4 FLOPs/sec. This translates to 224.12 FLOPs every h0 = 0.01 seconds. The single-rate implementation of the controller requires 1000 FLOPs every h0 seconds, which means the multi-rate implementation achieves a reduction of 77.59% in computational overhead (in the sense of FLOP count), which is quite significant.
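The bookkeeping behind such numbers can be sketched as follows, assuming (as the cost function of eqn.(4.32) suggests) that νreq sums each sub-system's per-update cost over its period λi h0; the per-sub-system costs `costs` below are hypothetical, not the thesis's actual FLOP counts:

```python
def flops_per_sec(costs, lams, h0):
    """Multi-rate FLOP rate: sub-system i pays costs[i] FLOPs
    every lams[i] * h0 seconds."""
    return sum(c / (lam * h0) for c, lam in zip(costs, lams))

h0 = 0.01
costs = [10.0] * 12                          # hypothetical per-update FLOP counts
lams = [1, 1, 1, 1, 1, 1, 13, 20, 20, 20, 20, 113]  # eqn.(4.43)

single = flops_per_sec(costs, [1] * 12, h0)  # everything at the base rate
multi = flops_per_sec(costs, lams, h0)
print(f"reduction: {100 * (1 - multi / single):.2f}%")
```

With the thesis's real cost vector, the same arithmetic yields the 77.59% figure quoted above.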
4.8.1 Frequency Response

The frequency response of the multi-rate system can be determined in two ways. If the multi-rate, discrete-time, modal sub-systems of the controller are approximated by their equivalent continuous-time systems with delay at the output, the closed-loop system is a continuous-time system and its frequency response can be easily determined.

If the discrete-time lifting technique is used to transform the multi-rate closed-loop sampled-data system, then the plant has to be lifted M times, where M is the least common multiple (LCM) of λimax, i = {1, 2, . . . , 12}. For this example, it is the LCM of 1, 13, 20 and 113, which is 29380. Lifting the linear model of the B737-100 vehicle that many times will cause the "A" matrix of the resulting system to be ill-conditioned. This will cause numerical inaccuracies in the computation of the frequency response and the structured singular value for robust performance. However, if the values
of λ7max and λ12max are changed to 10 and 20 respectively, then ~λmax modifies to

\[
\vec{\lambda}_{max} = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 10 & 20 & 20 & 20 & 20 & 20 \end{bmatrix}^T
\tag{4.44}
\]

the LCM is 20 and it is possible to conduct a frequency-domain analysis of the multi-rate, sampled-data, closed-loop system using lifting techniques. The value of νreq per h0 seconds with this new set of values of λimax is 225.4 FLOPs, which results in a 77.46% reduction in computational overhead compared to the single-rate implementation. For this example, we observe that changing the values of λimax does not change the value of νreq much. Figure 4.8 plots the maximum singular values of the four closed-loop transfer functions γref → γ, γref → V, Vref → γ and Vref → V.
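The feasibility of the lifted analysis hinges entirely on this LCM, and both cases are easy to check:

```python
from math import lcm  # Python 3.9+

# distinct rate multipliers from eqn.(4.43) and eqn.(4.44)
print(lcm(1, 13, 20, 113))  # → 29380, lifting factor for the original lambda_max
print(lcm(1, 10, 20))       # → 20, after adjusting lambda_7max and lambda_12max
```

Because 13 and 113 are prime and share no factors with 20, the original multipliers force an impractically large lifting factor; nudging them onto divisors of 20 collapses it.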
Note 4.5 Note that we plot singular values because these transfer functions are multivariable systems in the lifted input-output space.

From Fig. 4.8, we see that the frequency response of the closed-loop system with the single-rate controller is not significantly different from that of the closed-loop system with the multi-rate controller. The multi-rate decomposition of the linear controller retains the property of decoupled response to γ and V reference commands. The tracking response to each of the reference commands is also similar to that of the original single-rate controller (in the largest-singular-value sense).

4.8.2 Robust Performance
The values of ~λmax in eqn.(4.44) satisfy robust stability, under multiplicative uncertainty as defined in [GB01], if the multi-rate discrete-time modal sub-systems of the controller are approximated by continuous-time systems with delay at the output, and the structured singular value for robust stability is computed using continuous-time techniques. The approximated continuous-time closed-loop system, for the same values of ~λmax, also satisfies nominal performance, even though nominal performance was not considered in the computation of λimax in algorithm 4.2.
From a control-theoretic point of view, tests for robust stability and nominal performance are more accurate if the multi-rate, sampled-data, closed-loop system is transformed to a lifted discrete-time system. When the corresponding ∞-norms for robust stability and nominal performance are computed for the lifted discrete-time system, we observe that the closed-loop system is not robustly stable. The closed-loop system, however, satisfies the constraints for nominal performance.
Therefore, for this example, when robust stability is tested using the same approximations for the multi-rate discrete-time controller as those used to determine λimax, i = {1, 2, . . . , n}, the resulting closed-loop system is robustly stable. However, when a more accurate characterisation of the multi-rate, discrete-time controller is used, the closed-loop system with the same values of λimax does not pass the robust stability test.
Clearly, the approximation of the multi-rate, discrete-time controller by a continuous-time controller with delay at the output is not very accurate, and the values of λimax are not guaranteed to satisfy robust stability of the lifted discrete-time closed-loop system.
When the values of λimax are changed to

\[
\vec{\lambda}_{max} = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 6 & 12 & 12 & 12 & 12 & 12 \end{bmatrix}^T
\tag{4.45}
\]

the lifted, discrete-time closed-loop system is robustly stable. These values of λimax were obtained by trial and error from the values obtained using algorithm 4.2. The value of νreq for the values of λimax defined in eqn.(4.45) is 231.67 FLOPs per h0 seconds, and the reduction in computational overhead relative to the single-rate implementation is 76.83%.
Therefore, algorithm 4.2 is not guaranteed to yield values of λimax that result in a robustly stable closed-loop system, but it can be used to determine a nominal solution for ~λmax. Some post-processing of the nominal solution is necessary to determine the ~λmax which satisfies robust stability.
4.8.3 Time Domain Analysis

Figure 4.9 plots the step response of the closed-loop system with the multi-rate controller and with the single-rate controller. The two time responses are almost identical. The time response is shown for the values of ~λmax defined in eqn.(4.45). Therefore, in the time domain, the multi-rate decomposition of the controller does not cause a significant difference in the closed-loop response to the velocity step command.
4.9 Effect of Scheduling Algorithm on Closed-Loop Response

Recall from section 3.3.2 on page 60 the transformation T(K), which decomposes the controller K into fast and slow systems Kf and Ks. The system Ks is further decomposed into modal systems {K̂si} = M(K̂s), i = {0, 1, . . . , π(K̂s) − 1}.

The systems K̂si are updated, together with K̂f, using the round-robin scheduling algorithm. The scheduling algorithm assumed that the tasks associated with updating the states of K̂si are not pre-empted. That led to the creation of the execution table shown in table(3.2) on page 60. Based on the time the tasks begin to execute, we were able to choose the input vector used to update the states of the modal sub-systems to be the latched input (input values at times that are integer multiples of the periodicities of the modal systems), the most recent input or the current running average. The effect of these choices on the transfer function of the closed loop in the lifted input/output space was analysed in the framework developed in section 3.3.3 on page 63. This framework can also be used to analyse the effect of the order of execution of the modal sub-systems, under a round-robin schedule, on the closed-loop system.
Using the analysis framework developed in section 3.3.3 on page 63, we can also analyse the effect of the EDF or RM scheduling algorithms on the closed-loop system.

4.9.1 Order of Execution of Tasks and Choice of Input
Since the computational tasks considered here are tasks that update the states and compute the outputs of the modal sub-systems of the controller, we can assume the task list and the execution characteristics of the tasks to remain constant over time. Under such assumptions, a table that defines the precise order of execution of the modal systems can be defined for both RM and EDF scheduling algorithms.
From the table, the exact time when a task is executed can be determined and the control value used to update the states of the modal sub-system associated with the task can be chosen to be the latched value, the most recent value or the current running average.
The order in which the modal subsystems, grouped by the associated value of λimax, are executed will differ depending on whether the EDF or the RM scheduling algorithm is used. Both will result in 100% CPU utilisation, since the periodicities, defined by ~λmax, are harmonics of the smallest. If the input values used to update the states of the modal systems are latched input values, the order of execution has no effect on the lifted controller transfer function. If, however, the values of control are the most recent values or the current running average, then the sequence in which these modal sub-systems get updated will affect the transfer function of the lifted controller.
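The utilisation claim rests on the classical schedulability result of [LL73]: with harmonic periods, both RM and EDF can feasibly schedule any periodic task set with processor utilisation up to 1. A small check, with hypothetical task costs chosen so that U = 1, is:

```python
def utilisation(tasks):
    """CPU utilisation U = sum(C_i / T_i) for tasks given as (cost, period)."""
    return sum(c / t for c, t in tasks)

# hypothetical task set with harmonic periods (multiples of h0 = 0.01 s)
h0 = 0.01
tasks = [(0.005, 1 * h0), (0.006, 2 * h0), (0.008, 4 * h0)]
U = utilisation(tasks)
print(f"U = {U:.2f}")  # U <= 1 is necessary; with harmonic periods it is also sufficient
```

For non-harmonic periods, RM's guaranteed bound drops below 1 while EDF's remains 1, which is one reason the harmonic structure of ~λmax is convenient.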
The effect of the order of execution of the modal systems, and of the choice of input value used to update their states, can be easily determined in the framework developed in section 3.3.3 on page 63.

4.9.2 Preemptive vs Non-Preemptive Scheduling

The round-robin algorithm in section 3.3.2, page 60, assumes that the tasks associated with the digital implementation of the modal sub-systems of the controller are not preempted. The minimum number of FLOPs required to implement the controller, for the particular slow-fast decomposition considered in section 3.4 on page 70, is 132 FLOPs. If the same multi-rate system is scheduled using EDF, then the minimum required FLOPs is 112.67. Therefore, we see that for the same multi-rate decomposition of the controller, EDF requires 14.54% less computational time than the round-robin scheduling algorithm. In general, non-preemptive scheduling algorithms [JSM91] insert CPU idle time and hence require more computational resources to feasibly schedule the same task set than preemptive scheduling algorithms.
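The penalty of forbidding preemption can be made concrete with a two-task example (the costs and periods below are hypothetical): a task set can be well under full utilisation, hence schedulable by preemptive EDF, yet infeasible non-preemptively because one long job blocks a short-period task past its deadline. The blocking check below is a simplified necessary condition, not the full test of [JSM91]:

```python
def edf_feasible_preemptive(tasks):
    """For implicit-deadline periodic tasks, preemptive EDF is feasible
    iff the utilisation sum(C/T) <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

def nonpreemptive_blocking_ok(tasks):
    """Simplified necessary condition without preemption: no job may run
    longer than the shortest period among the other tasks, otherwise it
    blocks that task past its deadline."""
    for i, (c, _) in enumerate(tasks):
        other_periods = [t for j, (_, t) in enumerate(tasks) if j != i]
        if other_periods and c > min(other_periods):
            return False
    return True

tasks = [(1.0, 4.0), (6.0, 10.0)]       # (cost, period), hypothetical units
print(edf_feasible_preemptive(tasks))   # → True  (U = 0.85)
print(nonpreemptive_blocking_ok(tasks)) # → False (6-unit job blocks the period-4 task)
```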
[Figure 4.8 consists of four log-magnitude plots versus ω (rad/s): γref → γ, γref → V, Vref → γ and Vref → V.]
Figure 4.8: Frequency domain tracking response in the lifted input/output space. Solid line denotes closed-loop system with single-rate controller, dashed line denotes closed-loop system response with multi-rate controller.
[Figure 4.9 consists of four time histories versus time (sec): γ (deg), V (ft/s), Thrust (lbs) and δe (deg).]
Figure 4.9: Response to a velocity step command of 20 ft/s. Solid line denotes closed-loop system with single-rate controller, dash-dot line denotes closed-loop system with multi-rate controller.
Chapter 5

Summary

5.1 Anytime Control Algorithms
In chapter two, we presented a method to transform linear controllers so that they behave like anytime control algorithms; that is, they are able to trade quality of solution for reduced computational time. The transformation is achieved by generating a set of reduced-order controllers via model reduction and implementing a switching algorithm that smoothly switches between these controllers based on the available computational time.
We considered balanced truncation and residualisation as model reduction techniques and developed appropriate switching algorithms for each. The switching algorithms proposed require minimal overhead to ensure smoothness of the controller output. This makes them feasible to implement in an environment where computation is expensive. The idea of anytime control algorithms is then applied to a realistic flight control problem. From the example presented, we observe that a substantial reduction in computational time can be accommodated while still keeping the degradation in controller performance within acceptable limits.
5.2 Computationally Efficient Digital Implementation of Linear Control Algorithms
In chapter three, we presented an algorithm for computationally efficient digital implementation of linear time-invariant controllers. A theoretical framework, built on multi-rate filter bank theory and lifting techniques, was developed to analyse the effect of the transformation on closed-loop system performance and stability. We applied this idea to a flight control problem based on the B737-100 TSRV linear longitudinal motion model. From time- and frequency-domain analysis we could conclude that, for this example, the transformation did not alter the behaviour of the closed-loop system significantly, while the reduction in computational overhead was quite significant.
From the point of view of computation, the transformation T(K) can be considered a model reduction technique for LTI systems. Model reduction is achieved by partitioning the state space of the LTI system into two subspaces, corresponding to the slow and fast modes of the system. The states corresponding to the fast modes are updated as fast as the base clock. The states corresponding to the slow modes are updated one after another in a round-robin manner. Thus the system composed of the slower modes operates slower than the base clock. This results in a periodically time-varying system. Since the number of states required to be updated at a given time instant is reduced, the order of the system from the point of view of computation is also reduced. Note that instead of partitioning the state space of the LTI system into just two subspaces, we could extend this idea to N partitions in general. In such a scenario, distribution of the computation required to update the states of those N subspaces will be more complicated.
The theoretical framework developed in chapter three, to analyse the effect of the round-robin state update policy on the closed-loop system, can also be used to analyse control systems in an environment of priority-based scheduling of computational resources. In such an environment, the computational tasks are allotted CPU resources based on their assigned priorities, which could be static or dynamic. Consequently, the order in which these tasks are executed depends on the assigned priorities. Therefore, if a control system is a composite of several computational tasks, the effect of the sequence of execution of these tasks on the dynamical behaviour of the controller can be analysed under the framework presented in this chapter.
In the context of software-driven distributed control systems, operating in real or simulated time, few researchers have studied the effect of the sequence of execution on the dynamics of the overall system. Analysis of the effect of the model of computation used to realise the control algorithm is crucial, especially since the paradigm of control system design and implementation is shifting from a centralised, single-processor framework to a decentralised, distributed computing framework. The analysis framework developed in this chapter has the potential to contribute towards the development of a systematic approach to analysing these issues.
5.3 Optimal Multi-rate Decomposition of Linear Control Algorithms
In chapter four, we demonstrated that digital implementation of an LTI control algorithm as a multi-rate system requires less computational time than the conventional single-rate implementation. The savings in computational overhead arise because, in the multi-rate implementation, not all the states of the controller are updated at the same time.
We posed the problem of decomposing an LTI controller into a multi-rate system as a nonlinear programming problem. The optimisation problem as posed could not be solved, owing to certain practical difficulties and numerical issues. Instead, it was solved using search algorithms.
The algorithm for multi-rate decomposition of LTI controllers was applied to a realistic flight control problem based on the B737-100 TSRV aircraft model. We could demonstrate a substantial reduction in computational overhead by implementing the controller as a multi-rate system.
5.4 Future Work

The work presented in this thesis opens several avenues for future research.

5.4.1 Imprecise Computation
In chapter one, we used model reduction techniques to obtain reduced-order controllers that require less computational time but achieve poorer performance. Switching between controllers of different order enabled us to accommodate reductions in computational time. However, the computational tasks associated with them cannot produce valid output if they are preempted before completion.
Therefore, it is of interest to investigate controller design techniques that design a controller, with defined robust performance objectives, as two subsystems, one of which achieves robust stability and the other robust performance. In that case, the computational task that guarantees robust stability would be mandatory and the one that achieves robust performance would be optional. The optional computational task can be skipped during transient processor overloads. The resulting real-time computational task could then be easily implemented using the imprecise computation model, which allows preemption of tasks with valid output at the time of preemption.

5.4.2 Approximate Solutions of Ordinary Differential Equations
Linear control algorithms are essentially a set of linear ordinary differential equations. The solution of such equations can be expressed as an infinite series summation. In that case, the accuracy of the solution is governed by the number of terms in the series. In the framework of anytime algorithms, fewer terms in the series mean less computational time and greater error. The summation can be preempted at any time and a valid output can be obtained. Therefore, expressing the solution of the ordinary differential equation as an infinite series transforms the associated computational task to behave as an anytime algorithm.
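For the scalar case x' = a x, the exact solution over one step is x(h) = e^{ah} x(0), and the series for the exponential can be cut off after any number of terms; a sketch of this anytime behaviour (a scalar illustration, not a controller implementation) is:

```python
import math

def step_anytime(a, h, x0, terms):
    """Advance x' = a*x by one step of length h using the first `terms`
    terms of the series e^{ah} = sum_k (ah)^k / k! -- preemptable anywhere."""
    ah, coeff, acc = a * h, 1.0, 0.0
    for k in range(terms):
        acc += coeff          # coeff == (ah)^k / k!
        coeff *= ah / (k + 1)
    return acc * x0

a, h, x0 = -2.0, 0.1, 1.0
exact = math.exp(a * h) * x0
for terms in (2, 4, 8):
    err = abs(step_anytime(a, h, x0, terms) - exact)
    print(f"{terms} terms: error = {err:.1e}")  # error shrinks as terms grow
```

The same trade applies, term by term, to the matrix exponential that propagates a full state-space controller.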
Solutions of ordinary differential equations can also be determined using a basis function approach. In this approach, depending on the basis function space, the accuracy of the solution can be varied. Therefore, a multi-resolution solution space can be constructed using appropriately defined basis functions. The computational time for a more accurate, higher-resolution solution would be greater than for a less accurate, lower-resolution solution. Thus, implementing the controller algorithm as a weighted summation of appropriate basis functions induces anytime properties in the associated computational task.

It is, however, not clear how these solution techniques will affect robust stability and performance.

5.4.3 Variable Sampling-Rate
In chapters three and four, we demonstrated that implementation of an LTI controller as a multi-rate system requires fewer computational resources. If the computational time of a task is fixed, then the processor utilisation is a function of the task periodicities. Lower periodicity results in lower resource utilisation. The algorithms presented in chapters three and four transformed the LTI controller into a multi-rate system with no significant degradation in robust performance. The sampling rate of the modal sub-systems can be reduced further, at the cost of degraded performance, to reduce resource usage during transient processor overloads. Therefore, by changing the periodicities of the modal sub-systems, it is possible to transform the associated computational tasks into anytime algorithms.

5.4.4 Multi-Rate Controller Design
In chapter four, we proposed two algorithms for optimal multi-rate decomposition of an LTI controller. These algorithms, based on lifting techniques and approximations of multi-rate discrete-time systems, provided the largest sampling intervals of the modal sub-systems of the controller for which the closed-loop system achieves robust performance. The proposed algorithms, however, are not very reliable, as demonstrated in chapter four. Therefore, it is necessary to devise more reliable algorithms that determine the largest sampling interval for each of the modal sub-systems of the controller while also satisfying the constraints of robust performance.
If the controller is designed as a multi-rate system and takes into account the variation in the sampling rate as an uncertainty, then such decomposition algorithms are not necessary. The controller, in that case, is already amenable to implementation as an anytime algorithm. Therefore, it is of interest to develop design algorithms that synthesise controllers with the desired structure and properties.
Bibliography

[Air40] G. B. Airy. On the Regulator of the Clock-Work for Effecting Uniform Movement of Equatorials. Memoirs of the Royal Astronomical Society, 2:249–267, 1840.

[AW84] Karl J. Astrom and Bjorn Wittenmark. Computer Controlled Systems - Theory and Design. Prentice-Hall, Inc., Englewood Cliffs, N.J. 07632, 1984.

[AY86] M. Araki and K. Yamamoto. Multivariable multirate sampled-data systems: State-space description, transfer characteristics, and Nyquist criterion. IEEE Transactions on Automatic Control, 31(2):145–154, February 1986.

[BBKP02] R. Bhattacharya, G. J. Balas, M. A. Kaya, and A. Packard. Nonlinear Receding Horizon Control of an F-16 Aircraft. Journal of Guidance, Control, and Dynamics, 25(5):924–931, 2002.

[BD89] M. Boddy and T. L. Dean. Solving Time-Dependent Planning Problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 979–984, Menlo Park, California, 1989. International Joint Conferences on Artificial Intelligence.

[BD94] M. Boddy and T. L. Dean. Deliberation Scheduling for Problem Solving in Time-Constrained Environments. Artificial Intelligence, 67(2):245–285, 1994.

[BDG+94] G. J. Balas, J. C. Doyle, K. Glover, A. Packard, and R. Smith. µ-Analysis and Synthesis TOOLBOX. The Mathworks, Inc., 24 Prime Park Way, Natick, Mass. 01760-1500, 1994.
[BGW90] R. R. Bitmead, M. Gevers, and V. Wertz. Adaptive Optimal Control: The Thinking Man's GPC. International Series in Systems and Control Engineering. Prentice Hall, 1990.

[Bla34] H. S. Black. Stabilized Feedback Amplifiers. Bell System Tech. J., 1934.

[Bod40] H. W. Bode. Feedback Amplifier Design. Bell System Tech. J., 19:42, 1940.

[Bra98] M. S. Branicky. Multiple Lyapunov Functions and Other Analysis Tools for Switched and Hybrid Systems. IEEE Transactions on Automatic Control, 43(4):475–482, 1998.

[Bus01] L. G. Bushnell. Networks and control. IEEE Control Systems Magazine, 21(1):22–23, Feb 2001.

[CF95] T. Chen and B. Francis. Optimal Sampled-Data Control Systems. Springer-Verlag New York, Incorporated, 175 Fifth Avenue, New York, NY 10010, 1995.

[D'A89] B. D'Ambrosio. Resource Bounded Agents in an Uncertain World. In Working Notes of the IJCAI-89 Workshop on Real-Time Artificial Intelligence Problems, Detroit, Michigan, 1989.

[DB88] T. L. Dean and M. Boddy. An Analysis of Time-Dependent Planning. In Proceedings of the Seventh National Conference on Artificial Intelligence, pages 49–54, Minneapolis, Minnesota, 1988.

[DFT92] J. C. Doyle, B. A. Francis, and A. R. Tannenbaum. Feedback Control Theory. Macmillan Publishing Company, 113 Sylvan Ave., Englewood Cliffs, NJ 07632, 1992.

[Doy90] J. Doyle. Rationality and Its Roles in Reasoning. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 1093–1100, Menlo Park, California, 1990. American Association for Artificial Intelligence.

[Enn84] Dale Enns. Model Reduction for Control Design. PhD thesis, Department of Aeronautics and Astronautics, Stanford University, 1984.

[Eva48] W. R. Evans. Graphical Analysis of Control Systems. Transactions of AIEE, 67:547–551, 1948.
[GB01] S. Ganguli and G. Balas. A TECS Alternative using Robust Multivariable Control. AIAA Guidance, Navigation and Control Conference, August 2001.

[Glo84] K. Glover. All Optimal Hankel-norm Approximations of Linear Multivariable Systems and their L∞-error Bounds. Int. J. Control, 39:1115–1193, 1984.

[Hal66] A. C. Hall. Application of Circuit Theory to the Design of Servomechanisms. J. Franklin Inst., 1966.

[Haz34] H. L. Hazen. Theory of Servo-mechanisms. J. Franklin Inst., 1934.

[HC95] Xiaofen Huang and A. M. K. Cheng. Applying Imprecise Algorithms to Real-Time Image and Video Transmission. In Proceedings International Conference on Parallel and Distributed Systems, pages 96–101, 1995.

[Hor87] E. J. Horvitz. Reasoning about Beliefs and Actions under Computational Resource Constraints. In Proceedings of the Third Workshop on Uncertainty in Artificial Intelligence, pages 429–444, Seattle, Washington, USA, 1987.

[Hor90] E. J. Horvitz. Computation and Action under Bounded Resources. PhD thesis, Department of Computer Science and Medicine, Stanford University, 1990.

[JNP47] H. M. James, N. B. Nichols, and R. S. Phillips. Theory of Servomechanisms. New York: McGraw-Hill, M.I.T. Radiation Lab. Series, 25, 1947.

[JSM91] K. Jeffay, D. F. Stanat, and C. U. Martel. On Non-Preemptive Scheduling of Periodic and Sporadic Tasks. Proceedings of the 12th IEEE Real-Time Systems Symposium, San Antonio, Texas, IEEE Computer Society Press, pages 129–139, December 1991.

[Jur52] E. I. Jury. Recent Advances in the Field of Sampled-Data and Digital Control Systems. Proceedings of IFAC, Moscow, pages 240–246, 1952.
[JYH99] A. Jadbabaie, J. Yu, and J. Hauser. Stabilizing receding horizon control of nonlinear systems: A control Lyapunov function approach. American Control Conference, 3:1535–9, 1999.

[KA92] J. P. Keller and B. D. O. Anderson. A New Approach to the Discretization of Continuous-Time Controllers. IEEE Transactions on Automatic Control, 37(2):214–223, February 1992.

[KCMN94] M. V. Kothare, P. J. Campo, M. Morari, and C. N. Nett. A Unified Framework for the Study of Anti-Windup Designs. Automatica, 30(12):1869–1883, 1994.

[Kuo63] B. C. Kuo. Analysis and Synthesis of Sampled-Data Control Systems. New Jersey: Prentice-Hall, 1963.

[Lew92] F. L. Lewis. Applied Optimal Control and Estimation. Prentice Hall, Upper Saddle River, New Jersey 07458, 1992.

[LKL01] J. Lee, E. Kim, and D. Lee. Imprecise Data Computation for High Performance Asynchronous Processors. In Proceedings of the Asia and South Pacific Design Automation Conference, pages 261–266, 2001.

[LL73] C. L. Liu and J. W. Layland. Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment. Journal of the Association for Computing Machinery, 20(1):46–61, Jan 1973.

[LLS+91] J. W. S. Liu, K. J. Lin, W. K. Shih, A. C. Yu, J. Y. Chung, and W. Zhao. Algorithms for Scheduling Imprecise Computations. IEEE Computer, 24(5):58–68, 1991.

[LNLK87] K. J. Lin, S. Natarajan, J. W. S. Liu, and T. Krauskopf. Concord: A System of Imprecise Computations. In Proceedings of COMPSAC, pages 75–81, Tokyo, Japan, 1987.

[LSD89] J. P. Lehoczky, L. Sha, and Y. Ding. The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behaviour. IEEE Real-Time Systems Symposium, pages 166–171, December 1989.

[Lya07] M. A. Lyapunov. Problème Général de la Stabilité du Mouvement. Ann. Fac. Sci. Toulouse, 9:203–474, 1907. Translation of the original paper published in 1892 in Comm. Soc. Math. Kharkow and reprinted as Vol. 17 in Ann. Math Studies, Princeton University Press, Princeton, N.J., 1949.

[Max68]
J. C. Maxwell. On Governors. Proceedings of the Royal Society London, 16:270–283, 1868.
[MB75]
R. A. Meyer and C. S. Burrus. A unified analysis of multirate and periodically time-varying digital filters. IEEE Transactions on Circuits and Systems, 22(3):162–168, March 1975.
[MLFL94] V. Millan-Lopez, W. Feng, and J. W. S. Liu. Using the ImpreciseComputation Technique for Congestion Control on a Real-Time Traffic Switching Element. International Conference on Parallel and Distributed Systems, pages 202–208, 1994. [MM01]
C. S. R. Murthy and G. Manimaran. Resource Management in Real-Time Systems and Networks. The MIT Press, Cambridge, Massachusetts, 2001.
[Moo81]
B. C. Moore. Principal Component Analysis in Linear Systems: Controllablity, Observablity, and Model Reduction. IEEE Transactions in Automatic Control, AC-35:203–208, 1981.
[Mur02]
R. Murray. Control in an Information Rich World. Report of the Panel on Future Directions in Control, Dynamics, and Systems, June 2002.
[Nyq32]
H. Nyquist. Regeneration Theory. Bell System Tech. J., 1932.
[PMC01]
J.L. Paunicka, B.R. Mendel, and D.E. Corman. The OCP - An Open Middleware Solution for Embedded Systems. American Control Conference, 2001.
[PPKT83] K. Poolla P. P. Khargonekar and A. Tannenbaum. Robust control of linear time-invariant plant using periodic compensation. IEEE Transactions on Automatic Control, 30(11):1088–1096, November 1983. [Pri99]
J.A. Primbs. Nonlinear Optimal Control: A Receding Horizon Approach. PhD thesis, California Institute of Technology, 1999.
[RF58]
J. R. Ragazzini and G. F. Franklin. Sampled-Data Control Systems. New York: McGraw-Hill, 1958.
[Rou77]
E. J. Routh. A Treatise on the Stability of a Given State of Motion. London: Macmillan & Co., 1877.
[RS94a]
K. Ramamritham and J. A. Stankovic. Scheduling algorithms and operating systems support for real-time systems. Proceedings of the IEEE, 82(1):55–67, January 1994.
[RS94b]
K. Ramamritham and J. A. Stankovic. Scheduling Algorithms and Operating Systems Support for Real-Time Systems. Proceedings of the IEEE, 82(1):55–67, January 1994.
[RW89]
S. J. Russell and E. H. Wefald. Principles of Metareasoning. In R. J. Brachman, H. J. Levesque, and R. Reiter, editors, Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, pages 400–411, San Mateo, California, 1989. Morgan Kaufmann.
[RW91]
S. J. Russell and E. H. Wefald. Do the Right Thing: Studies in Limited Rationality. MIT Press, Cambridge, Mass., 1991.
[RZ52]
J. R. Ragazzini and L. A. Zadeh. The Analysis of Sampled-Data Systems. Transactions of the AIEE, 71(II):225–234, 1952.
[Soe92]
R. Soeterboek. Predictive Control: A Unified Approach. International Series in Systems and Control Engineering. Prentice Hall, 1992.
[Vai93]
P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice Hall, Inc., Englewood Cliffs, N.J. 07632, 1993.
[Vis77]
I. A. Vishnegradsky. On Controllers of Direct Action. Izv. SPB Tekhnolog. Inst., 1877.
[VM88]
P. P. Vaidyanathan and S. K. Mitra. Polyphase networks, block digital filtering, LPTV systems and alias-free QMF banks: A unified approach based on pseudo-circulants. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(3):381–391, March 1988.
[YAT02]
H. Yoshimoto, D. Arita, and R. Taniguchi. Real-time Communication for Distributed Vision Processing based on the Imprecise Computation Model. Proceedings of the International Parallel and Distributed Processing Symposium, pages 128–133, 2002.
[ZBP01]
W. Zhang, M. S. Branicky, and S. M. Phillips. Stability of networked control systems. IEEE Control Systems Magazine, 21(1):84–99, Feb 2001.
[ZDG96]
K. Zhou, J. C. Doyle, and K. Glover. Robust and Optimal Control. Prentice Hall, Upper Saddle River, New Jersey 07458, 1996.
[Zil93]
S. Zilberstein. Operational Rationality through Compilation of Anytime Algorithms. PhD thesis, Department of Computer Science, University of California at Berkeley, 1993.
[Zil96]
S. Zilberstein. Using Anytime Algorithms in Intelligent Systems. Artificial Intelligence Magazine, 17(3):73–83, 1996.