Controllability, Observability and Stability of Artificial Satellite Problem A dissertation submitted to the National Institute of Technology, Jamshedpur in partial fulfillment of the requirements for the award of the degree of
MASTER OF SCIENCE In
MATHEMATICS By SONU LAMBA Roll No. 05/15 Reg. No. 2015PGMHMH05 UNDER THE ESTEEMED GUIDANCE OF Dr. Ramayan Singh, Head & Associate Professor (Supervisor)
& Dr. Sumant Kumar Ad-hoc Faculty (Co-Supervisor)
DEPARTMENT OF MATHEMATICS NATIONAL INSTITUTE OF TECHNOLOGY JAMSHEDPUR-831014, JHARKHAND, INDIA APRIL-2017
CERTIFICATE
This is to certify that MR. SONU LAMBA (2015PGMHMH05), student of M.Sc. Mathematics in the Department of Mathematics, National Institute of Technology, Jamshedpur, has completed his Master’s Thesis work on “CONTROLLABILITY, OBSERVABILITY AND STABILITY OF ARTIFICIAL SATELLITE PROBLEM” under our guidance and supervision. During his tenure, we found him hardworking, sincere and dedicated, with a learning attitude. We wish him all success in his future endeavours.
The End Semester Presentation and Viva-Voce Examination of Mr. Sonu Lamba was held on 27/04/2017.
Date: 27 April 2017. Place: NIT Jamshedpur.
Dr. Ramayan Singh, Head & Associate Professor (Supervisor)
Dr. Sumant Kumar Ad-hoc Faculty (Co-Supervisor)
DECLARATION
I, Sonu Lamba (Reg. No. 2015PGMHMH05), hereby certify that the work being presented in the thesis entitled “CONTROLLABILITY, OBSERVABILITY AND STABILITY OF ARTIFICIAL SATELLITE PROBLEM”, in partial fulfilment of the requirements for the award of the degree of Master of Science in Mathematics and submitted in the Department of Mathematics of National Institute of Technology Jamshedpur, is an authentic record of my own work carried out during the period from January 2017 to April 2017 under the supervision of Dr. Ramayan Singh, Head & Associate Professor, and Dr. Sumant Kumar, Ad-hoc Faculty, Department of Mathematics, National Institute of Technology Jamshedpur. The matter presented in the thesis has not been submitted by me for the award of any other degree of this or any other institute.
Date: 27 April 2017. Place: NIT Jamshedpur. Sonu Lamba
ACKNOWLEDGEMENT
I would like to extend sincere thanks to my supervisor, Dr. Ramayan Singh, Head of Department & Associate Professor, Department of Mathematics, National Institute of Technology, Jamshedpur, for his eminent guidance and for giving me the freedom to choose Control Theory as the research topic for my Master’s project. His wonderful social behavior makes him a visionary, a wonderful and inspiring teacher, and an ideal supervisor for me. I am ineffably indebted to my co-supervisor, Dr. Sumant Kumar, Ad-hoc Faculty, Department of Mathematics, National Institute of Technology, Jamshedpur, for his invaluable direction, encouragement and support, and for providing me all the necessary facilities for carrying out this work well before time. I feel privileged to express my sincere regards and gratitude to Prof. Raju K. George, Dean (R&D) and Professor, and Dr. Govindaraj Venkatesan, Post Doctoral Fellow, Department of Mathematics, Indian Institute of Space Science and Technology, Thiruvananthapuram, for their valuable guidance and constant encouragement during my NPDE TCA project at IIST, Thiruvananthapuram. The critical comments they rendered during our discussions are deeply appreciated. I specially thank Govindaraj Sir for his encouragement and moral support, which made it possible for me to learn a lot in every phase of my M.Sc. programme. I would like to express my sincere regards to Prof. Sukavanam Nagarajan, Professor, Department of Mathematics, Indian Institute of Technology, Roorkee, for his eminent guidance on the Stability chapter. I am highly thankful to him for providing me some new books and guidelines on Control Theory. I would like to extend my gratitude to Dr. Vikas Kumar Mishra, Post Doctoral Fellow, Department of Mechanical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel, for his guidance during the initial days of the project work. His support and encouragement during that time helped me a lot.
My sincere thanks are due to all my teachers, friends and well-wishers for their valuable suggestions and discussions during my project work. I wish to put on record that whatever I have achieved in my life is due to the blessings of my mother, Smt. Kiran Devi, and my father, Shri Raj Kumar. Thanks are also due to all those who helped me, directly or indirectly, in the completion of this work.
Date: 27 April 2017. Place: NIT Jamshedpur. Sonu Lamba
ABSTRACT
In this thesis, mathematical modelling, controllability, observability and stability analysis of an artificial satellite problem have been carried out. In the first chapter, we mainly give a general introduction and motivation to control theory, controllability, observability and stability of linear and semilinear systems, relating them to our satellite problem. In this chapter we also discuss the origin of control theory, its history and modern developments, and review the literature. In the second chapter, we discuss mathematical modelling. As an introduction, we discuss approaches to mathematical modelling and the construction of models from data obtained by experimentation and from theoretical considerations. Then we turn to our main problem, the mathematical modelling of satellite motion. There we derive the dynamical equations of motion and their state space representations. We then discuss the linearization of nonlinear systems and the approach to linearization for the satellite problem. In the third chapter, we describe all aspects of controllability. Starting with an introduction, we discuss the prerequisites for controllability, namely the fundamental and transition matrices. We then look at the controllability of linear systems with simple examples; here we also discuss the solution of the controlled system using the transition matrix. We then turn to the controllability Grammian and Kalman’s rank condition for time-invariant systems, with a tank problem showing its application. Finally, we describe our main problem, the controllability of the satellite problem. Here we discuss the controllability of the artificial satellite problem by Kalman’s rank condition and, using the same condition, obtain some interesting results explaining the effect of controllers or thrusters on controllability. We also plot some MATLAB graphs regarding these results. In the fourth chapter, we look at observability. Starting with an introduction, we discuss the observability Grammian and Kalman’s rank condition for time-invariant systems with some simple, real-life examples. Finally, we return to our main problem and discuss the observability of the satellite problem, along with the effect of controllers or thrusters on observability. In the fifth chapter, we discuss stability. Starting with an introduction, we look at linear system stability with examples explaining the conditions for a system to be stable or unstable. Finally, we discuss the stability of the satellite problem. In the last chapter, we collect the conclusions made in the previous chapters. This completes the main motive of this Master’s project work. I hope this will help readers learn many things related to control theory with an approach towards applications.
Key Words: Mathematical Modelling, Controllability, Observability, Kalman’s Rank Condition, Grammian Matrix, Stability.
Dedicated to my Beloved Family
Contents

Cover Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Dedication Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
  1.1 General Introduction and Motivation . . . . . . . . . . . . . . . . . . . 11
    1.1.1 What is Control Theory? . . . . . . . . . . . . . . . . . . . . . . . 11
    1.1.2 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
    1.1.3 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
    1.1.4 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
  1.2 Review of Literature . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
  1.3 An Overview of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Mathematical Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
  2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
    2.1.1 Approaches to Mathematical Modelling . . . . . . . . . . . . . . . . 19
    2.1.2 Construction of Models from Data Obtained by Experimentation . . . . 20
    2.1.3 Construction of Models from Theoretical Considerations . . . . . . . 21
  2.2 Mathematical Modelling of Satellite Motion . . . . . . . . . . . . . . . . 22
    2.2.1 Dynamical Equations of Motion . . . . . . . . . . . . . . . . . . . . 22
    2.2.2 State Space Representations . . . . . . . . . . . . . . . . . . . . . 24
  2.3 Linearization of Non-Linear Systems . . . . . . . . . . . . . . . . . . . 26
    2.3.1 Meaning of Non-linearity . . . . . . . . . . . . . . . . . . . . . . 26
    2.3.2 Why We Need Linearization? . . . . . . . . . . . . . . . . . . . . . 27
    2.3.3 Approach to Linearization for Satellite Problem . . . . . . . . . . . 27

3 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
  3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
  3.2 Fundamental and Transition Matrix . . . . . . . . . . . . . . . . . . . . 30
    3.2.1 Fundamental Matrix . . . . . . . . . . . . . . . . . . . . . . . . . 30
    3.2.2 Transition Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 32
  3.3 Controllability of Linear Systems . . . . . . . . . . . . . . . . . . . . 34
    3.3.1 Controllability Problem . . . . . . . . . . . . . . . . . . . . . . . 34
    3.3.2 Example - Tank Problem . . . . . . . . . . . . . . . . . . . . . . . 36
    3.3.3 Solution of the Controlled System using Transition Matrix . . . . . . 37
    3.3.4 Conditions for Controllability . . . . . . . . . . . . . . . . . . . 39
  3.4 Controllability Grammian . . . . . . . . . . . . . . . . . . . . . . . . . 39
  3.5 Kalman’s Rank Condition for Time Invariant Systems . . . . . . . . . . . . 40
  3.6 Examples - Tank Problem . . . . . . . . . . . . . . . . . . . . . . . . . 41
    3.6.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
    3.6.2 Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
  3.7 Controllability of Satellite Problem . . . . . . . . . . . . . . . . . . . 41
    3.7.1 Controllability of Artificial Satellite Problem by Kalman’s Rank Condition . 42
    3.7.2 Effect of Controllers or Thrusters on Controllability . . . . . . . . 42
    3.7.3 MATLAB Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

4 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
  4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
  4.2 Observability Grammian . . . . . . . . . . . . . . . . . . . . . . . . . . 46
  4.3 Kalman’s Rank Condition for Time Invariant Systems . . . . . . . . . . . . 49
  4.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
    4.4.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
    4.4.2 Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
    4.4.3 Example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
    4.4.4 Example 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
  4.5 Observability of Satellite Problem . . . . . . . . . . . . . . . . . . . . 53
    4.5.1 Effect of Controllers or Thrusters on Observability . . . . . . . . . 54

5 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
  5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
  5.2 Linear System Stability . . . . . . . . . . . . . . . . . . . . . . . . . 56
  5.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
    5.3.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
    5.3.2 Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
  5.4 Stability of Satellite Problem . . . . . . . . . . . . . . . . . . . . . . 57

6 Conclusion and Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . 59
  6.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
    6.1.1 Conclusion on Controllability . . . . . . . . . . . . . . . . . . . . 59
    6.1.2 Conclusion on Observability . . . . . . . . . . . . . . . . . . . . . 59
    6.1.3 Conclusion on Stability . . . . . . . . . . . . . . . . . . . . . . . 60
  6.2 Current Research Trends and Future Applications . . . . . . . . . . . . . 60
    6.2.1 Current Trends . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
    6.2.2 Future Applications . . . . . . . . . . . . . . . . . . . . . . . . . 61
Achievements During M.Sc. . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
List of Figures

1.1 Depicting the structure of a system. . . . . . . . . . . . . . . . . . . . . 11
2.1 A general situation that is to be modelled. . . . . . . . . . . . . . . . . 19
2.2 Satellite motion around Earth. . . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Depicting the satellite motion in polar coordinates. . . . . . . . . . . . . 23
3.1 Depicting the solution curve of the system (3.2). . . . . . . . . . . . . . 29
3.2 Depicting the solution curve of the system (3.3). . . . . . . . . . . . . . 30
3.3 Tank problem - Model 1. . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.4 Tank problem - Model 2. . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5 Controlled states of linearized satellite system. . . . . . . . . . . . . . 43
3.6 Steering control profile of linearized satellite system. . . . . . . . . . . 44
4.1 Spring-mass system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.2 Satellite motion around Earth. . . . . . . . . . . . . . . . . . . . . . . . 53
6.1 Block diagram showing some new disciplines where Control Theory plays a major role. . 61
Chapter 1

Introduction

1.1 General Introduction and Motivation

1.1.1 What is Control Theory?
Before turning to control theory, we define the notion of a system. The objects under study in control theory are systems: a system is any set of elements connected together by information links within some delineated system boundaries. Figure 1.1 depicts the structure of a general system. Equivalently, a system is a collection, set or arrangement of objects which are related by interactions and produce various outputs in response to different inputs. If a system changes with respect to time, it is termed a dynamical system. For example, electromechanical machines such as motor cars, aircraft or spaceships, biological systems such as the human body, economic structures of countries or regions, and population growth in a region are all dynamical systems.
Figure 1.1: Depicting the structure of a system.
If a dynamical system is controlled by suitable inputs to obtain a desired output (state), then it is called a control system. In other words, a control system is an interconnection of components forming a system configuration that provides a desired system response. Since control theory deals with structural properties, it requires system representations that have been stripped of all detail until the main property that remains is that of connectedness. (The masterly map of the Delhi Metro system is an everyday example of how useful a representation can be when it has been stripped of all properties except connectedness.) Connectedness is a concept from topology, the discipline that studies the underlying structure of mathematics, which offers fascinating reading to aspiring systems theorists. Clearly, a system is a very general concept; control theory is most interested in certain classes of systems.

Control systems form an interdisciplinary field covering many areas of engineering and science, and they exist in the everyday life of humans. For example, our body temperature and blood sugar level need to be controlled at desired set points, and insect and animal populations are controlled by very delicately balanced predator-prey relationships. These control systems are provided to us by nature. There are also several simple as well as complex man-made control systems used in our everyday life: automatic air conditioners, automatic water heaters, washing machines, missiles, etc. Whether a control system is natural or man-made, all share a common aim, which is to control or regulate a particular variable within certain operating limits.

Controllability is an important area in the study of control systems. It plays an important role in control problems such as the stabilization of unstable systems by feedback control, and in the study of optimal control. For this reason, it has been studied by several authors during the past few decades.
Controllability is a mathematical problem which analyzes the possibility of steering a system from an arbitrary initial state to an arbitrary final state using a set of admissible controls. Control theory, on the other hand, is a discipline where many mathematical ideas and methods meet to produce a new body of important mathematics; accordingly, it is a rich crossing point of engineering and other sciences with mathematics.

Origin of Control Theory - History and Modern Developments: In ancient times, the Romans used some elements of control theory in their aqueducts: ingenious systems of regulating valves were used in these constructions in order to keep the water level constant. Some experts claim that control theory originated in ancient Mesopotamia (2000 BC), where the control of irrigation systems was well known. Modern development in control theory started during the seventeenth century. The Dutch mathematician and astronomer Christiaan Huygens designed the pendulum clock and, in doing so, analyzed the problem of speed control. The invention of the steam engine by James Watt in 1769 made control mechanisms very popular. In the 1860s, James Clerk Maxwell published the first complete mathematical treatment of the steady-state behavior of control systems. Characterizations of stability were independently
obtained for linear systems by the mathematicians A. Hurwitz and E.J. Routh. This theory was applied in various areas, such as the study of ship steering systems. During the 1930s, H. Nyquist, H.W. Bode and others developed feedback control and the frequency-domain approach for linear systems. During the Second World War and the following years, engineers and scientists improved their experience with the control mechanisms of plane tracking, ballistic missiles and other designs of anti-aircraft batteries. These so-called classical control approaches were for the most part limited by their restriction to linear time-invariant systems with scalar inputs and outputs. Only during the 1950s did control theory begin to develop powerful general techniques that allowed treating multivariable, time-varying systems as well as many nonlinear problems. The contributions of R. Bellman in the context of dynamic programming, R. Kalman in filtering techniques and the algebraic approach to linear systems, and L. Pontryagin with the maximum principle for nonlinear optimal control problems formed the basis for a very large research effort during the 1960s, which continues to this day. In fact, nowadays theoretical research in control theory involves a variety of areas of pure mathematics. Concepts and results from these areas find application in control theory; conversely, questions about control systems give rise to new open problems in mathematics.
1.1.2 Controllability
Control theory is certainly, at present, one of the most important interdisciplinary areas of research; it arises in the very first technological discoveries of the industrial revolution as well as in the most modern technological applications. Many scientific and engineering problems can be modeled by deterministic and non-deterministic partial differential equations, integrodifferential equations, or coupled ordinary and partial differential equations, with or without delay, in finite- or infinite-dimensional spaces, using semigroup and cosine families. Most of the systems that arise in practice are nonlinear to some extent, at least over portions of their operational range. Since linear systems are much easier to handle mathematically, the first step in dealing with a nonlinear system is usually, if possible, to linearize it around some nominal operating point. A better approximation to a nonlinear system is the semilinear system, that is, a system with a linear part as well as a nonlinear part, which can be derived from a general nonlinear system by making a local approximation about some nominal trajectory. There are various properties of a system to study, such as existence, uniqueness and regularity of solutions, stability of equilibrium points, etc. Controllability is also an important area of study in control theory. In many applications the objective of the control action is to drive the system from one state to another in an optimal fashion. However, before we formulate the question of optimality, it is necessary to pose the more fundamental question of whether or not it is possible to reach a desired state from an initial state. A fuller description of this topic, including the transition matrix and fundamental matrix, is given in Chapter 3. We will also explain the computation of steering controls for the artificial satellite.
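Chapter 3 verifies controllability of the linearized satellite system with Kalman's rank condition in MATLAB. As a small stand-alone illustration of that condition (the function names and the double-integrator example below are my own, not taken from this thesis), a NumPy sketch might look like:

```python
import numpy as np

def controllability_matrix(A, B):
    """Build [B, AB, A^2 B, ..., A^(n-1) B] for the LTI system x' = Ax + Bu."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """Kalman's rank condition: (A, B) is controllable iff the
    controllability matrix has full rank n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Double integrator (a point mass pushed by a thruster): controllable.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))   # True

# Same dynamics, but the input enters only the position equation:
# the velocity state can no longer be steered independently.
print(is_controllable(A, np.array([[1.0], [0.0]])))  # False
```

The same rank test, applied to the 4x4 linearized satellite system, is what yields the thruster-failure results discussed in Chapter 3.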
1.1.3 Observability
In control theory, observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. The concept of observability was introduced by the Hungarian-American engineer Rudolf E. Kalman for linear dynamic systems. Any system (linear or nonlinear, time-variant or time-invariant) is said to be observable over a given time period if it is possible to determine the initial state uniquely from the knowledge of the output over that time period. Equivalently, observability is the problem of finding the state vector knowing only the output over some interval of time.

Consider the linear system
$$\dot{x}(t) = A(t)x(t) + f(t) \tag{1.1}$$
where $x \in \mathbb{R}^n$, and the $n \times n$ matrix $A(t)$ and the $n$-vector $f(t)$ are, respectively, continuous and locally square integrable on some time interval $(a, b)$. Along with equation (1.1) we have a linear observation equation
$$y(t) = C(t)x(t) + \hat{C}(t)f(t), \qquad y \in \mathbb{R}^m \tag{1.2}$$
where $C(t) = [C_{ij}(t)]_{m \times n}$ is a matrix whose entries are continuous functions of $t$. Assume that the system (1.1) is in operation during a time interval $[t_0, t_1] \subset (a, b)$ and that $x(t_0) = x_0 \in \mathbb{R}^n$. Then we have
$$x(t) = X(t, t_0)x_0 + \int_{t_0}^{t} X(t, s)f(s)\,ds \tag{1.3}$$
where $X(t, t_0)$ is the transition matrix. If $f$ is a known function, for example $f(t) = B(t)u(t)$ with $u(t)$ a control, then in principle the term $\hat{C}(t)f(t)$ in (1.2) and $C(t)$ times the integral in (1.3) could be subtracted from
$$y(t) = C(t)X(t, t_0)x_0 + C(t)\int_{t_0}^{t} X(t, s)f(s)\,ds + \hat{C}(t)f(t)$$
to get the modified observation
$$\hat{y}(t) = C(t)X(t, t_0)x_0, \qquad t_0 \le t \le t_1 \tag{1.4}$$
Here $X(t, t_0)x_0$ satisfies the homogeneous equation
$$\dot{x}(t) = A(t)x(t) \tag{1.5}$$
and the observation (1.4) has the form
$$y(t) = C(t)x(t) \tag{1.6}$$
Thus the information obtained from (1.1) and (1.2) reduces to the homogeneous system (1.5) and the homogeneous observation (1.6).

Definition 1.1.3.1. System (1.5) is said to be observable over a time period $[t_0, t_1]$ if it is possible to determine uniquely the initial state $x(t_0) = x_0$ from the knowledge of the output function $y(t)$ over $[t_0, t_1]$. The complete state of the system is known if the initial state $x_0$ is known.
1.1.4 Stability
A stable system is one that, when perturbed from an equilibrium state, tends to return to that equilibrium state. Conversely, an unstable system is one that, when perturbed from equilibrium, deviates further, moving off with ever-increasing deviation (linear system) or possibly moving towards a different equilibrium state (nonlinear system). All usable dynamical systems are necessarily stable: either they are inherently stable or they have been made stable by active design means. For example, a ship should ride stably with its deck horizontal and tend to return to that position after being perturbed by wind and waves. Stability occupies a key position in control theory because the upper limit of the performance of a feedback control system is often set by stability considerations, although most practical designs will be well away from the stability limit in order to avoid excessively oscillatory responses.

Definition 1.1.4.1 (Stability). For the dynamical system
$$\dot{x} = f(x, t)$$
where $x(t)$ is the state vector and $f$ is a vector with components $f_i(x_1, x_2, \ldots, x_n, t)$, $i = 1, 2, \ldots, n$, we assume that the $f_i$ are continuous and satisfy standard conditions, such as having continuous first partial derivatives, so that the solution of the system exists and is unique for given initial conditions. An equilibrium state $x = 0$ is said to be:

1. Stable if for any positive scalar $\epsilon$ there exists a positive scalar $\delta$ such that
$$\|x(t_0)\| < \delta \implies \|x(t)\| < \epsilon, \qquad t \ge t_0.$$

2. Asymptotically stable if it is stable and if, in addition, $x(t) \to 0$ as $t \to \infty$.

3. Unstable if it is not stable; that is, there exists an $\epsilon > 0$ such that for every $\delta > 0$ there exist an $x(t_0)$ with $\|x(t_0)\| < \delta$ and a $t_1 > t_0$ such that $\|x(t_1)\| \ge \epsilon$. If this holds for every $x(t_0)$ in $\|x(t_0)\| < \delta$, the equilibrium is completely unstable.

The above definitions are called ‘stability in the sense of Lyapunov’. Regarded as a function of $t$ in the $n$-dimensional state space, the solution $x(t)$ is called a trajectory or motion. In two dimensions we can give the definitions a simple geometrical interpretation. If the origin $O$ is stable, then given the outer circle $C$ with radius $\epsilon$, there exists an inner circle $C_1$ with radius $\delta_1$ such that trajectories starting within $C_1$ never leave $C$. If $O$ is asymptotically stable, then there is some circle $C_2$ of radius $\delta_2$ with the same property as $C_1$, but in addition trajectories starting inside $C_2$ tend to $O$ as $t \to \infty$.

This gives the motivation to study the controllability, observability and stability of semilinear control systems in an abstract form. This thesis is concerned with the study of the controllability, observability and stability of first-order and second-order semilinear control systems under a general framework. Specifically, we will work on the controllability, observability and stability of an artificial satellite.
1.2 Review of Literature
In this section, a brief literature review regarding the controllability, observability and stability of linear and semilinear systems is given. In 1962-63, the theory of controllability originated from the famous works of Kalman; see [14, 15, 16, 17]. In these works, Kalman introduced the concept of controllability for finite-dimensional linear control systems and proved controllability under a rank condition on the controllability matrix. Russell [18] summarized the existing results and studied the controllability, observability and stability theory for linear partial differential equations through an operator approach. Joshi and George [19] investigated the controllability of the (non-autonomous) semilinear system in finite-dimensional space under the assumption that its linear part is controllable, and reduced the controllability problem to the solvability of an operator equation. The solvability analysis of the operator system was carried out using fixed point theory and monotone operator theory. Balachandran and Dauer [9, 20] presented a survey on the controllability of nonlinear systems and functional integrodifferential systems in Banach spaces using fixed point theorems and semigroup theory. Nandakumaran and George [21] proved the exact controllability of a semilinear thermoelastic system under the assumption that the nonlinear functions are Lipschitz continuous with very small Lipschitz constants. George [23] proved the approximate controllability of non-autonomous semilinear systems under different assumptions on the system operators A, B and f. The controllability of impulsive systems was also proved by George et al. [24]. Balachandran et al. [22] gave sufficient conditions for the controllability of semilinear integrodifferential systems. Sukavanam and Divya [25] gave sufficient conditions for the exact and approximate controllability of semilinear control systems of the form $y'(t) = Ay(t) + Bu(t) + f(t, y(t))$.

Further, in [26] they derived more results on approximate controllability. Chalishajar and George [27] studied the exact controllability of an abstract model described by controlled generalized Hammerstein-type integral equations. George et al. [28] studied the exact controllability of a nonlinear third-order dispersion equation; they established the controllability results using two standard types of nonlinearities, namely Lipschitzian and monotone. Chalishajar [29] proved the exact controllability of a nonlinear integrodifferential third-order dispersion equation. Tomar and Sukavanam [30] discussed the exact controllability of the semilinear thermoelastic system by assuming Lipschitz continuity of the nonlinear functions without any condition on their Lipschitz constants. In [31], they considered an important case, in which the linear state operator is non-densely defined on a Banach space, and proved the approximate
controllability under simple sufficient conditions. J. R. Leigh [3] presented a detailed survey of control theory; in the second edition of his book he added two chapters devoted to H∞ approaches and to AI approaches, respectively, as well as a chapter, placed at the end of the book, that briefly reviews the development of control theory. E. D. Sontag [1], in the second edition of his book Mathematical Control Theory, gave a detailed explanation of nonlinear controllability via Lie-algebraic methods, variational and numerical approaches to nonlinear control (including a brief introduction to the calculus of variations and the minimum principle), time-optimal control of linear systems, feedback linearization (single-input case), nonlinear optimal feedback, controllability of recurrent nets, and controllability of linear systems with bounded controls. F. L. Lewis, D. L. Vrabie and V. L. Syrmos [5] presented optimal control theory in a clear and direct fashion in their book Optimal Control.
1.3 An Overview of the Thesis
In this thesis, mathematical modelling, controllability, observability and stability analysis of an artificial satellite problem have been carried out. There are six chapters in total; the chapterwise description is given below. In the first chapter [1], we mainly discuss the general introduction and motivation [1.1] for control theory [1.1.1] [1.1.2], observability [1.1.3] and stability [1.1.4] of linear and semilinear systems, relating them to our satellite problem. In this chapter we also discuss the origin of control theory, its history and modern developments [1.1.1], and the literature review [1.2]. In the second chapter [2], we discuss mathematical modelling. As an introduction we discuss approaches to mathematical modelling [2.1.1] and the construction of models from data obtained by experimentation [2.1.2] and from theoretical considerations [2.1.3]. Then we discuss our main problem, the mathematical modelling of satellite motion [2.2]. There we derive the dynamical equations of motion [2.2.1] and their state space representations [2.2.2]. Then we discuss the linearization of non-linear systems [2.3] and the approach to linearization for the satellite problem [2.3.3]. In the third chapter [3], we describe all aspects of controllability. Starting with an introduction [3.1], we discuss the prerequisites for controllability, namely the fundamental and transition matrices [3.2]. We then look at the controllability of linear systems [3.3] with simple examples; here we also discuss the solution of the controlled system using the transition matrix [3.3.3]. We then turn to the controllability Grammian [3.4] and Kalman's rank condition for time invariant systems [3.5], with the tank problem [3.6] as an example of its application. Finally, we describe our main problem, the controllability of the satellite problem [3.7].
Here we discuss the controllability of the artificial satellite problem by Kalman's rank condition [3.7.1] and, using the same condition, obtain some interesting results explaining the effect of controllers or thrusters on controllability [3.7.2]. We also plot some MATLAB graphs [3.7.3] illustrating these results.
In the fourth chapter [4], we look at observability. Starting with its introduction [4.1], we discuss the observability Grammian [4.2] and Kalman's rank condition for time invariant systems [4.3], with some interesting real life and simple examples [4.4]. Finally, we return to our main problem and discuss the observability of the satellite problem [4.5], along with the effect of controllers or thrusters on observability [4.5.1]. In the fifth chapter [5], we discuss stability. Starting with its introduction [5.1], we look at linear system stability [5.2] with some examples [5.3] explaining the conditions for a system to be stable or unstable. Finally, we discuss the stability of the satellite problem [5.4]. In the sixth chapter [6], we collect the conclusions drawn in the previous chapters. This completes our main motive for doing this Master's Project work. I hope this will help readers to learn many things related to control theory with an approach oriented towards applications. ******
Chapter 2 Mathematical Modelling

2.1 Introduction
Control Theory is a fairly coherent well-defined body of concepts and knowledge, supported by techniques, and its applications are scattered amongst many disciplines. But the activity of mathematical modelling is usually ill-defined (less discussed). In science, models are often used to explain phenomena as, for instance, the Bohr model of the atom or the wave theory of electromagnetic propagation. Such models are essentially visualisations of mechanisms. Far removed from this are those models, usually implicit and sometimes fictitious, by which politicians claim to predict future rates of employment or inflation. We can propose that the science models contain a representation of physical variables and this is their fundamental characteristic. The second group may be, in the extreme, no more than extrapolations of past trends. Constructing a model in the first category is primarily a matter of bringing together, combining and refining concepts to produce an object called a model (usually it will consist of a set of mathematical equations).
2.1.1 Approaches to Mathematical Modelling
Figure 2.1 shows a general situation that is to be modelled. External influences (controls, raw material characteristics, environmental influences and disturbances) are contained in the vector u. Available information (measurements, observations, other data) is contained in the vector y. The vector x contains internal variables fundamental to the situation; x may be of no interest whatever, except as a building block for the modeller, or alternatively it may be of great interest in its own right. We assume that there are data sets (ui, yi) available for the modeller to work on.
Figure 2.1: A general situation that is to be modelled.
Approach (1) is to fit numerically a dynamic linear input-output model Gi to each data set (ui, yi). This is very easy, but:
1. Gi may not fit the data well for any i. Such an effect may be encountered when the situation is non-linear and/or time varying.
2. Different data sets (uj, yj), (uk, yk) that are supposed to arise from the same mechanism may give rise to widely differing models Gj, Gk.
3. Non-standard types of information contained within the vectors ui, yi may be impossible to accommodate within a standard identification procedure.

Approach (2) is to construct a set of interlinked physically inspired equations, involving the vector x, that approximate (possibly grossly) the mechanisms that are thought to hold in the real process. The data sets (ui, yi) are then used quantitatively to fix numerical values for any situation-specific coefficients and, when best values have been found, to verify the performance of the resulting model.

Approach (3) is to fit an empirical black-box model, typically a neural network, to as wide a range of input-output data as possible in the hope of obtaining a single non-linear relation that represents all the cases presented. The expectation is that the behaviour of the model so obtained will generalise sufficiently well to act as a useful model of the process.
2.1.2 Construction of Models from Data Obtained by Experimentation
A system that exists may be able to produce data from which a model can be constructed. The ideal situation is one where:
1. The system is available for experimentation with no limits on the amount of data that can be acquired.
2. The system receives no other signals than those deliberately injected by the experimenter.
3. The system is, to a reasonable approximation, linear and time invariant.
4. The system completes its response to a stimulus within a reasonable time scale.
5. The system has no 'factors causing special difficulty'.
6. It is not intended to use the model outside the region of operation spanned by the experiments.
7. The physical meaning of the model is not of interest.
8. The only system that is of interest is a unique one, on which the experiments are to be made.
Let us discuss why modelling based on experimentation is so difficult:
1. Real (for instance, industrial) systems are almost never available for experimentation. This is why pilot plants and laboratory-scale systems are commonly used; unfortunately they are often quite different from large systems in their behaviour, with such differences themselves being very difficult to quantify. For this reason, simulations of systems are often used in preference to pilot plants, but of course simulations need system models. However, real systems may usually be observed under normal operating conditions and models may be developed based on the resulting data.
2. Real systems will usually be subject to operational inputs and unmeasurable disturbances, in addition to any signals originated by the experimenter. The experimenter's signals will always need to observe amplitude constraints, and there always arises the question: is the signal-to-noise ratio of the recorded data sufficient to allow modelling to proceed to a level of sufficient accuracy?
3. Real systems exhibit every sort of undesirable behaviour, such as lack of repeatability.
2.1.3 Construction of Models from Theoretical Considerations
A system can most easily be modelled when every aspect obeys established physical laws and where, additionally, all the required numerical coefficients are exactly known. Most usually, real systems have to be heavily idealised before textbook theories can be applied. Such idealisation naturally means that model and system differ appreciably. Turning to numerical coefficients, these can be classified roughly into three groups:
1. Universal constants whose values are exactly known.
2. Coefficients whose role in the theoretical framework is well understood but whose numerical values may vary over a wide range depending on system configuration and prevailing conditions.
3. Coefficients on whose numerical values the appropriate accepted theories have little or nothing to say.
In the next section, we model satellite motion from theoretical considerations.
2.2 Mathematical Modelling of Satellite Motion
In this section, we model the system to obtain the dynamical equations of satellite motion.
2.2.1 Dynamical Equations of Motion
Consider a satellite of mass m orbiting the earth under an inverse square law field. We assume that the satellite has thrusting capacity, with radial thrust u1 and tangential thrust u2, as shown in the following figure.
Figure 2.2: Satellite motion around Earth.

If $(x, y)$ are the rectangular co-ordinates of the satellite of mass $m$, then by Newton's law of motion the equations of motion are given by
$$m\ddot{x} = f_x \quad (2.1)$$
$$m\ddot{y} = f_y \quad (2.2)$$
where $f_x$ and $f_y$ are the components of the force $f$ along the co-ordinate axes. It will be convenient to use polar co-ordinates $(r, \theta)$, so that
$$x = r\cos\theta, \qquad y = r\sin\theta$$
Let $u$, $v$ denote the components of the velocity of the satellite along the radial and tangential directions respectively.
The resultant of $u$ and $v$ is also the resultant of the components $\dot{x}$ and $\dot{y}$. Therefore, resolving parallel to the $x$-axis, we get
$$\dot{x} = u\cos\theta - v\sin\theta \quad (2.3)$$
Figure 2.3: Depicting the satellite motion in polar coordinates.

Since $x = r\cos\theta$, differentiating with respect to time $t$ gives
$$\dot{x} = \dot{r}\cos\theta - r\sin\theta\,\dot{\theta} \quad (2.4)$$
From equations (2.3) and (2.4),
$$u\cos\theta - v\sin\theta = \dot{r}\cos\theta - r\sin\theta\,\dot{\theta} \quad (2.5)$$
Comparing the coefficients of $\cos\theta$ and $\sin\theta$ in equation (2.5) we get
$$u = \dot{r}, \qquad v = r\dot{\theta}$$
where $u = \dot{r}$ is the radial velocity component and $v = r\dot{\theta}$ is the transverse velocity component. Now let $a_1$ and $a_2$ denote the components of the acceleration along the radial and transverse directions respectively. Resolving parallel to the $x$-axis we get
$$\ddot{x} = a_1\cos\theta - a_2\sin\theta \quad (2.6)$$
Differentiating equation (2.4) we get
$$\ddot{x} = \frac{d}{dt}(\dot{r}\cos\theta - r\sin\theta\,\dot{\theta}) = \ddot{r}\cos\theta - \dot{r}\sin\theta\,\dot{\theta} - \dot{r}\sin\theta\,\dot{\theta} - r\cos\theta\,\dot{\theta}^2 - r\sin\theta\,\ddot{\theta}$$
$$\ddot{x} = (\ddot{r} - r\dot{\theta}^2)\cos\theta - (2\dot{r}\dot{\theta} + r\ddot{\theta})\sin\theta \quad (2.7)$$
Equating equations (2.6) and (2.7),
$$a_1\cos\theta - a_2\sin\theta = (\ddot{r} - r\dot{\theta}^2)\cos\theta - (2\dot{r}\dot{\theta} + r\ddot{\theta})\sin\theta \quad (2.8)$$
Comparing the coefficients of $\cos\theta$ and $\sin\theta$ in equation (2.8) we get
$$a_1 = \ddot{r} - r\dot{\theta}^2$$
$$a_2 = 2\dot{r}\dot{\theta} + r\ddot{\theta} = \frac{1}{r}\frac{d}{dt}(r^2\dot{\theta})$$
The quantities $a_1$ and $a_2$ are called the radial and transverse components of acceleration respectively. The equations of motion of the satellite can now be written as
$$ma_1 = m(\ddot{r} - r\dot{\theta}^2) = F_r \quad (2.9)$$
$$ma_2 = m\,\frac{1}{r}\frac{d}{dt}(r^2\dot{\theta}) = F_\theta \quad (2.10)$$
In the case of central orbits the force is always directed towards a fixed point. Taking this point as the origin, $F_\theta = 0$ and $F_r = -\frac{k}{r^2}$. Equations (2.9) and (2.10) then become
$$m(\ddot{r} - r\dot{\theta}^2) = \frac{-k}{r^2} \quad (2.11)$$
$$m\,\frac{1}{r}\frac{d}{dt}(r^2\dot{\theta}) = 0 \quad (2.12)$$
Assume that the mass is equipped with the ability to exert a thrust $u_1$ in the radial direction and $u_2$ in the tangential direction. In the presence of these external forces the equations of motion become
$$m\ddot{r} - mr\dot{\theta}^2 + \frac{k}{r^2} = u_1 \quad (2.13)$$
$$mr\ddot{\theta} + 2m\dot{r}\dot{\theta} = u_2 \quad (2.14)$$
Equations (2.13) and (2.14) are the required differential equations of the satellite's dynamical motion.
2.2.2 State Space Representations
In the state space modelling of linear systems it is assumed that there exists an $n$th order vector, called the state vector, whose value at every instant of time completely characterises the dynamic state of the system. The order $n$ is, in general, equal to the sum of the orders of all the individual differential equations that together describe the system. Every single-input single-output linear system can of course be described in state space form, and we choose such a system to illustrate some simple state space ideas. Let the single-input single-output process (linear system) be
$$\frac{d^3y}{dt^3} + 2\frac{d^2y}{dt^2} + 3\frac{dy}{dt} + 4y = u \quad (2.15)$$
To move to a state space model we consider the following change of variables,
$$x_1 = y, \qquad x_2 = \dot{x}_1, \qquad x_3 = \dot{x}_2$$
Then, equivalent to equation (2.15), we can write
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = x_3, \qquad \dot{x}_3 = -4x_1 - 3x_2 - 2x_3 + u$$
This is the required state space form for the considered system. It would more usually be written as
$$\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -4 & -3 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u$$
$$y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
which is usually written as
$$\dot{x}(t) = Ax(t) + Bu(t), \qquad y = Cx(t)$$
This formulation is the same for all multivariable linear systems.

State Space Representation of the Equations of Satellite Motion

Assume that the mass m of the satellite is unit mass. Then the motion of the satellite is described by a pair of second order differential equations
$$\ddot{r} = r\dot{\theta}^2 - \frac{k}{r^2} + u_1 \quad (2.16)$$
$$\ddot{\theta} = \frac{-2}{r}\,\dot{r}\dot{\theta} + \frac{u_2}{r} \quad (2.17)$$
If $u_1 = 0 = u_2$ then one can show that equations (2.16) and (2.17) have the solution given by
$$r(t) = \sigma \quad (2.18)$$
$$\theta(t) = wt \quad (2.19)$$
where $\sigma^3 w^2 = k$. Make the following change of variables,
$$x_1 = r - \sigma \quad (2.20)$$
$$x_2 = \dot{r} \quad (2.21)$$
$$x_3 = \sigma(\theta - wt) \quad (2.22)$$
$$x_4 = \sigma(\dot{\theta} - w) \quad (2.23)$$
Then
$$r = x_1 + \sigma \quad (2.24)$$
$$\dot{r} = x_2 \quad (2.25)$$
$$\theta = \frac{x_3}{\sigma} + wt \quad (2.26)$$
$$\dot{\theta} = \frac{x_4}{\sigma} + w \quad (2.27)$$
Using the above transformations, equations (2.16) and (2.17) reduce to a system of four first order non-linear differential equations,
$$\frac{dx_1}{dt} = x_2$$
$$\frac{dx_2}{dt} = r\dot{\theta}^2 - \frac{k}{r^2} + u_1 = (x_1 + \sigma)\left(\frac{x_4}{\sigma} + w\right)^2 - \frac{k}{(x_1 + \sigma)^2} + u_1$$
$$\frac{dx_3}{dt} = x_4$$
$$\frac{dx_4}{dt} = -2\sigma\,\frac{x_2}{(x_1 + \sigma)}\left(\frac{x_4}{\sigma} + w\right) + \frac{\sigma u_2}{(x_1 + \sigma)}$$
The equations obtained form a system of non-linear ODEs involving the forcing functions (controls) $u_1$ and $u_2$, and can be written in compact vector notation as
$$\frac{dx}{dt} = f(x, u); \qquad \forall\, x(t) \in \mathbb{R}^4,\; u(t) \in \mathbb{R}^2$$
Here $f$ is a vector with components $f_1, f_2, f_3$ and $f_4$ given by
$$f_1(x_1, x_2, x_3, x_4; u_1, u_2) = x_2$$
$$f_2(x_1, x_2, x_3, x_4; u_1, u_2) = (x_1 + \sigma)\left(\frac{x_4}{\sigma} + w\right)^2 - \frac{k}{(x_1 + \sigma)^2} + u_1$$
$$f_3(x_1, x_2, x_3, x_4; u_1, u_2) = x_4$$
$$f_4(x_1, x_2, x_3, x_4; u_1, u_2) = -2\sigma\,\frac{x_2}{(x_1 + \sigma)}\left(\frac{x_4}{\sigma} + w\right) + \frac{\sigma u_2}{(x_1 + \sigma)}$$
This is the required state-space representation of the dynamical equations of satellite motion.
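The reduced non-linear system above can be integrated numerically as a sanity check. The following sketch (the values k = σ = 1, hence w = 1, and zero thrust are illustrative assumptions, not values fixed by the thesis) verifies that x = 0, which corresponds to the circular orbit r = σ, θ = wt, is an equilibrium:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (assumptions): k = 1, sigma = 1, so w = sqrt(k / sigma^3) = 1.
k, sigma = 1.0, 1.0
w = np.sqrt(k / sigma**3)

def f(t, x, u1=0.0, u2=0.0):
    """Right-hand side of the reduced first order system derived from (2.16)-(2.17)."""
    x1, x2, x3, x4 = x
    r = x1 + sigma
    return [
        x2,
        r * (x4 / sigma + w)**2 - k / r**2 + u1,
        x4,
        -2 * sigma * (x2 / r) * (x4 / sigma + w) + sigma * u2 / r,
    ]

# x = 0 corresponds to the circular orbit r = sigma, theta = w t;
# with zero thrust the state should remain at this equilibrium.
sol = solve_ivp(f, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0], rtol=1e-9, atol=1e-12)
print(np.max(np.abs(sol.y)))  # stays (numerically) at zero
```

Starting the integration away from the origin instead would show the non-linear drift that the linearization in section 2.3 approximates.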
2.3 Linearization of Non-Linear Systems
Most differential equations and systems of differential equations encountered in practice are non-linear, and most real life problems are based on non-linear systems. Often we are unable to solve a non-linear differential equation, so we linearize the system to obtain a linear equation that can be solved easily. So our first concern here is to linearize the non-linear system. After linearizing it, we can apply the numerous linear analysis methods to study the nature (controllability) of the system.
2.3.1 Meaning of Non-linearity
In the linear world, the relation between cause and effect is constant and the relation is quite independent of magnitude. For instance, if a force of 1 newton, applied to a mass m, causes the mass to accelerate at a rate a, then according to a linear model a force of 100 newtons, applied to the same mass, will produce an acceleration of 100a. Strictly, a linear function f must satisfy the following two conditions, where it is assumed that the function operates on inputs $u_1(t)$, $u_2(t)$, $u_1(t) + u_2(t)$ and $\alpha u_1(t)$, where $\alpha$ is a scalar multiplier:
1. $f(u_1(t)) + f(u_2(t)) = f(u_1(t) + u_2(t))$
2. $f(\alpha u_1(t)) = \alpha f(u_1(t))$.
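These two conditions can be checked numerically for any candidate map; a minimal sketch (the sample inputs and tolerance are arbitrary choices):

```python
def is_linear_on(f, u1, u2, alpha, tol=1e-9):
    """Check additivity and homogeneity of f at the given sample inputs."""
    additive = abs(f(u1) + f(u2) - f(u1 + u2)) < tol
    homogeneous = abs(f(alpha * u1) - alpha * f(u1)) < tol
    return additive and homogeneous

print(is_linear_on(lambda u: 3 * u, 1.5, -2.0, 4.0))   # True: scaling is linear
print(is_linear_on(lambda u: u ** 2, 1.5, -2.0, 4.0))  # False: squaring is not
```

Passing such a spot check at a few inputs does not prove linearity, but failing it proves non-linearity.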
Any system whose input-output characteristic does not satisfy the above conditions is classified as a non-linear system. Thus, there is no unifying feature present in non-linear systems except the absence of linearity. Non-linear systems sometimes may not be capable of analytic description, they may sometimes be discontinuous or they may contain well understood smooth mathematical functions.
2.3.2 Why We Need Linearization?
The following statements are broadly true for non-linear systems, and they answer the question of why we need linearization:
1. Matrix and vector methods, transform methods, block-diagram algebra, frequency response methods, poles and zeros and root loci are all inapplicable.
2. Available methods of analysis are concerned almost entirely with providing limited stability information.
3. System design/synthesis methods scarcely exist.
4. Numerical simulation of non-linear systems may yield results that are misleading or at least difficult to interpret. This is because, in general, the behaviour of a non-linear system is structurally different in different regions of state space. Thus, the same system may be locally stable, unstable, heavily damped or oscillatory, according to the operating region in which it is tested. For a linear system, local and global behaviour are identical within a scaling factor: they are topologically the same. For a non-linear system it is in general meaningless to speak of global behaviour.
2.3.3 Approach to Linearization for Satellite Problem
We now linearize the non-linear system about the zero equilibrium solution to obtain a system in the form of the linear control system $\dot{x}(t) = Ax(t) + Bu(t)$. Linearizing the function $f(x, u)$ (described above) about $x = 0$, $u = 0$, we have
$$\tilde{f}(x, u) = f_x'(0,0)\,x(t) + f_u'(0,0)\,u(t) = Ax(t) + Bu(t)$$
where
$$A = f_x'(0,0) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \frac{\partial f_1}{\partial x_3} & \frac{\partial f_1}{\partial x_4} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \frac{\partial f_2}{\partial x_3} & \frac{\partial f_2}{\partial x_4} \\ \frac{\partial f_3}{\partial x_1} & \frac{\partial f_3}{\partial x_2} & \frac{\partial f_3}{\partial x_3} & \frac{\partial f_3}{\partial x_4} \\ \frac{\partial f_4}{\partial x_1} & \frac{\partial f_4}{\partial x_2} & \frac{\partial f_4}{\partial x_3} & \frac{\partial f_4}{\partial x_4} \end{bmatrix}_{(0,0)} \quad (2.28)$$
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 3w^2 & 0 & 0 & 2w \\ 0 & 0 & 0 & 1 \\ 0 & -2w & 0 & 0 \end{bmatrix} \quad (2.29)$$
and
$$B = f_u'(0,0) = \begin{bmatrix} \frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} \\ \frac{\partial f_2}{\partial u_1} & \frac{\partial f_2}{\partial u_2} \\ \frac{\partial f_3}{\partial u_1} & \frac{\partial f_3}{\partial u_2} \\ \frac{\partial f_4}{\partial u_1} & \frac{\partial f_4}{\partial u_2} \end{bmatrix}_{(0,0)} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} \quad (2.30)$$
Here $\sigma$ is normalized to 1. The linearized system thus obtained is given by
$$\dot{x}(t) = Ax(t) + Bu(t) \quad (2.31)$$
where $A$ and $B$ are the matrices obtained in equations (2.29) and (2.30) respectively. Thus we can write
$$\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 3w^2 & 0 & 0 & 2w \\ 0 & 0 & 0 & 1 \\ 0 & -2w & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} u_1(t) \\ u_2(t) \end{bmatrix} \quad (2.32)$$
where $u_1(t)$ and $u_2(t)$ are the radial thrust and tangential thrust respectively. ******
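The matrices (2.29) and (2.30) can be cross-checked by differentiating the non-linear right-hand side numerically at the origin. A sketch, assuming σ = 1 and the illustrative value w = 1 (so k = σ³w² = 1):

```python
import numpy as np

sigma, w = 1.0, 1.0          # sigma normalized to 1; w = 1 is an illustrative choice
k = sigma**3 * w**2

def f(x, u):
    """Non-linear satellite dynamics in reduced coordinates."""
    x1, x2, x3, x4 = x
    u1, u2 = u
    r = x1 + sigma
    return np.array([
        x2,
        r * (x4 / sigma + w)**2 - k / r**2 + u1,
        x4,
        -2 * sigma * (x2 / r) * (x4 / sigma + w) + sigma * u2 / r,
    ])

def jacobian(g, z0, h=1e-6):
    """Central-difference Jacobian of g at z0."""
    m, n = len(g(z0)), len(z0)
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (g(z0 + e) - g(z0 - e)) / (2 * h)
    return J

A = jacobian(lambda x: f(x, np.zeros(2)), np.zeros(4))
B = jacobian(lambda u: f(np.zeros(4), u), np.zeros(2))

A_expected = np.array([[0, 1, 0, 0],
                       [3 * w**2, 0, 0, 2 * w],
                       [0, 0, 0, 1],
                       [0, -2 * w, 0, 0]])
B_expected = np.array([[0, 0], [1, 0], [0, 0], [0, 1]])
print(np.allclose(A, A_expected, atol=1e-4), np.allclose(B, B_expected, atol=1e-6))
```

The finite-difference Jacobians reproduce the analytic matrices, confirming the hand linearization above.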
Chapter 3 Controllability

3.1 Introduction
Consider the $n$-dimensional control system described by the vector differential equation
$$\dot{x}(t) = A(t)x(t) + B(t)u(t), \qquad t \in (t_0, T) \quad (3.1)$$
$$x(t_0) = x_0$$
where $A(t) = (a_{ij}(t))_{n\times n}$ is an $n \times n$ matrix whose entries are continuous functions of $t$ defined on $I = [t_0, t_1]$, and $B(t) = (b_{ij}(t))_{n\times m}$ is an $n \times m$ matrix whose entries are continuous functions of $t$ on $I$. The state $x(t)$ is an $n$-vector and the control $u(t)$ is an $m$-vector. We first deal with the controllability of a one dimensional system described by a scalar differential equation. Consider the one dimensional system
$$\frac{dx}{dt} = -2x, \qquad x(0) = 3 \quad (3.2)$$
The solution of the system is $x(t) = 3e^{-2t}$ and its graph is shown in the following figure.

Figure 3.1: Depicting the solution curve of the system (3.2).
If we add a nonhomogeneous term $\sin(t)$, called the forcing term or control term, then the system becomes
$$\frac{dx}{dt} = -2x + \sin(t), \qquad x(0) = 3 \quad (3.3)$$
and the solution curve or trajectory of the system is changed accordingly.

Figure 3.2: Depicting the solution curve of the system (3.3).

That is, the evolution of the system is changed by adding the new forcing term. Thus a system with a forcing term is called a control system. More about controllability is described after discussing the fundamental and transition matrices in the next section.
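The two trajectories in Figures 3.1 and 3.2 can be reproduced numerically; a sketch using `scipy.integrate.solve_ivp` (the final time of 5 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

t_span, x0 = (0.0, 5.0), [3.0]
free   = solve_ivp(lambda t, x: -2 * x, t_span, x0, rtol=1e-9, atol=1e-12)
forced = solve_ivp(lambda t, x: -2 * x + np.sin(t), t_span, x0, rtol=1e-9, atol=1e-12)

# The free system follows the closed form x(t) = 3 e^{-2t};
# the forcing term visibly changes the evolution.
print(abs(free.y[0, -1] - 3 * np.exp(-2 * 5.0)))  # ~0: matches the closed form
print(abs(forced.y[0, -1] - free.y[0, -1]))       # clearly nonzero: trajectories differ
```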
3.2 Fundamental and Transition Matrix

3.2.1 Fundamental Matrix
Before defining the fundamental matrix, we shall first discuss some important propositions. Consider the homogeneous linear system
$$\dot{x}(t) = A(t)x(t), \qquad x(t_0) = x_0 \quad (3.4)$$
where $x(t) \in \mathbb{R}^n$ and $A(t)$ is an $n \times n$ matrix whose entries $a_{ij}(t)$ are continuous on the interval $[t_0, t_1]$.

Proposition 1: If $\phi_1(t), \phi_2(t), ..., \phi_n(t)$ are solutions of system (3.4) with initial conditions $x_1, x_2, ..., x_n$, then their linear combination
$$\phi(t) = \sum_{i=1}^{n} \alpha_i \phi_i(t), \qquad \alpha_i \in \mathbb{R} \quad (3.5)$$
is also a solution of (3.4) with initial condition
$$\phi(t_0) = \sum_{i=1}^{n} \alpha_i x_i \quad (3.6)$$
Proof: Differentiating equation (3.5),
$$\frac{d}{dt}\phi(t) = \sum_{i=1}^{n} \alpha_i \frac{d}{dt}\phi_i(t) = \sum_{i=1}^{n} \alpha_i A(t)\phi_i(t) = A(t)\sum_{i=1}^{n} \alpha_i \phi_i(t) = A(t)\phi(t)$$
Thus $\sum_{i=1}^{n} \alpha_i \phi_i(t)$ satisfies the system. This proves the proposition.
Proposition 2: Let $\phi_1(t), \phi_2(t), ..., \phi_n(t)$ be solutions of system (3.4) on $[t_0, t_1]$ and let $s \in [t_0, t_1]$. Then $\phi_1(\cdot), \phi_2(\cdot), ..., \phi_n(\cdot)$ are linearly independent solutions if and only if $\phi_1(s), \phi_2(s), ..., \phi_n(s)$ are linearly independent in $\mathbb{R}^n$.

Proof: Suppose $\phi_1(s), ..., \phi_n(s)$ are linearly dependent in $\mathbb{R}^n$, so that $\sum_{i=1}^{n} \alpha_i \phi_i(s) = 0$ for some non-zero $(\alpha_1, ..., \alpha_n) \in \mathbb{R}^n$. Then $\phi(t) = \sum_{i=1}^{n} \alpha_i \phi_i(t)$ is a solution of (3.4) vanishing at $t = s$, so by uniqueness $\phi(t) \equiv 0$ on $[t_0, t_1]$, which means the functions $\phi_1(\cdot), ..., \phi_n(\cdot)$ are linearly dependent. Conversely, if the functions $\phi_1(\cdot), ..., \phi_n(\cdot)$ are linearly dependent, then $\sum_{i=1}^{n} \alpha_i \phi_i(t) = 0$ for all $t \in [t_0, t_1]$ with some non-zero $\alpha$; evaluating at $t = s$ shows that $\phi_1(s), ..., \phi_n(s)$ are linearly dependent in $\mathbb{R}^n$. This proves the proposition.

Definition 3.2.1.1. Fundamental Matrix: An $n \times n$ matrix function $\phi(\cdot)$ is said to be a fundamental matrix for the homogeneous system (3.4) if the $n$ columns of $\phi$ are linearly independent solutions of (3.4).

Proposition 3: A necessary and sufficient condition for a solution matrix $\phi(t)$ of
$$\frac{d}{dt}\phi(t) = A(t)\phi(t); \qquad t \in [t_0, t_1]$$
to be a fundamental matrix for the system is that $\det\phi(t) \neq 0$ for $t \in [t_0, t_1]$.

Proof: If $\det\phi(t) = 0$ for some $t \in [t_0, t_1]$ then by Proposition 2 the columns of $\phi(t)$ are not linearly independent solutions of (3.4). Conversely, if $\phi(t)$ is a fundamental matrix for (3.4) then by Proposition 2 we have $\det\phi(t) \neq 0$ for every $t \in [t_0, t_1]$.
Proposition 4: If $\phi(t)$ is a fundamental matrix for (3.4) and $C$ is a constant non-singular matrix, then $\phi(t)C$ is again a fundamental matrix for (3.4):
$$\frac{d}{dt}[\phi(t)C] = A(t)\phi(t)C; \qquad t \in [t_0, t_1].$$
Moreover, every fundamental matrix of (3.4) is of this type for some non-singular matrix $C$.

Proof: Since $\phi(t)$ and $C$ are both non-singular, $\phi(t)C$ is also non-singular and satisfies the system, so $\phi(t)C$ is a fundamental matrix. Now, if $\phi_1$ and $\phi_2$ are two fundamental matrices of (3.4), we show that $\phi_1 = \phi_2 C$ for some constant non-singular matrix $C$. Let $\psi = \phi_2^{-1}\phi_1$, so that $\phi_1 = \phi_2\psi$. Differentiating,
$$\frac{d}{dt}\phi_1(t) = \frac{d\phi_2}{dt}\psi + \phi_2\frac{d\psi}{dt}$$
$$A\phi_1(t) = A\phi_2\psi + \phi_2\frac{d\psi}{dt} = A\phi_1 + \phi_2\frac{d\psi}{dt}$$
Therefore $\phi_2\frac{d\psi}{dt} = 0$ and hence $\frac{d\psi}{dt} = 0$, which implies that $\psi$ is a constant matrix; $\psi$ is non-singular because $\phi_1$ and $\phi_2$ are. This proves the proposition.
3.2.2 Transition Matrix
Definition 3.2.2.1. Transition Matrix: If $\phi(t)$ is a fundamental matrix then $\phi(t_0)$ is non-singular. Let $C = \phi^{-1}(t_0)$. Then by Proposition 4, $\phi(t)C = \phi(t)\phi^{-1}(t_0)$ is a fundamental matrix of (3.4), known as the transition matrix corresponding to the linear homogeneous system (3.4) and denoted by $\phi(t, t_0)$; that is,
$$\phi(t, t_0) = \phi(t)\phi^{-1}(t_0)$$

Properties of the Transition Matrix $\phi(t, t_0)$
1. $\phi(t_0, t_0) = I$, the $n \times n$ identity matrix.
Proof: $\phi(t_0, t_0) = \phi(t_0)\phi^{-1}(t_0) = I$ by the definition of the transition matrix.
2. $\phi(t, t_1) = \phi(t, t_k)\,\phi(t_k, t_1)$
Proof: Taking the right-hand side,
$$\phi(t, t_k)\,\phi(t_k, t_1) = \phi(t)\phi^{-1}(t_k)\,\phi(t_k)\phi^{-1}(t_1) = \phi(t)I_n\phi^{-1}(t_1) = \phi(t)\phi^{-1}(t_1) = \phi(t, t_1)$$
3. The transition matrix is invertible.
Proof: $\phi(t, t_0) = \phi(t)\phi^{-1}(t_0)$, and both $\phi(t)$ and $\phi^{-1}(t_0)$ are non-singular, so $\phi(t, t_0)$ is non-singular and hence invertible; moreover $[\phi(t, t_0)]^{-1} = \phi(t_0, t)$.
4. Abel-Jacobi-Liouville formula:
$$\det\phi(t, t_0) = \exp\int_{t_0}^{t} \mathrm{tr}\,A(s)\,ds.$$
Proof: Let $\phi_{ij}$ and $a_{ij}$ be the elements in the $i$th row and $j$th column of $\phi(t)$ and $A(t)$ respectively. Since $\phi(t)$ satisfies the matrix differential equation
$$\frac{d}{dt}\phi(t) = A(t)\phi(t); \qquad t \in [t_0, t_1],$$
we have
$$\frac{d}{dt}\phi_{ij}(t) = \sum_{k=1}^{n} a_{ik}(t)\phi_{kj}(t); \qquad i, j = 1, 2, 3, ..., n. \quad (3.7)$$
The derivative of the determinant of $\phi(t)$ is a sum of $n$ determinants,
$$\frac{d}{dt}\det\phi(t) = \begin{vmatrix} \dot\phi_{11} & \dot\phi_{12} & \cdots & \dot\phi_{1n} \\ \phi_{21} & \phi_{22} & \cdots & \phi_{2n} \\ \vdots & \vdots & & \vdots \\ \phi_{n1} & \phi_{n2} & \cdots & \phi_{nn} \end{vmatrix} + \begin{vmatrix} \phi_{11} & \phi_{12} & \cdots & \phi_{1n} \\ \dot\phi_{21} & \dot\phi_{22} & \cdots & \dot\phi_{2n} \\ \vdots & \vdots & & \vdots \\ \phi_{n1} & \phi_{n2} & \cdots & \phi_{nn} \end{vmatrix} + \cdots + \begin{vmatrix} \phi_{11} & \phi_{12} & \cdots & \phi_{1n} \\ \phi_{21} & \phi_{22} & \cdots & \phi_{2n} \\ \vdots & \vdots & & \vdots \\ \dot\phi_{n1} & \dot\phi_{n2} & \cdots & \dot\phi_{nn} \end{vmatrix} \quad (3.8)$$
Using equation (3.7), the first determinant becomes
$$\begin{vmatrix} \sum_k a_{1k}\phi_{k1} & \sum_k a_{1k}\phi_{k2} & \cdots & \sum_k a_{1k}\phi_{kn} \\ \phi_{21} & \phi_{22} & \cdots & \phi_{2n} \\ \vdots & \vdots & & \vdots \\ \phi_{n1} & \phi_{n2} & \cdots & \phi_{nn} \end{vmatrix}$$
Applying the row transformation $R_1 \to R_1 - (a_{12}R_2 + a_{13}R_3 + \cdots + a_{1n}R_n)$, we have
$$\begin{vmatrix} a_{11}\phi_{11} & a_{11}\phi_{12} & \cdots & a_{11}\phi_{1n} \\ \phi_{21} & \phi_{22} & \cdots & \phi_{2n} \\ \vdots & \vdots & & \vdots \\ \phi_{n1} & \phi_{n2} & \cdots & \phi_{nn} \end{vmatrix} = a_{11}\det\phi(t).$$
CHAPTER 3. CONTROLLABILITY Similarly, second determinant will be a22 detφ(t) and nth determinant will be ann detφ(t). Thus (3.8) will give, d detφ(t) = (a12 + a13 + ... + a1n )detφ(t) dt d detφ(t) = tr A(t) detφ(t) dt This has a solution, t
Z
tr A(s) ds
detφ(t) = detφ(t0 ).exp t0
t0 ∈ [t0 , t] we have, Z t 1 tr A(s) ds detφ(t).detφ (t0 ) = exp
for the initial condition detφ(t0 ),
t0 −1
det φ(t).φ (t0 ) = exp
Z
t
tr A(s) ds
t0
Z
t
detφ(t, t0 ) = exp
tr A(s) ds
(3.9)
t0
Hence the equation (3.9) is the required equation, which is known as Abel-Jacobi-Liouville formula
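For a time-invariant system, A(t) ≡ A, the transition matrix is φ(t, t₀) = e^{A(t−t₀)}, and the properties above, including formula (3.9), can be verified numerically. A sketch with an arbitrary illustrative 2 × 2 matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # arbitrary illustrative matrix

def phi(t, t0):
    """Transition matrix of x' = Ax for constant A."""
    return expm(A * (t - t0))

t0, tk, t = 0.0, 0.7, 1.5
print(np.allclose(phi(t0, t0), np.eye(2)))                 # property 1: phi(t0, t0) = I
print(np.allclose(phi(t, t0), phi(t, tk) @ phi(tk, t0)))   # property 2: composition
print(np.allclose(np.linalg.inv(phi(t, t0)), phi(t0, t)))  # property 3: inverse
# Abel-Jacobi-Liouville: det phi(t, t0) = exp(integral of tr A) = exp((t - t0) tr A)
print(np.isclose(np.linalg.det(phi(t, t0)), np.exp((t - t0) * np.trace(A))))
```

All four checks print True for this matrix; the constant-A case reduces the integral in (3.9) to (t − t₀) tr A.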
3.3 Controllability of Linear Systems
Many areas of study are fortunate in that their titles trigger an immediate image of their scope and content. For instance, the names 'human anatomy', 'veterinary medicine', 'aeronautical engineering' and 'ancient history' all conjure up coherent visions of well-defined subjects. This is not so for control theory, although almost everyone is interested in control in the sense of being able to achieve defined objectives within some time frame. Control theory applies to everyday situations, as in many real life examples, just as well as it applies to the more exotic task of manoeuvring space vehicles. In fact, the concepts of control theory are simple and application-independent. The universality of control theory means that it is best considered as applied to an abstract situation that contains only the topological core possessed by all situations that need to be controlled. Such an abstract situation is called a system, which we described in section 1.1.1 of chapter 1.
3.3.1 Controllability Problem
The controllability problem is to check the existence of a forcing term or control function $u(t)$ such that the corresponding solution of the system will pass through a desired point $x(t_1) = x_1$. We now show that the scalar control system
$$\dot{x} = ax + bu; \qquad x(t_0) = x_0$$
is controllable. We produce a control function $u(t)$ such that the corresponding solution starting with $x(t_0) = x_0$ also satisfies $x(t_1) = x_1$. Choose a differentiable function $z(t)$ satisfying $z(t_0) = x_0$ and $z(t_1) = x_1$. For example, by the method of linear interpolation,
$$z - x_0 = \frac{x_1 - x_0}{t_1 - t_0}(t - t_0).$$
Thus the function
$$z(t) = x_0 + \frac{x_1 - x_0}{t_1 - t_0}(t - t_0)$$
satisfies $z(t_0) = x_0$, $z(t_1) = x_1$.

A Steering Control Using $z(t)$: The form of the control system $\dot{x} = ax + bu$ motivates a control of the form
$$u = \frac{1}{b}[\dot{x} - ax]$$
Thus we define a control using the function $z$ by
$$u = \frac{1}{b}[\dot{z} - az]$$
Then the control system becomes
$$\dot{x} = ax + b\cdot\frac{1}{b}[\dot{z} - az]$$
$$\dot{x} - \dot{z} = a(x - z)$$
$$\frac{d}{dt}(x - z) = a(x - z), \qquad x(t_0) - z(t_0) = 0$$
Let $y = x - z$; then
$$\frac{dy}{dt} = ay, \qquad y(t_0) = 0.$$
The unique solution of this system is $y(t) = x(t) - z(t) = 0$. That is, $x(t) = z(t)$ is the solution of the controlled system satisfying the required end conditions $x(t_0) = x_0$ and $x(t_1) = x_1$. Thus the control function
$$u = \frac{1}{b}[\dot{z}(t) - az(t)]$$
is a steering control.

Definition 3.3.1.1. Controllability: The system $\dot{x} = A(t)x + B(t)u$ is controllable on an interval $[t_0, t_1]$ if for all $x_0, x_1 \in \mathbb{R}^n$ there exists a control function $u \in L^2([t_0, t_1]; \mathbb{R}^m)$ such that the corresponding solution of the given system satisfying $x(t_0) = x_0$ also satisfies $x(t_1) = x_1$. Since $x_0$ and $x_1$ are arbitrary, this notion is also known as exact controllability or complete controllability.

Definition 3.3.1.2. Subspace Controllability: Let $D \subset \mathbb{R}^n$ be a subspace of $\mathbb{R}^n$. If the system is controllable for all $x_0, x_1 \in D$ then we say that the system is controllable to the subspace $D$.
Definition 3.3.1.3. Approximate Controllability: If $D$ is dense in the state space then the system is approximately controllable. But $\mathbb{R}^n$ is the only dense subspace of $\mathbb{R}^n$: for a subspace $D \subseteq \mathbb{R}^n$, $\bar{D} = \mathbb{R}^n$ implies $D = \mathbb{R}^n$. Thus approximate controllability is equivalent to complete controllability in $\mathbb{R}^n$.

Definition 3.3.1.4. Null Controllability: If every non-zero state $x_0 \in \mathbb{R}^n$ can be steered to the null state $0 \in \mathbb{R}^n$ by a steering control, then the system is said to be null controllable.

We now see examples of controllable and uncontrollable systems.
3.3.2 Example - Tank Problem
Let $x_1(t)$ be the water level in Tank 1 and $x_2(t)$ be the water level in Tank 2. Let $\alpha$ be the rate of outflow from Tank 1 and $\beta$ the rate of outflow from Tank 2. Let $u$ be the supply of water to the system.
Figure 3.3: Tank problem - Model 1.

The system (Model 1) can be modelled by the following differential equations:
$$\frac{dx_1}{dt} = -\alpha x_1 + u$$
$$\frac{dx_2}{dt} = \alpha x_1 - \beta x_2$$
that is,
$$\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -\alpha & 0 \\ \alpha & -\beta \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u$$
Similarly for Model 2;
Figure 3.4: Tank problem- Model 2.
$$\frac{dx_1}{dt} = -\alpha x_1$$
$$\frac{dx_2}{dt} = \alpha x_1 - \beta x_2 + u$$
that is,
$$\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -\alpha & 0 \\ \alpha & -\beta \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u$$
Obviously the second tank model is not controllable, because the supply cannot change the water level in Tank 1. We will see later that Model 1 is controllable whereas Model 2 is not. Controllability analysis arises in many real life problems, for example:
1. Rocket launching, satellite control and control of aircraft
2. Biological systems: sugar level in blood
3. Defence: missiles and anti-missile problems
4. Economy: regulating the inflation rate
5. Ecology: predator-prey systems
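The controllability claims for the two tank models can be anticipated using the Kalman rank condition treated later in this chapter; a sketch with the illustrative rates α = 1, β = 2:

```python
import numpy as np

alpha, beta = 1.0, 2.0                       # illustrative outflow rates
A = np.array([[-alpha, 0.0], [alpha, -beta]])
B1 = np.array([[1.0], [0.0]])                # Model 1: supply enters Tank 1
B2 = np.array([[0.0], [1.0]])                # Model 2: supply enters Tank 2

def ctrb_rank(A, B):
    """Rank of the controllability matrix [B, AB] for a 2-dimensional system."""
    return np.linalg.matrix_rank(np.hstack([B, A @ B]))

print(ctrb_rank(A, B1))  # 2: Model 1 is controllable
print(ctrb_rank(A, B2))  # 1: Model 2 is not controllable
```

The rank deficiency for Model 2 reflects exactly the physical argument above: the supply cannot influence Tank 1.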
3.3.3 Solution of the Controlled System using Transition Matrix
Consider the controlled system
$$\dot{x}(t) = A(t)x(t) + u(t); \qquad x(t_0) = x_0 \quad (3.10)$$
where $x(t), x_0, u(t) \in \mathbb{R}^n$. Let $\phi(t, t_0)$ be the transition matrix of the corresponding homogeneous system
$$\dot{x}(t) = A(t)x(t)$$
Consider the transformation $x(t) = \phi(t, t_0)z(t)$; then
$$\dot{x}(t) = \frac{d}{dt}[\phi(t, t_0)z(t)] = \dot\phi(t, t_0)z(t) + \phi(t, t_0)\dot{z}(t) = A(t)\phi(t, t_0)z(t) + \phi(t, t_0)\dot{z}(t) = A(t)x(t) + \phi(t, t_0)\dot{z}(t)$$
Putting this in (3.10) we have $\phi(t, t_0)\dot{z}(t) = u(t)$, or
$$\dot{z}(t) = \phi(t_0, t)u(t)$$
Integrating, we have
$$z(t) = z(t_0) + \int_{t_0}^{t} \phi(t_0, \tau)u(\tau)\,d\tau$$
i.e.,
$$\phi(t_0, t)x(t) = x_0 + \int_{t_0}^{t} \phi(t_0, \tau)u(\tau)\,d\tau$$
$$x(t) = \phi(t, t_0)x_0 + \int_{t_0}^{t} \phi(t, \tau)u(\tau)\,d\tau$$
This is known as the variation of parameters formula, and the last equation is the required solution of the given system. We first show that for linear systems complete controllability and null controllability are equivalent.

Theorem 3.3.3.1. The linear system (3.1) is completely controllable if and only if it is null controllable.

Proof. It is obvious that complete controllability implies null controllability. We now show that null controllability implies complete controllability. Suppose that $x_0$ is to be steered to $x_1$. Suppose that the system is null controllable and let $w_0 = x_0 - \phi(t_0, t_1)x_1$. Then there exists a control $u$ such that
$$0 = \phi(t_1, t_0)w_0 + \int_{t_0}^{t_1} \phi(t_1, \tau)B(\tau)u(\tau)\,d\tau$$
$$= \phi(t_1, t_0)[x_0 - \phi(t_0, t_1)x_1] + \int_{t_0}^{t_1} \phi(t_1, \tau)B(\tau)u(\tau)\,d\tau$$
$$= \phi(t_1, t_0)x_0 - x_1 + \int_{t_0}^{t_1} \phi(t_1, \tau)B(\tau)u(\tau)\,d\tau$$
Hence
$$x_1 = \phi(t_1, t_0)x_0 + \int_{t_0}^{t_1} \phi(t_1, \tau)B(\tau)u(\tau)\,d\tau = x(t_1),$$
so $u$ steers $x_0$ to $x_1$ during $[t_0, t_1]$.
3.3.4 Conditions for Controllability
The system (3.1) is controllable if and only if there exists u ∈ L2(I, Rm) such that

x1 = φ(t1, t0)x0 + ∫_{t0}^{t1} φ(t1, τ)B(τ)u(τ) dτ

i.e.,

x1 − φ(t1, t0)x0 = ∫_{t0}^{t1} φ(t1, τ)B(τ)u(τ) dτ

Define an operator C : L2(I, Rm) → Rn by

Cu = ∫_{t0}^{t1} φ(t1, τ)B(τ)u(τ) dτ

Obviously, C is a bounded linear operator and the range of C is a subspace of Rn. Since x0, x1 are arbitrary, the system is controllable if and only if C is onto. Range(C) is called the reachable set of the system.

Equivalent Results: The following statements are equivalent:
1. The linear system (3.1) is completely controllable.
2. C is onto.
3. C* is one-to-one.
4. CC* is one-to-one.

In the above result, the operator C* is the adjoint of the operator C. We now obtain the explicit form of C*.
3.4 Controllability Grammian
For finding the Controllability Grammian, first we find the adjoint operator. The operator C : L2(I, Rm) → Rn defines its adjoint C* : Rn → L2(I, Rm) in the following way:

⟨Cu, v⟩_{Rn} = ⟨∫_{t0}^{t1} φ(t1, τ)B(τ)u(τ) dτ, v⟩_{Rn}
            = ∫_{t0}^{t1} ⟨φ(t1, τ)B(τ)u(τ), v⟩_{Rn} dτ
            = ∫_{t0}^{t1} ⟨u(τ), B*(τ)φ*(t1, τ)v⟩_{Rm} dτ
            = ⟨u, B*(·)φ*(t1, ·)v⟩_{L2(I,Rm)}
            = ⟨u, C*v⟩_{L2(I,Rm)}

Hence

C*v(τ) = B*(τ)φ*(t1, τ)v

Using C* we get CC* in the form

CC* = ∫_{t0}^{t1} φ(t1, τ)B(τ)B*(τ)φ*(t1, τ) dτ
Observe that CC* : Rn → Rn is a bounded linear operator; thus CC* is an n × n matrix. From the previous result, the system (3.1) is controllable ⇔ C is onto ⇔ CC* is one-to-one ⇔ CC* is an invertible matrix. The matrix CC* is known as the Controllability Grammian for the linear system and is given by

W(t0, t1) = ∫_{t0}^{t1} φ(t1, τ)B(τ)B*(τ)φ*(t1, τ) dτ                    (3.11)
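For a time-invariant system, φ(t1, τ) = e^{A(t1−τ)}, so (3.11) can be approximated numerically and invertibility of W certifies controllability. The sketch below is not part of the thesis (which uses MATLAB); the double-integrator pair A = [[0,1],[0,0]], B = [0,1]^T is a hypothetical test case whose transition matrix e^{At} = [[1,t],[0,1]] is known in closed form.

```python
import numpy as np

def gramian(t0=0.0, t1=1.0, steps=2000):
    """Trapezoidal approximation of W(t0,t1) for the double integrator."""
    taus = np.linspace(t0, t1, steps + 1)
    dt = (t1 - t0) / steps
    W = np.zeros((2, 2))
    for i, tau in enumerate(taus):
        s = t1 - tau
        phiB = np.array([[s], [1.0]])         # e^{A(t1-tau)} B for this A, B
        wgt = 0.5 if i in (0, steps) else 1.0  # trapezoidal-rule weights
        W += wgt * (phiB @ phiB.T) * dt
    return W

W = gramian()
print(np.linalg.matrix_rank(W))  # 2 -> W invertible, so the system is controllable
```

The exact value here is W = [[1/3, 1/2], [1/2, 1]], which is positive definite.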
3.5 Kalman's Rank Condition for Time Invariant Systems
Theorem 3.5.0.1. If the matrices A and B are time-independent, then the linear system (3.1) is controllable if and only if

Rank(Q) = Rank[B | AB | ... | A^{n−1}B] = n

Proof. Suppose that the system is controllable. Then the operator C : L2(I, Rm) → Rn defined by

Cu = ∫_{t0}^{t1} φ(t1, τ)Bu(τ) dτ = ∫_{t0}^{t1} e^{A(t1−τ)}Bu(τ) dτ

is onto. We now prove that Rn = Range(C) ⊂ Range(Q). Let x ∈ Rn; then there exists u ∈ L2(I, Rm) such that

∫_{t0}^{t1} e^{A(t1−τ)}Bu(τ) dτ = x

Expanding e^{A(t1−τ)} by the Cayley-Hamilton theorem as a polynomial of degree at most n − 1 in A, with scalar coefficient functions P0, P1, ..., P_{n−1} of (t1 − τ),

∫_{t0}^{t1} [P0 I + P1 A + P2 A² + ... + P_{n−1}A^{n−1}]Bu(τ) dτ = x

⇒ x ∈ Range[B | AB | A²B | ... | A^{n−1}B]

Conversely, suppose that the rank condition holds but the system is not controllable. Then the Grammian W(t0, t1) is not invertible, so there exists v ≠ 0 in Rn such that W(t0, t1)v = 0, and hence v^T W(t0, t1)v = 0, i.e.

∫_{t0}^{t1} v^T φ(t1, τ)BB*φ*(t1, τ)v dτ = 0
⇒ ∫_{t0}^{t1} ‖B*φ*(t1, τ)v‖² dτ = 0
⇒ B*φ*(t1, t)v = 0,   t ∈ [t0, t1]
⇒ v^T φ(t1, t)B = 0,   t ∈ [t0, t1]
⇒ v^T e^{A(t1−t)}B = 0,   t ∈ [t0, t1]
Setting t = t1 gives v^T B = 0, and differentiating v^T e^{A(t1−t)}B = 0 repeatedly with respect to t and putting t = t1 gives v^T AB = 0, ..., v^T A^{n−1}B = 0. Hence v^T Q = 0 and Rank(Q) ≠ n; the rank condition is violated, which is a contradiction. Thus the system is controllable.
3.6 Examples: Tank Problem

Here we discuss the controllability of the systems given in section 3.3.2 of this chapter with the help of Kalman's Rank Condition for time-invariant systems.
3.6.1 Example 1: Tank Problem, Model 1

d  [x1]   [−α   0] [x1]   [1]
-- [x2] = [ α  −β] [x2] + [0] u
dt

Therefore,

Q = [B : AB] = [1  −α]
               [0   α]

Here rank of Q = 2. Hence the system is controllable.
3.6.2 Example 2: Tank Problem, Model 2

d  [x1]   [−α   0] [x1]   [0]
-- [x2] = [ α  −β] [x2] + [1] u
dt

Therefore,

Q = [B : AB] = [0   0]
               [1  −β]

Here rank of Q = 1 ≠ 2. Hence the system is not controllable.
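Both tank models can be checked with a short numerical sketch (not from the thesis). The flow rates α = 0.5, β = 0.3 are sample values; any positive rates give the same ranks.

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix Q = [B | AB | ... | A^{n-1}B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

alpha, beta = 0.5, 0.3  # sample positive flow rates
A = np.array([[-alpha, 0.0], [alpha, -beta]])

B1 = np.array([[1.0], [0.0]])  # Model 1: inflow into Tank 1
B2 = np.array([[0.0], [1.0]])  # Model 2: inflow into Tank 2

print(kalman_rank(A, B1))  # 2 -> controllable
print(kalman_rank(A, B2))  # 1 -> not controllable
```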
3.7 Controllability of Satellite Problem

Here we discuss the main conclusions on the controllability of the artificial satellite problem, for which we have studied the above topics. In this section we describe the conclusions drawn from Kalman's controllability test and the effect of the thrusters (controllers) on the controllability of a satellite in its orbit of motion. We have already seen in section 2.3.3 of chapter 2 that the linearized equation of motion of the satellite problem is given by

ẋ(t) = Ax(t) + Bu(t)                    (3.12)
where the matrices A and B are those obtained in equations (2.29) and (2.30) respectively. Thus we can write

    [x1]   [ 0     1   0   0 ] [x1]   [0  0]
 d  [x2]   [3w²    0   0  2w ] [x2]   [1  0] [u1(t)]
 -- [x3] = [ 0     0   0   1 ] [x3] + [0  0] [u2(t)]      (3.13)
 dt [x4]   [ 0   −2w   0   0 ] [x4]   [0  1]

where u1(t) and u2(t) are the radial thrust and tangential thrust respectively.
3.7.1 Controllability of Artificial Satellite Problem by Kalman's Rank Condition
For the linearized satellite system, we can easily compute the controllability matrix

Q = [B : AB : A²B : A³B]

    [0  0    1    0    0    2w   −w²    0  ]
Q = [1  0    0   2w   −w²    0    0   −2w³ ]
    [0  0    0    1   −2w    0    0   −4w² ]
    [0  1   −2w   0    0   −4w²  2w³    0  ]

We can verify that the rank of Q is 4, and hence the linearized motion of the satellite is controllable.
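The rank computation can be verified numerically; the sketch below (not from the thesis, which uses MATLAB) takes w = 1 as a sample value, since the rank is the same for any w ≠ 0.

```python
import numpy as np

w = 1.0  # sample orbital angular velocity; rank is 4 for any w != 0
A = np.array([[0, 1, 0, 0],
              [3*w**2, 0, 0, 2*w],
              [0, 0, 0, 1],
              [0, -2*w, 0, 0]], dtype=float)
B = np.array([[0, 0],
              [1, 0],
              [0, 0],
              [0, 1]], dtype=float)

# Controllability matrix Q = [B : AB : A^2 B : A^3 B]
Q = np.hstack([B, A @ B, A @ A @ B, A @ A @ A @ B])
print(np.linalg.matrix_rank(Q))  # 4 -> linearized satellite motion is controllable
```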
3.7.2 Effect of Controllers or Thrusters on Controllability

It is very interesting to ask the question: "What happens if one of the controllers or thrusters becomes inoperative?" Here 'inoperative' means that either u1 (radial thruster) or u2 (tangential thruster) fails to act on the motion of the satellite, that is, one of them becomes zero.

Case 1: First we set u2 = 0, and hence B reduces to

B1 = [0 1 0 0]^T

So the controllability matrix becomes

Q1 = [B1 : AB1 : A²B1 : A³B1]
     [0    1    0   −w² ]
Q1 = [1    0   −w²   0  ]
     [0    0   −2w   0  ]
     [0  −2w    0   2w³ ]

As Q1 has rank 3, the system is not controllable with the radial thruster alone.
Case 2: On the other hand, when the radial thruster fails, that is u1 = 0, B reduces to

B2 = [0 0 0 1]^T

So the controllability matrix becomes

Q2 = [B2 : AB2 : A²B2 : A³B2]

     [0    0    2w    0  ]
Q2 = [0   2w    0   −2w³ ]
     [0    1    0   −4w² ]
     [1    0  −4w²    0  ]
As Q2 has rank 4, the absence of the radial thruster does not affect the controllability of the system; that is, the system is still controllable.

Conclusion: By observing both cases we can say that "the loss of the radial thrusters does not destroy the controllability of the satellite, whereas the loss of the tangential thrusters does."
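Both failure cases can be checked with the same numeric sketch as before (w = 1 is a sample value; any nonzero w gives the same ranks):

```python
import numpy as np

w = 1.0  # sample value
A = np.array([[0, 1, 0, 0],
              [3*w**2, 0, 0, 2*w],
              [0, 0, 0, 1],
              [0, -2*w, 0, 0]], dtype=float)

def ctrb_rank(A, B):
    """Rank of [B | AB | ... | A^{n-1}B]."""
    n = A.shape[0]
    cols, M = [], B
    for _ in range(n):
        cols.append(M)
        M = A @ M
    return np.linalg.matrix_rank(np.hstack(cols))

B1 = np.array([[0.0], [1.0], [0.0], [0.0]])  # radial thruster only (u2 = 0)
B2 = np.array([[0.0], [0.0], [0.0], [1.0]])  # tangential thruster only (u1 = 0)

print(ctrb_rank(A, B1))  # 3 -> not controllable with radial thrust alone
print(ctrb_rank(A, B2))  # 4 -> still controllable with tangential thrust alone
```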
3.7.3 MATLAB Graphs

1. Graphs for the controlled states of the linearized satellite system:
Figure 3.5: Controlled states of linearized satellite system.
2. Graphs for the Steering Control Profile of Linearized Satellite System:
Figure 3.6: Steering Control Profile of Linearized Satellite System.
******
Chapter 4
Observability

4.1 Introduction
In control theory, observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. The concept of observability was introduced by the Hungarian-American engineer Rudolf E. Kalman for linear dynamic systems. Any system (linear or non-linear, time variant or invariant) is said to be observable over a given time period if it is possible to determine the initial state uniquely from the knowledge of the output over that time period. In other words, observability is the problem of finding the state vector knowing only the output over some interval of time.

Consider the linear system

ẋ(t) = A(t)x(t) + f(t)                    (4.1)

where x ∈ Rn, and the n × n matrix A(t) and the n-vector f(t) are continuous and locally square integrable, respectively, on some time interval (t0, t1). Along with equation (4.1) we have a linear observation equation

y(t) = C(t)x(t) + Ĉ(t)f(t),   y ∈ Rm                    (4.2)

where C(t) = [Cij(t)]m×n is a matrix whose entries are continuous functions of t. Assume that the system (4.1) is in operation during a time interval [t0, t1] and that x(t0) = x0 ∈ Rn. Then we have

x(t) = X(t, t0)x0 + ∫_{t0}^{t} X(t, s)f(s) ds                    (4.3)
where X(t, t0) is the transition matrix. If f is a known function, for example f(t) = B(t)u(t) with u(t) a control, then in principle the term Ĉ(t)f(t) in (4.2) and C(t) times the integral in (4.3) could be subtracted from

y(t) = C(t)X(t, t0)x0 + C(t) ∫_{t0}^{t} X(t, s)f(s) ds + Ĉ(t)f(t)

to get the modified observation

ŷ(t) = C(t)X(t, t0)x0,   t0 ≤ t ≤ t1                    (4.4)
Here X(t, t0)x0 satisfies the homogeneous equation

ẋ(t) = A(t)x(t)                    (4.5)

and the observation (4.4) has the form

y(t) = C(t)x(t)                    (4.6)

Thus the information obtained from (4.1) and (4.2) reduces to the homogeneous system (4.5) and the homogeneous observation (4.6).

Definition 4.1.0.1. System (4.5) is said to be observable over a time period [t0, t1] if it is possible to determine uniquely the initial state x(t0) = x0 from the knowledge of the output function y(t) over [t0, t1]. The complete state of the system is known if the initial state x0 is known.

Define a linear operator L : Rn → L2([t0, t1]; Rm) by

(Lx0)(t) = C(t)X(t, t0)x0

Thus,

y(t) = (Lx0)(t),   t ∈ [t0, t1]                    (4.7)

The system is observable if and only if L is one-to-one; equivalently, the observations yi corresponding to linearly independent (L.I.) initial states are L.I. in L2([t0, t1]; Rm). ♦
4.2 Observability Grammian
For finding the Observability Grammian, first we find the adjoint operator L* : L2([t0, t1]; Rm) → Rn. We use the properties of the inner product:

⟨(Lx0)(·), w(·)⟩_{L2} = ∫_{t0}^{t1} ⟨C(t)X(t, t0)x0, w(t)⟩_{Rm} dt
                      = ∫_{t0}^{t1} ⟨x0, X*(t, t0)C*(t)w(t)⟩_{Rn} dt
                      = ⟨x0, ∫_{t0}^{t1} X*(t, t0)C*(t)w(t) dt⟩_{Rn}
                      = ⟨x0, L*w⟩_{Rn}

Thus,

L*w = ∫_{t0}^{t1} X*(t, t0)C*(t)w(t) dt                    (4.8)

The expression (4.8) is the required adjoint operator L*. Now the Observability Grammian is given by

M(t0, t1) = L*L = ∫_{t0}^{t1} X*(t, t0)C*(t)C(t)X(t, t0) dt                    (4.9)

where star (*) denotes the matrix transpose.
Theorem 4.2.0.1. The following statements are equivalent:
1. The linear system ẋ = A(t)x(t), y(t) = C(t)x(t) is observable.
2. The observations yi corresponding to L.I. initial states are L.I. in L2([t0, t1], Rm).
3. The operator L is one-to-one.
4. The adjoint operator L* is onto.
5. The operator L*L is onto.

Proof. 1. Consider the given system ẋ = A(t)x(t), y(t) = C(t)x(t) to be observable; we show that the other statements hold.
2. We have defined the linear operator L : Rn → L2([t0, t1]; Rm) by

(Lx0)(t) = C(t)X(t, t0)x0,   y(t) = (Lx0)(t),   t ∈ [t0, t1]

As the system is observable, the observations corresponding to L.I. initial states are L.I. in L2([t0, t1]; Rm), and hence L is one-to-one.
3. We restate this step as a theorem.

Theorem 4.2.0.2. Let x1, x2, ..., xk be vectors in Rn and let x1(t), x2(t), ..., xk(t) be the solutions of (4.5) on [t0, t1] with xi(t0) = xi, i = 1, 2, ..., k. Let yi be the observations on [t0, t1] defined by

yi(t) = C(t)xi(t),   t ∈ [t0, t1]

Then the linear system (4.5), (4.6) is observable on [t0, t1] if and only if the yi's are L.I. in L2([t0, t1], Rm) whenever the xi's are L.I. in Rn.

Proof. The solutions xi(t) are L.I. in L2([t0, t1], Rm) just in case the xi are L.I. in Rn. If (4.5), (4.6) is observable and

y(t) = Σ_{i=1}^{k} ci yi(t) = 0                    (4.10)

then the corresponding solution also vanishes; that is,

x(t) = Σ_{i=1}^{k} ci xi(t) = 0                    (4.11)

and in particular

Σ_{i=1}^{k} ci xi(t0) = Σ_{i=1}^{k} ci xi = 0.                    (4.12)

Suppose the xi's are L.I.; then c1 = c2 = ... = ck = 0. Hence from (4.10) we conclude that the yi's are L.I. On the other hand, if there exist L.I. x1, x2, ..., xk such that the corresponding observations y1(t), y2(t)
, ..., yk(t) are not L.I., that is, are dependent in L2([t0, t1], Rm), then there exist c1, c2, ..., ck, not all zero, such that

y(t) = Σ_{i=1}^{k} ci yi(t) = 0

Here y(t) is an identically vanishing observation on the solution

x(t) = Σ_{i=1}^{k} ci xi(t)

which is not the zero solution of (4.5), because x1 = x1(t0), x2 = x2(t0), ..., xk = xk(t0) are L.I. We conclude that the system (4.5), (4.6) is not observable.

4. Since the linear operator L : Rn → L2([t0, t1]; Rm) defined by (Lx0)(t) = C(t)X(t, t0)x0 is one-to-one, its adjoint L* is onto.
5. This follows from the above statements about L and L*.

Remark 4.2.0.3.
1. L*L : Rn → Rn is an n × n matrix, called the Observability Grammian.
2. The above theorem can also be seen from basic facts of linear algebra and functional analysis. ♦

Theorem 4.2.0.4. The linear system (4.5), (4.6) is observable on [t0, t1] if and only if the observability grammian matrix

M(t0, t1) = L*L = ∫_{t0}^{t1} X*(t, t0)C*(t)C(t)X(t, t0) dt
is positive definite.

Proof. The solution x(t) of (4.5) corresponding to the initial condition x(t0) = x0 is given by x(t) = X(t, t0)x0, and the observation is y(t) = C(t)x(t) = C(t)X(t, t0)x0. Now,

‖y‖² = ∫_{t0}^{t1} y*(t)y(t) dt = x0* [∫_{t0}^{t1} X*(t, t0)C*(t)C(t)X(t, t0) dt] x0 = x0* M(t0, t1) x0.

This is a quadratic form in x0, and clearly M(t0, t1) is a symmetric n × n matrix. If it is positive definite, then y = 0 ⇒ x0* M(t0, t1)x0 = 0 ⇒ x0 = 0, and (4.5), (4.6) is observable on [t0, t1]. If it is not positive definite, then there is some x0 ≠ 0 such that x0* M(t0, t1)x0 = 0. Then

x(t) = X(t, t0)x0 ≠ 0,   t ∈ [t0, t1]

but ‖y‖² = 0, so y = 0. That is, (4.5), (4.6) is not observable, which proves the theorem.
Theorem 4.2.0.5. A linear system is observable if and only if the observability grammian of the system is invertible.

Proof. To prove this, we reconstruct the initial state x0 using the adjoint operator L* and (4.7): y = Lx0. Operating with L* on both sides,

L*y = L*Lx0
⇔ x0 = (L*L)^{-1} L*y
⇔ x0 = [M(t0, t1)]^{-1} ∫_{t0}^{t1} X*(t, t0)C*(t)y(t) dt                    (4.13)

Clearly, equation (4.13) is valid if and only if the inverse of M(t0, t1) exists, which proves the theorem.
4.3 Kalman's Rank Condition for Time Invariant Systems
If A and C are time-independent matrices, then we have the following rank condition for observability. As it was given by Rudolf E. Kalman, it is known as Kalman's Rank Condition (K-R Test).

Theorem 4.3.0.1. The linear system ẋ(t) = Ax(t), y(t) = Cx(t) is observable if and only if the rank of the following observability matrix O is n:

    [ C       ]
    [ CA      ]
O = [ CA²     ]
    [ ⋮       ]
    [ CA^{n−1}]

Proof. The observation y(t) and its time derivatives (for the given system) are given by

y(t) = Ce^{At}x(0)
y′(t) = CAe^{At}x(0)
y″(t) = CA²e^{At}x(0)
⋮
y^{(n−1)}(t) = CA^{n−1}e^{At}x(0)

At t = 0, we have the relations

y(0) = Cx(0)
y′(0) = CAx(0)
y″(0) = CA²x(0)
⋮
y^{(n−1)}(0) = CA^{n−1}x(0)
This can be written in matrix form as

[ C       ]          [ y(0)        ]
[ CA      ]          [ y′(0)       ]
[ CA²     ] x(0)  =  [ y″(0)       ]
[ ⋮       ]          [ ⋮           ]
[ CA^{n−1}]          [ y^{(n−1)}(0)]

The initial state x(0) can be determined if the observability matrix (on the left-hand side) has full rank n. Hence the system is observable if Kalman's Rank Condition holds; the converse of the theorem can be proved easily.
4.4 Examples

Here we discuss some examples to check observability, covering linear time-variant and time-invariant systems.
4.4.1 Example 1

Problem: Consider a spring-mass system with unit mass and unit spring constant at the equilibrium position.

Solution: Consider the spring-mass system as shown in the following figure.

Figure 4.1: Spring-Mass System.

By Newton's law of motion we have the differential equation

my″(t) + ky(t) = f(t)                    (4.14)

with initial displacement y(0) = x0 and initial velocity y′(0) = v0. Let x1(t) = y(t) and x2(t) = y′(t); then the model reduces to the linear system

[ẋ1(t)]   [  0    1] [x1(t)]   [ 0 ]
[ẋ2(t)] = [−k/m   0] [x2(t)] + [1/m] f(t)
with initial condition

[x1(0)]   [x0]
[x2(0)] = [v0]

But here we are taking k = 1, m = 1 at the equilibrium position, so we have

[ẋ1]   [ 0  1] [x1]
[ẋ2] = [−1  0] [x2]

or ẋ1 = x2, ẋ2 = −x1, so that

A = [ 0  1]
    [−1  0]

Let us take the observation equation as

y(t) = x1(t) = [1  0] [x1]
                      [x2]

that is, C = [1 0]. Then the matrix O is given by

O = [ C ] = [1  0]   ⇒ Rank(O) = 2
    [CA ]   [0  1]

Since the rank of O is 2, by Kalman's Rank Condition (Theorem 4.3.0.1) the system is observable.
4.4.2 Example 2

Problem: Check the observability of the system of Example 1 by computing the Observability Grammian matrix.

Solution: Taking the same spring-mass system with unit mass and unit spring constant at the equilibrium position, let us observe the system over the time interval −π ≤ t ≤ 0. The fundamental matrix for the system ẍ + x = 0 is

X(t) = [ cos t   sin t]
       [−sin t   cos t]

and the transition matrix is given by X(t, −π) = X(t)X^{-1}(−π):

X(t, −π) = [−cos t   −sin t]                    (4.15)
           [ sin t   −cos t]

and

CX(t, −π) = [1  0] [−cos t   −sin t] = [−cos t   −sin t]
                   [ sin t   −cos t]

Then the observability grammian is given by

M(t0, t1) = ∫_{t0}^{t1} X*(t, t0)C*C X(t, t0) dt
⇒ M(−π, 0) = ∫_{−π}^{0} X*(t, −π)C*C X(t, −π) dt
           = [π/2    0 ]
             [ 0    π/2]

Thus the matrix M(−π, 0) is non-singular, that is, invertible; hence by Theorem 4.2.0.5 the system is observable over the given time period.
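The value M(−π, 0) = (π/2)I can be confirmed numerically using the closed-form transition matrix (4.15); this sketch (not from the thesis) approximates the integral with the trapezoidal rule.

```python
import numpy as np

def spring_mass_grammian(steps=4000):
    """Trapezoidal approximation of M(-pi, 0) for the spring-mass system."""
    ts = np.linspace(-np.pi, 0.0, steps + 1)
    dt = np.pi / steps
    M = np.zeros((2, 2))
    for i, t in enumerate(ts):
        # C X(t, -pi) with C = [1 0] and X(t, -pi) from (4.15): a 1x2 row.
        CX = np.array([[-np.cos(t), -np.sin(t)]])
        wgt = 0.5 if i in (0, steps) else 1.0  # trapezoidal-rule weights
        M += wgt * (CX.T @ CX) * dt
    return M

M = spring_mass_grammian()
print(np.round(M, 3))  # ≈ [[1.571, 0], [0, 1.571]], i.e. (π/2) I, invertible
```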
4.4.3 Example 3

Problem: Consider the system ẋ = Ax, y = Cx, where

A = [−2   1   1]       C = [0  0  1]
    [ 1  −2   1]
    [ 1   1  −2]

Solution: The given system is

[ẋ1]   [−2   1   1] [x1]
[ẋ2] = [ 1  −2   1] [x2]
[ẋ3]   [ 1   1  −2] [x3]

with observation equation y = [0 0 1]x. Then the observability matrix is

    [ C  ]   [ 0   0   1]
O = [ CA ] = [ 1   1  −2]   ⇒ Rank(O) = 2
    [CA² ]   [−3  −3   6]

Since the rank of O is 2 < 3, by Kalman's Rank Condition (Theorem 4.3.0.1) the system is not observable.
4.4.4 Example 4

Problem: Consider the system ẋ = Ax, y = Cx, where

A = [−2  −2   0]
    [ 0   0   1]
    [ 0  −3  −4]

with observation equation y = [1 0 1]x(t).

Solution: The given system is

[ẋ1]   [−2  −2   0] [x1]
[ẋ2] = [ 0   0   1] [x2]
[ẋ3]   [ 0  −3  −4] [x3]

with observation equation y = [1 0 1]x(t). Then the observability matrix is

    [ C  ]   [ 1   0   1]
O = [ CA ] = [−2  −5  −4]   ⇒ Rank(O) = 3
    [CA² ]   [ 4  16  11]
Since the rank of O is 3 = n, by Kalman's Rank Condition (Theorem 4.3.0.1) the system is observable.
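Both rank computations can be double-checked numerically with a short sketch (not from the thesis):

```python
import numpy as np

def obsv_rank(A, C):
    """Rank of O = [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    rows, M = [], C
    for _ in range(n):
        rows.append(M)
        M = M @ A
    return np.linalg.matrix_rank(np.vstack(rows))

# Example 3: only x3 is measured.
A3 = np.array([[-2., 1., 1.], [1., -2., 1.], [1., 1., -2.]])
C3 = np.array([[0., 0., 1.]])
print(obsv_rank(A3, C3))  # 2 -> not observable

# Example 4: the output y = x1 + x3.
A4 = np.array([[-2., -2., 0.], [0., 0., 1.], [0., -3., -4.]])
C4 = np.array([[1., 0., 1.]])
print(obsv_rank(A4, C4))  # 3 -> observable
```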
4.5 Observability of Satellite Problem

Problem: Consider a satellite of mass m orbiting the Earth under an inverse square law field. We assume that the satellite has thrusting capacity, with radial thrust u1 and tangential thrust u2, as shown in the following figure.

Solution:
Figure 4.2: Satellite motion around Earth.

After the mathematical modelling and linearization of the satellite problem, we obtained the system ẋ(t) = Ax(t) + Bu(t), or

    [x1]   [ 0     1   0   0 ] [x1]   [0  0]
 d  [x2]   [3w²    0   0  2w ] [x2]   [1  0] [u1(t)]
 -- [x3] = [ 0     0   0   1 ] [x3] + [0  0] [u2(t)]
 dt [x4]   [ 0   −2w   0   0 ] [x4]   [0  1]

where u1(t) and u2(t) are the radial thrust and tangential thrust respectively. Let us take the observation equation to be y(t) = Cx(t) with

C = [0  1  0  0]
    [0  0  0  1]
Now the observability matrix O is given by

    [ C  ]   [  0     1    0    0  ]
    [ CA ]   [  0     0    0    1  ]
O = [ CA²] = [ 3w²    0    0   2w  ]
    [ CA³]   [  0   −2w    0    0  ]
             [  0   −w²    0    0  ]
             [−6w³    0    0  −4w² ]
             [−3w⁴    0    0  −2w³ ]
             [  0    2w³   0    0  ]

⇒ Rank(O) = 3 ≠ n

Since the rank of O is 3, by Kalman's Rank Condition (Theorem 4.3.0.1) the system is not observable.
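The rank-3 result can be verified numerically; in this sketch (not from the thesis) w = 1 is a sample value, and the rank is 3 for any w ≠ 0.

```python
import numpy as np

w = 1.0  # sample value
A = np.array([[0, 1, 0, 0],
              [3*w**2, 0, 0, 2*w],
              [0, 0, 0, 1],
              [0, -2*w, 0, 0]], dtype=float)
C = np.array([[0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

# Observability matrix O = [C; CA; CA^2; CA^3]
O = np.vstack([C, C @ A, C @ A @ A, C @ A @ A @ A])
print(np.linalg.matrix_rank(O))  # 3 < 4 -> not observable from these measurements
```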
4.5.1 Effect of Controllers or Thrusters on Observability

Here we see the effect of the available measurements on the observability of the system.

Case 1: Only radial distance measurements are available:

y1(t) = [1  0  0  0] x(t) = C1 x(t)

Now the observability matrix O is given by

    [ C1  ]   [ 1    0   0   0 ]
O = [ C1A ] = [ 0    1   0   0 ]   ⇒ Rank(O) = 3 ≠ n
    [ C1A²]   [3w²   0   0  2w ]
    [ C1A³]   [ 0   −w²  0   0 ]

Since the rank of O is 3, by Kalman's Rank Condition (Theorem 4.3.0.1) the system is not observable. Thus the system is not observable with radial distance measurements alone.

Case 2: Only measurements of the angle are available:

y1(t) = [0  0  1  0] x(t) = C2 x(t)

Now the observability matrix O is given by
               [ C2  ]
Rank(O) = Rank [ C2A ] = 4 = n
               [ C2A²]
               [ C2A³]

Since the rank of O is 4, by Kalman's Rank Condition (Theorem 4.3.0.1) the system is observable. Thus, even with the measurement of the angle alone, the system is observable.

******
Chapter 5
Stability

5.1 Introduction
A stable system is one that, when perturbed from an equilibrium state, tends to return to that equilibrium state. Conversely, an unstable system is one that, when perturbed from equilibrium, deviates further, moving off with ever increasing deviation (linear system) or possibly moving towards a different equilibrium state (non-linear system). All usable dynamical systems are necessarily stable: either they are inherently stable or they have been made stable by active design means. For example, a ship should ride stably with its deck horizontal and tend to return to that position after being perturbed by wind and waves. Stability occupies a key position in control theory because the upper limit of the performance of a feedback control system is often set by stability considerations, although most practical designs will stay well away from the stability limit to avoid excessively oscillatory responses.
Definition 5.1.0.1. (Stability) Consider the dynamical system

ẋ = f(x, t)                    (5.1)

where x(t) is the state vector and f is a vector with components fi(x1, x2, ..., xn, t), i = 1, 2, ..., n. We assume that the fi are continuous and satisfy standard conditions, such as having continuous first partial derivatives, so that the solution of (5.1) exists and is unique for given initial conditions. An equilibrium state x = 0 is said to be:

1. Stable if for any positive scalar ε there exists a positive scalar δ such that ‖x(t0)‖ < δ ⇒ ‖x(t)‖ < ε, t ≥ t0.

2. Asymptotically stable if it is stable and, in addition, x(t) → 0 as t → ∞.

3. Unstable if it is not stable; that is, there exists an ε > 0 such that for every δ > 0 there exist an x(t0) with ‖x(t0)‖ < δ and a t1 > t0 such that ‖x(t1)‖ ≥ ε. If this holds for every x(t0) with ‖x(t0)‖ < δ, the equilibrium is completely unstable.
The above definitions are called 'stability in the sense of Lyapunov'. Regarded as a function of t in the n-dimensional state space, the solution x(t) of (5.1) is called a trajectory or motion. In two dimensions we can give the definitions a simple geometrical interpretation. If the origin O is stable, then, given the outer circle C with radius ε, there exists an inner circle C1 with radius δ1 such that trajectories starting within C1 never leave C. If O is asymptotically stable, then there is some circle C2 of radius δ2 with the same property as C1, but in addition trajectories starting inside C2 tend to O as t → ∞.
5.2 Linear System Stability
Consider the system

ẋ = Ax                    (5.2)

Theorem 5.2.0.1. The system (5.2) is asymptotically stable at x = 0 if and only if A is a stability matrix, i.e. all characteristic roots λk of A have negative real parts. The system (5.2) is unstable at x = 0 if Re(λk) > 0 for some k, and completely unstable if Re(λk) > 0 for all k.

Proof. The solution of (5.2) subject to x(0) = x0 is

x(t) = exp(At)x0                    (5.3)
With f(λ) = exp(λt) we have (using Sylvester's formula)

exp(At) = Σ_{k=1}^{q} (Zk1 + Zk2 t + ... + Zkαk t^{αk−1}) exp(λk t)

where the λk are the eigenvalues of A, αk is the power of the factor (λ − λk) in the minimal polynomial of A, and the Zkl are constant matrices determined entirely by A. Using properties of norms we obtain

‖exp(At)‖ ≤ Σ_{k=1}^{q} Σ_{l=1}^{αk} t^{l−1} ‖exp(λk t)‖ ‖Zkl‖

= Σ_{k=1}^{q} Σ_{l=1}^{αk} t^{l−1} exp[