Symbolic Computation for Nonlinear Systems Using Quotients over Skew Polynomial Ring

Miroslav Halás and Mikuláš Huba
Department of Automation and Control
Faculty of Electrical Engineering and Information Technology
Slovak University of Technology
Ilkovičova 3, 812 19 Bratislava, Slovakia

Abstract—Nonlinear control systems are more difficult to handle than linear ones, since the commutativity is not valid. Laplace transforms and transfer functions, which form a symbolic computation for linear systems, are therefore unavailable. Modern developments in nonlinear control systems are thus related mainly to the systematic use of differential algebraic methods. However, such methods also allow us to introduce a similar symbolic computation for nonlinear systems. To provide a basis for such a symbolic computation, the theory of non-commutative polynomials over the field of meromorphic functions is introduced. In that respect, differential operators, which act on one-forms, play a key role. They form the left skew polynomial ring. Quotients of such polynomials stand for transfer functions of nonlinear systems. In other words, the presented paper tries to show that there is no reason to believe the well known dogma saying that nonlinear systems have no transfer functions.

Index Terms—non-commutative rings, nonlinear systems, pseudo-linear algebra, skew polynomials, transfer functions.

I. INTRODUCTION

In linear systems the Laplace transformation plays an important role, since it enables us to design controllers and to switch between different types of system representations (mainly between a state-space representation and an input-output description). For those purposes we use the well known transfer functions. Obviously, the situation is different when systems are nonlinear. The commutativity, which is no longer valid, restrains us from employing linear theory and transfer functions. Each handbook dealing with principles of nonlinear systems states that nonlinear systems have no transfer functions. The same goes also for modern approaches, like the differential geometry of [5] or the nowadays popular and widely diffused algebraic point of view summarized, for instance, in [2]. Although there are some endeavours to introduce transfer functions for nonlinear systems, like the work of Leith and Leithead and their velocity-based linearization [6], none of them is widely accepted. Some possibilities were presented also in [4]. The presented paper shows that an algebraic point of view, like that in [2], can be extended by introducing algebraic objects that show properties similar to those transfer functions have for linear systems. From that point of view differential operators play a key role; they form a ring of skew polynomials. The scope of this paper is thus restricted to the main theoretical results relating to quotients of such non-commutative polynomials, since they provide a basis for the symbolic computation for nonlinear systems.

The paper is organized as follows. In §2, the key idea of our approach is depicted. Preliminaries of an algebraic point of view on nonlinear control systems and methods of pseudo-linear algebra are reviewed in §3. The symbolic computation for nonlinear systems and the main theoretical results are discussed in §4. Finally, conclusions are summarized in §5.

This work has been partially supported by the Slovak Grant Agency, Grant No. VEGA 1/3089/06.

II. PRINCIPLE

To begin with, a question can be asked as to whether it is possible to find an approach allowing us to introduce transfer functions of nonlinear systems such that they satisfy at least the two following properties:
--They completely characterize the nonlinear dynamics of a given system at any operating point. That is, we are not interested in approximations like those obtained from a linearization at a fixed operating point.
--They provide an input/output description of a given system. That is, we can express the output in terms of the input. This should allow us to introduce an algebra of transfer functions when combining systems in series, parallel or feedback connections.
The question itself implies that the answer is yes. To explain how to satisfy the former requirements, let us consider two simple nonlinear systems

y1 = g1(u1)
y2 = g2(u2)        (1)

where u1, u2, y1, y2 ∈ R and g1, g2 are differentiable functions from R to R. As is generally known, the commutativity is not valid when g1, g2 are nonlinear. In other words, the final behaviour depends on the order in which we combine the systems; for instance, the system y = g2(g1(u)) does not, in general, equal the system y = g1(g2(u)). However, we can formally avoid that problem by differentiating, since the derivative of a composite function is simply the product of the derivatives of its components. Thus, on differentiating (1)

dy1 = K1 du1
dy2 = K2 du2        (2)

where K1 = ∂g1/∂u1 and K2 = ∂g2/∂u2. Now, formally

dy = K1 K2 du        (3)

for both y = g2(g1(u)) and y = g1(g2(u)); of course, in the first case u2 = y1 and in the second u1 = y2. Evidently, the systems (1) can thus be described by K1 and K2, which we can think of as their transfer functions, since they satisfy both requirements above: they completely characterize the nonlinear dynamics and they allow us to express the output in terms of the input. A series connection of the systems can thus be described by K1 K2, no matter in what order we combine the nonlinear systems. That is the way to avoid the problem with the commutativity. The depicted principle can be extended also to nonlinear dynamic systems. Before that, however, an appropriate algebraic setting needs to be built. Hence, the next section deals with the necessary algebraic tools.
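As a minimal illustration of this principle (not part of the original development), the following SymPy sketch differentiates two composed scalar nonlinearities in both orders; the particular functions g1 and g2 are arbitrary choices made only for the example.

```python
import sympy as sp

u = sp.Symbol('u')

# Two illustrative (assumed) static nonlinearities
g1 = lambda w: sp.sin(w)
g2 = lambda w: w**3

# Differentials of the compositions y = g2(g1(u)) and y = g1(g2(u)):
# by the chain rule each is a product of the component derivatives,
# evaluated at the corresponding interconnection signal.
dy_21 = sp.diff(g2(g1(u)), u)        # d/du g2(g1(u)) = K2(g1(u)) * K1(u)
dy_12 = sp.diff(g1(g2(u)), u)        # d/du g1(g2(u)) = K1(g2(u)) * K2(u)

K1 = sp.diff(g1(u), u)               # K1 = dg1/du1
K2 = sp.diff(g2(u), u)               # K2 = dg2/du2

# Each series connection is described by the product K1*K2, with the
# inner signal substituted (u2 = y1 for one order, u1 = y2 for the other).
print(sp.simplify(dy_21 - K2.subs(u, g1(u)) * K1))   # 0
print(sp.simplify(dy_12 - K1.subs(u, g2(u)) * K2))   # 0
```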

III. PRELIMINARIES

A. Meromorphic functions and differential forms

The scope of our interest is restricted to situations in which the system properties are generic; that is, to systems defined by means of analytic functions. Analytic functions form an integral domain and can thus be embedded in a quotient field. Elements of such a quotient field are called meromorphic functions. This approach, which represents an algebraic point of view on nonlinear control systems, is widely discussed in [2]. In this section some of its algebraic tools and methods are briefly reviewed; the reader is referred to [2] for detailed technical constructions which are not repeated here.

The dynamic systems considered in this paper are objects described by a system of first order differential equations of the form

ẋ = f(x, u)
y = g(x, u)        (4)

where the entries of f and g are meromorphic functions, which we think of as elements of the quotient field of the ring of analytic functions, and x ∈ R^n, u ∈ R^m and y ∈ R^p denote the state, the input and the output of the system. Let K denote the field of meromorphic functions of x, u and a finite number of derivatives of u; that is, each element of K is a meromorphic function of the form F(x, u^(k)), k ≥ 0. The field K can be endowed with a differential structure determined by the system (4). To that effect, a derivative operator δ acting on K is defined as follows

δx_i = ẋ_i = f_i(x, u),   i = 1,...,n
δu_j^(k) = u_j^(k+1),   k ≥ 0, j = 1,...,m        (5)
δF(x, u^(k)) = Σ_{i=1}^{n} (∂F/∂x_i) δx_i + Σ_{j=1}^{m} Σ_{k≥0} (∂F/∂u_j^(k)) δu_j^(k)

We often use the notation δu = u̇, u^(2) = ü, etc. To handle system-theoretic properties of the nonlinear system (4), define a vector space spanned over K by the differentials of elements of K, namely

ε = span_K {dξ ; ξ ∈ K}        (6)

Elements of ε are called one-forms (or, more generally, differential forms). Any element of ε is a vector of the form

v = Σ_i α_i dξ_i        (7)

where all α_i ∈ K. We define a (differential) operator, denoted by d, from K to ε in the following way

d : K → ε ;  dF = Σ_{i=1}^{n} (∂F/∂x_i) dx_i + Σ_{j=1}^{m} Σ_{k≥0} (∂F/∂u_j^(k)) du_j^(k)        (8)

A one-form v ∈ ε is said to be exact if v = dF for some F ∈ K; dF is then usually referred to as the differential of F. Finally, the vector space ε can also be endowed with a differential structure by defining a derivative operator, which by abuse of notation is again denoted by δ, as follows

δv = v̇ = Σ_i [δ(α_i) dξ_i + α_i d(δξ_i)]        (9)
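A small SymPy sketch of the derivative operator (5) may help fix ideas; it is only an illustration (the helper name delta and the particular system are our own choices — the system used here reappears later in Example 3).

```python
import sympy as sp

x1, x2, u, u1, u2 = sp.symbols('x1 x2 u u1 u2')   # u1, u2 stand for u', u''

# System (4): x1' = x2, x2' = -x1*x2 - sin(x1) + u
delta_rules = {x1: x2, x2: -x1*x2 - sp.sin(x1) + u, u: u1, u1: u2}

def delta(F):
    """Derivative operator (5): derivative of F in K along the system."""
    return sum(sp.diff(F, v) * dv for v, dv in delta_rules.items())

F = x1 * x2                        # an element of K
print(sp.expand(delta(F)))         # x2**2 - x1**2*x2 - x1*sin(x1) + u*x1
print(sp.expand(delta(delta(F))))  # delta can be iterated: delta^2 F
```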

The reviewed algebraic point of view on nonlinear control systems is the starting point for handling system-theoretic properties of systems of the form (4). Although it directly treats many control problems, like transformations to canonical forms, linearization, the disturbance decoupling problem, etc., it does not provide the symbolic computation for nonlinear systems we are interested in. The aim of this paper is thus to show that the reviewed algebraic point of view can be extended by introducing skew polynomials which act as differential operators on the vector space ε. Quotients of such polynomials will be the scope of our attention, since they form the symbolic computation for nonlinear systems we are looking for. Before that, we introduce some necessary properties of such objects.

B. Pseudo-derivations, skew polynomials and pseudo-linear operators

Pseudo-derivations, skew polynomials and pseudo-linear operators are the basic objects of study of pseudo-linear algebra, also known as Ore algebra. Such an algebra deals with common properties of linear differential and difference operators. Here we only recall some results from [1] which fit our purposes. In the next parts, all fields are commutative and all rings are non-commutative, unless explicitly stated otherwise.

Let K be a field and σ : K → K an injective endomorphism of K; that is, σ(a + b) = σ(a) + σ(b) and σ(ab) = σ(a)σ(b) for any a, b ∈ K. A map δ : K → K which satisfies

δ(a + b) = δ(a) + δ(b)
δ(ab) = σ(a)δ(b) + δ(a)b        (10)

is called a pseudo-derivation. Note that if σ = 1 then (10) is just a derivation on K. The left skew polynomial ring given by σ and δ is the ring (K[x]; +, ·) of polynomials in the indeterminate x over K with the usual addition and the (non-commutative) multiplication given by the commutation rule

xa = σ(a)x + δ(a)        (11)

for any a ∈ K. Since K[x] usually denotes a commutative polynomial ring, we denote the left skew polynomial ring by K[x;σ,δ]. Elements of such a ring are called skew polynomials, non-commutative polynomials or Ore polynomials [8].

Let V be a vector space over K. A map θ : V → V is called pseudo-linear if

θ(u + v) = θ(u) + θ(v)
θ(au) = σ(a)θ(u) + δ(a)u        (12)

for any a ∈ K and u, v ∈ V. Again, if σ = 1 then (12) is like a derivation on V. Skew polynomials can act on the vector space V and thus represent operators. In order to avoid notation of the form θ(θ...θ(u)) for some u ∈ V and θ pseudo-linear, it is preferable to define such an operation recursively, as θ^k u = θ(θ^(k-1) u) for any k ≥ 1, setting θ^0 u = u. Now, any pseudo-linear map θ : V → V induces an action, denoted by ·,

· : K[x;σ,δ] × V → V ;  (Σ_{i=0}^{n} a_i x^i)·u = Σ_{i=0}^{n} a_i θ^i u        (13)

for any u ∈ V. In the following parts the symbol · is usually omitted. It is important to note that multiplication in K[x;σ,δ] corresponds to the composition of operators. Moreover, one has (pq)u = p(qu) for any p, q ∈ K[x;σ,δ] and u ∈ V.
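To make the commutation rule (11) concrete, the following sketch (our own illustration) multiplies skew polynomials over K with σ = 1 and an assumed derivation δ = d/dt; a polynomial is represented as the list of its coefficients indexed by the power of the indeterminate.

```python
import sympy as sp

t = sp.Symbol('t')
delta = lambda a: sp.diff(sp.sympify(a), t)   # assumed derivation on K (sigma = 1)

def skew_mul(p, q):
    """Multiply skew polynomials p, q in K[x; 1, delta].

    p, q are coefficient lists [a0, a1, ...] meaning a0 + a1*x + a2*x**2 + ...
    The product follows from the commutation rule x*a = a*x + delta(a), which
    for sigma = 1 gives x**i * a = sum_j binom(i, j) * delta^j(a) * x**(i - j).
    """
    res = [sp.Integer(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for k, b in enumerate(q):
            d = sp.sympify(b)
            for j in range(i + 1):
                res[k + i - j] += a * sp.binomial(i, j) * d
                d = delta(d)
    return [sp.simplify(c) for c in res]

# x * t gives t*x + 1 (coefficients [1, t]), while t * x stays [0, t]:
print(skew_mul([0, 1], [t]))   # [1, t]
print(skew_mul([t], [0, 1]))   # [0, t]
```

Multiplying in the two orders gives different results, which is exactly the non-commutativity the paper exploits.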

C. Quotients of skew polynomials

The left skew polynomial ring K[x;σ,δ] has no zero divisors. In fact, it is endowed with a right Euclidean division algorithm and, if σ is an automorphism, also with a left one. Moreover, K[x;σ,δ] satisfies the so-called Ore condition.

Ore condition: For all non-zero a, b ∈ K[x;σ,δ], there exist non-zero a1, b1 ∈ K[x;σ,δ] such that a1 b = b1 a.

In other words, any two elements of K[x;σ,δ] have a common left multiple. K[x;σ,δ] can thus be embedded in a non-commutative quotient field [7], [8] by defining quotients as

a/b = b^(-1)·a        (14)

where a, b ∈ K[x;σ,δ] and b ≠ 0. Addition is defined by reducing two fractions to the same denominator, namely

a1/b1 + a2/b2 = (β2 a1 + β1 a2)/(β2 b1)        (15)

where β2 b1 = β1 b2 by the Ore condition. Multiplication is defined by

(a1/b1)·(a2/b2) = (α1 a2)/(β2 b1)        (16)

where β2 a1 = α1 b2, again by the Ore condition. We denote the resulting quotient field of skew polynomials by K⟨x;σ,δ⟩.
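As a small worked illustration of the Ore condition (not taken from the paper), consider the ring K[x;1,δ] with σ = 1, and take a = x together with a non-zero b = g ∈ K. A common left multiple is found from a first order ansatz: using xg = gx + δ(g),

(x − δ(g)g^(-1))·g = xg − δ(g) = gx = g·x,

so a1 = x − δ(g)g^(-1) and b1 = g satisfy a1 b = b1 a.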

IV. SYMBOLIC COMPUTATION

It is generally known that the Laplace transformation plays a key role in dealing with linear systems. It enables us to design control algorithms, to switch from an input-output description of a linear system to a state-space representation, etc. Accordingly, it is also generally known that such a symbolic computation is not available for nonlinear systems. The same is stated in [2]. In other words, everybody knows that nonlinear systems have no transfer functions. However, this statement is a dogma which there is no reason to believe. There exists an interesting possibility to combine the two approaches mentioned above: differential forms, which directly treat system-theoretic properties of nonlinear systems, and pseudo-linear algebra, which deals with linear differential operators. Such a point of view makes it possible to introduce a symbolic computation for nonlinear control systems. This is discussed in the following parts.

A. Skew polynomials as operators over one-forms

Firstly, as in the previous section, in order to avoid notation of the form δ(δ...δ(F)) or δ(δ...δ(v)) for higher order derivatives of F ∈ K or v ∈ ε, it is preferable to define such operations recursively, as

δ^k F = δ(δ^(k-1) F)
δ^k v = δ(δ^(k-1) v)        (17)

for any k ≥ 1, setting δ^0 F = F and δ^0 v = v. Now, methods of pseudo-linear algebra can be applied to differential forms. It is clear that the derivative operator δ acting on K (5) is a pseudo-derivation (10) with respect to σ = 1. Consequently, the derivative operator acting on ε (9), which by abuse of notation was also denoted by δ, is a pseudo-linear map (12), again with respect to σ = 1. Thus, if we consider the corresponding left skew polynomial ring over the field of meromorphic functions, namely K[s;1,δ], then the map (9) induces an action

· : K[s;1,δ] × ε → ε ;  (Σ_{i=0}^{n} a_i s^i)·v = Σ_{i=0}^{n} a_i δ^i v        (18)

for any v ∈ ε. The commutation rule (11) for K[s;1,δ] now reads sF = Fs + Ḟ for any F ∈ K.

Example 1: Consider the polynomial p(s) = x1 s^2 + x2 u̇ and the one-form v = u du. Then

p(s)v = (x1 s^2 + x2 u̇)(u du) = x1(u dü + 2u̇ du̇ + ü du) + x2 u̇ u du        (19)
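A short SymPy check of (19) is possible. In this sketch (our own, not from the paper) u is treated as an independent input so that δu = u̇, δu̇ = ü, and so on; one-forms are stored as dictionaries mapping the variables ξ (standing for the basis differentials dξ) to their coefficients, and x1, x2 are treated as δ-constant parameters, which is all the action (18) requires of them.

```python
import sympy as sp

u, u1, u2, u3 = sp.symbols('u u1 u2 u3')     # u, u', u'', u'''
x1, x2 = sp.symbols('x1 x2')                 # parameters (not differentiated here)
delta_rules = {u: u1, u1: u2, u2: u3}        # delta on K for an independent input

def delta_K(F):
    # derivative operator (5), restricted to functions of u and its derivatives
    return sum(sp.diff(F, v) * dv for v, dv in delta_rules.items())

def delta_form(omega):
    # derivative operator (9) on a one-form {xi: alpha_i} = sum_i alpha_i * d(xi)
    out = {}
    for xi, alpha in omega.items():
        out[xi] = out.get(xi, 0) + delta_K(alpha)          # delta(alpha_i) d(xi)
        for w in delta_rules:                              # alpha_i d(delta(xi))
            c = sp.diff(delta_rules[xi], w)
            if c != 0:
                out[w] = out.get(w, 0) + alpha * c
    return out

def act(coeffs, omega):
    # action (18) of p(s) = sum_i coeffs[i] * s**i on the one-form omega
    total = {}
    for i, a in enumerate(coeffs):
        w = omega
        for _ in range(i):
            w = delta_form(w)
        for k, val in w.items():
            total[k] = sp.expand(total.get(k, 0) + a * val)
    return {k: v for k, v in total.items() if v != 0}

v = {u: u}                   # the one-form v = u du
p = [x2 * u1, 0, x1]         # p(s) = x1*s**2 + x2*u'
print(act(p, v))             # {u: x1*u2 + u*u1*x2, u1: 2*x1*u1, u2: u*x1} -> (19)
```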

A valuable fact is that the polynomials may not only act on one-forms; it is also possible to extract from one-forms the corresponding polynomials. For instance, the one-form dÿ + y dẏ + ẏ dy can be written as (s^2 + ys + ẏ)dy. This is stated and proved in the following lemma.

Lemma 1: For any F ∈ K and k ≥ 0 one has δ^k(dF) = d(δ^k F).

Proof: For k = 0 it directly follows that dF = dF. For k ≥ 1 we prove the claim by induction. Let k = 1 and let ξ_1,...,ξ_n denote the finitely many variables (components of x and derivatives of u) on which F depends; then

δ(dF) = δ(Σ_{i=1}^{n} (∂F/∂ξ_i) dξ_i) = Σ_{i=1}^{n} δ(∂F/∂ξ_i) dξ_i + Σ_{i=1}^{n} (∂F/∂ξ_i) d(δξ_i) = Σ_{i,j=1}^{n} (∂^2F/∂ξ_j∂ξ_i) δξ_j dξ_i + Σ_{i=1}^{n} (∂F/∂ξ_i) d(δξ_i)        (20)

d(δF) = d(Σ_{i=1}^{n} (∂F/∂ξ_i) δξ_i) = Σ_{i,j=1}^{n} (∂^2F/∂ξ_j∂ξ_i) δξ_i dξ_j + Σ_{i=1}^{n} (∂F/∂ξ_i) d(δξ_i)        (21)

After considering the fact that for any F ∈ K one has ∂^2F/∂ξ_j∂ξ_i = ∂^2F/∂ξ_i∂ξ_j, and relabelling the summation indices, we obtain δ(dF) = d(δF). Assume now that δ^k(dF) = d(δ^k F) for some k. For k+1 it follows that

δ^(k+1)(dF) = δ(δ^k(dF)) = δ(d(δ^k F)).

For any F ∈ K also δ^k F ∈ K, and if we denote δ^k F = G, then δ(dG) = d(δG) = d(δ^(k+1) F), which completes the proof.
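Lemma 1 can also be spot-checked symbolically. The sketch below (our own illustration, reusing the conventions of the earlier sketches and the system of Example 3) compares the coefficient of every basis differential in δ(dF) and d(δF) for a sample F.

```python
import sympy as sp

x1, x2, u, u1, u2 = sp.symbols('x1 x2 u u1 u2')
delta_rules = {x1: x2, x2: -x1*x2 - sp.sin(x1) + u, u: u1, u1: u2}
vars_ = [x1, x2, u, u1]          # variables whose differentials we track

def delta_K(F):
    return sum(sp.diff(F, v) * dv for v, dv in delta_rules.items())

def d(F):
    # differential (8): coefficient of each basis differential d(xi)
    return {v: sp.diff(F, v) for v in vars_}

def delta_form(omega):
    # derivative (9) of the one-form {xi: alpha_i}
    out = {v: sp.Integer(0) for v in vars_}
    for xi, alpha in omega.items():
        out[xi] += delta_K(alpha)
        for w in vars_:
            out[w] += alpha * sp.diff(delta_rules[xi], w)
    return out

F = x1 * sp.cos(x2) + u * x1     # a sample element of K
lhs = delta_form(d(F))           # delta(dF)
rhs = d(delta_K(F))              # d(delta F)
print(all(sp.simplify(lhs[v] - rhs[v]) == 0 for v in vars_))   # True
```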

B. Transfer functions of nonlinear systems

As a result of Lemma 1, the state-space representation (4),

ẋ = f(x, u)
y = g(x, u)        (22)

can, after differentiating, be restated as

dẋ = A dx + B du
dy = C dx + D du        (23)

(sI − A)dx = B du
dy = C dx + D du        (24)

where A = (∂f/∂x), B = (∂f/∂u), C = (∂g/∂x) and D = (∂g/∂u). The same goes also for an input-output description

F(y^(n),...,y, u^(m),...,u) = 0        (25)

a_n dy^(n) + ... + a_0 dy = b_m du^(m) + ... + b_0 du        (26)

(a_n s^n + ... + a_0)dy = (b_m s^m + ... + b_0)du        (27)

where F ∈ K, a_i = ∂F/∂y^(i), i = 0,...,n and b_j = −∂F/∂u^(j), j = 0,...,m. Now, nothing restrains us from introducing transfer functions for nonlinear systems.

Definition 1: Given a nonlinear system (22) (respectively (25)), F(s) ∈ K⟨s;1,δ⟩ such that dy = F(s)du is said to be a transfer function of the nonlinear system (22) (respectively (25)). Consequently, in the case of a MIMO system we think of F(s) as a matrix over K⟨s;1,δ⟩, and F(s) is then said to be a transfer matrix.

Now, it follows from (24) that

F(s) = C(sI − A)^(-1) B + D        (28)

Fig. 1. Series, parallel and feedback connection of nonlinear systems.

and from (27) that

F(s) = (b_m s^m + ... + b_0)/(a_n s^n + ... + a_0)        (29)

Notice the similarity to the transfer functions of linear systems. An important difference is that we have to keep to the definition of the multiplication, which is not commutative. Inverting the matrix (sI − A) thus leads to solving linear equations in non-commutative fields [7].

1) Algebra of transfer functions of nonlinear systems: Each system structure can be decomposed into three basic types of connections: series, parallel and feedback. Thus, to compute the final transfer function of a complex structure one has to know how to compute the transfer function of these basic connections. As a result of Definition 1, the algebra of transfer functions of nonlinear systems is similar to the linear one. For a series connection, Fig. 1a, it follows that dy_B = F_B(s)du_B = F_B(s)dy_A = F_B(s)F_A(s)du_A and

F(s) = F_B(s)F_A(s)        (30)

For the parallel and feedback connections, Fig. 1b and 1c, one similarly obtains

F(s) = F_A(s) + F_B(s)        (31)

F(s) = (1 − F_A(s)F_B(s))^(-1)·F_A(s)        (32)

Notice that due to the non-commutative multiplication we have to keep (30) and (32) exactly as they are. The following example demonstrates how to handle a series connection of two nonlinear systems.

Example 2: Consider two nonlinear systems

ẋ_A = −x_A^3 + u_A        ẋ_B = u_B^3
y_A = x_A                 y_B = x_B + u_B        (33)

The corresponding transfer functions are

F_A(s) = 1/(s + 3x_A^2),   F_B(s) = (s + 3u_B^2)/s        (34)

The systems are combined in a series connection. For the connection A → B the resulting transfer function is

F_AB(s) = F_B(s)F_A(s) = (s + 3u_B^2)/s · 1/(s + 3x_A^2) = 1/s        (35)

since u_B = y_A = x_A. Therefore, (35) is linear from an input/output point of view, which is easy to check: ẏ_B = u_B^3 + u̇_B = x_A^3 + ẋ_A = u_A. When the systems are connected as B → A, the result is different:

F_BA(s) = F_A(s)F_B(s) = 1/(s + 3x_A^2) · (s + 3u_B^2)/s = (s + 3u_B^2)/(s(s + 3x_A^2))        (36)
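The claim that the A → B connection behaves as a pure integrator can be verified by direct substitution; the following short SymPy sketch (our own check) confirms that ẏ_B = u_A.

```python
import sympy as sp

xA, xB, uA = sp.symbols('x_A x_B u_A')

# Series connection A -> B: the input of B is u_B = y_A = x_A
uB = xA
xA_dot = -xA**3 + uA     # dynamics of system A
xB_dot = uB**3           # dynamics of system B with u_B = x_A substituted
uB_dot = xA_dot          # u_B = x_A, hence u_B' = x_A'

# y_B = x_B + u_B, hence y_B' = x_B' + u_B'
yB_dot = xB_dot + uB_dot
print(sp.simplify(yB_dot - uA))   # 0, i.e. dy_B = (1/s) du_A as in (35)
```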

The presented example also represents a stepping-stone to the design of controllers by the transfer function algebra. Details and examples can be found in [3].

2) Invariance to regular static state transformations: The introduced transfer functions of nonlinear systems show also other properties similar to those of transfer functions of linear systems. One of them follows.

Proposition 1: The transfer function (28) is invariant to any regular static state transformation ξ = φ(x).

Proof: For any regular static state transformation ξ = φ(x) one has rank_K T = n, where T = (∂φ/∂x). Since dξ = T dx, in the new coordinates we have

dξ̇ = (TAT^(-1) + ṪT^(-1))dξ + TB du
dy = CT^(-1) dξ + D du        (37)

where Ṫ = δ(T) and δ is applied entry-wise to T. Thus

F(s) = CT^(-1)(sI − TAT^(-1) − ṪT^(-1))^(-1) TB + D = C(T^(-1)sT − A − T^(-1)Ṫ)^(-1) B + D        (38)

and after considering sT = Ts + Ṫ, F(s) = C(sI − A)^(-1) B + D, which completes the proof.

Remark that TAT^(-1) + ṪT^(-1) in (37) is in fact the change of basis formula of the pseudo-linear map; for more details see [1], Section 1.3. Proposition 1 has significant consequences which can be immediately stated:
--The transfer function characterizes a nonlinear system uniquely; that is, each nonlinear system has a unique transfer function, no matter what state-space realization we use.
--The transfer function describes a minimal realization of the nonlinear system; that is, a realization which is accessible and observable.
The proof is straightforward and thus omitted. (There exists a regular static state transformation, to which (28) is invariant, yielding a controllability canonical form. From such a form the expression C(sI − A)^(-1)B captures only the accessible part. The same holds also for observability.)

3) Modeling problem: state elimination: Control systems are usually described either by an input-output description expressed in terms of a higher order differential equation, or by the so-called state-space representation expressed in terms of coupled first order differential equations. Transfer functions represent an input/output description of nonlinear systems. Thus, when computed from a state-space representation, they directly solve the state elimination problem of a nonlinear system. This is shown in the next example.

Example 3: Given the nonlinear system

ẋ1 = x2
ẋ2 = −x1 x2 − sin x1 + u        (39)
y = x1

Differentiating gives

A = [0, 1; −x2 − cos x1, −x1],  B = [0; 1],  C = [1, 0]        (40)

and the transfer function is thus

F(s) = C(sI − A)^(-1) B = 1/(s^2 + x1 s + cos x1 + x2)        (41)

The transfer function (41) expresses the output y in terms of the input u, dy = F(s)du. Thus, after considering x1 = y and x2 = ẏ,

(s^2 + ys + cos y + ẏ)dy = du
dÿ + y dẏ + cos y dy + ẏ dy = du        (42)

which represents the differential of the input-output equation ÿ + yẏ + sin y = u. Notice again the similarity to linear systems and Laplace transforms.
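The state elimination result can be cross-checked from the input-output equation itself: differentiating F = ÿ + yẏ + sin y − u as in (25)-(27) and collecting the coefficients a_i = ∂F/∂y^(i) and b_j = −∂F/∂u^(j) should reproduce the denominator and numerator of (41). A SymPy sketch of this check (our own, with y, ẏ, ÿ and u treated as independent symbols) follows.

```python
import sympy as sp

y, y1, y2, u = sp.symbols('y y1 y2 u')   # y, y', y'', u

# Input-output equation of Example 3: F(y'', y', y, u) = 0
F = y2 + y*y1 + sp.sin(y) - u

# Coefficients of (26)-(27): a_i = dF/dy^(i), b_j = -dF/du^(j)
a = [sp.diff(F, w) for w in (y, y1, y2)]   # [cos(y) + y1, y, 1]
b = [-sp.diff(F, u)]                       # [1]
print(a, b)

# Hence (s**2 + y*s + cos(y) + y1) dy = du, i.e. F(s) = 1/(s**2 + y*s + cos(y) + y1),
# which is (41) after substituting x1 = y and x2 = y'.
```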

V. CONCLUSION

A combination of differential forms and methods of pseudo-linear algebra allows us to introduce a symbolic computation for nonlinear systems. In that respect, differential operators, which we think of as members of the left skew polynomial ring, play a key role. Quotients of such polynomials show many properties similar to those of transfer functions of linear systems, like the algebra of transfer functions, the invariance to regular static state transformations, etc. Thus, they are legitimately referred to as transfer functions of nonlinear systems. Quotients over the left skew polynomial ring represent a brand new feature by which the presented paper extends the results of the differential algebraic methods already obtained in [2] and elsewhere.

However, the presented paper does not state that linear theory can be applied to nonlinear control problems, although one could conclude this from the introduced algebra of transfer functions. The introduced symbolic computation for nonlinear systems is more difficult to handle, since the multiplication is not commutative. This is a fundamental feature which we always have to take into consideration. Thus, it is important to compare the introduced symbolic computation for nonlinear systems to other techniques and methods. Of course, many questions remain, for instance on stability, which are, regrettably, out of the scope of this paper.

REFERENCES

[1] M. Bronstein and M. Petkovšek, “An introduction to pseudo-linear algebra,” Theoretical Computer Science, 157, pp. 3–33, 1996.
[2] G. Conte, C. H. Moog and A. M. Perdon, Nonlinear Control Systems: An Algebraic Setting. Lecture Notes in Control and Information Sciences 242, London: Springer-Verlag, 1999.
[3] M. Halás, “Symbolic computation for nonlinear systems,” Ph.D. dissertation (in Slovak), Dept. of Automation and Control, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Bratislava, Slovakia, 2006.
[4] M. Halás, M. Huba and K. Žáková, “The exact velocity linearization method,” in Proc. 2nd IFAC Conference Control System Design, Bratislava, Slovakia, 2003.
[5] A. Isidori, Nonlinear Systems. 2nd edition, New York: Springer-Verlag, 1989.
[6] D. J. Leith and W. E. Leithead, “Gain-scheduled controller design: an analytic framework directly incorporating non-equilibrium plant dynamics,” Int. J. Control, 70, pp. 249–269, 1998.
[7] O. Ore, “Linear equations in non-commutative fields,” Annals of Mathematics, 32, pp. 463–477, 1931.
[8] O. Ore, “Theory of non-commutative polynomials,” Annals of Mathematics, 34, pp. 480–508, 1933.
