
Systems & Control Letters 62 (2013) 248–255


Feedback linearization and lattice theory✩,✩✩

Ülle Kotta a,∗, Maris Tõnso a, Alexey Ye. Shumsky b, Alexey N. Zhirabok b

a Institute of Cybernetics at Tallinn University of Technology, Akadeemia tee 21, 12618 Tallinn, Estonia
b Department of Control and Automation, Far Eastern Federal University, Sukhanova street 8, 690950 Vladivostok, Russia

Article history:
Received 4 May 2012
Received in revised form 7 November 2012
Accepted 18 November 2012

Abstract

The tools of lattice theory are applied to readdress the static state feedback linearization problem for discrete-time nonlinear control systems. Unlike the earlier results, which are based on differential geometry, the new tools are also applicable to nonsmooth systems. In the case of analytic systems, close connections are established between the new results and those based on differential one-forms. Mathematica functions implementing the algorithms and methods of this paper have been developed.

© 2012 Elsevier B.V. All rights reserved.

Keywords: Feedback linearization; State feedback; Nonlinear control systems; Discrete-time systems; Algebraic approaches

1. Introduction

The paper recasts the old problem of static state feedback linearization using algebraic lattice theory. The mathematical technique used is known under the name 'algebra of functions' [1]. The interest in recasting this old problem is manifold. First, it helps, through comparison, to evaluate the advantages and disadvantages of the new technique: explicit relations between the respective formulas, algorithms and solvability conditions can be given. Second, it allows one to compare the assumptions behind the different approaches. Finally, since the new tools were inspired by the algebra of partitions in the theory of finite automata [2] and mimic the latter, they help to build a possible bridge between the theories of continuous-time and discrete-event systems; this aspect, however, will be studied in another paper.

In the algebra of functions, the partitions (used in the algebra of partitions) are replaced by the functions generating them, and analogous operations and operators for functions are introduced. The four key elements of the algebra of functions are a partial preorder relation, binary operations (sum and product, defined in a specific manner), a binary relation and certain operators m and M, defined on the set SX of vector functions whose domain is the state space X of the control system. The starting point of the approach are the relations of partial preorder ≤ and equivalence, denoted ≅. The equivalence relation divides the set SX of vector functions into equivalence classes; the set SX/≅ of these classes is proved to be a lattice.

Like the tools based on differential forms, the algebra of functions provides a unified viewpoint for the study of discrete-time as well as continuous-time control systems; additionally, it also allows one to address discrete-event systems like those in [3,4]. An important point to stress is that these tools (unlike most previous methods) do not require the system to be described in terms of smooth functions. Besides extending the results on feedback linearization to nonsmooth systems, the goal of this paper is to compare the tools of the algebra of functions with those based on the vector spaces of differential one-forms over suitable difference fields of nonlinear functions. We give precise relations between the respective solvability conditions and solutions for analytic systems. In order to focus on the key aspects and keep the presentation simple, we restrict ourselves in this paper to discrete-time single-input systems. Whereas the number of publications on the topic of static state feedback linearization is huge, the situation is different for the discrete-time case, see [5–11]. Except for [11], all papers focus on smooth feedback.

✩ This work was supported by the European Union through the European Regional Development Fund, Estonian governmental funding project no. SF0140018s08, ESF grant no. 8787, and by the Russian Foundation for Basic Research, grants 10-08-00133 and 11-08-91151.
✩✩ Preliminary results of this work were partly presented at the 18th International Conference on Process Control, Tatranska Lomnica, High Tatras, Slovak Republic (June 14–17, 2011).
∗ Corresponding author. Tel.: +372 6204153; fax: +372 6204151.
E-mail addresses: [email protected] (Ü. Kotta), [email protected] (M. Tõnso), [email protected] (A.Ye. Shumsky), [email protected] (A.N. Zhirabok).
doi:10.1016/j.sysconle.2012.11.014


2. The algebra of functions

Consider a discrete-time nonlinear control system of the form

σ(x) = f(x, u),   (1)

where σ(x) denotes the forward shift of x, alternatively written as x⁺ and understood as x(k + 1); f : X × U → X; and the variables x and u are the coordinates of the state space X ⊆ Rⁿ and the input space U ⊆ R, respectively. In the mathematical technique called the algebra of functions, f in (1) is allowed to be non-smooth. The main elements of the technique, defined on the set SX of vector functions whose domain is the state space X, are [1]:

1. a relation of partial preorder, denoted by ≤, and of equivalence, denoted by ≅;
2. binary operations, denoted by × and ⊕;
3. a (non-symmetric) binary relation, denoted by ∆;
4. operators m and M.

Definition 1 (Relation of Partial Preorder). Given α, β ∈ SX, one says that α ≤ β iff there exists a function γ such that β(x) = γ(α(x)) for all x ∈ X. If α ≰ β and β ≰ α, then α and β are said to be incomparable.

Definition 2 (Equivalence). If α ≤ β and β ≤ α, then α and β are called equivalent, denoted by α ≅ β.

Besides the relations of partial preorder and equivalence, we use their generic counterparts, corresponding to the situation when the relation may be violated on a set of measure zero. Note that the relation ≅ is reflexive, symmetric and transitive. The equivalence relation divides the set SX into equivalence classes containing the equivalent functions. If SX/≅ is the set of all these equivalence classes, then the relation ≤ is a partial order on this set.

Recall that a lattice is a set with a partial order in which every two elements α and β have a unique supremum (least upper bound) sup(α, β) and infimum (greatest lower bound) inf(α, β). An equivalent definition of a lattice, as an algebraic structure with two binary operations × and ⊕, may be given if for every two elements both operations are commutative and associative and, moreover, α × (α ⊕ β) = α and α ⊕ (α × β) = α. The equivalence of the two definitions follows from defining the binary operations × and ⊕ as

α × β = inf(α, β),   α ⊕ β = sup(α, β).   (2)

Therefore, the triple (SX/≅, ×, ⊕) is a lattice. In lattice theory it is customary to operate not with inf(α, β) and sup(α, β) but with the binary operations × and ⊕, respectively. In simple cases the definition may be used to compute α ⊕ β directly; for the general case, see [1,12]. The rule for the operation × is simple: (α × β)(x) = [α^T(x), β^T(x)]^T. However, the product may contain redundant (functionally dependent) components that have to be found and removed. Moreover, to simplify the computations, one is advised to replace the remaining components by equivalent but simpler functions; see more in [12].

Definition 3 (Binary Relation ∆). Given α, β ∈ SX, α and β are said to form an (ordered) pair, denoted (α, β) ∈ ∆, if there exists a function f∗ such that

β(f(x, u)) = f∗(α(x), u) for all (x, u) ∈ X × U.   (3)
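The preorder of Definition 1 has a concrete finite counterpart that is easy to experiment with: on a finite sample of the state space, α ≤ β can hold only if β is constant on every level set of α. The following Python sketch (an illustration only — the paper's own tools are implemented in Mathematica) tests this necessary condition:

```python
from collections import defaultdict

def leq(alpha, beta, samples, tol=1e-9):
    """Necessary condition for alpha <= beta (Definition 1) on a finite
    sample: alpha(x) = alpha(y) must imply beta(x) = beta(y), i.e. beta
    is constant on each level set of alpha."""
    groups = defaultdict(list)
    for x in samples:
        groups[alpha(x)].append(beta(x))
    return all(abs(v - vals[0]) <= tol for vals in groups.values() for v in vals)

samples = [(i, j) for i in range(-3, 4) for j in range(-3, 4)]
alpha = lambda x: x[0] + x[1]          # alpha(x) = x1 + x2
beta = lambda x: (x[0] + x[1]) ** 2    # beta(x) = (x1 + x2)^2 = gamma(alpha(x))

print(leq(alpha, beta, samples))   # True: beta is a function of alpha
print(leq(beta, alpha, samples))   # False: the square does not determine the sum
```

Equivalence in the sense of Definition 2 then amounts to the test succeeding in both directions.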


The binary relation ∆ may be given the following interpretation. One may ask: what information about x(t) is necessary to compute β(x(t + 1)) for arbitrary but known u(t)? The necessary information is displayed in the function α(x), forming a pair with the function β(x). Obviously, given β(x), there exist many functions α(x) forming a pair with β(x), i.e. (α, β) ∈ ∆. The most important among them is the maximal function with respect to the relation ≤, denoted by M(β). In a similar manner, for a given α(x) there exist many functions β(x) forming a pair with α(x), i.e. (α, β) ∈ ∆; we denote by m(α) the minimal one among them (with respect to the relation ≤). This yields the following definitions.

Definition 4. The operator m, applied to a function α, gives a function m(α) ∈ SX that satisfies the following two conditions:
(i) (α, m(α)) ∈ ∆;
(ii) if (α, β) ∈ ∆, then m(α) ≤ β.

Definition 5. The operator M, applied to a function β, gives a function M(β) ∈ SX that satisfies the following conditions:
(i) (M(β), β) ∈ ∆;
(ii) if (α, β) ∈ ∆, then α ≤ M(β).

Computation of m(α). It has been proved that there exists a function γ satisfying the condition (α × u) ⊕ f ≅ γ(f); define m(α) ≅ γ, see [13]. Because the composition γ(f) may be written as γ⁺ and m(α) ≅ γ, one may alternatively write the rule for the computation of the operator m using a backward shift as follows:

m(α) ≅ ((α × u) ⊕ f)⁻.   (4)

Computation of M(β). In the special case when the composite function β(f(x, u)) can be represented in the form

β(f(x, u)) = Σ_{i=1}^{d} a_i(x) b_i(u),   (5)

where a_1(x), a_2(x), . . . , a_d(x) are arbitrary functions and b_1(u), b_2(u), . . . , b_d(u) are linearly independent over R, then

M(β) := a_1 × a_2 × · · · × a_d.   (6)
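When the u-dependent factors in (5) are, say, b_1(u) = 1 and b_2(u) = u, the functions a_i can be recovered by plain linear algebra from evaluations of β(f(x, u)) at d sample values of u. A small Python sketch with a hypothetical composite function (not an example from the paper):

```python
def extract_M(beta_of_f, x, u0=0.0, u1=1.0):
    """For beta(f(x, u)) = a1(x)*1 + a2(x)*u, recover (a1(x), a2(x))
    by evaluating at two values of u and solving the (here trivial)
    2x2 linear system for the coefficients of b1 = 1 and b2 = u."""
    v0 = beta_of_f(x, u0)          # a1 + a2*u0
    v1 = beta_of_f(x, u1)          # a1 + a2*u1
    a2 = (v1 - v0) / (u1 - u0)
    a1 = v0 - a2 * u0
    return a1, a2

# Hypothetical example: beta(f(x, u)) = x1*x2 + x2^2 * u, so by (6)
# M(beta) = a1 x a2 with a1 = x1*x2 and a2 = x2^2.
bf = lambda x, u: x[0] * x[1] + x[1] ** 2 * u
print(extract_M(bf, (2.0, 3.0)))   # (6.0, 9.0) = (x1*x2, x2^2) at x = (2, 3)
```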

For the general case, see [1].

Below we present two propositions that involve a coordinate transformation ϕ : X → Z. Note that the transformation ϕ itself, as well as its inverse ϕ⁻¹, is defined generically, i.e. everywhere except perhaps on a set of measure zero. Moreover, note that neither ϕ nor ϕ⁻¹ is necessarily smooth; see also Example 29 below.

Proposition 6 ([1], pp. 81–82). Let α be a vector function in SX and ϕ : X → Z a coordinate transformation. Then m(α(ϕ)) ≅ (m(α))(ϕ) and M(α(ϕ)) ≅ (M(α))(ϕ).

Proposition 7. Let α and β be two vector functions in SX and ϕ : X → Z a coordinate transformation. Then (α ⊕ β)(ϕ) ≅ α(ϕ) ⊕ β(ϕ).

Proof. Note that by the definition of the operation ⊕, the inequalities (α ⊕ β)(ϕ) ≥ α(ϕ) and (α ⊕ β)(ϕ) ≥ β(ϕ) imply

(α ⊕ β)(ϕ) ≥ α(ϕ) ⊕ β(ϕ)   (7)

and, furthermore, taking the composition with the function ϕ⁻¹,

((α ⊕ β)(ϕ))(ϕ⁻¹) = α ⊕ β ≥ (α(ϕ) ⊕ β(ϕ))(ϕ⁻¹).   (8)

Applying again (7) to the right-hand side of the above inequality, one gets

(α(ϕ) ⊕ β(ϕ))(ϕ⁻¹) ≥ (α(ϕ))(ϕ⁻¹) ⊕ (β(ϕ))(ϕ⁻¹) = α ⊕ β.   (9)

Finally, (8) and (9) yield (α(ϕ) ⊕ β(ϕ))(ϕ⁻¹) ≅ α ⊕ β, i.e. (α ⊕ β)(ϕ) ≅ α(ϕ) ⊕ β(ϕ). □


Example 8. Let α(x) = [x1 + x2, x3]^T, β(x) = [x1x3, x2x3]^T and z = ϕ(x) = [x1 + x2, x1 − x2, x3/x1]^T. Then (α ⊕ β)(x) = x3(x1 + x2), α(ϕ(x)) = [z1, 0.5z3(z1 + z2)]^T and β(ϕ(x)) = [0.25(z1 + z2)^2 z3, 0.25(z1^2 − z2^2)z3]^T. Moreover, (α ⊕ β)(ϕ(x)) = 0.5z1z3(z1 + z2) and (α(ϕ) ⊕ β(ϕ))(x) = 0.5z1z3(z1 + z2). Clearly, (α ⊕ β)(ϕ) ≅ α(ϕ) ⊕ β(ϕ).
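The formulas of Example 8 can be spot-checked numerically at a generic point — a quick Python sanity check, outside the paper's Mathematica toolchain:

```python
# A generic point with x1 != 0, so that phi is defined.
x1, x2, x3 = 2.0, 5.0, 3.0

z1, z2, z3 = x1 + x2, x1 - x2, x3 / x1          # z = phi(x)

# alpha, beta and alpha ⊕ beta in the original coordinates
alpha = (x1 + x2, x3)
beta = (x1 * x3, x2 * x3)
sup = x3 * (x1 + x2)

# the same functions rewritten in the z-coordinates, as in Example 8
alpha_z = (z1, 0.5 * z3 * (z1 + z2))
beta_z = (0.25 * (z1 + z2) ** 2 * z3, 0.25 * (z1 ** 2 - z2 ** 2) * z3)
sup_z = 0.5 * z1 * z3 * (z1 + z2)

assert all(abs(a - b) < 1e-9 for a, b in zip(alpha, alpha_z))
assert all(abs(a - b) < 1e-9 for a, b in zip(beta, beta_z))
assert abs(sup - sup_z) < 1e-9
print("Example 8 formulas agree at the sample point")
```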

We use below the following properties of the operators M and m (see [1]):

1. α ≤ β ⇒ M(α) ≤ M(β);
2. α ≤ β ⇒ m(α) ≤ m(β);
3. m(α ⊕ β) ≅ m(α) ⊕ m(β);
4. m(M(β)) ≤ β;
5. M(m(α)) ≥ α.

2.1. Minimal f-invariant function

Definition 9. The function α is said to be invariant with respect to the system dynamics (1), or f-invariant, if for arbitrary u ∈ U the following holds:

α(x) = α(x̃) ⇒ α(f(x, u)) = α(f(x̃, u)).

Lemma 10 ([1]). The function α is f-invariant iff (α, α) ∈ ∆.

In the case of smooth systems, Definition 9 agrees with the concept of an invariant distribution for the discrete-time system dynamics (1), as given in [8]. Namely, if we define Ω = span{dα}, where α is f-invariant, then the annihilator Ω⊥ =: D of Ω is the invariant distribution. On the one hand, Definition 9 is slightly more general, since neither the function f in (1) nor the function α needs to be smooth or even continuous. On the other hand, the definition in [8] is more general, since the invariant distribution is not necessarily involutive (integrable).

Lemma 11 ([1]). α is an f-invariant function ⇔ m(α) ≤ α ⇔ α ≤ M(α).

Below we present the algorithm that computes the minimal f-invariant function δ satisfying the condition δ¹ ≤ δ, for a given δ¹. Denote δ⁰(x) = [x1, . . . , xn]^T. Find a minimal (containing the maximal number of functionally independent components) vector function δ¹(x) such that its forward shift δ¹(f(x, u)) does not depend on the control variable u; the function δ¹(x) initializes Algorithm 12 below. Note that if f is smooth, δ¹(x) satisfies the condition

∂δ¹(f(x, u))/∂u ≡ 0.

Though δ¹(x) is not unique, all possible choices are equivalent functions. Moreover, applying the operators m and M to equivalent functions yields again equivalent functions. Indeed, if α ≅ β, then α ≤ β and β ≤ α. By the 1st property of the operators M and m, one has M(α) ≤ M(β) and M(β) ≤ M(α), therefore M(α) ≅ M(β); the same proof holds for the operator m, using the 2nd property instead. Therefore, the results of Algorithm 12 and those in the next sections are, up to function equivalence, the same for the different choices of δ¹(x).

Algorithm 12 (Computation of the Minimal f-Invariant Function δ Satisfying δ¹ ≤ δ). Given δ¹, compute recursively, for i ≥ 1, by the formula

δⁱ⁺¹ = δⁱ ⊕ m(δⁱ),   (10)

the sequence of non-decreasing functions δ¹ ≤ δ² ≤ · · · ≤ δⁱ ≤ · · ·. By Theorem 1 in [14] there exists a finite j ≤ n such that δʲ ≇ δʲ⁻¹ but δʲ⁺ˡ ≅ δʲ for all l ≥ 1. Define δ := δʲ.

The following proposition describes what happens with the sequence δⁱ, i ≥ 0, computed by Algorithm 12, under a coordinate transformation z = ϕ(x) and a regular static state feedback u = ϑ(x, v). Note that, like the coordinate transformation, the function ϑ defining the feedback is not necessarily smooth. We denote the functions computed for the transformed system by δ̃ⁱ.

Proposition 13. Let ϕ : X → Z be a state transformation and u = ϑ(x, v) a regular static state feedback. Then δ̃ⁱ ≅ δⁱ(ϕ⁻¹) for i ≥ 1.

Proof. First, observe that the application of a regular static state feedback influences neither δ¹ nor those components of m(δ¹) that are in δ¹; this follows from the definition of δ¹ as the minimal function whose forward shift is independent of the control. Since by Algorithm 12, δᵏ ≥ δ¹ for k > 1, the same holds for every δᵏ, and therefore δᵏ, k ≥ 1, is invariant under regular static state feedback.

Next, we examine δᵏ, k ≥ 1, under the state coordinate transformation z = ϕ(x). Note that the transformed state equations are z⁺ = ϕ(f(ϕ⁻¹(z), u)) := f̃(z, u). Compute first

δ¹(f(x, u)) = δ¹(ϕ⁻¹) ∘ ϕ(f(ϕ⁻¹(z), u)) = δ¹(ϕ⁻¹) ∘ f̃(z, u).   (11)

Since δ¹(f(x, u)) is independent of u by the definition of δ¹, it follows from (11) that δ¹(ϕ⁻¹) ∘ f̃(z, u) is also independent of u, and therefore, again by the definition of δ̃¹, δ¹(ϕ⁻¹) ≥ δ̃¹. In a similar manner one may prove that δ̃¹(ϕ) ≥ δ¹ or, alternatively, δ̃¹ ≥ δ¹(ϕ⁻¹). The latter inequality together with δ¹(ϕ⁻¹) ≥ δ̃¹ yields δ̃¹ ≅ δ¹(ϕ⁻¹). By the definition of δ̃² and Propositions 6 and 7,

δ̃² = δ̃¹ ⊕ m(δ̃¹) ≅ δ¹(ϕ⁻¹) ⊕ m(δ¹(ϕ⁻¹)) ≅ δ¹(ϕ⁻¹) ⊕ (m(δ¹))(ϕ⁻¹) ≅ (δ¹ ⊕ m(δ¹))(ϕ⁻¹) ≅ δ²(ϕ⁻¹).

The proof for the other values of i is similar. □
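On a finite state set, the objects above reduce to the algebra of partitions of [2]: a function is identified with the partition of X into its level sets, and Algorithm 12 becomes the classical computation of the finest partition with the substitution property that coarsens δ¹. A Python sketch on a hypothetical six-state, two-input automaton (an illustration only, not part of the paper's toolchain):

```python
# Toy automaton (hypothetical) on states 0..5 with inputs 0, 1.
STATES = range(6)
INPUTS = (0, 1)

def f(s, u):
    # transition table: row s gives (f(s, 0), f(s, 1))
    table = {0: (1, 2), 1: (2, 1), 2: (0, 0),
             3: (4, 5), 4: (5, 4), 5: (3, 3)}
    return table[s][u]

# Union-find representation of the current partition (blocks = classes).
parent = list(STATES)
def find(s):
    while parent[s] != s:
        parent[s] = parent[parent[s]]
        s = parent[s]
    return s
def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# delta^1: the forward shift must be u-independent, so for every state
# the successors under different inputs must lie in one block.
for s in STATES:
    union(f(s, 0), f(s, 1))

# Iterate delta^{i+1} = delta^i ⊕ m(delta^i): whenever two states are in
# one block, their successors under each input must share a block too.
changed = True
while changed:
    changed = False
    for a in STATES:
        for b in STATES:
            if find(a) == find(b):
                for u in INPUTS:
                    if find(f(a, u)) != find(f(b, u)):
                        union(f(a, u), f(b, u))
                        changed = True

blocks = {}
for s in STATES:
    blocks.setdefault(find(s), []).append(s)
print(sorted(blocks.values()))   # [[0, 1, 2], [3, 4, 5]]
```

The resulting two-block partition is f-invariant and non-trivial, i.e. this toy automaton carries the discrete analogue of a non-constant f-invariant function δ.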

A simple consequence of the proposition is that the sequence of functions δⁱ is invariant with respect to state coordinate transformations and regular static state feedback, and therefore the results presented in Sections 5–6 may be understood as intrinsic.

Example 14. Consider the system

x1⁺ = x2x3,  x2⁺ = x4,  x3⁺ = x4x5,  x4⁺ = x1u,  x5⁺ = x3u

and the coordinate transformation z = ϕ(x) = [x1, x2x3, x4^2 x5, x2, x4]^T. Note that the inverse transformation is given by x = ϕ⁻¹(z) = [z1, z4, z2/z4, z5, z3/z5^2]^T. The system equations in the new coordinates are

z1⁺ = z2,  z2⁺ = z3,  z3⁺ = u^3 z1^2 z2/z4,  z4⁺ = z5,  z5⁺ = z1u.

Compute δ¹(x) = [x1, x2, x3, x4/x5]^T and δ¹(ϕ⁻¹)(z) = [z1, z4, z2/z4, z5^3/z3]^T. Observe that δ¹(ϕ⁻¹) ≅ δ̃¹(z) = [z1, z2, z4, z3/z5^3]^T. Furthermore, compute m(δ¹)(x) = [x1, x2/√x3, x4, x5]^T and m(δ¹)(ϕ⁻¹)(z) = [z1, z4^(3/2)/√z2, z5, z3/z5^2]^T. Note that m(δ¹)(ϕ⁻¹)(z) ≅ m(δ̃¹)(z) = [z1, z4^(3/2)/√z2, z3, z5]^T. Therefore, δ²(x) = [x1, x2/√x3, x4/x5]^T, δ²(ϕ⁻¹)(z) = [z1, z4^(3/2)/√z2, z5^3/z3]^T and δ̃²(z) = [z1, z2/z4^3, z3/z5^3]^T. Clearly, δ²(ϕ⁻¹) ≅ δ̃². Continuing in a similar manner, one may show that δ³(ϕ⁻¹)(z) ≅ δ̃³(z) = [z2^(3/2)/z4^(9/2), (z1^(9/2) z5^(27/4))/z3^(9/4)], δ⁴(ϕ⁻¹)(z) ≅ δ̃⁴(z) = [(z1^3 z4^(9/2) z5^(9/2))/(z2^(3/2) z3^(3/2))], and finally that δ⁵ = δ̃⁵ = const.
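The transformed equations of Example 14 can be verified numerically at a sample point — a quick Python check that z = ϕ(x) indeed produces the stated z-dynamics:

```python
# Original dynamics and coordinate change of Example 14.
def f(x, u):
    x1, x2, x3, x4, x5 = x
    return (x2 * x3, x4, x4 * x5, x1 * u, x3 * u)

def phi(x):
    x1, x2, x3, x4, x5 = x
    return (x1, x2 * x3, x4 ** 2 * x5, x2, x4)

x = (1.5, 2.0, 0.5, 3.0, 4.0)
u = 2.0

z = phi(x)
z_next = phi(f(x, u))        # forward shift computed in x, mapped to z
z1, z2, z3, z4, z5 = z

# transformed equations claimed in Example 14
expected = (z2, z3, u ** 3 * z1 ** 2 * z2 / z4, z5, z1 * u)
assert all(abs(a - b) < 1e-9 for a, b in zip(z_next, expected))
print("transformed equations of Example 14 verified")
```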


3. Tools based on differential forms

In the approach based on differential one-forms one assumes that f in (1) is an analytic function. In the study of discrete-time nonlinear control systems the following assumption is usually made; it guarantees that the forward shift operator σ, defined by Eq. (1), is injective.

Assumption 15. The map f in (1) is generically a submersion.

Note that the map f is a submersion at a point (x, u) if its differential df : X × U → X is a surjective linear map. This assumption is not restrictive, especially for the problems studied in this paper, since by the results of [15] submersivity is a necessary condition for a system to be accessible, and the latter is a necessary condition for static state feedback linearizability.

Let us extend the map f : (x, u) → y to the map f̄ : (x, u) → (y, x̃), where x̃ = χ(x, u), x̃ ∈ R. Though the choice of the function χ(x, u) is not unique, all choices lead to isomorphic difference fields associated with system (1). In order to define the inverse of σ, the backward shift operator σ⁻¹, one assumes that f̄ has a global analytic inverse defined on its image f̄(X × U). In the approach of differential one-forms one associates with system (1) an inversive difference field (K, σ) of meromorphic functions in a finite number of independent system variables C = {x, σᵏ(u), k ≥ 0, σ⁻ˡ(x̃), l ≥ 1}; for details, see [16]. The forward and backward shift operators σ, σ⁻¹ : K → K are defined respectively by

σϕ(x, u) = ϕ(σ(x), σ(u)) = ϕ(f(x, u), σ(u))

and

σ⁻¹ϕ(x, u) = ϕ(σ⁻¹(x), σ⁻¹(u)).

In what follows we sometimes use the abridged notation ϕ⁺ = σ(ϕ) and ϕ⁻ = σ⁻¹(ϕ) for ϕ ∈ K. Over the field K one can define the vector space E := span_K{dϕ | ϕ ∈ C} of differential one-forms. The forward shift operator σ : K → K induces a forward shift operator σ : E → E by

Σᵢ aᵢ dϕᵢ → Σᵢ aᵢ⁺ d(σ(ϕᵢ)).


A 1-form ω ∈ E is called exact if dω = 0 and closed if dω ∧ ω = 0, where ∧ denotes the wedge product. A subspace of 1-forms in E is called completely integrable if it admits a basis consisting only of exact one-forms. The relative degree r of a 1-form ω in X := span_K{dx} is defined by r = min{k ∈ N | ω⁽ᵏ⁾ ∉ X}. A sequence of codistributions Hk [9],

H1 = span_K{dx},   Hk+1 = {ω ∈ Hk | ω⁺ ∈ Hk},  k ≥ 1,

plays a key role in the solution of the addressed problems. Each Hk contains the one-forms with relative degree equal to k or greater. There exists an integer k∗ ≤ n such that for 1 ≤ k ≤ k∗, Hk+1 ⊂ Hk and Hk∗+1 ≠ Hk∗, but Hk∗+1 = Hk∗+2 := H∞. Obviously, H∞ is the maximal codistribution invariant with respect to the forward shift. Finally, note that the subspaces Hk are invariant with respect to regular static state feedback and state coordinate transformations, see [9].

4. Comparison

The theorem below shows that in the case of analytic systems δᵏ corresponds to the integrable subspace of Hk+1, denoted by Ĥk+1, for k = 1, . . . , n.

Theorem 16. For k = 0, . . . , n, Ĥk+1 = span_K{dδᵏ(x)}.

Proof. The claim of the theorem holds for k = 0 by definition, and for k = 1 because δ¹ collects the minimal¹ vector function whose components have relative degree two or more with respect to u. The remaining part of the proof is by induction: we prove that Ĥk+1 = span_K{dδᵏ(x)} implies Ĥk+2 = span_K{dδᵏ⁺¹(x)}. The proof is given under the additional assumption that in the field extension one may choose x̃ = u; in the general case the proof is similar, though technically more complicated, see Remark 17.

Since by Algorithm 12, δᵏ⁺¹ = δᵏ ⊕ m(δᵏ), one has² span_K{dδᵏ⁺¹} ⊆ span_K{dδᵏ} ∩ span_K{dm(δᵏ)}. Furthermore, by the definition of the operator m, δᵏ × u ≤ [m(δᵏ)]⁺, therefore (δᵏ)⁻ × u⁻ ≤ m(δᵏ), and since α ≤ β ⇔ Ωα ⊃ Ωβ, we have

span_K{dm(δᵏ)} ⊆ span_K{d[(δᵏ)⁻]} + span_K{du⁻} = Ĥk+1⁻ + span_K{du⁻},

yielding

span_K{dδᵏ⁺¹} ⊆ Ĥk+1 ∩ (Ĥk+1⁻ + span_K{du⁻}) = Ĥk+1 ∩ Ĥk+1⁻.   (12)

By the definition of the operation ⊕, the function δᵏ⁺¹ is the minimal function satisfying the conditions δᵏ⁺¹ ≥ δᵏ and δᵏ⁺¹ ≥ m(δᵏ). Therefore the codistribution span_K{dδᵏ⁺¹} is integrable and maximal among the integrable codistributions for which (12) holds. These two facts imply span_K{dδᵏ⁺¹} = (Ĥk+1 ∩ Ĥk+1⁻)ˆ, where ˆ denotes the integrable subspace. Since Hk+2 may alternatively be defined as Hk+2 = Hk+1 ∩ Hk+1⁻ [9], one has span_K{dδᵏ⁺¹} = (Ĥk+1 ∩ Ĥk+1⁻)ˆ = Ĥk+2. □

Remark 17. If it is impossible to choose x̃ = u in the extension of f, then span_K{dm(δ¹)} ≠ Ĥ2⁻. However, the two codistributions differ only by a single basis element: du⁻, present in span_K{dm(δ¹)}, is missing in Ĥ2⁻, whereas dx̃⁻, present in Ĥ2⁻, is missing in span_K{dm(δ¹)}. Since du⁻ ∉ Ĥ2 and dx̃⁻ ∈ Ĥ2, those elements do not affect the intersection Ĥ2 ∩ Ĥ2⁻.

The example below illustrates Remark 17: du⁻ is missing in Ĥ2⁻, which contains instead the element dx̃⁻, whereas x̃⁻ is missing in m(δ¹). Moreover, δ¹ ⊕ m(δ¹) corresponds to (Ĥ2 ∩ Ĥ2⁻)ˆ.

Example 18. Consider the system

x1⁺ = x1 + x3,  x2⁺ = x2 + x5,  x3⁺ = u,  x4⁺ = x3x4,  x5⁺ = x1.

Note that for this example (for the choice x̃ = x5)

x1⁻ = x5,  x2⁻ = x2 − x̃⁻,  x3⁻ = x1 − x5,  x4⁻ = x4/(x1 − x5),  x5⁻ = x̃⁻

¹ Containing the maximal number of functionally independent components.
² Note that X ∩ Z is not necessarily integrable even if X and Z are.
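The backward-shift formulas of Example 18 can be verified numerically: simulate one step of the dynamics and recover the previous state. A plain Python sanity check, independent of the paper's Mathematica tools:

```python
# Example 18 dynamics; the choice xtilde = x5 defines the backward shifts.
def f(x, u):
    x1, x2, x3, x4, x5 = x
    return (x1 + x3, x2 + x5, u, x3 * x4, x1)

x_prev = (1.0, 2.0, 3.0, 4.0, 5.0)
u_prev = 7.0
x = f(x_prev, u_prev)          # current state
xt_prev = x_prev[4]            # xtilde^- = previous x5

x1, x2, x3, x4, x5 = x
backward = (x5, x2 - xt_prev, x1 - x5, x4 / (x1 - x5), xt_prev)
assert backward == x_prev      # the formulas recover the previous state
assert x3 == u_prev            # u^- = x3
print("backward shifts of Example 18 verified")
```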


and u⁻ = x3. Find δ¹ = [x1, x2, x4, x5]^T and compute

m(δ¹) = [[x1, x2, x4, x5, u] ⊕ [x1 + x3, x2 + x5, u, x3x4, x1]]⁻ = [x2 + x5, u, x1, x4]⁻ = [x2, x3, x5, x4/(x1 − x5)].

Observe that δ¹ corresponds to H2 = span_K{dx1, dx2, dx4, dx5}, which is integrable, and m(δ¹) corresponds to H2⁻, where

H2⁻ = span_K{dx1⁻, dx2⁻, dx4⁻, dx5⁻} = span_K{dx5, dx2 − dx̃⁻, d(x4/(x1 − x5)), dx̃⁻} = span_K{dx5, dx2, d(x4/(x1 − x5)), dx̃⁻}.

The only difference between dm(δ¹) and the basis vectors of H2⁻ is that whereas dm(δ¹) contains dx3 = du⁻, H2⁻ contains dx̃⁻ = dx5⁻; all the other components coincide. Furthermore, compute

δ² = δ¹ ⊕ m(δ¹) = [x2, x5, x4/(x1 − x5)]

and

m(δ²) = [[x2, x5, x4/(x1 − x5), u] ⊕ [x1 + x3, x2 + x5, u, x3x4, x1]]⁻ = [x2 + x5, u]⁻ = [x2, x3].

Note that δ² corresponds to the (integrable) H3 = span_K{dx2, dx5, d(x4/(x1 − x5))}. In a similar manner, compute

δ³ = δ² ⊕ m(δ²) = x2

and

m(δ³) = [[x2, u] ⊕ [x1 + x3, x2 + x5, u, x3x4, x1]]⁻ = u⁻ = x3.

Note that δ³ corresponds to the integrable subspace of H4. Finally,

δ⁴ = δ³ ⊕ m(δ³) ≅ const,

corresponding to H∞ = {0}.

5. Accessibility

Note that accessibility is a necessary condition for static state feedback linearizability; therefore we recall below the accessibility definition. A non-constant function α ∈ SX is said to be an autonomous variable of system (1) if there exist a non-constant function F and an integer µ ≥ 1 such that F(α, σ(α), . . . , σ^µ(α)) = 0. Definition 19 below agrees with that given in [9] for analytic systems.

Definition 19. System (1) is said to be accessible if it has no autonomous variable.

Note that the theorem below is valid in the general case, i.e. including non-smooth systems.

Theorem 20. System (1) admits an autonomous variable δ satisfying F(δ, σ(δ)) = 0 iff for some positive integer k, δᵏ⁻¹ ≅ δᵏ ≠ const.

Proof. Sufficiency. Since δᵏ = δᵏ⁻¹ ⊕ m(δᵏ⁻¹) and δᵏ⁻¹ ≅ δᵏ, then (since by the definition of the operation ⊕, α ⊕ β ≥ α) m(δᵏ⁻¹) ≤ δᵏ⁻¹. The last inequality implies by Lemma 11 that the function δᵏ⁻¹ is f-invariant. Then, due to the inequality δ¹ ≤ δᵏ⁻¹, there exists a function fδ such that δᵏ⁻¹(f(x, u)) = fδ(δᵏ⁻¹(x)) or, rewritten alternatively, σ(δᵏ⁻¹) − fδ(δᵏ⁻¹) = 0. The last equality means that δᵏ⁻¹ is an autonomous variable.

Necessity. Suppose the system (1) admits an autonomous variable α, meaning that there exists a function F such that F(α, σ(α)) = 0 holds and F depends explicitly on σ(α). Note that α is an f-invariant function, since by solving generically the implicit equation F(α, σ(α)) = 0 for σ(α) we get that σ(α) = fα(α) holds for some function fα. Due to the definition of the function δ¹ and the operator m, the inequalities δ¹ ≤ α and m(α) ≤ α hold. By the 2nd property of the operator m, the inequality m(δ¹) ≤ m(α) holds. Since m(α) ≤ α, then m(δ¹) ≤ α by transitivity. Since δ² = δ¹ ⊕ m(δ¹), the last inequality and δ¹ ≤ α yield δ² ≤ α. By analogy one may prove that δⁱ ≤ α for all i > 0. When δᵏ⁻¹ ≅ δᵏ for some k, then δᵏ⁻¹ ≤ α ≠ const. □

Proposition 21 ([9]). The analytic system of the form (1) is accessible iff H∞ = {0}.

Proposition 22. The following statements for the analytic system (1) are equivalent:
(i) H∞ ≠ {0};
(ii) for some k, δᵏ⁻¹ ≅ δᵏ ≠ const.

Proof. (i) → (ii). Suppose H∞ ≠ {0}. This means that there exists a shift-invariant (or, alternatively called, f-invariant) function β(x) such that δ¹ ≤ β. Due to Lemma 11,

m(β) ≤ β.   (13)

Next, due to the 2nd property of the operator m, the inequality δ¹ ≤ β implies m(δ¹) ≤ m(β), which by (13) yields m(δ¹) ≤ β. Analogously, one may prove that mⁱ(δ¹) ≤ β for all i ≥ 1. Furthermore, it follows from (10) that δⁱ ≤ β for all i ≥ 1, i.e. the sequence δⁱ is bounded by the non-constant function β. Since δ¹ ≤ δ² ≤ · · · and the sequence is bounded, δᵏ⁻¹ ≅ δᵏ ≠ const for some k.

(ii) → (i). Suppose that for some k, δᵏ⁻¹ ≅ δᵏ ≠ const. This means that the function δ ≅ δᵏ⁻¹ is f-invariant and satisfies δ¹ ≤ δ. Due to the f-invariance of the function δ, there exists a function f∗ such that δ(f(x, u)) = f∗(δ(x), u). Since δ ≥ δ¹, the function δ(f(x, u)) is independent of u. Take z = δ(x); then obviously z⁺ = f∗(z), meaning that the system (1) has an autonomous subsystem or, equivalently, an autonomous variable, and therefore H∞ ≠ {0}, see [9]. □

6. Feedback linearization

Definition 23. A regular static state feedback u = ϑ(x, v) is a mapping ϑ : X × V → U having an inverse with respect to the second argument. System (1) is said to be static state feedback linearizable if, generically, there exist (i) a state coordinate transformation ϕ : X → X and (ii) a regular static state feedback of the form u = ϑ(x, v), such that, in the new coordinates z = ϕ(x), the compensated system reads³ zi⁺ = zi+1 for i = 1, . . . , n − 1 and zn⁺ = v.

3 This form is called the Brunovsky canonical form.
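A minimal numerical illustration of Definition 23, on a hypothetical two-state system that is not one of the paper's examples:

```python
# Hypothetical system: x1+ = x2, x2+ = x1*u is already in the controller
# canonical form (14) with psi(x, u) = x1*u, so the regular static state
# feedback u = theta(x, v) = v/x1 (defined for x1 != 0) puts the closed
# loop into the Brunovsky canonical form z1+ = z2, z2+ = v.
def f(x, u):
    return (x[1], x[0] * u)

def theta(x, v):
    return v / x[0]      # inverse of v = psi(x, u) = x1*u w.r.t. u

x = (2.0, 3.0)
for v in (1.0, -4.0, 0.5):
    x_next = f(x, theta(x, v))
    # compensated dynamics: the new state is (old x2, v)
    assert x_next[0] == x[1]
    assert abs(x_next[1] - v) < 1e-12
    x = x_next
print("closed loop realizes the Brunovsky form")
```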


Obviously, a regular static state feedback does not affect the accessibility (controllability) property of the system, and therefore accessibility is a necessary condition for feedback linearizability. Note that the feedback linearizability property is closely related to the transformability of the system equations into the controller canonical form. Consider a special form of the accessible system (1), the so-called controller canonical form, see [17]:

z1⁺ = z2,
z2⁺ = z3,
. . .
z(n−1)⁺ = zn,
zn⁺ = ψ(z, u).   (14)

The goal of Theorem 24 below is to find conditions, formulated in terms of the algebra of functions, for the system (1), not necessarily smooth, to be transformable into the form (14) using a state coordinate transformation z = ϕ(x). From this form a regular static state feedback solving the feedback linearization problem may easily be found by defining v = ψ(z, u).

Theorem 24. The system (1) can be transformed into the form (14) using a state transformation z = ϕ(x) iff δⁱ ≠ const for i = 1, . . . , n − 1 and δⁿ = const.

Proof. Sufficiency. Define ϕ1 := δⁿ⁻¹, ϕi+1 := M(ϕi) for i = 1, 2, . . . , n − 1, and zi = ϕi(x), i = 1, 2, . . . , n. According to the definition of the operator M, (M(ϕi), ϕi) ∈ ∆, which together with ϕi+1 = M(ϕi) yields (ϕi+1, ϕi) ∈ ∆. Then, by the definition of the binary relation ∆, there exists a function f∗i such that

ϕi(f(x, u)) = f∗i(ϕi+1(x), u)   (15)

for all (x, u) ∈ X × U and i = 1, 2, . . . , n − 1. By Algorithm 12, δ¹ ≤ δ² ≤ · · · ≤ δⁿ⁻¹. Since by (10), δⁿ⁻¹ = δⁿ⁻² ⊕ m(δⁿ⁻²), we have by the definition of the operation ⊕ that δⁿ⁻¹ ≥ m(δⁿ⁻²), and therefore, by the 1st and 5th properties of the operators m and M, the following holds:

ϕ2 := M(ϕ1) = M(δⁿ⁻¹) ≥ M(m(δⁿ⁻²)) ≥ δⁿ⁻² ≥ δ¹.

In a similar manner one may obtain the inequalities ϕ3 = M(ϕ2) ≥ δⁿ⁻³ ≥ δ¹, . . . , ϕn−1 = M(ϕn−2) ≥ δ¹. The definition of the function δ¹ and the inequalities ϕi ≥ δ¹, i = 1, 2, . . . , n − 1, yield that ϕi(f(x, u)) = f∗i(ϕi+1(x), u) does not depend on u, and so ϕi(f(x, u)) = f∗i(ϕi+1(x)). The last equality, (5) and (6) yield f∗i(ϕi+1) = M(ϕi). Since ϕi+1 := M(ϕi), one has f∗i(ϕi+1) = ϕi+1, and then zi⁺ = ϕi(f(x, u)) = ϕi+1(x) = zi+1 for i = 1, 2, . . . , n − 1. Since the function ϕn does not satisfy the condition ϕn ≥ δ¹, the dynamics of the variable zn have the general form zn⁺ = ϕn(f(x, u)) = ψ(z, u).

Necessity. Suppose the system (1) can be transformed into the canonical form (14), i.e. there exists a state transformation ϕ : X → Z such that for i = 1, . . . , n − 1

zi⁺ = ϕi(x⁺) = ϕi(f(x, u)) = ϕi+1(x) = zi+1   (16)

and

zn⁺ = ϕn(x⁺) = ψ(z, u).   (17)

The functions δⁱ for the transformed system (16)–(17) are δ̃¹ = (z1, . . . , zn−1)^T, δ̃² = (z1, . . . , zn−2)^T, . . . , δ̃ⁿ⁻¹ = z1, δ̃ⁿ = const. Since by Proposition 13 the functions δⁱ are invariant with respect to the state transformation ϕ, in the sense that the state transformation results in an equivalent set of functions, the proof is complete. □

Theorem 25 ([9]). The analytic system of the form (1) is static state feedback linearizable iff
(i) for k = 1, . . . , n, Hk is completely integrable;
(ii) H∞ := Hn+1 = {0}.

Proposition 26. The following two conditions for the analytic system (1) are equivalent:
(i) H∞ = {0} and Hk, for k = 1, . . . , n, are completely integrable;
(ii) δⁱ ≠ const for i = 1, . . . , n − 1 and δⁿ = const.

Proof. (ii) → (i). Suppose that δⁱ ≠ const for i = 1, 2, . . . , n − 1 and δⁿ = const. According to Theorem 24, in this case the system (1) can be transformed into the controller canonical form (14), and therefore, by the results of [17], H∞ = {0} and H1, . . . , Hn are completely integrable.

(i) → (ii). Consider the sequence of functions δ¹ ≤ δ² ≤ · · ·. Since the sequence converges in at most n steps, δᵏ⁺¹ ≅ δᵏ for some k. If δᵏ ≠ const, then it follows from Proposition 22 that H∞ ≠ {0}, which contradicts condition (i). Therefore δᵏ = const for some k. According to Theorem 24, the system (1) is static state feedback linearizable, and therefore, due to the structure of the Brunovsky canonical form, k = n. □

7. Computations and examples

For the computations a set of Mathematica functions has been developed, implementing the operations, algorithms and methods of this paper. These functions are part of the previously developed package NLControl, devoted to modeling, analysis and synthesis problems of nonlinear control systems [18]. The package also includes functions that address feedback linearization and accessibility using the tools of differential forms. Note that the part of the package related to the results and methods of this paper is made available through the web site [19], in such a manner that the user only needs an internet connection and a browser to run the functions; a Mathematica license is not necessary. More information about the implementation ideology and details can be found in [12]. The interested reader may run all the examples of this paper using the respective Mathematica functions.⁴

Example 27 (Continuation of Example 18). Since for this example n = 5 but already δ⁴ ≅ const, the system admits only partial linearization.

Example 28. Consider the system

x1⁺ = x1 + x3,  x2⁺ = x2,  x3⁺ = u,  x4⁺ = x3x4,  x5⁺ = x1.

4 To access the linearization and accessibility functions, based on the tools of differential forms, one has to choose ‘Main page’ from the top menu, then ‘Discrete’ from the left menu and finally either ‘Linearization’ or ‘Accessibility’ from the submenu.
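Before going through the δ-computations for Example 28, the obstruction to accessibility can already be seen by simulation: the second equation reads x2+ = x2, so x2 is invariant under every input sequence. A minimal Python sketch (our own illustration, not part of the paper's Mathematica toolset) makes this concrete:

```python
# Example 28 sanity check: x2+ = x2, so x2 is autonomous and no input
# can move it -- the simulation counterpart of H_infty = spanK{dx2} != {0}.
import random

def f(x, u):
    """State transition map of Example 28."""
    x1, x2, x3, x4, x5 = x
    return (x1 + x3, x2, u, x3 * x4, x1)

random.seed(0)
x = (1.0, 2.0, 3.0, 4.0, 5.0)
for _ in range(50):
    x = f(x, random.uniform(-10.0, 10.0))  # arbitrary input sequence

assert x[1] == 2.0  # x2 never changed -> the system is not accessible
```

The same invariance reappears below as δ^4 ≅ δ^3 = x2 in the function-algebra computations.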


Find δ^1 = [x1, x2, x4, x5]^T and compute

m(δ^1) = [[x1, x2, x4, x5, u] ⊕ [x1 + x3, x2, u, x3 x4, x1]]^−
= [x2, u, x1, x4]^− = (x2, x3, x5, x4/(x1 − x5)).

Note that δ^1 corresponds to H2 = spanK{dx1, dx2, dx4, dx5}. Furthermore,

δ^2 = δ^1 ⊕ m(δ^1) = (x2, x5, x4/(x1 − x5))

and

m(δ^2) = [[x2, x5, x4/(x1 − x5), u] ⊕ [x1 + x3, x2, u, x3 x4, x1]]^−
= [x2, u]^− = (x2, x3).

Note that δ^2 corresponds to H3 = spanK{dx2, dx5, d(x4/(x1 − x5))}.

In a similar manner, compute

δ^3 = δ^2 ⊕ m(δ^2) = x2

and

m(δ^3) = [[x2, u] ⊕ [x1 + x3, x2, u, x3 x4, x1]]^− = [x2, u]^− = (x2, x3).

Note that δ^3 corresponds to the integrable subspace of H4, i.e. Ĥ4 = spanK{dx2}. Therefore

δ^4 = δ^3 ⊕ m(δ^3) = x2.

Finally, note that δ^4 ≅ δ^3 = x2 corresponds to the fact that H∞ = H4 = spanK{dx2}. Since δ^3 ≅ δ^4 = x2, the system is not accessible.

Example 29. Consider the system with non-smooth state transition map f(x, u),

x1+ = x2 u
x2+ = x1 sign x2
x3+ = u.

Compute

δ^1 := α^0 = (x1/x3, x2)

and

m(δ^1) = [[δ^1, u] ⊕ f(x, u)]^− = [x2 u, u]^− = (x1, x3).

So,

δ^2 = δ^1 ⊕ m(δ^1) = (x1/x3, x2) ⊕ (x1, x3) = x1/x3.  (18)

Furthermore, compute

m(δ^2) = [[δ^2, u] ⊕ f(x, u)]^− = u^− = x3

and

δ^3 = δ^2 ⊕ m(δ^2) = x1/x3 ⊕ x3 = const.

Define, according to the proof of Theorem 24 (sufficiency part),

z1 := δ^2 = x1/x3
z2 := z1+ = M(δ^2) = x2
z3 := z2+ = M^2(δ^2) = x1 sign x2

and find the state equations

z1+ = z2
z2+ = z3
z3+ = z2 u sign z3,

which are linearizable by state feedback. Note that the inverse of the state coordinate change is given by

x1 = z3/sign z2
x2 = z2
x3 = z3/(z1 sign z2)

and is defined everywhere except for the case when z1 = 0 or z2 = 0.

Example 30. Consider the non-smooth simplified sampled-data model

x1+ = x1 + k1 x2
x2+ = k2 x2 − k3 sign(x2) + k4 (x3 − k5 x1)
x3+ = x3 + k6 x4
x4+ = k7 x4 + k8 x5 − k9 sign(x4) + k10 (x3 − k5 x1)
x5+ = k11 x4 + k12 x5 + u

of the general electric servoactuator of manipulation robots, with the Coulomb friction taken into account [20]. The model variables have the following meaning: x1 and x2 are the output rotation angle and velocity at the reducer output shaft, respectively; x3 and x4 are the output rotation angle and velocity at the motor output shaft, respectively; x5 is the current through the servoactuator windings. The model coefficients k1, …, k12 characterize the structural features of the robot.

Find δ^1(x) = [x1, x2, x3, x4]^T and compute

m(δ^1(x)) = [[x1, x2, x3, x4, u]^T ⊕ [x1 + k1 x2, k2 x2 − k3 sign(x2) + k4 (x3 − k5 x1), x3 + k6 x4, k7 x4 + k8 x5 − k9 sign(x4) + k10 (x3 − k5 x1), k11 x4 + k12 x5 + u]^T]^−
= [[x1 + k1 x2, k2 x2 − k3 sign(x2) + k4 (x3 − k5 x1), x3 + k6 x4, k12 (k10 (x3 − k5 x1) − k9 sign(x4) + k7 x4) − k8 (u + k11 x4)]^T]^−
= [x1, x2, x3, k12 x4 − k8 x5]^T.

So,

δ^2(x) = (δ^1 ⊕ m(δ^1))(x) = [x1, x2, x3]^T.  (19)

By analogy, one may compute

δ^3(x) = [x1, x2]^T,  δ^4(x) = x1,  δ^5 = const.

Next define, according to the proof of Theorem 24, the non-smooth coordinate transformation

z1 = δ^4(x) = x1,
z2 = M(δ^4)(x) = x1 + k1 x2,
z3 = M^2(δ^4)(x) = x1 + k1 x2 + k1 (k2 x2 − k3 sign(x2) + k4 (x3 − k5 x1)),  (20)
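The coordinate change of Example 29 and the resulting chain form can be sanity-checked numerically. The following Python sketch (our own verification script, not from the paper's toolset) draws random states and inputs away from the singular set z1 = 0, z2 = 0 and confirms z1+ = z2, z2+ = z3, z3+ = z2 u sign z3, as well as the inverse transformation:

```python
import math
import random

def sign(a):
    return (a > 0) - (a < 0)

def f(x, u):
    """Non-smooth state transition map of Example 29."""
    x1, x2, x3 = x
    return (x2 * u, x1 * sign(x2), u)

def z_of(x):
    """Coordinate change: z1 = x1/x3, z2 = x2, z3 = x1*sign(x2)."""
    x1, x2, x3 = x
    return (x1 / x3, x2, x1 * sign(x2))

def x_of(z):
    """Inverse change, defined for z1 != 0 and z2 != 0."""
    z1, z2, z3 = z
    return (z3 / sign(z2), z2, z3 / (z1 * sign(z2)))

random.seed(1)
for _ in range(100):
    x = tuple(random.uniform(0.5, 3.0) for _ in range(3))
    u = random.uniform(0.5, 3.0)
    z = z_of(x)
    zp = z_of(f(x, u))                    # z at the next time step
    assert math.isclose(zp[0], z[1])      # z1+ = z2
    assert math.isclose(zp[1], z[2])      # z2+ = z3
    assert math.isclose(zp[2], z[1] * u * sign(z[2]))  # z3+ = z2*u*sign(z3)
    assert all(math.isclose(a, b) for a, b in zip(x_of(z), x))  # inverse
```

The positive sampling range simply keeps the test points away from the singularities of the non-smooth transformation; the identities themselves hold wherever the transformation is defined.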

and⁵

z4 = M^3(δ^4)(x),  z5 = M^4(δ^4)(x),  (21)

yielding

z1+ = z2
z2+ = z3
z3+ = z4
z4+ = z5
z5+ = α(z) + β(z) u.

8. Conclusions

For discrete-time nonlinear control systems, the accessibility property and the static state feedback linearization problem have been readdressed in terms of the new tools, called the algebra of functions. Unlike the differential geometric methods, the new tools also allow the study of non-smooth systems. For simplicity of presentation, only the single-input case is considered. The new results were compared with the existing solvability conditions and solutions for the case when the function f in (1) is analytic; the explicit relationships are given and demonstrated on numerous examples. The alternative solution is not based on the 'tangent linearized system' description but is found directly by manipulating the functions at the level of the system equations. Therefore, for finding the solution there is no need to integrate differential one-forms. This does not necessarily mean that the solution is easier to find, since the computations are, in general, more complicated. For analytic systems the computations may be handled with the help of codistributions of 1-forms. Therefore, one may conclude that for analytic systems the new approach probably provides no advantages. However, the new approach provides alternative tools, and for problems not yet solved via differential geometric methods, the new approach is as good as the existing approaches. For example, recently the solution for the DDDPO was given in [21]. The extension of the results to the continuous-time case is not immediate, since the inequality δ^{k−1} ≥ m(δ^{k−2}) in the continuous-time case, unlike the discrete-time case, does not yield the inequality M(δ^{k−1}) ≥ M(m(δ^{k−2})), on which the proof of Theorem 24 relied.

⁵ Note that the expressions for z4 and z5, as well as α(z) and β(z), are too long to be given here explicitly, but they can be found by the Mathematica function available on the site [19]. One has to choose 'Discrete' from the left-hand menu and then select the item 'Operators m and M'.
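Although z4, z5 and α(z), β(z) are too long to display, the recursion behind them, each new coordinate being the previous one propagated through f, is easy to check numerically. A Python sketch (ours; the coefficient values k1, …, k12 are arbitrary placeholders, not physical data from [20]) verifying z1+ = z2 and z2+ = z3 for Example 30:

```python
import math
import random

def sign(a):
    return (a > 0) - (a < 0)

random.seed(2)
k = [random.uniform(0.5, 2.0) for _ in range(13)]  # k[1]..k[12], placeholders

def f(x, u):
    """Servoactuator model of Example 30 (coefficients are placeholders)."""
    x1, x2, x3, x4, x5 = x
    return (x1 + k[1] * x2,
            k[2] * x2 - k[3] * sign(x2) + k[4] * (x3 - k[5] * x1),
            x3 + k[6] * x4,
            k[7] * x4 + k[8] * x5 - k[9] * sign(x4) + k[10] * (x3 - k[5] * x1),
            k[11] * x4 + k[12] * x5 + u)

def z1(x): return x[0]                       # z1 = delta^4 = x1
def z2(x): return x[0] + k[1] * x[1]         # z2 = M(delta^4)
def z3(x):                                   # z3 = M^2(delta^4)
    x1, x2, x3 = x[0], x[1], x[2]
    return x1 + k[1] * x2 + k[1] * (k[2] * x2 - k[3] * sign(x2)
                                    + k[4] * (x3 - k[5] * x1))

for _ in range(100):
    x = tuple(random.uniform(-3.0, 3.0) for _ in range(5))
    u = random.uniform(-3.0, 3.0)
    xp = f(x, u)
    assert math.isclose(z1(xp), z2(x))  # z1+ = z2
    assert math.isclose(z2(xp), z3(x))  # z2+ = z3
```

The remaining identities z3+ = z4, z4+ = z5 and z5+ = α(z) + β(z)u would be checked the same way once z4, z5 are generated, e.g. by the 'Operators m and M' function of the NLControl site [19].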

References

[1] A. Zhirabok, A. Shumsky, The Algebraic Methods for Analysis of Nonlinear Dynamic Systems, Dalnauka, Vladivostok, 2008 (in Russian).
[2] J. Hartmanis, R. Stearns, The Algebraic Structure Theory of Sequential Machines, Prentice-Hall, New York, 1966.
[3] A. Shumsky, A. Zhirabok, Method of accommodation to the defect in finite automata, Automation and Remote Control 71 (5) (2010) 837–846.
[4] A. Shumsky, A. Zhirabok, Unified approach to the problem of fault diagnosis, in: Proc. Conf. on Control and Fault Tolerant Systems, Nice, France, 2010, pp. 450–455.
[5] G. Jayaraman, H. Chizeck, Feedback linearization of discrete-time systems, in: Proc. of the 32nd IEEE Conference on Decision and Control, San Antonio, Texas, 1993, pp. 2372–2977.
[6] K. Nam, Linearization of discrete-time nonlinear systems and a canonical structure, IEEE Transactions on Automatic Control 34 (1989) 119–122.
[7] J. Grizzle, Feedback linearization of discrete-time systems, in: Lecture Notes in Control and Information Sciences, vol. 83, Springer, New York, 1986, pp. 273–281.
[8] H. Nijmeijer, A. van der Schaft, Nonlinear Dynamical Control Systems, Springer, New York, 1990.
[9] E. Aranda-Bricaire, Ü. Kotta, C. Moog, Linearization of discrete-time systems, SIAM Journal on Control and Optimization 34 (6) (1996) 1999–2023.
[10] B. Jakubczyk, Feedback linearization of discrete-time systems, Systems and Control Letters 9 (1987) 411–416.
[11] C. Simões, H. Nijmeijer, Nonsmooth stabilizability and feedback linearization of discrete-time systems, International Journal of Robust and Nonlinear Control 6 (1996) 171–188.
[12] V. Kaparin, Ü. Kotta, A. Shumsky, M. Tõnso, A. Zhirabok, Implementation of the tools of functions' algebra: first steps, in: Proceedings of the 7th Vienna International Conference on Mathematical Modelling (MATHMOD), Vienna, Austria, 2012, pp. 177–182.
[13] A. Shumsky, Model of faults for discrete systems and its application to functional diagnosis problem, Engineering Simulation 12 (4) (1988) 56–61.
[14] A. Shumsky, A. Zhirabok, Unified approach to the problem of full decoupling via output feedback, European Journal of Control 16 (4) (2010) 313–325.
[15] J. Grizzle, A linear algebraic framework for the analysis of discrete-time nonlinear systems, SIAM Journal on Control and Optimization 31 (1993) 1026–1044.
[16] M. Halás, Ü. Kotta, Z. Li, H. Wang, C. Yuan, Submersive rational difference systems and their accessibility, in: J.P. May (Ed.), Proceedings of the 2009 International Symposium on Symbolic and Algebraic Computation, ACM, New York, 2009, pp. 175–182.
[17] Ü. Kotta, Controller and controllability canonical forms for discrete-time nonlinear systems, Proceedings of the Estonian Academy of Sciences, Physics Mathematics 54 (1) (2005) 55–62.
[18] M. Tõnso, H. Rennik, Ü. Kotta, WebMathematica-based tools for discrete-time nonlinear control systems, Proceedings of the Estonian Academy of Sciences 58 (4) (2009) 224–240.
[19] Institute of Cybernetics at Tallinn University of Technology, NLControl website, http://webmathematica.cc.ioc.ee/webmathematica/NLControl/funcalg.
[20] V. Filaretov, M. Vukobratovic, A. Zhirabok, Parity relation approach to fault diagnosis in manipulation robots, Mechatronics 13 (2003) 141–152.
[21] Ü. Kotta, A. Shumsky, A. Zhirabok, Output feedback disturbance decoupling in discrete-time nonlinear systems, in: Proceedings of the 18th IFAC World Congress, Milano, Italy, 2011, pp. 239–244.