Engineering Optimization Vol. 38, No. 1, January 2006, 113–127

Nonlinear programming-based sliding mode control with an application in the stabilization of an Acrobot

AHMET YAZICI†, ABDURRAHMAN KARAMANCIOĞLU*† and RAFAIL N. GASIMOV‡

†Electrical Engineering Department, Osmangazi University, Meselik Campus, 26480 Eskişehir, Turkey
‡Industrial Engineering Department, Osmangazi University, 26480 Eskişehir, Turkey

(Received 29 September 2004; revised 11 January 2005; in final form 22 June 2005)

A new approach to compute optimal forcing functions for nonlinear dynamic systems expressed by differential equations and stemming from sliding mode control (SMC) problems is presented. The SMC input generates a desired trajectory in two phases. In the first phase, the input is designed to steer the state of the nonlinear dynamic system towards a stable (hyper)surface (in practice, generally a subspace) in the state space. The second phase starts once the state enters a prespecified neighbourhood of the surface. In this phase, the control input is required to drive the system state towards the origin while keeping it in this neighbourhood. It is shown that, by appropriate selection of the objective functions and the constraints, it is possible to model both phases of this problem as constrained optimization problems that provide an optimal solution direction and thus reduce chattering. Generally, these problems are not convex and therefore require a special solution approach. The modified subgradient algorithm, which serves for solving a large class of nonconvex optimization problems, is used here for solving the optimization problems so constructed. This article also proposes a generalized optimization problem with a unified objective function obtained by taking a weighted sum of two objectives representing the two phases. Validity of the approach is illustrated by stabilizing a two-link planar robot manipulator.
Keywords: Sliding mode control; Nonlinear programming; Modified subgradient method; Stabilization of an Acrobot

1. Introduction

In this article, a novel method to compute sliding mode control (SMC) inputs using a nonlinear programming (NLP) approach is proposed. It is shown that control of a large class of systems, while satisfying various performance criteria, is possible using this approach. Validity of the approach is illustrated by stabilizing the Acrobot, a two-link planar robot manipulator. SMC aims to generate a desired trajectory for a given system by using an input that may be a discontinuous function of the system states. In terms of the differential equations

*Corresponding author. Email: [email protected]

Engineering Optimization ISSN 0305-215X print/ISSN 1029-0273 online © 2006 Taylor & Francis http://www.tandf.co.uk/journals DOI: 10.1080/03052150500229525


terminology, this is equivalent to obtaining a desired solution for the given differential equation by using a discontinuous forcing function of the dependent variables. The SMC input generates the desired trajectory in two phases. In the first phase, the input is designed to steer the state of the nonlinear dynamic system towards a stable (hyper)surface (in practice, generally a subspace) in the state space. The second phase starts once the state enters a prespecified neighbourhood of the surface. In this phase, the control input is required to drive the system state towards the origin while keeping it in this neighbourhood. SMC techniques have received increasing attention from researchers since the survey paper of Utkin (1977). Early research focused on the analysis of second-order systems using graphical notions. In the following decades, SMC techniques were extended to broader classes of systems (see DeCarlo et al. 1998, Slotine and Sastry 1983, Young et al. 1999, and the references therein). It has been emphasized in the literature that the superiority of SMC is most apparent in the presence of system modelling errors and disturbances. A significant amount of research has focused on mitigating the potential drawbacks of the SMC approach. A major drawback is chattering, that is, zigzagging of the solution trajectories about the sliding surface. Finding feedback coefficients that yield negligible chattering while satisfying a given set of constraints is currently an active research subject. In this article, a solution to this problem in the nonlinear programming framework is proposed: the SMC problem is modelled as a nonlinear programming problem, and it is shown that by choosing the objective function and the constraints appropriately, this model provides an optimal feasible solution direction and thus reduces chattering.
Under some objective functions and constraints, the problem may become nonconvex, in which case classical solution methods do not apply. Considering this, the modified subgradient algorithm (MSA) (Gasimov 2002) is used for solving SMC problems with potentially nonconvex objective functions and constraints. This approach constructs the dual problem and solves it without any duality gap for a large class of constrained problems. The algorithm requires no convexity or differentiability assumptions and is therefore applicable to a large class of problems. The gradient and subgradient methods and their variants are investigated in Bazaraa et al. (1993) and Bertsekas (1995). The duality gap, a major issue in nonlinear programming, has been investigated, and theoretical tools for zero-duality-gap conditions have been developed extensively, in Azimov and Gasimov (1999, 2002), Gasimov (2002), Gasimov and Rubinov (2004), Rockafellar and Wets (1998), Rubinov and Gasimov (2003) and Rubinov et al. (2003). A significant line of research considering optimal control problems in a nonlinear programming framework is that of Betts (2001), in which the optimal control problem is viewed as an infinite-dimensional extension of the nonlinear programming problem. Since practical methods for solving these problems require Newton-based iterations with a finite set of variables and constraints, the infinite-dimensional problem is converted to a finite-dimensional approximation. It is shown in Betts (2001) that the problem so formed is 'large and sparse', and iterative approaches that exploit these properties are proposed. Demyanov (to appear) introduced tools for transforming problems in the differential equations domain into the NLP domain and used a penalty function approach for the related variational analysis problems.
In this article, the use of the nonlinear programming approach for an SMC problem is illustrated on the Acrobot system, a two-link planar robot with a single actuator at the elbow (Brown and Passino 1997, Spong 1995) (figure 1). A control input for this robot manipulator is designed to keep it in a vertically upright position. The designer may select the torque input only at the elbow (the second joint) and has no control over the shoulder (the first joint). That is, a nonlinear programming-based SMC is used to stabilize two


Figure 1. The Acrobot system.

joint angles of the Acrobot by using an input function only at its elbow. To solve this problem, two different approaches are proposed. In the first approach, called the main algorithm, the problem is partitioned into two phases (the reaching and sliding phases), each associated with a sub-algorithm. The second approach, called the generalized algorithm, weights the two phases of the main algorithm to generate a unified solution procedure. The Acrobot system, which has a very rich nonlinear dynamic structure, can be viewed as a benchmark for the performance of a control strategy (Brown and Passino 1997). It is shown that the approaches presented in this article perform well both in terms of chattering and in terms of speedy achievement of the control objectives. A similar approach has proved successful in the stabilization of inverted pendulum and Acrobot systems (Yazıcı et al. 2003, 2004).

2. Problem statement and solution approach

In this section, the SMC problem is briefly introduced, and thereafter the approach of this work and its major tool, the MSA, are presented. Consider the single-input nth-order nonlinear differential equation

Ẋ = a(X, u)    (1)

where X ∈ Rⁿ is a state vector, u a scalar control input, and a an n-dimensional vector function. Each entry of a is continuous with continuous bounded derivatives with respect to the components of X. The system is assumed controllable (i.e. for every pair of initial and final states (X₀, Xf), there exists a function u that drives X₀ to Xf in finite time). For clarity of presentation, the single-input case is considered in this work, because this case is relatively mature in the SMC literature. However, it will be seen that the nature of the approach is potentially extendable to the multi-input case. The classical strategy in SMC consists of two steps: first, choose a stable surface of Rⁿ. Secondly, design a control input that steers the trajectory of equation (1), at first to a prespecified


neighbourhood of a stable surface, and then, once this neighbourhood is reached, towards the origin while keeping the states in the neighbourhood of the surface. Choosing a stable surface guarantees that every trajectory restricted to the neighbourhood of the surface reaches the origin asymptotically (DeCarlo et al. 1998, Slotine and Sastry 1983, Utkin 1977, Young et al. 1999). In this article, two different algorithms that drive the trajectories to the origin using these two steps are presented.

SMC theory relies on the existence of stable surfaces. Systematic approaches to determine stable surfaces exist in the literature for linear systems and some classes of nonlinear systems (DeCarlo et al. 1998, Slotine and Sastry 1983, Utkin 1977, Young et al. 1999). For ease of design, surfaces are generally restricted to subspaces, which is also the preference in this article. Consider an (n − 1)-dimensional subspace of Rⁿ

{X ∈ Rⁿ : GX = 0}

where G is a row matrix. Also define s := GX for each X ∈ Rⁿ and consider the positive definite function

V = ½ s²    (2)

This function decreases as X gets closer to the subspace and attains its minimum value, zero, on the subspace. Let an input u be designed for system (1) so that the time derivative of V becomes negative. Negativity of dV/dt on some domain of t means that V decreases there; hence the trajectory approaches the subspace. The common form of u used in the literature that yields negative dV/dt is

u = u_eq(X) + u₁(X) + · · · + u_r(X) + sgn(s)    (3)

where u_eq is called the equivalent control input, a fixed function of X; each of the following r terms switches between fixed functions of X to cancel possibly positive terms in dV/dt; and the last term ensures reaching the subspace in finite time. There are numerous methods for computing the input u in the literature (see, for instance, DeCarlo et al. 1998, Slotine and Sastry 1983, Utkin 1977, Young et al. 1999). For simplicity of presentation, even though it is not required by the algorithms presented in this article, an input of the form u = KX is considered for some K = [k₁ . . . kₙ] ∈ Rⁿ. In physical systems, u and K are bounded; it is therefore sensible to impose bounds on them, for example |u| ≤ α and K ∈ Ω for some α ∈ R and some Ω that is a compact, possibly nonconvex, subset of Rⁿ.

The first phase of the main algorithm is reaching a prespecified neighbourhood of the sliding subspace. The reaching phase continues as long as the current state of the trajectory satisfies |s| > δ. The positive constant δ is the width of the neighbourhood of the subspace; its value is selected by the designer according to the system performance specifications. In the reaching phase, whose sub-algorithm is given subsequently, minimization of the objective function dV/dt serves for speedy arrival in the neighbourhood of the sliding subspace. The quantity η in the first constraint is chosen as a positive number; this ensures that V strictly decreases, so the trajectory approaches the subspace. The factor s² enlarges the interval of feasible dV/dt values when the trajectory is far from the sliding subspace and shrinks it when the trajectory is close. In addition to the upper bound, a lower bound −γs² is imposed on dV/dt, where γ is a positive real number, to avoid excessively fast approach rates.

Considering physical realities, it is realistic to impose an upper limit on the size of the input and bounds on the set Ω of admissible feedback coefficients.
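The reaching-phase computation just described can be sketched numerically. The two-state dynamics a, surface matrix G, and parameter values below are illustrative placeholders (not the Acrobot model), and a brute-force search over a small discrete Ω stands in for the MSA:

```python
import numpy as np

# Illustrative second-order system Xdot = a(X, u); NOT the Acrobot dynamics.
def a(X, u):
    x1, x2 = X
    return np.array([x2, -np.sin(x1) + u])

G = np.array([1.0, 1.0])            # sliding subspace {X : GX = 0}, s = G @ X
eta, gamma, alpha = 0.1, 50.0, 5.0  # illustrative bounds

def dV_dt(X, K):
    """dV/dt = s * sdot = (GX) * (G a(X, KX))."""
    return (G @ X) * (G @ a(X, K @ X))

def solve_reaching(X, grid):
    """Brute-force stand-in for the MSA: best feasible K for the reaching problem."""
    s2 = (G @ X) ** 2
    best = None
    for K in grid:
        if abs(K @ X) > alpha:                 # input bound |u| <= alpha
            continue
        r = dV_dt(X, K)
        if -gamma * s2 <= r <= -eta * s2:      # approach-rate bounds
            if best is None or r < best[1]:
                best = (K, r)
    return best

# Omega: a compact (here discrete) set of candidate feedback gains
grid = [np.array([k1, k2]) for k1 in np.arange(-5.0, 5.5, 0.5)
        for k2 in np.arange(-5.0, 5.5, 0.5)]
X = np.array([1.0, 0.0])
K, rate = solve_reaching(X, grid)   # K forms u = K @ X over the next interval
```

The returned K is then used only for one updating interval, after which the search is repeated from the new state, mirroring Steps R1 and R2.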


This set is required to be compact; it may contain discrete or continuous elements. The reaching phase sub-algorithm is as follows:

Step R1 Using the MSA, solve the following problem for K:

min_K dV/dt    (4)

subject to
    −γs² ≤ dV/dt ≤ −ηs²
    u ∈ {KX : |KX| ≤ α}
    K ∈ Ω ⊆ Rⁿ

Step R2 Use the solution K of this problem to form u = KX and run system (1) from t to t + Δt.

Step R3 If |s| > δ, go to Step R1; else if |s| ≤ δ, go to the sliding phase.

The objective function and the constraints contain the time derivative of V. This term can be expressed in terms of known quantities and the nonlinear programming variables:

dV/dt = s ṡ = GX GẊ = GX Ga(X, u) = GX Ga(X, KX)

Using this in equation (4), it can be expressed as a nonlinear programming problem free of derivative terms:

min_K GX Ga(X, KX)

subject to
    −γs² ≤ GX Ga(X, KX) ≤ −ηs²
    u ∈ {KX : |KX| ≤ α}
    K ∈ Ω ⊆ Rⁿ

The sliding phase activates when the trajectory enters the neighbourhood |s| ≤ δ. In the sliding phase, the major problem is chattering (i.e. frequent zigzags about the sliding subspace). In the literature, there are two different approaches to eliminating chattering. The first is based on approximating the control input by a continuous function in the neighbourhood of the sliding subspace (see, for instance, DeCarlo et al. 1998, Young et al. 1999). The other method first augments the differential equation characterizing the system, then obtains the control input for the augmented system, and finally integrates this to obtain the control input for the original system (Bartolini et al. 2000). In this work, the objective function and the constraints are designed so that the states approach the sliding subspace and the origin in a smooth, almost zigzag-free manner; this is illustrated for the Acrobot in a later section. When the objective function given subsequently is minimized, u is selected so that the state field vector points towards both the sliding subspace and the origin. Defining the vector w in the objective function as the projection of X on the sliding subspace yields this. In the sliding phase, the form of the constraints is kept as in


the reaching phase; however, additional constraints can be added if required by the system specifications. The sliding phase sub-algorithm is as follows:

Step S1 Using the MSA, solve the following problem for K:

min_K ‖ w + w (wᵀ(Ẋ/‖Ẋ‖))/‖w‖ ‖    (5)

subject to
    −γs² ≤ dV/dt ≤ −ηs²
    u ∈ {KX : |KX| ≤ α}
    K ∈ Ω ⊆ Rⁿ

Step S2 Use the solution K of this problem to form u = KX and run system (1) from t to t + Δt.

Step S3 If |s| ≤ δ, go to Step S1; else if |s| > δ, go to the reaching phase.

A special case in the sliding algorithm occurs when w = 0, for which the objective function in equation (5) equals zero. This case is handled by switching to the reaching phase sub-algorithm, in which this case is well defined and suitable for the control objectives. As in the reaching phase problem, the objective function and the constraints of the sliding phase can be freed of time derivatives as follows:

min_K ‖ proj_ℵ(G)X + proj_ℵ(G)X ((proj_ℵ(G)X)ᵀ (a(X, KX)/‖a(X, KX)‖))/‖proj_ℵ(G)X‖ ‖

subject to
    −γs² ≤ GX Ga(X, KX) ≤ −ηs²
    u ∈ {KX : |KX| ≤ α}
    K ∈ Ω ⊆ Rⁿ

where ℵ(G) denotes the null space of G and proj_ℵ(G)X denotes the projection of X on ℵ(G). In the reaching and sliding phases, the updating interval Δt is determined from the smallest time constant of the differential equation or from the characteristics of the physical system that gives rise to it. The nonlinear programming problems of the reaching and sliding phases can be expressed in standard form as the following problem (P):

min_K f(K)    (6)

subject to
    h(K) = 0
    K ∈ Ω

where h(K) is the constraint vector. In the sequel, problem (6) is called the primal problem.

It is now time to present the main algorithm. Let the initial time and state be t₀ and X₀, respectively. Depending on the location of the states in the state space, the main algorithm uses one of the two sub-algorithms to determine the input u: the reaching phase sub-algorithm when the states are outside a prespecified neighbourhood of the sliding subspace, and the sliding phase sub-algorithm when they are inside it. The main algorithm is as follows:

Step M1 (Initialization step) Assign initial values to the time t and the state X, i.e. t ← t₀, X ← X₀.


Step M2 (δ-checking step) Check whether |s| ≤ δ. If |s| > δ, execute the reaching phase sub-algorithm; else execute the sliding phase sub-algorithm.

Step M3 Update the time and the state and go to Step M2.

The algorithm ends when a prespecified stopping criterion is satisfied: for instance, when the trajectory reaches a prespecified neighbourhood of the origin. In the sequel, this article presents the MSA, which solves the dual problem corresponding to the primal problem (6).

2.1 The MSA

In general, the first step in dealing with a problem of the form (6) is to convert it into an unconstrained form, the so-called dual problem. In other words, forming the dual problem corresponding to equation (6) is the starting point. Duals of constrained problems can be formed by using various Lagrange functions (Azimov and Gasimov 1999, Rockafellar and Wets 1998, Rubinov and Gasimov 2003). Selection of a fitting Lagrange function depends on the problem at hand: it must guarantee that the optimal value of the dual problem equals that of the primal problem. Classical Lagrange functions guarantee this for convex problems. However, if the objective function or some of the constraints are nonconvex, the classical Lagrange function does not, and its use leads to a so-called duality gap (i.e. a difference between the optimal values of the primal and dual problems). Therefore, for nonconvex problems, suitably selected augmented Lagrange functions are useful. In this article, considering the possibly nonconvex nature of the problem, the dual problem is formed by using a sharp augmented Lagrange function, and the MSA is used for solving it. The MSA serves for solving dual problems constructed via the sharp Lagrangian.
This algorithm was presented in Gasimov (2002). Conditions guaranteeing a zero duality gap have been considered in Azimov and Gasimov (1999, 2002), Gasimov (2002), Gasimov and Rubinov (2004), Rockafellar and Wets (1998), and Rubinov et al. (2003). The following theorem, presented in Gasimov (2002) (see Theorem 4), states some of these conditions:

THEOREM 1 Suppose in (P) that f and h are continuous, Ω is compact, and a feasible solution exists. Then inf P = sup P*, where P* is dual to P, and there exists a solution to P. Furthermore, in this case the dual function H in (P*) is concave and finite everywhere on Rᵐ × R₊, so this maximization problem is effectively unconstrained.

The dual function H is defined subsequently. Clearly, the objective functions and the constraints in this article satisfy the hypotheses of this theorem. The sharp Lagrangian for problem (6) is defined as

L(K, v, c) = f(K) + c‖h(K)‖ − vᵀh(K)    (7)

where v ∈ Rᵐ and c ∈ R₊. Defining the dual function as

H(v, c) = min_{K∈Ω} L(K, v, c)    (8)

the dual problem (P*) is

max_{(v,c)∈Rᵐ×R₊} H(v, c)    (9)
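As a concrete illustration of equations (7)–(9), the sharp Lagrangian and the dual function can be evaluated on a small nonconvex problem. The objective f, constraint h, and discrete set Ω below are invented for illustration only:

```python
import numpy as np

# Toy nonconvex primal problem: min f(K) s.t. h(K) = 0, K in a compact set.
Omega = np.linspace(-2.0, 2.0, 401)             # discrete stand-in for Omega
f = lambda K: np.cos(3.0 * K) + 0.1 * K ** 2    # nonconvex objective
h = lambda K: K ** 2 - 1.0                      # equality constraint, roots at +/-1

def sharp_lagrangian(K, v, c):
    """L(K, v, c) = f(K) + c*||h(K)|| - v'h(K), scalar-constraint case."""
    return f(K) + c * abs(h(K)) - v * h(K)

def H(v, c):
    """Dual function (8): minimize the sharp Lagrangian over Omega."""
    vals = [sharp_lagrangian(K, v, c) for K in Omega]
    j = int(np.argmin(vals))
    return vals[j], float(Omega[j])

# Weak duality: H(v, c) never exceeds the primal optimal value.
primal_opt = min(f(K) for K in Omega if abs(h(K)) < 1e-9)
```

Evaluating H at any (v, c) gives a lower bound on primal_opt, which is exactly the weak-duality property the MSA exploits.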


The MSA can be stated as follows (Gasimov 2002):

Initialization step Choose a pair (v₁, c₁) with v₁ ∈ Rᵐ and c₁ ≥ 0, let j = 1, and go to Step MSA1.

Step MSA1 Given (vⱼ, cⱼ), solve the subproblem

min_{K∈Ω} f(K) + cⱼ‖h(K)‖ − vⱼᵀh(K) =: H(vⱼ, cⱼ)    (10)

Let Kⱼ be a solution of equation (10). If h(Kⱼ) = 0, stop: (vⱼ, cⱼ) is an optimal solution to the dual problem, Kⱼ is a solution to equation (6), and f(Kⱼ) is the optimal value of problem (6). Otherwise, go to Step MSA2.

Step MSA2 Update (vⱼ, cⱼ) by

vⱼ₊₁ = vⱼ − zⱼ h(Kⱼ)
cⱼ₊₁ = cⱼ + (zⱼ + εⱼ)‖h(Kⱼ)‖    (11)

where zⱼ and εⱼ are positive scalar step sizes defined in the sequel. Replace j by j + 1 and go to Step MSA1.

For the dual function formed using the sharp Lagrangian, its value at any feasible point is not larger than the primal objective value; equality occurs when both the primal and dual problems achieve their optimal values. It has been proved in Gasimov (2002) that if h(K) = 0 for a K obtained from equation (10), then K is a solution of the primal problem. If h(K) ≠ 0, the value of H calculated from equation (10) is strictly less than the optimal value of equation (6); in this case the dual variables are updated using Step MSA2, which increases the value of the dual function. The value of H obtained from equation (10) with the updated (v, c) is always greater than the value obtained in the previous step. Note that this property is not guaranteed by the multiplier and penalty methods (Bazaraa et al. 1993, Bertsekas 1995).

Step size calculation. Consider the pair (vⱼ, cⱼ) and calculate

H(vⱼ, cⱼ) = min_{K∈Ω} {f(K) + cⱼ‖h(K)‖ − vⱼᵀh(K)}

and let h(Kⱼ) ≠ 0 for the corresponding Kⱼ, which means that Kⱼ is not optimal. Then the step size parameters zⱼ and εⱼ can be taken as 0 < zⱼ
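Steps MSA1 and MSA2 can be sketched end to end on a small nonconvex toy problem. The objective f, constraint h, set Ω, and the fixed step sizes z and ε below are all illustrative choices (not the step-size rule of Gasimov 2002), with an exhaustive inner minimization standing in for a proper solver:

```python
import numpy as np

# Toy nonconvex problem: min f(K) s.t. h(K) = 0, K in a discrete compact Omega.
Omega = np.linspace(-2.0, 2.0, 401)
f = lambda K: np.cos(3.0 * K) + 0.1 * K ** 2
h = lambda K: K ** 2 - 1.0

def msa(v=0.0, c=0.0, z=1.0, eps=0.5, iters=50, tol=1e-8):
    """MSA sketch: exhaustive Step MSA1, dual updates of equation (11) in MSA2."""
    Kj = float(Omega[0])
    for _ in range(iters):
        # Step MSA1: minimize the sharp Lagrangian over Omega.
        L = f(Omega) + c * np.abs(h(Omega)) - v * h(Omega)
        Kj = float(Omega[int(np.argmin(L))])
        if abs(h(Kj)) <= tol:      # h(Kj) = 0: dual and primal optima reached
            break
        # Step MSA2: update the dual variables.
        v = v - z * h(Kj)
        c = c + (z + eps) * abs(h(Kj))
    return Kj, v, c

K_opt, v_opt, c_opt = msa()
```

On this toy problem the iteration lands on a feasible minimizer after a couple of dual updates, illustrating how increasing c and shifting v steer the inner minimization onto the constraint set.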
δ and |s| ≤ δ regions. However, an appropriate component of the objective function dominates intrinsically in each region of the state space. The generalized algorithm is as follows:

Step G1 Assign initial values to the time t and the state X.

Step G2 Using the MSA, solve the following problem for K:

min_K ‖ w + w (wᵀ(Ẋ/‖Ẋ‖))/‖w‖ ‖ + λ |dV/dt + 0.9s²|    (19)

subject to
    dV/dt ≤ −10⁻¹⁰
    u ∈ {KX : |KX| ≤ α}
    K ∈ Ω

Step G3 Use the solution K of this problem to form u = KX and run system (1) from t to t + Δt.

Step G4 Update the time and the state and go to Step G2.

As in the main algorithm, this algorithm ends when a prespecified stopping criterion is satisfied. In the algorithm above, λ ∈ R₊ is a weighting parameter, explained subsequently. The objective function of the generalized algorithm is a sum of two terms, and depending on the location of the states and the value of λ, one of the terms becomes dominant. When the trajectory is distant from the sliding subspace, the term s² is large; the minimization process then selects a large negative dV/dt to zero the term dV/dt + 0.9s², so distant from the sliding subspace the minimization routine produces K values that maintain such dV/dt values. In a close neighbourhood of the sliding subspace, the term s² is small, so the second term of the objective function becomes less important and the first term dominates. A very important feature of the generalized algorithm is that the first constraint of the reaching and sliding sub-algorithms is replaced by a milder negativity constraint on the approach rate. This enlarges the feasible set of solutions. Among the feedback coefficients K ∈ Ω satisfying the negativity condition and |u| ≤ α, the algorithm always returns the best fitting one.

The Acrobot stabilization problem is now considered using the generalized algorithm. Letting λ = 250, α = 7, and Ω = {(k₁, k₂, k₃, k₄): |kᵢ| ≤ 45, i = 1, . . . , 4}, the generalized algorithm is run. The sliding surface matrix G and the initial state are kept the same as in the main algorithm. The state trajectory X and the functions s and u obtained by computer simulation are shown in figures 5 and 6.
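Step G2's objective and constraints can be sketched in the same spirit. The two-state dynamics, surface matrix, test state, and grid below are illustrative placeholders rather than the Acrobot model, and a brute-force search stands in for the MSA:

```python
import numpy as np

# Illustrative planar system Xdot = a(X, u); NOT the Acrobot dynamics.
def a(X, u):
    x1, x2 = X
    return np.array([x2, -np.sin(x1) + u])

G = np.array([1.0, 1.0])     # sliding subspace {X : GX = 0}
lam, alpha = 250.0, 7.0      # weighting parameter lambda and input bound

def proj_null_G(X):
    """w: projection of X onto the null space of G."""
    return X - (G @ X) / (G @ G) * G

def objective(X, K):
    """Generalized objective: sliding term plus lam * |dV/dt + 0.9 s^2|."""
    s = G @ X
    Xdot = a(X, K @ X)
    dVdt = s * (G @ Xdot)
    w = proj_null_G(X)
    nw = np.linalg.norm(w)
    if nw > 0:
        align = w @ (Xdot / np.linalg.norm(Xdot))
        first = np.linalg.norm(w + w * align / nw)
    else:                    # w = 0: first term vanishes, as in the sliding phase
        first = 0.0
    return first + lam * abs(dVdt + 0.9 * s ** 2)

def solve_generalized(X, grid):
    """Brute-force stand-in for the MSA over a discrete Omega."""
    s = G @ X
    best = None
    for K in grid:
        dVdt = s * (G @ a(X, K @ X))
        if abs(K @ X) > alpha or dVdt > -1e-10:   # constraints of the problem
            continue
        val = objective(X, K)
        if best is None or val < best[1]:
            best = (K, val)
    return best

grid = [np.array([k1, k2], dtype=float) for k1 in range(-4, 5) for k2 in range(-4, 5)]
X0 = np.array([0.5, -0.2])
K_best, val = solve_generalized(X0, grid)
```

Because the negativity constraint on dV/dt is mild, far more gains pass the feasibility test here than under the two-sided bounds of the reaching phase, which is exactly the enlargement of the feasible set noted above.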


Figure 5. The states of Acrobot generated by the generalized algorithm.

Figure 6. The input u and variable s generated by the generalized algorithm.

4. Conclusion

In this article, a nonlinear programming approach to computing the SMC input for dynamic systems has been proposed and applied to the Acrobot, a popular two-link planar robot with a single actuator at the elbow. The approach uses the MSA to solve the associated nonlinear programming problems, which are allowed to be nonconvex. The article first presented a two-phase algorithm, the main algorithm, which uses different objective functions inside and outside the neighbourhood of the sliding subspace. It then presented its generalization, the generalized algorithm, which intrinsically operates on a larger feasible set. To illustrate the validity of the nonlinear programming-based SMC, both algorithms were used for Acrobot stabilization and the corresponding graphical outputs were presented.

References

Azimov, A.Y. and Gasimov, R.N., On weak conjugacy, weak subdifferentials and duality with zero gap in nonconvex optimization. Int. J. Appl. Math., 1999, 1, 171–192.
Azimov, A.Y. and Gasimov, R.N., Stability and duality of nonconvex problems via augmented Lagrangian. Cybern. Syst. Anal., 2002, 38, 412–421.
Bartolini, G., Ferrara, A., Usai, E. and Utkin, V.I., On multi-input chattering-free second order sliding mode control. IEEE Trans. Autom. Cont., 2000, 45(9), 1711–1717.
Bazaraa, M.S., Sherali, H.D. and Shetty, C.M., Nonlinear Programming: Theory and Algorithms, 1993 (John Wiley and Sons: New York).
Bertsekas, D.P., Nonlinear Programming, 1995 (Athena Scientific: Belmont, MA).
Betts, J.T., Practical Methods for Optimal Control Using Nonlinear Programming, 2001 (Advances in Design and Control, SIAM).
Brown, S.C. and Passino, K.M., Intelligent control for an Acrobot. J. Intell. Robot. Syst., 1997, 18(3), 209–248.
DeCarlo, R.A., Zak, S.H. and Matthews, G.B., Variable structure control of nonlinear multivariable systems: a tutorial. Proc. IEEE, 1998, 76(3), 212–232.
Demyanov, V.F., Exact penalties in optimal control problems and calculus of variations, in 34th Workshop on Optimization and Control with Applications, 9–17 July, Erice, Italy (Kluwer), in press.
Ferris, M.C., MATLAB and GAMS: Interfacing, Optimization, and Visualization Software, 2004, ftp://ftp.cs.wisc.edu/math-prog/tech-reports/98-19.ps
Gasimov, R.N., Augmented Lagrangian duality and nondifferentiable optimization methods in nonconvex programming. J. Global Optim., 2002, 24(2), 187–203.
Gasimov, R.N. and Rubinov, A.M., On augmented Lagrangians for optimization problems with a single constraint. J. Global Optim., 2004, 28(2), 153–173.
Rockafellar, R.T. and Wets, R.J.B., Variational Analysis, 1998 (Springer: Berlin).
Rubinov, A.M. and Gasimov, R.N., Strictly increasing positively homogeneous functions with applications to exact penalization. Optimization, 2003, 52(1), 1–28.
Rubinov, A.M., Yang, X.Q., Bagirov, A.M. and Gasimov, R.N., Lagrange-type functions in constrained optimization. J. Math. Sci., 2003, 115, 2437–2505.
Slotine, J.J.E. and Sastry, S.S., Tracking control of non-linear systems using sliding surfaces, with application to robot manipulators. Int. J. Cont., 1983, 38(2), 465–492.
Spong, M.W., The swing-up control problem for the Acrobot. IEEE Cont. Sys. Mag., 1995, 15(1), 49–55.
Utkin, V.I., Variable structure systems with sliding modes. IEEE Trans. Autom. Cont., 1977, 22(2), 212–222.
Yazıcı, A., Karamancıoğlu, A. and Gasimov, R.N., Nonlinear programming based sliding mode control of an inverted pendulum system, in Proceedings of ELECO 2003 – International Conference on Electrical and Electronics Engineering, Bursa, Turkey, 2003, 293–297.
Yazıcı, A., Karamancıoğlu, A. and Gasimov, R.N., Stabilizing Acrobot by using nonlinear programming based sliding mode controller, in International Conference: 2004 – Dynamical Systems and Applications, Antalya, Turkey, 2004.
Young, K.D., Utkin, V.I. and Özgüner, Ü., A control engineer's guide to sliding mode control. IEEE Trans. Cont. Syst. Technol., 1999, 7(3), 328–342.
