On-line System Identification Using Chebyshev Neural Networks

S. Purwar, I. N. Kar, A. N. Jha
Department of Electrical Engineering, Indian Institute of Technology, Delhi, New Delhi-110016, India
E-mail: ink@ee.iitd.ernet.in, Tel: +91-11-26591073

Abstract-This paper proposes a computationally efficient artificial neural network (ANN) model for system identification of unknown dynamic nonlinear continuous and discrete time systems. A single layer functional link ANN is used for the model, where the need for a hidden layer is eliminated by expanding the input pattern with Chebyshev polynomials. These models are linear in their parameters. The recursive least squares method with forgetting factor is used as the on-line learning algorithm for updating the parameters. The behaviour of the identification method is tested on two single input single output (SISO) continuous time plants and two discrete time plants. Stability of the identification scheme is also addressed.

Index terms-Identification, neural network, Chebyshev polynomials.

1. INTRODUCTION

In the last few years, a growing interest in the study of nonlinear systems in control theory has been observed. This interest stems from the need to find new solutions to some long-standing demands of automatic control: to work with more and more complex systems, to satisfy stricter design criteria, and to do so with less and less a priori knowledge of the plant. A new set of methods has been developed recently which applies artificial neural networks to the tasks of identification and control of dynamic systems. These works are supported by two of the most important capabilities of neural networks: their ability to learn [1] and their good performance in the approximation of nonlinear functions. At present, most of the work on system identification using neural networks is based on multilayer feedforward neural networks with backpropagation learning or more efficient variations of this algorithm [4], [5]. These methods have been applied to real processes and have shown adequate behaviour. This paper presents the use of Chebyshev neural network (CNN) models [3], [6] to identify continuous as well as discrete time processes. Additionally, the identification method uses on-line training, unlike the off-line training adopted in [6], and the training scheme is based on a recursive least squares algorithm.

This paper is organized as follows. The problem statement and the Chebyshev neural network, whose weights are updated using recursive least squares, are presented in Sections 2 and 3. The simulation of continuous and discrete time systems is illustrated in Section 4. Finally, Section 5 summarizes the conclusions of the present work.

2. PROBLEM STATEMENT

Consider a class of SISO nonlinear continuous time systems described by

ẋ = f(x, u),  y = h(x)

where x ∈ ℝⁿ is the system state, u is the input and y is the output; f is locally Lipschitz and h is a continuous function. The method for system identification is depicted in Fig. 1. The plant is excited by a signal u, and the output y is measured. The plant is assumed to be stable with known parameterization but with unknown values of the parameters. The objective is to construct a suitable identification model which, when subjected to the same input u as the plant, produces an output ŷ which approximates y in the sense

‖ŷ − y‖ < ε

Fig. 1. Identification scheme (the plant and the ANN model are driven by the same input u; ŷ denotes the model output).

for some desired ε > 0 and a suitably defined norm. The choice of the identification model and the method of adjusting its parameters based on the identification error constitute the two principal parts of the identification problem [2]. In the present study we also consider SISO

TENCON 2003 / 116

discrete time plants described by the difference equation

y(k+1) = f[y(k), y(k−1), …, y(k−n+1)] + g[u(k), u(k−1), …, u(k−m+1)]

where u(k) and y(k) represent the input and the output of the plant at the kth instant of time.

3. CHEBYSHEV NEURAL NETWORK

3.1. Structure of CNN

The ANN structure used in this paper is a single layer Chebyshev Neural Network (CNN). The CNN is a functional link network (FLN) based on Chebyshev polynomials. The architecture of the CNN consists of two parts, namely a numerical transformation part and a learning part [3]. The numerical transformation deals with the input by an approximate transformable method: it is a functional expansion (FE) of the input pattern comprising a finite set of Chebyshev polynomials. As a result, the Chebyshev polynomial basis can be viewed as a new input vector. For example, consider a two dimensional input pattern X = [x₁ x₂]ᵀ. The enhanced pattern obtained by using Chebyshev functions is given by

φ = [1 T₁(x₁) T₂(x₁) … T₁(x₂) T₂(x₂) …]ᵀ    (4)

where Tⱼ(xᵢ) is a Chebyshev polynomial. The Chebyshev polynomials can be generated by the following recursive formula [3]:

Tₙ₊₁(x) = 2x Tₙ(x) − Tₙ₋₁(x),  T₀(x) = 1    (5)

The different choices of T₁(x) are x, 2x, 2x − 1 and 2x + 1. In this paper T₁(x) is chosen as x. The network is shown in Fig. 2.

Fig. 2. Structure of CNN (functional expansion FE of the input followed by a single layer of weights).

The output of the single layer neural network is given by

y = Wᵀφ    (6)

where the weights of the neural network are given by W = [w₁ w₂ …]ᵀ. A general nonlinear function f(x) ∈ Cⁿ(S), x ∈ S, can be approximated by the CNN as

f(x) = Wᵀφ + ε    (7)

where ε is the CNN functional reconstruction error. In the CNN, functional expansion of the input increases the dimension of the input pattern. Thus, the creation of nonlinear decision boundaries in the multidimensional input space, and hence the approximation of complex nonlinear systems, becomes easier [6], [7]. The output of the plant is approximated by the CNN as

ŷ = Ŵᵀφ    (8)

where Ŵ is the estimate of W.

Remark: In this paper, Chebyshev polynomials are used for functional expansion. Note that other basis functions such as Legendre, Bessel or trigonometric polynomials can also be used for functional expansion; trigonometric polynomials have been used for identification in [7].

3.2. Learning Algorithm

As the CNN is a single layered neural network, it is linear in the weights. We shall use the recursive least squares method with forgetting factor as the learning algorithm for on-line weight updating. The cost function to be minimized is the exponentially weighted sum of squared identification errors

J(n) = Σᵢ₌₁ⁿ λⁿ⁻ⁱ ‖e(i)‖²    (9)

where λ is the forgetting factor and e = y − ŷ is the identification error. The learning algorithm for the continuous time model is

Ẇ̂ = P φ e,  with Ṗ = λP − P φ φᵀ P if ‖P(t)‖ ≤ ρ, and Ṗ = 0 otherwise    (10)

In terms of P⁻¹ we have

d(P⁻¹)/dt = −λ P⁻¹ + φ φᵀ    (11)

The algorithm for the discrete time model is given by

ŷ(n) = Ŵᵀ(n−1) φ(n),  k(n) = P(n−1) φ(n) / (λ + φᵀ(n) P(n−1) φ(n))    (12)

Ŵ(n) = Ŵ(n−1) + k(n) [y(n) − ŷ(n)],  P(n) = λ⁻¹ [P(n−1) − k(n) φᵀ(n) P(n−1)]

where λ is the forgetting factor, φ is the basis function vector formed by the functional expansion of the input, P(0) = cI with c a positive constant, and ρ is a constant that serves as an upper bound for ‖P(t)‖. All matrices and vectors are of compatible dimension for the purpose of computation.
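The enhanced pattern of Eq. (4), the recursion of Eq. (5) and the discrete time update of Eq. (12) translate directly into code. The sketch below is illustrative rather than the authors' implementation; the expansion order, the initialisation P(0) = 10⁴·I and the function names are assumptions:

```python
import numpy as np

def chebyshev_expand(x, order=4):
    """Enhanced pattern of Eq. (4): a leading 1, then T_1..T_order of
    each input component, with T_1(x) = x and the recursion
    T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x) of Eq. (5)."""
    phi = [1.0]
    for xi in np.atleast_1d(x):
        t_prev, t_curr = 1.0, float(xi)        # T_0 = 1, T_1 = x
        for _ in range(order):
            phi.append(t_curr)
            t_prev, t_curr = t_curr, 2.0 * xi * t_curr - t_prev
    return np.array(phi)

def rls_update(W, P, phi, y, lam=0.99):
    """One recursive least squares step with forgetting factor lam,
    as in Eq. (12): gain, a priori error, weight and covariance update."""
    k = P @ phi / (lam + phi @ P @ phi)        # gain k(n)
    e = y - W @ phi                            # e(n) = y(n) - W^T(n-1) phi(n)
    W = W + k * e                              # W(n)
    P = (P - np.outer(k, phi @ P)) / lam       # P(n)
    return W, P, e
```

Because the model is linear in W, a single RLS pass identifies any target that is exactly representable in the expanded basis, and the forgetting factor λ < 1 lets the estimator track slowly varying parameters.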

Control Systems and Applications / 1117

3.3. Stability Analysis

By choosing an appropriate quadratic Lyapunov function it can be proved that the identification error is bounded and the neural network weight estimate Ŵ converges to W. For the continuous time case, choose the Lyapunov function

V = (1/2) W̃ᵀ P⁻¹ W̃    (13)

where W̃ = W − Ŵ is the weight estimation error. Differentiating V along the update laws (10) and (11) and substituting the identification error finally gives

V̇ = −eᵀe ≤ 0

so that e(t) → 0 as t → ∞. The stability of discrete time plants can be verified along the same lines by choosing a quadratic Lyapunov function of the corresponding discrete time form

V(n) = (1/2) W̃ᵀ(n) P⁻¹(n) W̃(n)    (16)

4. SIMULATIONS

Extensive simulation studies were carried out with several examples of nonlinear dynamic systems in the continuous and discrete time domains. Two examples in continuous time and two in discrete time are presented below.

Example 1: The Van der Vusse chemical stirred tank reactor. The Van der Vusse chemical stirred tank reactor (CSTR) can be described by a set of two differential equations in the states x₁ and x₂ with output

y = x₂

where q_c is the input to the system and represents the flow rate, x₁ is the concentration of the chemical input and x₂ is the concentration of the output chemical. The input to the reactor is chosen as u = 0.5 sin(2t) + 0.2 sin(4t). For the CNN the input is expanded to 9 terms using the Chebyshev polynomials given by Eq. (5). The neural network weights are initialized to zero and are updated using the algorithm given by Eq. (10). The results of the identification are shown in Fig. 3, where the solid line represents the neural network model output and the dashed line is the actual system output.

Fig. 3. Identification of CSTR (plant: dashed line; CNN: solid line; output plotted against time in seconds over 0-35 s).

Example 2: The inverted pendulum. The dynamics of the pendulum are the standard second order pendulum equations, involving a gravity term in sin x₁ and an input gain 1/(ml²), with g = 9.8, m = 2, l = 1 and v = 1.5. The input to the pendulum is the same as in the previous example. For the CNN the input is expanded to 9 terms using Chebyshev polynomials. Here also the neural network weights are initialized to zero. The results of the identification are shown in Fig. 4. It can be seen from Fig. 3 and Fig. 4 that the identification of both plants is satisfactory and the error is bounded.

Fig. 4. Identification of inverted pendulum.

Example 3: We consider a discrete time plant described by

y(k+1) = 0.3y(k) + 0.6…

with input

u(k) = sin(2πk/250) for 0 ≤ k …
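To make the discrete time scheme of Section 3.2 concrete, the sketch below runs the functional expansion and the RLS update end to end. Since the difference equation of Example 3 is truncated above, the plant here is a hypothetical stand-in (its coefficients 0.3, 0.2, 0.5 and the sin(u) nonlinearity are illustrative assumptions, not the paper's plant); the excitation u(k) = sin(2πk/250) follows the paper, and all function names are ours:

```python
import numpy as np

def chebyshev_expand(x, order=4):
    # Leading 1, then T_1..T_order per component; T_1(x) = x,
    # T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)
    phi = [1.0]
    for xi in np.atleast_1d(x):
        t_prev, t_curr = 1.0, float(xi)
        for _ in range(order):
            phi.append(t_curr)
            t_prev, t_curr = t_curr, 2.0 * xi * t_curr - t_prev
    return np.array(phi)

def identify(steps=2000, lam=0.995, order=4):
    """On-line identification of a hypothetical stand-in plant
    y(k+1) = 0.3 y(k) + 0.2 y(k-1) + 0.5 sin(u(k)), excited by
    u(k) = sin(2*pi*k/250). Returns the |e(k)| history."""
    dim = 1 + 3 * order                    # phi built from [y(k), y(k-1), u(k)]
    W = np.zeros(dim)                      # weights initialised to zero
    P = 1e4 * np.eye(dim)                  # P(0) = c I with c > 0
    y_curr, y_prev = 0.0, 0.0
    errs = []
    for k in range(steps):
        u = np.sin(2.0 * np.pi * k / 250.0)
        phi = chebyshev_expand([y_curr, y_prev, u], order)
        y_next = 0.3 * y_curr + 0.2 * y_prev + 0.5 * np.sin(u)
        e = y_next - W @ phi               # a priori identification error
        gain = P @ phi / (lam + phi @ P @ phi)
        W = W + gain * e
        P = (P - np.outer(gain, phi @ P)) / lam
        y_curr, y_prev = y_next, y_curr
        errs.append(abs(e))
    return np.array(errs)

errs = identify()
print(f"mean |e| over the last 200 steps: {np.mean(errs[-200:]):.2e}")
```

On this stand-in the a priori error typically falls by several orders of magnitude within a few hundred steps, mirroring the bounded-error behaviour reported for the paper's examples.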
