3.1. Structure of CNN

The ANN structure used in this paper is a single-layer Chebyshev Neural Network (CNN). The CNN is a functional-link network (FLN) based on Chebyshev polynomials. The architecture of the CNN consists of two parts, namely a numerical transformation part and a learning part [3]. The numerical transformation part maps the input to the hidden layer by an approximate transformable method. The transformation is the functional expansion (FE) of the input pattern by a finite set of Chebyshev polynomials. As a result, the Chebyshev polynomial basis can be viewed as a new input vector. For example, consider a two-dimensional input pattern $X = [x_1 \; x_2]^T$. The enhanced pattern obtained by using Chebyshev functions is given by
$$\phi = \left[\,1 \;\; T_1(x_1)\; T_2(x_1)\,\ldots \;\; T_1(x_2)\; T_2(x_2)\,\ldots\,\right]^T \qquad (4)$$

where $T_j(x_i)$ is a Chebyshev polynomial. The Chebyshev polynomials can be generated by the following recursive formula [3]

$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \qquad T_0(x) = 1 \qquad (5)$$

The different choices of $T_1(x)$ are $x$, $2x$, $2x-1$ and $2x+1$. In this paper $T_1(x)$ is chosen as $x$. The network is shown in Fig. 2. The output of the single-layer neural network is given by

$$y = W^T \phi \qquad (6)$$

Fig. 2. Structure of CNN (FE block followed by a single linear layer)

Remark: In this paper, Chebyshev polynomials are used for the functional expansion. Note that other basis functions such as Legendre, Bessel or trigonometric polynomials can also be used for functional expansion; trigonometric polynomials have been used for identification in [7].
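As an illustration of the expansion (4)-(5), here is a minimal Python sketch; the helper name cheb_expand and the use of NumPy are our assumptions, not the paper's:

import numpy as np

def cheb_expand(x, order):
    """Enhanced pattern phi of Eq.(4): [1, T1(x1)..T_order(x1), T1(x2)..T_order(x2), ...].
    Chebyshev polynomials generated by the recursion (5), with T1(x) = x."""
    phi = [1.0]
    for xi in np.atleast_1d(x):
        T_prev, T_curr = 1.0, xi                                  # T0(xi), T1(xi)
        for _ in range(order):
            phi.append(T_curr)
            T_prev, T_curr = T_curr, 2.0 * xi * T_curr - T_prev   # Eq.(5)
    return np.array(phi)

# Example: a 2-D input expanded with 4 polynomials per component gives
# 1 + 2*4 = 9 terms, matching the 9-term expansion used in the simulations.
phi = cheb_expand(np.array([0.3, -0.5]), order=4)
print(phi.shape)   # (9,)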
3.2. Learning Algorithm

As the CNN is a single-layer neural network, it is linear in the weights. We use the recursive least squares (RLS) method with a forgetting factor as the learning algorithm for on-line weight updating. The cost function to be minimized is given by

$$J(t) = \frac{1}{2}\int_0^t e^{-\lambda(t-\tau)}\left[\,y(\tau) - \hat{W}^T(t)\,\phi(\tau)\,\right]^2 d\tau \qquad (7)$$

where $\lambda > 0$ is the forgetting factor. The learning algorithm for the continuous-time model is

$$\dot{\hat{W}} = P\,\phi\,e \qquad (8)$$

$$e = y - \hat{W}^T\phi \qquad (9)$$
$$\dot{P} = \begin{cases} \lambda P - P\,\phi\,\phi^T P, & \text{if } \left\|P(t)\right\| \le \rho \\ 0, & \text{otherwise} \end{cases} \qquad (10)$$

In terms of $P^{-1}$ we have

$$\frac{d}{dt}\left(P^{-1}\right) = -\lambda P^{-1} + \phi\,\phi^T \qquad (11)$$
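In practice the continuous-time law (8)-(10) is integrated numerically. A minimal sketch, assuming a simple forward-Euler step and illustrative values for the forgetting rate, the bound rho, and the step size:

import numpy as np

def continuous_rls_step(W, P, phi, y, lam=0.98, rho=1e4, dt=1e-3):
    """One forward-Euler step of the continuous-time law (8)-(10).
    W: weight estimate, P: gain matrix, phi: expanded input, y: plant output."""
    e = y - W @ phi                          # estimation error, Eq.(9)
    W = W + dt * (P @ phi) * e               # Euler step of Eq.(8)
    if np.linalg.norm(P) <= rho:             # bounded covariance update, Eq.(10)
        P = P + dt * (lam * P - P @ np.outer(phi, phi) @ P)
    return W, P

The norm test mirrors the role of the upper bound rho introduced below: with forgetting, P can grow without it, so the update is frozen once the bound is reached.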
The algorithm for the discrete-time model is given by

$$\hat{y}(n) = \hat{W}^T(n-1)\,\phi(n)$$

$$k(n) = \frac{P(n-1)\,\phi(n)}{\lambda + \phi^T(n)\,P(n-1)\,\phi(n)} \qquad (12)$$

$$\hat{W}(n) = \hat{W}(n-1) + k(n)\left[\,y(n) - \hat{y}(n)\,\right]$$
$$P(n) = \frac{1}{\lambda}\left[\,P(n-1) - k(n)\,\phi^T(n)\,P(n-1)\,\right]$$

where $\lambda$ is the forgetting factor, $\phi$ is the basis-function vector formed by the functional expansion of the input, $P(0) = cI$ with $c$ a positive constant, and $\rho$ is a constant that serves as an upper bound for $\|P(t)\|$. All matrices and vectors are of compatible dimension for the purpose of computation.
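A minimal sketch of one discrete-time update, implementing the gain (12), the weight correction, and the $P(n)$ recursion above; the function name and the choice lam = 0.98 are illustrative:

import numpy as np

def discrete_rls_step(W, P, phi, y, lam=0.98):
    """One RLS-with-forgetting update: gain k(n) of Eq.(12),
    weight correction, and P(n) = (1/lam)[P(n-1) - k(n) phi^T P(n-1)]."""
    denom = lam + phi @ P @ phi
    k = P @ phi / denom                    # gain k(n), Eq.(12)
    e = y - W @ phi                        # a priori error y(n) - W^T(n-1) phi(n)
    W = W + k * e                          # weight update
    P = (P - np.outer(k, phi @ P)) / lam   # P(n) recursion
    return W, P

# Typical initialization, as stated above: W(0) = 0 and P(0) = c*I with c > 0.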
3.3. Stability Analysis

By choosing an appropriate quadratic Lyapunov function it can be proved that the identification error is bounded and the neural network weight estimate $\hat{W}$ converges to $W$. For the continuous-time case, choose the Lyapunov function

$$V = \frac{1}{2}\,\tilde{W}^T P^{-1}\,\tilde{W}, \qquad \tilde{W} = \hat{W} - W \qquad (13)$$

The derivative of the Lyapunov function is given by

$$\dot{V} = \tilde{W}^T P^{-1}\,\dot{\tilde{W}} + \frac{1}{2}\,\tilde{W}^T\,\frac{d}{dt}\!\left(P^{-1}\right)\tilde{W} \qquad (14)$$

which, using (8)-(11), finally gives

$$\dot{V} = -\frac{1}{2}\,e^T e - \lambda V \le 0 \qquad (15)$$

so that the error is bounded and $e \to 0$ as $t \to \infty$. The stability of discrete-time plants can be verified along the same lines by choosing the Lyapunov function

$$V(n) = \tilde{W}^T(n)\,P^{-1}(n)\,\tilde{W}(n) \qquad (16)$$
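For completeness, a sketch of the algebra leading from (14) to (15), under the sign conventions adopted above ($\tilde{W} = \hat{W} - W$, so that $e = y - \hat{W}^T\phi = -\tilde{W}^T\phi$ for an ideal plant $y = W^T\phi$):

\begin{align*}
\dot{V} &= \tilde{W}^T P^{-1}\left(P\,\phi\,e\right) + \tfrac{1}{2}\,\tilde{W}^T\!\left(-\lambda P^{-1} + \phi\,\phi^T\right)\tilde{W} \\
        &= (\tilde{W}^T\phi)\,e - \lambda V + \tfrac{1}{2}\,(\tilde{W}^T\phi)^2 \\
        &= -e^2 + \tfrac{1}{2}\,e^2 - \lambda V \;=\; -\tfrac{1}{2}\,e^2 - \lambda V \;\le\; 0 .
\end{align*}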
4. SIMULATIONS
Extensive simulation studies were carried out with several examples of nonlinear dynamic systems in the continuous- and discrete-time domains. Two examples in continuous time and two in the discrete-time domain are presented below.

Example 1: The Van der Vusse chemical stirred tank reactor
The Van der Vusse chemical stirred tank reactor (CSTR) can be described by a set of differential equations in the states $x_1$ and $x_2$, with output

$$y = x_2$$

where $q_c$ is the input to the system and represents the flow rate, $x_1$ is the concentration of the input chemical and $x_2$ is the concentration of the output chemical. The input to the reactor is chosen as $u = 0.5\sin(2t) + 0.2\sin(4t)$. For the CNN the input is expanded to 9 terms using the Chebyshev polynomials given by Eq.(5). The neural network weights are initialized to zero and are updated using the algorithm given by Eq.(10). The results of the identification are shown in Fig. 3, where the solid line represents the neural network model output and the dashed line the actual system output.

Fig. 3. Identification of CSTR (legend: plant, CNN; horizontal axis: time (sec))
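To show how the pieces fit together, here is a sketch of the identification loop for a continuous-time example. The scalar plant below is a hypothetical stand-in (the CSTR state equations are not reproduced in this section), and the loop reuses the cheb_expand and continuous_rls_step helpers sketched in Section 3:

import numpy as np

# Requires cheb_expand (Sec. 3.1 sketch) and continuous_rls_step (Sec. 3.2 sketch).
# The plant dynamics below are a hypothetical stand-in, NOT the Van der Vusse CSTR.
dt, T = 1e-3, 30.0
n_phi = 9                                    # 9-term expansion, as in the paper
W = np.zeros(n_phi)                          # weights initialized to zero
P = 100.0 * np.eye(n_phi)                    # P(0) = c*I, c > 0
x = 0.0
for k in range(int(T / dt)):
    t = k * dt
    u = 0.5 * np.sin(2 * t) + 0.2 * np.sin(4 * t)   # input from Example 1
    x = x + dt * (-x + np.tanh(u))           # hypothetical scalar plant
    phi = cheb_expand(np.array([x, u]), order=4)    # enhanced pattern, 9 terms
    y_hat = W @ phi                          # CNN model output, Eq.(6)
    W, P = continuous_rls_step(W, P, phi, x, dt=dt)
    if k % 5000 == 0:
        print(f"t={t:5.1f}  error={x - y_hat:+.4f}")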
Example 2: The inverted pendulum

The equations that describe the dynamics of the pendulum are given by

$$\dot{x}_1 = x_2$$

$$\dot{x}_2 = -\frac{g}{l}\,\sin x_1 - \frac{v}{m l^2}\,x_2 + \frac{1}{m l^2}\,u$$

where $g = 9.8$, $m = 2$, $l = 1$ and $v = 1.5$. The input to the pendulum is the same as in the previous example. For the CNN the input is expanded to 9 terms using Chebyshev polynomials. Here also the neural network weights are initialized to zero. The results of the identification are shown in Fig. 4, where the solid line represents the neural network model output and the dashed line is the actual system output. It can be seen from Fig. 3 and Fig. 4 that the identification of both plants is satisfactory and the error is bounded.

Fig. 4. Identification of inverted pendulum

Example 3: We consider a discrete-time plant described by

$$y(k+1) = 0.3\,y(k) + 0.6\,y(k-1) + f[u(k)]$$

with the input $u(k) = \sin(2\pi k/250)$ for $0 \le k$ …
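As a usage illustration for the discrete-time algorithm, the sketch below identifies a plant of the same general form as Example 3. The nonlinearity f and the regressor choice [y(k), y(k-1), u(k)] are assumptions made for illustration (Example 3's plant equation is abbreviated above), and the loop reuses cheb_expand and discrete_rls_step from the earlier sketches:

import numpy as np

# Requires cheb_expand (Sec. 3.1 sketch) and discrete_rls_step (Sec. 3.2 sketch).
f = lambda u: np.sin(np.pi * u)              # hypothetical input nonlinearity
order = 4
n_phi = 1 + 3 * order                        # 13-term expansion of the assumed regressor
W, P = np.zeros(n_phi), 100.0 * np.eye(n_phi)
y_prev, y_curr = 0.0, 0.0
for k in range(500):
    u = np.sin(2 * np.pi * k / 250)          # input from Example 3
    y_next = 0.3 * y_curr + 0.6 * y_prev + f(u)
    # Inputs are usually scaled into [-1, 1] before Chebyshev expansion;
    # this toy signal already stays in a small range.
    phi = cheb_expand(np.array([y_curr, y_prev, u]), order=order)
    y_hat = W @ phi                          # CNN prediction of y(k+1)
    W, P = discrete_rls_step(W, P, phi, y_next)
    if k % 100 == 0:
        print(f"k={k:3d}  error={y_next - y_hat:+.4f}")
    y_prev, y_curr = y_curr, y_next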