Evolving Systems DOI 10.1007/s12530-012-9053-6
ORIGINAL PAPER
Sliding mode incremental learning algorithm for interval type-2 Takagi–Sugeno–Kang fuzzy neural networks

Sevil Ahmed · Nikola Shakev · Andon Topalov · Kostadin Shiev · Okyay Kaynak

Received: 22 October 2011 / Accepted: 9 February 2012 / © Springer-Verlag 2012
Abstract Type-2 fuzzy logic systems have attracted growing interest in recent years. Their ability to model uncertainties and to perform better than type-1 fuzzy logic systems under noisy conditions widens their applicability. A new stable on-line learning algorithm for interval type-2 Takagi–Sugeno–Kang (TSK) fuzzy neural networks is proposed in this paper. Unlike other recently proposed on-line learning approaches for type-2 TSK fuzzy neural networks based on variable structure system theory, in which the consequent part of each fuzzy rule consists solely of a constant, the developed algorithm applies the complete structure of the Takagi–Sugeno type fuzzy if–then rule base (i.e. a first order instead of a zero order output function is implemented). In addition, it is able to adapt the existing relation between the lower and the upper membership functions of the type-2 fuzzy system, which allows non-uniform uncertainties to be handled. Simulation results from the identification of a nonlinear system with uncertainties and of a non-bounded-input bounded-output nonlinear plant with added output noise demonstrate the better performance of the proposed algorithm in comparison with the sliding mode on-line learning algorithms previously reported in the literature for both type-1 and type-2 fuzzy neural structures.

Keywords Type-2 fuzzy logic · Artificial neural networks · Variable structure systems · Incremental learning · Sliding mode

S. Ahmed · N. Shakev (corresponding author) · A. Topalov · K. Shiev
Control Systems Department, Technical University of Sofia, campus Plovdiv, 25 Tsanko Dustabanov Str., 4000 Plovdiv, Bulgaria
e-mail: [email protected]
S. Ahmed, e-mail: [email protected]
A. Topalov, e-mail: [email protected]
K. Shiev, e-mail: [email protected]

O. Kaynak
Department of Electrical and Electronic Engineering, Bogazici University, Bebek, 80815 Istanbul, Turkey
e-mail: [email protected]
1 Introduction

Uncertainties are an integral part of both on-line modeling and control problems. They appear as a result of various external disturbances and of measurement and sensor errors; changes in the parameters of the system can also cause uncertain behavior. All of these affect the performance of real-time applications. Type-1 fuzzy logic systems (T1FLSs) are not able to model such uncertainties directly because their membership functions (MFs) are totally crisp. Type-2 fuzzy logic systems (T2FLSs), on the other hand, can handle them owing to the fuzziness of their MFs. The membership functions of type-1 fuzzy sets (T1FSs) are two-dimensional, whereas those of type-2 fuzzy sets (T2FSs) are three-dimensional. The new third dimension of T2FSs provides an additional degree of freedom that makes it possible to model uncertainties directly (Mendel and John 2002). It has also been shown that T2FLSs have a better noise reduction property than T1FLSs (Khanesar et al. 2011). That is why T2FLSs have been preferred over T1FLSs and have been successfully used in many application areas where uncertainties occur, such as decision making (Garibaldi and Ozen 2007), signal processing (Karnik et al.
1999; Mendel 2000), traffic forecasting (Li et al. 2006), mobile robot control (Hagras 2004), pattern recognition (Mitchell 2005; Wu and Mendel 2007), and intelligent control (Castillo et al. 2005; Sepulveda et al. 2007).

Type-2 fuzzy neural networks (T2FNNs) combine the capability to perform fuzzy reasoning over imprecise information with the learning ability of neural networks. They are particularly suitable for tasks involving the modeling, identification and control of systems with unknown time-varying dynamics and of nonlinear dynamic systems that are inherently uncertain and imprecise. The additional flexibility introduced by type-2 fuzzy logic helps to avoid problems associated with uncertainties pertaining to the choice of the system's fuzzy rules and fuzzy membership functions. Like fuzzy systems and neural networks, fuzzy neural networks have been proven to be universal approximators (Lin and George Lee 1996).

Stability and convergence, on the other hand, are among the main problems that have to be considered when applying intelligent structures to on-line modeling, identification and control tasks. Existing investigations follow two main research directions. The first relies on the direct implementation of Lyapunov's stability theory to obtain robust training algorithms (Suykens et al. 1999). The second utilizes variable structure systems (VSS) theory in constructing the parameter adaptation mechanism (Yu et al. 2004; Topalov et al. 2008; Shakev et al. 2008). Intelligent systems with sliding mode incremental learning algorithms exhibit the robustness and invariance properties inherited from the variable structure control technique while still maintaining good approximation capability and flexibility.
An additional and important benefit is that VSS-based on-line parameter tuning of artificial neural networks and fuzzy neural networks ensures faster convergence than traditional learning techniques (Cascella et al. 2005). A sliding mode incremental learning algorithm for type-1 fuzzy neural networks was initially proposed in Topalov et al. (2008). It has subsequently been extended and applied to type-2 fuzzy neural networks in Kayacan et al. (2011). This paper presents a new sliding mode incremental learning algorithm for type-2 fuzzy neural networks which, unlike the algorithms proposed earlier in Topalov et al. (2008) and Kayacan et al. (2011), implements the complete structure of the Takagi–Sugeno type fuzzy if–then rule base (i.e. a first order instead of a zero order output function is used). The developed algorithm is also capable of adapting the relation between the two components (the lower and the upper membership functions) of T2FLSs. This allows the handling of
non-uniform uncertainties in T2FLSs. Two comparative simulations have been carried out to confirm the consistency of the proposed approach. They demonstrate that the newly introduced features of the proposed sliding mode incremental learning algorithm can significantly improve the performance and adaptation characteristics of T2FNNs.

The paper is organized as follows. Section 2 presents a short overview of T2FLSs. The proposed sliding mode incremental learning algorithm for TSK T2FNNs with Gaussian membership functions is introduced in Sect. 3. Simulation results are shown in Sect. 4. Finally, concluding remarks are given in Sect. 5.
2 Overview of type-2 fuzzy logic systems

Type-2 fuzzy sets were first introduced by Zadeh in the 1970s, based on the idea that membership functions, instead of being considered crisp mathematical functions, can themselves be defined as fuzzy sets. However, the implementation of T2FLSs in engineering applications faces some difficulties, such as the characterization of type-2 fuzzy sets, performing operations with T2FSs, inferencing with T2FSs, and obtaining the defuzzified value from the output of a type-2 inference engine (Mendel 2001). A type-2 fuzzy set has a primary membership grade, which lies in the interval [0, 1]. There is also a secondary membership grade, corresponding to each primary membership, which defines the grade of the primary grade. The set of all possible values of the primary memberships for a given value of the input signal forms the so-called footprint of uncertainty (FOU), which is bounded by the upper and lower membership functions. The inclusion of the FOU, which constitutes the new third dimension of T2FSs, makes it possible to better handle uncertainties (Mendel and John 2002). If the secondary membership grades are set to 1 over the whole interval formed by the FOU, the fuzzy sets are called interval type-2 fuzzy sets (IT2FSs). This assumption corresponds to the case of uniform uncertainties and is preferred by many researchers due to its simplicity. The architecture of interval type-2 fuzzy logic systems (IT2FLSs) is similar to that of fuzzy logic controllers (FLCs) built on T1FLSs, consisting of a fuzzifier, a rule base, a fuzzy inference engine, a type-reducer, and a defuzzifier. The implementation of IT2FLSs can provide more robustness in handling existing uncertainties and disturbances (Hagras 2004; Castillo et al. 2005). The fuzzy if–then rules used by FLCs built upon T2FLSs are quite similar to those of conventional FLCs with T1FLSs. An interval type-2 Takagi–Sugeno–Kang fuzzy if–then rule base has been implemented in this
investigation, where the antecedents are type-2 fuzzy sets and the consequents are crisp numbers (IT2 TSK FLS A2-C0). The rth rule has the following form:

R_r: \ \text{if } x_1 \text{ is } \tilde{A}_{1j} \ldots \text{and } x_i \text{ is } \tilde{A}_{ik} \ldots \text{and } x_I \text{ is } \tilde{A}_{Il} \ \text{then } f_r = \sum_{i=1}^{I} a_{ri} x_i + b_r \quad (1)

where x_i (i = 1…I) represents the sequence of input variables; \tilde{A}_{ik} is the kth type-2 membership function (k = 1…K) of the ith input variable, K being the number of membership functions of the ith input. This number can be different for each network input: in (1), for i = 1, k is represented by j = 1…J and, for i = I, by l = 1…L. f_r (r = 1…N, where N = J \cdot K \cdots L) is the TSK-type output function; a_{ri} and b_r are the parameters of the consequent part of the rth rule R_r. The firing strength of the rth rule is calculated using the lower \underline{\mu}(x) and upper \bar{\mu}(x) membership functions. Type-2 Gaussian fuzzy sets can be associated with the existing system uncertainties in two manners: through their mean or through their standard deviation (Fig. 1). Membership functions with uncertain standard deviation have been implemented in the antecedent part of the fuzzy if–then rules in this investigation. The upper and lower type-2 Gaussian membership functions with uncertain deviation (Fig. 1a) can be represented as follows:

\bar{\mu}_{ik}(x_i) = \exp\left( -\frac{1}{2} \frac{(x_i - c_{ik})^2}{\bar{\sigma}_{ik}^2} \right) \quad (2)

\underline{\mu}_{ik}(x_i) = \exp\left( -\frac{1}{2} \frac{(x_i - c_{ik})^2}{\underline{\sigma}_{ik}^2} \right) \quad (3)

where c_{ik} (i = 1…I; k = 1…K) is the mean value of the kth fuzzy set of the ith input signal, and \bar{\sigma}_{ik} and \underline{\sigma}_{ik} are the deviations of the upper and lower membership functions of the kth fuzzy set of the ith input signal.
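As a quick illustration, the pair of membership grades in (2)–(3) can be evaluated directly. The following Python sketch is our own (not from the paper); it computes the upper and lower grades of one interval type-2 Gaussian set for a single input value:

```python
import math

def it2_gaussian(x, c, sigma_lower, sigma_upper):
    """Interval type-2 Gaussian MF with uncertain standard deviation.

    Returns (mu_lower, mu_upper) per Eqs. (2)-(3): both grades share the
    mean c; the upper MF uses the larger deviation sigma_upper >= sigma_lower,
    so mu_lower <= mu_upper for every x (the FOU lies between them).
    """
    mu_lower = math.exp(-0.5 * (x - c) ** 2 / sigma_lower ** 2)
    mu_upper = math.exp(-0.5 * (x - c) ** 2 / sigma_upper ** 2)
    return mu_lower, mu_upper

lo, up = it2_gaussian(x=0.3, c=0.5, sigma_lower=0.1, sigma_upper=0.2)
```

At x = c both grades equal 1 and the footprint of uncertainty collapses to a point; away from the mean the two deviations open the interval [mu_lower, mu_upper].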
2.1 Type-2 fuzzy neural networks

Consider the structure of the type-2 fuzzy neural network shown in Fig. 2. The network implements the TSK fuzzy if–then rules introduced by (1), and each layer carries out a particular part of the T2FLS strategy. The first layer represents the inputs of the structure; the number of nodes in this layer depends on the dimension of the vector of input variables. The next layer performs the fuzzification operation over the inputs by using interval type-2 Gaussian fuzzy sets with uncertain standard deviation (Fig. 1a). Each membership function of the antecedent part of (1) is represented by an upper \bar{\mu}(x_i) and a lower \underline{\mu}(x_i) membership function, whose degrees of fulfillment for the ith input signal are determined in accordance with (2) and (3), respectively. The third layer consists of all N rules of the TSK rule base (1). The outputs of its neurons are the membership degrees of the type-2 fuzzy membership functions activated in the previous layer; they are passed through the fourth layer if the corresponding fuzzy rule is activated. The "prod" T-norm operator is applied in the fourth layer to calculate the firing strength of each rule R_r:

\underline{w}_r = \underline{\mu}_{\tilde{A}_1}(x_1) \cdot \underline{\mu}_{\tilde{A}_2}(x_2) \cdots \underline{\mu}_{\tilde{A}_I}(x_I), \qquad \bar{w}_r = \bar{\mu}_{\tilde{A}_1}(x_1) \cdot \bar{\mu}_{\tilde{A}_2}(x_2) \cdots \bar{\mu}_{\tilde{A}_I}(x_I) \quad (4)

The weights of the connections between the neurons in the fourth and fifth layers are the TSK linear functions

f_r = \sum_{i=1}^{I} a_{ri} x_i + b_r \quad (5)

Finally, the last two layers of the fuzzy neural network perform the type-reduction and defuzzification operations. The output y_N of the T2FNN is evaluated in
Fig. 1 Type-2 Gaussian fuzzy sets with a uncertain standard deviation and b uncertain mean
Fig. 2 Structure of the type-2 TSK fuzzy neural network
accordance with the type-2 fuzzy inference engine proposed in Biglarbegian et al. (2010), as follows:

y_N = q \sum_{r=1}^{N} f_r \underline{\tilde{w}}_r + (1 - q) \sum_{r=1}^{N} f_r \bar{\tilde{w}}_r \quad (6)

where N is the number of fuzzy rules, and \underline{\tilde{w}}_r and \bar{\tilde{w}}_r are the normalized values of \underline{w}_r and \bar{w}_r, calculated as

\underline{\tilde{w}}_r = \frac{\underline{w}_r}{\sum_{r=1}^{N} \underline{w}_r}, \qquad \bar{\tilde{w}}_r = \frac{\bar{w}_r}{\sum_{r=1}^{N} \bar{w}_r} \quad (7)

The parameter q is introduced to address the case of non-uniform uncertainties. It allows on-line adjustment of the influence of the lower and the upper membership functions of the IT2FLS on the output determination procedure. It is convenient to define the following vectors:

\underline{\tilde{W}} = [\underline{\tilde{w}}_1 \ \underline{\tilde{w}}_2 \ldots \underline{\tilde{w}}_N], \quad F = [f_1 \ f_2 \ldots f_N], \quad \bar{\tilde{W}} = [\bar{\tilde{w}}_1 \ \bar{\tilde{w}}_2 \ldots \bar{\tilde{w}}_N] \quad (8)
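The forward pass through Eqs. (4)–(7) — fuzzification, "prod" firing strengths, normalization and the q-weighted output of Eq. (6) — can be sketched as below. This is a minimal vectorized illustration, assuming for simplicity one type-2 Gaussian per rule and input; all variable names are ours, not the paper's:

```python
import numpy as np

def t2fnn_output(x, c, sig_lo, sig_up, a, b, q):
    """Forward pass of the IT2 TSK FNN, Eqs. (2)-(7).

    x: (I,) inputs; c, sig_lo, sig_up: (N, I) antecedent parameters
    (one Gaussian per rule and input, our simplifying assumption);
    a: (N, I) and b: (N,) first order consequent parameters; q in [0, 1].
    """
    d2 = (x - c) ** 2                                     # broadcast to (N, I)
    w_lo = np.exp(-0.5 * d2 / sig_lo ** 2).prod(axis=1)   # lower firing strengths, Eq. (4)
    w_up = np.exp(-0.5 * d2 / sig_up ** 2).prod(axis=1)   # upper firing strengths, Eq. (4)
    wt_lo = w_lo / w_lo.sum()                             # normalization, Eq. (7)
    wt_up = w_up / w_up.sum()
    f = a @ x + b                                         # TSK consequents, Eq. (5)
    return q * (f @ wt_lo) + (1 - q) * (f @ wt_up)        # q-weighted output, Eq. (6)

x = np.array([0.2, 0.7])
c = np.array([[0.0, 0.5], [1.0, 1.0]])
sig = np.full((2, 2), 0.2)
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.5, -0.5])
y_out = t2fnn_output(x, c, sig, 2 * sig, a, b, q=0.6)
```

Because both inner sums are convex combinations of the rule outputs f_r, the network output always lies between min(f_r) and max(f_r).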
The proposed sliding mode on-line learning algorithm, described in the next section, includes adaptation of: (1) the parameters of IT2 Gaussian functions, (2) the parameters of the linear functions in the consequent parts of the fuzzy rules and (3) the parameter q.
3 The sliding mode on-line learning algorithm

Let us define the learning error of the T2FNN as the difference between the network's current output y_N(t) and its desired value y(t):

e(t) = y_N(t) - y(t) \quad (9)

The scalar signal y(t) represents the time-varying desired output of the neural network. It is assumed that the rate of change of the desired output, \dot{y}(t), and of the input signals, \dot{x}_i(t), are bounded by predefined positive constants B_{\dot{y}} and B_{\dot{x}} (a limitation valid for all real signal sources due to physical restrictions):

|\dot{y}(t)| \le B_{\dot{y}} \quad \forall t \quad (10)

|\dot{x}_i(t)| \le B_{\dot{x}} \quad (i = 1 \ldots I), \ \forall t \quad (11)

It is also assumed that, due to existing physical constraints, the time-varying coefficients a_{ri} in the consequent parts of the fuzzy if–then rules of the neuro-fuzzy network are bounded, i.e.

|a_{ri}(t)| \le B_a \quad (r = 1 \ldots N; \ i = 1 \ldots I), \ \forall t \quad (12)

Based on the principles of sliding mode control theory (Utkin 1992), the zero value of the learning error coordinate e(t) can be defined as a time-varying sliding surface, i.e.

S(e(t)) = e(t) = y_N(t) - y(t) = 0 \quad (13)

which is the condition guaranteeing that, when the system is in sliding mode on the sliding surface S, the IT2 TSK FLS A2-C0 output y_N(t) coincides with the desired output signal y(t) for all t > t_h, where t_h is the hitting time of e(t) = 0.

Definition A sliding motion will take place on the sliding manifold S(e(t)) = e(t) = 0 after time t_h if the condition S(t)\dot{S}(t) = e(t)\dot{e}(t) < 0 is true for all t in some nontrivial semi-open subinterval of time of the form [t, t_h) \subset (-\infty, t_h).

The algorithm for the adaptation of the parameters c_{ik}, \underline{\sigma}_{ik}, \bar{\sigma}_{ik}, a_{ri}, b_r, q should be derived in such a way that the sliding mode condition of the above definition is enforced.

Theorem 1 If the learning algorithm for the parameters of the upper \bar{\mu}(x_i) and the lower \underline{\mu}(x_i) membership functions with a Gaussian distribution is chosen respectively as

\dot{c}_{ik} = \dot{x}_i \quad (14)

\dot{\underline{\sigma}}_{ik} = -\frac{\underline{\sigma}_{ik}^3}{(x_i - c_{ik})^2} \, \alpha \, \mathrm{sign}(e), \qquad \dot{\bar{\sigma}}_{ik} = -\frac{\bar{\sigma}_{ik}^3}{(x_i - c_{ik})^2} \, \alpha \, \mathrm{sign}(e) \quad (15)

the adaptation of the coefficients in the consequent parts of the fuzzy rules is chosen as

\dot{a}_{ri} = -\frac{1}{x_i} \, \alpha \, \mathrm{sign}(e) \quad (16)

\dot{b}_r = -\alpha \, \mathrm{sign}(e) \quad (17)

and the weight coefficient q is updated as

\dot{q} = -\frac{1}{F (\underline{\tilde{W}} - \bar{\tilde{W}})^T} \, \alpha \, \mathrm{sign}(e) \quad (18)

where \alpha is a sufficiently large positive number satisfying the inequality

\alpha > \frac{I B_a B_{\dot{x}} + B_{\dot{y}}}{I + 2} \quad (19)

then, given an arbitrary initial condition e(0), the learning error e(t) will converge to zero within a finite time t_h.

Proof From (2), (3), (4) and (7) it is possible to obtain the time derivatives

\dot{\underline{\tilde{w}}}_r = -\underline{\tilde{w}}_r \underline{K}_r + \underline{\tilde{w}}_r \sum_{r=1}^{N} \underline{\tilde{w}}_r \underline{K}_r, \qquad \dot{\bar{\tilde{w}}}_r = -\bar{\tilde{w}}_r \bar{K}_r + \bar{\tilde{w}}_r \sum_{r=1}^{N} \bar{\tilde{w}}_r \bar{K}_r \quad (20)

where the following substitutions are used:

\underline{A}_{ik} = \frac{x_i - c_{ik}}{\underline{\sigma}_{ik}}, \qquad \bar{A}_{ik} = \frac{x_i - c_{ik}}{\bar{\sigma}_{ik}}

\underline{K}_r = \sum_{i=1}^{I} \underline{A}_{ik} \dot{\underline{A}}_{ik} \quad (21)

\bar{K}_r = \sum_{i=1}^{I} \bar{A}_{ik} \dot{\bar{A}}_{ik} \quad (22)

It is also evident that, by applying the proposed adaptation laws (14), (15), the values of \underline{K}_r and \bar{K}_r can be calculated as follows:

\underline{K}_r = \bar{K}_r = \sum_{i=1}^{I} \underline{A}_{ik} \dot{\underline{A}}_{ik} = \sum_{i=1}^{I} \bar{A}_{ik} \dot{\bar{A}}_{ik} = I \alpha \, \mathrm{sign}(e) \quad (23)

Consider the following Lyapunov function candidate:

V = \frac{1}{2} e^2 \quad (24)

In order to satisfy the stability condition, the time derivative \dot{V} has to be negative:

\dot{V} = e \dot{e} = e (\dot{y}_N - \dot{y}) \quad (25)

Differentiating (6) it is possible to obtain

\dot{y}_N = \dot{q} \sum_{r=1}^{N} f_r \underline{\tilde{w}}_r + q \sum_{r=1}^{N} \left( \dot{f}_r \underline{\tilde{w}}_r + f_r \dot{\underline{\tilde{w}}}_r \right) - \dot{q} \sum_{r=1}^{N} f_r \bar{\tilde{w}}_r + (1 - q) \sum_{r=1}^{N} \left( \dot{f}_r \bar{\tilde{w}}_r + f_r \dot{\bar{\tilde{w}}}_r \right) \quad (26)

Substituting (20), (22), (23) consecutively in (26) results in

\dot{y}_N = \dot{q} \sum_{r=1}^{N} f_r \underline{\tilde{w}}_r + q \sum_{r=1}^{N} \left( \dot{f}_r \underline{\tilde{w}}_r - I \alpha \, \mathrm{sign}(e) f_r \left( \underline{\tilde{w}}_r - \underline{\tilde{w}}_r \sum_{r=1}^{N} \underline{\tilde{w}}_r \right) \right) - \dot{q} \sum_{r=1}^{N} f_r \bar{\tilde{w}}_r + (1 - q) \sum_{r=1}^{N} \left( \dot{f}_r \bar{\tilde{w}}_r - I \alpha \, \mathrm{sign}(e) f_r \left( \bar{\tilde{w}}_r - \bar{\tilde{w}}_r \sum_{r=1}^{N} \bar{\tilde{w}}_r \right) \right) \quad (27)

Note that the sums of the normalized activations are constant:

\sum_{r=1}^{N} \underline{\tilde{w}}_r = 1, \qquad \sum_{r=1}^{N} \bar{\tilde{w}}_r = 1 \quad (28)

Applying (16), (17), (18) and (28) we obtain

\dot{y}_N = -\frac{1}{F (\underline{\tilde{W}} - \bar{\tilde{W}})^T} \, \alpha \, \mathrm{sign}(e) \sum_{r=1}^{N} f_r (\underline{\tilde{w}}_r - \bar{\tilde{w}}_r) + \sum_{r=1}^{N} \dot{f}_r \left( q \underline{\tilde{w}}_r + (1 - q) \bar{\tilde{w}}_r \right)
= -\alpha \, \mathrm{sign}(e) + \sum_{r=1}^{N} \left[ \sum_{i=1}^{I} (\dot{a}_{ri} x_i + a_{ri} \dot{x}_i) + \dot{b}_r \right] \left( q \underline{\tilde{w}}_r + (1 - q) \bar{\tilde{w}}_r \right) \quad (29)

Thus, for the considered Lyapunov function candidate we obtain:

\dot{V} = e \dot{e} = e (\dot{y}_N - \dot{y})
= e \left[ -\alpha \, \mathrm{sign}(e) + \sum_{r=1}^{N} \left[ \sum_{i=1}^{I} \left( -\alpha \, \mathrm{sign}(e) + a_{ri} \dot{x}_i \right) - \alpha \, \mathrm{sign}(e) \right] \left( q \underline{\tilde{w}}_r + (1 - q) \bar{\tilde{w}}_r \right) - \dot{y} \right]
= e \left[ -\alpha \, \mathrm{sign}(e) + \sum_{r=1}^{N} \left[ -\alpha (I + 1) \, \mathrm{sign}(e) + \sum_{i=1}^{I} a_{ri} \dot{x}_i \right] \left( q \underline{\tilde{w}}_r + (1 - q) \bar{\tilde{w}}_r \right) - \dot{y} \right]
= -|e| \left[ \alpha + \alpha (I + 1) \left( q \sum_{r=1}^{N} \underline{\tilde{w}}_r + (1 - q) \sum_{r=1}^{N} \bar{\tilde{w}}_r \right) \right] + e \left[ q \sum_{r=1}^{N} \underline{\tilde{w}}_r \sum_{i=1}^{I} a_{ri} \dot{x}_i + (1 - q) \sum_{r=1}^{N} \bar{\tilde{w}}_r \sum_{i=1}^{I} a_{ri} \dot{x}_i - \dot{y} \right]
= -|e| \, \alpha (I + 2) + e \left[ q \sum_{r=1}^{N} \underline{\tilde{w}}_r \sum_{i=1}^{I} a_{ri} \dot{x}_i + (1 - q) \sum_{r=1}^{N} \bar{\tilde{w}}_r \sum_{i=1}^{I} a_{ri} \dot{x}_i - \dot{y} \right]
< -|e| \, \alpha (I + 2) + |e| \left( I B_a B_{\dot{x}} + B_{\dot{y}} \right) = -|e| \left[ \alpha (I + 2) - I B_a B_{\dot{x}} - B_{\dot{y}} \right] < 0 \quad (30)

The last inequality is true if (19) is satisfied. Inequality (30) means that the controlled trajectories of the learning error e(t) converge to zero in a stable manner. □

A standard approach to avoid the chattering phenomenon (a well-known problem associated with SMC) is to smooth the discontinuity introduced by the signum function in (15)–(18) by using the following substitution:

\mathrm{sign}(e(t)) \approx \frac{e(t)}{|e(t)| + \delta} \quad (31)

where \delta is a small positive scalar.
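A minimal sketch of one adaptation step under Theorem 1 is given below, applying (14)–(18) with the smoothed signum (31). The function signature, the small eps guards against division by zero, and leaving the Euler integration to the caller are our assumptions, not part of the paper; the caller is expected to supply the current rule outputs f_r and normalized firing strengths from the forward pass:

```python
import numpy as np

def sm_update(x, dx, e, c, sig_lo, sig_up, f, wt_lo, wt_up, alpha, delta=1e-3):
    """One sliding mode adaptation step per Eqs. (14)-(18), using the
    smoothed signum of Eq. (31). Returns parameter time derivatives, to be
    integrated by the caller (e.g. one Euler step p += dt * dp).
    The eps guards and all names are ours, not the paper's.
    """
    eps = 1e-9
    s = e / (abs(e) + delta)                              # smoothed sign(e), Eq. (31)
    dc = np.broadcast_to(dx, c.shape).copy()              # c_dot = x_dot, Eq. (14)
    d2 = (x - c) ** 2 + eps
    dsig_lo = -sig_lo ** 3 / d2 * alpha * s               # Eq. (15), lower deviations
    dsig_up = -sig_up ** 3 / d2 * alpha * s               # Eq. (15), upper deviations
    da = np.broadcast_to(-alpha * s / (x + eps), c.shape).copy()  # Eq. (16)
    db = np.full(len(f), -alpha * s)                      # Eq. (17)
    denom = f @ (wt_lo - wt_up)                           # F (W_lo - W_up)^T
    dq = -alpha * s / denom if abs(denom) > eps else 0.0  # Eq. (18)
    return dc, dsig_lo, dsig_up, da, db, dq

x, dx = np.array([0.4, 0.9]), np.array([0.1, -0.2])
c = np.array([[0.3, 0.8], [0.5, 1.0]])
sig_lo, sig_up = np.full((2, 2), 0.2), np.full((2, 2), 0.4)
f = np.array([1.0, -1.0])
wt_lo, wt_up = np.array([0.6, 0.4]), np.array([0.3, 0.7])
dc, dsl, dsu, da, db, dq = sm_update(x, dx, e=0.5, c=c, sig_lo=sig_lo,
                                     sig_up=sig_up, f=f, wt_lo=wt_lo,
                                     wt_up=wt_up, alpha=0.1)
```

Note that for a positive error all deviation and consequent derivatives point in the negative direction, as the signum-driven laws prescribe.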
4 Simulation results

The effectiveness of the proposed new sliding mode incremental learning algorithm for T2 TSK FNNs has been evaluated by comparison with two earlier proposed learning algorithms for fuzzy neural networks: (1) the SMC-based on-line learning algorithm for T1FNNs proposed in Topalov et al. (2008), and (2) the extended sliding mode on-line learning algorithm for T2FNNs presented in Shiev et al. (2011). The difference between the latter and the newly proposed learning algorithm lies in the order of the implemented TSK-type output function. The algorithm presented in Shiev et al. (2011) uses a zero order output function (T2FNN (0)): the implemented TSK linear function in Eq. (5) includes only the parameter b_r. The new learning algorithm proposed here applies a first order output function (T2FNN (1)), which means that the full function in Eq. (5) is used. Simulation results from the on-line identification of (1) a second order nonlinear system with uncertainties and (2) a non-bounded-input bounded-output (non-BIBO) nonlinear plant with added output noise are presented. Both experiments have been performed in accordance with the schematic representation in Fig. 3. All three tested TSK-type fuzzy neural networks have been implemented with five inputs: two delayed signals from the plant output, y(t − T₀) and y(t − 2T₀), and three delayed input signals of the plant, u(t − T₀), u(t − 2T₀) and u(t − 3T₀), with a discretization period T₀ = 1 ms. The incoming network signals and the learning error signal e(t) have been normalized to the range [0, 1]. Each input signal of the fuzzy neural network has been fuzzified by implementing a fuzzy set consisting of three
Fig. 3 Schematic representation of the nonlinear system identification carried out on-line with a type-1/type-2 fuzzy neural network identifier
Gaussian membership functions. All experiments have been carried out with identical initial values of all common parameters of the compared fuzzy neural networks and learning algorithms.

4.1 Identification of a SISO second order nonlinear system

The proposed identification procedure is applied to a SISO second order nonlinear system described by the following expression (Shiev et al. 2011):

\dot{v}_1 = v_2, \qquad \dot{v}_2 = f(v, t) + g(v, t) u + \eta(v, t), \qquad y = v_1 \quad (32)

where f(v, t) = f_0(v, t) + \Delta f(v, t) and g(v, t) = g_0(v, t) + \Delta g(v, t) are smooth nonlinear functions, and \eta(v, t) is a bounded uncertainty. Both nonlinear functions consist of nominal (known) parts, f_0(v, t) = v_1^2 - 1.5 v_2 and g_0(v, t) = 2, and fault terms, \Delta f(v, t) = 0.3 \sin(t) v_1^2 + \cos(t) v_2 and \Delta g(v, t) = \cos(v_1), arising at a certain moment of the system's operation. The external disturbance term is \eta(v, t) = 0.2 \sin(v_2) + v_1 \sin(2t), with a known upper bound \eta_0(v, t) = 0.2 + |v_1|. The input signal is u(t) = e^{-t/1000} \sin(20t) - 5.

The experiment starts with the nominal case, i.e. \Delta f(v, t) = 0 and \Delta g(v, t) = 0. At a certain time (t = 10 s) faulty operation is introduced. The system output is presented in Fig. 4 and the errors during the on-line identification procedure are shown in Fig. 5. During the first 10 s (when \Delta f(v, t) = 0 and \Delta g(v, t) = 0) the error produced by the T1FNN is the smallest, which confirms that T1FNNs perform very well in the nominal case. However, when the system is affected by uncertainties (after t = 10 s), T2FNNs are more accurate in modeling. Figure 5 also shows that the newly proposed learning algorithm for T2FNNs leads to better results than the extended sliding mode on-line learning algorithm presented earlier by Shiev et al. (2011). The root mean squared error (RMSE) values presented in Fig. 6 confirm the better performance of the currently presented algorithm. Figure 7 shows the evolution of the parameters in the consequent part of the TSK if–then rules during the experiment performed with the proposed new learning algorithm for T2 TSK FNNs.

4.2 Identification of a non-BIBO nonlinear plant

In this simulation study the same fuzzy neural structures and the compared on-line learning algorithms are applied to the on-line identification of a non-bounded-input bounded-output (non-BIBO) nonlinear plant.
Fig. 4 System output under uncertainties and faults activated at t = 10 s
Fig. 5 The obtained system identification errors for the three compared FNN identifiers implementing sliding mode on-line learning algorithms
Fig. 6 RMSE values during the on-line identification procedure of the SISO second order nonlinear system

Fig. 7 Evolution of the parameters in the consequent part of the TSK if–then rules during the experiment performed with the proposed new learning algorithm for T2 TSK FNNs

The dynamical model of the plant is described as follows (Ku and Lee 1995):

y(k+1) = 0.2 y^2(k) + 0.2 y(k-1) + 0.4 \sin[0.5 (y(k) + y(k-1))] \cos[0.5 (y(k) + y(k-1))] + 1.2 u(k) \quad (33)

The plant is unstable in the sense that, for a given uniformly bounded input signal u(k), the plant output may diverge; when a step input u(k) = 0.83, ∀k ≥ 0, is applied to the plant, its output diverges. Thus, in order to guarantee the stability of the system, the input signal is considered to have values under 0.83, i.e. u(k) < 0.83 (Ku and Lee 1995). The input signal in this study has the following expression:

u(k) = 0.5 e^{-k T_0 / 10} \sin(5 k T_0) \quad (34)

An output noise, generated as uniformly distributed random numbers in the interval [−0.1, 0.1], is added to the output of the nonlinear system described by Eq. (33). The plant output is shown in Fig. 8 and the on-line identification errors are presented in Fig. 9. It can be seen that the T1FNN produces the largest errors during the identification of the non-BIBO system. The T2FNN that uses a zero order output function (T2FNN (0)) shows slightly better performance. The smallest error values are achieved by applying the novel sliding mode learning algorithm for T2FNNs implementing the first order output function (T2FNN (1)) proposed in this paper.
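For reference, the non-BIBO plant of Eq. (33) driven by the input of Eq. (34) can be rolled out directly. The sketch below is our own; it adds the uniform output noise described in the text, and the decaying sign of the exponent in (34) is our reading of the (partly garbled) source:

```python
import numpy as np

def simulate_plant(steps, T0=1e-3, noise=0.1, seed=0):
    """Roll out the non-BIBO plant, Eq. (33), under the input of Eq. (34).

    Returns (y, y_noisy): the clean output and the output with uniform
    noise in [-noise, noise] added, as described in the text. Because
    |u(k)| <= 0.5 < 0.83, the trajectory stays bounded.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(steps + 1)
    y_noisy = np.zeros(steps + 1)
    for k in range(1, steps):
        u = 0.5 * np.exp(-k * T0 / 10) * np.sin(5 * k * T0)      # Eq. (34)
        s = 0.5 * (y[k] + y[k - 1])
        y[k + 1] = (0.2 * y[k] ** 2 + 0.2 * y[k - 1]
                    + 0.4 * np.sin(s) * np.cos(s) + 1.2 * u)      # Eq. (33)
        y_noisy[k + 1] = y[k + 1] + rng.uniform(-noise, noise)
    return y, y_noisy

y_clean, y_meas = simulate_plant(2000)
```

The pair (y_meas as the noisy target, the delayed input/output samples as regressors) reproduces the identification setup of Fig. 3 for this plant.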
Fig. 8 System output of the non-BIBO plant with output noise added at t = 10 s

Fig. 9 System identification errors for the non-BIBO plant with output noise added at t = 10 s

Fig. 10 RMSE values during the identification procedure of the non-BIBO plant

The root mean squared error (RMSE) values shown in Fig. 10 confirm the better performance of the currently presented algorithm.

5 Conclusions

A novel sliding mode incremental learning algorithm for type-2 Takagi–Sugeno–Kang fuzzy neural networks has been presented. Adaptive elements are situated in the second, the fourth and the fifth layers of the proposed fuzzy neural structure and comprise the parameters of the TSK if–then fuzzy rules. In the antecedent part these are the centers and the standard deviations of each activated type-2 Gaussian membership function with uncertain deviation; in the fourth layer they are the coefficients of the TSK linear output functions. The last adaptive element in the fuzzy neural structure is the weighting parameter q. The performed simulations have shown better performance of the presented sliding mode on-line learning algorithm for T2FNN (1) when compared to two other earlier proposed learning algorithms: (1) for T1FNNs and (2) for T2FNNs using a zero order output function (T2FNN (0)).

Acknowledgments The work of N. Shakev, A. V. Topalov and K. Shiev was supported in part by the TU Sofia Research Fund Project 112pd009-19 and in part by the Ministry of Education, Youth and Science of Bulgaria Research Fund Project BY-TH-108/2005. The work of O. Kaynak was supported by the TUBITAK Project 107E248.

References
Biglarbegian M, Melek W, Mendel JM (2010) On the stability of interval type-2 TSK fuzzy logic control systems. IEEE Trans Syst Man Cybern B Cybern 40(3):798–818
Cascella G, Cupertino F, Topalov A, Kaynak O, Giordano V (2005) Adaptive control of electric drives using sliding-mode learning neural networks. IEEE Int Symp Ind Electron 1:125–130
Castillo O, Melin P, Montiel O, Rodriguez-Diaz A, Sepulveda R (2005) Handling uncertainty in controllers using type-2 fuzzy logic. J Intell Syst 14(3):237–262
Garibaldi JM, Ozen T (2007) Uncertain fuzzy reasoning: a case study in modelling expert decision making. IEEE Trans Fuzzy Syst 15(1):16–30
Hagras HA (2004) A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots. IEEE Trans Fuzzy Syst 12(4):524–539
Karnik NN, Mendel JM, Liang Q (1999) Type-2 fuzzy logic systems. IEEE Trans Fuzzy Syst 7(6):643–658
Kayacan E, Cigdem O, Kaynak O (2011) A novel training method based on variable structure systems approach for interval type-2 fuzzy neural networks. In: IEEE symposium series on computational intelligence, SSCI, Paris, pp 142–149
Khanesar MA, Kayacan E, Teshnehlab M, Kaynak O (2011) Analysis of the noise reduction property of type-2 fuzzy logic systems using a novel type-2 membership function. IEEE Trans Syst Man Cybern 41(5):1395–1405
Ku CC, Lee K (1995) Diagonal recurrent neural networks for dynamic systems control. IEEE Trans Neural Netw 6(1):144–156
Li L, Lin W-H, Liu H (2006) Type-2 fuzzy logic approach for short-term traffic forecasting. Proc Inst Elect Eng Intell Transp Syst 153(1):33–40
Lin CT, George Lee CS (1996) Neural fuzzy systems. Englewood Cliffs, NJ
Mendel JM (2000) Uncertainty, fuzzy logic, and signal processing. Signal Process 80(6):913–933
Mendel JM (2001) Uncertain rule-based fuzzy logic systems. Prentice Hall, Los Angeles
Mendel JM, John R (2002) Type-2 fuzzy sets made simple. IEEE Trans Fuzzy Syst 10(2):117–127
Mitchell HB (2005) Pattern recognition using type-II fuzzy sets. Inf Sci 170:409–418
Sepulveda R, Castillo O, Melin P, Rodriguez-Diaz A, Montiel O (2007) Experimental study of intelligent controllers under uncertainty using type-1 and type-2 fuzzy logic. Inf Sci 177(10):2023–2048
Shakev N, Topalov AV, Kaynak O (2008) A neuro-fuzzy adaptive sliding mode controller: application to second-order chaotic system. In: IS 2008, IEEE international conference on intelligent systems, Varna, pp 9.14–9.19
Shiev K, Shakev N, Topalov AV, Ahmed S, Kaynak O (2011) An extended sliding mode learning algorithm for type-2 fuzzy
neural networks. In: Bouchachia A (ed) Adaptive and intelligent systems, LNAI, vol 6943. Springer, Heidelberg, pp 52–63
Suykens JAK, Vandewalle J, De Moor B (1999) Lur'e systems with multilayer perceptron and recurrent neural networks: absolute stability and dissipativity. IEEE Trans Autom Control 44:770–774
Topalov AV, Kaynak O, Shakev N, Hong SK (2008) Sliding mode algorithm for on-line learning in fuzzy rule-based neural networks. In: Proceedings of the 17th IFAC world congress, Seoul, pp 12793–12798
Utkin VI (1992) Sliding modes in control and optimization. Springer, Berlin
Wu H, Mendel JM (2007) Classification of battlefield ground vehicles using acoustic features and fuzzy logic rule-based classifiers. IEEE Trans Fuzzy Syst 15(1):56–72
Yu S, Yu X, Man Z (2004) A fuzzy neural network approximator with fast terminal sliding mode and its applications. Fuzzy Sets Syst 148:469–486