ICIC Express Letters, Volume 3, Number 3(A), September 2009, pp. 465-470
© ICIC International 2009, ISSN 1881-803X

A DELAY-DEPENDENT APPROACH TO DESIGN STATE ESTIMATOR FOR DISCRETE STOCHASTIC RECURRENT NEURAL NETWORK WITH INTERVAL TIME-VARYING DELAYS

Chin-Wen Liao, Chien-Yu Lu, Kai-Yuan Zheng and Chien-Chung Ting
Department of Industrial Education and Technology
National Changhua University of Education
No.1, Jin-De Road, Changhua 500, Taiwan
[email protected]

Received March 2009; accepted May 2009

Abstract. This paper deals with the problem of state estimation for discrete stochastic recurrent neural networks with interval time-varying delays. The activation functions are assumed to be globally Lipschitz continuous. Attention is focused on the design of a state estimator which ensures the global stability of the estimation error dynamics. A delay-dependent condition, depending on both the upper and lower bounds of the delays, is given in terms of a linear matrix inequality (LMI) to solve the neuron state estimation problem. When this LMI is feasible, the expression of a desired state estimator is also presented. In addition, slack matrices are introduced to reduce the conservatism of the condition. A numerical example is provided to demonstrate the applicability of the proposed approach.

Keywords: Recurrent neural network, Stochastic systems, Linear matrix inequality, State estimators, Interval time-delays

1. Introduction. In the past few decades, recurrent neural networks (RNNs) have been intensively studied and successfully applied in many fields such as pattern recognition, image processing, optimization and associative memory. Many of these applications depend heavily on the dynamic behavior of the network. In practice, time delays are frequently encountered and are a common source of instability and oscillation. Delays arise in neural networks for many reasons, such as the finite signal propagation time in biological networks and the finite switching speed of amplifiers in electronic neural networks. Time delays can degrade the performance of RNNs and may even destabilize them, so the stability analysis of delayed RNNs has attracted considerable attention, and many results on this issue have been reported in the literature [1-11].

State estimation is a subject of great practical and theoretical importance which has received much attention in recent years [12-19]. In many practical applications the neuron states are not fully available from the network outputs, and it is then necessary to estimate them from the available output measurements so that the dynamics of the estimation error is globally asymptotically or exponentially stable. Recently, the state estimation problem for recurrent neural networks with time delays was studied in [12-14], where an effective linear matrix inequality (LMI) [20] approach was developed. The state estimation problem for recurrent neural networks with mixed time delays has been dealt with in [15-17], where sufficient conditions for the existence of an estimator were presented in terms of LMIs. A class of Markovian recurrent neural networks with mixed time delays was considered in [18], where the networks have a finite number of modes that may jump from one to another according to a Markov chain.
In [19], the design of a state estimator for a class of neutral-type neural networks with interval time-varying delays was addressed, and a sufficient condition for the existence of a state estimator was given in terms of an LMI. It should be pointed out, however, that the aforementioned results concern continuous-time delayed RNNs. Recently, the problem of state estimation for discrete-time recurrent neural networks with interval time-varying delay was considered in [21], where a sufficient condition depending on the lower and upper bounds of the delay was proposed and an LMI approach was developed. So far, no state estimation results for discrete stochastic recurrent neural networks with interval time-varying delays are available in the literature, and the problem remains essentially open. The objective of this paper is to address it.

This paper deals with the state estimation problem for discrete stochastic recurrent neural networks with interval time-varying delays, where the delays admit both lower and upper bounds. A delay-dependent condition for the existence of estimators is proposed and formulated as an LMI, into which slack matrices are introduced to reduce the conservatism of the criterion. A general full-order estimator is sought which guarantees that the resulting error system is globally asymptotically stable. Desired estimators can be obtained from the solution of certain LMIs, which can be solved efficiently by standard numerical algorithms [20]. Finally, an illustrative example is provided to demonstrate the effectiveness of the proposed method.

2. Problem Statement and Preliminaries. Consider the following discrete stochastic recurrent neural network with interval time-varying delays:

x(k + 1) = Ax(k) + W1 f(x(k)) + W2 f(x(k − τ(k))) + [Cx(k) + Ch x(k − h(k))]ω(k),   (1)

where x(k) = (x1(k), x2(k), ..., xn(k))^T is the state vector, A = diag(a1, a2, ..., an) is a real constant diagonal matrix with entries |ai| < 1, i = 1, 2, ..., n, W1 ∈ R^{n×n} and W2 ∈ R^{n×n} are the interconnection matrices representing the weighting coefficients of the neurons, and C and Ch are known real constant matrices. f(x(k)) = [f1(x1(k)), ..., fn(xn(k))]^T ∈ R^n is the neuron activation function with f(0) = 0, and τ(k) and h(k) are the time-varying delays of the system, satisfying

τm ≤ τ(k) ≤ τM,   k ∈ N,   (2)
hm ≤ h(k) ≤ hM,   k ∈ N,   (3)

where 0 < τm < τM and 0 < hm < hM are known integers. ω(k) is a scalar Wiener process (Brownian motion) defined on a complete probability space (Ω, F, P), which is assumed to satisfy

E{ω(k)} = 0,   E{ω^2(k)} = δ,   k = 0, 1, 2, ...,   (4)

where δ > 0 is a known scalar.

To establish the main results, the activation functions in (1) are assumed to be bounded and to satisfy the following assumptions.

Assumption 2.1. The neuron activation functions fi(·) in (1) satisfy the Lipschitz-type sector condition

0 ≤ (fi(x) − fi(y)) / (x − y) ≤ αi,   (i = 1, 2, ..., n),   (5)

for all x, y ∈ R with x ≠ y, where the αi are known constants.

Assumption 2.2. The neuron activation functions in (1) are bounded.

To estimate the neuron states, the recurrent neural network measurements are assumed to satisfy

y(k) = Dx(k) + [Ex(k) + Eh x(k − h(k))]ω(k),   (6)

where y(k) ∈ R^m is the measurement output and D, E and Eh are constant matrices with appropriate dimensions.
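To make the model concrete, the following minimal Python sketch (not part of the original paper) simulates a small instance of (1) together with the measurement (6). All numerical values (dimensions, weight matrices, delay bounds and δ) are illustrative assumptions, and tanh is used as an activation function satisfying Assumption 2.1 with αi = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) dimensions and system data; not taken from the paper.
n, m = 2, 1
A  = np.diag([0.4, -0.3])                 # diagonal with |a_i| < 1
W1 = 0.2 * rng.standard_normal((n, n))    # interconnection matrices
W2 = 0.1 * rng.standard_normal((n, n))
C, Ch = 0.1 * np.eye(n), 0.05 * np.eye(n)
D  = np.array([[1.0, 0.0]])               # y(k) in R^m with m = 1
E, Eh = 0.1 * np.ones((m, n)), 0.05 * np.ones((m, n))
tau_m, tau_M = 1, 3                       # delay bounds in (2)
h_m, h_M = 1, 2                           # delay bounds in (3)
delta = 0.1                               # E{w(k)^2} = delta, as in (4)
f = np.tanh                               # satisfies Assumption 2.1 with alpha_i = 1

T = 50
x = np.zeros((T + 1, n))
y = np.zeros((T, m))
x[0] = rng.standard_normal(n)

for k in range(T):
    tau_k = rng.integers(tau_m, tau_M + 1)       # interval time-varying delays
    h_k = rng.integers(h_m, h_M + 1)
    x_tau = x[max(k - tau_k, 0)]                 # initial state reused as history
    x_h = x[max(k - h_k, 0)]
    w_k = rng.normal(0.0, np.sqrt(delta))        # E{w} = 0, E{w^2} = delta
    # State update (1) and measurement (6)
    x[k + 1] = A @ x[k] + W1 @ f(x[k]) + W2 @ f(x_tau) + (C @ x[k] + Ch @ x_h) * w_k
    y[k] = D @ x[k] + (E @ x[k] + Eh @ x_h) * w_k
```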


For system (1) with measurement (6), we consider the following full-order state estimator:

x̂(k + 1) = Ax̂(k) + W1 f(x̂(k)) + W2 f(x̂(k − τ(k))) + [Cx̂(k) + Ch x̂(k − h(k))]ω(k) + L[y(k) − Dx̂(k) − (Ex̂(k) + Eh x̂(k − h(k)))ω(k)],   (7)

where x̂(k) is the estimate of the neuron state and L ∈ R^{n×m} is the estimator gain matrix to be determined. The aim is to find a suitable gain L such that x̂(k) approaches x(k) asymptotically. Let

e(k) = x(k) − x̂(k)   (8)

be the state estimation error. Then, from (1), (6) and (7), the error e(k) satisfies the following equation:

e(k + 1) = (A − LD)e(k) + W1 f(e(k)) + W2 f(e(k − τ(k))) + [(C − LE)e(k) + (Ch − LEh)e(k − h(k))]ω(k),   (9)

where f(e(k)) = f(x(k)) − f(x̂(k)), f(e(k − τ(k))) = f(x(k − τ(k))) − f(x̂(k − τ(k))), and e(k − h(k)) = x(k − h(k)) − x̂(k − h(k)).

It follows readily from Assumption 2.1 that the solution of (1) exists for all k ≥ 0 and is unique.
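As an illustration of the estimator structure (7), a minimal sketch of one update step is given below. It is not the paper's design procedure: the gain L is a placeholder (Theorem 3.1 below yields a suitable gain via the change of variables Y = PL), and the noise sample ω(k) is passed in explicitly, mirroring the way (7) is written for the purpose of deriving the error dynamics (9). All data are hypothetical.

```python
import numpy as np

def estimator_step(x_hat, x_hat_tau, x_hat_h, y_k, w_k,
                   A, W1, W2, C, Ch, D, E, Eh, L, f=np.tanh):
    """One step of the full-order estimator (7).

    x_hat, x_hat_tau, x_hat_h: current, tau(k)-delayed and h(k)-delayed estimates.
    y_k, w_k: measurement y(k) and noise sample omega(k).
    """
    innovation = y_k - D @ x_hat - (E @ x_hat + Eh @ x_hat_h) * w_k
    return (A @ x_hat + W1 @ f(x_hat) + W2 @ f(x_hat_tau)
            + (C @ x_hat + Ch @ x_hat_h) * w_k + L @ innovation)

# Hypothetical usage with placeholder data (not from the paper):
n, m = 2, 1
A = np.diag([0.4, -0.3])
W1, W2 = 0.2 * np.eye(n), 0.1 * np.eye(n)
C, Ch = 0.1 * np.eye(n), 0.05 * np.eye(n)
D = np.array([[1.0, 0.0]])
E, Eh = 0.1 * np.ones((m, n)), 0.05 * np.ones((m, n))
L = 0.1 * np.ones((n, m))            # placeholder gain; a suitable L comes from Theorem 3.1
x_hat = np.zeros(n)
x_hat_next = estimator_step(x_hat, x_hat, x_hat,
                            y_k=np.array([0.3]), w_k=0.1,
                            A=A, W1=W1, W2=W2, C=C, Ch=Ch,
                            D=D, E=E, Eh=Eh, L=L)
# The estimation error e(k) = x(k) - x_hat(k) then evolves according to (9).
```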

3. Main Results. In this section, a delay-dependent condition guaranteeing the global asymptotic stability of the error dynamics (9) is established in terms of an LMI, from which the estimator gain matrix L is obtained. For the mathematical formulation, define Z1 = ρ1 P, Z2 = ρ2 P, Z3 = ρ3 P, Z4 = ρ4 P and Y = PL, where ρ1, ρ2, ρ3, ρ4 are given scalars. The following theorem solves the state estimation problem formulated above in terms of an LMI involving these scalar parameters.

Theorem 3.1. Under Assumptions 2.1 and 2.2, given scalars 0 ≤ τm < τM and 0 ≤ hm < hM, the error-state dynamics (9), associated with system (1), the network output (6) and the interval time-varying delays τ(k) and h(k) satisfying (2) and (3), is globally asymptotically stable if there exist matrices P > 0, Qi > 0 (i = 1, 2, ..., 6), Zi > 0 (i = 1, 2, 3, 4), diagonal matrices R1 > 0, R2 > 0, and matrices Si, Hi, Ti, Γi, Φi, Θi (i = 1, 2, 3, 4) of appropriate dimensions such that the following LMI holds:

[ Ω     Ψ  ]
[ Ψ^T   −Λ ]  < 0,

where

Ψ = [τM S   τMm H   τMm T   hM Φ   hMm Γ   hMm Θ   Ā   C̄   τM Ā1   τM C̄1   τMm Ā2   τMm C̄2   hM Ā3   hM C̄3   hMm Ā4   hMm C̄4],

Λ = diag(τM Z1, τMm(Z1 + Z2), τMm Z2, hM Z3, hMm(Z3 + Z4), hMm Z4, P, δ^{-1}P, τM Z1, δ^{-1}τM Z1, τMm Z2, δ^{-1}τMm Z2, hM Z3, δ^{-1}hM Z3, hMm Z4, δ^{-1}hMm Z4).

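As a hedged illustration of the change of variables Y = PL and of the numerical LMI machinery referenced in [20], the sketch below solves a much simpler discrete-time Lyapunov-type LMI for the noise-free, delay-free part of the error dynamics (9) with CVXPY. It is not the LMI of Theorem 3.1 (whose blocks Ω, Ā, C̄, Āi, C̄i are defined in the remainder of the paper), and all numerical data are hypothetical.

```python
import numpy as np
import cvxpy as cp

# Hypothetical system data (not the paper's numerical example).
n, m = 2, 1
A = np.diag([0.4, -0.3])
D = np.array([[1.0, 0.0]])

# Find P > 0 and Y (= P L) such that, by a Schur complement argument,
# (A - L D)^T P (A - L D) - P < 0, i.e. the noise-free, delay-free
# part of the error dynamics (9) is asymptotically stable.
P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, m))
M = P @ A - Y @ D                         # equals P (A - L D)

eps = 1e-6
lmi = cp.bmat([[P, M.T],
               [M, P]])
constraints = [P >> eps * np.eye(n), lmi >> eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

if prob.status == cp.OPTIMAL:
    L = np.linalg.solve(P.value, Y.value)   # recover the gain: L = P^{-1} Y
    print("Estimator gain L =\n", L)
else:
    print("LMI infeasible for this data:", prob.status)
```

The same pattern (declare the decision variables, assemble the block matrix with cp.bmat, impose negative definiteness, and recover L = P^{-1}Y since Y = PL) would apply to the full LMI of Theorem 3.1 once all of its blocks are written out.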