This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS


Event-Triggered Generalized Dissipativity Filtering for Neural Networks With Time-Varying Delays Jia Wang, Xian-Ming Zhang, and Qing-Long Han, Senior Member, IEEE

Abstract— This paper is concerned with event-triggered generalized dissipativity filtering for a neural network (NN) with a time-varying delay. The signal transmission from the NN to its filter is completed through a communication channel. It is assumed that the network measurement of the NN is sampled periodically. An event-triggered communication scheme is introduced to design a suitable filter such that precious communication resources can be saved significantly while certain filtering performance is ensured. On the one hand, the event-triggered communication scheme selects for transmission only those sampled signals that violate a certain threshold condition, which directly saves communication resources. On the other hand, the filtering error system is modeled as a time-delay system that depends closely on the parameters of the event-triggered scheme. Based on this model, a suitable filter is designed such that certain filtering performance is ensured, provided that a set of linear matrix inequalities is satisfied. Furthermore, since a generalized dissipativity performance index is introduced, several kinds of event-triggered filtering issues, such as H∞ filtering, passive filtering, mixed H∞ and passive filtering, (Q, S, R)-dissipative filtering, and L2–L∞ filtering, are solved in a unified framework. Finally, two examples are given to illustrate the effectiveness of the proposed method.

Index Terms— Event-triggered communication scheme, filtering, generalized dissipativity, neural networks (NNs), transmission delays.

I. INTRODUCTION

During the past three decades, neural networks (NNs) have been extensively investigated and have found a wide range of applications in different fields, such as signal and image processing, pattern recognition, communication, and industrial automation [1]; many related results are available in [2]–[4]. It is well known that a number of applications of NNs depend heavily on the neuron states in order to achieve some desired performance in practice. However, it is usual that only part of the neuron states is available in the network outputs, especially for relatively large-scale NNs. Thus, neuron state estimation has gained increasing attention, and much effort has been devoted to this issue [5]–[7].

Manuscript received August 14, 2014; revised December 31, 2014 and March 7, 2015; accepted March 8, 2015. This work was supported by the Australian Research Council Discovery Project under Grant DP1096780. J. Wang is with the College of Mathematics and Computer Science, Fuzhou University, Fuzhou 350108, China (e-mail: [email protected]). X.-M. Zhang and Q.-L. Han are with the Griffith School of Engineering, Griffith University, Gold Coast, QLD 4111, Australia (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2015.2411734

Due to the finite switching speed of amplifiers in electronic NNs and the finite signal propagation time in biological networks, time delays often exist in the dynamics of NNs and can cause oscillation and instability. For NNs with time delays, the state estimation issue was addressed in [8] and [9]. Since then, the topic of state estimation for NNs with time delays has attracted the interest of many researchers, and a number of results have been reported in [10]–[13]. However, in the derivation of those results, there is an implicit assumption that the input signals of the filter (also called an estimator) are exactly the same as the network measurements of the related NNs at all time instants. In some cases, this assumption is difficult to satisfy. For example, in the digitalized world, the measurements of an NN are usually sampled in a digital form. In this situation, the input signals of the filter are not equal to the measurements of the NN at certain time instants. Moreover, when the NN and the filter are located at different places, a communication channel should be used to transmit signals between them, which definitely violates the above assumption. Therefore, it is of much significance in both theory and practice to study neuron state estimation for delayed NNs with a communication channel. Nevertheless, few results on this issue have been reported in the open literature, which is the first motivation of this paper.

When a communication channel is used to transmit signals from an NN to its filter, the effective use of precious communication resources should be considered. If signals fluctuate little compared with their previously transmitted ones, it is a waste of communication resources to transmit them. In order to save precious communication resources, an event-triggered scheme has been proposed in the past few years for the implementation of real-time systems [14]–[16]. Under this scheme, a task is executed only if a predefined event-triggered condition (ETC) is violated, exactly as a human behaves. Recently, a number of researchers have focused on event-triggered control [17]–[24]. To mention a few, Wang and Lemmon [17] introduced a self-triggered scheme to analyze the finite-gain L2 stability of feedback control systems. Heemels et al. [18] introduced a periodic event-triggered control scheme for linear systems. Yue et al. [19] proposed a discrete event-triggered scheme to study H∞ control for linear systems. Peng and Han [20] employed the discrete event-triggered scheme to solve the problem of the codesign of event-triggered parameters and L2 controllers for sampled-data control systems. It should be mentioned that how to apply the event-triggered scheme to estimate the neuron states of an NN with time-varying

2162-237X © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


delays is still challenging, which is the second motivation of this paper.

In this paper, we investigate event-triggered filtering for an NN with a time-varying delay. The signal transmission from the NN to its filter is completed through a communication channel. Suppose that the network measurements are sampled with a constant period h > 0. Whether or not a sampled signal needs to be transmitted is determined by a predefined ETC. Once the ETC is violated, the current sampled signal is immediately released to the communication channel to be transmitted for filter design. As a result, precious communication resources can be saved significantly. Under the event-triggered communication scheme, the filtering error system is modeled as a sampled-data error-dependent time-delay system, and suitable filters can be designed such that certain filtering performance is ensured, provided that a set of linear matrix inequalities is satisfied for a given threshold parameter. Moreover, since a generalized dissipativity performance index is used, several kinds of event-triggered filtering issues, such as H∞ filtering, passive filtering, mixed H∞ and passive filtering, (Q, S, R)-dissipative filtering, and L2–L∞ filtering, are solved in a unified framework. Finally, two numerical examples are given to show the effectiveness of the proposed method.

Throughout this paper, the notation is standard. diag{···} and col{···} represent a diagonal matrix and a column vector, respectively. The symbol He{A} means A + Aᵀ. The space of square-integrable vector functions over [0, ∞) is denoted by L2[0, ∞). The symbol ∗ stands for the symmetric term in a symmetric matrix.

II. PROBLEM DESCRIPTION

Consider an NN whose equilibrium point has been shifted to the origin, described by

  ẋ(t) = −Ax(t) + B f(x(t)) + W₁ f(x(t − d(t))) + Ew(t)
  y(t) = C₁x(t) + W₂ f(x(t))                                          (1)
  z(t) = C₂x(t)
  x(θ) = φ(θ),  θ ∈ [−d_M, 0]

where x(t) = col{x₁(t), x₂(t), . . . , xₙ(t)} ∈ R^n is the state vector associated with the n neurons, y(t) ∈ R^m is the network measurement, z(t) ∈ R^p is the signal to be estimated, w(t) ∈ R^q is the noise input belonging to L2[0, ∞), f(x(t)) = col{f₁(x₁(t)), f₂(x₂(t)), . . . , fₙ(xₙ(t))} ∈ R^n denotes the neuron activation function, and d(t) is a time-varying delay satisfying

  0 ≤ d(t) ≤ d_M,  ḋ(t) ≤ μ                                           (2)

where d_M and μ are constants; A = diag{a₁, a₂, . . . , aₙ} > 0 is a constant real matrix; B, W₁, and W₂ are the interconnection matrices representing the weighting coefficients of the neurons; E, C₁, and C₂ are known real constant matrices with compatible dimensions; and φ is the initial condition.

In this paper, it is assumed that the neuron activation functions fᵢ(·) (i = 1, 2, . . . , n) satisfy fᵢ(0) = 0 and

  |fᵢ(s₁) − fᵢ(s₂)| ≤ ρᵢ|s₁ − s₂|  ∀s₁, s₂ ∈ R                        (3)

where ρᵢ ≥ 0 (i = 1, 2, . . . , n) are known real scalars. For convenience, we denote ρ := diag{ρ₁, ρ₂, . . . , ρₙ}.
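Condition (3) is a global Lipschitz-type bound on each activation function. A quick numerical check (a sketch only; the choice of tanh as activation and the grid of test points are illustrative, not taken from the paper) confirms that tanh satisfies (3) with ρᵢ = 1, while a smaller ρᵢ fails:

```python
import math

def sector_bound_holds(f, rho, samples):
    """Check |f(s1) - f(s2)| <= rho * |s1 - s2| on a grid of sample pairs."""
    return all(abs(f(s1) - f(s2)) <= rho * abs(s1 - s2) + 1e-12
               for s1 in samples for s2 in samples)

samples = [i / 10.0 for i in range(-50, 51)]
# tanh(0) = 0 and tanh is globally 1-Lipschitz, so rho_i = 1 works
assert sector_bound_holds(math.tanh, 1.0, samples)
# rho = 0.5 is too small: the slope of tanh near the origin is close to 1
assert not sector_bound_holds(math.tanh, 0.5, samples)
```

Any activation with f(0) = 0 and a known global Lipschitz constant fits assumption (3) in the same way.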

Fig. 1. Diagram of event-triggered generalized filtering for a delayed NN.

The objective of this paper is to design suitable filters to estimate the neuron states based on the network measurement y(t). In some existing filtering methods [25], it is implicitly assumed that there is an ideal channel that transmits the measurement signal y(t) of the NN to the filter instantaneously, which is unrealistic, especially when the filter is located remotely from the NN under consideration. Instead, in this paper, we apply an event-triggered communication scheme to the generalized filtering of the NN (1), as shown in Fig. 1. Since the input signal of the filter ỹ(t) is no longer equal to the measurement y(t), the filtering method in [25] is not applicable in this scenario.

A. Event-Triggered Communication Scheme

In Fig. 1, the measurement signal y(t) is first sampled at the time instants sh (s = 1, 2, . . .), with h > 0 being a constant. The sampled signal, together with its time stamp, is encapsulated into a data packet. Whether or not a data packet is transmitted to the zero-order hold (ZOH) depends on a predefined ETC; that is, only those data packets that violate the predefined ETC are transmitted through the communication channel. This is significant from the perspective of saving precious communication resources. In fact, when data packets carry signals that fluctuate little compared with the previously transmitted ones, it is definitely a waste of communication resources to transmit them through a communication channel. The data packet processor (DPP) is introduced to select the necessary data packets to be transmitted. The DPP has a register and a logical comparator. The register stores the latest transmitted data packet (t_k, y(t_k h)). The logical comparator checks whether the current data packet satisfies the predefined ETC, which is given by

  ψᵀ((t_k + j)h) Ω ψ((t_k + j)h) ≤ λ yᵀ(t_k h) Ω y(t_k h)              (4)
  ψ((t_k + j)h) := y((t_k + j)h) − y(t_k h),  j = 1, 2, . . .           (5)

where λ > 0 is a threshold parameter and Ω > 0 is a weighting matrix. The mechanism of the DPP can be simply described as follows: if the current data packet (t_k + j₀, y((t_k + j₀)h)), where j₀ is a certain integer greater than or equal to one, satisfies the ETC (4), the DPP discards this data packet right away; otherwise: 1) set t_{k+1} = t_k + j₀; 2) the register updates its store with the data packet (t_{k+1}, y(t_{k+1} h)); and 3) the DPP immediately releases this data packet to the communication channel to be transmitted. From the mechanism
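The DPP logic of (4) and (5) can be sketched as follows (a minimal illustration only, assuming a scalar measurement, Ω = 1, and made-up sample values; the actual scheme compares vector-valued packets with a designed weighting matrix Ω):

```python
def transmitted_samples(y, lam, omega=1.0):
    """Indices of sampled instants released by the DPP under the ETC (4)-(5).

    A sample y[s] is released when the error psi = y[s] - y[t_k] (t_k being the
    latest transmitted instant) violates psi*omega*psi <= lam*y[t_k]*omega*y[t_k].
    The first sample is always transmitted to initialize the register.
    """
    released = [0]
    for s in range(1, len(y)):
        y_last = y[released[-1]]
        psi = y[s] - y_last
        if psi * omega * psi > lam * y_last * omega * y_last:
            released.append(s)
    return released

y = [1.00, 1.01, 0.99, 1.30, 1.31, 1.00, 1.02]   # illustrative sampled values
idx = transmitted_samples(y, lam=0.04)
# idx == [0, 3, 5]: only 3 of the 7 samples are released
```

Raising λ discards more packets, trading estimation accuracy for communication bandwidth; with λ = 0 every sample whose value changes is transmitted.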


of the DPP, the release time sequence that indicates when the necessary data packets are released to the communication channel can be written as t₁, t₂, . . . , t_k, . . . with t_k < t_{k+1}.

B. Communication Channel and the ZOH

The communication channel is assumed to provide a good quality of service: the data packets released by the DPP are successfully transmitted to the ZOH without disorder, while transmission delays are unavoidable. Denote by τ_{t_k} the transmission delay of the released data packet (t_k, y(t_k h)) transmitted from the DPP to the ZOH. It is assumed that τ_{t_k} satisfies

  τ_m ≤ τ_{t_k} ≤ τ_M,  k = 1, 2, . . .                                 (6)

where τ_m and τ_M are two constants. Under the above assumptions, the time sequence indicating when the released data packets arrive at the ZOH is t₁ + τ_{t₁}, t₂ + τ_{t₂}, . . . , t_k + τ_{t_k}, . . . , with t_k + τ_{t_k} < t_{k+1} + τ_{t_{k+1}}. Suppose that the ZOH is event-driven. Once a data packet arrives at the ZOH, the ZOH immediately updates its store and actuates the filter. By the property of the ZOH, one has

  ỹ(t) = y(t_k h),  t ∈ [t_k + τ_{t_k}, t_{k+1} + τ_{t_{k+1}})           (7)

which serves as the input signal of the filter to be designed.

C. Filter

We are interested in designing a full-order filter of the form

  ẋ_f(t) = A_f x_f(t) + B_f ỹ(t),  x_f(0) = 0                           (8)
  z_f(t) = C_f x_f(t)

where A_f, B_f, and C_f are filter gain matrices to be determined. The input signal ỹ(t) is given in (7), but that form is not convenient for the performance analysis of the filtering error system. In the following, we present a useful expression of ỹ(t) along the line of [22]. Denote Π_k := [t_k + τ_{t_k}, t_{k+1} + τ_{t_{k+1}}) and l_k := t_{k+1} − t_k − 1. Then Π_k = ∪_{j=0}^{l_k} Π_k^j, where

  Π_k^j := [s_{kj}h + τ_{s_{kj}}, s_{k,j+1}h + τ_{s_{k,j+1}}),  s_{kj} := t_k + j.

The artificial time delays τ_{s_{kj}} (j = 1, 2, . . . , l_k) can be chosen as scalars such that τ_m ≤ τ_{s_{kj}} ≤ τ_M and s_{kj}h + τ_{s_{kj}} < s_{k,j+1}h + τ_{s_{k,j+1}}. Now, we denote

  η(t) = t − s_{kj}h,  t ∈ Π_k^j.                                        (9)

Then η(t) is a piecewise function satisfying

  η_m ≤ η(t) ≤ η_M                                                      (10)

where η_m := τ_m and η_M := h + τ_M. Moreover, considering (7) and (5), for t ∈ Π_k^j, we have
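Under the stated assumptions, a small numerical sketch (with illustrative values of h, τ_m, τ_M, and the artificial delays, none of which come from the paper) confirms that the piecewise delay η(t) = t − s_{kj}h stays within the bounds (10):

```python
# Illustrative parameters: sampling period h, transmission delays in [tau_m, tau_M]
h, tau_m, tau_M = 0.1, 0.02, 0.05

def eta(t, s_kj, h):
    """Artificial delay eta(t) = t - s_kj*h on the j-th holding subinterval."""
    return t - s_kj * h

# One subinterval [s_kj*h + tau, (s_kj+1)*h + tau_next) with admissible delays
s_kj, tau, tau_next = 7, 0.03, 0.04
t_lo = s_kj * h + tau
t_hi = (s_kj + 1) * h + tau_next
# eta over the subinterval stays within [eta_m, eta_M] = [tau_m, h + tau_M]
for i in range(101):
    t = t_lo + (t_hi - t_lo) * i / 100.0
    assert tau_m <= eta(t, s_kj, h) <= h + tau_M
```

The bound η_M = h + τ_M is what couples the sampling period and the worst-case transmission delay in the filtering analysis.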

  ỹ(t) = y(t_k h) = y(s_{kj}h) − ψ(s_{kj}h)
       = y(t − η(t)) − ψ(t − η(t)),  t ∈ Π_k^j.                          (11)

Apparently, (11) presents a new expression for ỹ(t). Substituting (11) into (8) yields

  ẋ_f(t) = A_f x_f(t) + B_f [y(t − η(t)) − ψ(t − η(t))]
  x_f(0) = 0                                                             (12)
  z_f(t) = C_f x_f(t),  t ∈ Π_k^j.

Furthermore, based on (4), it is clear that ψ(t − η(t)) and y(t − η(t)) satisfy, for t ∈ Π_k^j,

  ψᵀ(t − η(t)) Ω ψ(t − η(t))
    ≤ λ[y(t − η(t)) − ψ(t − η(t))]ᵀ Ω [y(t − η(t)) − ψ(t − η(t))].       (13)

To summarize, under the event-triggered communication scheme, the filter (8) is modeled as a time-delay system (12) subject to (13). Thus, some existing analysis methods for time-delay systems can be used to design suitable filters.

D. Filtering Error System and Problem Formulation

Denote ξ(t) := col{x(t), x_f(t)} and e(t) := z(t) − z_f(t). Then, the filtering error system connecting the NN (1) with the filter (12) can be written as

  ξ̇(t) = Ãξ(t) + B̃ f(x(t)) + W̃₁ f(x(t − d(t))) + W̃₂ f(x(t − η(t)))
         + C̃₁Hξ(t − η(t)) − B̃_f ψ(t − η(t)) + Ẽw(t)
  ξ(θ) = col{φ(θ), 0},  θ ∈ [−max{d_M, η_M}, 0]                          (14)
  e(t) = C̃₂ξ(t),  t ∈ Π_k^j

where

  Ã = diag{−A, A_f},  B̃_f = col{0, B_f},  B̃ = col{B, 0}
  W̃₂ = col{0, B_f W₂},  C̃₁ = col{0, B_f C₁},  H = [I 0]
  W̃₁ = col{W₁, 0},  C̃₂ = [C₂  −C_f],  Ẽ = HᵀE.

Remark 1: It is clear that, under the event-triggered communication scheme, the filtering error system is modeled as a nonlinear system with two time-varying delays, which depends on ψ(t − η(t)). Since ψ(t − η(t)) is defined in (5) as the error between the current sampled data packet and the latest transmitted data packet, the filtering error system (14) can be regarded as a sampled-data error-dependent time-delay system.

Before formulating the problem, we first introduce a new definition, called generalized dissipativity.

Definition 1 (Generalized Dissipativity): For given real matrices Ψ₀ ≥ 0, Ψ₁ ≤ 0, Ψ₂, and Ψ₃ > 0, satisfying that if Ψ₀ ≠ 0 then Ψ₁ = 0 and Ψ₂ = 0, the filtering error system (14) is said to be generalized dissipative if, under zero initial conditions, the following inequality holds for any t_f ≥ 0 and w ∈ L2[0, ∞):

  ∫₀^{t_f} J(t)dt − sup_{0≤t≤t_f} eᵀ(t)Ψ₀e(t) ≥ 0                        (15)

where

  J(t) = eᵀ(t)Ψ₁e(t) + 2eᵀ(t)Ψ₂w(t) + wᵀ(t)Ψ₃w(t).                       (16)
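For scalar signals, the index (16) and the special-case parameter choices of Definition 1 listed later in Remark 2 can be written down directly. The sketch below is illustrative only: γ and the test values of e and w are made up, and only three of the five special cases are shown (the mixed H∞/passive and (Q, S, R) cases follow the same pattern):

```python
# Scalar (p = q = 1) sketch of the performance index (16)
def J(e, w, psi1, psi2, psi3):
    """J(t) = e'*Psi1*e + 2*e'*Psi2*w + w'*Psi3*w, written for scalars."""
    return psi1 * e * e + 2 * psi2 * e * w + psi3 * w * w

gamma = 2.0
cases = {
    # name: (Psi0, Psi1, Psi2, Psi3)
    "H_inf":     (0.0, -1.0, 0.0, gamma ** 2),
    "passivity": (0.0, 0.0, 1.0, gamma),
    "L2_Linf":   (1.0, 0.0, 0.0, gamma ** 2),
}

e, w = 0.5, 1.0
# For the H_inf choice, (16) collapses to gamma^2*w^2 - e^2
_, p1, p2, p3 = cases["H_inf"]
assert J(e, w, p1, p2, p3) == gamma ** 2 * w * w - e * e
```

With the H∞ choice, (15) becomes the familiar requirement that the L2 gain from w to e not exceed γ.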


Remark 2: The notion of generalized dissipativity in Definition 1 includes the following special cases.
1) If Ψ₀ = 0, Ψ₁ = −I, Ψ₂ = 0, and Ψ₃ = γ²I (γ > 0), the generalized dissipativity reduces to the H∞ performance [26].
2) If Ψ₀ = 0, Ψ₁ = 0, Ψ₂ = I, and Ψ₃ = γI, the generalized dissipativity is exactly the passivity [27].
3) If Ψ₀ = 0, Ψ₁ = −γ⁻¹αI, Ψ₂ = (1 − α)I (0 ≤ α ≤ 1), and Ψ₃ = γI, the generalized dissipativity becomes the mixed H∞ and passive performance [28].
4) If Ψ₀ = 0, Ψ₁ = Q, Ψ₂ = S, and Ψ₃ = R, the generalized dissipativity reduces to the (Q, S, R)-dissipativity [29].
5) If Ψ₀ = I, Ψ₁ = 0, Ψ₂ = 0, and Ψ₃ = γ²I, the generalized dissipativity means the L2–L∞ performance [11].

The problem of event-triggered generalized dissipativity filtering addressed in this paper is formulated as follows: for given scalars d_M, η_m, η_M, and λ, design suitable filter gain matrices (A_f, B_f, C_f) and an event-triggered weighting matrix Ω > 0 such that:
1) the filtering error system (14) with w(t) ≡ 0 is asymptotically stable;
2) the filtering error system (14) is generalized dissipative in the sense of Definition 1.

To proceed, we first introduce two integral inequalities, which are useful in solving the above problem.

Lemma 1 [30]: Let τ(t) be a continuous function satisfying 0 ≤ h₁ ≤ τ(t) ≤ h₂. For any n × n real matrix R₁ > 0 and a vector ẋ: [−h₂, 0] → R^n such that the integration concerned below is well defined, the following inequality holds for any 2n × 2n real matrix S₁ satisfying [R̃₁ S₁; ∗ R̃₁] ≥ 0:

  −(h₂ − h₁) ∫_{t−h₂}^{t−h₁} ẋᵀ(s)R₁ẋ(s)ds
    ≤ 2ψ₁₁ᵀS₁ψ₂₁ − ψ₁₁ᵀR̃₁ψ₁₁ − ψ₂₁ᵀR̃₁ψ₂₁                               (17)

where R̃₁ := diag{R₁, 3R₁}; and

  ψ₁₁ := col{x(t − τ(t)) − x(t − h₂), x(t − τ(t)) + x(t − h₂) − 2v₂(t)}
  ψ₂₁ := col{x(t − h₁) − x(t − τ(t)), x(t − h₁) + x(t − τ(t)) − 2v₁(t)}

with

  v₁(t) := (1/(τ(t) − h₁)) ∫_{t−τ(t)}^{t−h₁} x(s)ds
  v₂(t) := (1/(h₂ − τ(t))) ∫_{t−h₂}^{t−τ(t)} x(s)ds.

Lemma 2 [31]: For a given matrix R > 0, the following inequality holds for any continuously differentiable function w: [a, b] → R^n:

  ∫_a^b ẇᵀ(u)Rẇ(u)du ≥ (1/(b − a)) [ω̃₁ᵀRω̃₁ + 3ω̃₂ᵀRω̃₂]                 (18)

where

  ω̃₁ := w(b) − w(a),  ω̃₂ := w(b) + w(a) − (2/(b − a)) ∫_a^b w(u)du.

Remark 3: Applying Lemma 2 with a = t − h₂, b = t − h₁, and w = x, one obtains

  −(h₂ − h₁) ∫_{t−h₂}^{t−h₁} ẋᵀ(s)Rẋ(s)ds ≤ −ψ₀ᵀR̃ψ₀                     (19)

where R̃ := diag{R, 3R} and

  ψ₀ := col{x(t − h₁) − x(t − h₂),
            x(t − h₁) + x(t − h₂) − (2/(h₂ − h₁)) ∫_{t−h₂}^{t−h₁} x(s)ds}.

Clearly, (17) and (19) provide two upper bounds for the integral term −(h₂ − h₁)∫_{t−h₂}^{t−h₁} ẋᵀ(s)Rẋ(s)ds. Although it is difficult to compare them theoretically, the upper bound given by (17) is an expression in the vectors x(t − h₁), x(t − h₂), and x(t − τ(t)), while the upper bound given by (19) depends only on the vectors x(t − h₁) and x(t − h₂). This characteristic of (17) makes Lemma 1 useful in the delay-dependent stability analysis of NNs with interval time-varying delays.

III. EVENT-TRIGGERED GENERALIZED DISSIPATIVITY FILTERING PERFORMANCE ANALYSIS

In this section, by employing the Lyapunov–Krasovskii functional method, a sufficient condition is derived such that the filtering error system (14) is not only asymptotically stable but also generalized dissipative in the sense of Definition 1. Choose the following Lyapunov–Krasovskii functional:

  V(t) = ξᵀ(t)Pξ(t) + V₁(t) + V₂(t)                                      (20)

where

  V₁(t) := ∫_{t−d(t)}^t xᵀ(s)Q₁x(s)ds + ∫_{t−d_M}^t xᵀ(s)Q₂x(s)ds
           + ∫_{t−η_m}^t xᵀ(s)R₁x(s)ds + ∫_{t−η_M}^{t−η_m} xᵀ(s)R₂x(s)ds
  V₂(t) := d_M ∫_{t−d_M}^t ∫_θ^t ẋᵀ(s)S₁ẋ(s)ds dθ
           + η_m ∫_{t−η_m}^t ∫_θ^t ẋᵀ(s)S₂ẋ(s)ds dθ
           + (η_M − η_m) ∫_{t−η_M}^{t−η_m} ∫_θ^t ẋᵀ(s)S₃ẋ(s)ds dθ

with P = [P₁ P₂; ∗ P₃] > 0, Q₁ > 0, Q₂ > 0, R₁ > 0, R₂ > 0, S₁ > 0, S₂ > 0, and S₃ > 0 to be determined. Applying Lemmas 1 and 2, we have the following result.

Proposition 1: For given scalars d_M, μ, τ_m, τ_M, λ > 0, and real matrices Ω > 0, Ψ₀ ≥ 0, Ψ₁ ≤ 0, Ψ₂, Ψ₃ = Ψ̃₃Ψ̃₃ᵀ ≥ 0, the filtering error system (14) is asymptotically stable and generalized dissipative if there exist real matrices P = [P₁ P₂; ∗ P₃] > 0, Q_i > 0, R_i > 0 (i = 1, 2), S_j > 0


and real diagonal matrices Λ_j > 0 (j = 1, 2, 3), and real matrices T₁ = [T₁₁ T₁₂; T₁₃ T₁₄], T₂ = [T₂₁ T₂₂; T₂₃ T₂₄], such that

  P − C̃₂ᵀΨ₀C̃₂ ≥ 0,  [S̃₁ T₁; ∗ S̃₁] ≥ 0,  [S̃₃ T₂; ∗ S̃₃] ≥ 0             (21)

  ⎡ Ξ₁  d_MΓS₁  η_mΓS₂  (η_M − η_m)ΓS₃ ⎤
  ⎢ ∗    −S₁      0          0         ⎥ < 0                              (22)
  ⎢ ∗     ∗     −S₂         0         ⎥
  ⎣ ∗     ∗      ∗        −S₃        ⎦

where S̃₁ := diag{S₁, 3S₁}, S̃₃ := diag{S₃, 3S₃}

  Γ = col{−Aᵀ, 0, 0, 0, 0, 0, 0, Bᵀ, W₁ᵀ, 0, 0, Eᵀ, 0, 0, 0, 0, 0}

and Ξ₁ is defined in (23), shown at the bottom of the page, with

  ϑ₁₁ := He{−P₁A} + Q₁ + Q₂ + R₁ − 4S₁ − 4S₂ − C₂ᵀΨ₁C₂
  ϑ₁₂ := P₂A_f − AᵀP₂ + C₂ᵀΨ₁C_f
  ϑ₁₆ := T₁₁ᵀ + T₁₂ᵀ + T₁₃ᵀ + T₁₄ᵀ − 2S₁
  ϑ₁₇ := −T₁₁ᵀ + T₁₃ᵀ − T₁₂ᵀ + T₁₄ᵀ
  ϑ₁₈ := P₁B + ρΛ₁
  ϑ₁,₁₀ := [P₂B_fW₂  −P₂B_f],  ϑ₁,₁₁ := (P₁E − C₂ᵀΨ₂)Ψ̃₃⁻ᵀ
  ϑ₁,₁₂ := [6S₁ − 2T₁₃ᵀ  −2T₁₄ᵀ]
  ϑ₂₂ := He{P₃A_f} − C_fᵀΨ₁C_f
  ϑ₂,₁₀ := [P₃B_fW₂  −P₃B_f],  ϑ₂,₁₁ := (P₂ᵀE + C_fᵀΨ₂)Ψ̃₃⁻ᵀ
  ϑ₃₃ := −R₁ + R₂ − 4S₃ − 4S₂
  ϑ₃,₁₃ := [6S₃ − 2T₂₃ᵀ  −2T₂₄ᵀ]
  ϑ₃₄ := Σ_{j=1}^4 T₂ⱼᵀ − 2S₃,  ϑ₃₅ := −T₂₁ᵀ + T₂₃ᵀ − T₂₂ᵀ + T₂₄ᵀ
  ϑ₄₄ := He{T₂₂ + T₂₄ − T₂₁ − T₂₃} − 8S₃ + λC₁ᵀΩC₁
  ϑ₄₅ := T₂₁ᵀ − T₂₃ᵀ − T₂₂ᵀ + T₂₄ᵀ − 2S₃
  ϑ₄,₁₀ := [ρΛ₃ + λC₁ᵀΩW₂  −λC₁ᵀΩ]
  ϑ₄,₁₃ := [−2(T₂₂ + T₂₄)ᵀ + 6S₃  2(T₂₃ − T₂₄)ᵀ + 6S₃]
  ϑ₅₅ := −4S₃ − R₂,  ϑ₅,₁₃ := [2(T₂₂ − T₂₄)ᵀ  6S₃]
  ϑ₆₆ := −(1 − μ)Q₁ + He{T₁₂ + T₁₄ − T₁₁ − T₁₃} − 8S₁
  ϑ₆₇ := T₁₁ᵀ − T₁₃ᵀ − T₁₂ᵀ + T₁₄ᵀ − 2S₁
  ϑ₆,₁₂ := [−2(T₁₂ + T₁₄)ᵀ + 6S₁  2(T₁₃ − T₁₄)ᵀ + 6S₁]
  ϑ₇₇ := −Q₂ − 4S₁,  ϑ₇,₁₂ := [2(T₁₂ − T₁₄)ᵀ  6S₁]
  ϑ₁₀,₁₀ := [−2Λ₃ + λW₂ᵀΩW₂  −λW₂ᵀΩ; ∗  −(1 − λ)Ω]
  ϑ₁₂,₁₂ := [−12S₁  4T₁₄ᵀ; ∗  −12S₁],  ϑ₁₃,₁₃ := [−12S₃  4T₂₄ᵀ; ∗  −12S₃]

and

  Ξ₁ :=
  ⎡ϑ₁₁ ϑ₁₂ −2S₂ P₂B_fC₁  0  ϑ₁₆ ϑ₁₇ ϑ₁₈  P₁W₁  ϑ₁,₁₀ ϑ₁,₁₁ ϑ₁,₁₂   0    3S₂⎤
  ⎢ ∗  ϑ₂₂   0  P₃B_fC₁  0   0   0  P₂ᵀB P₂ᵀW₁ ϑ₂,₁₀ ϑ₂,₁₁   0     0     0 ⎥
  ⎢ ∗   ∗  ϑ₃₃   ϑ₃₄   ϑ₃₅   0   0   0    0     0     0     0   ϑ₃,₁₃  3S₂⎥
  ⎢ ∗   ∗   ∗    ϑ₄₄   ϑ₄₅   0   0   0    0   ϑ₄,₁₀   0     0   ϑ₄,₁₃   0 ⎥
  ⎢ ∗   ∗   ∗     ∗    ϑ₅₅   0   0   0    0     0     0     0   ϑ₅,₁₃   0 ⎥
  ⎢ ∗   ∗   ∗     ∗     ∗  ϑ₆₆ ϑ₆₇  0   ρΛ₂    0     0   ϑ₆,₁₂   0     0 ⎥
  ⎢ ∗   ∗   ∗     ∗     ∗   ∗  ϑ₇₇  0    0     0     0   ϑ₇,₁₂   0     0 ⎥   (23)
  ⎢ ∗   ∗   ∗     ∗     ∗   ∗   ∗ −2Λ₁   0     0     0     0     0     0 ⎥
  ⎢ ∗   ∗   ∗     ∗     ∗   ∗   ∗   ∗  −2Λ₂    0     0     0     0     0 ⎥
  ⎢ ∗   ∗   ∗     ∗     ∗   ∗   ∗   ∗    ∗   ϑ₁₀,₁₀  0     0     0     0 ⎥
  ⎢ ∗   ∗   ∗     ∗     ∗   ∗   ∗   ∗    ∗     ∗    −I     0     0     0 ⎥
  ⎢ ∗   ∗   ∗     ∗     ∗   ∗   ∗   ∗    ∗     ∗     ∗   ϑ₁₂,₁₂  0     0 ⎥
  ⎢ ∗   ∗   ∗     ∗     ∗   ∗   ∗   ∗    ∗     ∗     ∗     ∗   ϑ₁₃,₁₃  0 ⎥
  ⎣ ∗   ∗   ∗     ∗     ∗   ∗   ∗   ∗    ∗     ∗     ∗     ∗     ∗  −3S₂⎦

Proof: First, from (3), it is clear that, for sᵢ ≠ 0,

  0 ≤ fᵢ(sᵢ)/sᵢ ≤ ρᵢ.                                                    (25)

Thus, one has

  fᵢ(xᵢ(t))(fᵢ(xᵢ(t)) − ρᵢxᵢ(t)) ≤ 0                                     (26)

which leads to

  2 Σᵢ₌₁ⁿ κ₁ᵢ fᵢ(xᵢ(t))(fᵢ(xᵢ(t)) − ρᵢxᵢ(t))
    = 2fᵀ(x(t))Λ₁f(x(t)) − 2xᵀ(t)ρΛ₁f(x(t)) ≤ 0                          (27)

where Λ₁ = diag{κ₁₁, κ₁₂, . . . , κ₁ₙ} > 0. Similarly, we have

  2fᵀ(x(t − d(t)))Λ₂f(x(t − d(t))) − 2xᵀ(t − d(t))ρΛ₂f(x(t − d(t))) ≤ 0  (28)
  2fᵀ(x(t − η(t)))Λ₃f(x(t − η(t))) − 2xᵀ(t − η(t))ρΛ₃f(x(t − η(t))) ≤ 0  (29)

where Λᵢ = diag{κᵢ₁, κᵢ₂, . . . , κᵢₙ} > 0 (i = 2, 3). Then, taking the derivative of V(t) with respect to t along the trajectory of the system (14) yields

  V̇(t) ≤ 2ξᵀ(t)Pξ̇(t) + xᵀ(t)(Q₁ + Q₂ + R₁)x(t)
        − (1 − μ)xᵀ(t − d(t))Q₁x(t − d(t)) − xᵀ(t − d_M)Q₂x(t − d_M)
        + xᵀ(t − η_m)(R₂ − R₁)x(t − η_m) − xᵀ(t − η_M)R₂x(t − η_M)
        + ẋᵀ(t)Θẋ(t) − d_M ∫_{t−d_M}^t ẋᵀ(θ)S₁ẋ(θ)dθ
        − η_m ∫_{t−η_m}^t ẋᵀ(θ)S₂ẋ(θ)dθ
        − (η_M − η_m) ∫_{t−η_M}^{t−η_m} ẋᵀ(θ)S₃ẋ(θ)dθ                    (30)

where Θ := d_M²S₁ + η_m²S₂ + (η_M − η_m)²S₃.


  ζ(t) := col{x(t), x_f(t), x(t − η_m), x(t − η(t)), x(t − η_M),
              x(t − d(t)), x(t − d_M), f(x(t)), f(x(t − d(t))), f(x(t − η(t))),
              ψ(t − η(t)), w(t), v₁₁(t), v₁₂(t), v₃₁(t), v₃₂(t), v₂₁(t)}  (24)

Applying Lemma 1, we have

  −d_M ∫_{t−d_M}^t ẋᵀ(s)S₁ẋ(s)ds
    ≤ 2ψ₁₁ᵀT₁ψ₁₂ − ψ₁₁ᵀS̃₁ψ₁₁ − ψ₁₂ᵀS̃₁ψ₁₂                                (31)

where S̃₁ := diag{S₁, 3S₁}; and

  ψ₁₁ := col{x(t − d(t)) − x(t − d_M), x(t − d(t)) + x(t − d_M) − 2v₁₂(t)}
  ψ₁₂ := col{x(t) − x(t − d(t)), x(t) + x(t − d(t)) − 2v₁₁(t)}

with

  v₁₁(t) := (1/d(t)) ∫_{t−d(t)}^t x(s)ds
  v₁₂(t) := (1/(d_M − d(t))) ∫_{t−d_M}^{t−d(t)} x(s)ds.

Applying Lemma 1 again, we obtain

  −(η_M − η_m) ∫_{t−η_M}^{t−η_m} ẋᵀ(θ)S₃ẋ(θ)dθ
    ≤ 2ψ₃₁ᵀT₂ψ₃₂ − ψ₃₁ᵀS̃₃ψ₃₁ − ψ₃₂ᵀS̃₃ψ₃₂                                (32)

where S̃₃ := diag{S₃, 3S₃} and

  ψ₃₁ := col{x(t − η(t)) − x(t − η_M), x(t − η(t)) + x(t − η_M) − 2v₃₂(t)}
  ψ₃₂ := col{x(t − η_m) − x(t − η(t)), x(t − η_m) + x(t − η(t)) − 2v₃₁(t)}

with

  v₃₁(t) := (1/(η(t) − η_m)) ∫_{t−η(t)}^{t−η_m} x(s)ds
  v₃₂(t) := (1/(η_M − η(t))) ∫_{t−η_M}^{t−η(t)} x(s)ds.

By Lemma 2, it is clear that

  −η_m ∫_{t−η_m}^t ẋᵀ(θ)S₂ẋ(θ)dθ ≤ −ψ₂₁ᵀS̃₂ψ₂₁                            (33)

where S̃₂ := diag{S₂, 3S₂} and

  ψ₂₁ := col{x(t) − x(t − η_m), x(t) + x(t − η_m) − 2v₂₁(t)}               (34)

with

  v₂₁(t) := (1/η_m) ∫_{t−η_m}^t x(s)ds.                                    (35)

Substituting (31)–(33) and (13) into (30) yields

  V̇(t) − J(t) ≤ ζᵀ(t)ϒζ(t)                                                (36)

where ζ(t) is defined in (24) and

  ϒ := Ξ₁ + Γ(d_M²S₁ + η_m²S₂ + (η_M − η_m)²S₃)Γᵀ.

If the matrix inequality in (22) is satisfied, applying the Schur complement yields ϒ < 0. Thus, there exists a scalar σ > 0 such that

  V̇(t) − J(t) ≤ −σζᵀ(t)ζ(t) ≤ −σξᵀ(t)ξ(t).                                (37)

In the sequel, we complete the proof in two steps: 1) the filtering error system (14) with w(t) ≡ 0 is asymptotically stable and 2) under zero initial conditions, the filtering error system (14) is generalized dissipative.

First, we prove that the filtering error system (14) with w(t) ≡ 0 is asymptotically stable if the matrix inequalities in (21) and (22) are satisfied. Set w(t) ≡ 0. Then, from (16) and (37), together with Ψ₁ ≤ 0, one can see that

  V̇(t) ≤ eᵀ(t)Ψ₁e(t) − σξᵀ(t)ξ(t) ≤ −σξᵀ(t)ξ(t) < 0,  for ξ(t) ≠ 0.      (38)

Therefore, one can conclude that the filtering error system (14) with w(t) ≡ 0 is asymptotically stable.

Next, we prove that, under zero initial conditions, the filtering error system (14) is generalized dissipative. Since V̇(t) − J(t) ≤ 0 by (37), for any t ≥ 0 we have

  ∫₀^t J(s)ds ≥ V(t) − V(0).                                               (39)

Under zero initial conditions, V(0) = 0. On the other hand, from (20), it is clear that V(t) ≥ ξᵀ(t)Pξ(t). Moreover, the first inequality in (21) gives P ≥ C̃₂ᵀΨ₀C̃₂; thus, it follows from (39) that

  ∫₀^t J(s)ds ≥ ξᵀ(t)Pξ(t) ≥ eᵀ(t)Ψ₀e(t).                                  (40)

From Definition 1, we only need to prove that, for any t_f ≥ 0, the following inequality is true:

  ∫₀^{t_f} J(s)ds ≥ sup_{0≤t≤t_f} eᵀ(t)Ψ₀e(t).                             (41)

We now consider two cases.
Case 1 (Ψ₀ = 0): In this case, from (40), it is clear that

  ∫₀^{t_f} J(s)ds ≥ eᵀ(t_f)Ψ₀e(t_f) = 0

which means that (41) is true.
Case 2 (Ψ₀ ≠ 0): In this case, from Definition 1, Ψ₁ = 0, Ψ₂ = 0, and Ψ₃ > 0. Thus, J(t) = wᵀ(t)Ψ₃w(t) ≥ 0, which leads to

  ∫₀^{t_f} J(s)ds ≥ ∫₀^t J(s)ds ≥ eᵀ(t)Ψ₀e(t),  0 ≤ t ≤ t_f.

Consequently, (41) also holds in this case. Therefore, by Definition 1, one can conclude that the filtering error system (14) is generalized dissipative, which completes the proof. ∎
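The Wirtinger-based integral inequality of Lemma 2, used repeatedly in the proof above, can be sanity-checked numerically in the scalar case R = 1 (a sketch only; the choice w(u) = sin(u) and the interval are illustrative, and the integrals are approximated by midpoint Riemann sums):

```python
import math

def riemann(f, a, b, n=20000):
    """Midpoint Riemann approximation of the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = 0.0, 1.5
lhs = riemann(lambda u: math.cos(u) ** 2, a, b)            # integral of w'(u)^2
w1 = math.sin(b) - math.sin(a)                             # omega_1 in (18)
w2 = math.sin(b) + math.sin(a) - 2.0 / (b - a) * riemann(math.sin, a, b)
rhs = (w1 ** 2 + 3.0 * w2 ** 2) / (b - a)
# Inequality (18): lhs >= rhs (here the two sides differ by well under 1%)
assert lhs >= rhs - 1e-6
```

The bound is noticeably tighter than Jensen's inequality, whose right-hand side would omit the 3ω̃₂ᵀRω̃₂ term; this gap is the source of the reduced conservatism noted in Remark 5.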


Remark 4: Proposition 1 presents a sufficient condition on the generalized dissipativity for the filtering error system (14) subject to (13). This condition provides a unified frame for different performance requirements. For example, if setting

0 = 0, 1 = −I , 2 = 0 and 3 = γ 2 I (γ > 0), Proposition 1 reduces to a bounded real lemma; if setting

0 = 0, 1 = 0, 2 = I , and 3 = γ I , Proposition 1 can be regarded as a passivity condition; if taking 0 = 0,

1 = Q, 2 = S, and 3 = R, Proposition 1 becomes a strict (Q, S, R)-dissipativity condition; and so on. Remark 5: The use of Lemmas 1 and 2 brings several advantages for Proposition 1. On the one hand, since Lemma 2 is an improvement over Jensen’s inequality, Proposition 1 is of less conservatism than the ones using the Jensen’s inequality. On the other hand, because Lemma 1 obviates the need of using the free-weighting matrix approach, Proposition 1 introduces fewer slack variable matrices to estimate the upper bounds of some related integral terms than those using the free-weighting matrix approach. IV. E VENT-T RIGGERED G ENERALIZED F ILTER D ESIGN In this section, we focus on solving out the problem of event-triggered generalized filtering based on Proposition 1. Proposition 2: For given scalars d M , μ, τm , τ M , λ > 0, ˜ ˜ T and real matrices 0 ≥ 0, 1 = − 1 1 ≤ 0, T ˜ 3 ≥ 0, the event-triggered filtering problem is ˜

2 , 3 = 3 solvable if there exist real matrices  > 0, P1 > 0, Q 1 > 0, Q 2 > 0, R1 > 0, R2 > 0, S1 > 0, S2 > 0, S3 > 0, U > 0, real diagonal matrices 1 > 0, 2 > 0, 3 > 0, and T12 21 T22 ˆ ˆ ˆ real matrices T1 = [ TT11 ], T2 = [ TT23 T24 ], A f , B f , and C f , 13 T14 such that P1 > U and





S˜1 T1 S˜3 T2 P1 U ≥ C0T 0 C0 , ≥ 0, ≥0  U  S˜1  S˜3 (42) ⎤ ⎡ T ˜ 2 d M S1 ηm S2 (η M − ηm )S3 1 1 ⎢ −S1 0 0 0 ⎥ ⎥ ⎢ ⎢  −S2 0 0 ⎥ ⎥ 0. Then, there exist a nonsingular real matrix P2 and a real matrix P3 > 0 such that

7

U = P2 P3−1 P2T . Since P1 − U > P1 − P2 P3−1 P2T > 0, which leads to P := [ P1 A f = P2−1 Aˆ f U −1 P2 ,

0, we have > 0. Denote

P2 P3 ]

B f = P2−1 Bˆ f , C f = Cˆ f U −1 P2 . (45)

Then, one has

P1 U ≥ C0T 0 C0 ⇒ P ≥ C˜ 2T 0 C˜ 2 .  U Define T := diag{I, P2 P3−1 , I, . . . , I } and premultiply and    19

postmultiply (43) by T and T T , respectively. Applying the Schur complement, one can deduce that the matrix (43) implies (22). Therefore, if the conditions of Proposition 2 are satisfied, so are the conditions of Proposition 1, which means that the filtering error system (14) is asymptotically stable and generalized dissipative. Finally, we prove the filter matrix parameters A f , B f , and C f can be obtained by (44). In fact, an observation from (45) is that −1





Af Bf P2 0 0 Aˆ f U −1 Bˆ f P2 . = 0 I Cf 0 0 I Cˆ f U −1 0 Similar to [32, proof of Proposition 11], one can conclude that the filter (12) with matrix parameters (A f , B f , C f ) is algebraically equivalent to the filter (12) with the matrix parameters ( Aˆ f U −1 , Bˆ f , Cˆ f U −1 ). The proof is completed.  Remark 6: It is worth pointing out that Proposition 2 provides an approach to designing event-triggered filters to estimate the states of n neurons. For a given threshold parameter λ > 0, desired event-triggered filters and the event weighting matrix  > 0 can be designed provided that solutions to the linear matrix inequalities described by (42) and (43) can be found. Moreover, based on Proposition 2, we can choose to design H∞ filters, passive filters, dissipative filters as well as L2 − L∞ filters if there is a need of performance requirements. However, although the event-triggered feedback in control, estimation and optimization are investigated in [14], the controllers and estimators should be given a priori. When no communication channel is used to connect the NN and the filter, the event-triggered communication scheme proposed in this paper is no longer available. Nevertheless, we can also present a sufficient condition to design suitable filters to estimate the states of neurons. In doing so, we first rewrite the corresponding filtering error system as ⎧ ˜ ˆ + Bˆ f (x(t)) + W˜ 1 f (x(t − d(t))) + Ew(t) ⎨ξ˙ (t) = Aξ(t) (46) e(t) = C˜ 2 ξ(t) ⎩ ξ(θ ) = col{φ(θ ), 0}, θ ∈ [−d M , 0] ˜ and C˜ 2 are defined in (14), and where W˜ 1 , E,



The matrices Â and B̂ are given by

Â = [  −A       0  ]       B̂ = [   B    ]
    [ B_f C1   A_f ],          [ B_f W2 ].

Then, following the line of the proofs of Propositions 1 and 2, we have the following result.

Proposition 3: For given scalars d_M and μ, and real matrices Ψ0 ≥ 0, Ψ1 ≤ 0, Ψ2, and Ψ3 = Ψ̃3ᵀΨ̃3 ≥ 0, the generalized filtering problem is


solvable if there exist real matrices P1 > 0, Q1 > 0, Q2 > 0, S1 > 0, U > 0, real diagonal matrices Λ1 > 0 and Λ2 > 0, and a real matrix T1 = col{T11, T12, T13, T14} such that P1 > U, (42), and

⎡ χ    d_M Γ̂1 S1   Γ̂2 ⎤
⎢ ∗      −S1        0  ⎥ < 0                        (47)
⎣ ∗       ∗        −I  ⎦

hold. The filters are designed using Proposition 3, and the obtained results are listed in Table III,


TABLE III
NUMBER OF TRANSMITTED PACKETS IN DIFFERENT SCHEMES

from which one can see clearly that Proposition 3 in this paper outperforms the results in [25].

VI. CONCLUSION

The problem of event-triggered generalized dissipativity filtering has been addressed for an NN with a time-varying delay. A novel event-triggered communication scheme has been introduced to select the necessary data packets to be transmitted through a communication channel. Under this event-triggered communication scheme, the resultant filtering error system has been modeled as a time-delay system dependent on the sampled-data error. Based on this model, a linear matrix inequality-based approach has been presented to design suitable filters such that certain filtering performance can be ensured. It should be mentioned that, under an event-triggered communication scheme, a unified framework has been provided for addressing several kinds of filtering issues, such as H∞ filtering, passive filtering, mixed H∞ and passive filtering, dissipative filtering, and L2–L∞ filtering. The effectiveness of the proposed method has been demonstrated by two numerical examples.

REFERENCES

[1] H. Zhang, Z. Wang, and D. Liu, "A comprehensive review of stability analysis of continuous-time recurrent neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1229–1262, Jul. 2014.
[2] W. He, F. Qian, Q.-L. Han, and J. Cao, "Lag quasi-synchronization of coupled delayed systems with parameter mismatch," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 58, no. 6, pp. 1345–1357, Jun. 2011.
[3] H. Shao and Q.-L. Han, "New delay-dependent stability criteria for neural networks with two additive time-varying delay components," IEEE Trans. Neural Netw., vol. 22, no. 5, pp. 812–818, May 2011.
[4] Y. Zhang and Q.-L. Han, "Network-based synchronization of delayed neural networks," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 60, no. 3, pp. 676–689, Mar. 2013.
[5] V. T. S. Elanayar and Y. C. Shin, "Radial basis function neural network for approximation and estimation of nonlinear stochastic dynamic systems," IEEE Trans. Neural Netw., vol. 5, no. 4, pp. 594–603, Jul. 1994.
[6] R. Habtom and L. Litz, "Estimation of unmeasured inputs using recurrent neural networks and the extended Kalman filter," in Proc. Int. Conf. Neural Netw., Houston, TX, USA, Jun. 1997, pp. 2067–2071.
[7] F. M. Salam and J. Zhang, "Adaptive neural observer with forward co-state propagation," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Washington, DC, USA, Jul. 2001, pp. 675–680.
[8] Z. Wang, D. W. C. Ho, and X. Liu, "State estimation for delayed neural networks," IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 279–284, Jan. 2005.
[9] Y. He, Q.-G. Wang, M. Wu, and C. Lin, "Delay-dependent state estimation for delayed neural networks," IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 1077–1081, Jul. 2006.
[10] Z. Wang, Y. Liu, and X. Liu, "State estimation for jumping recurrent neural networks with discrete and distributed delays," Neural Netw., vol. 22, no. 1, pp. 41–48, Jan. 2009.
[11] H. Huang and G. Feng, "Delay-dependent H∞ and generalized H2 filtering for delayed neural networks," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 56, no. 4, pp. 846–857, Apr. 2009.
[12] X. Liu and J. Cao, "Robust state estimation for neural networks with discontinuous activations," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 6, pp. 1425–1437, Dec. 2010.


[13] T. H. Lee, S. Lakshmanan, J. H. Park, and P. Balasubramaniam, "State estimation for genetic regulatory networks with mode-dependent leakage delays, time-varying delays, and Markovian jumping parameters," IEEE Trans. Nanobiosci., vol. 12, no. 4, pp. 363–375, Dec. 2013.
[14] A. Bemporad, M. Heemels, and M. Johansson, Networked Control Systems (Lecture Notes in Control and Information Sciences), vol. 406. New York, NY, USA: Springer-Verlag, 2010, pp. 293–358.
[15] P. Tabuada, "Event-triggered real-time scheduling of stabilizing control tasks," IEEE Trans. Autom. Control, vol. 52, no. 9, pp. 1680–1685, Sep. 2007.
[16] X. Wang and M. D. Lemmon, "Event-triggered broadcasting across distributed networked control systems," in Proc. Amer. Control Conf., Seattle, WA, USA, Jun. 2008, pp. 3139–3144.
[17] X. Wang and M. D. Lemmon, "Self-triggered feedback control systems with finite-gain L2 stability," IEEE Trans. Autom. Control, vol. 54, no. 3, pp. 452–467, Mar. 2009.
[18] W. P. M. H. Heemels, M. C. F. Donkers, and A. R. Teel, "Periodic event-triggered control for linear systems," IEEE Trans. Autom. Control, vol. 58, no. 4, pp. 847–861, Apr. 2013.
[19] D. Yue, E. Tian, and Q.-L. Han, "A delay system method for designing event-triggered controllers of networked control systems," IEEE Trans. Autom. Control, vol. 58, no. 2, pp. 475–481, Feb. 2013.
[20] C. Peng and Q.-L. Han, "A novel event-triggered transmission scheme and L2 control co-design for sampled-data control systems," IEEE Trans. Autom. Control, vol. 58, no. 10, pp. 2620–2626, Oct. 2013.
[21] M. C. F. Donkers and W. P. M. H. Heemels, "Output-based event-triggered control with guaranteed L∞-gain and improved and decentralized event-triggering," IEEE Trans. Autom. Control, vol. 57, no. 6, pp. 1362–1376, Jun. 2012.
[22] C. Peng, Q.-L. Han, and D. Yue, "To transmit or not to transmit: A discrete event-triggered communication scheme for networked Takagi–Sugeno fuzzy systems," IEEE Trans. Fuzzy Syst., vol. 21, no. 1, pp. 164–170, Feb. 2013.
[23] X.-M. Zhang and Q.-L. Han, "Event-triggered dynamic output feedback control for networked control systems," IET Control Theory Appl., vol. 8, no. 4, pp. 226–234, Mar. 2014.
[24] X.-M. Zhang and Q.-L. Han, "Event-based H∞ filtering for sampled-data systems," Automatica, vol. 51, pp. 55–69, Jan. 2015.
[25] Z. Su, H. Wang, L. Yu, and D. Zhang, "Exponential H∞ filtering for switched neural networks with mixed delays," IET Control Theory Appl., vol. 8, no. 11, pp. 987–995, Jul. 2014.
[26] X.-M. Zhang and Q.-L. Han, "Network-based H∞ filtering using a logic jumping-like trigger," Automatica, vol. 49, no. 5, pp. 1428–1435, May 2013.
[27] X. Lin, X. Zhang, and Y. Wang, "Robust passive filtering for neutral-type neural networks with time-varying discrete and unbounded distributed delays," J. Franklin Inst., vol. 350, no. 5, pp. 966–989, Jun. 2013.
[28] Z.-G. Wu, J. H. Park, H. Su, B. Song, and J. Chu, "Mixed H∞ and passive filtering for singular systems with time delays," Signal Process., vol. 93, no. 7, pp. 1705–1711, Jul. 2013.
[29] Z. Feng and J. Lam, "Robust reliable dissipative filtering for discrete delay singular systems," Signal Process., vol. 92, no. 12, pp. 3010–3025, Dec. 2012.
[30] X.-M. Zhang and Q.-L. Han, "Global asymptotic stability analysis for delayed neural networks using a matrix-based quadratic convex approach," Neural Netw., vol. 54, pp. 57–69, Jun. 2014.
[31] A. Seuret and F. Gouaisbaut, "Wirtinger-based integral inequality: Application to time-delay systems," Automatica, vol. 49, no. 9, pp. 2860–2866, Sep. 2013.
[32] X.-M. Zhang and Q.-L. Han, "Robust H∞ filtering for a class of uncertain linear systems with time-varying delay," Automatica, vol. 44, no. 1, pp. 157–166, Jan. 2008.

Jia Wang received the B.Sc. degree in computing science from Liaoning University, Shenyang, China, in 2005, the M.Sc. degree in operational research and cybernetics from Northeastern University, Shenyang, in 2008, and the Ph.D. degree in computer engineering from Central Queensland University, Rockhampton, QLD, Australia, in 2013. She joined the College of Mathematics and Computer Science, Fuzhou University, Fuzhou, China, in 2015. Her current research interests include networked control systems, neural networks, distributed systems, and dissipative control and filtering.


Xian-Ming Zhang received the M.S. degree in applied mathematics and the Ph.D. degree in control theory and engineering from Central South University, Changsha, China, in 1992 and 2006, respectively. He joined Central South University in 1992, where he became an Associate Professor with the School of Mathematics and Statistics. He was a Senior Post-Doctoral Research Fellow with the Centre for Intelligent and Networked Systems and a Lecturer with the School of Engineering and Technology, Central Queensland University, Rockhampton, QLD, Australia, from 2007 to 2014. He joined Griffith University, Gold Coast, QLD, Australia, in 2014, where he is currently a Lecturer with the Griffith School of Engineering. His current research interests include H∞ filtering, event-triggered control, networked control systems, neural networks, distributed systems, and time-delay systems. Dr. Zhang was a recipient of the Hunan Provincial Natural Science Award, China, in 2011, and the National Natural Science Award, China, in 2013, both jointly with Prof. M. Wu and Prof. Y. He.


Qing-Long Han (SM'13) received the B.Sc. degree in mathematics from Shandong Normal University, Jinan, China, in 1983, and the M.Sc. and Ph.D. degrees in control engineering and electrical engineering from the East China University of Science and Technology, Shanghai, China, in 1992 and 1997, respectively. He was a Post-Doctoral Research Fellow with the Laboratoire d'Automatique et d'Informatique Industrielle, École Supérieure d'Ingénieurs de Poitiers, Université de Poitiers, Poitiers, France, from 1997 to 1998, and a Research Assistant Professor with the Department of Mechanical and Industrial Engineering, Southern Illinois University, Edwardsville, IL, USA, from 1999 to 2001. He joined Central Queensland University, Rockhampton, QLD, Australia, in 2001, where he was a Laureate Professor, the Associate Dean (Research and Innovation) of the Higher Education Division, and the Founding Director of the Centre for Intelligent and Networked Systems. He joined Griffith University, Gold Coast, QLD, Australia, in 2014, where he is a Professor with the Griffith School of Engineering. His current research interests include networked control systems, neural networks, time-delay systems, multiagent systems, and complex systems. Prof. Han is a Chang Jiang (Yangtze River) Scholar Chair Professor appointed by the Ministry of Education, China, and a 100 Talents Program Chair Professor in Shanxi, China. He was named one of The World's Most Influential Scientific Minds: 2014 and has been recognized as a Highly Cited Researcher in the field of engineering by Thomson Reuters.
