This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. IEEE TRANSACTIONS ON CYBERNETICS


Neuronal State Estimation for Neural Networks With Two Additive Time-Varying Delay Components

Xian-Ming Zhang, Member, IEEE, Qing-Long Han, Senior Member, IEEE, Zidong Wang, Fellow, IEEE, and Bao-Lin Zhang, Member, IEEE

Abstract—This paper is concerned with state estimation for neural networks with two additive time-varying delay components. Three cases of the two time-varying delays are fully considered: 1) both delays are differentiable and uniformly bounded, with delay derivatives bounded by some constants; 2) one delay is continuous and uniformly bounded, while the other is differentiable and uniformly bounded with its delay derivative bounded by certain constants; and 3) both delays are continuous and uniformly bounded. First, an extended reciprocally convex inequality is introduced to bound the reciprocally convex combinations appearing in the derivative of some Lyapunov–Krasovskii functional. Second, sufficient conditions are derived based on the extended inequality for the three cases of time-varying delays, respectively. Third, a linear-matrix-inequality-based approach with two tuning parameters is proposed to design desired Luenberger estimators such that the error system is globally asymptotically stable. This approach is then applied to state estimation of neural networks with a single interval time-varying delay. Finally, two numerical examples are given to illustrate the effectiveness of the proposed method.

Index Terms—Global asymptotic stability, neural networks, reciprocally convex inequality, state estimation, time-varying delays.

Manuscript received December 18, 2016; revised March 25, 2017; accepted March 30, 2017. This work was supported by the Australian Research Council Discovery Project under Grant DP160103567. This paper was recommended by Associate Editor L. Zhang. (Corresponding author: Qing-Long Han.) X.-M. Zhang and Q.-L. Han are with the School of Software and Electrical Engineering, Swinburne University of Technology, Melbourne, VIC 3122, Australia (e-mail: [email protected]; [email protected]). Z. Wang is with the Department of Computer Science, Brunel University, Uxbridge UB8 3PH, U.K. (e-mail: [email protected]). B.-L. Zhang is with the College of Science, China Jiliang University, Hangzhou 310018, China (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TCYB.2017.2690676

I. INTRODUCTION

Neuronal state estimation is one of the crucial issues in the field of neural networks. A number of applications of neural networks are based on neuronal states [1]–[4]. However, in some practical scenarios, neuronal states are unmeasurable and only partial information on neuronal states is available, especially in relatively large-scale neural networks [5], [6]. Neuronal state estimation aims at designing suitable estimators to approximate neuronal states from the available measurements such that practical performance can be achieved. Thus, during the last decade, neuronal state estimation has become a hot research topic, and a number of results have been reported in [7]–[14].

In the implementation of practical neural networks, the finite switching speed of the associated amplifiers unavoidably results in the occurrence of time-varying delays [15]–[20], which are usually regarded as a main factor that may degrade the performance of the neural network under study. Different from the widely used fuzzy-based methods and receding-horizon-optimization-based methods [21]–[24], the neuronal state estimation issue for delayed neural networks was first addressed using a delay-independent analysis approach [5], and then developed through a delay-dependent analysis approach [14]. Since the delay-dependent analysis approach generally reduces conservatism compared with the delay-independent one [25]–[27], delay-dependent state estimation has come to the fore. Its main goal is to design a suitable estimator such that the error dynamics is globally asymptotically stable for any time-varying delay less than an admissible upper bound. The size of the admissible upper bound is thus regarded as an important index to measure the conservatism of the criterion involved: a larger admissible upper bound corresponds to a less conservative criterion [27]–[29]. One key point in obtaining less conservative criteria is to bound the integral terms appearing in the derivative of a certain Lyapunov–Krasovskii functional. It is worth mentioning that an effective way to bound these integral terms is to apply a proper integral inequality in combination with a reciprocally convex approach.

In 2008, a novel model of delayed neural networks was introduced in [30], where the time-delay is modeled as the sum of two or more time-delay components that cannot be lumped together. The motivation for this model comes from remote control and network-based control [31].
In network environments, signal transmissions between two network nodes may experience two or more segments of networks with different transmission conditions, which leads to two or more time-delays with different properties [33], [34]. It is unreasonable to describe such induced effects by a single time-delay component. Therefore, it is of significance in both theory and practice to study neural networks with two or more additive time-delay components, which has attracted considerable research interest in recent years. To mention a few results, the stability of neural networks with two additive time-varying delay components has been analyzed in [30]–[33].

2168-2267 © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.



Passivity and passification of memristor-based recurrent neural networks with additive time-varying delays have been investigated in [35]. However, to the best of the authors' knowledge, no results have been reported in the literature concerning neuronal state estimation for such neural networks with two or more additive time-delay components, which motivates the current study.

This paper focuses on neuronal state estimation for a class of neural networks with two additive time-varying delay components. The main contributions lie in two aspects. The first is that an extended reciprocally convex inequality is introduced to bound some reciprocally convex combinations appearing in the derivative of a certain Lyapunov–Krasovskii functional. The second is that three types of additive time-varying delay functions are taken into account: 1) two differentiable time-varying delay functions; 2) a differentiable delay function combined with a continuous one; and 3) two continuous time-varying delay functions. Thus, for different cases of time-varying delays, desired Luenberger estimators can be designed in terms of solutions to a set of linear matrix inequalities (LMIs). The effectiveness of the proposed results is demonstrated through two numerical examples.

Notation: The notation used in this paper is the same as that in [36]. diag{. . .} and col{. . .} denote a block-diagonal matrix and a block-column vector, respectively. The symbol “∗” stands for the symmetric term in a symmetric matrix.

II. PROBLEM DESCRIPTION

Consider the neural network with two additive time-varying delays described by

u̇(t) = −Au(t) + W0 g(u(t)) + W1 g(u(t − τ1(t) − τ2(t))) + J
y(t) = Cu(t) + ϕ(t, u(t))   (1)

where u(t) = col{u1(t), u2(t), . . . , un(t)} ∈ Rⁿ is the state vector associated with the n neurons; y(t) ∈ Rᵐ is the network measurement output vector; A = diag{a1, a2, . . . , an} > 0; J is a constant vector representing the bias; W0 and W1 ∈ Rⁿˣⁿ stand for the connection weight matrices; C ∈ Rᵐˣⁿ is a constant real matrix; and the neuron activation function g(u(t)) = col{g1(u1(t)), g2(u2(t)), . . . , gn(un(t))} ∈ Rⁿ satisfies g(0) = 0 and

ℓi⁻ ≤ [gi(s1) − gi(s2)]/(s1 − s2) ≤ ℓi⁺, s1 ≠ s2, i = 1, 2, . . . , n   (2)

where ℓi⁻ and ℓi⁺ are known constants. ϕ(t, u) : R × Rⁿ → Rᵐ is a neuron-dependent nonlinear disturbance on the network outputs. Suppose that there exists a constant matrix F such that

|ϕ(t, ν1) − ϕ(t, ν2)| ≤ |F(ν1 − ν2)|.   (3)

τ1(t) and τ2(t) are two time-varying delays. This model with two additive time-varying delay components was introduced in [37], motivated by networked systems, where transmission delays are subject to two segments with different network transmission conditions. As pointed out in [30], this model also has a strong background in neural networks. Thus, it is of significance to study such neural networks with two additive time-varying delay components. In this paper, we consider three cases of the time-varying delays τ1(t) and τ2(t).

Case 1: τ1(t) and τ2(t) are differentiable functions satisfying

0 ≤ τi(t) ≤ hi, μi1 ≤ τ̇i(t) ≤ μi2 (i = 1, 2).   (4)

Case 2: τ2(t) is a continuous function, while τ1(t) is a differentiable function satisfying

0 ≤ τ1(t) ≤ h1, μ11 ≤ τ̇1(t) ≤ μ12, 0 ≤ τ2(t) ≤ h2.   (5)

Case 3: τ1(t) and τ2(t) are continuous functions satisfying

0 ≤ τ1(t) ≤ h1, 0 ≤ τ2(t) ≤ h2   (6)

where hi, μi1, and μi2 (i = 1, 2) are known real constants.

Remark 1: Clearly, Case 3 is the most general and includes Cases 1 and 2 as special cases. Moreover, if neither τ1(t) nor τ2(t) is differentiable, only Case 3 can handle the situation.

This paper aims to estimate the neuronal state u(t) based on the measurement output y(t). For this purpose, we introduce a Luenberger estimator of the following form:

û̇(t) = −Aû(t) + W0 g(û(t)) + W1 g(û(t − τ1(t) − τ2(t))) + J + K[y(t) − Cû(t) − ϕ(t, û(t))]   (7)

where û(t) ∈ Rⁿ is the estimate of the neuronal state u(t), and K ∈ Rⁿˣᵐ is the gain matrix to be determined. Let x(t) = u(t) − û(t). From (1) and (7), one has

ẋ(t) = −AK x(t) + W0 f(x(t)) + W1 f(x(t − τ1(t) − τ2(t))) − K℘(t, x(t))   (8)

where AK = A + KC and

f(x(t)) = g(u(t)) − g(û(t)),  ℘(t, x(t)) = ϕ(t, u(t)) − ϕ(t, û(t)).

From (2), it is easy to verify that

[fi(xi) − ℓi⁻ xi][ℓi⁺ xi − fi(xi)] ≥ 0, ∀t ∈ R.   (9)

From (3), for any constant ε > 0, one has

ε℘ᵀ(t, x(t))℘(t, x(t)) ≤ ε xᵀ(t)FᵀF x(t).   (10)
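For a quick numerical sanity check, conditions (9) and (10) can be exercised with concrete nonlinearities. The choices below are illustrative assumptions only (not data from the paper): an activation g(u) = ℓ⁺ tanh(u), which puts each fi in the sector [0, ℓ⁺], and an output nonlinearity ϕ(t, u) = 0.7 sin(u), which satisfies (3) with F = 0.7I.

```python
import numpy as np

# Spot-check of the sector condition (9) and the quadratic bound (10).
# Illustrative assumptions: g(u) = lp*tanh(u) gives f_i(x_i) in [lm, lp]*x_i
# with lm = 0; phi(t, u) = 0.7*sin(u) gives |wp(t, x)| <= |F x| with F = 0.7*I.
rng = np.random.default_rng(2)
lm, lp, eps = 0.0, 0.8, 1.3
F = 0.7 * np.eye(3)
ok = True
for _ in range(1000):
    u, uh = rng.uniform(-5, 5, 3), rng.uniform(-5, 5, 3)
    x = u - uh                                   # estimation error
    f = lp * (np.tanh(u) - np.tanh(uh))          # f(x(t)) = g(u) - g(u_hat)
    wp = 0.7 * (np.sin(u) - np.sin(uh))          # wp(t, x) = phi(u) - phi(u_hat)
    ok &= bool(np.all((f - lm * x) * (lp * x - f) >= -1e-12))   # inequality (9)
    ok &= bool(eps * wp @ wp <= eps * x @ F.T @ F @ x + 1e-12)  # inequality (10)
```

Both inequalities hold for every random draw, as the sector and Lipschitz arguments predict.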

The problem we address in this paper is to design a suitable Luenberger estimator of the form (7) such that the error system (8) subject to (9) and (10) is globally asymptotically stable. Before proceeding, we first introduce the following result.

Lemma 1 [38]: Let R be an n × n constant real matrix satisfying R = Rᵀ > 0, and ω : [a, b] → Rⁿ a vector function such that the integrations below are well defined, where a and b are two scalars with b > a. Then the following inequality holds:

∫ₐᵇ ω̇ᵀ(s)Rω̇(s)ds ≥ (1/(b − a)) ζᵀ(ω, a, b) Πᵀ R̃ Π ζ(ω, a, b)   (11)

where R̃ = diag{R, 3R} and

ζ(ω, a, b) := col{ω(a), ω(b), (1/(b − a)) ∫ₐᵇ ω(s)ds},  Π := [−I I 0; I I −2I].
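The Wirtinger-based inequality of Lemma 1 can be spot-checked numerically. The sketch below is an illustrative test (not part of the paper): it picks a random cubic vector function ω on [a, b], evaluates both sides of (11) by quadrature, and confirms the bound.

```python
import numpy as np

# Numerical spot-check of Lemma 1 for a random cubic omega: [a, b] -> R^n.
rng = np.random.default_rng(0)
n, a, b = 2, 0.0, 1.5
X = rng.standard_normal((n, n))
R = X @ X.T + n * np.eye(n)                      # R = R^T > 0
coef = rng.standard_normal((4, n))               # cubic polynomial coefficients

omega = lambda s: sum(coef[k] * s**k for k in range(4))
domega = lambda s: sum(k * coef[k] * s**(k - 1) for k in range(1, 4))

s = np.linspace(a, b, 4001)
lhs = np.trapz([domega(si) @ R @ domega(si) for si in s], s)   # int of dw'R dw
avg = np.trapz(np.array([omega(si) for si in s]), s, axis=0) / (b - a)
zeta = np.concatenate([omega(a), omega(b), avg])               # col{w(a), w(b), avg}
I, Z = np.eye(n), np.zeros((n, n))
Pi = np.block([[-I, I, Z], [I, I, -2 * I]])
Rt = np.block([[R, Z], [Z, 3 * R]])
rhs = zeta @ Pi.T @ Rt @ Pi @ zeta / (b - a)                   # right side of (11)
```

Since R̃ > 0, the right-hand side is nonnegative and is dominated by the integral, as Lemma 1 asserts.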


III. STABILITY ANALYSIS

Lemma 2 (An Extended Reciprocally Convex Inequality): Let Q1 ∈ Rⁿ¹ˣⁿ¹ and Q2 ∈ Rⁿ²ˣⁿ² be positive definite, χ1 ∈ Rⁿ¹ and χ2 ∈ Rⁿ² be any two real vectors, and θ be a scalar satisfying 0 < θ < 1. Then, for any real matrices M ∈ Rⁿ¹ˣⁿ¹, N ∈ Rⁿ²ˣⁿ², and S ∈ Rⁿ¹ˣⁿ² satisfying

[Q1 − M, S; ∗, Q2] ≥ 0 and [Q1, S; ∗, Q2 − N] ≥ 0

the following inequality holds:

G(θ) := (1/θ)χ1ᵀQ1χ1 + (1/(1 − θ))χ2ᵀQ2χ2 ≥ χ1ᵀ[Q1 + (1 − θ)M]χ1 + χ2ᵀ(Q2 + θN)χ2 + 2χ1ᵀSχ2.   (12)

Proof: Notice that

G(θ) − {χ1ᵀ[Q1 + (1 − θ)M]χ1 + χ2ᵀ(Q2 + θN)χ2 + 2χ1ᵀSχ2}
= [χ1; χ2]ᵀ [((1 − θ)/θ)Q1 − (1 − θ)M, −S; ∗, (θ/(1 − θ))Q2 − θN] [χ1; χ2].

Let J := diag{−√(θ/(1 − θ)) I, √((1 − θ)/θ) I}. Then

Jᵀ [((1 − θ)/θ)Q1 − (1 − θ)M, −S; ∗, (θ/(1 − θ))Q2 − θN] J = θ[Q1 − M, S; ∗, Q2] + (1 − θ)[Q1, S; ∗, Q2 − N] ≥ 0

which leads to (12).

Remark 2: Lemma 2 delivers an extended inequality for a reciprocally convex combination with two terms. Setting M = N = 0 reduces Lemma 2 to the corresponding inequality

G(θ) ≥ χ1ᵀQ1χ1 + χ2ᵀQ2χ2 + 2χ1ᵀSχ2   (13)

which is proposed in [39]. By introducing more slack matrices, the inequality (12) can be extended to a more general form; one can refer to [40]. However, it is still challenging to derive such an inequality as (12) when the reciprocally convex combination G(θ) includes three or more terms, i.e., G(θ) = Σ_{j=1}^{r} (1/θj)χjᵀQjχj with Σ_{j=1}^{r} θj = 1 (θj ≥ 0, r ≥ 3).

A. Stability of the Error System (8) in Case 1

We first consider Case 1, i.e., both delays τ1(t) and τ2(t) are differentiable. For simplicity, we denote

d(t) := τ1(t) + τ2(t), h̄ := h1 + h2
η(t) := col{x(t), f(x(t))}, I1 = [I 0], I2 = [0 I].

Then the system (8) can be rewritten as

ẋ(t) = (−AK I1 + W0 I2)η(t) + W1 I2 η(t − d(t)) − K℘(t, x(t)).   (14)

From (9), one can see that, for any real diagonal matrix T = diag{t1, t2, . . . , tn} > 0, the following holds:

ηᵀ(s)[L1 T L2ᵀ + L2 T L1ᵀ]η(s) ≥ 0, ∀s ∈ R   (15)

where

L1 := col{−L⁻, I}, L⁻ := diag{ℓ1⁻, ℓ2⁻, . . . , ℓn⁻}
L2 := col{L⁺, −I}, L⁺ := diag{ℓ1⁺, ℓ2⁺, . . . , ℓn⁺}.

To proceed, choose the Lyapunov–Krasovskii functional candidate

V(t, xt) = V1(t, xt) + V2(t, xt) + V3(t, xt) + V4(t, xt)

where

V1(t, xt) := ςᵀ(t)Pς(t)   (16)
V2(t, xt) := 2 Σ_{i=1}^{n} ∫₀^{xi(t)} [σ1i(fi(s) − ℓi⁻ s) + σ2i(ℓi⁺ s − fi(s))] ds   (17)
V3(t, xt) := ∫_{t−τ1(t)}^{t} ηᵀ(s)Q1η(s)ds + ∫_{t−h1}^{t−τ1(t)} ηᵀ(s)Q2η(s)ds + ∫_{t−d(t)}^{t−τ1(t)} ηᵀ(s)Q3η(s)ds + ∫_{t−h̄}^{t−d(t)} ηᵀ(s)Q4η(s)ds + ∫_{t−h̄}^{t−h1} ηᵀ(s)Q5η(s)ds   (18)
V4(t, xt) := h̄ ∫_{−h̄}^{0} ∫_{t+θ}^{t} ẋᵀ(s)R1ẋ(s)ds dθ + h1 ∫_{−h1}^{0} ∫_{t+θ}^{t} ẋᵀ(s)R2ẋ(s)ds dθ   (19)

where

ς(t) := col{x(t), ∫_{t−τ1(t)}^{t} x(s)ds, ∫_{t−h1}^{t−τ1(t)} x(s)ds, ∫_{t−d(t)}^{t−τ1(t)} x(s)ds, ∫_{t−h̄}^{t−d(t)} x(s)ds}.
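Before stating the stability conditions, the bound of Lemma 2 can be sanity-checked numerically for a concrete feasible triple (M, N, S). The matrices below are illustrative assumptions, not values from the paper; the side conditions of the lemma are verified first, then (12) is tested over random θ, χ1, χ2.

```python
import numpy as np

# Numerical spot-check of the extended reciprocally convex inequality (12).
n = 2
I = np.eye(n)
Q1, Q2 = 2 * I, 2 * I
M, N, S = I, I, 0.5 * I

# Side conditions of Lemma 2: both block matrices positive semidefinite.
B1 = np.block([[Q1 - M, S], [S.T, Q2]])
B2 = np.block([[Q1, S], [S.T, Q2 - N]])
feasible = (min(np.linalg.eigvalsh(B1)) >= 0
            and min(np.linalg.eigvalsh(B2)) >= 0)

rng = np.random.default_rng(1)
holds = feasible
for _ in range(1000):
    th = rng.uniform(0.01, 0.99)
    x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    lhs = x1 @ Q1 @ x1 / th + x2 @ Q2 @ x2 / (1 - th)        # G(theta)
    rhs = (x1 @ (Q1 + (1 - th) * M) @ x1
           + x2 @ (Q2 + th * N) @ x2 + 2 * x1 @ S @ x2)      # right side of (12)
    holds = holds and lhs >= rhs - 1e-9
```

With M = N = 0 the same loop reproduces the classical bound (13), which shows how the extra slack matrices tighten the estimate.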

Now we state and establish the following result for Case 1.

Proposition 1: Consider Case 1. For given scalars h1, h2, μ11, μ12, μ21, and μ22 satisfying (4), the system (8) subject to (9) and (10) is globally asymptotically stable if there exist real matrices P > 0, Ti = diag{ti1, ti2, . . . , tin} > 0, Qi > 0, R1 > 0, R2 > 0, Λj = diag{σj1, σj2, . . . , σjn} > 0 (i = 1, 2, . . . , 5, j = 1, 2), M1, N1, S1, S2, S3, S4, Y1, Y2, and Y3 with appropriate dimensions, and a scalar ε > 0 such that

[R2 − M1, S1; ∗, R2] ≥ 0, [R2, S1; ∗, R2 − N1] ≥ 0   (20)
[R1, Sk; ∗, R1] ≥ 0 (k = 2, 3, 4)   (21)
Φ(τ1(t), τ2(t), τ̇1(t), τ̇2(t)) < 0, τ1(t) ∈ {0, h1}, τ2(t) ∈ {0, h2}, τ̇1(t) ∈ {μ11, μ12}, τ̇2(t) ∈ {μ21, μ22}.   (22)

Proposition 2: Consider Case 2. For given scalars h1, h2, μ11, and μ12 satisfying (5), the system (8) subject to (9) and (10) is globally asymptotically stable if there exist real matrices P > 0, Ti = diag{ti1, ti2, . . . , tin} > 0, Qj > 0, Rr > 0, Λr = diag{σr1, σr2, . . . , σrn} > 0, M1, N1, Sj, Y1, Y2, and Y3 (i = 1, . . . , 5, j = 1, . . . , 4, r = 1, 2) with appropriate dimensions, and a scalar ε > 0 such that (20), (21) and

Φ(τ1(t), τ2(t), τ̇1(t)) < 0, τ1(t) ∈ {0, h1}, τ2(t) ∈ {0, h2}, τ̇1(t) ∈ {μ11, μ12}.   (43)

Proposition 3: Consider Case 3. For given scalars h1 > 0 and h2 > 0, the system (8) subject to (9) and (10) is globally asymptotically stable if there exist real matrices P > 0, Ti = diag{ti1, ti2, . . . , tin} > 0 (i = 1, . . . , 5), Qr > 0, Rr > 0, Λr = diag{σr1, σr2, . . . , σrn} > 0 (r = 1, 2), M1, N1, Y1, Y2, Y3, and Sj (j = 1, . . . , 4) with appropriate dimensions, and a scalar ε > 0 such that (20), (21) and

ϒ(τ1(t), τ2(t)) < 0, τ1(t) ∈ {0, h1}, τ2(t) ∈ {0, h2}.   (45)

Proposition 4: Consider Case 1. For given scalars h1, h2, μ11, μ12, μ21, and μ22 satisfying (4), λ1 and λ2, the state estimation problem for the system (1) subject to (2) and (3) is solvable if there exist real matrices P > 0, Qi > 0, Ti = diag{ti1, ti2, . . . , tin} > 0 (i = 1, . . . , 5), Rj > 0, Λj = diag{σj1, σj2, . . . , σjn} > 0 (j = 1, 2), M1, N1, S1, S2, S3, S4, Y1, and Y with appropriate dimensions, and a scalar ε > 0 such that (20)–(22), where Φ(τ1(t), τ2(t), τ̇1(t), τ̇2(t)) is replaced with Φ̃(τ1(t), τ2(t), τ̇1(t), τ̇2(t)) given by

Φ̃(τ1(t), τ2(t), τ̇1(t), τ̇2(t)) := [Ω11, Φ̃12, −Y; ∗, Φ̃22, −Γ̃1Y; ∗, ∗, −εI]   (46)

where

Φ̃12 := D0ᵀPD2 + (L1Λ1 + L2Λ2)ᵀe1 + Γ̃1ᵀΓ̃2 − Y1ᵀΓ̃1
Φ̃22 := Ω1 + Ω2 − Ω3 + Γ̃1ᵀΓ̃2 + Γ̃2ᵀΓ̃1 + εe1ᵀI1ᵀFᵀFI1e1
Γ̃1 := λ1e1ᵀI1ᵀ + λ2e3ᵀI2ᵀ   (47)
Γ̃2 := [−(Y1A + YC)I1 + Y1W0I2]e1 + Y1W1I2e5   (48)

and the other notations are the same as those in Proposition 1. Moreover, the Luenberger estimator gain K is given by K = Y1⁻¹Y.

Proof: We prove the conclusion based on Proposition 1. If the matrix inequalities in (22) hold, one has Ω11 < 0, which leads to Y1 + Y1ᵀ > 0. Thus, Y1 is nonsingular. Let Y = Y1K, Y2 = λ1Y1, and Y3 = λ2Y1. Then Γ1K = Γ̃1Y and Γ1Γ2 = Γ̃1Γ̃2, where Γ̃1 and Γ̃2 are defined in (47) and (48), respectively. Therefore, if the LMIs in Proposition 4 are satisfied, so are the matrix inequalities in Proposition 1, which implies that the error system (8) is globally asymptotically stable. Hence, provided that the conditions in Proposition 4 are satisfied, the state estimation problem of the neural network (1) is solvable through the Luenberger estimator (7) with K = Y1⁻¹Y.

Similar to the proof of Proposition 4, the following results are readily derived.

Proposition 5: Consider Case 2. For given scalars h1, h2, μ11, and μ12 satisfying (5), λ1 and λ2, the state estimation problem for the system (1) subject to (2) and (3) is solvable if there exist real matrices P > 0, Ti = diag{ti1, ti2, . . . , tin} > 0, Qj > 0, Rr > 0, Λr = diag{σr1, σr2, . . . , σrn} > 0, M1, N1, Sj (i = 1, . . . , 5, j = 1, . . . , 4, r = 1, 2), Y1 and Y with appropriate dimensions, and a scalar ε > 0 such that (20), (21), and (43), where Φ(τ1(t), τ2(t), τ̇1(t)) is replaced with Φ̃(τ1(t), τ2(t), τ̇1(t)) given by

Φ̃(τ1(t), τ2(t), τ̇1(t)) := [Ω11, Φ̃12, −Y; ∗, Φ̃22, −Γ̃1Y; ∗, ∗, −εI]   (49)

where

Φ̃12 := B0ᵀPB2 + (L1Λ1 + L2Λ2)ᵀe1 + Γ̃1ᵀΓ̃2 − Y1ᵀΓ̃1
Φ̃22 := Ω4 + Ω5 − Ω3 + Γ̃1ᵀΓ̃2 + Γ̃2ᵀΓ̃1 + εe1ᵀI1ᵀFᵀFI1e1

and the other notations are the same as those in Propositions 2 and 4. Moreover, the Luenberger estimator gain K is given by K = Y1⁻¹Y.

Proposition 6: Consider Case 3. For given scalars h1 > 0, h2 > 0, λ1 and λ2, the state estimation problem for the system (1) subject to (2), (3), and (6) is solvable if there exist real matrices P > 0, Ti = diag{ti1, ti2, . . . , tin} > 0 (i = 1, . . . , 5), Qr > 0, Rr > 0, Λr = diag{σr1, σr2, . . . , σrn} > 0 (r = 1, 2), M1, N1, Sj (j = 1, . . . , 4), Y1 and Y with appropriate dimensions, and a scalar ε > 0 such that (20), (21), and (45), where ϒ(τ1(t), τ2(t)) is replaced with ϒ̃(τ1(t), τ2(t)) given by

ϒ̃(τ1(t), τ2(t)) := [Ω11, ϒ̃12, −Y; ∗, ϒ̃22, −Γ̃1Y; ∗, ∗, −εI]   (50)

where

ϒ̃12 := A0ᵀPA2 + (L1Λ1 + L2Λ2)ᵀe1 + Γ̃1ᵀΓ̃2 − Y1ᵀΓ̃1
ϒ̃22 := Ω6 + Ω7 − Ω3 + Γ̃1ᵀΓ̃2 + Γ̃2ᵀΓ̃1 + εe1ᵀI1ᵀFᵀFI1e1

and the other notations are the same as those in Propositions 3 and 4. Moreover, the Luenberger estimator gain K is given by K = Y1⁻¹Y.

Remark 5: From the proof of Proposition 4, the introduction of Y1, Y2, and Y3 plays a key role in solving the state estimation problem for the neural network. By setting Y2 = λ1Y1 and Y3 = λ2Y1, desired Luenberger estimators can be designed by tuning the two parameters λ1 and λ2. This method has previously been used to design state feedback controllers for linear systems with time-varying delay in [42], where it is shown [42, Remark 2] that less conservative results can be obtained even if one sets Y2 = Y1 (there is no Y3 therein). On the other hand, as stated in Remark 3, if we replace ẋ(t) in (31), (32), and (34) with Γ2ξ(t) − K℘(t, x(t)) to formulate Propositions 4–6, the gain matrix K becomes coupled with the matrix variables P, R1, R2, Λ1, and Λ2, which makes it quite difficult to solve for K by the same linearization method as above, because Λ1 and Λ2 are diagonal while P, R1, and R2 are not.

Remark 6: Propositions 4–6 depend on two tuning parameters λ1 and λ2. Based on some numerical optimization algorithms, such as fmincon in the Optimization Toolbox,


one can find an optimized combination of them. Moreover, it should be mentioned that the proposed criteria are obtained using the Wirtinger-based integral inequality. If one instead uses the Bessel–Legendre inequality with N = 2 or 3 [43], one can derive less conservative results, which, however, require higher computational complexity.

Remark 7: Propositions 4–6 present an LMI-based method to solve the state estimation problem for the neural network with two additive time-varying delay components in three cases. When the bounds of both τ̇1(t) and τ̇2(t) are known, one can use Proposition 4 to design desired Luenberger estimators. If only the bounds of τ̇1(t) are available, Proposition 5 may solve the state estimation problem. In the case that no bounds on τ̇1(t) and τ̇2(t) are available, Proposition 6 is a good choice to design suitable Luenberger estimators for the neural network under study.

V. APPLICATION TO STATE ESTIMATION OF NEURAL NETWORKS WITH A SINGLE TIME-VARYING DELAY

If the two time-varying delays τ1(t) and τ2(t) are lumped together as d(t) := τ1(t) + τ2(t), the neural network model described by (1) reduces to

u̇(t) = −Au(t) + W0 g(u(t)) + W1 g(u(t − d(t))) + J
y(t) = Cu(t) + ϕ(t, u(t))   (51)

which is a neural network model with a single time-varying delay. Specifically, setting τ2(t) = h2 gives τ̇2(t) = 0 and d(t) = h2 + τ1(t), where τ1(t) satisfies (4). Then

h2 ≤ d(t) ≤ h̄, μ11 ≤ ḋ(t) ≤ μ12   (52)

which means that d(t) is an interval time-varying delay. Stability analysis and state estimation issues for the neural network model (51), (52) are well studied in the open literature. It should be pointed out that the results proposed in this paper can be easily applied to neural networks described by (51) and (52). To proceed, the Luenberger estimator is adapted as

û̇(t) = −Aû(t) + W0 g(û(t)) + W1 g(û(t − d(t))) + J + K[y(t) − Cû(t) − ϕ(t, û(t))].   (53)

Thus, the error system of (51) and (53) is given as

ẋ(t) = −AK x(t) + W0 f(x(t)) + W1 f(x(t − d(t))) − K℘(t, x(t)).   (54)

Based on Propositions 1 and 4, we have the following corollaries.

Corollary 1: For given scalars h2, h̄, μ11, and μ12, the system (54) subject to (9), (10), and (52) is globally asymptotically stable if there exist real matrices P > 0, Qi > 0, Ti = diag{ti1, ti2, . . . , tin} > 0 (i = 1, 2, . . . , 5), R1 > 0, R2 > 0, Λj = diag{σj1, σj2, . . . , σjn} > 0 (j = 1, 2), M1, N1, S1, S2, S3, S4, Y1, Y2, and Y3 with appropriate dimensions, and a scalar ε > 0 such that (20), (21) and

Φ(τ1(t), h2, τ̇1(t), 0) < 0, τ1(t) ∈ {0, h̄ − h2}, τ̇1(t) ∈ {μ11, μ12}.   (55)

Corollary 2: For given scalars h2, h̄, μ11, μ12, λ1 and λ2, the state estimation problem for the system (51) subject to (2), (3), and (52) is solvable if there exist real matrices P > 0, Qi > 0, Ti = diag{ti1, ti2, . . . , tin} > 0 (i = 1, . . . , 5), Rj > 0, Λj = diag{σj1, σj2, . . . , σjn} > 0 (j = 1, 2), M1, N1, S1, S2, S3, S4, Y1 and Y with appropriate dimensions, and a scalar ε > 0 such that (20), (21) and

Φ̃(τ1(t), h2, τ̇1(t), 0) < 0, τ1(t) ∈ {0, h̄ − h2}, τ̇1(t) ∈ {μ11, μ12}   (56)

where Φ̃ is defined in (46).

VI. NUMERICAL EXAMPLES

Example 1: Consider the neural network (1) subject to (2) and (3), where

A = [2 0; 0 2], W0 = [1 1; −1 −1], W1 = [0.88 1; 1 1]
C = [1 1], ϕ(t, u(t)) = 0.7(sin u1(t) + sin u2(t))
g(u) = col{g1(u1), g2(u2)}
g1(u1) = 0.2(|u1 + 1| − |u1 − 1|)
g2(u2) = 0.4(|u2 + 1| − |u2 − 1|).   (57)

First, notice that the stability of Example 1 is well studied in the open literature. To show the effectiveness of the proposed method, we compare Proposition 1 with some recent results proposed in [30]–[33], [35], and [44]. Suppose that the time-varying delays τ1(t) and τ2(t) satisfy (4) with μ12 = −μ11 = μ1 > 0 and μ22 = −μ21 = μ2 > 0. We then calculate the admissible maximum upper bound (AMUB) of h2 for h1 ∈ {0.8, 1.0, 1.2}, μ1 = 0.7, and μ2 = 0.1. Table I lists the corresponding results obtained by [44, Th. 1], [30, Th. 1], [31, Th. 1], [32, Th. 1], [35, Th. 3.1], [33, Corollary 3.1], and Proposition 1 with K = 0 in this paper. It is clear that Proposition 1 with K = 0 yields larger AMUBs of h2 than those in [30]–[33], [35], and [44] for this example.

Second, we design suitable Luenberger estimators for the three cases of the two time-varying delays.

Case 1: Both τ1(t) and τ2(t) are differentiable and satisfy (4) with h1 = 1.2, h2 = 2.2655, μ12 = −μ11 = 0.7, and μ22 = −μ21 = 0.1. In this case, applying Proposition 4 with


λ1 = 1.95 and λ2 = 0.75, it is found that the state estimation problem is solvable and the corresponding Luenberger estimator gain is K = col{1.3171, 0.2712}. With this estimator, together with τ1(t) = 0.5 + 0.7 sin t and τ2(t) = 2.1655 + 0.1 cos t, the error state x(t) is shown in Fig. 1, where the system initial state is u0 = col{0.5, −0.5}.

Case 2: τ1(t) is differentiable while τ2(t) is continuous but not differentiable. Suppose that τ1(t) = 0.8 + 0.2 sin t and τ2(t) = 0.6|cos t|. Then h1 = 1, h2 = 0.6, and μ12 = −μ11 = 0.2. Applying Proposition 5 with λ1 = 1.5 and λ2 = 1, the Luenberger estimator of the form (7) can be found with K = col{0.101, −0.0135}. Connecting this estimator with the error system (8), the error state responses with the initial condition u0 = col{0.5, −0.5} are depicted in Fig. 2.

Case 3: Neither τ1(t) nor τ2(t) is differentiable. In this case, we assume that τ1(t) = 0.6|sin t| and τ2(t) = 0.5(1 + |cos t|). Then h1 = 0.6 and h2 = 1. By Proposition 6 with λ1 = 1.5 and λ2 = 1, the estimator gain can be calculated as K = col{0.1413, −0.0149}, based on which the error state responses with the initial condition u0 = col{1, −0.2} are illustrated in Fig. 3.

From Figs. 1–3, one can see that the obtained estimators for the three cases all produce good estimates of the neuronal states of the neural network under study.

Fig. 1. Error state responses x(t) for Case 1 in Example 1.
Fig. 2. Error state responses x(t) for Case 2 in Example 1.
Fig. 3. Error state responses x(t) for Case 3 in Example 1.
TABLE II. Achieved AMUBs of h̄ for various μ (μ12 = −μ11 = μ with μ being a constant) for Example 2.

Example 2: Consider the neural network (51) subject to (2), (3), and (52), where C = I, ϕ(t, u(t)) = 0.4 cos(u(t)),

A = diag{1.2769, 0.6231, 0.9230, 0.4480}
W0 = [−0.0373 0.4852 −0.3351 0.2336; −1.6033 0.5988 −0.3224 1.2352; 0.3394 −0.0860 −0.3824 −0.5785; −0.1311 0.3253 −0.9534 −0.5015]
W1 = [0.8674 −1.2405 −0.5325 0.0220; 0.0474 −0.9164 0.0360 0.9816; 1.8495 2.6117 −0.3788 0.8428; −2.0413 0.5179 1.1734 −0.2775]
ℓ1⁻ = −0.4, ℓ2⁻ = 0.1, ℓ3⁻ = 0, ℓ4⁻ = −0.3
ℓ1⁺ = 0.1137, ℓ2⁺ = 0.1279, ℓ3⁺ = 0.7994, ℓ4⁺ = 0.2368.
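Both examples rely on hand-picked values of (λ1, λ2). The fmincon-style tuning suggested in Remark 6 can be sketched generically as a grid search over the two parameters, maximizing the admissible delay bound by bisection. The feasibility predicate below is a toy stand-in so the sketch runs; in practice it would solve the LMIs of Propositions 4–6 (or Corollary 2) with an SDP solver, and all numeric values here are illustrative assumptions.

```python
import numpy as np

def estimator_lmis_feasible(lam1, lam2, h2):
    # Toy stand-in for an LMI feasibility test: feasibility degrades as the
    # delay bound h2 grows and depends smoothly on (lam1, lam2). Replace with
    # a real solver call in practice.
    return h2 <= 2.0 + np.exp(-((lam1 - 1.95) ** 2 + (lam2 - 0.75) ** 2))

def max_h2(lam1, lam2, h2_max=10.0, tol=1e-3):
    # Bisection for the largest h2 declared feasible at fixed (lam1, lam2).
    lo, hi = 0.0, h2_max
    if not estimator_lmis_feasible(lam1, lam2, lo):
        return 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if estimator_lmis_feasible(lam1, lam2, mid):
            lo = mid
        else:
            hi = mid
    return lo

# Grid search over the two tuning parameters, keeping the best pair.
best = max(
    ((l1, l2, max_h2(l1, l2))
     for l1 in np.linspace(0.5, 3.0, 11)
     for l2 in np.linspace(0.25, 1.5, 11)),
    key=lambda t: t[2],
)
```

The same loop structure applies regardless of which proposition supplies the feasibility test; only the predicate changes.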

For this example, the AMUB of h̄ for stability is calculated in [32] using several methods based on a novel Lyapunov–Krasovskii functional with a switched term. For comparison, we calculate the AMUBs for various μ ∈ {0.1, 0.5, 0.9} and h2 ∈ {0, 1}; the obtained AMUBs are listed in Table II. From this table, one can see clearly that Corollary 1 in this paper derives larger AMUBs of h̄ than the methods proposed in [32]. It should be mentioned that for μ = 0.9 the results obtained in [32] are conservative, especially for the case of h2 = 1, where the methods proposed in [32] fail to judge the stability of the neural network, while h̄ = 1.37 is derived by Corollary 1.

Next, we turn to the state estimation issue. Let ℓ1⁻ = ℓ2⁻ = ℓ3⁻ = ℓ4⁻ = 0. For h2 = 1 and μ = 0.9, employing Corollary 2 with λ1 = 1 and λ2 = 0.5, it is found that the state estimation problem is solvable for h̄ = 3.2 and the corresponding


Fig. 4. Neuronal state u1(t) and the estimation state û1(t) for Example 2.


Fig. 7. Neuronal state u4(t) and the estimation state û4(t) for Example 2.

u0 = col{0.5, 0.2, −0.2, −0.5}. From these figures, it is clear that the designed Luenberger estimator indeed provides a good estimate of the neuronal states of the neural network.

VII. CONCLUSION

Fig. 5. Neuronal state u2(t) and the estimation state û2(t) for Example 2.

The neuronal state estimation issue has been addressed for a class of neural networks with two additive time-varying delay components. Three cases of the two time-varying delays have been fully discussed, in which both delays are differentiable, or both are continuous, or one delay is differentiable while the other is continuous. By introducing an extended reciprocally convex inequality, sufficient conditions on the existence of suitable Luenberger estimators have been derived for the above three cases of time-varying delays, respectively. An LMI-based approach has been presented to design the desired Luenberger estimators. Two well-studied numerical examples have been given to illustrate the validity of the proposed method.

REFERENCES

Fig. 6. Neuronal state u3(t) and the estimation state û3(t) for Example 2.

estimator gain K is given by

K = [1.1702 0.2718 −0.2925 −1.1771; 0.2179 1.3483 0.3636 0.1490; 0.1543 0.4255 0.8466 −0.2601; −1.2334 −0.1890 −0.0819 4.3845].   (58)
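The gain (58) can be exercised in a quick Euler simulation of the network (51) and the estimator (53). This is a hedged sketch: the activations gi(s) = ℓi⁺ tanh(s), the delay d(t) = 1 + 2.2|sin(0.9t/2.2)| (kept inside [h2, h̄] = [1, 3.2]), J = 0, a constant pre-history, and û(0) = 0 are assumptions made so the sketch runs, not values quoted verbatim from the paper.

```python
import numpy as np

# Euler simulation of Example 2 with C = I and the estimator gain K of (58).
A = np.diag([1.2769, 0.6231, 0.9230, 0.4480])
W0 = np.array([[-0.0373,  0.4852, -0.3351,  0.2336],
               [-1.6033,  0.5988, -0.3224,  1.2352],
               [ 0.3394, -0.0860, -0.3824, -0.5785],
               [-0.1311,  0.3253, -0.9534, -0.5015]])
W1 = np.array([[ 0.8674, -1.2405, -0.5325,  0.0220],
               [ 0.0474, -0.9164,  0.0360,  0.9816],
               [ 1.8495,  2.6117, -0.3788,  0.8428],
               [-2.0413,  0.5179,  1.1734, -0.2775]])
K = np.array([[ 1.1702,  0.2718, -0.2925, -1.1771],
              [ 0.2179,  1.3483,  0.3636,  0.1490],
              [ 0.1543,  0.4255,  0.8466, -0.2601],
              [-1.2334, -0.1890, -0.0819,  4.3845]])
lp = np.array([0.1137, 0.1279, 0.7994, 0.2368])

g = lambda v: lp * np.tanh(v)                         # sector [0, l_i^+]
phi = lambda v: 0.4 * np.cos(v)                       # output nonlinearity
d = lambda t: 1.0 + 2.2 * abs(np.sin(0.9 * t / 2.2))  # delay in [1, 3.2]

dt, T = 0.002, 30.0
N = int(T / dt)
u = np.zeros((N + 1, 4))
uh = np.zeros((N + 1, 4))                             # u_hat(0) = 0
u[0] = [0.5, 0.2, -0.2, -0.5]
for k in range(N):
    t = k * dt
    kd = max(0, k - int(round(d(t) / dt)))            # delayed sample
    y = u[k] + phi(u[k])                              # y = Cu + phi, C = I
    u[k + 1] = u[k] + dt * (-A @ u[k] + W0 @ g(u[k]) + W1 @ g(u[kd]))
    uh[k + 1] = uh[k] + dt * (-A @ uh[k] + W0 @ g(uh[k]) + W1 @ g(uh[kd])
                              + K @ (y - uh[k] - phi(uh[k])))
err0 = np.linalg.norm(u[0] - uh[0])
errT = np.linalg.norm(u[-1] - uh[-1])
```

The estimation error shrinks over the horizon, consistent with the behavior reported in Figs. 4–7.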

Connecting the obtained Luenberger estimator, the neuronal states and the estimated states are plotted in Figs. 4–7, where the activation functions are chosen as gi(ui(t)) = ℓi⁺ tanh(ui(t)) (i = 1, 2, 3, 4), the time-varying delay is d(t) = 1 + 2.2 sin(0.9t/2.2), and the initial condition is given as

[1] S. Wen, Z.-G. Zeng, T. Huang, Q. Meng, and W. Yao, “Lag synchronization of switched neural networks via neural activation function and applications in image encryption,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 7, pp. 1493–1502, Jul. 2015. [2] J. Wang, X.-M. Zhang, and Q.-L. Han, “Event-triggered generalized dissipativity filtering for neural networks with time-varying delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 27, no. 1, pp. 77–88, Jan. 2016. [3] W.-H. Chen, S. Luo, and W. X. Zheng, “Generating globally stable periodic solutions of delayed neural networks with periodic coefficients via impulsive control,” IEEE Trans. Cybern., to be published, doi: 10.1109/TCYB.2016.2552383. [4] P. P. San, S. H. Ling, Nuryani, and H. Nguyen, “Evolvable rough-blockbased neural network and its biomedical application to hypoglycemia detection system,” IEEE Trans. Cybern., vol. 44, no. 8, pp. 1338–1349, Aug. 2014. [5] Z. Wang, D. W. C. Ho, and X. Liu, “State estimation for delayed neural networks,” IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 279–284, Jan. 2005. [6] X.-M. Zhang and Q.-L. Han, “New Lyapunov–Krasovskii functionals for global asymptotic stability of delayed neural networks,” IEEE Trans. Neural Netw., vol. 20, no. 3, pp. 533–539, Mar. 2009. [7] L. Zhang, Y. Zhu, and W. X. Zheng, “State estimation of discrete-time switched neural networks with multiple communication channels,” IEEE Trans. Cybern., vol. 47, no. 4, pp. 1028–1040, Apr. 2017. [8] X.-M. Zhang and Q.-L. Han, “Network-based H∞ filtering using a logic jumping-like trigger,” Automatica, vol. 49, no. 5, pp. 1428–1435, May 2013.


[9] Y. Luo, Z. Wang, G. Wei, F. Alsaadi, and T. Hayat, "State estimation for a class of artificial neural networks with stochastically corrupted measurements under Round-Robin protocol," Neural Netw., vol. 77, pp. 70–79, May 2016.

[10] L. Zou, Z. Wang, H. Gao, and X. Liu, "State estimation for discrete-time dynamical networks with time-varying delays and stochastic disturbances under the Round-Robin protocol," IEEE Trans. Neural Netw. Learn. Syst., to be published, doi: 10.1109/TNNLS.2016.2524621.

[11] X. Liu and J. Cao, "Robust state estimation for neural networks with discontinuous activations," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 6, pp. 1425–1437, Dec. 2010.

[12] H. Shen, Y. Zhu, L. Zhang, and J. H. Park, "Extended dissipative state estimation for Markov jump neural networks with unreliable links," IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 2, pp. 346–358, Feb. 2017.

[13] X.-M. Zhang and Q.-L. Han, "State estimation for static neural networks with time-varying delays based on an improved reciprocally convex inequality," IEEE Trans. Neural Netw. Learn. Syst., to be published, doi: 10.1109/TNNLS.2017.2661862.

[14] Y. He, Q.-G. Wang, M. Wu, and C. Lin, "Delay-dependent state estimation for delayed neural networks," IEEE Trans. Neural Netw., vol. 17, no. 6, pp. 1077–1081, Jun. 2006.

[15] H.-B. Zeng, Y. He, M. Wu, and H.-Q. Xiao, "Improved conditions for passivity of neural networks with a time-varying delay," IEEE Trans. Cybern., vol. 44, no. 6, pp. 785–792, Jun. 2014.

[16] P. Liu, Z.-G. Zeng, and J. Wang, "Multistability analysis of a general class of recurrent neural networks with non-monotonic activation functions and time-varying delays," Neural Netw., vol. 79, pp. 117–127, Jul. 2016.

[17] P. Jiang, Z.-G. Zeng, and J. Chen, "Almost periodic solutions for a memristor-based neural networks with leakage, time-varying and distributed delays," Neural Netw., vol. 68, pp. 34–45, Aug. 2015.

[18] X.-M. Zhang and Q.-L. Han, "Global asymptotic stability analysis for delayed neural networks using a matrix-based quadratic convex approach," Neural Netw., vol. 54, pp. 57–69, Jun. 2014.

[19] Z. Guo, J. Wang, and Z. Yan, "Passivity and passification of memristor-based recurrent neural networks with time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 11, pp. 2099–2109, Nov. 2014.

[20] P. Balasubramaniam and M. S. Ali, "Robust exponential stability of uncertain fuzzy Cohen–Grossberg neural networks with time-varying delays," Fuzzy Sets Syst., vol. 161, no. 4, pp. 608–618, Feb. 2010.

[21] C. V. Rao, J. B. Rawlings, and D. Q. Mayne, "Constrained state estimation for nonlinear discrete-time systems: Stability and moving horizon approximations," IEEE Trans. Autom. Control, vol. 48, no. 2, pp. 246–258, Feb. 2003.

[22] D. Zhang, Q.-L. Han, and X. Jia, "Network-based output tracking control for a class of T–S fuzzy systems that cannot be stabilized by nondelayed output feedback controllers," IEEE Trans. Cybern., vol. 45, no. 8, pp. 1511–1524, Aug. 2015.

[23] D. Zhang, Q.-L. Han, and X. Jia, "Network-based output tracking control for T–S fuzzy systems using an event-triggered communication scheme," Fuzzy Sets Syst., vol. 273, pp. 26–48, Aug. 2015.

[24] L. Zhang, Z. Ning, and Z. Wang, "Distributed filtering for fuzzy time-delay systems with packet dropouts and redundant channels," IEEE Trans. Syst., Man, Cybern., Syst., vol. 46, no. 4, pp. 559–572, Apr. 2016.

[25] Z. Guo, S. Yang, and J. Wang, "Global exponential synchronization of multiple memristive neural networks with time delay via nonlinear coupling," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 6, pp. 1300–1311, Jun. 2015.

[26] J. Lian and J. Wang, "Passivity of switched recurrent neural networks with time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 2, pp. 357–366, Feb. 2015.

[27] X.-M. Zhang and Q.-L. Han, "Global asymptotic stability for a class of generalized neural networks with interval time-varying delay," IEEE Trans. Neural Netw., vol. 22, no. 8, pp. 1180–1192, Aug. 2011.

[28] X.-M. Zhang and Q.-L. Han, "Output feedback stabilization of networked control systems with a logic zero-order-hold," Inf. Sci., vol. 381, pp. 78–91, Mar. 2017.

[29] X.-M. Zhang and Q.-L. Han, "Event-triggered H∞ control for a class of nonlinear networked control systems using novel integral inequalities," Int. J. Robust Nonlin. Control, vol. 27, no. 4, pp. 679–700, Mar. 2017.

[30] Y. Zhao, H. Gao, and S. Mou, "Asymptotic stability analysis of neural networks with successive time delay components," Neurocomputing, vol. 71, nos. 13–15, pp. 2848–2856, Aug. 2008.

[31] H. Shao and Q.-L. Han, "New delay-dependent stability criteria for neural networks with two additive time-varying delay components," IEEE Trans. Neural Netw., vol. 22, no. 5, pp. 812–818, May 2011.


[32] C.-K. Zhang, Y. He, L. Jiang, Q. H. Wu, and M. Wu, "Delay-dependent stability criteria for generalized neural networks with two delay components," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1263–1276, Jul. 2014.

[33] Y. Liu, S. Lee, and H. Lee, "Robust delay-dependent stability criteria for uncertain neural networks with two additive time-varying delay components," Neurocomputing, vol. 151, pp. 770–775, Mar. 2015.

[34] X.-M. Zhang and Q.-L. Han, "A decentralized event-triggered dissipative control scheme for systems with multiple sensors to sample the system outputs," IEEE Trans. Cybern., vol. 46, no. 12, pp. 2754–2757, Dec. 2016.

[35] R. Rakkiyappan, R. Sivasamy, J. H. Park, and T. H. Lee, "An improved stability criterion for generalized neural networks with additive time-varying delays," Neurocomputing, vol. 171, pp. 615–624, Jan. 2016.

[36] X.-M. Zhang and Q.-L. Han, "Abel lemma-based finite-sum inequality and its application to stability analysis for linear discrete time-delay systems," Automatica, vol. 57, pp. 199–202, Jul. 2015.

[37] H. Gao, T. Chen, and J. Lam, "A new delay system approach to network-based control," Automatica, vol. 44, no. 1, pp. 39–52, Jan. 2008.

[38] A. Seuret and F. Gouaisbaut, "Wirtinger-based integral inequality: Application to time-delay systems," Automatica, vol. 49, no. 9, pp. 2860–2866, Sep. 2013.

[39] P. G. Park, J. W. Ko, and J. W. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica, vol. 47, no. 1, pp. 235–238, Jan. 2011.

[40] A. Seuret and F. Gouaisbaut, "Delay-dependent reciprocally convex combination lemma," Rapport LAAS no. 16006, 2016.

[41] X.-M. Zhang and Q.-L. Han, "Event-based H∞ filtering for sampled-data systems," Automatica, vol. 51, pp. 55–69, Jan. 2015.

[42] X.-M. Zhang, M. Li, M. Wu, and J.-H. She, "Further results on stability and stabilisation of linear systems with state and input delays," Int. J. Syst. Sci., vol. 40, no. 1, pp. 1–10, Jan. 2009.

[43] A. Seuret and F. Gouaisbaut, "Complete quadratic Lyapunov functionals using Bessel–Legendre inequality," in Proc. Eur. Control Conf., Strasbourg, France, 2014, pp. 448–453.

[44] J. Tian and S. Zhong, "Improved delay-dependent stability criteria for neural networks with two additive time-varying delay components," Neurocomputing, vol. 77, no. 1, pp. 114–119, Feb. 2012.

Xian-Ming Zhang (M'16) received the M.Sc. degree in applied mathematics and the Ph.D. degree in control theory and engineering from Central South University, Changsha, China, in 1992 and 2006, respectively.

In 1992, he joined Central South University, where he was an Associate Professor with the School of Mathematics and Statistics. From 2007 to 2014, he was a Post-Doctoral Research Fellow and a Lecturer with the School of Engineering and Technology, Central Queensland University, Rockhampton, QLD, Australia. From 2014 to 2016, he was a Lecturer with the Griffith School of Engineering, Griffith University, Gold Coast, QLD, Australia. In 2016, he joined the Swinburne University of Technology, Melbourne, VIC, Australia, where he is currently a Senior Lecturer with the School of Software and Electrical Engineering. His current research interests include H-infinity filtering, event-triggered control systems, networked control systems, neural networks, distributed systems, and time-delay systems.

Dr. Zhang was a recipient of the National Natural Science Award (Level 2) in China in 2013 and the Hunan Provincial Natural Science Award (Level 1) in China in 2011, both jointly with Prof. M. Wu and Prof. Y. He, and the IET Premium Award in 2016, jointly with Prof. Q.-L. Han. He serves as an Associate Editor for the Journal of the Franklin Institute.


Qing-Long Han (M'09–SM'13) received the B.Sc. degree in mathematics from Shandong Normal University, Jinan, China, in 1983, and the M.Sc. and Ph.D. degrees in control engineering and electrical engineering from the East China University of Science and Technology, Shanghai, China, in 1992 and 1997, respectively.

From 1997 to 1998, he was a Post-Doctoral Research Fellow with the Laboratoire d'Automatique et d'Informatique Industrielle (currently, Laboratoire d'Informatique et d'Automatique pour les Systèmes), École Supérieure d'Ingénieurs de Poitiers (currently, École Nationale Supérieure d'Ingénieurs de Poitiers), Université de Poitiers, Poitiers, France. From 1999 to 2001, he was a Research Assistant Professor with the Department of Mechanical and Industrial Engineering, Southern Illinois University at Edwardsville, Edwardsville, IL, USA. From 2001 to 2014, he was a Laureate Professor, an Associate Dean (Research and Innovation) with the Higher Education Division, and the Founding Director of the Centre for Intelligent and Networked Systems, Central Queensland University, Rockhampton, QLD, Australia. From 2014 to 2016, he was a Deputy Dean (Research) with Griffith Sciences and a Professor with the Griffith School of Engineering, Griffith University, Gold Coast, QLD, Australia. In 2016, he joined the Swinburne University of Technology, Melbourne, VIC, Australia, where he is currently a Pro Vice-Chancellor (Research Quality) and a Distinguished Professor. In 2010, he was appointed the Chang Jiang (Yangtze River) Scholar Chair Professor by the Ministry of Education, China. His current research interests include networked control systems, neural networks, time-delay systems, multiagent systems, and complex dynamical systems.

Prof. Han was named one of the World's Most Influential Scientific Minds from 2014 to 2016 and received the Highly Cited Researcher Award in Engineering from Thomson Reuters. He is an Associate Editor of a number of international journals, including the IEEE Transactions on Industrial Electronics, the IEEE Transactions on Industrial Informatics, the IEEE Transactions on Cybernetics, and Information Sciences.

Zidong Wang (SM'03–F'14) was born in Jiangsu, China, in 1966. He received the B.Sc. degree in mathematics from Suzhou University, Suzhou, China, in 1986, and the M.Sc. degree in applied mathematics and the Ph.D. degree in electrical engineering from the Nanjing University of Science and Technology, Nanjing, China, in 1990 and 1994, respectively.

He is currently a Professor of Dynamical Systems and Computing with the Department of Information Systems and Computing, Brunel University London, Uxbridge, U.K. From 1990 to 2002, he held teaching and research appointments in universities in China, Germany, and the U.K. He has published over 300 papers in refereed international journals. He is a holder of the Alexander von Humboldt Research Fellowship of Germany, the JSPS Research Fellowship of Japan, and the William Mong Visiting Research Fellowship of Hong Kong. His current research interests include dynamical systems, signal processing, bioinformatics, and control theory and applications.

Prof. Wang serves (or has served) as the Editor-in-Chief for Neurocomputing and an Associate Editor for 12 international journals, including the IEEE Transactions on Automatic Control, the IEEE Transactions on Control Systems Technology, the IEEE Transactions on Neural Networks, the IEEE Transactions on Signal Processing, and the IEEE Transactions on Systems, Man, and Cybernetics—Systems. He is a fellow of the Royal Statistical Society and a member of the program committee for many international conferences.


Bao-Lin Zhang (M'13) was born in Ningxia, China, in 1972. He received the B.Sc. and M.Sc. degrees in applied mathematics from Ningxia University, Yinchuan, China, in 1995 and 1998, respectively, and the Ph.D. degree in physical oceanography from the Ocean University of China, Qingdao, China, in 2006.

In 1998, he joined Ludong University, Yantai, China, where he was a Lecturer with the School of Mathematics and Statistics Science. In 2006, he joined China Jiliang University, Hangzhou, China, where he is currently a Professor and the Dean with the College of Science. He was the Director with the Department of Mathematics, China Jiliang University, from 2010 to 2012, and an Assistant Dean with the College of Sciences from 2013 to 2016. From 2009 to 2010, he was a Visiting Professor with the Centre for Intelligent and Networked Systems, Central Queensland University, Rockhampton, QLD, Australia. From 2015 to 2016, he was a Visiting Professor with the Griffith School of Engineering, Griffith University, Gold Coast, QLD, Australia. His current research interests include networked control systems, time-delay systems, and active control for ocean engineering.

Prof. Zhang was a recipient of the New Century 151 Talent Program of Zhejiang Province in 2009.
