Existence of Periodic Solutions of BAM Neural Networks with Time-Varying Delays and Reaction-Diffusion Terms

Zhenjiang Zhao and Qiankun Song
Department of Mathematics, Huzhou Teachers College, Huzhou, Zhejiang 313000, China

Summary
In this paper, periodic solutions are discussed for a general class of bi-directional associative memory (BAM) neural networks with time-varying delays and reaction-diffusion terms. A simple and new sufficient condition is obtained that ensures the existence of periodic solutions of BAM neural networks with time-varying delays and reaction-diffusion terms; in the obtained result the activation functions are not required to be differentiable, bounded or monotone nondecreasing, nor are the weight-connection matrices required to be symmetric. In addition, an example is given to show the effectiveness of the obtained result.

Key words:
Periodic solutions, reaction-diffusion, bi-directional associative memory neural networks, time-varying delays

1. Introduction

It is well known that a series of neural networks related to bi-directional associative memory (BAM) have been proposed by Kosko [1-3]. The BAM model is more general and powerful than the Hopfield auto-associative memory and includes the Hopfield memory as a special case. A BAM can associate an input pattern with a different stored output pattern or a stored pattern pair, thus allowing bidirectional association. Therefore, this class of networks has good applications in the fields of pattern recognition and artificial intelligence [2]. The stability of BAM neural networks without and with delays has been studied extensively; we refer to Refs. [1-13] and the references cited therein.

Moreover, studies of neural dynamical systems involve not only stability properties but also other dynamic behaviors such as periodic oscillation, bifurcation and chaos. In many applications, the properties of periodic oscillatory solutions are of great interest. For example, the human brain operates in periodic oscillatory or chaotic states, so it is of prime importance to study periodic oscillation and chaos phenomena of neural networks. If the dynamical behavior of a BAM network depends only on time, the model is a system of ordinary differential equations; if the behavior depends not only on time but also on time delays, the model is a system of functional differential equations. Strictly speaking, however, the diffusion effect cannot be avoided in a neural network model when electrons move in an asymmetric electromagnetic field, so the activations must be considered to vary in space as well as in time. In [14-24], the authors have considered the stability of neural networks with diffusion terms, which are expressed by partial differential equations. To the best of our knowledge, the problem of periodic oscillatory solutions has seldom been considered for BAM networks with reaction-diffusion terms.

In this paper, we analyze further the problem of periodic solutions of BAM neural networks with time-varying delays and reaction-diffusion terms. By constructing a suitable Lyapunov functional and using some analysis techniques, we derive one simple sufficient condition ensuring the existence of periodic solutions, and we prove that all other solutions of the network converge exponentially to the periodic oscillatory solution. The activation functions are not required to be differentiable, bounded or monotone nondecreasing, nor are the weight-connection matrices required to be symmetric; hence the results improve and extend the earlier works in Refs. [8, 9, 11]. These results are of leading significance in the design and application of periodic oscillatory BAM networks with time delays and reaction-diffusion, and are of great interest in many applications.

In this paper, we consider the following BAM network with time-varying delays and reaction-diffusion terms, described by the partial differential equations
$$
\frac{\partial u_i(t,x)}{\partial t}=\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(a_{ik}\frac{\partial u_i(t,x)}{\partial x_k}\Big)-c_i u_i(t,x)+\sum_{j=1}^{p}p_{ij}f_j(\alpha_j v_j(t-\tau_{ij}(t),x))+I_i(t),
$$
$$
\frac{\partial v_j(t,x)}{\partial t}=\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(b_{jk}\frac{\partial v_j(t,x)}{\partial x_k}\Big)-d_j v_j(t,x)+\sum_{i=1}^{n}q_{ij}f_i(\beta_i u_i(t-\sigma_{ij}(t),x))+J_j(t),
\tag{1}
$$
where $x=(x_1,x_2,\ldots,x_m)^T\in\Omega\subset R^m$, $\Omega$ is a compact set with smooth boundary and $\operatorname{mes}\Omega>0$ in space $R^m$; $u=(u_1,u_2,\ldots,u_n)^T\in R^n$,
$v=(v_1,v_2,\ldots,v_p)^T\in R^p$; $c_i$, $d_j$, $\alpha_j$ and $\beta_i$ are positive constants, and $\alpha_j$, $\beta_i$ denote the neuronal gains associated with the neuronal activations, $i=1,2,\ldots,n$, $j=1,2,\ldots,p$; $a_{ik}=a_{ik}(t,x)\ge 0$ and $b_{jk}=b_{jk}(t,x)\ge 0$ are sufficiently smooth functions, $i=1,2,\ldots,n$, $j=1,2,\ldots,p$, $k=1,2,\ldots,m$; the delays $\tau_{ij}(t)$ and $\sigma_{ij}(t)$ satisfy $0\le\tau_{ij}(t)\le\tau_{ij}$, $0\le\sigma_{ij}(t)\le\sigma_{ij}$ and
$$
\frac{d\tau_{ij}(t)}{dt}\le 0,\qquad \frac{d\sigma_{ij}(t)}{dt}\le 0,\qquad i=1,2,\ldots,n,\ j=1,2,\ldots,p.
$$
The boundary conditions and initial conditions are given by
$$
\frac{\partial u_i}{\partial \tilde n}=\Big(\frac{\partial u_i}{\partial x_1},\frac{\partial u_i}{\partial x_2},\ldots,\frac{\partial u_i}{\partial x_m}\Big)^T=0,\quad i=1,2,\ldots,n,\qquad
\frac{\partial v_j}{\partial \tilde n}=\Big(\frac{\partial v_j}{\partial x_1},\frac{\partial v_j}{\partial x_2},\ldots,\frac{\partial v_j}{\partial x_m}\Big)^T=0,\quad j=1,2,\ldots,p,\quad x\in\partial\Omega,
\tag{2}
$$
$$
u_i(t,x)=\varphi_{ui}(t,x),\quad -\tau\le t\le 0,\quad i=1,2,\ldots,n,\qquad
v_j(t,x)=\varphi_{vj}(t,x),\quad -\sigma\le t\le 0,\quad j=1,2,\ldots,p,
\tag{3}
$$
in which $\tau=\max_{1\le i\le n,\,1\le j\le p}\tau_{ij}$, $\sigma=\max_{1\le i\le n,\,1\le j\le p}\sigma_{ij}$, and $\varphi_{ui}$, $\varphi_{vj}$ are bounded and continuous on $(-\infty,0]$.

Suppose further that the following assumption is satisfied:

(H) There exist positive constants $l_j$ ($j=1,2,\ldots,\max\{n,p\}$) such that $|f_j(\xi_1)-f_j(\xi_2)|\le l_j|\xi_1-\xi_2|$ for any $\xi_1,\xi_2\in R$.

The organization of this paper is as follows. In Section 2, we derive a new sufficient condition for checking the existence of periodic solutions of reaction-diffusion BAM neural networks with time-varying delays. In Section 3, we make some comparisons with the earlier references and give an example to illustrate the theory. In Section 4, we give concluding remarks on the derived results.
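Before turning to the analysis, it may help to see model (1)-(3) in computational form. The sketch below is a minimal explicit finite-difference/Euler discretization of a one-pair network (n = p = 1) on a one-dimensional domain with the zero-flux boundary condition (2); all concrete values (weights, gains, diffusion coefficients, the choice f = tanh) are assumptions made only for this illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative scheme for model (1)-(3) with n = p = 1 on Omega = [0, 1].
# All numerical values are assumed for the sketch, not taken from the paper.
c, d = 1.0, 1.2            # decay rates c_1, d_1
p11, q11 = 0.3, 0.2        # connection weights p_11, q_11
alpha, beta = 1.0, 1.0     # neuronal gains alpha_1, beta_1
a, b = 0.05, 0.05          # constant diffusion coefficients a_11, b_11
tau, sigma = 0.5, 0.5      # constant delay bounds tau_11, sigma_11
f = np.tanh                # a Lipschitz activation satisfying (H) with l = 1

M = 50
x = np.linspace(0.0, 1.0, M)
dx = x[1] - x[0]
dt = 1e-3                                  # explicit Euler step
steps = int(5 * 2 * np.pi / dt)            # integrate over five input periods
lag_v = int(round(tau / dt))               # history depth for v(t - tau_11)
lag_u = int(round(sigma / dt))             # history depth for u(t - sigma_11)

u_hist = np.tile(np.cos(np.pi * x), (lag_u + 1, 1))   # initial function phi_u on [-sigma, 0]
v_hist = np.tile(np.sin(np.pi * x), (lag_v + 1, 1))   # initial function phi_v on [-tau, 0]

def laplacian_neumann(w, dx):
    """Second difference with mirrored ghost points: zero-flux boundary, cf. (2)."""
    wp = np.concatenate(([w[1]], w, [w[-2]]))
    return (wp[2:] - 2.0 * wp[1:-1] + wp[:-2]) / dx**2

for k in range(steps):
    t = k * dt
    u, v = u_hist[0], v_hist[0]
    du = a * laplacian_neumann(u, dx) - c * u + p11 * f(alpha * v_hist[lag_v]) + np.sin(t)
    dv = b * laplacian_neumann(v, dx) - d * v + q11 * f(beta * u_hist[lag_u]) + np.cos(t)
    u_hist = np.vstack([u + dt * du, u_hist[:-1]])    # shift the history buffers
    v_hist = np.vstack([v + dt * dv, v_hist[:-1]])

# After the transient, the L2 norm settles onto a 2*pi-periodic orbit.
print("||u(T)||_2 ~", np.sqrt(np.sum(u_hist[0] ** 2) * dx))
```

The assumed weights were chosen so that condition (4) of Theorem 1 below holds, so the computed L2 norms settle onto a periodic orbit, in line with the result proved in Section 2.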
2. Existence of periodic solutions

In this section, we consider the periodic oscillatory solutions of model (1), in which the external inputs $I_i:R^+\to R$, $i=1,2,\ldots,n$, and $J_j:R^+\to R$, $j=1,2,\ldots,p$, are continuous periodic functions with period $\omega$, i.e. $I_i(t+\omega)=I_i(t)$, $J_j(t+\omega)=J_j(t)$. For convenience, we introduce a notation. For any $u(t,x)=(u_1(t,x),u_2(t,x),\ldots,u_n(t,x))^T\in R^n$, define
$$
\|u_i(t,x)\|_2=\Big[\int_\Omega |u_i(t,x)|^2\,dx\Big]^{1/2},\qquad i=1,2,\ldots,n.
$$

Theorem 1. For model (1), suppose that $f_j$ ($j=1,2,\ldots,\max\{n,p\}$) satisfy the hypothesis (H) above, and $I_i:R^+\to R$, $i=1,2,\ldots,n$, and $J_j:R^+\to R$, $j=1,2,\ldots,p$, are continuous periodic functions with period $\omega$. Then there exists exactly one $\omega$-periodic solution of model (1), and all other solutions of model (1) converge exponentially to it as $t\to\infty$, if
$$
c_i-\beta_i l_i\sum_{j=1}^{p}|q_{ij}|>0,\quad i=1,2,\ldots,n,\qquad
d_j-\alpha_j l_j\sum_{i=1}^{n}|p_{ij}|>0,\quad j=1,2,\ldots,p.
\tag{4}
$$
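Condition (4) involves only the parameters of model (1) and the Lipschitz constants from (H), so it can be checked mechanically. The routine below is one possible way to do this; the function name, argument layout and the sample 2-by-1 data are our own illustration, not part of the paper.

```python
import numpy as np

def check_condition_4(c, d, alpha, beta, l, P, Q):
    """Check condition (4) of Theorem 1 for model (1).

    c: (n,) decay rates c_i;        d: (p,) decay rates d_j
    alpha: (p,) gains alpha_j;      beta: (n,) gains beta_i
    l: (max(n, p),) Lipschitz constants from assumption (H)
    P: (n, p) weights p_ij;         Q: (n, p) weights q_ij
    """
    c, d = np.asarray(c, float), np.asarray(d, float)
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    l = np.asarray(l, float)
    n, p = P.shape
    lhs_u = c - np.asarray(beta) * l[:n] * np.abs(Q).sum(axis=1)   # c_i - beta_i l_i sum_j |q_ij|
    lhs_v = d - np.asarray(alpha) * l[:p] * np.abs(P).sum(axis=0)  # d_j - alpha_j l_j sum_i |p_ij|
    return bool((lhs_u > 0).all() and (lhs_v > 0).all()), lhs_u, lhs_v

# A made-up 2-by-1 network (n = 2, p = 1), only to show the call:
ok, lu, lv = check_condition_4(c=[1.0, 0.9], d=[1.5], alpha=[1.0], beta=[1.0, 1.0],
                               l=[1.0, 1.0], P=[[0.3], [0.2]], Q=[[0.4], [0.3]])
print(ok, lu, lv)
```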
Proof. Denote
$$
\Gamma=\{\varphi\mid \varphi=(\varphi_{u1},\ldots,\varphi_{un},\varphi_{v1},\ldots,\varphi_{vp})^T\},\qquad
\varphi_u=(\varphi_{u1},\ldots,\varphi_{un})^T,\qquad \varphi_v=(\varphi_{v1},\ldots,\varphi_{vp})^T,
$$
where $\varphi=\binom{\varphi_u}{\varphi_v}:[-\tau,0]\times R^m\to R^{n+p}$ is a continuous function, and $\Gamma$ is equipped with the topology of uniform convergence. For $\varphi\in\Gamma$, define
$$
\Big\|\binom{\varphi_u(s,x)}{\varphi_v(s,x)}\Big\|=\sup_{-\tau\le s\le 0}\|\varphi_u(s,x)\|+\sup_{-\sigma\le s\le 0}\|\varphi_v(s,x)\|,
$$
in which
$$
\|\varphi_u(s,x)\|=\sum_{i=1}^{n}\|\varphi_{ui}(s,x)\|_2,\qquad \|\varphi_v(s,x)\|=\sum_{j=1}^{p}\|\varphi_{vj}(s,x)\|_2;
$$
then $\Gamma$ is a Banach space.

For any $\binom{\varphi_u}{\varphi_v},\binom{\psi_u}{\psi_v}\in\Gamma$, we denote the solutions of model (1) through $\Big(\binom{0}{0},\binom{\varphi_u}{\varphi_v}\Big)$ and $\Big(\binom{0}{0},\binom{\psi_u}{\psi_v}\Big)$ as
$$
u(t,\varphi_u)=(u_1(t,\varphi_u),\ldots,u_n(t,\varphi_u))^T,\qquad v(t,\varphi_v)=(v_1(t,\varphi_v),\ldots,v_p(t,\varphi_v))^T,
$$
$$
u(t,\psi_u)=(u_1(t,\psi_u),\ldots,u_n(t,\psi_u))^T,\qquad v(t,\psi_v)=(v_1(t,\psi_v),\ldots,v_p(t,\psi_v))^T,
$$
respectively. Define
$$
u_t(\varphi_u)=u(t+\theta,\varphi_u),\quad \theta\in[-\tau,0],\ t\ge 0,\qquad
v_t(\varphi_v)=v(t+\theta,\varphi_v),\quad \theta\in[-\sigma,0],\ t\ge 0;
$$
then $\binom{u_t(\varphi_u)}{v_t(\varphi_v)}\in\Gamma$ for any $t\ge 0$. From Eq. (1), we get
$$
\frac{\partial(u_i(t,\varphi_u)-u_i(t,\psi_u))}{\partial t}
=\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(a_{ik}\frac{\partial(u_i(t,\varphi_u)-u_i(t,\psi_u))}{\partial x_k}\Big)
-c_i(u_i(t,\varphi_u)-u_i(t,\psi_u))
+\sum_{j=1}^{p}p_{ij}\big[f_j(\alpha_j v_j(t-\tau_{ij}(t),\varphi_v))-f_j(\alpha_j v_j(t-\tau_{ij}(t),\psi_v))\big],
\tag{5}
$$
$$
\frac{\partial(v_j(t,\varphi_v)-v_j(t,\psi_v))}{\partial t}
=\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(b_{jk}\frac{\partial(v_j(t,\varphi_v)-v_j(t,\psi_v))}{\partial x_k}\Big)
-d_j(v_j(t,\varphi_v)-v_j(t,\psi_v))
+\sum_{i=1}^{n}q_{ij}\big[f_i(\beta_i u_i(t-\sigma_{ij}(t),\varphi_u))-f_i(\beta_i u_i(t-\sigma_{ij}(t),\psi_u))\big].
\tag{6}
$$
For convenience, in this section we denote $u_i^{\varphi}=u_i(t,\varphi_u)$, $u_i^{\psi}=u_i(t,\psi_u)$, $v_j^{\varphi}=v_j(t,\varphi_v)$ and $v_j^{\psi}=v_j(t,\psi_v)$. Multiplying both sides of Eq. (5) by $u_i^{\varphi}-u_i^{\psi}$ and integrating, we get
$$
\frac{1}{2}\frac{d}{dt}\int_\Omega(u_i^{\varphi}-u_i^{\psi})^2dx
=\sum_{k=1}^{m}\int_\Omega(u_i^{\varphi}-u_i^{\psi})\frac{\partial}{\partial x_k}\Big(a_{ik}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)dx
-c_i\int_\Omega(u_i^{\varphi}-u_i^{\psi})^2dx
+\sum_{j=1}^{p}\int_\Omega(u_i^{\varphi}-u_i^{\psi})p_{ij}\big[f_j(\alpha_j v_j(t-\tau_{ij}(t),\varphi_v))-f_j(\alpha_j v_j(t-\tau_{ij}(t),\psi_v))\big]dx.
\tag{7}
$$
By the boundary condition (2), we get
$$
\begin{aligned}
\sum_{k=1}^{m}\int_\Omega(u_i^{\varphi}-u_i^{\psi})\frac{\partial}{\partial x_k}\Big(a_{ik}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)dx
&=\int_\Omega(u_i^{\varphi}-u_i^{\psi})\,\nabla\cdot\Big(a_{ik}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)_{k=1}^{m}dx\\
&=\int_\Omega\nabla\cdot\Big((u_i^{\varphi}-u_i^{\psi})a_{ik}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)_{k=1}^{m}dx
-\int_\Omega\Big(a_{ik}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)_{k=1}^{m}\cdot\nabla(u_i^{\varphi}-u_i^{\psi})\,dx\\
&=\int_{\partial\Omega}\Big((u_i^{\varphi}-u_i^{\psi})a_{ik}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)_{k=1}^{m}\cdot ds
-\sum_{k=1}^{m}\int_\Omega a_{ik}\Big(\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)^2dx\\
&=-\sum_{k=1}^{m}\int_\Omega a_{ik}\Big(\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)^2dx,
\end{aligned}
\tag{8}
$$
in which $\nabla=\Big(\frac{\partial}{\partial x_1},\frac{\partial}{\partial x_2},\ldots,\frac{\partial}{\partial x_m}\Big)^T$ is the gradient operator, and
$$
\Big(a_{ik}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_k}\Big)_{k=1}^{m}
=\Big(a_{i1}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_1},\ldots,a_{im}\frac{\partial(u_i^{\varphi}-u_i^{\psi})}{\partial x_m}\Big)^T.
$$
From (7), (8), assumption (H) and the Cauchy inequality, we have
$$
\frac{d}{dt}\|u_i^{\varphi}-u_i^{\psi}\|_2^2
\le -2c_i\|u_i^{\varphi}-u_i^{\psi}\|_2^2
+2\sum_{j=1}^{p}|p_{ij}|\alpha_j l_j\|u_i^{\varphi}-u_i^{\psi}\|_2\,\|v_j(t-\tau_{ij}(t),\varphi_v)-v_j(t-\tau_{ij}(t),\psi_v)\|_2,
$$
i.e.
$$
\frac{d}{dt}\|u_i^{\varphi}-u_i^{\psi}\|_2
\le -c_i\|u_i^{\varphi}-u_i^{\psi}\|_2
+\sum_{j=1}^{p}|p_{ij}|\alpha_j l_j\|v_j(t-\tau_{ij}(t),\varphi_v)-v_j(t-\tau_{ij}(t),\psi_v)\|_2.
\tag{9}
$$
Similarly, multiplying both sides of Eq. (6) by $v_j^{\varphi}-v_j^{\psi}$ and integrating, we get
$$
\frac{d}{dt}\|v_j^{\varphi}-v_j^{\psi}\|_2
\le -d_j\|v_j^{\varphi}-v_j^{\psi}\|_2
+\sum_{i=1}^{n}|q_{ij}|\beta_i l_i\|u_i(t-\sigma_{ij}(t),\varphi_u)-u_i(t-\sigma_{ij}(t),\psi_u)\|_2.
\tag{10}
$$
From Eq. (4), we can choose a small $\delta>0$ such that
$$
(c_i-\delta)-\beta_i l_i\sum_{j=1}^{p}|q_{ij}|e^{\delta\sigma_{ij}}>0,\quad i=1,2,\ldots,n,\qquad
(d_j-\delta)-\alpha_j l_j\sum_{i=1}^{n}|p_{ij}|e^{\delta\tau_{ij}}>0,\quad j=1,2,\ldots,p.
\tag{11}
$$
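The proof only uses the existence of some δ > 0 satisfying (11). Since the left-hand sides of (11) decrease monotonically in δ and reduce to condition (4) at δ = 0, such a δ can be located numerically, e.g. by bisection, as in the following sketch (the parameter layout and sample values are assumed for illustration only).

```python
import numpy as np

def margin(delta, c, d, alpha, beta, l, P, Q, tau, sigma):
    """Smallest left-hand side in (11) for a given delta (positive means (11) holds)."""
    n, p = np.shape(P)
    mu = (c - delta) - beta * l[:n] * (np.abs(Q) * np.exp(delta * sigma)).sum(axis=1)
    mv = (d - delta) - alpha * l[:p] * (np.abs(P) * np.exp(delta * tau)).sum(axis=0)
    return min(mu.min(), mv.min())

def find_delta(c, d, alpha, beta, l, P, Q, tau, sigma, iters=60):
    """Bisection: a delta in (0, min(c, d)) keeping the margin positive
    (returns None if even delta = 0, i.e. condition (4), fails)."""
    if margin(0.0, c, d, alpha, beta, l, P, Q, tau, sigma) <= 0:
        return None
    lo, hi = 0.0, min(np.min(c), np.min(d))          # delta < min(c_i, d_j) is necessary
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if margin(mid, c, d, alpha, beta, l, P, Q, tau, sigma) > 0 else (lo, mid)
    return lo

# Made-up 2-by-1 data (n = 2, p = 1), only to show the call:
c, d = np.array([1.0, 0.9]), np.array([1.5])
alpha, beta, l = np.array([1.0]), np.array([1.0, 1.0]), np.array([1.0, 1.0])
P, Q = np.array([[0.3], [0.2]]), np.array([[0.4], [0.3]])
tau, sigma = np.full((2, 1), 0.5), np.full((2, 1), 0.5)  # delay bounds tau_ij, sigma_ij
print(find_delta(c, d, alpha, beta, l, P, Q, tau, sigma))
```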
We consider the Lyapunov functional
$$
V(t)=\sum_{i=1}^{n}\Big[\|u_i^{\varphi}-u_i^{\psi}\|_2 e^{\delta t}
+\sum_{j=1}^{p}|p_{ij}|\alpha_j l_j\int_{t-\tau_{ij}(t)}^{t}\|v_j(s,\varphi_v)-v_j(s,\psi_v)\|_2\, e^{\delta(s+\tau_{ij})}ds\Big]
+\sum_{j=1}^{p}\Big[\|v_j^{\varphi}-v_j^{\psi}\|_2 e^{\delta t}
+\sum_{i=1}^{n}|q_{ij}|\beta_i l_i\int_{t-\sigma_{ij}(t)}^{t}\|u_i(s,\varphi_u)-u_i(s,\psi_u)\|_2\, e^{\delta(s+\sigma_{ij})}ds\Big].
\tag{12}
$$
Calculating the upper right Dini derivative $D^+V(t)$ of $V(t)$ along the solutions of Eqs. (5) and (6), and using Eqs. (9), (10), (11) and the assumptions on $\tau_{ij}(t)$, $\sigma_{ij}(t)$, we have
$$
\begin{aligned}
D^+V(t)\le{}&\sum_{i=1}^{n}\Big[(\delta-c_i)e^{\delta t}\|u_i^{\varphi}-u_i^{\psi}\|_2
+e^{\delta t}\sum_{j=1}^{p}|p_{ij}|\alpha_j l_j\|v_j(t-\tau_{ij}(t),\varphi_v)-v_j(t-\tau_{ij}(t),\psi_v)\|_2\\
&\qquad+\sum_{j=1}^{p}|p_{ij}|\alpha_j l_j\|v_j^{\varphi}-v_j^{\psi}\|_2\, e^{\delta(t+\tau_{ij})}
-\sum_{j=1}^{p}|p_{ij}|\alpha_j l_j\|v_j(t-\tau_{ij}(t),\varphi_v)-v_j(t-\tau_{ij}(t),\psi_v)\|_2\, e^{\delta(t-\tau_{ij}(t)+\tau_{ij})}\Big(1-\frac{d\tau_{ij}(t)}{dt}\Big)\Big]\\
&+\sum_{j=1}^{p}\Big[(\delta-d_j)e^{\delta t}\|v_j^{\varphi}-v_j^{\psi}\|_2
+e^{\delta t}\sum_{i=1}^{n}|q_{ij}|\beta_i l_i\|u_i(t-\sigma_{ij}(t),\varphi_u)-u_i(t-\sigma_{ij}(t),\psi_u)\|_2\\
&\qquad+\sum_{i=1}^{n}|q_{ij}|\beta_i l_i\|u_i^{\varphi}-u_i^{\psi}\|_2\, e^{\delta(t+\sigma_{ij})}
-\sum_{i=1}^{n}|q_{ij}|\beta_i l_i\|u_i(t-\sigma_{ij}(t),\varphi_u)-u_i(t-\sigma_{ij}(t),\psi_u)\|_2\, e^{\delta(t-\sigma_{ij}(t)+\sigma_{ij})}\Big(1-\frac{d\sigma_{ij}(t)}{dt}\Big)\Big]\\
\le{}& e^{\delta t}\sum_{i=1}^{n}\Big[(\delta-c_i)\|u_i^{\varphi}-u_i^{\psi}\|_2
+\sum_{j=1}^{p}|p_{ij}|\alpha_j l_j e^{\delta\tau_{ij}}\|v_j^{\varphi}-v_j^{\psi}\|_2\Big]
+e^{\delta t}\sum_{j=1}^{p}\Big[(\delta-d_j)\|v_j^{\varphi}-v_j^{\psi}\|_2
+\sum_{i=1}^{n}|q_{ij}|\beta_i l_i e^{\delta\sigma_{ij}}\|u_i^{\varphi}-u_i^{\psi}\|_2\Big]\\
={}& e^{\delta t}\sum_{i=1}^{n}\Big[(\delta-c_i)+\beta_i l_i\sum_{j=1}^{p}|q_{ij}|e^{\delta\sigma_{ij}}\Big]\|u_i^{\varphi}-u_i^{\psi}\|_2
+e^{\delta t}\sum_{j=1}^{p}\Big[(\delta-d_j)+\alpha_j l_j\sum_{i=1}^{n}|p_{ij}|e^{\delta\tau_{ij}}\Big]\|v_j^{\varphi}-v_j^{\psi}\|_2\\
\le{}& 0.
\end{aligned}
$$
Hence
$$
V(t)\le V(0),\qquad t\ge 0.
\tag{13}
$$
From Eq. (12), we know
$$
V(t)\ge e^{\delta t}\Big(\sum_{i=1}^{n}\|u_i^{\varphi}-u_i^{\psi}\|_2+\sum_{j=1}^{p}\|v_j^{\varphi}-v_j^{\psi}\|_2\Big),
$$
and
$$
\begin{aligned}
V(0)={}&\sum_{i=1}^{n}\Big[\|\varphi_{ui}-\psi_{ui}\|_2
+\sum_{j=1}^{p}|p_{ij}|\alpha_j l_j\int_{-\tau_{ij}(0)}^{0}\|\varphi_{vj}(s)-\psi_{vj}(s)\|_2\, e^{\delta(s+\tau_{ij})}ds\Big]\\
&+\sum_{j=1}^{p}\Big[\|\varphi_{vj}-\psi_{vj}\|_2
+\sum_{i=1}^{n}|q_{ij}|\beta_i l_i\int_{-\sigma_{ij}(0)}^{0}\|\varphi_{ui}(s)-\psi_{ui}(s)\|_2\, e^{\delta(s+\sigma_{ij})}ds\Big]\\
\le{}& M\Big(\sum_{i=1}^{n}\|\varphi_{ui}-\psi_{ui}\|_2+\sum_{j=1}^{p}\|\varphi_{vj}-\psi_{vj}\|_2\Big),
\end{aligned}
$$
where
$$
M=\max_{1\le i\le n,\,1\le j\le p}\Big\{1+\beta_i l_i\sum_{j=1}^{p}|q_{ij}|\sigma_{ij}e^{\delta\sigma_{ij}},\;1+\alpha_j l_j\sum_{i=1}^{n}|p_{ij}|\tau_{ij}e^{\delta\tau_{ij}}\Big\}\ge 1.
$$
Then from Eq. (13), it easily follows that
$$
\sum_{i=1}^{n}\|u_i(t,\varphi_u)-u_i(t,\psi_u)\|_2+\sum_{j=1}^{p}\|v_j(t,\varphi_v)-v_j(t,\psi_v)\|_2
\le M\big(\|\varphi_u-\psi_u\|+\|\varphi_v-\psi_v\|\big)e^{-\delta t}
$$
for all $t\ge 0$. Hence
$$
\sum_{i=1}^{n}\|u_i(t,\varphi_u)-u_i(t,\psi_u)\|_2\le M\big(\|\varphi_u-\psi_u\|+\|\varphi_v-\psi_v\|\big)e^{-\delta t},\qquad
\sum_{j=1}^{p}\|v_j(t,\varphi_v)-v_j(t,\psi_v)\|_2\le M\big(\|\varphi_u-\psi_u\|+\|\varphi_v-\psi_v\|\big)e^{-\delta t}.
$$
Thus
$$
\|u_t(\varphi_u)-u_t(\psi_u)\|\le Me^{-\delta(t-\tau)}\big(\|\varphi_u-\psi_u\|+\|\varphi_v-\psi_v\|\big),\qquad
\|v_t(\varphi_v)-v_t(\psi_v)\|\le Me^{-\delta(t-\sigma)}\big(\|\varphi_u-\psi_u\|+\|\varphi_v-\psi_v\|\big)
$$
for all $t\ge 0$. We can choose a positive integer $N$ such that
$$
Me^{-\delta(N\omega-\tau)}\le\frac{1}{6},\qquad Me^{-\delta(N\omega-\sigma)}\le\frac{1}{6}.
$$
Now we define a Poincaré mapping $P:\Gamma\to\Gamma$ by
$$
P\binom{\varphi_u}{\varphi_v}=\binom{u_\omega(\varphi_u)}{v_\omega(\varphi_v)},
$$
so that
$$
P^N\binom{\varphi_u}{\varphi_v}=\binom{u_{N\omega}(\varphi_u)}{v_{N\omega}(\varphi_v)}.
$$
Taking $t=N\omega$, we obtain
$$
\Big\|P^N\binom{\varphi_u}{\varphi_v}-P^N\binom{\psi_u}{\psi_v}\Big\|\le\frac{1}{3}\Big\|\binom{\varphi_u}{\varphi_v}-\binom{\psi_u}{\psi_v}\Big\|.
$$
This implies that $P^N$ is a contraction mapping, so there exists one unique fixed point $\binom{\varphi_u^*}{\varphi_v^*}\in\Gamma$ such that
$$
P^N\binom{\varphi_u^*}{\varphi_v^*}=\binom{\varphi_u^*}{\varphi_v^*}.
$$
Since
$$
P^N\Big(P\binom{\varphi_u^*}{\varphi_v^*}\Big)=P\Big(P^N\binom{\varphi_u^*}{\varphi_v^*}\Big)=P\binom{\varphi_u^*}{\varphi_v^*},
$$
$P\binom{\varphi_u^*}{\varphi_v^*}\ (\in\Gamma)$ is also a fixed point of $P^N$. By the uniqueness of the fixed point, we get
$$
P\binom{\varphi_u^*}{\varphi_v^*}=\binom{\varphi_u^*}{\varphi_v^*},\qquad\text{i.e.}\qquad
\binom{u_\omega(\varphi_u^*)}{v_\omega(\varphi_v^*)}=\binom{\varphi_u^*}{\varphi_v^*}.
$$
Let $\binom{u(t,\varphi_u^*)}{v(t,\varphi_v^*)}$ be a solution of model (1) through $\Big(\binom{0}{0},\binom{\varphi_u^*}{\varphi_v^*}\Big)$; then $\binom{u(t+\omega,\varphi_u^*)}{v(t+\omega,\varphi_v^*)}$ is also a solution of model (1). Obviously,
$$
\binom{u_{t+\omega}(\varphi_u^*)}{v_{t+\omega}(\varphi_v^*)}=\binom{u_t(u_\omega(\varphi_u^*))}{v_t(v_\omega(\varphi_v^*))}=\binom{u_t(\varphi_u^*)}{v_t(\varphi_v^*)}
$$
for $t\ge 0$. Hence
$$
\binom{u(t+\omega,\varphi_u^*)}{v(t+\omega,\varphi_v^*)}=\binom{u(t,\varphi_u^*)}{v(t,\varphi_v^*)},
$$
which shows that $\binom{u(t,\varphi_u^*)}{v(t,\varphi_v^*)}$ is exactly one $\omega$-periodic solution of model (1), and it is easy to see that all other solutions of model (1) converge exponentially to it as $t\to+\infty$. The proof is completed.
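The contraction argument also suggests a practical recipe for approximating the periodic solution: integrate over one period ω and iterate the resulting Poincaré map to its fixed point. The toy script below illustrates this idea on the scalar, delay-free, diffusion-free equation u' = −cu + sin t (our own simplified stand-in, not the paper's model), whose time-2π map is likewise a contraction.

```python
import numpy as np
from scipy.integrate import solve_ivp

c, omega = 1.0, 2 * np.pi             # toy decay rate and forcing period

def period_map(u0):
    """Advance u' = -c*u + sin(t) over one period: this is the Poincare map P."""
    sol = solve_ivp(lambda t, u: -c * u + np.sin(t), (0.0, omega), [u0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

u = 3.0                               # arbitrary starting state
for k in range(6):
    u = period_map(u)                 # fixed-point iteration of P
    print(k, u)

# The iterates converge (by a factor e^{-c*omega} per step) to u*(0) = -1/(1 + c**2),
# the initial value of the unique 2*pi-periodic solution u*(t) = (c*sin t - cos t)/(1 + c**2).
print("exact fixed point:", -1.0 / (1 + c**2))
```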
3. Comparisons and example

For model (1), if $a_{ik}=0$, $b_{jk}=0$, $\alpha_j=1$, $\beta_i=1$, and $\tau_{ij}(t)$, $\sigma_{ij}(t)$ are constants, then the existence of periodic solutions of this model has been studied by many authors; see, for example, Refs. [8, 9, 11] and the references cited therein. However, in these references the activation functions $f_j$ were required to be bounded on $R$. In addition, the models studied in Refs. [8, 9, 11] are special cases of model (1) in this paper. Furthermore, in our discussion of the existence of periodic solutions of model (1), we only require the activation functions $f_j$ to satisfy assumption (H), not to be bounded on $R$.
Example. Consider the following network with time-varying delays and reaction-diffusion terms:
$$
\begin{aligned}
\frac{\partial u_1(t,x)}{\partial t}&=\frac{\partial}{\partial x}\Big(t^2x^4\frac{\partial u_1(t,x)}{\partial x}\Big)-\frac{1}{2}u_1(t,x)+\frac{1}{12}f_1(3v_1(t-\tau_{11}(t),x))+\sin t,\\
\frac{\partial u_2(t,x)}{\partial t}&=\frac{\partial}{\partial x}\Big(t^2x^4\frac{\partial u_2(t,x)}{\partial x}\Big)-\frac{7}{10}u_2(t,x)-\frac{1}{9}f_1(3v_1(t-\tau_{21}(t),x))-2\cos t,\\
\frac{\partial v_1(t,x)}{\partial t}&=\frac{\partial}{\partial x}\Big(t^2x^4\frac{\partial v_1(t,x)}{\partial x}\Big)-\frac{6}{5}v_1(t,x)+\frac{1}{9}f_1(2u_1(t-\sigma_{11}(t),x))+\frac{1}{15}f_2(5u_2(t-\sigma_{12}(t),x))+\cos t,
\end{aligned}
$$
where $f_1(u)=f_2(u)=|u+1|+|u-1|$ and $\tau_{11}(t)=\tau_{21}(t)=\sigma_{11}(t)=\sigma_{12}(t)=\frac{\pi}{2}-\arctan t$ ($t\ge 0$).

As $f_1(u)=f_2(u)=|u+1|+|u-1|$, it is easy to see that $|f_i(u)-f_i(v)|\le 2|u-v|$ ($i=1,2$) for any $u,v\in R$; hence $l_1=l_2=2$. From $\tau_{11}(t)=\tau_{21}(t)=\sigma_{11}(t)=\sigma_{12}(t)=\frac{\pi}{2}-\arctan t$, we know that
$$
\frac{d\tau_{11}(t)}{dt}=\frac{d\tau_{21}(t)}{dt}=\frac{d\sigma_{11}(t)}{dt}=\frac{d\sigma_{12}(t)}{dt}\le 0
$$
and
$$
0\le\tau_{11}(t)=\tau_{21}(t)=\sigma_{11}(t)=\sigma_{12}(t)\le\frac{\pi}{2}.
$$
Moreover, condition (4) holds, since $c_1-\beta_1 l_1|q_{11}|=\frac{1}{2}-2\cdot 2\cdot\frac{1}{9}=\frac{1}{18}>0$, $c_2-\beta_2 l_2|q_{21}|=\frac{7}{10}-5\cdot 2\cdot\frac{1}{15}=\frac{1}{30}>0$ and $d_1-\alpha_1 l_1(|p_{11}|+|p_{21}|)=\frac{6}{5}-3\cdot 2\cdot\big(\frac{1}{12}+\frac{1}{9}\big)=\frac{1}{30}>0$. Therefore, according to Theorem 1, this network has one unique $2\pi$-periodic solution, and all other solutions of this network converge exponentially to it as $t\to+\infty$.
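As a cross-check of the arithmetic above, condition (4) for this example can be recomputed in exact rational arithmetic; the snippet below is our own illustration, not part of the paper.

```python
from fractions import Fraction as F

# Data of the example: n = 2, p = 1.
c = [F(1, 2), F(7, 10)]; d = [F(6, 5)]
alpha = [3]; beta = [2, 5]                  # gains inside f_1(3*v_1), f_1(2*u_1), f_2(5*u_2)
l = [2, 2]                                  # Lipschitz constants of f_1, f_2
P = [[F(1, 12)], [F(-1, 9)]]                # p_11, p_21
Q = [[F(1, 9)], [F(1, 15)]]                 # q_11, q_21

# Condition (4): c_i - beta_i*l_i*sum_j |q_ij| > 0 and d_j - alpha_j*l_j*sum_i |p_ij| > 0.
margins_u = [c[i] - beta[i] * l[i] * sum(abs(q) for q in Q[i]) for i in range(2)]
margins_v = [d[j] - alpha[j] * l[j] * sum(abs(P[i][j]) for i in range(2)) for j in range(1)]
print(margins_u, margins_v)   # expected: [1/18, 1/30] and [1/30], all positive
```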
4. Conclusion

In this paper, we have presented one simple and new sufficient condition for the existence of periodic solutions of BAM neural networks with time-varying delays and reaction-diffusion terms. We only require the activation functions to satisfy assumption (H); we do not require the activation functions to be differentiable, bounded or monotone nondecreasing, nor the weight-connection matrices to be symmetric. In view of this point, the sufficient conditions established in this paper for the existence of periodic solutions of BAM neural networks with time-varying delays and reaction-diffusion terms have a wider range of applicability.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant 10475026, the Natural Science Foundation of Zhejiang Province, China under Grant M103087, the Scientific and Technological Project of Huzhou City, China under Grant 2004YG53, and the Natural Science Foundation of Huzhou City, Zhejiang Province, China under Grant 2004SZX0703.

References
[1] B. Kosko, Adaptive bi-directional associative memories, Appl. Opt., 1987, 26(23): 4947-4960.
[2] B. Kosko, Bi-directional associative memories, IEEE Trans. Syst. Man Cybern., 1988, 18(1): 49-60.
[3] B. Kosko, Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence. Englewood Cliffs, NJ: Prentice-Hall, 1992: 38-108.
[4] K. Gopalsamy, X.Z. He, Delay-independent stability in bi-directional associative memory networks, IEEE Trans. Neural Networks, 1994, 5: 998-1002.
[5] X.F. Liao, G.Y. Liu, J.B. Yu, Neural networks of bi-directional associative memory with axonal signal transmission delays, J. Electron., 1997, 19(4): 439-444. (in Chinese)
[6] X.F. Liao, J.B. Yu, Qualitative analysis of bi-directional associative memory with time delay, Int. J. Circuit Theory Appl., 1998, 26(3): 219-229.
[7] V.S.H. Rao, B.R.M. Phaneendra, Global dynamics of bi-directional associative memory neural networks involving transmission delays and dead zones, Neural Networks, 1999, 12: 455-465.
[8] X.F. Liao, J.B. Yu, Investigation of periodic oscillation phenomenon for neural networks of BAM with axonal signal transmission delays, J. Electronics, 1999, 21(1): 60-65. (in Chinese)
[9] J.D. Cao, L. Wang, Periodic oscillatory solution of bi-directional associative memory networks with delays, Phys. Rev. E, 2000, 61(2): 1825-1828.
[10] J. Zhang, Y. Yang, Global stability analysis of bi-directional associative memory neural networks with time delay, Int. J. Circuit Theory Appl., 2001, 29: 185-196.
[11] J.D. Cao, L. Wang, Exponential stability and periodic oscillatory solution in BAM networks with delays, IEEE Trans. Neural Networks, 2002, 13(2): 457-463.
[12] J.D. Cao, M.F. Dong, Exponential stability of delayed bi-directional associative memory networks, Applied Mathematics and Computation, 2003, 135: 105-112.
[13] J.D. Cao, Global asymptotic stability of delayed bi-directional associative memory neural networks, Applied Mathematics and Computation, 2003, 142: 333-339.
[14] Q.M. He, L.S. Kang, Existence and stability of global solution for generalized Hopfield networks system, Neural Parallel Sci. Comput., 1994, 2: 165-176.
[15] G.A. Carpenter, A geometric approach to singular perturbation problems with application to nerve impulse equations, J. Differential Equations, 1977, 23(3): 355-367.
[16] J.W. Evans, Nerve axon equations II: Stability at rest, Indiana Univ. Math. J., 1972, 22(1): 75-90.
[17] X.X. Liao, J. Li, Stability in Gilpin-Ayala competition models with diffusion, Nonlinear Analysis: Theory, Methods and Applications, 1997, 28(10): 1751-1758.
[18] A. Hastings, Global stability in Lotka-Volterra systems with diffusion, J. Math. Biol., 1978, 6: 163-168.
[19] F. Rothe, Convergence to the equilibrium state in the Lotka-Volterra diffusion equations, J. Math. Biol., 1976, 3: 319-324.
[20] X.X. Liao, Y.L. Fu, J. Gao, X.Q. Zhao, Stability of Hopfield neural networks with reaction-diffusion terms, Acta Electronica Sinica, 2000, 28(1): 78-80. (in Chinese)
[21] X.X. Liao, S.Z. Yang, S.J. Cheng, Y. Shen, Stability of generalized neural networks with reaction-diffusion terms, Sci. China (Series E), 2002, 32(1): 87-94. (in Chinese)
[22] J.L. Liang, J.D. Cao, Global exponential stability of reaction-diffusion recurrent neural networks with time-varying delays, Phys. Lett. A, 2003, 314: 434-442.
[23] L.S. Wang, D.Y. Xu, Global exponential stability of Hopfield neural networks with time-varying delays and reaction-diffusion terms, Sci. China (Series E), 2003, 33(6): 488-495. (in Chinese)
[24] L.S. Wang, D.Y. Xu, Asymptotic behavior of a class of reaction-diffusion equations with delays, J. Math. Anal. Appl., 2003, 281(2): 439-453.
Zhenjiang Zhao received the M.S. degrees from the Department of Mathematics, Okayama University, Japan, in 1996 and 1998. From 1998 to 2002, he worked at TSS Software Co., Ltd. in Japan, developing computer software. Since 2002, he has been with the Department of Mathematics, Huzhou Teachers College, Zhejiang, China.