Cognitive Computation manuscript No. (will be inserted by the editor)

Controlling Chaotic Associative Memory for Multiple Pattern Retrieval

Jeremiah D. Deng

Department of Information Science, University of Otago, PO Box 56, Dunedin 9054, New Zealand
E-mail: [email protected]

Received: date / Accepted: date

Abstract Associative memory in chaotic neural networks differs from traditional associative memory models in its rich dynamics, which may lead to promising solutions for a number of information processing problems. In this paper, we propose a new method to control the chaotic neural network, whose refractoriness is tuned by feedback control based on online averaging of network states. An augmented autoassociative layer is employed to further improve the retrieval performance. Simulation results demonstrate that the proposed chaotic neural network model performs favourably in handling noisy, incomplete, and composite patterns, while achieving memory capacity that is enhanced over, or comparable with, that of the conventional Hopfield net and other chaotic neural network models. With the ability to retrieve multiple patterns from memory according to their similarity, the proposed network is, upon further improvement, a promising candidate for real-world information retrieval tasks.

Keywords Chaotic neural networks · Associative memory · Information retrieval

1 Introduction

With the rapid advance of digital technologies and the large volumes of data produced in our daily life, information retrieval research faces many challenges that call for new techniques. Artificial neural network models with biological plausibility, such as ART networks [5], the Hopfield net [16], and self-organizing maps [18], have found interesting applications in pattern recognition and information retrieval [24, 25, 9, 20, 7]. Apart from these approaches based on conventional neural network models, chaotic neural networks (CNNs) [3] have recently come to the attention of researchers for their potential in information processing [1, 2]. Aihara's original CNN model is based on the modelling of squid giant axons using the Caianiello neuronic equation and the Nagumo–Sato model [2]. The CNN model not only bears stronger biological plausibility than conventional neural network models, but also provides some new


pattern search mechanisms. Unlike conventional associative memory models such as the Hopfield net, which use fixed-point attractors to store patterns, the memory search process of the CNN exhibits rich dynamics with complex chaotic and transient phases, and the network can hover among stored patterns, generating interesting recall behaviour [1]. Modifications to this CNN model have been attempted for pattern recognition tasks, and it has been found that CNNs are capable of handling noisy and partial patterns [4, 15, 13]. More generally, neural network models with chaotic dynamics have been used for optimization [1, 2, 11, 8], data clustering [22], image retrieval [19], and image segmentation [29]. A transient chaotic neural network model has been proposed for solving dynamic programming matching problems [23]. Another interesting recent development in chaotic associative memory is the multi-valued content-addressable models [27, 28]; however, these models differ from the CNN, relying for example on matrix pseudo-inverse operations [28], so their biological plausibility is unclear. A more biologically plausible learning algorithm has been proposed for a nonlinear neural network used for bi-directional association [6]. On a related note, chaotic pattern transitions have also been studied in a pulse neural network model [17], but this goes beyond the scope of our current investigation.

Although their rich dynamics seem promising for retrieving patterns stored in memory, several obstacles prevent CNNs from finding practical applications in information retrieval. First, as the retrieval phase is not selective, the transient retrieval phase can be too long for realistic information processing [1]. Without fixed-point attractors for the network to converge to quickly, it may take a long time for the network dynamics to arrive at a state that produces the desired pattern retrieval. Secondly, the capacity of these associative memory models remains theoretically unknown, and the memory capacity in all empirical studies published so far is rather limited. Finally, another challenging issue in applying chaotic neural networks is the effective control of the chaotic dynamics [13]. The network needs to converge to desired recall patterns; on the other hand, it needs to exploit the dynamics to reduce the impact of local minima and noise. To handle information retrieval tasks, it is desirable for the chaotic dynamics to be modified so that the network can move on to retrieving other, potentially similar patterns. Recent advances [15, 4, 13] have addressed the problem of recognizing noisy patterns, but the retrieval of multiple target patterns remains unexplored.

In this paper, we propose an improved chaotic associative memory model that extends existing work on CNNs [2, 13]. By using an improved network-state feedback mechanism, the refractory parameter of the CNN model is effectively controlled. As a result, the network manages not only to handle noisy or partial inputs, but also to retrieve multiple patterns from memory. An additional autoassociative procedure is also introduced to improve the retrieval statistics. Our simulation shows that the improved model enjoys higher recall rates than the Hopfield model and the CNN models of [1, 13], while maintaining a relatively high memory capacity comparable to that of the ACNN.

The rest of the paper is organized as follows.
First, we give a brief introduction to Aihara's CNN model in Section 2. The feedback mechanism of the controlled CNN model [13] is then introduced, followed by our proposed algorithm using online averaging of network states and augmented autoassociation. The bifurcation diagrams of the neuron models used in these CNNs are compared to show the differences in their dynamic characteristics. Section 3 presents the simulation study, where the performance of these CNN models, in terms of pattern retrieval and memory capacity, is compared. Finally, the paper is concluded in Section 4 with a discussion of future directions.

2 CNN models

2.1 The chaotic neuron model

The dynamics of the chaotic neuron model by Aihara [3] can be described as:

  y(t+1) = k y(t) − α f(y(t)) + a,
  x(t+1) = f(y(t+1)),                                          (1)
  f(x) = 1 / (1 + exp(−x/ε)).

Here, x(t) is the neuron output at time t, y(t) is the neuron's internal state, k is the decay parameter for refractoriness, α is a scaling factor, and a is a parameter based on the external input and the threshold of the neuron; f(·) is the sigmoid function that nonlinearly transforms the internal state into the output, with ε controlling its steepness. Fig. 1 gives the bifurcation diagram of the chaotic neuron model as α changes from 0.8 to 2.0, with the other parameters set to k = 0.5, ε = 0.4, and a = 1.0. Two chaotic regions are evident, around α ≈ 1.1 and α ≈ 1.4.
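To make the map concrete, the following minimal Python sketch (our own illustration; the function and variable names are not from the paper) iterates Eq. (1) and produces the kind of parameter scan behind Fig. 1:

```python
import numpy as np

def f(x, eps=0.4):
    """Sigmoid output function of Eq. (1); eps controls the steepness."""
    return 1.0 / (1.0 + np.exp(-x / eps))

def neuron_step(y, alpha, k=0.5, a=1.0, eps=0.4):
    """One iteration of the chaotic neuron map, Eq. (1)."""
    return k * y - alpha * f(y, eps) + a

# Bifurcation scan: for each alpha, discard a transient,
# then record the internal states visited on the attractor.
diagram = []
for alpha in np.linspace(0.8, 2.0, 121):
    y = 0.1
    for _ in range(300):                 # transient iterations, discarded
        y = neuron_step(y, alpha)
    for _ in range(100):                 # attractor samples
        y = neuron_step(y, alpha)
        diagram.append((alpha, y))       # plotting these points yields a diagram like Fig. 1
```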


Fig. 1 Bifurcation diagram of the chaotic neuron for α values ranging from 0.8 to 2.0. Parametric settings are k = 0.5, ε = 0.4, and a = 1.0.

2.2 Aihara's chaotic neural network (ACNN)

The ACNN model consists of a number of chaotic neurons connected by synapses. Each neuron has an output and internal states, whose updating rules can be described as follows [1]:

  x_i(t+1) = f(η_i(t+1) + ζ_i(t+1)),
  η_i(t+1) = k_f η_i(t) + Σ_j w_ij x_j(t),                     (2)
  ζ_i(t+1) = k_r ζ_i(t) − α x_i(t) + a_i.

Here x_i(t) denotes the output of the i-th neuron at time t; η_i(t) and ζ_i(t) are the internal states, shaped by feedback inputs from the constituent neurons and by refractoriness, respectively; k_f and k_r are the corresponding feedback and refractory decay factors; w_ij is the synaptic weight from the j-th neuron to the i-th neuron, α the refractory scaling parameter, and a_i the constant external input to the i-th neuron. Typical settings for the above parameters are k_f, k_r ∈ (0, 1), α = 1 ∼ 10, and a_i = 1 or 2. The ACNN uses an autoassociative memory matrix with the network synaptic weights defined by the Hopfield rule:

  w_ij = (1/P) Σ_{p=1}^{P} (x_i^p − x̄)(x_j^p − x̄),            (3)

where x̄ denotes the average input pattern, P is the total number of patterns to be stored, and x_i^p is the i-th component of the p-th pattern. In the autoassociative memory, neurons have no self-connections, i.e., w_ii = 0. For binary patterns it is usually assumed that x̄ = 1/2, hence the network synaptic weights can be calculated as

  w_ij = (1/P) Σ_{p=1}^{P} (2x_i^p − 1)(2x_j^p − 1).           (4)
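For illustration, the weight rule of Eq. (4) and one synchronous ACNN update (Eq. (2)) might be coded as follows. This is a sketch under our own naming, not the authors' implementation; in particular, the sigmoid steepness value eps = 0.015 is merely an assumption:

```python
import numpy as np

def hopfield_weights(patterns):
    """Eq. (4): Hebbian weight matrix for binary (0/1) patterns, w_ii = 0."""
    X = 2.0 * np.asarray(patterns, dtype=float) - 1.0   # map {0,1} -> {-1,+1}
    W = X.T @ X / len(patterns)
    np.fill_diagonal(W, 0.0)                            # no self-connections
    return W

def acnn_step(x, eta, zeta, W, kf=0.2, kr=0.95, alpha=10.0, a=2.0, eps=0.015):
    """One synchronous ACNN update following Eq. (2).
    The steepness eps is an assumed value, not taken from the paper."""
    eta = kf * eta + W @ x                  # feedback internal state
    zeta = kr * zeta - alpha * x + a        # refractory internal state
    x = 1.0 / (1.0 + np.exp(-(eta + zeta) / eps))
    return x, eta, zeta
```

Here `patterns` is a P × N array of flattened binary patterns, and `x`, `eta`, `zeta` are length-N state vectors updated in lockstep.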

2.3 Controlled CNN (CCNN)

Based on the observation that the network dynamics depend on the value of the refractory scaling factor α, a chaos control method was introduced using a feedback mechanism [13]. For the chaotic neuron described in Eq. (1), the first updating rule is modified as

  y(t+1) = k y(t) − α β^{k_c u(t)} f(y(t)) + a,                (5)

where the feedback control signal u(t) is defined as u(t) = |x(t) − x(t−τ)|, β is a control parameter within the range (0, 1), k_c is the control strength, and τ is the feedback delay. The rationale behind this modification is that the feedback term u(t) tunes the effective α value of the neuron model, forcing the neuron into or out of the chaotic region: when feedback is strong (the neuron being chaotic), the effective α becomes small, pushing the neuron towards non-chaotic behaviour; when feedback is weak (the neuron being non-chaotic), the effective α becomes large, pushing the neuron back towards chaos. The bifurcation diagram of the controlled neuron is shown in Fig. 2. Contrasted with Fig. 1, the reduction of the chaotic regions is obvious.
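A minimal sketch of the controlled neuron map, using the parameter values of Fig. 2 (the trajectory bookkeeping and names are our own):

```python
import numpy as np

def f(y, eps=0.04):
    return 1.0 / (1.0 + np.exp(-y / eps))

def controlled_trajectory(alpha, steps=500, k=0.5, a=1.0,
                          beta=0.9, kc=1.0, tau=1, eps=0.04):
    """Iterate the controlled neuron of Eq. (5), where the delayed feedback
    u(t) = |x(t) - x(t - tau)| modulates the effective refractory scaling."""
    y = 0.1
    xs = [0.0] * (tau + 1)                   # output history for the delay line
    ys = []
    for _ in range(steps):
        u = abs(xs[-1] - xs[-1 - tau])       # feedback control signal u(t)
        y = k * y - alpha * beta ** (kc * u) * f(y, eps) + a
        xs.append(f(y, eps))
        ys.append(y)
    return ys
```

Since β < 1, a large u(t) shrinks the factor β^{k_c u(t)} and hence the effective α, which is exactly the push out of the chaotic region described above.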


Fig. 2 Bifurcation diagram of a controlled CNN neuron for α values ranging from 0.8 to 2.0. Parametric settings are k = 0.5, ε = 0.04, β = 0.9, τ = 1, k_c = 1, and a = 1.0.

Connecting the chaos-controlled neurons in the same manner as in the ACNN results in a 'controlled CNN' (CCNN) model, with modified rules for updating the internal states and output [15, 13]:

  x_i(t+1) = f(η_i(t+1) + ζ_i(t+1)),
  η_i(t+1) = k_f η_i(t) + Σ_j w_ij x_j(t),                     (6)
  ζ_i(t+1) = k_r ζ_i(t) − α γ^{k_n u_n(t)} x_i(t) + a_i,

where γ and k_n are the control parameter and control strength, and the control signal u_n(t) is defined as the difference between the current states and their delayed feedback:

  u_n(t) = Σ_{i=1}^{N} |x_i(t) − x_i(t−τ)|,                    (7)

where τ is the feedback delay, and N is the number of nodes selected for producing the feedback signal. Following [13], we set N to the number of neurons in the network.
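At network level the control enters only through the refractory term; a sketch of Eqs. (6)-(7), reusing the hypothetical naming of the earlier ACNN sketch (the steepness eps is again assumed):

```python
import numpy as np

def ccnn_step(x, eta, zeta, x_delayed, W, kf=0.2, kr=0.95,
              alpha=10.0, a=2.0, gamma=0.945, kn=0.6, eps=0.015):
    """One synchronous CCNN update, Eq. (6); x_delayed holds x(t - tau)."""
    u_n = np.sum(np.abs(x - x_delayed))       # Eq. (7): network feedback signal
    eta = kf * eta + W @ x
    zeta = kr * zeta - alpha * gamma ** (kn * u_n) * x + a
    x = 1.0 / (1.0 + np.exp(-(eta + zeta) / eps))
    return x, eta, zeta
```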

2.4 CCNN using feedback of online averaging

Since the CCNN model uses a time-delayed feedback, it is highly likely that the network output will be dominated by the initial input or initial recalls. The retrieved patterns can overcome noise, but the network will not be able to escape from the target pattern and reach other stored patterns despite their potential similarity. This effect will be demonstrated in the simulation presented in the next section. Such a drawback limits the model's capability in information retrieval when multiple similar patterns exist and need to be retrieved. To overcome this difficulty, we replace the time-delayed feedback with an online average of network states, calculated as

  Θ(t) = ρ Θ(t−1) + (1 − ρ) x(t),                              (8)

where ρ is a positive weighting factor less than 1. Eq. (7) then becomes

  u_n(t) = Σ_{i=1}^{N} |x_i(t) − Θ_i(t)|.                      (9)
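In code, the change is a one-line substitution of the delay line by the running average; a sketch of the update for the model denoted CCNN-OA below (names and the steepness value are our assumptions):

```python
import numpy as np

def ccnn_oa_step(x, eta, zeta, theta, W, kf=0.2, kr=0.95, alpha=10.0,
                 a=2.0, gamma=0.945, kn=0.6, rho=0.8, eps=0.015):
    """One CCNN-OA update: the online average of Eq. (8) replaces the
    delay line, and Eq. (9) supplies the feedback signal."""
    theta = rho * theta + (1.0 - rho) * x     # Eq. (8): online state average
    u_n = np.sum(np.abs(x - theta))           # Eq. (9): feedback signal
    eta = kf * eta + W @ x
    zeta = kr * zeta - alpha * gamma ** (kn * u_n) * x + a
    x = 1.0 / (1.0 + np.exp(-(eta + zeta) / eps))
    return x, eta, zeta, theta
```

Note that Θ is an exponentially weighted moving average, so no delay buffer is needed and the influence of early states decays geometrically with ρ.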

From now on we denote the new model as CCNN-OA (i.e., CCNN using online averaging). Fig. 3 gives the bifurcation diagram of a CCNN-OA neuron with the same settings as in Fig. 1. Interestingly, the CCNN-OA neuron maintains dynamics very similar to those of the original CNN model, with larger chaotic regions than the CCNN. This feature is very promising: the CCNN-OA model demonstrates richer chaotic dynamics than the CCNN, which may enable chaotic transitions among recalled patterns and allow multiple target patterns to be retrieved. We assess the chaotic dynamics of the CCNN-OA network model later in Section 3.


Fig. 3 Bifurcation diagram of the CCNN-OA neuron for α values ranging from 0.8 to 2.0. Parameters used are k = 0.5, ε = 0.4, β = 0.9, ρ = 0.8, and a = 1.0.

2.5 Augmented autoassociation

To improve the pattern retrieval performance of the CNN model, another layer of autoassociative memory, with a feedback loop at each neuron, can be employed [10]. The additional layer accepts the updated network states of the dynamic memory, but updates itself only through its own feedback loops. The benefit of this two-layer structure is that, by feeding the dynamic states forward to the loosely coupled autoassociative memory, the dynamic properties of the CNN are not affected. Rather, through the feedback loops in the additional layer, network states get 'cooled down' and can more readily converge to a stored pattern. Fig. 4 shows the network structure with the augmented autoassociation layer.

Fig. 4 The modified CNN structure: an augmented autoassociation layer (static memory) is loosely coupled with the dynamic CNN layer (dynamic memory) and produces the recalled pattern.

The operation of the augmented layer occurs each time the upper layer undergoes a dynamic update. First, the neurons copy their dynamic-layer counterparts:

  o_i = x_i.                                                   (10)

Then the neurons update themselves through a simple feedback loop that operates for a number of iterations:

  o_i = Σ_j w_ij o_j.                                          (11)

In our experiments we find that two or three rounds of feedback looping are usually sufficient. The final output is defined in the same way as in an autoassociative memory:

  o'_i = sgn(o_i).                                             (12)

It has been shown that the augmented autoassociation significantly improves the recall rate for the ACNN [10]. Because it operates independently of the CNN network dynamics, we include it for the CCNN and CCNN-OA in our simulation as well, and the improvement in recall rates is also observed.
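The augmented layer reduces to a few lines; a minimal sketch of Eqs. (10)-(12) (ties at zero are broken towards +1 here, a detail the paper does not specify):

```python
import numpy as np

def augmented_recall(x, W, loops=2):
    """Augmented autoassociative layer, Eqs. (10)-(12): copy the dynamic
    states, relax them through the static memory, then threshold."""
    o = np.asarray(x, dtype=float).copy()   # Eq. (10): copy from dynamic layer
    for _ in range(loops):
        o = W @ o                           # Eq. (11): one feedback-loop pass
    return np.where(o >= 0, 1.0, -1.0)      # Eq. (12): sign of the net input
```

The bipolar output can then be compared against the stored patterns (or their inverses) to score a hit.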

3 Simulation

The simulation of the proposed CCNN-OA model is carried out in two parts. In the first part, we concentrate on retrieval performance using a benchmark dataset. In the second, memory capacity is compared across the Hopfield model, the ACNN, the CCNN, and the CCNN-OA.

3.1 Pattern retrieval

We use the same 4-pattern dataset as in Refs. [1, 21, 4, 13, 14]. These binary patterns, of size 10 × 10, are shown in Fig. 5. Each CNN model is constructed with 100 fully connected neurons. The following CNN models are involved in our experiments:


Fig. 5 Four binary patterns for pattern retrieval testing, named 'x1' ∼ 'x4' from left to right.

– ACNN with k_r = 0.95, k_f = 0.2, α = 10.0, a_i = 2.0;
– CCNN with the same settings as the ACNN; additional parameters are k_n = 0.6, γ = 0.945, and τ = 4;
– CCNN-OA with the same settings as the ACNN; additional parameters are k_n = 0.6, γ = 0.945, and ρ = 0.8.

Unless otherwise mentioned, each network model runs for 2000 iterations upon being given an initial input pattern. When the network output is identical to a stored pattern or its inverse, it is counted as a hit. The numbers of hits on all stored patterns are collected at the end to assess the pattern retrieval ability of the different models.

With patterns stored in the memory, the dynamics of the CCNN-OA network can be assessed. We use the algorithm by Wolf et al. [26] to monitor the largest Lyapunov exponent (LLE) over 400 iterations, with the network initialized to a random state. As shown in Fig. 6, the LLE calculated over time displays an interesting pattern, dynamically switching between positive and negative values, indicating that the feedback mechanism in the CCNN-OA manages to steer the network between chaotic and non-chaotic states. This is in clear contrast with the ACNN and the CCNN: the ACNN gives a positive LLE with α = 10, and the CCNN reports all-negative LLEs for different values of α [13]. This difference in network dynamics results in different pattern retrieval performance, as we will see in further experiments.
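The paper uses Wolf et al.'s algorithm [26]; as a rough stand-in, the exponent can also be estimated with a simple two-trajectory (Benettin-style) scheme, sketched below. This is a simplified alternative for illustration, not the algorithm actually used:

```python
import numpy as np

def lle_estimate(step, state0, steps=400, d0=1e-8):
    """Crude largest-Lyapunov-exponent estimate: evolve a reference and a
    slightly perturbed trajectory, renormalizing their gap to d0 each step
    and averaging the log growth rate. `step` maps a flat state vector to
    the next one (e.g. one CCNN-OA network update)."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(len(state0))
    ref = np.asarray(state0, dtype=float)
    pert = ref + d0 * v / np.linalg.norm(v)       # initial separation of norm d0
    logs = []
    for _ in range(steps):
        ref, pert = step(ref), step(pert)
        d = np.linalg.norm(pert - ref)
        if d > 0:
            logs.append(np.log(d / d0))
            pert = ref + (pert - ref) * (d0 / d)  # renormalize the gap
    return np.mean(logs)
```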


Fig. 6 LLE of the CCNN-OA network within 400 iterations. The switching between positive and negative values occurs frequently over time.

Table 1 Autoassociation performance of ACNN, CCNN, and CCNN-OA (hit counts per stored pattern).

                 ACNN                    CCNN                      CCNN-OA
  Input   x1   x2   x3   x4     x1    x2    x3    x4      x1    x2    x3    x4
  x1       5    2    2    3   1247     0     0     0    1463     0    31    11
  x2       7    9    3    7      0  1304     0     0       0  1744     0    58
  x3       3    4    5    2      0     0  1249     0      43     0  1471     0
  x4       1    4    1    4      0     0     0  1210       6     0     0  1490

Table 2 Comparison of retrieval performance on noisy/partial patterns (hit counts).

                 ACNN                    CCNN                      CCNN-OA
  Input   x1   x2   x3   x4     x1    x2    x3   x4       x1    x2    x3   x4
  (a)      1    4    7    2   1243     0     0    0     1473     0    25    0
  (b)      4    3    3    2   1245     0     0    0     1471     0    33    8
  (c)      2    8    0    3      0     0  1249    0       43     0  1471    0

3.1.1 Autoassociation

We first experiment with using the stored patterns as input, to see how the system dynamics of the different networks affect their retrieval performance. The hit counts are reported in Table 1. One can see that the ACNN model gives rather poor retrieval performance, with very low hit counts; little dominance of the input pattern is shown either, as the chaotic dynamics move the network state frequently on to other patterns. The CCNN model, on the other hand, manages to obtain many more hits, but all of them are confined to the input pattern. The CCNN-OA model, however, not only achieves higher hit counts on the input pattern, but also allows other patterns to be recalled occasionally, while the dominance of the input pattern among the retrieval results is maintained.

3.1.2 Dealing with noisy patterns

Next, some noisy or partial patterns, as shown in Fig. 7, are used to test the retrieval capability of the CNN models. Pattern (a) is a noisy version of Pattern x1, and Pattern (b) gives only half of x1; both have been used previously [13]. Pattern (c) is a random composite of Patterns x1 and x3. The retrieval performance of the three network models on these input patterns is listed in Table 2.

Fig. 7 Three testing binary patterns from left to right: (a) a noisy Pattern 1; (b) half of Pattern 1 set to zero; and (c) a composite pattern of Pattern 1 and Pattern 3.

As revealed by Table 2, the performance of the ACNN remains rather poor, with very low hit counts and no proper dominance among the retrieved patterns. The CCNN manages to retrieve the corresponding stored pattern, but no other stored patterns are recalled over all iterations. This becomes a problem with Pattern (c), which consists partially of x1 and x3, where retrieval of both original patterns would be desirable. The CCNN-OA


performs best, allowing occasional retrieval of other patterns and, in the case of Pattern (c), allowing both x1 and x3 to be retrieved.


Fig. 8 Retrieval performance on the random composite pattern in CCNN-OA. The x-axis represents the probability p of picking pixels from Pattern x1, while the y-axis gives the probability of recalling x1.

3.1.3 Processing composite patterns

We further investigate how the CCNN-OA handles composite patterns such as Pattern (c). A simple scenario is tested by varying the probability with which the composite pattern takes each pixel from either x1 or x3. Denote the probability of randomly taking a pixel from x1 as p; the probability of taking it from x3 is then (1 − p). The value of p varies from 0.1 to 0.9 in our experiments. Using the random composite pattern as input, the probability of retrieving x1 is measured. As shown in Fig. 8, pattern retrieval has two modes, favouring x3 when p is small and favouring x1 when p is large. The two modes balance around p = 0.5, where the retrieval becomes unstable, jumping between the two modes. This clearly indicates that the new network model is able to handle composite patterns and retrieve stored patterns depending on the similarity between the input and the stored patterns. In contrast, the recall result of the CCNN on these random composite patterns is not stable at all: no clear selectivity is shown between patterns x1 and x3 even as the composition probability p varies.
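A sketch of how such composite probes can be generated (the helper name is hypothetical; x1 and x3 stand for the stored 10 × 10 arrays):

```python
import numpy as np

def composite_pattern(x1, x3, p, rng=None):
    """Pixel-wise composite input: each pixel is taken from x1 with
    probability p, and from x3 otherwise (cf. Section 3.1.3)."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(np.shape(x1)) < p
    return np.where(mask, x1, x3)

# Scan p as in Fig. 8:
# for p in np.arange(0.1, 1.0, 0.1):
#     probe = composite_pattern(x1, x3, p)
#     ... feed `probe` to the CCNN-OA and count recalls of x1 ...
```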

3.2 Memory capacity test

The second part of our simulation focuses on memory capacity comparison. Because of the inherent complexity of the CNN neuron and network models, there has been no theoretical analysis of the capacity of the CNN memory. It is, however, a general belief that chaotic neural networks can store many more patterns than conventional autoassociative networks such as the Hopfield model [13]. Yet so far we have not found an empirical comparison study in the literature. The following experiment is carried out to test the memory capacity of the CNN models. For this purpose we use a different, larger dataset: ten Greek letters in 10 × 10 bitmaps, shown in Fig. 9. Here the patterns


are fed into the associative memory incrementally. We start by memorizing the first pattern and testing its recall; next we add the second pattern and test the recall of both patterns, and so forth. Four network models are compared: the Hopfield model, the ACNN, the CCNN, and the CCNN-OA. For each model, we record the hits (i.e., perfect recalls of a binary pattern or its inverse) progressively as the number of patterns memorized by the network increases from 1 to 10. The percentage of recalled patterns (relative to the total number of patterns memorized) is also monitored. During testing, each memorized pattern is taken as the initial state and the network runs for 5000 iterations. Augmented autoassociation is carried out on the ACNN, CCNN, and CCNN-OA, with two feedback loops each. To eliminate a possible effect of pattern presentation order on the results, we randomize the order of the pattern set 10 times and take the average performance; the performance of all four models over the 10 rounds is quite stable. A sketch of this incremental testing procedure is given below.
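The sketch assumes the hypothetical hopfield_weights helper from the Section 2.2 sketch and a run_network routine (not defined here) that yields the binarized network output at each iteration:

```python
import numpy as np

def capacity_test(patterns, run_network, iters=5000):
    """Incremental capacity test: memorize the first m patterns, probe the
    network with each of them, and count hits (perfect recall of a pattern
    or of its 0/1 inverse)."""
    results = []
    for m in range(1, len(patterns) + 1):
        W = hopfield_weights(patterns[:m])        # re-learn the first m patterns
        hits, recalled = 0, set()
        for idx, p in enumerate(patterns[:m]):
            for out in run_network(p, W, iters):  # binarized output per step
                if np.array_equal(out, p) or np.array_equal(out, 1 - p):
                    hits += 1
                    recalled.add(idx)
        results.append((m, hits / m, 100.0 * len(recalled) / m))
    return results
```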

Fig. 9 Ten Greek letter patterns used to test the memory capacity.


Fig. 10 Average hits versus the number of memorized patterns.

Fig. 10 compares the recall hits, while Fig. 11 shows the recall rate, i.e., the percentage of patterns that are recalled at least once. Since the patterns are not sufficiently orthogonal to each other, crosstalk easily compromises the memory capacity of the Hopfield model: no more hits are obtained beyond memorizing 4 patterns. Fig. 11 tells the same story: only 2 patterns are recalled (50%) when 4


patterns are memorized; beyond that, no recall happens at all. This is much smaller than the theoretical estimate of the Hopfield model's memory capacity, 0.14N (where N is the number of network neurons) [12]. It is well known that correlation among the memorized patterns generates spurious memories, resulting in reduced memory capacity. On the other hand, although the CNN associative memory models rely on a memory matrix constructed in the same way as the Hopfield model's, their memory capacity is much improved, as shown by the experimental results.

Among the CNN models, the CCNN maintains the highest recall hits when the number of memorized patterns is below 4; beyond that, it quickly drops to much lower levels. In fact, upon reaching 10 patterns in memory, its average hits become almost zero. In Fig. 11, its recall percentage quickly drops from the initial 100% to less than 40% (with 7 patterns in memory), though it remains slightly above 0 even when the capacity reaches 10.

As shown by both Fig. 10 and Fig. 11, the memory capacities of the ACNN and the CCNN-OA are quite similar. Both maintain a low but non-zero hit rate as the memory load increases; in fact, upon reaching the 9th pattern, the average hits are still greater than 1. In terms of the percentage of recalled patterns, the CCNN-OA and the ACNN are also quite close to each other, with the ACNN sitting higher at 8 and 9 patterns but the CCNN-OA overtaking it when 10 patterns are tested. Both the CCNN-OA and the ACNN clearly outperform the CCNN in this memory capacity test.


Fig. 11 Percentage of patterns recalled versus the number of memorized patterns.

4 Conclusion

Chaotic neural networks have not found wide use in information processing, although their spatio-temporal neurodynamics can produce interesting behaviours that are useful for a number of tasks, such as optimization and pattern recognition. In fact,


although the associative memory in the ACNN model overcomes the fixed-point convergence of the conventional Hopfield model [16], its chaotic dynamics lead the output of the ACNN to wander among all stored patterns. A parameter-modulated control method using a feedback mechanism was introduced in the CCNN model to control the dynamics of the CNN, so that the network converges to periodic orbits and becomes dependent on the initial patterns. This enables the CNN to be used for information processing tasks such as pattern recognition.

In this paper, the pattern retrieval characteristics of both the ACNN and the CCNN are studied, and it is found that neither can be applied to information retrieval tasks where recall of multiple patterns is intended. This leads us to modify the refractoriness control mechanism of the CCNN to employ online pattern averaging in the feedback. Despite being a small modification, this generates an interesting itinerancy pattern where the target (best-matching) pattern receives dominant retrieval, while other relevant patterns are also visited occasionally. The simulation study on composite patterns is particularly interesting, showing that retrieval dominance is controlled by the similarity between the input pattern and the stored patterns. We consider this a very encouraging result pointing towards further exploration.

We have also conducted an empirical study on the memory capacity of the relevant CNN models using a 10-pattern dataset. The CNNs compare favourably with the conventional Hopfield model. Among the CNN models, our proposed model (CCNN-OA) performs similarly to the ACNN, but better than the CCNN. Given that the CCNN-OA is intended to conduct selective pattern retrieval and imposes a controlled dynamics rather than the chaotic itinerancy of the ACNN, this result is quite impressive.

On the other hand, the scale of the associative memory investigated in our simulation remains limited. The adoption of a Hopfield-like associative memory matrix also limits our investigation to binary patterns. Future work will extend the computational model so that it is capable of handling multi-valued patterns with a realistic memory capacity. That may eventually pave the way for its application in real-world information retrieval tasks.

References

1. Adachi, M., Aihara, K.: Associative dynamics in a chaotic neural network. Neural Networks 8, 83–98 (1997)
2. Aihara, K.: Chaos engineering and its application to parallel distributed processing with chaotic neural networks. Proceedings of the IEEE 90(5) (2002)
3. Aihara, K., Takabe, T., Toyoda, M.: Chaotic neural networks. Physics Letters A 144, 333–340 (1990)
4. Calitoiu, D., Oommen, B.J.: Desynchronizing a chaotic pattern recognition neural network to model inaccurate perception. IEEE Trans. Sys. Man & Cybern. Part B 37, 692–704 (2007)
5. Carpenter, G., Grossberg, S.: Adaptive Resonance Theory (ART), 2nd edn., pp. 79–82. MIT Press (2003)
6. Chartier, S., Renaud, P., Boukadoum, M.: A nonlinear dynamic artificial neural network model of memory. New Ideas in Psychology 26(2), 252–277 (2008). DOI: 10.1016/j.newideapsych.2007.07.005
7. Chau, M., Chen, H.: Incorporating web analysis into neural networks: An example in Hopfield net searching. IEEE Transactions on Systems, Man, and Cybernetics, Part C 37(3), 352–358 (2007)
8. Chen, L., Aihara, K.: Chaotic simulated annealing by a neural network model with transient chaos. Neural Networks 8, 915–930 (1995)
9. Deng, D.: Content-based image collection summarization and comparison using self-organizing maps. Pattern Recognition 40(2), 718–727 (2007)
10. Deng, J.D., Li, S.: Improving the pattern retrieval characteristics of the Aihara chaotic neural network model. In: Proc. of ICITA'08, pp. 732–737 (2008)
11. Hasegawa, M., Ikeguchi, T., Aihara, K.: Solving large scale traveling salesman problems by chaotic neurodynamics. Neural Networks 15(2), 271–283 (2002). DOI: 10.1016/S0893-6080(02)00017-5
12. Haykin, S.: Neural Networks: A Comprehensive Foundation. Prentice Hall (1999)
13. He, G., Chen, L., Aihara, K.: Associative memory with a controlled chaotic neural network. Neurocomputing 71, 2794–2805 (2008)
14. He, G., Shrimali, M.D., Aihara, K.: Threshold control of chaotic neural network. Neural Networks 21, 114–121 (2008)
15. He, G., Shrimali, M.D., Aihara, K.: Partial state feedback control of chaotic neural networks and its application. Physics Letters A 371, 228–233 (2007)
16. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences 79, 2554–2558 (1982)
17. Kanamaru, T.: Chaotic pattern transitions in pulse neural networks. Neural Networks 20(7), 781–790 (2007). DOI: 10.1016/j.neunet.2007.06.002
18. Kohonen, T.: Self-Organizing Maps. Springer-Verlag New York, Inc., Secaucus, NJ, USA (2001)
19. Kosuge, S., Osana, Y.: Chaotic associative memory using distributed patterns for image retrieval by shape information. In: Proc. Inter. Joint Conf. Neural Networks (IJCNN) 2004, vol. 2, pp. 903–908 (2004)
20. Nybo, K., Venna, J., Kaski, S.: The self-organizing map as a visual information retrieval method. In: Proceedings of WSOM'07, 6th International Workshop on Self-Organizing Maps (2007). URL http://eprints.pascal-network.org/archive/00003556/
21. Lee, R.: Lee-Associator: a chaotic auto-associative network for progressive memory recalling. Neural Networks 19, 644–666 (2006)
22. Lin, J.S.: Annealed chaotic neural network with nonlinear self-feedback and its application to clustering problem. Pattern Recognition 34, 1093–1104 (2001)
23. Mirzaei, A., Safabakhsh, R.: Optimal matching by the transiently chaotic neural network. Applied Soft Computing 9(3), 863–873 (2009). DOI: 10.1016/j.asoc.2008.07.009
24. Park, S.S., Seo, K.K., Jang, D.S.: Expert system based on artificial neural networks for content-based image retrieval. Expert Systems with Applications 29(3), 589–597 (2005). DOI: 10.1016/j.eswa.2005.04.027
25. Romero, R.A.F., Vicentini, J.F., Oliveira, P.R., Traina, A.M.J.: Investigating the potential of ART neural network models for indexing and information retrieval. Int. J. Intell. Syst. 22(4), 319–336 (2007). DOI: 10.1002/int.v22:4
26. Wolf, A., Swift, J.B., Swinney, H.L., Vastano, J.A.: Determining Lyapunov exponents from a time series. Physica D: Nonlinear Phenomena 16(3), 285–317 (1985). DOI: 10.1016/0167-2789(85)90011-9
27. Xiu, C., Liu, X., Tang, Y., Zhang, Y.: A novel network of chaotic elements and its application in multi-valued associative memory. Physics Letters A 331, 217–224 (2004)
28. Zhao, L., Caceres, J., Jr., A.D., Szu, H.: Chaotic dynamics for multi-value content addressable memory. Neurocomputing 69, 1628–1636 (2006)
29. Zhao, L., Cupertino, T.H., Bertini Jr., J.R.: Chaotic synchronization in general network topology for scene segmentation. Neurocomputing 71(16-18), 3360–3366 (2008). DOI: 10.1016/j.neucom.2008.02.024
