November 12, 2010

11:18

WSPC/S0218-1274

02764

International Journal of Bifurcation and Chaos, Vol. 20, No. 10 (2010) 3225–3266 © World Scientific Publishing Company. DOI: 10.1142/S0218127410027647

AUTOASSOCIATIVE MEMORY CELLULAR NEURAL NETWORKS

MAKOTO ITOH
Fukuoka 811-0214, Japan

LEON O. CHUA
Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA 94720, USA

Received November 3, 2009; Revised March 1, 2010

An autoassociative memory is a device which accepts an input pattern and generates an output as the stored pattern which is most closely associated with the input. In this paper, we propose an autoassociative memory cellular neural network, which consists of one-dimensional cells with spatial derivative inputs, thresholds and memories. Computer simulations show that it exhibits good performance in face recognition: the network can retrieve the whole from a part of a face image, and can reproduce a clear version of a face image from a noisy one. For human memory, research on "visual illusions" and on "brain-damaged visual perception", such as the Thatcher illusion, the hemispatial neglect syndrome, the split-brain, and the hemispheric differences in recognition of faces, has fundamental importance. We simulate these in this paper using an autoassociative memory cellular neural network. Furthermore, we generate many composite face images with spurious patterns by applying genetic algorithms to this network. We also simulate a morphing between two faces using autoassociative memory.

Keywords: Autoassociative memory; Hebb's rule; cellular neural network; Hopfield model; spatial derivative; Thatcher illusion; hemispatial neglect; split-brain; spurious pattern; genetic algorithm; morphing.

1. Introduction

An autoassociative memory1 is a device which accepts an input pattern and generates an output as the stored pattern which is most closely associated with the input. The function of the autoassociative memory is to recall the corresponding stored pattern, and then reproduce a clear version of that pattern as the output [Maimon & Rokach, 2008]. An autoassociative memory can retrieve an entire pattern from only a tiny sample of itself; namely, it can recreate the whole from a small subset. From an engineering viewpoint, autoassociative memory is one of the most valuable brain functions.

For human memory, research on "visual illusions" and "brain-damaged visual perception", such as the Thatcher illusion, the hemispatial neglect syndrome, the split-brain, and the hemispheric differences in recognition of faces, seems to have fundamental importance.
• The Thatcher illusion is a phenomenon where it becomes difficult to detect local feature-changes in an upside-down face, despite identical changes being obvious in an upright face [Thompson, 1980]. It is named after former British Prime Minister Margaret Thatcher, on whose

1 A memory that produces output patterns dissimilar to its inputs is termed heteroassociative (i.e. associating patterns with other patterns).


photograph the effect has been most famously demonstrated, as shown in Fig. 1.
• The hemispatial neglect syndrome (also known as unilateral neglect, hemineglect, or spatial neglect) is characterized by a reduced awareness of stimuli on one side of space, even though there may be no sensory loss [Husain, 2008]. It results most commonly from brain injury to the right cerebral hemisphere, causing visual neglect of the left-hand side of space, as illustrated in Figs. 2 and 3. Right-sided spatial neglect is rare because there is redundant processing of the right space by both the left and right cerebral hemispheres.
• The split-brain is characterized by an incapacity of the cerebral hemispheres to exchange information [Sperry, 1993]. The brain's two hemispheres are linked by the corpus callosum, through which they communicate and coordinate. They excel at different functions: the right hemisphere of the cortex excels at nonverbal and spatial tasks, whereas the left hemisphere is usually dominant in verbal tasks such as speaking and writing. The right hemisphere controls the left side of the body, and the left hemisphere controls the right side. A patient with a split-brain is shown a picture of a chicken claw and a house

on a snowy field in separate visual fields, and asked to choose from a pack of picture cards the best association with the pictures, as illustrated in Fig. 4. The patient would choose a chicken to associate with the chicken claw, and a shovel to associate with the snow. That is, the patient's left hand picks up the picture of a hen since it is associated with the chicken claw in the top-left picture as viewed by the right eye. Similarly, the patient's right hand picks up a snow shovel since it is associated with the snow on the ground in the top-right picture as viewed by the left eye. The left hand points to the right hemisphere's choice, and the right hand points to the left hemisphere's choice. However, when asked to name the object on the left screen, the patient cannot name it, that is, what the right side of the brain is seeing, since communication between the two sides of the brain is inhibited, and visual information projected on the left side of the screen goes to the patient's right hemisphere, which does not control language. In contrast, when asked to name the object on the right side of the screen (top-right picture), the patient replies correctly, since visual information projected on the right side of the screen goes

Fig. 1. Thatcher illusion (reproduced from [Sugita, 2008]). The Thatcher illusion is a phenomenon where local feature-changes in an upside-down face, such as flipping the two eyes and the mouth vertically (reversing them with respect to the horizontal axis), do not appear obvious (top), in sharp contrast with the dramatic change in an upright face (bottom).


Fig. 2. A patient with hemispatial neglect might perceive only the right half of an object, and if asked to copy a drawing, they are likely to omit the material on the left (reproduced from Visual Hemi-Neglect: http://www.psych. ucalgary.ca/PACE/VA-Lab/).

to the patient's left hemisphere, which controls language.
• The hemispheric difference in recognition of faces is characterized by a phenomenon where emotions are expressed more intensely on the left side of the face, that is, faces are judged to be more intensely emotional when viewed exclusively by the right hemisphere, as shown in Fig. 5. The right hemisphere is dominant in the perception and recognition of faces [Rhawn, 1990].
Simulation of visual illusions and brain-damaged visual perception may provide fundamental insights into the general brain mechanisms of memory, perception and cognition [Itoh & Chua, 2007, 2008a]. The brain is a network of electrically active cells that communicate with each other via synapses. Knowledge acquired through experience and wired into our brain is combined and merged to create new possibilities. This process may be simulated by genetic algorithms. Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology, such as inheritance, mutation, selection and crossover [Mitchell, 1998]. Using these operations, a genetic algorithm may be able to arrive at better solutions. In the case of cellular neural networks, many interesting composite templates have been obtained by using simple operations from genetic algorithms [Itoh & Chua, 2003]. Thus, applying genetic algorithms to autoassociative

Fig. 3. Examples of drawings, and the copies made of them by hemispatial neglect patients (reproduced from [Thomas, 2008]). The left images were drawn by a patient suffering from hemispatial neglect. For example, if a patient is asked to draw a clock, the drawing might show only the numbers 1 to 7, with the other side distorted or left blank (top). After damage to one hemisphere of the brain, patients fail to be aware of items on one side of space.

memory may also provide fundamental insights into the general brain mechanisms of perception and cognition. In this paper, we propose an autoassociative memory cellular neural network, which is derived from Hebb's rule. The network consists of one-dimensional cells with spatial derivative inputs, thresholds and memories. Computer simulations show that it exhibits good performance in face recognition: the network can retrieve the whole face from a part of the face image, and it can also reproduce a clear version of a face image from a noisy one. Furthermore, we simulate the Thatcher illusion, the hemispatial neglect (visual neglect) syndrome, the split-brain syndrome, and the hemispheric differences in recognition of faces using this



Fig. 4. (a) Visual fields (reproduced from The Split Brain Experiments: http://nobelprize.org/). The right visual field is connected to the left hemisphere; the left visual field is connected to the right hemisphere. (b) Split-brain experiment (reproduced from Tycko Medical & Biological Art: http://tyckoart.com/). A split-brain patient receives separate visual stimulation, i.e., the top-left or the top-right picture is shown at different times (not simultaneously) to each cortical hemisphere. The right hemisphere saw the picture on the left (a chicken claw), and the left hemisphere saw the picture on the right (a snow scene). Both hemispheres could see all of the cards. The patient can show recognition of an object with the left (resp. right) hand, since that hand is controlled by the right (resp. left) side of the brain. Here, the patient's left hand picks up the picture of a hen since it is associated with the chicken claw in the top-left picture as viewed by the right eye. Similarly, the patient's right hand picks up a snow shovel since it is associated with the snow on the ground in the top-right picture as viewed by the left eye. Each hemisphere easily picked the card related to the picture it saw: the left hand pointed to the right hemisphere's choice, and the right hand pointed to the left hemisphere's choice. When the patient is asked to name the object on the left side of the screen (top-left picture), he cannot name it, that is, what the right side of the brain is seeing, since communication between the two sides of the brain is inhibited: visual information projected on the left side of the screen goes to the patient's right hemisphere, which does not control language. In contrast, when the patient is asked to name the object on the right side of the screen (top-right picture), he replies correctly: visual information projected on the right side of the screen goes to the patient's left hemisphere, which controls language.

Fig. 5. Hemispheric differences in recognition of faces (reproduced from Brain Overview: http://brainmind.com/). Emotions are expressed more intensely on the left side of the face (Hillary Clinton and Tom Cruise), that is, faces are judged to be more intensely emotional when viewed exclusively by the right hemisphere.


neural network. Finally, we show that many composite face images with spurious patterns can be retrieved by applying genetic algorithms to the network connection weights (coupling coefficients), and we also simulate a morphing between two faces using an autoassociative memory neural network.

2. Dynamics of Associative Memories

An autoassociative memory is a device which accepts an input pattern and generates an output as the stored pattern which is most closely associated with the input. The dynamics of autoassociative memories is described as follows: Consider M binary (−1/1) patterns σ¹, σ², …, σ^M, where each pattern σ^m contains N bits of information σ_i^m; namely

σ^m = (σ_1^m, σ_2^m, …, σ_N^m)ᵀ, m = 1, 2, …, M.   (1)

The dynamics of a basic autoassociative memory is defined as follows [Itoh & Chua, 2004]:

(1) Assign the coupling coefficients (Hebb's rule):

s_ij = Σ_{m=1}^M σ_i^m σ_j^m if i ≠ j, and s_ij = 0 if i = j.   (2)

(2) Assign the initial state v_i(0).
(3) Update the state of the network:

v_i(n+1) = sgn( Σ_{j=1}^N s_ij v_j(n) ),   (3)

where v_i denotes the state of the ith neuron, and sgn(x) = 1 if x ≥ 0, and sgn(x) = −1 if x < 0.

The above associative memory will converge to the desired pattern if M ≪ N [Müller & Reinhardt, 1990].

We next define the difference of s_{i,j}:

Δs_{i,j} = s_{i,j} − s_{i,j−1} if j ≠ 1, and Δs_{i,1} = s_{i,1} − s_{i,N}.   (4)

From Eq. (2), we obtain

Δs_{i,j} = s_{i,j} − s_{i,j−1} = Σ_{m=1}^M σ_i^m σ_j^m − Σ_{m=1}^M σ_i^m σ_{j−1}^m = Σ_{m=1}^M σ_i^m (σ_j^m − σ_{j−1}^m) = Σ_{m=1}^M σ_i^m Δσ_j^m,   (5)

where

Δσ_j^m = σ_j^m − σ_{j−1}^m if j ≠ 1, and Δσ_1^m = σ_1^m − σ_N^m,   (6)

and Δσ_j^m and Δs_{i,j} indicate the difference values between adjacent pixels and adjacent coupling coefficients, respectively, which can be calculated as finite derivatives. Since the difference Δσ_j^m satisfies

|Δσ_j^m| = 0 if σ_j^m = σ_{j−1}^m, and |Δσ_j^m| = 2 if σ_j^m ≠ σ_{j−1}^m,   (7)

a set of points satisfying |Δσ_j^m| = |σ_j^m − σ_{j−1}^m| = 2 indicates an edge, which can be interpreted as a jump in intensity from one pixel to the next. From Eqs. (5) and (6), we have

Δσ_j^n Δs_{i,j} = Δσ_j^n Σ_{m=1}^M σ_i^m Δσ_j^m = Σ_{m=1}^M σ_i^m (Δσ_j^m Δσ_j^n),   (8)

and

Σ_{j=1}^N Δσ_j^n Δs_{i,j} = Σ_{j=1}^N Σ_{m=1}^M σ_i^m (Δσ_j^m Δσ_j^n) = σ_i^n Σ_{j=1}^N (Δσ_j^n)² + Σ_{m=1, m≠n}^M σ_i^m Σ_{j=1}^N Δσ_j^m Δσ_j^n.   (9)

If we define

Δσ^m = (Δσ_1^m, Δσ_2^m, …, Δσ_N^m)ᵀ,   (10)

and if we assume that Δσ^m and Δσ^n are orthogonal (m ≠ n), that is,

Σ_{j=1}^N Δσ_j^m Δσ_j^n = 0,   (11)

then the second term in Eq. (9) is null, and we would have

Σ_{j=1}^N Δσ_j^n Δs_{i,j} = σ_i^n Σ_{j=1}^N (Δσ_j^n)² ∝ σ_i^n,   (12)

where the symbol A ∝ B denotes "the relation A is proportional to B". From Eq. (12), we get the relation

sgn( Σ_{j=1}^N Δσ_j^n Δs_{i,j} ) = σ_i^n.   (13)

Hence, the dynamics of our autoassociative memory is defined as follows:

(1) Assign the coupling coefficients and their difference:

s_ij = Σ_{m=1}^M σ_i^m σ_j^m if i ≠ j, and s_ij = 0 if i = j;
Δs_{i,j} = s_{i,j} − s_{i,j−1} if j ≠ 1, and Δs_{i,1} = s_{i,1} − s_{i,N}.   (14)

(2) Assign the initial state v_i(0).
(3) Update the state of the network:

v_i(n+1) = U( Σ_{j=1}^N Δs_{ij} Δv_j(n) ),   (15)

where

Δv_j(n) = v_j(n) − v_{j−1}(n) if j ≠ 1, and Δv_1(n) = v_1(n) − v_N(n);
U(x) = 1 if x > 0, U(x) = v_i(n) if x = 0, and U(x) = −1 if x < 0.   (16)

In the discrete Hopfield model, a modified bipolar output function is used, where the output of the cells remains the same if the current state is equal to the threshold value [Hopfield, 1982]. Thus, we use the output function U(x) in place of sgn(x). In order to realize U(x), each cell must have a memory.2 For practical reasons, we add a random threshold θ_i(n) to Eq. (15), and we update the network asynchronously.3 The random threshold θ_i(n) avoids converging to a spurious pattern, and the asynchronous update plays a significant role in the convergence of the system [Levine & Aparicio, 1994]. Hence, the dynamics of our noise-augmented autoassociative memory cellular neural network is defined as follows:

(1) Assign the coupling coefficients and their difference (finite derivative):

s_ij = Σ_{m=1}^M σ_i^m σ_j^m if i ≠ j, and s_ij = 0 if i = j;
Δs_{i,j} = s_{i,j} − s_{i,j−1} if j ≠ 1, and Δs_{i,1} = s_{i,1} − s_{i,N}.   (17)

(2) Assign the initial state v_i(0).
(3) Generate a random integer i. Update the state of the ith cell asynchronously:

v_i(n+1) = U( Σ_{j=1}^N Δs_{ij} Δv_j(n) − θ_i(n) ),   (18)

where θ_i(n) denotes the threshold of the ith cell, which is generated by a random integer generator, and

Δv_j(n) = v_j(n) − v_{j−1}(n) if j ≠ 1, and Δv_1(n) = v_1(n) − v_N(n);
U(x) = 1 if x > 0, U(x) = v_i(n) if x = 0, and U(x) = −1 if x < 0.   (19)

2 The output function U(x) can be realized by using a current-controlled memristor, since the memristance does not change and holds the previous value if the current through the memristor is zero [Chua, 1971; Chua & Kang, 1976; Itoh & Chua, 2008a]. Observe that each memristor would save one memory cell in any hardware implementation.
3 We select neurons randomly, and update individual neurons independently.
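As a concrete illustration, the update rules of Eqs. (17)–(19) can be sketched in NumPy. This is a minimal sketch under stated assumptions: the toy pattern sizes, the default zero threshold, and the random test data are all illustrative, and exact retrieval is only expected when the difference vectors are nearly orthogonal, as assumed in Eq. (11).

```python
import numpy as np

rng = np.random.default_rng(0)

def circ_diff(x, axis=-1):
    # Circular first difference of Eqs. (4) and (6): entry j is x[j] - x[j-1],
    # and entry 0 wraps around to x[0] - x[-1].
    return x - np.roll(x, 1, axis=axis)

def hebb_weights(patterns):
    # Hebb's rule, Eq. (17): s_ij = sum_m sigma_i^m sigma_j^m, zero diagonal.
    S = patterns.T @ patterns
    np.fill_diagonal(S, 0)
    return S

def retrieve(patterns, v0, steps=20000, theta_range=0):
    # Asynchronous update of network (18): one randomly chosen cell per step;
    # U(x) keeps the previous state when x == 0, as in Eq. (19).
    dS = circ_diff(hebb_weights(patterns), axis=1)   # Delta s_{i,j}
    v = v0.copy()
    N = v.size
    for _ in range(steps):
        i = rng.integers(N)                          # random cell index
        theta = rng.integers(-theta_range, theta_range + 1) if theta_range else 0
        x = dS[i] @ circ_diff(v) - theta
        if x > 0:
            v[i] = 1
        elif x < 0:
            v[i] = -1                                # x == 0: keep previous state
    return v

# Toy usage: store two random bipolar patterns, then start from a corrupted copy.
N, M = 64, 2
patterns = rng.choice([-1, 1], size=(M, N))
noisy = patterns[0].copy()
noisy[rng.choice(N, size=6, replace=False)] *= -1    # flip a few pixels
out = retrieve(patterns, noisy)
```

Setting `theta_range > 0` draws a random threshold per step, mimicking the noise-augmented variant; with `theta_range = 0` the sketch reduces to network (15)–(16) under asynchronous update.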


3. Performance of Associative Memory

Most autoassociative memory networks suffer from the serious problem of spurious patterns, that is, the network may converge to a spurious pattern different from the stored patterns. There are some practical approaches to overcome this problem [Maimon & Rokach, 2008]. In this section, we demonstrate that the autoassociative memory cellular neural network exhibits good performance on this problem.

3.1. Difference patterns

The network (18) retrieves stored patterns using the difference values between adjacent pixels: Δv_j(n) = v_j(n) − v_{j−1}(n). For simplicity, we display in Fig. 6 a set of points satisfying |Δv_j(n)| = 2, which exhibits a jump in intensity from one pixel to the next.4

Fig. 6. Four stored patterns (left) and a set of points satisfying |Δv_j(n)| = 2, representing a jump in intensity from one pixel to the next (right).

4 We used face images 62 pixels wide and 40 pixels high; that is, W = 62 and H = 40, for a total of WH = 62 × 40 = 2480 pixels. Thus, each pattern carries N = 2480 bits of information. Furthermore, due to memory capacity restrictions, we cannot store more than four patterns at this resolution.
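The edge interpretation of Eq. (7) can be checked numerically; the short bipolar row below is a made-up example.

```python
import numpy as np

# Edge detection via the circular difference of Eq. (6): for a bipolar row,
# |sigma_j - sigma_{j-1}| is 0 on constant runs and exactly 2 at intensity
# jumps (Eq. (7)). The set of points where |diff| == 2 is what Fig. 6 displays.
row = np.array([-1, -1, 1, 1, 1, -1, 1, -1])
diff = row - np.roll(row, 1)          # Delta sigma_j, with sigma_1 - sigma_N at j = 1
edges = np.abs(diff) == 2             # True at the four jumps in this row
```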


Fig. 7. Retrieved patterns from Eqs. (3) and (18) starting from stored patterns (left) as their initial states. The network (18) retrieves the stored pattern correctly (right). In contrast, the network (3) cannot retrieve any stored patterns (center), but converges instead to spurious patterns.

We also display the retrieval of four stored patterns in Fig. 7. Compare the performance of network (3) with that of network (18), which is updated asynchronously5 without threshold.

3.2. Retrieval performance

Humans can retrieve a complete face from a part of the face. Neural networks can also retrieve the whole from a part of a pattern, as shown in Figs. 8–13. Compare the performance of network (3) with that of network (18) under asynchronous update. The network (18) without thresholds can retrieve the whole pattern starting from only a part of a stored pattern. In contrast, the network (3) cannot retrieve any stored pattern completely.

3.3. Noise performance

The brain can reduce noise or ameliorate its effects. Neural networks can also reproduce a clear, noise-free pattern at the output when the input is a noisy version of the stored pattern. We show the robustness to noise of associative memory in Figs. 14–17. Let us compare the noise performance of network (3) with that of network (18). Observe that network (18) without thresholds can reproduce the whole pattern starting from a noisy pattern. In contrast, network (3) cannot reproduce the stored pattern completely.

3.4. Random generation of stored patterns

The network (18) can reproduce stored patterns from random noise inputs. It reproduces each of the four stored patterns (in random order) as shown in Fig. 18, devoid of any spurious pattern if we use a random threshold θ_i(n). Note that the appearance rate of the four stored patterns is not uniform over the short period of time exhibited.
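The noise experiments of Figs. 14–17 amount to replacing a fraction p of the pixels with random binary values. A sketch (with an illustrative random "pattern" standing in for a stored face) might look as follows.

```python
import numpy as np

# Corrupting a stored bipolar pattern with binary noise, as in Figs. 14-17:
# a fraction p of randomly chosen pixels is replaced by random +/-1 values
# (a replacement, not a deterministic flip).
rng = np.random.default_rng(1)

def corrupt(pattern, p):
    noisy = pattern.copy()
    k = int(p * pattern.size)
    idx = rng.choice(pattern.size, size=k, replace=False)
    noisy[idx] = rng.choice([-1, 1], size=k)
    return noisy

pattern = rng.choice([-1, 1], size=2480)       # N = 2480 bits, as in the footnote
noisy = corrupt(pattern, 0.30)
agreement = np.mean(noisy == pattern)          # typically about 0.85 for p = 0.30
```

Since half of the replaced pixels match the original by chance, the expected agreement is 1 − p/2, i.e. about 0.85 at p = 0.30 and about 0.58 at p = 0.84, the two noise levels used in the figures.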

In this paper, we updated the cells asynchronously to achieve quick convergence [Levine & Aparicio, 1994].


Fig. 8. Retrieval of the whole from a part (a set of points satisfying |Δv_j(n)| = 2) of the stored pattern on the left. The network (3) cannot retrieve the stored pattern completely.

Fig. 9. Retrieval of the whole from a part (a set of points satisfying |Δv_j(n)| = 2) of the stored pattern on the left. The network (18) retrieves the stored pattern completely.


Fig. 10. Retrieval of the whole from a part (top of head) of the pattern on the left. The network (3) cannot retrieve the stored pattern completely.

Fig. 11. Retrieval of the whole from a part (top of head) of the pattern on the left. The network (18) retrieves the stored pattern completely.


Fig. 12. Retrieval of the whole from a part (10% of the stored pattern) of the pattern on the left. The network (3) cannot retrieve the stored pattern completely.

Fig. 13. Retrieval of the whole from a part (10% of the stored pattern) of the pattern on the left. The network (18) retrieves the stored pattern completely.


Fig. 14. Random noise performance. The network (3) cannot reproduce the stored pattern (left) completely from an initial noisy pattern (center). In this case, 30 percent of image data are replaced with random binary noise.

Fig. 15. Random noise performance. The network (18) reproduces the stored pattern (left) starting from an initial noisy pattern (center). In this case, 30 percent of image data are replaced with random binary noise.


Fig. 16. Random noise performance. The network (3) cannot reproduce the stored pattern (left) completely from an initial noisy pattern (center). In this case, 84 percent of image data are replaced with random binary noise.

Fig. 17. Random noise performance. The network (18) reproduces the stored pattern (left) starting from an initial noisy pattern (center). In this case, 84 percent of image data are replaced with random binary noise.


Fig. 18. Reproduction of stored patterns from 100% random noise inputs. The appearance of four patterns is not uniform. The first 50 reproduced patterns are ordered from top to bottom, and then from left to right. Here, the random threshold θi (n) is generated by random numbers between −400 and 400.


4. Visual Illusion and Visual Neglect

In this section, we simulate the Thatcher visual illusion, the hemispatial neglect syndrome, the split-brain syndrome, and the hemispheric differences in recognition of faces via our autoassociative memory.

4.1. Thatcher illusion

The Thatcher illusion is a phenomenon where it becomes difficult to detect local feature-changes in an upside-down face, despite identical changes being obvious in an upright face, as shown in Fig. 1. The network (18) without thresholds cannot retrieve the whole face completely from an upside-down face image (without local feature-changes), as shown in Fig. 19. However, the same network (18) without thresholds can retrieve the whole face completely starting from an upside-down face image with vertically-flipped6 eyes, as shown in Fig. 20. This example suggests that our brain may also not recognize this feature-change, since the network can retrieve the whole face easily. The time evolution depicting the dynamics from Fig. 20 is shown in Fig. 21. Similarly, the network (18) without thresholds cannot retrieve the whole face completely from a mirrored face image, as shown in Fig. 22. However, it can retrieve the stored pattern starting from a mirrored image with a horizontally-flipped part, as shown in Fig. 23. We also simulated the Thatcher illusion phenomenon using the more familiar images displayed in Figs. 24–31. Observe that the network (18) without thresholds cannot retrieve the whole face completely from a mirrored face image, as shown in Fig. 27. In contrast, the network correctly retrieves the stored pattern if an upside-down face image with vertically-flipped eyes (resp. mouth) is given as the initial pattern, as shown in Fig. 28 (resp. Fig. 29). The network (18) can also retrieve a stored image from a well-known upside-down face with vertically-flipped eyes and mouth, as shown in Figs. 30 and 31. Note that we have replaced the stored binary pattern of Thatcher in Fig. 26 with that of Fig. 31.

Fig. 19. Time evolution showing the retrieval of stored patterns from flipped (upside-down) initial patterns. The network (18) without threshold cannot correctly retrieve the second and the third stored patterns from their corresponding flipped patterns. The symbol m denotes the iteration number of the asynchronous update.

6 This operation turns the eyes upside-down, that is, the eyes are reversed along their horizontal axis.
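The flipping operations behind Figs. 19–21 are simple array reversals. The sketch below assumes a row-major H × W image; the eye-band coordinates are hypothetical stand-ins.

```python
import numpy as np

# "Thatcherizing" a pattern: turn the whole image upside-down (reverse along the
# horizontal axis), then flip a local region (the eyes) back the same way, as in
# Figs. 19-21. Image contents and region coordinates are illustrative.
img = np.arange(40 * 62).reshape(40, 62)      # H = 40, W = 62, as in footnote 4

upside_down = img[::-1, :]                     # whole face reversed along horizontal axis

thatcherized = upside_down.copy()
r0, r1, c0, c1 = 10, 16, 20, 42                # hypothetical eye-band coordinates
thatcherized[r0:r1, c0:c1] = thatcherized[r0:r1, c0:c1][::-1, :]  # vertically flip eyes
```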


Fig. 20. Initial patterns with local feature-changes in an upside-down face (left). Observe that the upside-down face images have vertically-flipped (reversed along the horizontal axis) eyes (center). Note that the flipped pattern of the fourth woman’s left eye is not changed since it is a vertically symmetric rectangle. The network (18) without threshold correctly retrieves all four stored patterns.

Fig. 21. Time evolution showing the retrieval of stored patterns from a flipped pattern with local feature-changes (identical to the leftmost patterns in Fig. 20). The network (18) without threshold correctly retrieves each stored pattern if an upside-down face image with vertically-flipped eyes (left) is given as an initial pattern.


Fig. 22. Retrieval of stored patterns from a mirrored initial pattern. The network (18) without threshold cannot retrieve the stored pattern completely from a mirrored initial pattern.

4.2. Hemispatial neglect

The hemispatial neglect syndrome is characterized by reduced awareness of stimuli on one side of space, even though there may be no sensory loss in that part of the retina. It most commonly results from brain injury to the right cerebral hemisphere, causing visual neglect of the left-hand side of space, as shown in Fig. 3. The hemispatial neglect syndrome can be simulated by using the network (18), as shown in Fig. 32. Observe that the left-hand side of the woman's face disappears upon setting the coupling coefficients s_ij (or their difference Δs_ij) corresponding to the left-hand side of a face to 0:

s_ij = 0 or Δs_ij = 0 if (i mod W) ≤ W/2,   (20)

and θ_i(n) = −1, where W indicates the width of a face image. We set θ_i(n) = −1, since the new state v_i(n+1) must turn to the background color quickly, that is, v_i(n+1) → 0, if

Σ_{j=1}^N Δs_ij Δv_j(n) − θ_i(n) = 0.   (21)

The hemispatial neglect syndrome can also be simulated by using the system

v_i(n+1) = sgn( Σ_{j=1}^N Δs_ij Δv_j(n) − θ_i(n) ).   (22)
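Eq. (20) can be realized by zeroing rows of the coupling-difference matrix. In this sketch the matrix entries are random stand-ins for Δs_ij, and row-major, 0-based indexing of the W × H image is assumed.

```python
import numpy as np

# Simulating hemispatial neglect per Eq. (20): zero the coupling differences
# Delta s_ij for every cell i lying in the left half of a W-pixel-wide image,
# so those cells receive no input and decay to the background under theta = -1.
W, H = 8, 4
N = W * H
rng = np.random.default_rng(2)
dS = rng.integers(-4, 5, size=(N, N)).astype(float)   # random stand-in for Delta s_ij

left_half = (np.arange(N) % W) <= W // 2              # cells with (i mod W) <= W/2
dS[left_half, :] = 0.0                                # zero the whole row for each such cell
```

The analogous split-brain masking of Eq. (23) instead zeroes by column index, i.e. by the position j of the input pixel rather than the cell i being updated.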

We also simulated the hemispatial neglect syndrome using more familiar images, which is displayed in Figs. 33 and 34.

4.3. Split-brain

The split-brain is characterized by a brain in which the two hemispheres have been separated by severing the commissures that connect them, as shown in Fig. 4. A split-brain patient receives separate visual stimulation. The patient can show recognition of an


Fig. 23. Retrieval of stored patterns from a mirrored initial pattern with local feature-changes. The network (18) without threshold retrieves the stored pattern completely if a mirrored face image with a horizontally-flipped part (center) is given as initial pattern. The local pattern denoted by a left-right double arrow is reversed along the central vertical axis again (center). Thus, a mirrored initial pattern has local feature-change (a horizontally flipped part). Since the pictures are not centered completely, some small patterns disappear by a flipping operation.

object with their left (resp. right) hand, since that hand is controlled by the right (resp. left) side of the brain. Split-brain can be simulated by using the network (18) as shown in Figs. 35 and 36. That is, by neglecting the inputs on the left side, the network (18) can retrieve what the left side of the brain is seeing (left object). Observe that the whole face of the woman is retrieved by neglecting the inputs on

the left-hand side (resp. right-hand side) of a face, as shown in Fig. 35 (resp. Fig. 36). It can also be simulated by setting to zero the Δs_ij which correspond to the left-hand side of a face:

Δs_ij = 0 if (j mod W) ≤ W/2.   (23)

In this example, a random threshold is used to avoid converging to a spurious pattern. The network (18) can also associate three (or more) stored patterns

Fig. 24. Thatcher illusion. It becomes difficult to detect local feature-changes in an upside-down face (top left), despite identical changes being obvious in an upright face (bottom left).

Fig. 25. Thatcher illusion. It becomes difficult to detect local feature-changes in an upside-down mouth (top left), despite identical changes being obvious in an upright face (bottom left).

Fig. 26. Color images (top) and their binary patterns (bottom). The last two binary images (bottom center and bottom right) are revised by inserting zero pixels (white area) outside the left and right edges of the original images, to obtain binary images with the same width and height.

Fig. 27. Retrieval of stored patterns (right) from flipped initial patterns (left). The network (18) cannot correctly retrieve the first and the second stored patterns from their corresponding flipped patterns.


Fig. 28. Retrieval of stored patterns from a flipped pattern with local feature-changes. The network (18) correctly retrieves the stored pattern if an upside-down face image with vertically-flipped eyes (center) is given as the initial pattern. An upside-down face image with vertically-flipped eyes, its binary pattern, and the retrieved pattern are illustrated from left to right.

Fig. 29. Retrieval of stored patterns from a flipped pattern with local feature-changes. The network (18) correctly retrieves the stored pattern if an upside-down face image with a vertically-flipped mouth (center) is given as the initial pattern. An upside-down face image with a vertically-flipped mouth, its binary pattern, and the retrieved pattern are illustrated from left to right.

Fig. 30. Retrieval of a stored pattern from a well-known flipped image in Fig. 1. The network (18) correctly retrieves the stored pattern if an upside-down face image with vertically-flipped eyes and mouth (center) is given as an initial pattern.


Fig. 31. Stored binary pattern corresponding to Fig. 1.

Fig. 32. Hemispatial neglect. The left-hand side of a face disappears upon setting the coupling coefficients corresponding to the left-hand side to 0.


Fig. 33. Four stored patterns.

Fig. 34. Hemispatial neglect. The left-hand side of a face disappears upon setting the coupling coefficients corresponding to the left-hand side to 0.


Fig. 35. Split-brain. The whole face of the woman on the right is retrieved by neglecting the inputs of the left-hand side of a composite face.


Fig. 36. Split-brain. The whole face of the woman on the right is retrieved by neglecting the inputs of the right-hand side of a composite face.


Fig. 37. Multi-split-brain. The women's faces are retrieved by using only part of the composite face: the upper left-hand part (fourth row), the upper right-hand part (fifth row), and the lower part (bottom row).


Fig. 38. Hemispheric differences in recognition of faces. The network (18) for α = 2/3 can retrieve what the right side of the brain is seeing (left object).
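The suppression in Eq. (24) amounts to masking the coupling matrix: couplings originating in the right half of each image row are scaled by α. A minimal Python sketch; the row-major cell indexing, the function name, and the matrix form are assumptions made for illustration, not the paper's implementation:

```python
import numpy as np

def suppress_right_half(S, width, alpha=0.0):
    # Scale couplings from cells in the right half of each image row by
    # alpha, in the spirit of Eq. (24): alpha = 0 reduces to the
    # split-brain network, alpha = 1 restores the full network.
    j = np.arange(S.shape[1])
    mask = np.where((j % width) <= width // 2, 1.0, alpha)
    return S * mask[np.newaxis, :]
```

With alpha = 0 the right-half inputs are neglected entirely, matching the split-brain limit noted after Eq. (24).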


for one given pattern if a multi-composite pattern is given as its initial pattern, as shown in Fig. 37.

4.4. Hemispheric differences in recognition of faces

The right hemisphere is dominant in the perception and recognition of faces. Faces are judged to be more intensely emotional when viewed exclusively by the right hemisphere (left object), as shown in Fig. 5 [Rhawn, 1990]. This can be simulated by suppressing the $\Delta s_{ij}$ corresponding to the right-hand side of a face:

$$
v_i(n+1) = U\Biggl( \sum_{(j \bmod W) \le \frac{W}{2}} \Delta s_{ij}\, \Delta v_j(n) + \alpha \sum_{(j \bmod W) > \frac{W}{2}} \Delta s_{ij}\, \Delta v_j(n) - \theta_i(n) \Biggr), \tag{24}
$$

where $0 \le \alpha < 1$. Observe that the network (18) with $\alpha \le 2/3$ can retrieve what the right side of the brain is seeing (the left object), as shown in Fig. 38. If $\alpha = 0$, the system (24) is equivalent to the system for the split-brain.

5. Composite Pattern Generation via Genetic Algorithms

The brain is a network of electrically active cells that communicate with each other via synapses. The knowledge wired into our brain through experience is combined and merged to create new possibilities; that is, our brain can create new faces by combining and merging the stored faces. This process may be simulated by genetic algorithms [Mitchell, 1998]. Genetic algorithms are a class of evolutionary algorithms that use techniques inspired by evolutionary biology, such as inheritance, mutation, selection and crossover.

In this section, we apply genetic algorithms to the coupling coefficients $s_{i,j}$ in order to generate composite patterns. We first define the $i$th parent string using the coupling coefficients $s_{i,j}$:

$$
s_{i,1}\; s_{i,2}\; s_{i,3}\; \ldots\; s_{i,N-1}\; s_{i,N}
$$

where $i = 1, 2, \ldots, N$. Thus, there are $N$ parent strings of length $N$. Define next the coupling coefficients $s^m_{i,j}$ for the $m$th pattern $\sigma^m$:

$$
s^m_{ij} = \begin{cases} \sigma^m_i \sigma^m_j, & \text{if } i \ne j, \\ 0, & \text{if } i = j, \end{cases} \tag{25}
$$

where

$$
\sigma^m = \begin{pmatrix} \sigma^m_1 \\ \sigma^m_2 \\ \vdots \\ \sigma^m_N \end{pmatrix}. \tag{26}
$$

We can also define strings using the $s^m_{i,j}$ ($m = 1, 2, \ldots, M$) from (25):

$$
s^m_{i,1}\; s^m_{i,2}\; s^m_{i,3}\; \ldots\; s^m_{i,N-1}\; s^m_{i,N}
$$

where $i = 1, 2, \ldots, N$. In this case, we have $MN$ strings of length $N$.

The genetic algorithm uses two basic operators: crossover and mutation [Mitchell, 1998]. We explain these operators next using binary strings.

(1) One-point crossover: A single crossover point on both parents' strings is selected. All data beyond that point is swapped between the two parents, as shown in Fig. 42.
(2) Two-point crossover: Two crossover points are selected on the parent strings. Everything between the two points is swapped between the parents, rendering two child strings, as shown in Fig. 43. The selected crossover points of the two parents may differ from each other, as shown in Fig. 44.
(3) Mutation: One or more bit values are changed from the initial string. This can result in entirely new string values, as shown in Fig. 45.

We next apply the above crossover operators to the $N + MN = (M+1)N$ strings defined by the coupling coefficients $s_{i,j}$ and $s^m_{i,j}$. Many interesting composite face images with spurious patterns can be retrieved, as shown in Fig. 46. The crossover points and parent strings are randomly selected. In this example, we used the network

$$
v_i(n+1) = U\Biggl( \sum_{j=1}^{N} s_{ij}\, v_j(n) \Biggr), \tag{27}
$$

since the network (18) could not generate many spurious patterns. We also show another example in Figs. 47 and 48.
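The Hebbian storage rule (25)–(26) and the retrieval network (27) can be sketched as follows. Taking the output function U to be the sign threshold and updating cells asynchronously in random order are assumptions made for this illustration:

```python
import numpy as np

def hebbian_weights(patterns):
    # s_ij = sum over m of sigma^m_i * sigma^m_j, with zero diagonal,
    # following Eqs. (25)-(26); patterns are +/-1 vectors.
    S = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(S, 0.0)
    return S

def retrieve(S, v0, n_updates=10000, seed=0):
    # Asynchronous iteration of v_i(n+1) = U(sum_j s_ij v_j(n)), Eq. (27),
    # with U taken as a sign threshold (an assumption of this sketch).
    rng = np.random.default_rng(seed)
    v = v0.copy().astype(float)
    for _ in range(n_updates):
        i = rng.integers(len(v))
        v[i] = 1.0 if S[i] @ v >= 0 else -1.0
    return v
```

Starting from a corrupted version of a stored pattern, the iteration relaxes to the stored pattern, which is the retrieval behavior exploited throughout this section.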


Fig. 39. Hemispheric differences in recognition of faces. The network (18) for α = 2/3 can retrieve what the right side of the brain is seeing (left object).


Fig. 40. Hemispheric differences in recognition of faces. The network (18) for α = 2/3 can retrieve what the right side of the brain is seeing (left object).


Fig. 41. Hemispheric differences in recognition of faces. The network (18) for α = 2/3 can retrieve what the right side of the brain is seeing (left object).


Fig. 42. One-point crossover. A single crossover point on both parents' strings is selected. All data beyond that point is swapped between the two parents.

Fig. 43. Two-point crossover. Two crossover points are selected on the parent strings. Everything between the two points is swapped between the parents, rendering two child strings.

Fig. 44. Two-point crossover. The selected crossover points of the two parents may differ from each other.

Fig. 45. Mutation. One or more bit values are changed from the initial string. This can result in entirely new string values.
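The operators of Figs. 42–45 can be sketched on binary strings as follows; the function names and the per-bit mutation rate are our own choices for this illustration:

```python
import random

def one_point_crossover(a, b, point):
    # Swap all data beyond the single crossover point (Fig. 42).
    return a[:point] + b[point:], b[:point] + a[point:]

def two_point_crossover(a, b, p, q):
    # Swap everything between the two crossover points (Fig. 43).
    return a[:p] + b[p:q] + a[q:], b[:p] + a[p:q] + b[q:]

def mutate(s, rate, rng):
    # Flip each bit independently with probability `rate` (Fig. 45).
    return ''.join(('1' if c == '0' else '0') if rng.random() < rate else c
                   for c in s)
```

In the paper's setting these strings would be rows of coupling coefficients rather than bits, but the swap-and-flip mechanics are identical.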


Fig. 46. Composite pattern generation via genetic algorithms. Crossover points and parent strings are randomly selected. The initial pattern is shown in the upper left corner. The other composite patterns are generated by applying the genetic algorithm repeatedly; they are sampled every 50 000 asynchronous updates and listed from top to bottom and from left to right.


Fig. 46. (Continued)


Fig. 47. Three stored patterns.

Fig. 48. Composite pattern generation via genetic algorithms. The initial pattern is shown in the upper left corner. The other composite patterns are randomly selected from the patterns generated by the genetic algorithm and listed from top to bottom, then from left to right.


Fig. 49. Morphing from the left image to the right image (reproduced from Morphing: http://en.wikipedia.org/wiki/Morphing and http://dictionary.zdnet.com/definition/morphing.html).



Fig. 50. Morphing. An initial pattern (top) is changed smoothly into a stored pattern (bottom).


Fig. 51. Morphing. An initial pattern (top) is changed smoothly into a stored pattern (bottom).


Fig. 52. Morphing. An initial pattern (top) is changed into a stored pattern (bottom). The last two transitions are noisy since the two morphing patterns are quite different.


Fig. 53. The network (28) transforms the initial patterns on the top into the patterns on the bottom.


6. Morphing

Morphing changes (or morphs) one image into another through a seamless transition [Gomes et al., 1998]. It is often used to depict one person turning smoothly into another, as shown in Fig. 49. We can simulate morphing by using the network

$$
v_i(n+1) = U\Biggl( \sum_{j=1}^{N} \Delta s^m_{ij}\, \Delta v_j(n) \Biggr), \tag{28}
$$

where $\Delta s^m_{ij}$ denotes the difference of the coupling coefficients $s^m_{ij}$ for the transformed pattern $\sigma^m$. The network (28) transforms an initial pattern $v(0)$ into the pattern $\sigma^m$. Observe the smooth transition between the two women's faces in Figs. 50 and 51. However, if the two faces are quite different, the transition is noisy, as shown in Fig. 52. Finally, we simulate the morphing operation using more familiar images, as displayed in Fig. 53.
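The relaxation behind (28) can be illustrated by driving the state toward a single stored pattern and recording intermediate frames. Here the spatial-derivative couplings $\Delta s^m_{ij}$ are replaced by plain Hebbian couplings $\sigma^m_i \sigma^m_j$, so this is a simplified sketch of the idea, not the paper's exact network:

```python
import numpy as np

def morph_sequence(v0, target, n_frames=8, seed=0):
    # Drive the state toward the stored pattern `target` under its
    # Hebbian field, recording one frame per asynchronous sweep.
    # Intermediate frames play the role of the morph sequence.
    rng = np.random.default_rng(seed)
    S = np.outer(target, target).astype(float)
    np.fill_diagonal(S, 0.0)
    v = v0.astype(float).copy()
    frames = [v.copy()]
    for _ in range(n_frames - 1):
        for i in rng.permutation(len(v)):  # one asynchronous sweep
            v[i] = 1.0 if S[i] @ v >= 0 else -1.0
        frames.append(v.copy())
    return frames
```

As with Fig. 52, when the initial and target patterns are very different (small overlap), the intermediate frames in such a relaxation look noisy before settling.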

7. Conclusion

In this paper, we proposed an autoassociative memory cellular neural network and showed that it performs well in face recognition, in the simulation of visual illusions and of brain-damaged visual perception, in composite face-image generation, and in morphing. There are many possible generalizations and applications of this neural network, which will be presented elsewhere.

Acknowledgment

This work was supported in part by AFOSR grant number FA9550-10-1-0290.

References

Chua, L. O. [1971] "Memristor – The missing circuit element," IEEE Trans. Circuit Th. CT-18, 507–519.
Chua, L. O. & Kang, S. M. [1976] "Memristive devices and systems," Proc. IEEE 64, 209–223.
Gomes, J., Darsa, L., Costa, B. & Velho, L. [1998] Warping & Morphing of Graphical Objects (Morgan Kaufmann, San Francisco).
Hopfield, J. J. [1982] "Neural networks and physical systems with emergent collective computational abilities," Proc. Nat. Acad. Sci. USA 79, 2554–2558.
Husain, M. [2008] "Hemineglect," Scholarpedia 3, 3681.
Itoh, M. & Chua, L. O. [2003] "Equivalent CNN cell models and patterns," Int. J. Bifurcation and Chaos 13, 1055–1161.
Itoh, M. & Chua, L. O. [2004] "Star cellular neural networks for associative and dynamic memories," Int. J. Bifurcation and Chaos 14, 1725–1772.
Itoh, M. & Chua, L. O. [2007] "Advanced image processing cellular neural networks," Int. J. Bifurcation and Chaos 17, 1109–1150.
Itoh, M. & Chua, L. O. [2008a] "Imitation of visual illusions via OpenCV and CNN," Int. J. Bifurcation and Chaos 18, 3551–3609.
Itoh, M. & Chua, L. O. [2008b] "Memristor oscillators," Int. J. Bifurcation and Chaos 18, 3183–3206.
Levine, D. S. & Aparicio, M. [1994] Neural Networks for Knowledge Representation and Inference (Lawrence Erlbaum Associates).
Maimon, O. & Rokach, L. [2008] Soft Computing for Knowledge Discovery and Data Mining (Springer-Verlag, New York).
Mitchell, M. [1998] An Introduction to Genetic Algorithms (MIT Press, Cambridge, MA).
Müller, B. & Reinhardt, J. [1990] Neural Networks: An Introduction (Springer-Verlag, New York).
Rhawn, J. [1990] Neuropsychology, Neuropsychiatry, and Behavioral Neurology (Plenum Press, New York).
Sperry, R. W. [1993] Some Effects of Disconnecting the Cerebral Hemispheres, Nobel Lectures in Physiology or Medicine 1981–1990 (World Scientific, Singapore).
Sugita, Y. [2008] "Face perception in monkeys reared with no exposure to faces," Rep. Japan Sci. Technol. Agency 456 (http://www.jst.go.jp/pr/info/info456/).
Thomas, N. J. T. [2008] Mental Imagery, The Stanford Encyclopedia of Philosophy.
Thompson, P. [1980] "Margaret Thatcher: A new illusion," Perception 9, 483–484.
