ELK ASIA PACIFIC JOURNAL OF COMPUTER SCIENCE AND INFORMATION SYSTEMS ISSN: 2394-0441 (Online) Volume 1 Issue 2 (2015)
www.elkjournals.com

PERFORMANCE EVALUATION OF HOPFIELD ASSOCIATIVE MEMORY FOR COMPRESSED IMAGES
Manu Pratap Singh Department of Computer Science, Institute of Engineering & Technology, Dr. B. R. Ambedkar University, Khandari, Agra (U. P.)
Dr. S. S. Pandey Department of Mathematics & Computer Science Rani Durgawati University, Jabalpur (M. P.) India
Vikas Pandey Department of Mathematics & Computer Science Rani Durgawati University, Jabalpur (M. P.) India
Abstract

This paper analyzes the performance of a Hopfield neural network for the storage and recall of compressed images. We consider images of different sizes. These images are first compressed using wavelet transformation. The compressed images are then preprocessed and the feature vectors of these images are obtained. The training set consists of the pattern information of all the preprocessed and compressed images, and each input pattern is of size 900 × 1. Each pattern of the training set is encoded into the Hopfield neural network using the hebbian and pseudo inverse learning rules. Once all the patterns of the training set are encoded, we simulate the performance of the trained neural network on noisy versions of the already encoded patterns. These noisy test patterns are also compressed and preprocessed images. The associative memory behavior of the Hopfield neural network is analyzed in terms of successful and correct recall of the patterns in the form of the original compressed images. The simulated results show that the performance of the Hopfield neural network for recalling the patterns, and then reconstructing the images, decreases once the noise or distortion in the original images exceeds 30 %. It is also found that the Hopfield neural network fails to recall the correct images if the presented prototype input pattern of the original image contains more than 50 % noise.
Keywords: Hopfield Neural Networks, Associative memory, Compressed Images storage & recalling, pattern storage networks
1. Introduction

Pattern storage and recall, i.e. pattern association, is one of the prominent methods for the pattern recognition task that one would like to realize using an artificial neural network (ANN) as an associative memory feature. Pattern storage is generally accomplished by a feedback network consisting of processing units with non-linear bipolar output functions.
The Hopfield neural network is a simple feedback neural network (NN) which is able to store patterns locally in the form of connection strengths between the processing units. This network can also perform pattern completion on the presentation of partial information or a prototype input pattern. The stable states of the network represent the memorized or stored patterns. Since the Hopfield neural network with associative memory [1-2] was introduced, various modifications [3-10] have been developed for the purpose of storing and retrieving memory patterns as fixed-point attractors. The dynamics of these networks have been studied extensively because of their potential applications [11-14]. The dynamics determine the retrieval quality of the associative memories corresponding to the already stored patterns. The pattern information is encoded in an unsupervised manner as a sum of correlation weight matrices in the connection strengths between the processing units of the feedback neural network, using the locally available information of the pre- and post-synaptic units; the result is considered as the final or parent weight matrix. Neural network applications address problems in pattern classification, prediction, financial analysis, and control and optimization [15]. In most current applications, neural networks are best used as aids to human decision makers instead of substitutes for them.

Neural networks have also been designed to model the process of memory recall in the human brain [16]. Association in the human brain refers to the phenomenon of one thought causing us to think of another. Correspondingly, associative memory is the function by which the brain is able to store and recall information, given partial knowledge of the information content [17]. An associative memory is a dynamical system which has a number of stable states with a domain of attraction around them. If the system starts at any state in the domain, it will converge to the locally stable state, which is called an attractor [18]. One such model, describing the organization of neurons in such a way that they function as an associative memory, also called a Content Addressable Memory, was proposed by J. J. Hopfield and was named after him as the Hopfield model. It is a fully connected neural network model in which patterns can be stored by distributing them among the neurons, and we can retrieve one of the previously presented patterns from an
example which is similar to, or a noisy version of, it [17, 18]. The network associates each element of a pattern with a binary neuron. The neurons are updated asynchronously and in parallel. They are initialized with an input pattern and the network activation converges to the closest learnt pattern [19]. This dynamical behavior of the neurons strongly depends on the synaptic strength between neurons. The specification of the synaptic strength is conventionally referred to as learning [20]. Learning employs a number of learning algorithms, such as the perceptron, hebbian, pseudo inverse and LMS rules [21]. While choosing a learning algorithm, there are a number of considerations. The following considerations are used in this paper:

a) The maximum capacity of the network.
b) The network's ability to add patterns incrementally to the network.
c) The network's ability to correctly recall patterns stored in the network.
d) The network's ability to associate a noisy pattern to its original pattern.
e) The network's ability to associate a new pattern to its nearest neighbor.

Points d and e would also vary with the number of patterns currently stored in the network. The simplest rule that can be used to train a network is the hebbian rule, but it suffers from a number of problems such as:

a) The maximum capacity is limited to just 0.14N, where N is the number of neurons in the network [22].
b) The recall efficiency of the network deteriorates as the number of patterns stored in the network increases [23].
c) The network's ability to correct noisy patterns is also extremely limited and deteriorates with the packing density of the network.
d) New patterns could hardly be associated to the stored patterns.

The next rule considered, to overcome the disadvantages of the hebbian rule, was the pseudo inverse learning rule. The standard pseudo inverse rule is known to be better than the hebbian rule in terms of capacity (N), recall efficiency and pattern correction [24]. In this paper we consider images of different sizes. These images are first compressed using wavelet transformation. The compressed images are then preprocessed and the feature vectors of these images are obtained.
Therefore, for each image a pattern vector of size 900 × 1 is constructed. The training set consists of the pattern information of all the preprocessed and compressed images; each input pattern is of size 900 × 1. Each pattern of the training set is encoded into the Hopfield neural network using the hebbian and pseudo inverse learning rules. Once all the patterns of the training set are encoded, we simulate the performance of the trained neural network on noisy versions of the already encoded patterns. These noisy test patterns are also compressed and preprocessed images. The associative memory behavior of the Hopfield neural network is analyzed in terms of successful and correct recall of the patterns in the form of the original compressed images. It is found from the simulated results that the performance of the Hopfield neural network for recalling the patterns, and then reconstructing the images, decreases as the noise or distortion in the original images rises above 30 %. It is also found that the Hopfield neural network fails to recall the correct images if the presented prototype input pattern of the original image contains more than 50 % noise.

This paper is organized as follows: Section II provides a brief description of the Hopfield network as associative memory and its storage and update dynamics. Section III elaborates the pseudo inverse rule, the associated problems and the measures to overcome them. Section IV considers the pattern formation for compressed images. Section V contains the experiments, whose results are compiled and discussed in Section VI. Conclusions then follow in Section VII.

2. Hopfield Network as Associative Memory

The proposed Hopfield model consists of N (900 = 30 × 30) neurons and N × N (900 × 900) connection strengths. Each neuron can be in one of two states, i.e. ±1, and L (= 9) bipolar patterns have to be memorized in the Hopfield neural network as associative memory. Hence, to store the L (= 9) patterns in this pattern storage network, the weight matrix W is usually determined by the Hebbian rule as follows:

$W = \sum_{l=1}^{L} x^l (x^l)^T$  (1)

or, $w_{ij} = \frac{1}{N} \sum_{l} a_i^l a_j^l$  (2)
and, $w^l = \frac{1}{N} \sum_{i,j} a_i^l a_j^l$  (3)

or, $w_{ij} = \frac{1}{N} \sum_{l=1}^{L} a_i^l a_j^l \;\; (i \neq j)$ and $w_{ii} = 0$  (4)

where $\{a_i^l,\; i = 1, 2, 3, \ldots, N;\; l = 1, 2, 3, \ldots, L$ and $i \neq j\}$, with L the number of patterns to be memorized and N the number of processing units. The network is initialized as:

$s_i^l(0) = a_i^l(0)$ for all $i = 1$ to $N$  (5)

The activation value and output of every unit in the Hopfield model can be represented as:

$y_i = \sum_{j=1,\, j \neq i}^{N} w_{ij}\, s_j(t)$;  $i, j = 1, 2, 3, \ldots, N$; $i \neq j$  (6)

and $s_i(t+1) = \operatorname{sgn}(y_i)$  (7)

where $\operatorname{sgn}(y_i) = 1$ for $y_i \geq 0$ and $\operatorname{sgn}(y_i) = -1$ for $y_i < 0$, respectively.  (8)

Associative memory involves the retrieval of a memorized pattern in response to the presentation of some prototype input pattern as the arbitrary initial state of the network. These initial states have a certain degree of similarity with the memorized patterns and will be attracted towards them with the evolution of the neural network. Hence, in order to memorize 9 scanned images in a 900-unit bipolar Hopfield neural network, there should be one stable state corresponding to each stored pattern. Thus, at the end, the memorized patterns should be fixed-point attractors of the network and must satisfy the fixed-point condition:

$y_i^l = s_i^l = \sum_{j=1,\, j \neq i}^{N} w_{ij}\, s_j^l$  (9)

Therefore, the activation dynamics must satisfy the following equation to accomplish the pattern storage:

$f\!\left(\sum_{j=1,\, j \neq i}^{N} w_{ij}\, s_j^l\right) = s_i^l$, where $i, j = 1, 2, \ldots, N$; $i \neq j$  (10)

Let the pattern set be $P = \{x^1, x^2, \ldots, x^L\}$, where

$x^1 = (a_1^1, a_2^1, \ldots, a_N^1)$, $x^2 = (a_1^2, a_2^2, \ldots, a_N^2)$, $\ldots$, $x^L = (a_1^L, a_2^L, \ldots, a_N^L)$, with $N = 900$ and $L = 9$.  (11)

Now, the initial weights have been considered as $w_{ij} \approx 0$ (near to zero) for all $i$'s and $j$'s.
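For concreteness, the storage rule of equations (2)-(4) and the fixed-point check of equation (10) can be sketched in a few lines of NumPy. This is only an illustrative sketch; the function names and the use of NumPy are assumptions, not part of the original formulation:

import numpy as np

def hebbian_weights(patterns):
    # patterns: L x N array of bipolar (+1 / -1) vectors
    L, N = patterns.shape
    W = (patterns.T @ patterns) / N   # (1/N) * sum_l a_i^l a_j^l, equations (2) and (4)
    np.fill_diagonal(W, 0.0)          # w_ii = 0
    return W

def is_fixed_point(W, pattern):
    # Fixed-point condition of equation (10): the sign of the net input reproduces the pattern.
    return np.array_equal(np.where(W @ pattern >= 0, 1, -1), pattern)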
From the synaptic dynamics, in vector form we have the following equation for encoding the pattern information:

$W^{new} = W^{old} + X \cdot X^T$  (12)

and $W^{old} = W^{new}$  (13)

Similarly, for the $L$-th pattern we have:

$W^L = W^{L-1} + X^L (X^L)^T$  (14)

Thus, after the learning for all the patterns, the final parent weight matrix can be represented as:

$W^L = \begin{pmatrix} 0 & \frac{1}{N}\sum_{l=1}^{L} a_1^l a_2^l & \frac{1}{N}\sum_{l=1}^{L} a_1^l a_3^l & \cdots & \frac{1}{N}\sum_{l=1}^{L} a_1^l a_N^l \\ \frac{1}{N}\sum_{l=1}^{L} a_2^l a_1^l & 0 & \frac{1}{N}\sum_{l=1}^{L} a_2^l a_3^l & \cdots & \frac{1}{N}\sum_{l=1}^{L} a_2^l a_N^l \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{N}\sum_{l=1}^{L} a_N^l a_1^l & \frac{1}{N}\sum_{l=1}^{L} a_N^l a_2^l & \frac{1}{N}\sum_{l=1}^{L} a_N^l a_3^l & \cdots & 0 \end{pmatrix}$  (15)

Now, to represent $W^L$ in a more convenient form, let us assume the following notation:

$S_1 S_2 = \sum_{l=1}^{L} a_1^l a_2^l$, $S_1 S_3 = \sum_{l=1}^{L} a_1^l a_3^l$, ..., $S_1 S_N = \sum_{l=1}^{L} a_1^l a_N^l$,
$S_2 S_1 = \sum_{l=1}^{L} a_2^l a_1^l$, $S_2 S_3 = \sum_{l=1}^{L} a_2^l a_3^l$, ..., $S_2 S_N = \sum_{l=1}^{L} a_2^l a_N^l$,
$\vdots$
$S_N S_1 = \sum_{l=1}^{L} a_N^l a_1^l$, $S_N S_2 = \sum_{l=1}^{L} a_N^l a_2^l$, $S_N S_3 = \sum_{l=1}^{L} a_N^l a_3^l$, ...  (16)

So that, from equations (15) and (16), we get:
$W^L = \frac{1}{N}\begin{pmatrix} 0 & s_1 s_2 & s_1 s_3 & \cdots & s_1 s_N \\ s_2 s_1 & 0 & s_2 s_3 & \cdots & s_2 s_N \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ s_N s_1 & s_N s_2 & s_N s_3 & \cdots & 0 \end{pmatrix}$  (17)
This square matrix is considered as the parent weight matrix because it represents the partial or sub-optimal solution for the pattern recall corresponding to the presented prototype input pattern vector. Hopfield suggested that the maximum limit for storage is 0.15N in a network with N neurons, if a small error in recall is allowed. Later, this was theoretically calculated as $p \approx 0.14N$ by using the replica method [25]. Wasserman [26] showed that the maximum number of memories $m$ that can be stored in a network of $n$ neurons and recalled exactly is less than $cn^2$, where $c$ is a positive constant greater than one. It has been identified that the storage capacity strongly depends on the learning scheme. Researchers have proposed different learning schemes, instead of the Hebbian rule, to increase the storage capacity of the Hopfield neural network [27, 28]. Gardner showed that the ultimate capacity will be $p \approx 2N$ as a function of the size of the basin of attraction [29].

Pattern recall involves setting the initial state of the network equal to an input vector $\xi_i$. The states of the individual units are then updated repeatedly until the overall state of the network is stable. Updating of units may be synchronous or asynchronous [30]. In the synchronous update, all the units of the network are updated simultaneously and the state of the network is frozen until the update is made for all the units. In the asynchronous update, a unit is selected at random and its state is updated using the current state of the network. This update via random choice of a unit is continued until no further change in state takes place for any of the units, i.e. the network reaches a stable state. Each stable state of the network corresponds to a stored pattern that has a minimum hamming distance from the input pattern [31]. Each stable state of the network is associated with an energy E and hence that state acts as a point attractor; during the update the network moves from an initial high-energy state to the nearest attractor.
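A minimal sketch of this asynchronous update scheme, assuming bipolar patterns and a weight matrix built as above (the helper name recall_async and the sweep limit are illustrative choices, not the paper's own implementation):

import numpy as np

def recall_async(W, probe, max_sweeps=50, rng=None):
    # Pick units in random order and apply the sign rule of equations (6)-(7)
    # until a complete sweep over all units produces no change of state.
    rng = np.random.default_rng() if rng is None else rng
    s = probe.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(s.size):
            new_si = 1 if W[i] @ s >= 0 else -1
            if new_si != s[i]:
                s[i] = new_si
                changed = True
        if not changed:   # no unit changed state: a stable state has been reached
            break
    return s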
All stable states which are similar to any of the $\xi_i$ of the training set are called Fundamental Memories. Apart from them there are other stable states, including the inverses of the fundamental memories. The number of such fundamental memories and the nature of the additional stable states depend upon the learning algorithm that is employed.

3. Pseudo inverse Rule

In the Hopfield network, we can use the pseudo inverse learning rule to encode the pattern information even if the pattern vectors are non-orthogonal. It provides a more efficient method for learning in feedback neural network models. The pseudo inverse weight matrix is given by

$W = \Xi\, \Xi^{-1}$  (18)

where $\Xi$ is the matrix whose rows are the $\xi^n$ and $\Xi^{-1}$ is its pseudo inverse, i.e. the matrix with the property that $\Xi^{-1} \Xi = I$ [32]. The pseudo inverse rule is neither local nor incremental as compared to the hebbian rule. This means that the update of a particular connection does not depend on the information available on either side of the connection, and also that patterns cannot be incrementally added to the network. These problems can be solved by modifying the rule in such a way that some characteristics of hebbian learning are also incorporated, so that locality and incrementality are ensured. The hebbian rule is given as:

$W_{ij} = \frac{1}{N} \sum_{l=1}^{L} \xi_i^l\, \xi_j^l$ for $i \neq j$, and $W_{ij} = 0$ for $i = j$, $1 \leq i, j \leq N$  (19)

where N is the number of units/neurons in the network and $\xi^l$, for $l = 1$ to $L$, are the vectors/images to be stored, each component of $\xi^l$ being bipolar, i.e. each $\xi_i^l = \pm 1$ for $i = 1$ to $N$. Now the pseudo inverse of the weight matrix can be calculated as

$W_{pinv} = W^T (W\, W^T)^{-1}$  (20)

where $W^T$ is the transpose of the weight matrix W and $(W\, W^T)^{-1}$ is the inverse of the product of W and its transpose. This method overcomes the locality and incrementality problems associated with the pseudo inverse rule. In addition it retains the benefits of the pseudo inverse rule, in terms of storage capacity and recall efficiency, over the hebbian rule.
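One common way to realize a pseudo-inverse (projection) storage rule of the kind described by equations (18)-(20) is sketched below. The projection form W = X^T (X X^T)^(-1) X and the zeroed diagonal are assumptions made for this sketch rather than a literal transcription of the paper's equations:

import numpy as np

def pseudo_inverse_weights(patterns):
    # X is the L x N matrix whose rows are the stored patterns.
    # W = X^T (X X^T)^{-1} X projects any state onto the subspace spanned by
    # the stored patterns; np.linalg.pinv guards against a singular X X^T.
    X = np.asarray(patterns, dtype=float)
    W = X.T @ np.linalg.pinv(X @ X.T) @ X
    np.fill_diagonal(W, 0.0)   # optional: remove self-connections, as in the Hebbian case
    return W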
Pattern recall refers to the identification and retrieval of the corresponding image when an image is presented as input to the network. As soon as an image is fed as input to the network, the network starts updating itself. In the current paper, we use asynchronous update of the network units to find their new states. This update via random choice of a unit is continued until no further change in state takes place for any of the units. That is, the state at time (t+1) is the same as the state at time t for all the units:

$s_i(t+1) = s_i(t)$ for all $i$  (21)

Such a state is referred to as a stable state. In a stable state the output of the network will be a stable (trained) pattern that has a minimum hamming distance from the input pattern [31]. The network is said to have converged and recalled the pattern if the output matches the pattern presented initially as input. For pattern association, the patterns stored in an associative memory act as attractors, and the largest hamming distance within which almost all states flow to the pattern is defined as the radius of the basin of attraction [32].

Each state of the Hopfield network is associated with an energy value, which either reduces or remains the same as the state of the network changes [33]. The energy function of the network is given by

$V = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} W_{ij}\, s_i\, s_j + \sum_{i} \theta_i\, s_i$  (22)

Hopfield has shown that for symmetric weights with no self-feedback connections and bipolar output functions, the dynamics of the network under asynchronous update always lead towards energy minima at equilibrium. The network states corresponding to these energy minima are termed stable states [34], and the network uses each of these stable states for storing an individual pattern.

4. Pattern formation using Image Preprocessing techniques

Pattern formation is an essential step for performing the pattern storage in the Hopfield neural network model. Hence, to construct the pattern information to encode the patterns, preprocessing steps are required. Preprocessing of the images, in the form of image enhancement, is required to convert the images into patterns suitable for the associative feature of the Hopfield network. The term image enhancement refers to making the image clearer for easy further operations. The images considered for the study are the images of the impressions of different individuals.
The images are not of perfect quality to be considered for storage in a network. Hence enhancement methods are required to reveal the fine details of the images, which may remain hidden due to insufficient ink or imperfect impressions. The enhancement methods increase the contrast between image components and connect the broken or incomplete image components.

Fig 1: (a) Original Greyscale Images

The images were first scanned as grayscale images and then transformed into the wavelet domain to retain the fine details in the images. Each image was then subjected to binarization. Binarization refers to the conversion of a grayscale image to a black and white image. Typically, binarization converts an image of up to 256 gray levels to a black and white image, as shown in figures 1 (a) and (b).

Fig 1: (b) Binary Images

After binarization, the binary image needs to be converted to a bipolar image, since Hopfield networks work best with bipolar data. A bipolar image is one where each pixel has the value either +1 or -1. Hence, all pixel values are checked and those with value 0 are converted to -1, thus converting the binary image to a bipolar image. Finally, the image is converted to a bipolar vector.
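A minimal sketch of this binarization and bipolar conversion step, assuming the wavelet compression has already produced a 30 x 30 grayscale block (the threshold value of 128, the function name and the assumption that the compressed block is available, e.g. from a PyWavelets pipeline, are illustrative choices):

import numpy as np

def to_bipolar_vector(gray_block, threshold=128):
    # gray_block: a 30 x 30 grayscale array from the (assumed) compression stage.
    img = np.asarray(gray_block, dtype=float)
    binary = (img >= threshold).astype(int)   # binarization: 256 gray levels -> {0, 1}
    bipolar = np.where(binary == 0, -1, 1)    # map 0 -> -1 so every pixel becomes +/-1
    return bipolar.reshape(-1)                # flatten to a 900-element pattern vector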
All the image vectors are stored into a comprehensive matrix which has the following structure:

$P = \begin{bmatrix} \xi_{1,1} & \xi_{1,2} & \cdots & \xi_{1,900} \\ \xi_{2,1} & \xi_{2,2} & \cdots & \xi_{2,900} \\ \vdots & \vdots & \ddots & \vdots \\ \xi_{9,1} & \xi_{9,2} & \cdots & \xi_{9,900} \end{bmatrix}$  (23)

Equation (23) represents the training set. This training set is used to encode the pattern information of all nine preprocessed images into the Hopfield neural network.

5. Implementation detail and experiment design

The patterns, in the form of the bipolar vectors created in Section 4, were then stored in the Hopfield network via the following algorithm, separately for the hebbian and pseudo inverse rules in separate weight matrices.

Algorithm: Pattern Storage and Recalling

The algorithm for pattern recall in a Hopfield neural network storing L patterns is as follows. The algorithm is implemented both for the Hebbian and the Pseudo inverse rule and the results are recorded. Assume a pattern x, of size N, already stored in the network and modified by altering the values of k randomly chosen pixels. Also assume vectors ynew to store the new states of the network, and consider a variable count initialized to the value 1.

1. Initialize the weights to store the patterns (using the Hebbian and Pseudo inverse rules as per equations (14) and (20) respectively). While the activations of the net are not converged, perform steps 2 to 8.
2. For each input vector x, repeat steps 3 to 7.
3. Set the initial activations of the net equal to the external input vector x: $y_i = x_i$ (i = 1 to n).
4. Perform steps 5 to 7 for each unit $Y_j$.
5. Compute the net input: $Y_j = x_j + \sum_{i \neq j} Y_i\, W_{ji}$
6. Determine the activation (output signal):
$Y_j = \begin{cases} 1, & \text{if } Y_j > 0 \\ Y_j, & \text{if } Y_j = 0 \\ -1, & \text{if } Y_j < 0 \end{cases}$
Broadcast the value of $Y_j$ to all other units.
7. Test for convergence as per equation (21).

The following parameters are used to encode the sample patterns of the training set.

Table 1: Parameters used for the Hebbian and Pseudo inverse learning rules

Parameter                      Value
Initial state of neurons       Randomly generated values, either -1 or +1
Threshold values of neurons    0.00

The value of the threshold θ is assumed to be zero. Each unit is randomly chosen for update. The maximum number of patterns successfully recalled by the above procedure is a pointer to the maximum storage capacity of the Hopfield network, which is further discussed in the results. Further, the recall efficiency for noisy patterns is also determined, as the percentage of error in the patterns up to which the patterns are acceptable to the network and convergence to the original pattern occurs.

6. Results and Discussion

The Hopfield network has the ability to recognize unclear pictures correctly. This means that the network can recall the actual pattern when noisy or partial clues of that pattern are presented to the network. It is known and has been shown [18] that Hopfield networks converge to the original patterns if up to a 40% distorted version of a stored pattern is presented. The patterns are stored in the network in the form of attractors on the energy surface. The network can then be presented with either a portion of one of the images (partial cue) or an image degraded with noise (noisy cue), and through multiple iterations it will attempt to reconstruct one of the stored images.

Fig 1: (c) Recall images
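The noisy-cue experiment described here can be sketched as follows, assuming the storage and recall sketches given earlier; the helper names are illustrative and the energy expression assumes the zero thresholds of Table 1:

import numpy as np

def energy(W, s):
    # Energy of equation (22) with zero thresholds.
    return -0.5 * s @ W @ s

def flip_fraction(pattern, fraction, rng):
    # Return a copy of the pattern with the given fraction of pixels inverted.
    noisy = pattern.copy()
    idx = rng.choice(pattern.size, int(fraction * pattern.size), replace=False)
    noisy[idx] = -noisy[idx]
    return noisy

# Example use (patterns, W and recall_async refer to the earlier sketches):
# rng = np.random.default_rng(0)
# noisy = flip_fraction(patterns[0], 0.30, rng)          # 30 % distorted cue
# recalled = recall_async(W, noisy, rng=rng)
# hamming_error = int(np.sum(recalled != patterns[0]))   # 0 means perfect recall
# print(hamming_error, energy(W, noisy), energy(W, recalled))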
All the patterns, after preprocessing as described in Section 4, were converted into bipolar vectors ready for storage into the network. As per the above discussed algorithm, the patterns were input one by one into the Hopfield network, first with the hebbian rule and then with the pseudo inverse rule. The weight matrices for both are 900 × 900 symmetric matrices.

The storage capacity of a neural network refers to the maximum number of patterns that can be stored and successfully recalled for a given number of nodes, N. The Hopfield network is limited in storage capacity to 0.14N when trained with the hebbian rule [35, 36, 37], but the capacity increases to N with the pseudo inverse rule. Experiments were conducted to check the same, and the network was able to store and perfectly recall 0.14N, i.e. 126 patterns, with the hebbian rule and N, i.e. 900 patterns, with the pseudo inverse rule. Thus the critical storage capacity of the Hopfield network comes out to be 0.14N with the hebbian rule and N with the pseudo inverse rule, without any error in pattern recall, for the current study.

Fig 2 (a) Error images
Fig 2 (b) Noisy images

Figures 2 (a) and 2 (b) show the distorted and noisy forms of the already encoded images. These distorted images are preprocessed and presented as the prototype input pattern vectors to the trained Hopfield neural network for recall. Similarly, figures 2 (c) and 2 (d) show the noisy forms of the compressed images from the wavelet, Fourier and DCT transformations, which are presented to the Hopfield network for recall of the corresponding correct images.
Fig 2 (c) Compressed (noisy) images, wavelet transform
Fig 2 (d) Compressed (noisy) images, Fourier transform
Fig 2 (d) Compressed (noisy) images, DCT

Figure 2 (e) shows the binarization of the noisy images shown in 2 (a) to 2 (d). These images are presented again in the form of 900 × 1 pattern vectors. These pattern vectors are presented to the Hopfield network as the prototype input pattern vectors. The Hopfield neural network produces the recalled images corresponding to each presented input pattern vector. Figure 2 (f) shows the recalled images. It can be seen that 5 out of 9 images are the same as the memorized binary images, but 4 out of 9 images are not correctly recalled; the recalled images contain some amount of error.

Fig 2 (e) Noisy Binary Images
The behavior with distorted patterns is similar for both rules, i.e. up to 40% distortion the same pattern is associated, but at 50% distortion some other stored pattern or the nearest neighbor is associated. New patterns are not recognized by the hebbian rule, but with the pseudo inverse rule they are associated to some stored pattern.

Fig 2 (f) Recall images

7. Conclusion

It was observed that the network performs sufficiently well for the compressed images with the wavelet transform, DCT and Fourier transformations. Further, it has been observed that the network's efficiency starts deteriorating as the network gets saturated. The performance of the network deteriorated with 80 patterns for the hebb rule and 130 patterns for the pseudo inverse rule. This result can be attributed to the reduction of the Hamming distance between the stored patterns and the consequent reduction of the radius of the basin of attraction of each stable state. Hence only a few patterns could settle into the stable states of their original patterns. The following points are observed from the experimental results:

1. For all the 9 images the recall is correct if any one of the original images is presented as the prototype input pattern for the pattern recall.
2. The performance of the Hopfield neural network is found to be better for the compressed images with the wavelet transform, DCT and Fourier transformation.
3. The patterns are correctly recalled even with 50 % noise in the images compressed with the wavelet transform, DCT and Fourier transformation.
4. The Hopfield neural network exhibits the associative memory phenomena correctly for a small number of patterns, but its performance starts to deteriorate as more images are stored.
5. It can quite obviously be verified that the performance of the Hopfield neural network for pattern storage and recall depends heavily on the
method which is used for feature extraction from the given image.
6. It is also considered that the compression of images conducted with the wavelet transform, DCT and Fourier transformation provides more effective features for the construction of the pattern information.

The performance of the Hopfield neural network can also be analyzed for a larger number of images, with some more sophisticated methods of feature extraction. The performance of the Hopfield neural network for pattern recall can be further improved with the use of evolutionary algorithms.

8. References

[1] Hopfield, J. J., "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proceedings of the National Academy of Sciences, USA, 79, pp. 2554-2558, (1982).
[2] Hopfield, J. J., "Neurons with Graded Response have Collective Computational Properties like those of Two-State Neurons", Proceedings of the National Academy of Sciences, USA, 81, pp. 3088-3092, (1984).
[3] Amit, D. J., Gutfreund, H., and Sompolinsky, H., "Storing Infinite Number of Patterns in a Spin-glass Model of Neural Networks", Physical Review Letters, vol. 55(14), pp. 461-482, (1985).
[4] Amit, D. J., "Modeling Brain Function: The World of Attractor Neural Networks", Cambridge University Press, New York, NY, USA, (1989).
[5] Haykin, S., "Neural Networks: A Comprehensive Foundation", Upper Saddle River: Prentice Hall, Chap. 14, pp. 64, (1998).
[6] Zhou, Z., and Zhao, H., "Improvement of the Hopfield Neural Network by MC-Adaptation Rule", Chin. Phys. Letters, vol. 23(6), pp. 1402-1405.
[7] Zhao, H., "Designing Asymmetric Neural Networks with Associative Memory", Physical Review, vol. 70(6), 066137-4.
[8] Kawamura, M., and Okada, M., "Transient Dynamics for Sequence Processing Neural Networks", J. Phys. A: Math. Gen., vol. 35(2), pp. 253, (2002).
[9] Amit, D. J., "Mean-field Ising Model and Low Rates in Neural Networks", Proceedings of the International Conference on Statistical Physics, 5-7 June, Seoul, Korea, pp. 1-10, (1997).
[10] Imada, A., and Araki, K., "Genetic Algorithm Enlarges the Capacity of Associative Memory", Proceedings of the Sixth International Conf. on Genetic Algorithms, pp. 413-420, (1995).
[11] Hopfield, J. J. and Tank, D. W., "Neural Computation of Decisions in Optimization Problems", Biological Cybernetics, vol. 52(3), pp. 141-152, (1985).
[12] Tank, D. W. and Hopfield, J. J., "Simple Neural Optimization Networks: An A/D Converter, Signal Decision Circuit, and a Linear Programming Circuit", IEEE Trans. Circuits and Syst., vol. 33(5), pp. 533-541, (1986).
[13] Jin, T. and Zhao, H., "Pattern Recognition using Asymmetric Attractor Neural Networks", Phys. Rev., vol. E 72(6), pp. 066111-7, (2005).
[14] Kumar, S. and Singh, M. P., "Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm", Computers and Mathematics with Applications, vol. 60(4), pp. 1049-1057, (2010).
[15] Paliwal, M. and Kumar, U. A., "Neural Networks and Statistical Techniques: A Review of Applications", Expert Systems with Applications, vol. 36(1), pp. 2-17, (2009).
[16] Tarkowski, W., Lewenstein, M., and Nowak, A., "Optimal Architectures for Storage of Spatially Correlated Data in Neural Network Memories", ACTA Physica Polonica B, Vol. 28, No. 7, pp. 1695-1705, (1997).
[17] Takasaki, K., "Critical Capacity of Hopfield Networks", MIT Department of Physics, (2007).
[18] Ramachandran, R., Gunusekharan, N., "Optimal Implementation of Two Dimensional Bipolar Hopfield Model Neural Network", Proc. Natl. Sci. Counc. ROC (A), Vol. 24(1), pp. 73-78, (2000).
[19] McEliece, R. J., Posner, E. C., Rodemich, E. R. and Venkatesh, S. S., "The Capacity of the Hopfield Associative Memory", IEEE Trans. Information Theory, IT-33(4), pp. 461-482, (1987).
[20] Ma, J., "The Object Perceptron Learning Algorithm on Generalized Hopfield Networks for Associative Memory", Neural Computing and Applications, Vol. 8, pp. 25-32, (1999).
[21] Atithan, G., "A Comparative Study of Two Learning Rules for Associative Memory", PRAMANA - Journal of Physics, Vol. 45, No. 6, pp. 569-582, (1995).
[22] Pancha, G. and Venkatesh, S. S., "Feature and Memory-Selective Error Correction in Neural Associative Memory", in M. H. Hassoun, ed., Associative Neural Memories: Theory and Implementation, Oxford University Press, pp. 225-248, (1993).
[23] Kohonen, T. and Ruohonen, M., "Representation of Associated Data by Matrix Operators", IEEE Trans. Computers, vol. C-22(7), pp. 701-702, (1973).
[24] Tarkowski, W., Lewenstein, M., and Nowak, A., "Optimal Architectures for Storage of Spatially Correlated Data in Neural Network Memories", ACTA Physica Polonica B, Vol. 28, No. 7, pp. 1695-1705, (1997).
[25] Amit, D. J., Gutfreund, H., and Sompolinsky, H., "Storing Infinite Number of Patterns in a Spin-glass Model of Neural Networks", Physical Review Letters, vol. 55(14), pp. 461-482, (1985).
[26] Wasserman, P. D., "Neural Computing: Theory and Practice", Van Nostrand Reinhold Co., New York, NY, USA, (1989).
[27] Pancha, G. and Venkatesh, S. S., "Feature and Memory-Selective Error Correction in Neural Associative Memory", Associative Neural Memories: Theory and Implementation, M. H. Hassoun, ed., Oxford University Press, pp. 225-248, (1993).
[28] Abbott, L. F., Arian, Y., "Storage Capacity of Generalized Networks", Rapid Communications, Physical Review A, Vol. 36, No. 10, pp. 5091-5094, (1987).
[29] Gardner, E., "The Phase Space of Interactions in Neural Networks Models", Journal of Physics, vol. 21 A, pp. 257-270, (1988).
[30] Yegnanarayana, B., "Artificial Neural Networks", PHI, (2005).
[31] Vonk, E., Veelenturf, L. P. J., Jain, L. C., "Neural Networks: Implementations and Applications", IEEE AES Magazine, (1996).
[32] Perez Vicente, C. J., "Hierarchical Neural Network with High Storage Capacity", Physical Review A, Vol. 40(9), pp. 5356-5360, (1989).
[33] Streib, F. E., "Active Learning in Recurrent Neural Networks Facilitated by a Hebb-like Learning Rule with Memory", Neural Information Processing - Letters and Reviews, 9(2), pp. 31-40, (2005).
[34] Hopfield, J. J., "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", PNAS, Vol. 79, pp. 2554-2558, (1982).
[35] Ma, J., "The Object Perceptron Learning Algorithm on Generalized Hopfield Networks for Associative Memory", Neural Computing and Applications, Vol. 8, pp. 25-32, (1999).
[36] Jankowski, S., Lozowski, A., Zurada, J. M., "Complex Valued Multistate Neural Associative Memory", IEEE Transactions on Neural Networks, 7(4), pp. 1491-1496, (1996).
[37] Meyder, A., Kiderlen, C., "Fundamental Properties of Hopfield Networks and Boltzmann Machines for Associative Memories", Machine Learning, (2008).