The partitioning of magnetic configurations using the self-organized neural network *)

Martin Gmitra, Denis Horváth, Daniel Reitzner

Department of Theoretical Physics and Astrophysics, University of P. J. Šafárik, Park Angelinum 9, 040 01 Košice, Slovak Republic †)

Received Day Month 2004

A numerical procedure is presented that partitions the complex magnetic configurations obtained by detailed micromagnetic simulation of a thin layer forming a square-shaped particle. The partitioning into a set of representative quasi-homogeneous domains is carried out by a self-organized artificial neural network supplemented by criteria for the creation and annihilation of neurons. Their number is limited by demands on the complexity of the network output. The salient point of the method is the introduction of a similarity measure of the local magnetic order based on a pseudo-Euclidean metric combining magnetic and geometric information.

PACS: 75.10, 75.40.Mg, 75.70.Ak, 42.30.Rx
Key words: self-organized neural network, pattern recognition, magnetic domains

1 Introduction

It is a generally known fact that the famous concepts of domains and domain walls are less fundamental for soft magnetic materials and magnetic nanostructures than for hard bulk ferromagnets. The reason is the interplay between the wall width and the nanoparticle geometry. Therefore, the search for alternative elementary, geometrically localized magnetic objects is of theoretical interest. We assume that such objects should be relevant for representing the results of detailed micromagnetic simulations or magneto-optic experiments [1]. The aim of the present work is to design and test an artificial neural network (ANN) capable of decomposing a given magnetic configuration into spatially compact and magnetically quasi-homogeneous regions, called here representative magnetic domains. The decomposition process includes: (i) localization of the centres of the representative domains; (ii) assignment of localized magnetic moments (encoded by the components of neurons) to the domain centres; (iii) control of the amount of information encoded by the ANN.

2 Self-organization algorithm

In this paper we deal with the architecture of a specific ANN based on the adaptive resonance theory (ART) developed by Carpenter and Grossberg [2, 3]. The family of ART nets belongs to the unsupervised self-organized algorithms motivated by the need

*) Presented at the 12th Czech and Slovak Conference on Magnetism, Košice, 12-15 July 2004.
†) Corresponding author. E-mail address: [email protected]

Czechoslovak Journal of Physics, Vol. 54 (2004), Suppl. A


to increase adaptivity with respect to the inputs. The adaptivity of the ART ANN means that the network generates new neurons if the distance of an input exceeds a predefined threshold represented by the so-called vigilance parameter. In the case of redundancy, the information is eliminated by the annihilation of neurons.

The behaviour of the network is demonstrated by the analysis of a specific thin-layer magnetic configuration of magnetic moments m_i localized at the positions r_i of a square L × L lattice. The magnetic configuration is organized into the format of five-component tuples m̃_i consisting of a spatial and a magnetic part. For the i-th moment m_i ≡ (m_{i,x}, m_{i,y}, m_{i,z}) (normalized as ‖m_i‖ = 1) localized at the planar position r_i ≡ (x_i, y_i) we define

  m̃_i ≡ (r_i, m_i) ≡ (x_i, y_i, m_{i,x}, m_{i,y}, m_{i,z}),   i = 1, 2, ..., L².   (1)
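The packing of a lattice site and its moment into the five-component tuple of Eq. (1) can be sketched as below; the toy lattice size and the uniform moment orientation are illustrative assumptions (the paper itself uses L = 40 and configurations produced by micromagnetic simulation):

```python
import math

# Pack a lattice site and its magnetic moment into the five-component
# tuple m~_i = (x_i, y_i, m_x, m_y, m_z) of Eq. (1), enforcing ||m_i|| = 1.
def make_pattern(x, y, m):
    mx, my, mz = m
    norm = math.sqrt(mx * mx + my * my + mz * mz)
    return (float(x), float(y), mx / norm, my / norm, mz / norm)

L = 4  # toy lattice size; illustrative assumption
patterns = [make_pattern(x, y, (1.0, 0.0, 0.0))
            for x in range(L) for y in range(L)]
print(len(patterns))  # -> 16, i.e. L^2 input patterns
```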

The set of tuples P = {m̃_i : i = 1, 2, ..., L²} constitutes the input patterns of the ART network. In this representation each domain is rendered by a single neuron

  w̃_j ≡ (r_j^w, w_j) = (x_j^w, y_j^w, w_{j,x}, w_{j,y}, w_{j,z}),   j = 1, 2, ..., N_w ≪ L²,   (2)

where N_w is the number of neurons. The self-organization process of the neurons develops according to the coincidence between m̃_i and w̃_j. As a measure of coincidence we suggest the pseudo-Euclidean distance

  ‖w̃_j − m̃_i‖ = sqrt[ (κ/L)² (r_j^w − r_i)·(r_j^w − r_i)^T + (w_j − m_i)·(w_j − m_i)^T ],   (3)

where T labels vector transposition. From the above formula we see that the compromise between spatial compactness and homogeneity is controlled by the parameter κ, whereas the complexity of the output is driven by the vigilance parameter ρ mentioned previously.

The learning begins after the initialization settings N_w ← 1, w̃_1 ← m̃_i, where the site i is randomly selected. It consists of the repeated steps:

(i) competition of the neurons to be adapted to a randomly selected input m̃_i. The winner j*(k,i) at the k-th network iteration is determined by

  ‖w̃_{j*(k,i)} − m̃_i‖ = min_{j=1,2,...,N_w} ‖w̃_j(k) − m̃_i‖.   (4)
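The distance of Eq. (3) and the winner search of Eq. (4) might be sketched as follows; the values of κ, L, the neuron weights and the sample input are all assumptions chosen for illustration:

```python
import math

# Pseudo-Euclidean distance of Eq. (3): the spatial part is weighted
# by (kappa/L)^2, the magnetic part enters with unit weight.
def distance(w, m, kappa, L):
    dr2 = (w[0] - m[0]) ** 2 + (w[1] - m[1]) ** 2      # spatial part
    dm2 = sum((w[k] - m[k]) ** 2 for k in (2, 3, 4))   # magnetic part
    return math.sqrt((kappa / L) ** 2 * dr2 + dm2)

# Winner search of Eq. (4): index of the neuron closest to the input.
def winner(neurons, m, kappa, L):
    return min(range(len(neurons)),
               key=lambda j: distance(neurons[j], m, kappa, L))

neurons = [(0.0, 0.0, 1.0, 0.0, 0.0), (3.0, 3.0, 0.0, 1.0, 0.0)]
m = (0.5, 0.5, 1.0, 0.0, 0.0)
print(winner(neurons, m, kappa=2.6, L=4))  # -> 0 (spatially and magnetically closer)
```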

(ii) comparison of ‖w̃_{j*(k,i)}(k) − m̃_i‖ with the vigilance parameter ρ. The fulfilment of ‖w̃_{j*(k,i)} − m̃_i‖ ≤ ρ is canonically considered as network resonance. The winner's adaptation via the modified Hebbian learning rule [4] is carried out according to

  w̃_{j*(k,i)}(k+1) = w̃_{j*(k,i)}(k) + η(k) (m̃_i − w̃_{j*(k,i)}(k)).   (5)

The update shifts the winner toward the nearby input at a rate proportional to the plasticity parameter η(k) = η_0 exp(−t(k)/τ_learn), which relaxes during the typical training time τ_learn. Here the epoch is indexed by t(k) = [k/L²]_integer. If the number of neurons is insufficient to cover the inputs, i.e. for ‖w̃_{j*(k,i)}(k) − m̃_i‖ > ρ, a new neuron is created. Technically this means performing the updates N_w ← N_w + 1, w̃_{N_w} ← m̃_i.
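A single learning step combining the resonance test, the Hebbian update of Eq. (5) and neuron creation could look like the sketch below. For self-containment, a plain Euclidean distance stands in for the pseudo-Euclidean metric of Eq. (3), and all numeric values are assumptions:

```python
import math

# Plain Euclidean stand-in for the metric of Eq. (3).
def distance(w, m):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(w, m)))

# One ART learning step: resonance test against the vigilance rho,
# Hebbian update of Eq. (5) on resonance, neuron creation otherwise.
def learn_step(neurons, m, rho, eta):
    j = min(range(len(neurons)), key=lambda j: distance(neurons[j], m))
    if distance(neurons[j], m) <= rho:                      # resonance
        neurons[j] = tuple(w + eta * (x - w)
                           for w, x in zip(neurons[j], m))  # Eq. (5)
    else:                                                   # w~_Nw <- m~_i
        neurons.append(m)
    return neurons

neurons = [(0.0, 0.0, 1.0, 0.0, 0.0)]
learn_step(neurons, (5.0, 5.0, 0.0, 0.0, 1.0), rho=0.8, eta=0.05)
print(len(neurons))  # -> 2: the input was too distant, so a neuron was created
```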


(iii) annihilation of the neuron pair z_1*, z_2* selected according to the minimum distance

  ‖w̃_{z_1*} − w̃_{z_2*}‖ = min_{z_1,z_2} ‖w̃_{z_1} − w̃_{z_2}‖   (6)

if ‖w̃_{z_1*} − w̃_{z_2*}‖ < ρ. The product of the annihilation is the single neuron w̃_{z_1*} (when z_1* < z_2*). The annihilation is described by the decrement N_w ← N_w − 1, with the synaptic weight given by the midpoint rule w̃_{z_1*} ← ½(w̃_{z_1*} + w̃_{z_2*}). After the update of w̃_{z_1*}, the synaptic weights of the neurons indexed by j ≥ z_2* are shifted: w̃_{j−1} ← w̃_j.

(iv) testing the criterion of network stability after L² iterations of the steps (i)-(iii):

  (1/N_w(t)) Σ_{j=1}^{N_w(t)} ‖w̃_j(t(k)) − w̃_j(t(k)−1)‖ ≤ tolerance = 10⁻⁶.   (7)
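The annihilation step (iii) admits a direct sketch; the midpoint merge and the index shift follow the text above, while the sample neurons, the distance stand-in and ρ are assumptions:

```python
import math
from itertools import combinations

# Euclidean stand-in for the metric of Eq. (3).
def distance(w1, w2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(w1, w2)))

# Annihilation step (iii): find the closest neuron pair (Eq. (6));
# if their distance is below rho, merge by the midpoint rule and
# delete the second one, which shifts the indices j >= z2 down by one.
def annihilate(neurons, rho):
    if len(neurons) < 2:
        return neurons
    z1, z2 = min(combinations(range(len(neurons)), 2),
                 key=lambda p: distance(neurons[p[0]], neurons[p[1]]))
    if distance(neurons[z1], neurons[z2]) < rho:
        neurons[z1] = tuple(0.5 * (a + b)
                            for a, b in zip(neurons[z1], neurons[z2]))
        del neurons[z2]
    return neurons

neurons = [(0.0, 0.0, 1.0, 0.0, 0.0),
           (0.1, 0.0, 1.0, 0.0, 0.0),
           (3.0, 3.0, 0.0, 1.0, 0.0)]
print(len(annihilate(neurons, rho=0.8)))  # -> 2: the two nearby neurons merge
```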

If the criterion is not fulfilled, or the number of neurons fluctuates, the learning continues from (i) with k incremented by 1.

3 Results

The presented self-organized ANN was applied to partition an input thin-layer magnetic configuration. The realistic configuration was prepared by integrating the Landau-Lifshitz-Gilbert equation, solved for an initial random orientation of the magnetic moments on the square lattice (L = 40). The damped precession of the spins was simulated under the local effective field taken as minus the gradient of the sum of the exchange (E_exch) and dipolar (E_dip) energies. The simulation led to a long-living configuration with the energy ratio f ≡ E_dip/E_exch = 0.2832 (note that the ratio 1.0424 of the homogeneous configuration m_i = (1, 0, 0), i = 1, ..., L², is much higher). In Fig. 1a we see the input pattern and its decomposition into N_w = 22 representative domains, following from the ART self-organization process applied with ρ = 0.8 and κ = 2.6. As depicted in Fig. 1b, the stabilization of N_w is influenced by κ and ρ.

To test the predictability of the ART decomposition, let us introduce the homogenization mapping associating the input (r_i, m_i) with the magnetic moment w_{j*(i)} of the winning neuron w̃_{j*(i)} localized at r_i. We define the dual set of homogeneous domains P_ART = {(r_i, w_{j*(i)}), i = 1, 2, ..., L²}. The similarity of P and P_ART is measurable by the distance

  d_mag = (1/L²) Σ_{i=1}^{L²} sqrt[ (w_{j*(i)} − m_i)·(w_{j*(i)} − m_i)^T ].

Its mean value

d̄_mag = 0.200(6) has been calculated as an average over 10⁴ independent learning trials corresponding to the unique P (the main parameters are listed in the caption of Fig. 1). More physical seem to be the energy measures of the similarity of P and P_ART. We have defined ΔE_Q = 1 − Ē_Q(P_ART)/E_Q(P), where Q stands for the indices dip and exch, and Ē_Q(P_ART) denotes the average over the learning trials. The simulation provides ΔE_dip = 8.64% and ΔE_exch = 8.59%. The ratio f_ART ≡ Ē_dip(P_ART)/Ē_exch(P_ART) = 0.2830(6) belonging to the network output yields the deviation |f − f_ART| ≃ 10⁻⁴. Clearly, the degree of homogenization m̃_i → w̃_{j*(i)} can be easily controlled by the number of neurons.
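The measure d_mag compares each input moment with the moment of its winning neuron. A minimal sketch, where the toy moments and their winner assignments are assumptions rather than data from the paper:

```python
import math

# d_mag: mean magnetic deviation between the input configuration P and
# the homogenized configuration P_ART (one winning-neuron moment per site).
def d_mag(moments, assigned):
    total = sum(math.sqrt(sum((wi - mi) ** 2 for wi, mi in zip(w, m)))
                for m, w in zip(moments, assigned))
    return total / len(moments)

moments  = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # toy input moments
assigned = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # homogenized pattern P_ART
print(round(d_mag(moments, assigned), 6))  # -> 0.707107, i.e. sqrt(2)/2
```

d_mag = 0 would indicate a perfect homogenization; larger values quantify how much magnetic detail the representative domains discard.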


[Figure 1 here: panel a) the partitioned configuration; panel b) N̄_w versus q, with curves for q = κ at ρ = 0.8 and for q = ρ at κ = 2.6.]

Fig. 1. a) ART representation of the magnetic configuration by means of N_w = 22 neurons (black arrows represent the set P_ART). The boundaries of the representative domains are depicted by lines. The partitioning corresponds to the learning parameters ρ = 0.8, κ = 2.6, η_0 = 0.05 and τ_learn = 60; b) simulation results uncovering the influence of the parameters q = κ, ρ on the mean size of the neuron population N̄_w. The differences between learning trials serve to determine the error bars.

4 Conclusions

The partitioning of a magnetic configuration into quasi-homogeneous representative domains has been performed using a self-organized ANN. A pseudo-Euclidean metric has been introduced which captures the homogeneity and compactness demands of the decomposition. It is clear that the presented approach is a universal tool for the analysis of complex vector fields. The comparison of the energies calculated for the original (input) and homogenized patterns indicates that a remarkable computational application of the ART partitioning should be in hierarchical resummation techniques relevant for models of magnetic nanostructures based on the long-range magnetostatic interactions [5].

This research has been supported by the Slovak Grant Agency VEGA (grant no. 1/9034/02) and APVT-51-052702.

References

[1] H. V. Jones, R. W. Chantrell: J. Appl. Phys. 91 (2002) 8855.
[2] G. A. Carpenter, S. Grossberg: Computer Vision, Graphics, and Image Processing 37 (1987) 54.
[3] S. Grossberg: Biological Cybernetics 23 (1976) 121.
[4] S. Haykin: Neural Networks: A Comprehensive Foundation, Prentice Hall, New Jersey 1999.
[5] D. Horváth, M. Gmitra: J. Magn. Magn. Mater. 256 (2003) 195.
