A SYSTOLIC ACCELERATOR FOR POWER SYSTEM SECURITY ASSESSMENT

Thierry Cornu, Paolo Ienne, Dagmar Niebur, and Marc A. Viredaz
Swiss Federal Institute of Technology, Centre for Neuro-Mimetic Systems
IN-J Ecublens, CH-1015 Lausanne
Tel.: +41-21-693 6694 – Fax: +41-21-693 5263 – E-mail: [email protected]

Abstract: This paper presents an application of a neural-network algorithm, the Kohonen feature map, to security monitoring in power transmission systems, and its implementation on parallel hardware. The computational requirements of the application led to the development of an SIMD processor array dedicated to neural networks. The heart of this system is a systolic array of simple processing elements (PEs). A VLSI custom chip containing a sub-array of 2 × 2 PEs has been built, and the full system may contain up to 40 × 40 PEs. The system is designed to sustain sufficient instruction and data flows to keep the utilization rate close to 100 %.

Keywords: Power-system security assessment – Neural networks – Kohonen's self-organizing maps – Neural accelerator.

1 Introduction
In an electric power system, generation is distributed over several power plants, and the load is located at discrete places in the system. In most countries, the transmission of power between the production plants and the loads is provided by an AC high-voltage transmission mesh. Although relatively rare, major power outages incur costs on the order of billions of dollars ([WIL78]); the 1977 New York blackout, for instance, was caused by cascading outages triggered by a lightning strike and left the city without power for several hours.

In the area of power system security assessment, neural networks were first proposed by [SOB89] for the estimation of critical clearing times for transient stability. A detailed review of the state of the art in neural networks for power system security can be found in [NIE93]. However, none of the approaches for security assessment presented so far applies neural-net techniques to real-world systems. The computational requirements are not yet met by neural-net simulators running on conventional workstations, where the highly parallel nature of neural nets is not exploited. The training process in particular can be very time-consuming on a sequential machine. The performance of a machine for a given learning algorithm is expressed in connection updates per second, i.e., the number of synaptic weights in the neural network, multiplied by the number of training steps, and divided by the total time spent for training. Typically, the training performance of a workstation is nowadays on the order of one million connection updates per second (MCUPS). Note that the performance required of the neural network in the prediction phase (once it has learned) is negligible, since a prediction costs less than a single step of the learning procedure. Extending the artificial neural-network approach to security assessment from didactic examples to real-world cases will require training performances two to three orders of magnitude higher than those provided by today's workstations. Accelerators using Digital Signal Processors (DSPs) for the most time-consuming computations are already in use as workstation peripherals, providing a training speed improvement of about one order of magnitude.

In this paper, a prototype implementation of a dedicated hardware accelerator based on a systolic array is presented. The primary target application is the assessment of so-called static security in power transmission systems using a Kohonen self-organizing map. The application is described in section 2; subsections 2.1 and 2.3 respectively present the algorithm and the computational requirements of the application. Although the hardware has been designed with this application in mind, it is not restricted to the Kohonen neural network: other commonly used models can be run on it, including back-propagation and Hopfield networks. The system is presented in section 3; subsections 3.1, 3.2, and 3.3 deal with the PE architecture, the full hardware system, and the system integration, respectively. Subsection 3.4 discusses the peak performance of the system and its adequacy to the requirements.
2 Power System Security Assessment

2.1 Kohonen Self-Organizing Maps

Self-organizing feature maps are networks of neurons (usually arranged on a 1- or 2-dimensional grid) which map a distribution of n-dimensional vectors in a non-linear way, preserving a topological order. Neurons are connected according to a specified neighbourhood relation (the 1-D or 2-D grid); for instance, all 8 neurons adjacent to a given neuron can be defined as its first-order neighbours. The neurons adjacent to a first-order neighbour, excluding those already considered, are neighbours of the second order, etc. In other words, input data representing similar characteristics are mapped to contiguous clusters. A criterion to assess similarity can be the Euclidean distance between two n-dimensional vectors. For every input vector $\vec{x}$, its distance from each neuron i is measured in the n-dimensional input space. Common distances are the scalar product, or the Euclidean distance:

    $y_i = d_{in}(\vec{x}, W_i) = \sqrt{\sum_{j=1}^{n} (x_j - W_{i,j})^2}$   for $i = 1, 2, \ldots, m$;   (1)

where $W_i$ indicates the i-th row of the matrix $W$ and m the number of neurons in the map. The network learns without supervision; that is, the topological order of the input data is not necessarily known a priori ([KOH82, KOH89]). In a first phase, the neural network learns the topology of the n-dimensional vector space on a given, well-chosen set of training vectors. The process consists in the selection of a winner neuron $I \in \{1, 2, \ldots, m\}$, whose weight vector is the closest to the input vector:

    $d_{in}(\vec{x}, W_I) \le d_{in}(\vec{x}, W_i) \quad \forall i \in \{1, 2, \ldots, m\}$.   (2)

Subsequently, the weights are modified according to the following relation:

    $W_i := W_i + \Lambda(d_{topo}(i, I)) \, (\vec{x}^T - W_i)$   for $i = 1, 2, \ldots, m$;   (3)

where $d_{topo}(i, I)$ is the distance of neuron i from the winner neuron I in the topological space. The neighbourhood function $\Lambda$ restricts the update to neurons close to the winner. After iterating this update operation, the weight vectors are self-organized and represent prototypes of classes of input vectors. The size of each class depends on the density of the probability distribution of the training vectors, in such a way that the selection of neurons through equation (2) is equiprobable. Input vectors belonging to the same class share some similarities. Applying equation (2), the network maps a vector with unknown features to the cluster to which its closest training vector has been mapped.

The dimension of the vectors presented as inputs to train the network is n. Figure 1 represents a 2-dimensional Kohonen network of size 4 × 4, with 3-dimensional inputs. Each component of the input vector is connected to every individual neuron in the Kohonen network (although only the connections to the neurons in the first column are represented in figure 1). Thus there is a connection between each neuron i of the Kohonen network and every component of the n-dimensional input vector.

Figure 1. A 4 × 4 Kohonen network with 3-dimensional input.
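To make the procedure concrete, here is a minimal NumPy sketch of one training step implementing equations (1)-(3). It is illustrative only: the array names, the Gaussian choice for the neighbourhood function, and the explicit learning rate eta are assumptions, not the implementation used on the accelerator described in section 3.

```python
import numpy as np

def kohonen_step(W, coords, x, sigma, eta):
    """One training step of a Kohonen map.

    W      -- (m, n) weight matrix; row i is the weight vector of neuron i
    coords -- (m, 2) grid position of each neuron in the topological space
    x      -- (n,) input vector
    sigma  -- neighbourhood width (assumed; usually shrinks during training)
    eta    -- learning rate (assumed; usually shrinks during training)
    """
    # Equation (1): Euclidean distance from x to every weight vector.
    y = np.linalg.norm(x - W, axis=1)
    # Equation (2): the winner I has the smallest distance.
    I = int(np.argmin(y))
    # Topological distance of every neuron from the winner on the grid.
    d_topo = np.linalg.norm(coords - coords[I], axis=1)
    # Equation (3) with a Gaussian neighbourhood function Lambda, which
    # restricts the update to neurons close to the winner.
    lam = eta * np.exp(-d_topo**2 / (2.0 * sigma**2))
    W += lam[:, None] * (x - W)
    return I
```

For the 4 × 4 map of figure 1, W would have shape (16, 3) and coords would list the 16 grid positions.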
2.2 Application to Power System Security Assessment

The targeted application is the classification of the static states of a power system. The goal of static security analysis is to predict whether the power flows in the branches (lines and transformers) and the bus voltages of the system will, after an unforeseen outage, exceed the supported limits of the corresponding components. The power flows of a power system may be computed by solving a set of non-linear equations using an iterative method, for instance Newton-Raphson's. The parameters of the equation set are the impedances of the power-system branches and the control variables of the system (roughly, voltage magnitude and active power generation at a few control busses, and power injections at the others). Given a current operating point of a power system, classical static security analysis considers every possible combination of outages and iteratively computes a power flow for each. This is a heavy computational burden which, for large power systems, cannot be carried out in real time on sequential computers. Several techniques, such as network flows and expert systems, have been proposed to help solve these problems. Nevertheless, the computing effort to obtain a solution in real time, even by a heuristic method, remains considerable.

Another drawback of conventional simulations lies in the quantitative nature of their results. The numerical outputs they provide require, from the power-system operators, an interpretation using the current system state and its potential stochastic evolution. Whereas in a weakly loaded system a critical power-flow value might be tolerated, it requires immediate action in a heavily loaded system. It is therefore up to the operator to analyze these results to construct a global qualitative picture of the system state. But even if it were possible to obtain contingency-analysis results for a 1000-line system in real time, assembling this pointwise information for 1000 line outages simply overburdens the operator. Since decisions in control centres often have to be taken under time pressure and stress, a fast, concise, and synthetic representation of security assessment results is essential. The application of the Kohonen algorithm to this interpretation problem has been investigated in [NIE92].

Figure 2 illustrates the shape of the security boundaries of an academic three-bus system. According to the different security criteria, the boundaries of the safe state space are given by the limits prescribed for the busses and lines. In the case of line-overload prevention, these limits are given by the maximal power supported by the lines: $|P_{ik} + jQ_{ik}| \le S_{ik,max}$, where $P_{ik} + jQ_{ik}$ is the complex power flow. A safe operating point always lies inside the boundaries of the safe space. Critical points still lie inside, but close to the boundaries. Finally, unsafe points are outside the region boundaries. Figure 2 shows the case of the linear power-system model, where the safe region of the operating space is convex.

The goal of this approach is to map the operational points of the multi-dimensional power-system state space onto a 2-dimensional Kohonen network by dividing the security space into categories. The centres of the classes are the neurons, located at the coordinates defined by the weight vectors. The 2-dimensional picture of the network gives a quite accurate interpretation of the power-system space.

Simulations performed with the non-linear power-system model have shown that the self-organizing network indeed maps safe, critical, and unsafe operational points to different clusters. Knowing that a cluster relates to some security state, mapping each operational point to one of these immediately identifies whether the system is operating safely or not. If the neural network is chosen large enough, the mapping will further split up the safe, critical, and unsafe clusters into more specific sub-clusters corresponding to different regions of the security space. This not only provides a simple labelling of the state but also identifies in which area of the state space the operational point is located. Hence, the closest boundaries can easily be identified. Since these are defined by the system constraints (for instance, the maximal currents), this identifies which limits are most likely to be violated in critical states and which ones will be violated in unsafe emergency states. This concept is illustrated in figure 3 for the small test network consisting of 3 busses and 3 lines. For larger power systems, this operating space can no longer be graphically represented.
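Once trained and labelled, the map classifies a new operating point with a single nearest-neighbour search. The sketch below assumes a labelling step that tags each neuron safe, critical, or unsafe; all names are illustrative, not the paper's code.

```python
import numpy as np

def classify_state(W, labels, x):
    """Map an operating point onto the security map.

    W      -- (m, n) trained weight matrix
    labels -- length-m sequence of 'safe' / 'critical' / 'unsafe' tags,
              one per neuron (obtained off-line, e.g. by a power-flow
              computation on each weight vector)
    x      -- (n,) feature vector of the current operating point
    """
    I = int(np.argmin(np.linalg.norm(x - W, axis=1)))  # equation (2)
    return I, labels[I]
```

The winner index I also locates the point on the 2-dimensional map, so the operator sees at a glance which cluster, and hence which security boundary, the current state is closest to.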
Figure 2. (a) A 3-bus, 3-line linear power system model. (b) The corresponding operating space. The axes represent the power flows P_ab, P_ac, and P_bc between busses a, b, and c; the boundaries P_ab,max, P_ac,max, and P_bc,max separate the safe, critical, and unsafe regions.
The operating space of the 5-bus, 7-line network of figure 4 has been quantized using the self-organizing feature map. The 14-dimensional training vectors consist of the real and imaginary parts of the 7 complex line power flows. A 7 × 7 neural net has been trained with all single and double contingencies of this system. The weights of the neurons are processed with the classic power-flow technique to label each neuron with the corresponding state-space region. In this example, with one exception, all outages of line North-South corresponding to an overload of North-Lake are classified by neurons in the lower-left corner of the map (figure 4). Similarly, all outages of North-Lake corresponding to an overload of North-South, except one, are classified by neurons in the upper-right corner. The exception is the double outage of North-Lake and North-South, which results in splitting the network in two parts. This situation is classified by the cluster consisting of neurons 47 and 48, together with other outages also resulting in disconnections and/or unsupplied power demands. Safe outages are classified by neurons in the middle of the map. After training, 184 cases not belonging to the training set, plus the 46 training vectors, have been presented to the Kohonen map, resulting in an error rate of about 6.5 %. However, only 3.4 % of unsafe cases have been misclassified as safe. All neurons whose weight-vector components wrongly predict a safe value for an overloaded line correctly predict a higher overload for a second line, which thus masks the less severe overload. All other misclassifications were on the conservative side, wrongly indicating highly loaded cases as overloaded. A detailed discussion of these results, as well as those of the IEEE 24-bus test system, can be found in [NIE92].
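As an illustration of how such training vectors can be assembled (a sketch; the ordering of the components is an assumption, and only consistency across the training set matters):

```python
import numpy as np

def training_vector(line_flows):
    """Build the 14-dimensional training vector of the 5-bus system from
    the 7 complex line power flows: real parts followed by imaginary parts.
    """
    flows = np.asarray(line_flows, dtype=complex)      # shape (7,)
    return np.concatenate((flows.real, flows.imag))    # shape (14,)
```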
Figure 3. Quantization of the operating space (axes P_ab, P_ac, and P_bc; safe, critical, and unsafe regions).
Figure 4. A 5-bus, 7-line system (busses North, South, Lake, Main, and Elm; generators G-North at the slack bus and G-South) and the corresponding 7 × 7 feature map (neurons 0-48). Labelled example clusters: 6: North-Lake & S-M; 28: North-South & Lake-Main; 42: North-South & G-South; 47: South-Elm & Main-Elm; 48: G-North & G-South, North-Lake & North-South.
2.3 Computational Requirements

In the proposed application, a new security analysis and a new neural-net training must be performed every day. This is necessary to take into account the daily-changing operating conditions of the power system. Although a utility controls only a small part of the system, power-system simulation has to take into account a major part of the network in order to yield accurate results. In the case of a small country like Switzerland, the transmission network consists of approximately 150 busses and 250 lines. In our application, this requires processing input vectors composed of 300 elements. At the same time, the number of neurons in the feature map may not realistically be much more than 1000, since for each of them an expensive power-flow computation has to be performed after the neural-network training. Considering that 10^5 iterations is an ordinary training length for a large Kohonen network, the number of connection updates necessary for a real-world power system may be roughly evaluated: a minimum of 300 × 1000 × 10^5 = 3 × 10^10 connection updates are necessary for a daily training on the Swiss transmission system. Such an amount of computation takes more than 8 hours on a conventional workstation (approx. 1 MCUPS), which is hardly feasible in practice. An increase in computational power of one to two orders of magnitude is required. Accelerators such as the one proposed here will drastically reduce the learning times for larger power systems down to acceptable values. Such performance should be available, for an engineered system, at approximately the price of a high-performance workstation.
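The estimate can be checked with a few lines of arithmetic, using only the figures quoted above and the peak Kohonen learning rate of table 1 (section 3.4):

```python
# Daily training cost for the Swiss transmission network.
weights = 300 * 1000              # 300 inputs x ~1000 neurons
updates = weights * 10**5         # 10^5 training steps -> 3e10 updates

hours_workstation = updates / 1e6 / 3600    # ~1 MCUPS workstation
print(hours_workstation)                     # ~8.3 -> "more than 8 hours"

minutes_mantra = updates / 100e6 / 60        # 100 MCUPS peak (table 1)
print(minutes_mantra)                        # ~5 minutes
```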
3 The Neural Network Accelerator

A careful analysis of the neural-network model used for this application and described in section 2.1 shows that it can be implemented in specialized hardware with only a small set of operations. A small extension of this operation set makes it possible to map other neural algorithms onto the same hardware. Therefore, the accelerator has been designed to provide the basic operations for the following models:

- Mono-layer networks: perceptron and delta rule.
- Multi-layer feed-forward networks: back-propagation rule.
- Fully-connected recurrent networks: Hopfield model.
- Self-organizing feature maps: Kohonen model.

A comprehensive description of these algorithms can be found in any classic introductory book on neural networks (e.g., [HER91]).

3.1 The GENES IV systolic array

A VLSI digital circuit called GENES IV was designed. The present section provides a short overview of this chip; a more detailed description can be found in [IEN93]. A GENES IV array is a square mesh of simple processing elements (PEs). Each PE is connected by serial lines to its four neighbours, as shown in figure 5(a). All input and output operations are performed by the PEs located on the north-west to south-east diagonal. A GENES IV system implements six different fixed-point operations:

1. Matrix-vector product (which can also be considered as the scalar or dot product between a vector and each row of a matrix).
2. Squared Euclidean distance (between a vector and each row of a matrix).
3. Hebbian learning rule (synaptic weight updating).
4. Kohonen learning rule (synaptic weight updating).
5. Search for the largest element in a vector.
6. Search for the smallest element in a vector.

Figure 5. (a) Square systolic array of GENES IV processing elements. (b) Architecture of a GENES IV diagonal PE, with the operations PS := PS + W · D, PS := PS + (D − W)², maximum and minimum search between PS and D, W := W + PS · D, and W := W + PS · (D − W).

Figure 5(b) shows the architecture of a diagonal PE (non-diagonal PEs require neither the input/output signals UIN, LIN, UOUT, and LOUT, nor the corresponding three multiplexers). The key features of the proposed system include:

- A systolic flow of instructions (through dedicated registers), designed such that instructions accompany the corresponding data. This feature avoids the need to empty and refill the system at each operation change, hence ensuring a maximum utilization rate.
- Two registers, W0 and W1, holding the synaptic weights. This gives the GENES IV architecture the ability to deal efficiently with virtual matrices larger than the physical array. The virtual matrix is divided into sub-matrices that fit on the array, which is then time-shared between them. A sub-matrix can therefore be used for computation while the previous one is saved to memory and the next one is simultaneously loaded, hence maintaining a 100 % utilization rate.
- Four multiplexers, to exchange the functionality of rows and columns. Any operation can therefore be executed on the transpose of the stored matrix (required to implement the back-propagation rule).

The computation is performed on fixed-point values. The components of input vectors are coded on 16 bits, as are the weights; the latter have 16 additional bits used only during learning (weight-update operations). Outputs are computed on 40 bits. A VLSI chip with a sub-array of 2 × 2 PEs has been designed in a 1 µm CMOS standard-cell technology. It contains 71,690 transistors (3,179 standard cells) on a die measuring 6.3 × 6.1 mm².
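The following behavioural sketch mimics in software the two operations of this list that the Kohonen application needs. It models only the arithmetic semantics (integer data accumulated row by row), not the bit-serial hardware or the systolic instruction flow; the function names are illustrative.

```python
import numpy as np

def matrix_vector_product(W, d):
    """Operation 1: each row of PEs accumulates PS := PS + W * D."""
    ps = np.zeros(W.shape[0], dtype=np.int64)
    for j in range(W.shape[1]):           # data word d[j] enters column j
        ps += W[:, j].astype(np.int64) * int(d[j])
    return ps

def squared_euclidean(W, d):
    """Operation 2: each row of PEs accumulates PS := PS + (D - W)^2."""
    ps = np.zeros(W.shape[0], dtype=np.int64)
    for j in range(W.shape[1]):
        diff = int(d[j]) - W[:, j].astype(np.int64)
        ps += diff * diff
    return ps
```

Combined with the minimum-search operation (operation 6), squared_euclidean provides exactly what equations (1) and (2) require, which is why the Kohonen map fits this small operation set so naturally.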
Figure 6. Architecture of the MANTRA I system. The SIMD part comprises the GENES IV array, the weight memory and FIFOs, the XY memories and FIFOs, the Sigma unit, the GACD1 array, the function-of-Y unit, and the desired-output memory and FIFO; the SISD control part is built around a TMS320C40 with its memory, FIFO, interrupt controller, and communication links.
3.2 The MANTRA I system

The MANTRA I machine is a neuro-computer based on GENES IV circuits. The computational heart of the Single Instruction stream, Multiple Data stream (SIMD) module is a square array of 40 × 40 PEs running at 10 MHz. This array is shown in the top-left corner of figure 6. A large section of the SIMD part (delimited by a dashed line in the figure) is devoted to an input/output system with a bandwidth large enough to keep the systolic array busy. Two look-up tables and another VLSI chip, called GACD1, which performs auxiliary computations for the delta and back-propagation rules, also belong to this part. The control part is a complete Single Instruction stream, Single Data stream (SISD) system based on a commercial microprocessor, the TMS320C40 from Texas Instruments. Its tasks include configuration of the SIMD part, instruction dispatching, and input and output management. It also handles the communications with a host computer and between multiple MANTRA I accelerators.

The system (see figure 7) consists of four printed-circuit boards: (1) the processor board (33 chips), (2) the control board (135 chips), (3) the input/output board (465 chips, including 14 GACD1 chips), and (4) the GENES IV array board (containing a matrix of 10 × 10 = 100 GENES IV chips). One such board is required for configurations of up to 20 × 20 = 400 PEs; for larger configurations, up to 40 × 40 = 1600 PEs, four PE-array boards should be interconnected. A more detailed description of the MANTRA I architecture can be found in [VIR93].

Figure 7. The MANTRA I machine.
3.3 System Integration and Software

The control processor of the MANTRA I machine (i.e., the TMS320C40 DSP) is linked to a front-end Sun SPARCstation through a second TMS320C40 processor. This second DSP is interfaced directly to the SPARCstation's SBus. The two DSPs communicate through one of the six 8-bit dedicated communication links available on this type of microprocessor. From a software point of view, the intermediate DSP is transparent.

The MANTRA I machine communicates with user processes, running on the Unix front-end computer, in a client/server fashion. No more than one user process may be connected to the MANTRA I machine at a time. A user process issues remote procedure calls, which are executed by the server program running on the MANTRA I control processor (figure 8). This implies that the potential parallelism between the front-end workstation and the MANTRA I SISD component has not been exploited so far: for the sake of software simplicity, processes on the Unix system wait idle until the end of the remote procedure call.

The parallelism of the SIMD module is hidden from external users. The systolic machine is accessed via a set of neural-network procedures, implemented ad hoc on the MANTRA I hardware in low-level microcode and provided to the users as Unix libraries. Ordinary users currently have no way to explicitly program the MANTRA I machine or to control its inherent parallelism directly from client programs. An automatic scheduler is being developed to simplify the programming model, essentially hiding the machine parallelism from the low-level programmer as well. The neural-network algorithms currently available on the MANTRA I machine are the delta rule for a single layer of perceptrons and Kohonen feature maps (currently under test). The implementation of back-propagation learning for multi-layer perceptrons is planned for the near future.
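From the client's point of view, a session therefore reduces to a handful of library calls. The sketch below is purely hypothetical: the paper does not name the library's procedures, so every identifier here is invented to illustrate the client/server usage pattern.

```python
import numpy as np

import mantra  # hypothetical Python wrapper around the Unix RPC library

net = mantra.connect("/dev/mantra0")        # single client process at a time
net.configure_kohonen(rows=7, cols=7, inputs=14)

for x in np.load("training_vectors.npy"):   # 14-dimensional vectors
    net.kohonen_step(x)                     # blocking remote procedure call

W = net.read_weights()                      # retrieve the trained map
net.close()
```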
3.4 Performance Evaluation

There are two traditional metrics to measure the computational power of neural-network computers: (1) the number of connections processed per unit time (measured in millions of connections per second, MCPS) for the evaluation phase, and (2) the number of connections updated per unit time (MCUPS) for the learning phase. Table 1 shows the peak performance of a 40 × 40-PE MANTRA I system for some implementable algorithms (performance for other models is data dependent). As a comparison, a NEC SX-3 performs 130 MCUPS ([MUL92]). This comparison is however not quite fair, since the NEC SX-3 uses floating-point numbers, while MANTRA I uses integers. On the other hand, provided the algorithm converges in both cases, floating-point numbers only give the user a more comfortable programming paradigm.

The implementation of the algorithm is now under way. The first performance measurements relate to the learning of the 5-bus, 7-line system on a 4 × 5 Kohonen map. The current hardware configuration is one fourth of the maximum array size (i.e., a 20 × 20-PE array) and runs at 8 MHz. The measured performance is 7.20 MCUPS, or 36 % of the peak performance (20 MCUPS in this configuration). This factor is due to the need to send the learning data to the array through the FIFO (dynamic utilization rate: 52 %), to the fact that only 14/20ths of the array are actually used for computation (where 14 is the number of inputs of the network; spatial static utilization rate: 70 %), and to array initialization and unloading (temporal static utilization rate: 99 %); indeed, 0.52 × 0.70 × 0.99 ≈ 0.36. Larger network configurations may imply a further deterioration of the dynamic utilization rate, due to the need to save the weight matrix into the processor memory through the FIFO.

4 Conclusions

Until now, single-contingency analysis of large-scale power systems (1000 busses) could only be performed in a reasonable amount of time by supercomputers, such as a CRAY, and double contingencies present a major problem even for supercomputers. The acquisition and maintenance costs of supercomputers are not acceptable for all those utilities who exploit only a small part of the power system but who, for security reasons, need to simulate the major part of the surrounding power system. Neural networks, thanks to their intrinsic parallelism, make it possible to develop cost-effective specialized hardware whose performance matches real-world computational requirements. The GENES IV systolic architecture presented here is a good candidate for an SIMD processor array. Its strength comes from its large number of simple PEs, which can sustain a utilization rate close to 100 %. Such an array has been integrated in the prototype MANTRA I neuro-computer. Its design was mainly focused on sustaining the instruction flow and the input/output data flows required to keep the GENES IV array busy. The peak performance figures, together with the first measurements on the prototype, indicate that the computational power matches the requirements of the proposed application.
Acknowledgements

The present work is supported by the Swiss National Fund for Scientific Research through the SPP-IF program and by the Swiss Federal Institute of Technology (EPFL) through the MANTRA project. The research on power-system security assessment has been co-funded by the EPFL and by the Jet Propulsion Laboratory, California Institute of Technology. The authors wish to thank Profs. Alain Germond and Jean-Daniel Nicoud for their constant support. The original works of François Blayo and Christian Lehmann provided the bases of the accelerator architecture. Special thanks go to Peter Brühlmeier, Giorgio Caset, André Guignard, Christophe Marguerat, and Georges Vaucher for their help in designing and building the hardware, and in writing the system software.
Figure 8. MANTRA I system integration. User workstations reach the front-end workstation over Ethernet; a Mantra client process on the front end talks through the DSP link to the Mantra server process on the SISD component of MANTRA I, which drives the SIMD component.
Model                          Evaluation phase   Learning phase
Perceptron, delta rule         400 MCPS           200 MCUPS
Back-propagation rule          400 MCPS           133 MCUPS
Kohonen with minimum/maximum   200 MCPS           100 MCUPS

Table 1. Peak performance of the MANTRA I system.
References

[HER91] John Hertz, Anders Krogh, and Richard G. Palmer. Introduction to the Theory of Neural Computation. Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley, Redwood City, Calif., 1991.

[IEN93] Paolo Ienne and Marc A. Viredaz. GENES IV: A bit-serial processing element for a multi-model neural-network accelerator. In Luigi Dadda and Benjamin Wah, editors, Proceedings of the International Conference on Application Specific Array Processors, pages 345-56, Venezia, October 1993.

[KOH82] Teuvo Kohonen. Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43(1):59-69, 1982.

[KOH89] Teuvo Kohonen. Self-Organization and Associative Memory, volume 8 of Springer Series in Information Sciences. Springer-Verlag, Berlin, third edition, 1989.

[MUL92] Urs A. Müller, Bernhard Bäumle, Peter Kohler, Anton Gunzinger, and Walter Guggenbühl. Achieving supercomputer performance for neural net simulation with an array of digital signal processors. IEEE Micro, pages 55-65, October 1992.

[NIE92] Dagmar Niebur and Alain J. Germond. Unsupervised neural network classification of power system static security states. Journal of Electrical Power and Energy Systems, 14(2-3):233-42, April-June 1992.

[NIE93] Dagmar Niebur et al. Artificial neural networks for power systems: A literature survey. International Journal of Engineering Intelligent Systems for Electrical Engineering and Communications, 1(3):133-58, December 1993. CIGRE TF Report no. 38.06.06.

[SOB89] D. J. Sobajic and Y.-H. Pao. Artificial neural-net based dynamic security assessment for electric power systems. IEEE Transactions on Power Systems, 4(1):220-28, February 1989.

[VIR93] Marc A. Viredaz. MANTRA I: An SIMD processor array for neural computation. In Peter Paul Spies, editor, Proceedings of the Euro-ARCH'93 Conference, pages 99-110, München, October 1993.

[WIL78] G. L. Wilson and P. Zarakas. Anatomy of a blackout. IEEE Spectrum, 15(2):38-46, February 1978.