Visual Processing Platform Based on Artificial Retinas

Sara Granados, Eduardo Ros, Rafael Rodríguez, and Javier Díaz
Department of Computer Architecture and Technology, University of Granada, Spain
{sgranados,eduardo,rrodriguez,jdiaz}@atc.ugr.es

Abstract. We present a system that integrates a retinomorphic chip into a platform comprising a board with reconfigurable hardware (an FPGA device) and a conventional computer, in order to evaluate image processing schemes, such as motion detection, based on this front-end. We have used an artificial retina that transforms light intensity into spikes and sends them using an event-driven protocol. To set up this development platform, we have built a driver for an FPGA board that acts as an interface between the retina and a personal computer, on which we store the grabbed spikes. We have also developed software modules to test the vision algorithms before programming them in a Hardware Description Language.

Keywords: Artificial retina, retinomorphic, bio-inspired, vision, FPGA, AER.

1 Introduction

The human visual system is more efficient and faster than current robotic systems starting from the very reception stage, the retina, which not only captures intensity and colour but also reduces the workload of the visual cortex [5] by taking advantage of significant focal-plane processing. This reduction is mainly due to the pre-processing performed by the retinal ganglion cells, which directly extract information related to spatio-temporal transitions and transfer it using neural spikes [1]. Moreover, the retinal system shrinks the data bandwidth thanks to its event-driven communication scheme [2].

Most visual cortex neurons respond to linear edge orientations in a hierarchical organization that follows a serial processing model: while simple cells react to light stimuli, complex cells respond to orientation parameters [15]. Neurons are organized in a columnar architecture in which the neurons of a given column share the same attributes. Based on this idea, several important research groups are focusing on neuromorphic prostheses. In particular, the Cortivis project [8, 10, 11] aims to develop prototypes in the field of visual rehabilitation and to demonstrate the feasibility of a cortical neuro-prosthesis (interfaced with the visual cortex) as an aid to profoundly blind people. Meanwhile, other important research efforts are currently concentrated on the development of bio-inspired vision systems that will provide an alternative to conventional sensors (i.e. cameras) and grabbing systems. In particular, Boahen's artificial retina [1, 4] mimics the mammalian retina on a silicon chip.

In this paper, we introduce a simulator able to emulate the behaviour of an artificial retina. Moreover, we have designed a spike-grabber that records spikes from Boahen's retina and stores them in a computer for further off-line processing. In the next section, the retinomorphic chip is described, and the spike-grabber and simulator designs are outlined. Section 3 summarizes the work's outcome and Section 4 provides some conclusions.

F. Sandoval et al. (Eds.): IWANN 2007, LNCS 4507, pp. 506–513, 2007. © Springer-Verlag Berlin Heidelberg 2007

2 Material and Methods

2.1 Development Platform

The development platform consists of an artificial retina [1] connected to a processing board (the FPGA board). The FPGA board transforms the received spikes into interpretable event data and sends them to the PC through the PCI bus. On the PC, besides the spike-grabber that receives the information from the FPGA board, there is the simulator, which contains the processing modules (such as the motion-detection module); these can be used either with information from the spike-grabber or with video sequences (in AVI format) transformed into spike sequences. Fig. 1 outlines the whole platform.

Fig. 1. Development platform: the artificial retina is connected to the FPGA board, which in turn is connected to the PC. The PC hosts the spike grabber, the retinal simulator, and the vision processing modules; the diagram also shows robots as a possible target of the processed output.

The FPGA board will be used not only as a data interface but also as the platform for the on-line post-processing modules. Although this work focuses on off-line processing of the spikes grabbed from the artificial retina, the next step is to implement the processing modules in the reconfigurable board in order to achieve higher-level processing tasks in real time.

2.2 Retina Model

First of all, it is necessary to understand the artificial retina's behaviour and its communication protocol. As specified in [1], the artificial retina model consists of four types of ganglion cells: two are sustained or static (ON and OFF) and the other two are transient (increasing and decreasing). ON sustained cells respond to increases in signal amplitude, whereas OFF sustained cells fire in response to decreases in a similar fashion. ON transient cells pick up the increases at the excitatory centre's leading edge, while OFF transient cells pick up the decreases. After this transduction (the transformation of energy changes into a bioelectrical signal), the picked-up information is sent to the processing engines.

The artificial retina communication protocol [3] is based on a four-phase handshake [7]. Whenever a ganglion cell is activated, its address and its type are sent onto a single bus. Fig. 2 outlines this Address-Event Representation (AER) schematically.

Fig. 2. AER protocol scheme (a state machine with states RESET, NO_DATA, READ_Y, and READ_X): Rx and Ry are control signals that specify whether new data is ready and what kind of cell is active; ACK (acknowledge) is the handshaking signal. Communicating a single row-column address pair involves a sequence of eight transitions in the row request (Ry, active-high), column request (Rx, active-low), and acknowledge (Ack, active-high for Ry but active-low for Rx) signals [3].

AER is an event-driven protocol: no information is exchanged unless a change takes place in the incoming stimuli. This characteristic dramatically reduces the data load to a minimum, making the communication as efficient as possible.
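As an illustration of how a receiver might walk this handshake, the following Python sketch latches a row address on the row request and a column address on the column request. The signal polarities follow the Fig. 2 caption, but the exact sampling format and phase encoding are our own assumptions for illustration, not the chip's actual interface:

```python
def aer_receive(samples):
    """Toy AER receiver sketch. 'samples' is a list of (Ry, Rx, bus)
    values observed on successive bus transitions. Per the Fig. 2
    caption, Ry is active-high and Rx is active-low, so we treat
    (Ry=1, Rx=1) as a row request and (Ry=0, Rx=0) as a column
    request (an assumed encoding)."""
    events, row = [], None
    for ry, rx, bus in samples:
        if ry == 1 and rx == 1:        # row request: latch the Y address
            row = bus
        elif ry == 0 and rx == 0:      # column request: latch the X address
            if row is not None:
                events.append((row, bus))  # one complete address event
                row = None
    return events

# A row phase followed by a column phase yields one (row, col) event.
print(aer_receive([(1, 1, 5), (0, 0, 9)]))  # [(5, 9)]
```

Only cells that fire generate traffic, which is what keeps the bus load proportional to scene activity rather than to resolution.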

This retinomorphic chip transforms photons into spikes and transmits them using only 0.06 W [16]. This bio-inspired vision system can be interfaced with other neural-like chips to test different vision schemes based on this kind of sensor [9].

2.3 Spike-Grabber Design

We have built a spike-grabber using reconfigurable hardware to capture the signals generated by the retina with high temporal resolution. It can also be interfaced with other retinal-model development platforms [8]. We have used the Xirca V2 processing board (www.sevensols.com) to receive the retina data and transform it into a format suitable for visualization or storage. The board has a Xilinx Virtex II FPGA device, three independently accessible SRAM banks, a PCI bus interface, and a vision extension board with a VGA controller and other connectors. We have employed a Hardware Description Language (HDL), specifically VHDL, to program the FPGA. Our final system is therefore composed of the artificial retina [1], the Xirca V2 board, and a personal computer. Table 1 summarizes the main FPGA resources used by the spike-grabber.

Table 1. Summary of FPGA resources used by the two spike-grabber versions developed: visualization (which needs no personal computer) and storage. The FPGA is a Xilinx Virtex II XC2V3000, a three-million-gate device. *This version uses the external SRAM modules for data storage.

Development version              Number of slices    % occupation    % on-chip memory
Spike-Grabber – Visualization    136                 1               2
Spike-Grabber – Storage          634                 4               0*
We need to store information from the retina while the FPGA keeps reading further spikes from it. The difficulty is that sending information through the Xirca V2 PCI bus involves storing it in the SRAM modules, which have only one read/write bus each, so we need to switch between them without losing any data. Although PCI bus arbitration is managed by a specific device (a CPLD), the memory exchange has to be controlled by the designer. To solve this problem, we have developed a handshaking protocol in which the FPGA acts as master and activates control flags, blocking and unblocking the SRAM modules, while the PC acts as slave, reading information only when the FPGA is not using that bank. It is also necessary that the spikes from the retina are stored in chronological order. This requirement forces us to use a four-phase handshake [7] like the one used for the communication between the FPGA and the artificial retina. Furthermore, the FPGA has to translate the spikes into readable information by extracting the cell type, which is codified in the least significant bits of the spike-encoded addresses, as shown in Fig. 3.
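The bank-switching idea can be sketched in software terms as a ping-pong buffer. The following is a Python stand-in for the VHDL logic, not the actual implementation; the class and method names are purely illustrative:

```python
class PingPongBuffer:
    """Two-bank ping-pong sketch: the FPGA (producer) fills one SRAM
    bank while the PC (consumer) drains the other. A per-bank 'ready'
    flag plays the role of the blocking control flags described above."""
    def __init__(self):
        self.banks = [[], []]
        self.write_bank = 0            # bank the FPGA currently owns
        self.ready = [False, False]    # bank released to the PC

    def fpga_write(self, spike):
        """FPGA appends spikes in arrival order (chronological)."""
        self.banks[self.write_bank].append(spike)

    def fpga_swap(self):
        """FPGA releases the full bank and starts filling the other."""
        self.ready[self.write_bank] = True
        self.write_bank ^= 1

    def pc_read(self):
        """PC may only drain a bank the FPGA has released."""
        bank = self.write_bank ^ 1
        if not self.ready[bank]:
            return None                # FPGA still owns both banks
        data, self.banks[bank] = self.banks[bank], []
        self.ready[bank] = False
        return data

buf = PingPongBuffer()
buf.fpga_write("spike-A")
buf.fpga_write("spike-B")
assert buf.pc_read() is None   # bank not yet released: PC must wait
buf.fpga_swap()
print(buf.pc_read())           # ['spike-A', 'spike-B']
```

Because each bank is appended to in arrival order and drained whole, chronological ordering of the spikes is preserved across the swap.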

Fig. 3. A spike from Boahen's retina [1] consists of X and Y vectors that codify the spike's address (row and column) and the cell type. To transfer and store these spikes, we extract the row and column addresses and the cell type. We join X0 and X1 (the two least significant bits of vector X) and Y0 (the least significant bit of vector Y) to obtain the cell type. The row address is extracted from the remaining bits of vector X, and the column address from the remaining bits of vector Y.
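A software version of the address-splitting described in Fig. 3 might look as follows; the shift amounts follow the bit layout in the caption, and treating X and Y as plain integers is the only other assumption:

```python
def decode_spike(x: int, y: int):
    """Split raw X/Y address vectors into (row, col, cell_type).
    Per Fig. 3: cell type = bits {X1, X0, Y0}; the row address is
    the remaining bits of X, the column address the remaining bits
    of Y."""
    cell_type = ((x & 0b11) << 1) | (y & 0b1)  # concatenate X1 X0 Y0
    row = x >> 2   # drop X1, X0
    col = y >> 1   # drop Y0
    return row, col, cell_type

# For x = 0b101101, y = 0b10010: row = 0b1011, col = 0b1001, type = 0b010.
print(decode_spike(0b101101, 0b10010))  # (11, 9, 2)
```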

2.4 Simulator Design

Programming vision algorithms in an HDL is a hard task because of the lack of specific libraries and complex data structures in this kind of language. Moreover, small changes involve great effort and long redesign times. To ease these difficulties, we have developed a software platform, programmed in Matlab, that extracts spikes from ordinary videos (in AVI format) and stores them just as our spike-grabber does. This provides an environment in which to evaluate our algorithms before programming them in VHDL [10] to deal with real retinal signals in real time.

As explained in the previous section, the sustained cells of our retina model extract information about spatial changes in light, which is similar to an edge-extraction scheme [15]. There is a wide variety of edge-extraction algorithms, such as the Canny filter [14]. Nevertheless, since we aim to emulate the bio-inspired artificial retina, we chose a Difference of Gaussians (DoG) model, which compares the light at one pixel with that of its neighbourhood [14]. Transient events, meanwhile, are fired in response to temporal changes in a local area.
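A minimal numerical sketch of this DoG stage follows, in Python/NumPy rather than the Matlab of the actual platform; the Gaussian widths and firing threshold are illustrative values, not the paper's:

```python
import numpy as np

def dog_spikes(frame, sigma_c=1.0, sigma_s=2.0, thresh=0.05):
    """Sustained-cell sketch: centre Gaussian minus surround Gaussian.
    Pixels where the DoG response exceeds +thresh fire ON events;
    pixels below -thresh fire OFF events. Returns (on, off) as arrays
    of (row, col) indices."""
    def gauss_blur(img, sigma):
        # Separable Gaussian blur via 1-D convolutions on rows/columns.
        r = int(3 * sigma)
        x = np.arange(-r, r + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        k /= k.sum()
        out = np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 0, img)
        return np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 1, out)

    dog = gauss_blur(frame, sigma_c) - gauss_blur(frame, sigma_s)
    on = np.argwhere(dog > thresh)     # ON sustained events
    off = np.argwhere(dog < -thresh)   # OFF sustained events
    return on, off

# A vertical step edge fires ON events on its bright side and OFF
# events on its dark side, mimicking edge extraction.
frame = np.zeros((20, 20))
frame[:, 10:] = 1.0
on, off = dog_spikes(frame)
```

Running the DoG per frame and keeping only the indices (not the full image) is what turns a dense video into a sparse, retina-like spike stream.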

3 Results

As shown in Fig. 4, the final software platform allows the user to adjust different parameters, such as the inner and outer Gaussian ratios. We have also emulated the refractory period of the firing cells, a biological feature of neuronal systems that establishes the amount of time an excitable membrane needs before it can be effectively excited again after firing a spike [5].

Moreover, we have included in this software a first approach to motion detection. It consists of a simple algorithm that tries to match the spikes in the current interval (frame) with the previous interval's activity in a local spatial neighbourhood. We therefore exploit not only a spatial correlation but also a temporal one. In this way we can test spiking-based processing schemes and directly validate them with real artificial-retina signals (taking advantage of all the processing that takes place at the focal plane). To connect this platform with the spike-grabber, we have added an interface that can load and visualize the spikes received from the artificial retina. These spikes can also be used as input to our motion-detection algorithm in order to check how it behaves with real stimuli.
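The interval-matching idea can be sketched as follows; the (row, col) event format and the square (Chebyshev) neighbourhood are our illustrative choices, not necessarily the platform's exact ones:

```python
def match_spikes(prev_events, curr_events, radius=1):
    """First-pass motion-detection sketch: a spike in the current
    interval is flagged as 'moving' if any spike in the previous
    interval fell inside its local spatial neighbourhood (a square
    of the given radius). Events are (row, col) pairs."""
    prev = set(prev_events)
    moving = []
    for (r, c) in curr_events:
        if any((r + dr, c + dc) in prev
               for dr in range(-radius, radius + 1)
               for dc in range(-radius, radius + 1)):
            moving.append((r, c))
    return moving

# The event at (5, 6) has previous-interval activity one pixel away,
# so it correlates; the isolated event at (20, 20) does not.
print(match_spikes([(5, 5)], [(5, 6), (20, 20)]))  # [(5, 6)]
```

This combines the spatial correlation (neighbourhood search) with the temporal one (previous vs. current interval) described above.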


Fig. 4. Retina simulation platform: a spike film is extracted using DoG filters. The extracted events are stored as a film, and the different cell types are colour-coded (the colours have been inverted to make them easier to see). A sequence is translated into spikes with the same format used by the artificial retina [1].


4 Conclusions

The work described here is a development platform that facilitates interfacing retinomorphic sensors for further research on vision algorithms, whether for real-time processing or off-line evaluation. Furthermore, we have developed the drivers necessary to use a real artificial retina on a prototyping board. This is useful for integrating the system with other real-time processing technologies, such as image processing architectures on FPGA devices [6]. As future work, we plan to investigate spike-based processing schemes for low-level vision tasks (motion, stereo, motion-in-depth, etc.). We also plan to integrate high-performance spike processing engines [12, 13] into the development platform to study different vision schemes efficiently.

Acknowledgments. This work has been supported by the EU grant DRIVSCO (FP6-IST-016276-2) and the Spanish National Grant DPI-2004-07032.

References

[1] Boahen, K.: A Retinomorphic Chip with Parallel Pathways: Encoding ON, OFF, INCREASING, and DECREASING Visual Signals. Journal of Analog Integrated Circuits and Signal Processing 30(2), 121–135 (2002)
[2] Boahen, K.: A Burst-Mode Word-Serial Address-Event Link-I: Transmitter Design. IEEE Transactions on Circuits and Systems I 51(7), 1269–1280 (2004)
[3] Boahen, K.: A Burst-Mode Word-Serial Address-Event Link-II: Receiver Design. IEEE Transactions on Circuits and Systems I 51(7), 1281–1291 (2004)
[4] Culurciello, E., Etienne-Cummings, R., Boahen, K.: A Biomorphic Digital Image Sensor. IEEE Journal of Solid-State Circuits 38(2), 281–294 (2003)
[5] Kolb, H., Lipetz, L.E.: The anatomical basis for colour vision in the vertebrate retina. In: Vision and Visual Dysfunction, vol. 6: The Perception of Colour. Macmillan Press (1991)
[6] Díaz, J., Ros, E., Pelayo, F.J., Ortigosa, E.M., Mota, S.: FPGA-based real-time optical-flow system. IEEE Transactions on Circuits and Systems for Video Technology 16(2), 274–279 (2006)
[7] Mead, C.A.: Introduction to VLSI Systems. Addison-Wesley, Reading, MA (1980)
[8] Morillas, C., Romero, S., Martínez, A., Pelayo, F., Ros, E., Fernández, E.: A design framework to model retinas. Biosystems 87(2-3), 156–163 (2007)
[9] Pelayo, F.J., Ros, E., Arreguit, X., Prieto, A.: VLSI Implementation of a Neural Model Using Spikes. Analog Integrated Circuits and Signal Processing 13(1/2), 111–121 (1997)
[10] Pelayo, F.J., Romero, S., Morillas, C., Martínez, A., Ros, E., Fernández, E.: Translating Image Sequences into Spike Patterns for Cortical Neuro-stimulation. Neurocomputing 58-60, 885–892 (2004)
[11] Pelayo, F.J., Martínez, A., Romero, S., Morillas, C.A., Ros, E., Fernández, E.: Cortical Visual Neuro-Prosthesis for the Blind: Retina-Like Software/Hardware Preprocessor. In: IEEE-EMBS International Conference on Neural Engineering (NER-2003), Capri (March 2003)
[12] Ros, E., Ortigosa, E.M., Agís, R., Arnold, M., Carrillo, R.: Real-Time Computing Platform for Spiking Neurons (RT-Spike). IEEE Transactions on Neural Networks 17(4), 1050–1063 (2006)
[13] Ros, E., Carrillo, R., Ortigosa, E.M., Barbour, B., Agís, R.: Event-driven simulation scheme for spiking neural networks using look-up tables to characterize neuronal dynamics. Neural Computation 18(12), 2959–2993 (2006)
[14] Sonka, M., Hlavac, V., Boyle, R.: Image Processing, Analysis and Machine Vision. Brooks/Cole Publishing Company (1999)
[15] Stryer, L.: Visual excitation and recovery. Journal of Biological Chemistry 266 (1991)
[16] Zaghloul, K.A., Boahen, K.A.: Optic Nerve Signals in a Neuromorphic Chip I: Outer and Inner Retina Models. IEEE Transactions on Biomedical Engineering 51(4), 657–666 (2004)
