Journal of Physiology - Paris 98 (2004) 281–292 www.elsevier.com/locate/jphysparis
Visual guidance based on optic flow: a biorobotic approach
Nicolas Franceschini *
Laboratory ‘‘Motion and Perception’’, CNRS and Univ. de la Méditerranée, 31 Chemin J. Aiguier, Marseille, France
Available online
Abstract

This paper addresses some basic questions as to how vision links up with action and serves to guide locomotion in both biological and artificial creatures. The thorough knowledge gained during the past five decades on insects' sensory-motor abilities and the neuronal substrates involved has provided us with a rich source of inspiration for designing tomorrow's self-guided vehicles and micro-vehicles, which will be able to cope with unforeseen events on the ground, under water, in the air, in space, on other planets, and inside the human body. Insects can teach us some useful tricks for designing agile autonomous robots. Since constructing a ‘‘biorobot’’ first requires exactly formulating the biological principles presumably involved, it gives us a unique opportunity of checking the soundness and robustness of these principles by bringing them face to face with the real physical world. ‘‘Biorobotics’’ therefore goes one step beyond computer simulation. It leads to experimenting with real physical robots which have to pass the stringent test of the real world. Biorobotics provides us with a new tool, which can help neurobiologists and neuroethologists to identify and investigate worthwhile issues in the field of sensory-motor control. Here we describe some of the visually guided terrestrial and aerial robots we have developed since 1985 on the basis of our biological findings. All these robots behave in response to the optic flow, i.e., they work by measuring the slip speed of the retinal image. Optic flow is sensed on-board by miniature electro-optical velocity sensors. The very principle of these sensors was based on studies in which we recorded the responses of single identified neurons to single photoreceptor stimulation in a model visual system: the fly's compound eye.

© 2004 Elsevier Ltd. All rights reserved.

Keywords: Vision; Flies; Visuo-motor control; Biomimetics; Biorobotics; Bionics
1. Introduction

Animals and humans are all able to move about autonomously in complex environments. These natural ‘vehicles’ [9] provide us with eloquent proof that physical solutions to elusive problems such as those involved in visually-guided locomotion existed long before roboticists started tackling these problems in the 20th century. Over the past two decades, some research scientists have been attempting to tap biology for ideas as to how to design smart visually-guided vehicles, e.g. [2,3,6,9,10,13,14,18,22,26,31,41,44,46–49,59–63,66–70,73–75,78,80,84–88,92,101,102,109]. Some authors have used both the principles and the details of biological signal processing systems to produce mobile seeing machines. Many innovations owe their existence to arthropods, particularly insects, which were largely dismissed in the
* Tel.: +33-4-91164129; fax: +33-4-91220875. E-mail address: [email protected] (N. Franceschini).
0928-4257/$ - see front matter 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.jphysparis.2004.06.002
past as being dumb invertebrates able to perform only stereotyped manoeuvres. Insects have quite a wide behavioral repertoire, however, and they can teach us how to cope with complex, unpredictable environments using smart sensors and limited processing resources. Flying insects, in particular, have developed widely and account for about three quarters of all animal species. They often attain a level of skill, agility and circuit miniaturization which greatly outperforms that of both vertebrate animals and our present day mobile robots. Insects’ sensory-motor control systems are admirable feats of integrated optronics, neuronics and micromechatronics. Their neural circuits are highly complex, in keeping with the sophisticated behavior they mediate, but unlike most (no less complex) vertebrate neural circuits, they can be investigated by looking at single, uniquely identifiable neurons, i.e., neurons that can be reliably identified in all the individuals of the species on the basis of their location in the ganglion, their exact shape and their consistent electrical responses [12,28,45, 77,94,95]. This unique advantage, which the nervous systems of insects and all arthropods have in common,
enables research workers to accumulate knowledge during anything from a few days to several decades about an individual neuron or a well-defined neural circuit. Under these conditions, it is not surprising to observe that many of the robots that have been designed by emulating part of a nervous system were inspired by arthropods [3,6,10,13,60,92,109]. The biologically based robots that we have been designing and constructing since 1985 have contributed considerably to creating the field of Biorobotics, in which natural principles or systems are implemented in the form of hardware physical models. Beyond their possible usefulness as intelligent machines, these physical models make it possible to subject biological hypotheses to the rigorous tests imposed by the real world [107–109]. The autonomous robots we are putting together at present are still far from the sophisticated industrial ‘‘microsystems’’, which benefit from the latest collective micro-manufacturing technologies. Building a complete sensory-motor control system based on a tiny chip the size of an insect brain is still beyond our grasp, but the control systems of our robots do make use of various microsystems. In addition, their processing architecture resembles that of their biological counterparts and therefore departs considerably from the mainstream Artificial Intelligence approach to mobile robotics. Indeed, software seems to be absent from animal brains, whose ‘‘intelligence’’ lies primarily in the layout of ad hoc adaptive analog circuits. Our robots are in line with the principles of ‘‘neuromorphic engineering’’ [20] because they rely on biologically based, parallel, analog and asynchronous processing systems. They also rely heavily on discrete analog electronic components (Surface Mounted Devices, SMDs). Although this technology does not lend itself to the same degree of miniaturization as analog VLSI (Very Large Scale Integration) technology [64,103], it has several unique advantages as a means of testing biological principles at physiological laboratories and achieving fast iterations between neurophysiological experiments and new robot designs and tests: low cost, low power, rapid prototyping, fast cycling between trials, no dependence on silicon brokers, opportunities for circuit tuning and component matching, etc. This approach has consistently proved to be most convenient for developing small terrestrial and even aerial robots endowed with insect-inspired visuomotor control systems. It is now proposed to describe some of these systems after briefly outlining some relevant aspects of insect vision and motion perception.
2. Fly visual microcircuitry

Our own laboratory pet is the fly, which belongs to the best known of all insect species. Flies are agile seeing
creatures that are able to navigate swiftly through the most unpredictable environments, avoiding all obstacles without any need for sonars or laser range-finders. They process their sensory signals on-board and need not be tethered to a super-computer and an external power supply. The housefly is modestly equipped with only about a million neurons (i.e., about 0.001% of the number of neurons present in the human brain) and views the world through its two panoramic compound eyes with only 3000 pixels each (i.e., roughly 1000 times less than a conventional digital camera and 40,000 times less than a human eye). Flies are objectionable in many ways, but they now add insult to injury by showing that it is definitely possible to achieve the smartest sensory-motor behavior such as 3D navigation at 500 body-lengths per second using quite modest processing resources. The front end of an insect visual system consists of a mosaic of facet lenslets (Fig. 1) and an underlying layer of photoreceptor cells forming the ‘‘retina’’ proper. The fly seems to possess one of the best organized retinae in the animal kingdom. It has been described in exceptionally great detail, with its six different spectral types of photoreceptor cells, polarization sensitive cells, and sexually dimorphic cells, which can be identified in vivo in any single individual [24,25,40]. Flying insects avoid colliding with obstacles and manage to guide themselves gracefully through their complex surroundings by processing the ‘optic flow’ (OF). The optic flow field is a vector field that gives the angular speed (direction in degrees; magnitude in rad/s) of the image of each contrasting object encountered in the environment when the animal is moving and/or when something moves in its surroundings [36,53,56]. Even when a fly is travelling through a stationary environment, the resulting optic flow field will be complex, except under special conditions such as pure translation or pure rotation. Current evidence shows that insects are able to perform the complex task of extracting the information necessary for short range navigation from the optical flow field, e.g. [16,17,38,52,58,90,93,110,104]. This ability results from the fact that the visual system is equipped with smart sensors called ‘motion detecting neurons’, which are able to gauge the relative motion between the animal and the contrasting features of the environment [65]. The fly is one of the best animal models currently available for studies on motion perception [11,21, 33,37,42,43,54,81–83,96]. A great deal has already been learned from neuroanatomical and neurophysiological studies on that part of the 3rd optic ganglion called the lobula plate, which appears as a genuine visual motion processing center. This region, which comprises approximately 65 uniquely identifiable neurons, is dedicated in particular to (i) analysing the movement of the retinal image, i.e., the optic flow field generated when
Fig. 1. Head of the blowfly Calliphora erythrocephala (male) showing the two prominent panoramic compound eyes with their facetted cornea. There are as many sampling directions (pixels) in each eye as there are facets. This photograph was taken with a laboratory-made Lieberkühn microscope based on the parabolic mirrors from two bicycle lights. On the left is a photograph of the retinotopic circuit used by the robofly (Fig. 2a) to guide itself at a high speed towards a target while avoiding obstacles on its way.
the animal is walking or flying, and (ii) transmitting the result via descending neurons to the thoracic interneurons that ultimately drive the wing-, leg-, and head-muscles [42,54,95,96]. The lobula plate neurons are collator neurons driven by a set of retinotopic ‘‘Elementary Motion Detectors’’ (EMDs). Whether in the vertebrate visual system [99] or in the insect visual system [21], the actual neural circuitry underlying the generation of directionally selective motion sensitivity in an EMD has yet to be elucidated, in spite of 45 years of neuroanatomical and neurophysiological research. In flies, two types of columnar neurons have been identified, the transmedullary neuron Tm1 and the bushy lobula neuron T5, which seem to be major players for conveying small-field motion information down to the large-field lobula plate neurons in a retinotopic way [21]. Regardless of the neural circuitry underlying motion detection, the problem we addressed in the 1980s was the functional principle underlying an EMD at the most elementary level. Taking advantage of the micro-optical techniques we had developed for analysing and stimulating the fly retina at the single photoreceptor level [23,24], we adopted a fairly direct approach that consisted in stimulating a single EMD in the eye of the living insect. Microelectrode recordings were performed on a collator neuron (H1) in the lobula plate of the housefly while applying optical stimuli to single identified photoreceptor cells on the retinal mosaic [25,83].
Pinpoint stimulation was applied to two photoreceptors (diameter 1 µm) of a single ommatidium by means of a special instrument (a hybrid between a microscope and a telescope developed at our laboratory), in which the main objective lens was the facet lens itself (diameter 25 µm, focal length 50 µm). This optical instrument [33] served (i) to select a given facet lenslet, (ii) to locate the group of 7 receptor distal endings (R1 through R7) on its focal plane, (iii) to select 2 of these 7 receptors, namely R1 and R6 (which are oriented horizontally), (iv) to illuminate these two receptors successively with 1 µm light spots. This procedure simulated a local motion (‘‘apparent motion’’) occurring in the animal's visual field––but presented here to well identified receptor cells. H1 responded to this microstimulation (applied to only 2 out of the 48,000 photoreceptor cells of the visual system) by a conspicuous increase in the impulse (‘spike’) frequency, as long as the phase relationship between the two stimuli mimicked a movement occurring in the preferred direction. By contrast, when the sequence mimicked a movement in the opposite, null direction, H1 showed a marked decrease in its resting discharge or did not respond at all [25,83]. It did not respond either when the same sequence was presented to a pair of receptors (such as R1 and R2, or R1 and R3) that defined a vertical direction on the eye (in agreement with the fact that the H1 neuron is not sensitive to vertical motion [42]), or when one of the two selected
photoreceptors was the central cell R7 (confirming that this cell does not participate in motion detection). Many experiments of this kind were carried out on identified cells, in which carefully planned sequences of light steps and/or pulses were applied to the two receptors. In this way, we established an EMD block diagram and characterized each block's dynamics and nonlinearity [25,33]. The overall scheme we arrived at using this single neuron recording procedure combined with single cell stimulation relies on lateral facilitation of a high-pass filtered signal [27,33]. This scheme departs largely from the popular Hassenstein–Reichardt correlation model [11,81,82]––which was originally derived from experiments based on large-field stimulations of unidentified cells and never experimentally confirmed with single cell stimulation. Our system analysis of an EMD at the elementary level does not unveil the details of the underlying neural circuit. But this functional level is the level par excellence at which a system can be understood, described, and transcribed in another, man-made technology. Inspired by our electrophysiological results, we designed a miniature electronic EMD whose signal processing scheme approximates that of the biological EMD [7,8,30]. All the robots described below (Fig. 2) were equipped with these fly-based electronic velocity sensors. A very similar EMD principle was discovered independently 10 years later by C. Koch's group at CALTECH, who spread it under the name ‘‘facilitate and sample velocity sensor’’ [50] and patented a smart analog VLSI chip based on this principle, without any reference to a possible inspiration from the fly [89].
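For readers who prefer an algorithmic view, the following is a minimal, discrete-time sketch of a facilitation-based EMD of the kind just described: each photoreceptor signal is high-pass filtered, the transient from the first receptor triggers a decaying facilitation trace, and the output samples that trace when the second receptor's transient arrives. All constants and names are illustrative assumptions, not the actual analog circuit of [7,8,30].

import numpy as np

def emd_output(p1, p2, dt=1e-3, tau_hp=0.005, tau_fac=0.1):
    """Toy 'facilitate-and-sample' elementary motion detector (illustrative).
    p1, p2: light intensities sampled by two neighbouring photoreceptors.
    The output grows as the p1 -> p2 delay shrinks, i.e. with image speed in
    the preferred direction; the reverse sequence gives almost no response."""
    def high_pass(x, tau):
        # first-order high-pass filter: keeps transients, removes steady light
        y = np.zeros_like(x)
        a = tau / (tau + dt)
        for i in range(1, len(x)):
            y[i] = a * (y[i - 1] + x[i] - x[i - 1])
        return y

    t1 = np.maximum(high_pass(p1, tau_hp), 0.0)   # ON transient, receptor 1
    t2 = np.maximum(high_pass(p2, tau_hp), 0.0)   # ON transient, receptor 2
    fac = 0.0
    out = np.zeros_like(t1)
    for i in range(len(t1)):
        fac = max(fac * np.exp(-dt / tau_fac), t1[i])  # decaying facilitation
        out[i] = fac * t2[i]                           # sampled by receptor 2
    return out

# Apparent motion in the preferred direction: receptor 1 steps on at 50 ms,
# receptor 2 at 80 ms; swapping the two arrays mimics the null direction.
t = np.arange(0.0, 0.2, 1e-3)
response = emd_output((t > 0.05).astype(float), (t > 0.08).astype(float))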
3. Fly-inspired visually-guided terrestrial robots

By the end of the 1980s, we had designed a small robot equipped with a planar compound eye and a fly-
Fig. 2. Three of the visually-guided robots designed and constructed at the laboratory on the basis of our biological findings on sensory-motor control in flies. (a) The robofly (in French: ‘‘le robot-mouche’’) has a visual system composed of a compound eye (visible at half-height) for obstacle avoidance, and a target seeker (visible on top) for detecting the light source serving as a goal. This 12-kg three-wheeled robot, which was completed in 1991 [8], is fully autonomous as regards its processing and power resources [31,32,80]. Despite its small number (116) of pixels, this artificial creature can avoid obstacles at a relatively high speed (50 cm/s, i.e., 1.7 body diameters per second) by reacting to the optic flow generated by its own locomotion. It carries a set of 114 fly-based electronic velocity sensors (visible immediately above the compound eye) (from [32]). (b) FANIA is an electrically-powered rotorcraft equipped with a 20-pixel frontal–ventral motion sensing visual system which enables it to jump over obstacles [photo: Goëtgelück]. This self-sustained 0.84-kg aerial creature is tethered to a light pantographic whirling arm that allows only three degrees of freedom: forward and upward motion and pitch. It rotates around a central pole at speeds up to 6 m/s and ascends or descends depending on what it sees (from [74]). (c) OSCAR is a tethered micro-air vehicle (MAV) with a two-pixel visual system that relies not only on visual motion detection but also on a microscanning process inspired by that recently found to occur in flying flies [29]. In this miniature twin-engine robot (mass 100 g), which is tethered to a 2 m-long thin wire secured to the ceiling, vision and inertial sensing are combined so that it can detect and fixate a target (a dark edge or a bar). If this target is set in horizontal motion, OSCAR is able to track it smoothly at speeds of up to 30°/s, with no net velocity slip [101,102]. OSCAR carries out all its (analog) signal processing on-board and features an endurance of 1 h. It locks visually onto its target much more accurately than one might expect, given the rather coarse spatial sampling of its eye (from [102]).
inspired EMD array [80]. The latter was used to sense the optic flow field generated in the horizontal plane by the robot's own locomotion among stationary objects. The 12-kg robofly (in French: ‘‘le robot-mouche’’) we arrived at in 1991 (Fig. 2a) was the first OF-based, completely autonomous robot (i.e., without any umbilical whatsoever) able to avoid contrasting obstacles encountered on its way, while traveling to its target at a relatively high speed (50 cm/s) [8,26,31,32]. The target was an electric light which the robot detected by means of an additional dorsal eye (visible on the top of Fig. 2a), whose information was fused with that of the obstacle-avoiding compound eye (visible at half-height in Fig. 2a). The robofly is based not only on our neurophysiological findings on the logic underlying the fly's EMD (Section 2) but also on ethological findings on the flight behavior of flies. The most common flight trajectories have been found to consist of straight flight sequences interspersed with rapid turns at a high angular speed termed saccades [15,43,90,97,105,111]. In the mid-1980s, we concluded that these jerky, zigzag flight paths of flies might result from a clever motor strategy subserving vision. More specifically, the straight flight sequences performed at speed V near an obstacle located at a distance D might serve to generate a purely translational optic flow Ω, the processing of which could be dealt with by a brain only the size of a pinhead. In line with Gibson [34,35], many workers over the past 50 years have noted the remarkable simplicity of the translational optic flow Ω, which depends only on V, D and ϑ, the azimuth of the obstacle with respect to the heading direction, e.g. [53,56,72,112]:

Ω = (V sin ϑ)/D    (1)
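As a quick numerical illustration of Eq. (1) (a sketch of ours; the function name and values are not part of the robot's circuitry):

import math

def translational_optic_flow(v, distance, azimuth_deg):
    """Angular speed (rad/s) of a contrasting feature for an observer
    translating at speed v (m/s), Eq. (1): omega = v * sin(theta) / D."""
    theta = math.radians(azimuth_deg)
    return v * math.sin(theta) / distance

# At 0.5 m/s, a feature 1 m away at 90 deg slips at 0.5 rad/s across the
# retina; the same feature 2 m away slips at only 0.25 rad/s, and a feature
# near the heading direction (10 deg) at barely 0.09 rad/s.
print(translational_optic_flow(0.5, 1.0, 90.0))
print(translational_optic_flow(0.5, 2.0, 90.0))
print(translational_optic_flow(0.5, 1.0, 10.0))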
This very general formula for the translational optic flow holds for any observer (fly, human, robot, etc.) translating at speed V and expresses the two commonsense impressions that (i) the image of close objects (D small in Eq. (1)) seems to ‘‘move faster’’ across our retina than the image of more distant objects, (ii) objects to the side move faster across our retina than objects located in the vicinity of our heading direction (ϑ small in Eq. (1)). Our robofly proceeds by performing a sequence of purely translational steps ΔL (length 10 cm and duration 200 ms) at a speed which is maintained at V = 50 cm/s by an electronic speed controller. During each single step, the robot collects the self-generated OF from the contrasting environment by means of its electronic fly-based EMDs. The robot was tested wandering about autonomously in an arena in which obstacles (vertical poles) were arranged at random. Its translational steps alternate with rapid steering locks at an angle that depends on the bearings of the obstacles detected. Each of these locks defines a new heading direction (the eye turns as one with the wheels). Vision is
inhibited during rotation by a process akin to ‘‘saccadic suppression’’, a process which has long been discussed for the vertebrate visual system and has also been documented in an insect [113]. By the end of each translational step, the whole EMD-array of the robot's eye has drawn up a map of the local environment. This map is expressed in polar coordinates in the robot's eye reference frame (which is also the body reference frame since the eye turns integral with the wheels). The next course to be steered is immediately given, generating an eye + body saccade in the new direction, and a new fleeting map of obstacles is formed during the next translation step, completely obliterating the previous one. Updating the map thus requires only a short-term (200 ms) memory, which is reset at the start of each new step. No stops and no changes in heading occur at the end of an elementary step ΔL if no obstacles have been detected by the EMD-array. The elementary translational steps are seamlessly connected due to the high speed of the parallel, analog mode of processing used. The result is a rather jerky, ‘‘fly-like’’ trajectory, which is very reminiscent of the flight trajectories that were recorded quite recently in studies on real flies [19,97]. The robot skirts any obstacles encountered before reaching the target, with the advantage that it adapts ‘‘naturally’’ to novel or changing environments, including moving targets. This contrasts with what occurs in the case of many robots of the traditional ‘‘sense-model-plan-act’’ variety, based on high-level Artificial Intelligence, which have to devote large computational resources to drawing up an extensive map of the environment at rest, planning a safe path, and eventually making a move forward. The robofly must actually move in order to be able to see. And it is during its actual movement at 50 cm/s that it establishes a short-lived, running representation of nearby space in a retinocentric frame of reference. No ‘‘path planning’’ phase is required at the outset to define the route to be taken through a set of obstacles: the only thing which is ‘‘planned’’ is the direction of the next step. The design of the optical architecture of the robot's eye was dictated by the specific locomotor strategy adopted, which involved sensing the self-generated translational OF [31,80]. The robofly actually views the world through a horizontal ring of facets––corresponding approximately to those present in a horizontal slice through the housefly's head. Any two neighboring facets drive an EMD, and a total of 114 EMDs scan the optic flow in the azimuthal plane. In addition, we provided this compound eye with a resolution gradient such that the interommatidial angle Δφ increases according to a sine law as a function of the eccentricity. The idea of incorporating a sine gradient [8,31,80] was based on the need to ‘‘compensate for’’ the sine law inherent to the translational optic flow field (Eq. (1)). Once embedded in the anatomical structure of the eye, this resolution
gradient ensures that any contrasting feature will be detected with certainty if, during a robot's translation by ΔL, it enters the robot's ‘‘circle of vision’’, the radius Rv of which increases linearly with ΔL:

Rv = k ΔL    (2)
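The role of the sine-law resolution gradient can be checked numerically: if the local inter-receptor angle follows Δφ(ϑ) = Δφ0 sin ϑ, the time a feature at distance D takes to cross one EMD's pair of visual axes becomes independent of its azimuth, so that detection within a step of length ΔL depends on D alone. The constants below are illustrative, not those of the robofly.

import math

def crossing_time(distance, azimuth_deg, speed, dphi0=0.05):
    """Time for a contrasting feature to cross one inter-receptor angle when
    the sampling grid follows a sine law, dphi = dphi0 * sin(theta).
    The optic flow is V*sin(theta)/D (Eq. (1)), so theta cancels out and the
    crossing time reduces to dphi0 * D / V, whatever the azimuth."""
    theta = math.radians(azimuth_deg)
    dphi = dphi0 * math.sin(theta)              # local inter-receptor angle (rad)
    optic_flow = speed * math.sin(theta) / distance
    return dphi / optic_flow

# Same distance and speed, very different azimuths, identical crossing times:
print(crossing_time(1.0, 30.0, 0.5))   # 0.1 s
print(crossing_time(1.0, 80.0, 0.5))   # 0.1 s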
Any contrasting feature which is more distant than Rv will be automatically ‘‘filtered out’’ (i.e., not detected) because it cannot possibly cross the two visual axes of an EMD during the ΔL translational step. In spite of the added machine work required to construct a compound eye with nonuniform sampling abilities, we had no hesitation about using this approach because this is what occurs in many natural visual systems. Vivid proof of the existence of a front-to-back sampling gradient was obtained in the fly by aiming a telescope at the eye of the live animal under antidromic illumination [23]. A forward-pointing acute zone has long been known to exist in many insects' eyes and was ‘‘explained’’ as an adaptation to forward flight through a textured environment [55]. Obviously, a similar nonuniform sampling is known to exist in the human eye as well, from the fovea to the periphery. In 1990, van de Grind [39] established that the cortical magnification factor M (which mainly reflects the retinal ganglion cell density and hence the retinal resolution) fits the inverse sine of eccentricity relatively well, as if it were compensating for the sine law of the translational optic flow, Eq. (1). One of the main advantages of having a sine gradient in the retinal sampling zone (in both humans and the robofly) is that it makes it possible to design the underlying EMDs uniformly, i.e., each having the same time constants as its neighbors [31]. This feature greatly simplifies the engineering of a robot, in the same way as it may have simplified the ontogenetic instruction of neural circuits in human and animal visual systems. The main reason for choosing a brain-like, parallel, analog, asynchronous mode of signal processing on-board the robofly (Fig. 2a) was the expectation that in practising nature's way of doing things, and in fighting on the same ground, we would learn more about the advantages, constraints and adaptability of this mode of processing, which has enabled winged insects to survive in their highly complex natural environment for 100 million years. We therefore decided to do without any von Neumann-type architecture on-board this early robot, and to restrict the use of computers to the initial simulation phases in the projects. Fig. 3 shows the odd routing pattern connecting the thousands of analog devices used to blend together the input signals from the robofly's compound eye and those arising from the dorsal target-seeking eye, eventually delivering a single analog output: the steering angle. The robofly is based on biological data collected from a so-called lowly creature, the fly. But by reconstructing
Fig. 3. Routing diagram of one face of the printed circuit board (PCB) that integrates information about the obstacles and the target on-board the robofly (Fig. 2a). This six-layered PCB has 210 parallel inputs (114 EMD inputs + 96 inputs from the target seeker) and a single, meaningful output (near the centre of the pattern). This output gives (in Volts) the next steering angle required to reach the target while avoiding all obstacles. This side and the reverse side of the PCB are both covered with thousands of analog devices of only four kinds (resistors, capacitors, diodes, and operational amplifiers), some of which can be seen in Fig. 1 (inset). The mosaic layout of this purely analog circuit contrasts strikingly with that of a von Neumann computer and is reminiscent of the neural architecture of visual areas in animal brains. The rose-window-like pattern results from the numerous repeat units and their retinotopic projections (from [31]).
part of its visual system, we may have captured, formulated and reproduced an essential feature of visuomotor guidance in animals and humans. Humans themselves are known to navigate largely on the basis of optic flow, e.g. [57,106]. The most unique feature of the robofly is not only its complete (computational as well as energetic) autonomy but also the fact that it can cope with unknown and novel environmental situations without any need for maps or data about the location of the obstacles and without any need for a learning phase of any kind. Simulation studies have shown that a robot of this kind can not only drive round obstacles at a high speed but also automatically adjust its speed to the density of the obstacles present in the environment [62]. This ability emerges automatically if, instead of imposing on the robot constant ΔL translation bouts as described above, one imposes constant Δt translation times. During any one Δt, the robot will cover a distance ΔL proportional to its current speed V:

ΔL = V Δt    (3)
From Eqs. (2) and (3), one obtains:

Rv = k ΔL = k V Δt, i.e., Rv = k′ V

The radius of vision Rv therefore increases in proportion to the speed, which makes it possible for the robot to see (and therefore to avoid) obstacles within a range that increases suitably with the speed of travel. This is indeed very suitable behavior for robots, animals, and humans. Simulations based on this principle have shown the robot making a detour around a dense forest (represented by a large set of trees forming the obstacles), then automatically accelerating in a clearing and automatically braking before traversing the second, less dense forest [62].
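A two-line sketch makes the behavioral consequence concrete (the constant k is illustrative; constant-Δt steps are assumed, as in the simulations of [62]):

def radius_of_vision(speed, step_time=0.2, k=2.0):
    """Rv = k * dL = k * V * dt: with steps of constant duration, the range at
    which contrasting obstacles are reliably detected grows with the speed V."""
    return k * speed * step_time

# Creeping through a cluttered clearing vs. cruising in open space:
print(radius_of_vision(0.2))   # 0.08 m look-ahead at 0.2 m/s
print(radius_of_vision(1.0))   # 0.40 m look-ahead at 1.0 m/s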
4. Fly-inspired visually-guided aerial robots

Further work was aimed at elucidating how natural flying creatures such as insects and birds have solved the control problems involved in performing visually guided short range navigation. Simulations showed that the same motion detection principles can be used to guide a flying agent, have it follow a rough terrain [67], and land automatically [73]. The principle was validated on-board FANIA, an experimental miniature helicopter system having a single rotor with a variable pitch (Fig. 2b). This 0.8-kg tethered rotorcraft has only 3 degrees of freedom. Mounted at the tip of a light, counterbalanced whirling arm, the robot lifts itself by increasing the rotor collective pitch, and pitches forward by orienting a servo-vane located in the propeller wake. On its circular track, the robot reaches horizontal speeds of up to 6 m/s and climbing speeds of up to 2 m/s. It is equipped with an inertial system and a frontal-ventral eye with a resolution of only 20 pixels and their corresponding 19 EMDs. Visually controlled terrain avoidance is initiated by increasing the collective pitch in discrete steps as a function of the fused signals transmitted from the EMD array. Tests in the circular arena showed FANIA jumping over contrasting obstacles [74]. Upon formalizing the results of observations made by Kennedy 50 years ago on the behavior of migrating locusts [51], we came up with a basic scheme for an optic flow based autopilot, called OCTAVE (Optic flow Control sysTem for Aerospace VEhicles). We then built a 100-gram helicopter-demonstrator equipped with this autopilot. Tested in its circular arena, this miniature rotorcraft was able to perform challenging manoeuvres such as terrain following at various speeds from 1 to 3 m/s [84], automatic take-off and automatic landing, while reacting smartly to wind perturbations [86]. In this system, a ventral EMD [87] continuously measures the OF in the downward direction and compares it to an OF setpoint. The error signal controls the robot's lift––and
hence its height via the heave dynamics––so as to maintain the perceived OF at a constant reference value. This occurs whatever the robot's groundspeed V, whatever disturbances (such as wind) affect that speed, and whatever disturbances (such as a terrain with a gradually increasing slope) affect the robot's local height (in Eq. (1), distance D becomes the local height H over the terrain). As a consequence of this optic flow regulation, the OCTAVE autopilot automatically brings the robot to an altitude which increases suitably with the flying speed. Two noteworthy results were obtained in these studies [84–86]:
1. Risky manoeuvres such as automatic takeoff, ground collision avoidance, terrain following, suitable wind reactions and automatic landing are all performed as a result of one and the same feedback control loop.
2. These challenging manoeuvres are all performed without explicit knowledge of absolute altitude, local height over terrain, groundspeed, airspeed, descent speed and windspeed.

This bio-inspired autopilot therefore differs strikingly from classical man-designed autopilots, which need a large number of (costly and bulky) metric sensors (e.g., a radio-altimeter, a Doppler radar, a laser range finder, a GPS, a Pitot tube) to achieve aircraft altitude hold or speed hold, as well as bulky off-board instrumentation (such as Instrument Landing Systems, ILS) to be able to perform automatic landing. OCTAVE's objective is not to provide for altitude hold or speed hold. The OCTAVE autopilot automatically adapts the vehicle's height at any time to its groundspeed, so as to hug the terrain below without ever crashing. And it does so safely at any ground speed, raising the robot in proportion to its current groundspeed, whatever (on-board or off-board) factors affect the ground speed. The result is measured in terms of behavior, and not in terms of variables (speed, height, etc.) measured on-board. By performing electrophysiological recordings on flying flies, and combining the results with micro-optical observations of the eye, we recently discovered a retinal microscanning device in the fly's compound eye [29], the function of which we attempted to understand, again using a biorobotic approach that included simulations followed by robot constructions. This puzzling process was actually the source of inspiration for two major biorobotic projects. Both projects are based on the hypothesis that the microscanning process in flies operates in connection with motion detection. The first project resulted in a 0.7-kg wheeled Cyclopean robot which is able to move about in a square arena under its own visual control, avoiding the four contrasting walls despite the very low resolution of its eye (only 24 pixels) [68,69]. This surprising ability is the result of a symmetrical anterograde retinal microscanning process, which assists the robot in detecting any
obstacles located close to the heading direction (i.e., the frontal ‘pole’ of the optic flow field). Indeed, the periodic microscanning amounts to periodically adding a known amount of rotational optic flow Ωr to the very small translational optic flow Ωt generated by frontal obstacles (expressed by small values of ϑ in Eq. (1)). Since the amount of added OF is known on-board, its value can be subsequently subtracted out from the overall measured OF. The result then corresponds to the purely translational OF––i.e., the component of OF which depends on the distance scaled by the speed of travel (Eq. (1)). The main advantage of the microscanning process is that it improves the detection of small translational OF that would otherwise have remained subliminal. The second project that we developed on the basis of the fly's retinal microscanner ended up with a novel optronic sensor, called OSCAR (Optical Scanning sensor for the Control of Aerial Robots) [100], and a small aerial robot, the OSCAR robot [101], equipped with this sensor. The OSCAR robot is attached via a swivel to a thin, 2-m long wire secured to the ceiling of the laboratory and is free to adjust its yaw by driving its two propellers differentially (Fig. 2c). This tiny (100-g) robot is able to lock visually onto a nearby ‘‘target’’ (a dark edge or a bar) via its microscanning visual system and to track this target at angular speeds of up to 30°/s––a value similar to the maximum tracking speed of the human eye. Target fixation and tracking occur regardless of the distance (up to 2.5 m) and contrast (down to 10%) of the target and in spite of major disturbances such as pendulum oscillations, ground effects, (gentle) taps and wind gusts. The OSCAR sensor was shown to be endowed with hyperacuity, as it can locate an edge with a resolution of 0.1 degrees, which is 40 times finer than the interreceptor angle Δφ [102]. Moreover, the relatively short reaction time of this robot (closed loop time constant = 0.15 s) is due to sensory fusion. Inspired by the fly's thoracic halteres that have long been known to act as gyroscopes, we equipped the OSCAR robot with an additional rate control loop based on a miniature rate gyro. The interplay of these two sensory modalities (visual and inertial), combined in nested control loops, enhances both the stability and the dynamic performances of the yaw fixation and pursuit system [85,102]. In contrast with the other three robots (Robofly, FANIA and OCTAVE), which all exploit the translational optic flow generated by their own motion, the last two robots, equipped with a microscanning retina, owe their visual performances to exploiting a purely rotational optic flow.
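Going back to the OCTAVE principle described above, a minimal discrete-time sketch of a ventral optic-flow regulator is given below; the setpoint, gain and crude heave dynamics are illustrative assumptions, not the parameters of the actual autopilot [84–86].

def octave_step(height, ground_speed, of_setpoint=1.0, gain=0.5, dt=0.05):
    """One control step of a ventral optic-flow regulator (illustrative).
    Ventral OF = V / H (Eq. (1) with theta = 90 deg). When the measured OF
    exceeds the setpoint the craft is too low for its speed and climbs;
    when it drops below, the craft descends."""
    of_measured = ground_speed / max(height, 1e-3)   # perceived optic flow (rad/s)
    climb_rate = gain * (of_measured - of_setpoint)  # error sets the vertical speed
    return height + climb_rate * dt

h = 1.0
for _ in range(400):
    h = octave_step(h, ground_speed=2.0)
print(round(h, 2))   # converges towards V / setpoint = 2 m, with no height sensor

Flying faster over the same terrain makes this regulator settle at a proportionally greater height, without any explicit measurement of height, groundspeed or airspeed, which is the behavioral signature reported for OCTAVE.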
5. Discussion: from animal physiology to robot technology and back

Since our robots were born and raised at a laboratory where we also perform animal experiments, they have
benefited from the latest biological knowledge. In return, these physical models were called upon to improve our knowledge of biology. The more closely a robot mimics specific types of behavior, the activity of specific neurons, specific neural architectures and signal processing modes, the more likely it is that it will provide neurophysiologists [108] and/or neuroethologists [107] with useful feedback information. The following are some examples of the biological ‘‘returns’’ our biorobotic approach has yielded:
1. Given that in most animals obstacle avoidance behavior relies on the shifting of the image of their surroundings across the retina as they move around, monitoring this ‘‘changing optic array’’ or flow field does not necessarily imply calculating the distance to obstacles. While our first terrestrial robot, the robofly (Fig. 2a), did have access to the metric distance D to obstacles (by measuring Ω, ϑ and V in Eq. (1)), the flying robot OCTAVE goes one step further by being able to fly around at relatively high groundspeeds (0–3 m/s) and follow a shallow terrain without any knowledge of the actual distance to the terrain. It features a visuomotor control loop whose sensor is an OF sensor, and which acts upon the lift so as to maintain a reference OF at any time, whatever the current groundspeed––and whatever the disturbances affecting the groundspeed. The OCTAVE test-demonstrator robot showed that this OF regulator is reliable and powerful. It allows the robot to take-off automatically, avoid ground collision automatically, follow terrain automatically, react suitably to headwinds and downwinds and land automatically and safely, without any knowledge of local height over terrain, absolute altitude, ground speed, air speed, descent speed and wind speed [84–86]. The OCTAVE autopilot is so simple and yet so powerful that it provides an appealing working hypothesis for the processing at work in the brain of visually guided animals, particularly flying animals. With hindsight, this suggests that while range perception is an acclaimed task of spatial vision [52,58,65,91,93], this task may not be required for short range navigation in animals.
2. The small Cyclopean robot described in Section 4, which uses a retinal scanner emulating the one recently discovered in the fly compound eye, is able to recover the small translational OF associated with objects close to the heading direction. With hindsight, this suggests that the fly uses its retinal scanner for the same task, alleviating the problem caused by the frontal pole of the optic flow field and the problem caused by the relatively coarse spatial sampling of the eye (Δφ ≈ 1°).
3. The OSCAR robot, which is also based on the fly's microscanner, exhibits behaviors reminiscent of both the hoverfly's ability to clamp its rotational velocity at zero while hovering near a tree, in wait for passing females [15,16], and the male housefly's ability
to track the female smoothly during sexual pursuit [111]. We therefore propose that these observed fly behaviors result, at least in part, from the very presence of the microscanner in the compound eye, which is known to come into play only during locomotion, leaving the retina perfectly stationary at rest [29].

These conclusions can be considered as ‘‘biorobot-inspired working hypotheses’’. They suggest new physiological and behavioral experiments which should aim at disproving these concepts. And their results are likely to lead to the construction of new test-demonstrator robots. Some authors tend to feel that computer simulations are all we need [76], and that taking the trouble to construct a real physical robot is a disproportionately difficult means of checking the soundness of ideas gleaned from nature. There is everything to be gained, however, from completing computer simulations with real physical simulations, for the following reasons:
• Some theories and simulations that look fine on paper (or on the computer screen) may no longer stand under real-world conditions, or may lack robustness to such an extent that they would be useless on-board an animal or a robot.
• Robots allow supposedly understood biological principles to be properly tested in realistically noisy, unstructured and harsh environments.
• At a given level of complexity, it can actually be easier and cheaper to use a real robot in a real environment than to attempt to simulate both the robot and the vagaries of the environment, see also [10].
• Working with a real robot, with its dynamics, nonlinearities, noisy circuits and other imperfections, can help one to decipher the clever tricks the living animal has come up with. The ready solutions tinkered together by Nature can be fathoms apart from what 21st-century scholars were expecting, see also [71,78,79].

Our ‘re-construction’ approach has not only provided numerous insights into the subtle interactions between visual and motor systems in animals and machines, but has also inspired a number of laboratory experiments. These early efforts have spawned a new field that could be called ‘‘Biorobot Assisted Neuroscience’’ (BAN), in which artificial creatures equipped with sensors, actuators and control systems that faithfully mimic physiological mechanisms can help neurobiologists and neuroethologists to identify and investigate worthwhile issues. This is a useful means of obtaining interesting ideas and concepts from nature, checking the soundness of biological hypotheses and raising novel questions about the whys and wherefores of the (often highly enigmatic) sensors and neural processors mounted on-
board natural creatures. Research on these lines therefore leads to a real state of synergy between two disciplines which might seem a priori to be fathoms apart: the study of neurobiological mechanisms, and the production of artificial sighted robots. The fact that both natural and artificial creatures endowed with vision have to cope with similar problems in the complex world they both inhabit, and the idea that the difficult problems they have to solve may have a limited number of solutions [98] have led to adopting a common approach to natural and artificial creatures. We are thus tending towards a truly multidisciplinary approach in the study of sensory perception and sensory-motor control systems.
6. Conclusion

Here we have described the terrestrial and aerial robots we have developed on the basis of our biological findings. The architecture of these robots is akin to that of biological systems in spirit, and so is the parallel and analog mode of signal processing with which they operate. These visually guided robots make use of self-generated optic flow to carry out the humble task of detecting, locating, avoiding and tracking environmental features. This approach fits a more general framework called ‘‘active perception’’ [4] since the ad hoc movement of a motion sensor (caused by locomotion in the robofly and the OCTAVE robot, or by retinal microscanning in the OSCAR robot) is used here to constrain the sensory inputs so as to reduce the processing burden involved in perceptual tasks [1,5]. The biorobotic approach that we initiated in 1985 is a transdisciplinary approach which is fairly demanding in terms of time, money, and human resources. This approach has turned out, however, to be most rewarding because it can kill two flies with one stone:
• It can be used to implement a basic principle borrowed from nature and check its soundness, robustness and scaling on a real physical machine in a real physical environment. This method can lead to designing novel devices and machines, particularly in the field of cheap sensory-motor control systems for autonomous vehicles and micro-vehicles, which can thus benefit from the million-century long experience of biological evolution.
• It yields valuable feedback information in the fields of neurophysiology and neuroethology, as it urges us to look at sensory-motor systems from a new angle, sheds new light here and there, challenges widely accepted facts, suggests new experiments to be carried out on the animal, and raises new biological questions which might not have been thought of otherwise (because they were too subtle) or which may simply
not have been addressed (because they were seemingly too naive).

The thorough knowledge gained over the past five decades on insects' visuomotor abilities and the neuronal substrates involved has provided us with a rich source of inspiration for designing tomorrow's self-guided vehicles and micro-vehicles, which will be able to cope with unforeseen events on the ground, under water, in the air, in space, on other planets, and inside the human body. Insects' neural circuits, which can be analyzed at the level of single, uniquely identifiable neurons, can teach us some tricks for designing the nervous system of agile autonomous robots and micro-robots. It is time for us to realize that the millions of insect species constitute a gigantic untapped reservoir of ideas for sophisticated micro-sensors, micro-actuators and smart control systems. This means that insect neurophysiology and neuroethology will certainly feature among the most promising subsections of Information Science and Technology in the new millennium.
Acknowledgements

I am very grateful to numerous colleagues who have worked at the Laboratory over the years, with whom I have had many stimulating discussions: C. Blanes, J.M. Pichon, N. Martin, F. Mura, T. Netter, S. Viollet, F. Ruffier, S. Amic and M. Boyron. The English manuscript was revised by J. Blanc. This research has been supported by CNRS (Life Sciences, Engineering Sciences, Information Science and Technology, and Cognitive Science and Microsystem Programs), and by various EC contracts (ESPRIT, TMR and IST-1999-29043).
References

[1] Y. Aloimonos, Active Perception, Lawrence Erlbaum, Hillsdale, USA, 1993. [2] R. Arkin, Behavior-based Robotics, MIT Press, Cambridge, USA, 1998. [3] J. Ayers, J.L. Davis, A. Rudolph, Neurotechnology for Biomimetic Robots, MIT Press, Cambridge, USA, 2002. [4] R. Bajcsy, Active perception versus passive perception, in: Proc. 3rd IEEE Workshop on Computer Vision: Representation and Control, Bellaire, MI, USA, 1985, pp. 55–59. [5] D. Ballard, Animate vision, Artificial Intelligence 48 (1991) 57–86. [6] H.B. Barlow, J.P. Frisby, A. Horridge, M. Jeeves (Eds.), Natural and Artificial Low-level Seeing Systems, Clarendon Press, Oxford, 1993. [7] C. Blanes, Appareil visuel élémentaire pour la navigation à vue d'un robot mobile autonome, DEA thesis (Neurosciences), Aix-Marseille Univ., 1986. [8] C. Blanes, Guidage visuel d'un robot mobile autonome d'inspiration bionique, Dr thesis, National Polytechnic Institute, Grenoble, 1991.
[9] V. Braitenberg, Vehicles, MIT Press, Cambridge, USA, 1984. [10] R.A. Brooks, Cambrian Intelligence, MIT Press, Cambridge, USA, 1999. [11] E. Buchner, Behavioral analysis of spatial vision in insects, in: M. Ali (Ed.), Photoreception and Vision in Invertebrates, Plenum, New York, 1984, pp. 561–621. [12] M. Burrows, The Neurobiology of an Insect Brain, Oxford University Press, Oxford, 1996. [13] C. Chang, P. Gaudiano, Biomimetic robotics, Robotics and Autonomous Systems (Special Issue) 30 (2000). [14] D. Cliff, P. Husbands, J.A. Meyer, S.W. Wilson, From animals to animats III, in: Proc. Intern. Conf. on Simulation of Adaptive Behavior, MIT Press, Cambridge, 1994. [15] T. Collett, M. Land, Visual control of flight behaviour in the hoverfly Syritta pipiens L., J. Comp. Physiol. A 99 (1975) 1–66. [16] T. Collett, H.O. Nalbach, H. Wagner, Visual stabilisation in arthropods, in: F.A. Miles, J. Wallman (Eds.), Visual Motion and its Role in the Stabilization of Gaze, Elsevier, Amsterdam, 1993, pp. 239–263. [17] T.S. Collett, Peering: a locust behaviour pattern for obtaining motion parallax information, J. Exp. Biol. 76 (1978) 237–241. [18] D. Coombs, K. Roberts, Bee-Bot: Using the peripheral optic flow to avoid obstacles, in: Intelligent Robots and Computer Vision XI, SPIE vol. 1835, Bellingham, USA, 1992, pp. 714–725. [19] M. Dickinson, L. Tammero, M. Tarsino, Sensory fusion in free-flight search behavior of fruit flies, in: J. Ayers, J. Davis, A. Rudolph (Eds.), Neurotechnology for Biomimetic Robots, MIT Press, Cambridge, USA, 2002, pp. 573–592. [20] R. Douglas, M. Mahowald, C. Mead, Neuromorphic engineering, Annu. Rev. Neurosci. 18 (1995) 255–281. [21] J.K. Douglass, N.J. Strausfeld, Anatomical organization of retinotopic motion-sensitive pathways in the optic lobes of flies, Microsc. Res. Technol. 62 (2003) 132–150. [22] A.P. Duchon, W.H. Warren, Robot navigation from a Gibsonian viewpoint, in: IEEE Intern. Conf. on Syst., Man and Cybernetics, San Antonio, IEEE Press, Los Alamitos, USA, 1994, pp. 2272–2277. [23] N. Franceschini, Sampling of the visual environment by the compound eye of the fly: fundamentals and applications, in: A. Snyder, R. Menzel (Eds.), Photoreceptor Optics, Springer, Berlin, 1975, pp. 98–125 (Chapter 17). [24] N. Franceschini, Chromatic organisation and sexual dimorphism of the fly retinal mosaic, in: A. Borsellino, L. Cervetto (Eds.), Photoreceptors, Plenum, New York, 1984, pp. 319–350. [25] N. Franceschini, Early processing of color and motion in a mosaic visual system, Neurosci. Res. (Suppl. 2) (1985) 17–49. [26] N. Franceschini, Engineering applications of small brains, Future Electron Devices Journal (Suppl. 7) (1996) 38–52. [27] N. Franceschini, Sequence-discriminating neural network in the eye of the fly, in: F.H.K. Eeckman (Ed.), Analysis and Modeling of Neural Systems, Kluwer-Academic, Norwell, USA, 1992, pp. 142–150. [28] N. Franceschini, Combined optical, neuroanatomical, electrophysiological and behavioural studies on signal processing in the fly compound eye, in: C. Taddei-Ferretti (Ed.), Biocybernetics of Vision: Integrative Mechanisms and Cognitive Processes, World Scientific, London, 1998, pp. 341–361. [29] N. Franceschini, R. Chagneux, Repetitive scanning in the fly compound eye, in: N. Elsner, H. Wässle (Eds.), Göttingen Neurobiology Rep., Georg Thieme, Stuttgart, 1997, p. 279. [30] N. Franceschini, C. Blanes, L. Oufar, Passive noncontact velocity sensor, Dossier Technique ANVAR/DVAR No. 51,549, Paris, 1986 (in French). [31] N. Franceschini, J.M. Pichon, C. Blanes, From insect vision to robot vision, Philos. Trans. Roy. Soc. Lond. B 337 (1992) 283–294.
[32] N. Franceschini, J.M. Pichon, C. Blanes, Bionics of visuomotor control, in: T. Gomi (Ed.), Evolutionary Robotics: from Intelligent Robots to Artificial Life, AAAI Books, Ottawa, Canada, 1997, pp. 49–67. [33] N. Franceschini, A. Riehle, A. Le Nestour, Directionally selective motion detection by insect neurons, in: D.G. Stavenga, R.C. Hardie (Eds.), Facets of Vision, Springer, Berlin, 1989, pp. 360–390 (Chapter 17). [34] J.J. Gibson, Perception of the Visual World, Houghton, Mifflin, Boston, 1950. [35] J.J. Gibson, P. Olum, F. Rosenblatt, Parallax and perspective during aircraft landings, Am. J. Psychol. 68 (1955) 372–385. [36] J.J. Gibson, Visually controlled locomotion and visual orientation in animals, Brit. J. Psychol. 49 (1958) 182–194. [37] K.G. Götz, Flight control in Drosophila by visual perception of motion, Kybernetik 4 (1969) 199–208. [38] M. Goulet, R. Campan, The visual perception of the relative distance in the wood cricket Nemobius sylvestris, Physiol. Entomol. 6 (1981) 357–387. [39] W.A. van de Grind, Smart mechanisms for the evaluation and control of self-motion, in: R. Warren, A.H. Wertheim (Eds.), Perception and Control of Self-motion, Lawrence Erlbaum, London, 1990, pp. 357–398. [40] R.C. Hardie, Functional organization of the fly retina, in: D. Ottosson (Ed.), Progress in Sensory Physiology 5, Springer, Berlin, 1985. [41] R. Harrison, C. Koch, A silicon implementation of the fly's optomotor control system, Neural Computation 12 (2000) 2291–2304. [42] K. Hausen, M. Egelhaaf, Neural mechanisms of visual course control in insects, in: D.G. Stavenga, R.C. Hardie (Eds.), Facets of Vision, Springer, Berlin, 1989, pp. 391–424 (Chapter 18). [43] M. Heisenberg, R. Wolf, Vision in Drosophila, Springer, Berlin, 1984. [44] G.A. Horridge, The evolution of visual processing and the construction of seeing systems, Proc. Roy. Soc. Lond. B 230 (1987) 279–292. [45] G. Hoyle, Identified Neurons and Behavior of Arthropods, Plenum, New York, 1977. [46] S.A. Huber, M.O. Franz, H.H. Bülthoff, On robots and flies: modeling the visual orientation behavior of flies, Robot. Autonom. Syst. 29 (1999) 227. [47] M. Ichikawa, H. Yamada, J. Takeuchi, Flying robot with biologically inspired vision, J. Robot. Mechatron. 6 (2001) 621–624. [48] F. Iida, Biologically inspired visual odometer for navigation of a flying robot, Robot. Autonom. Syst. 44 (2003) 201–208. [49] F. Iida, D. Lambrinos, Navigation in an autonomous flying robot by using a biologically inspired visual odometer, in: G.T. McKee, P.S. Schenker (Eds.), Sensor Fusion and Decentralized Control in Robotic Systems III, SPIE vol. 4196, Bellingham, USA, 2000. [50] G. Indiveri, J. Kramer, C. Koch, System implementation of analog VLSI velocity sensors, IEEE Micro 16 (1996) 40–49. [51] J.S. Kennedy, The migration of the desert locust, Philos. Trans. Roy. Soc. B 235 (1951) 163–290. [52] W.H. Kirchner, M.V. Srinivasan, Freely flying honeybees use image motion to estimate distance, Naturwissenschaften 76 (1989) 281–282. [53] J.J. Koenderink, Optic flow, Vis. Res. 26 (1986) 161–180. [54] H. Krapp, B. Hengstenberg, R. Hengstenberg, Dendritic structure and receptive-field organisation of optic flow processing interneurons in the fly, J. Neurophysiol. 79 (1998) 1902–1917. [55] M. Land, Variations in the structure and design of compound eyes, in: D.G. Stavenga, R.C. Hardie (Eds.), Facets of Vision, Springer, Berlin, 1989, pp. 90–111 (Chapter 5). [56] D.N. Lee, The optic flow field: the foundation of vision, Philos. Trans. Roy. Soc. Lond. Ser. B 290 (1980) 169–179.
[57] D.N. Lee, Visual information during locomotion, in: R.B. Macleod, H.L. Pick (Eds.), Perception: Essays in Honour of James Gibson, Cornell University Press, Ithaca/London, 1974, pp. 250–267. [58] M. Lehrer, M.V. Srinivasan, S.W. Zhang, G.A. Horridge, Motion cues provide the bee's visual world with a third dimension, Nature (London) 332 (1988) 356–357. [59] M.A. Lewis, M.A. Nelson, Look before you leap: peering behavior for depth perception, in: R. Pfeifer, B. Blumberg, J.A. Meyer, S. Wilson (Eds.), From Animals to Animats 5, MIT Press, Cambridge, USA, 1998, pp. 98–103. [60] M.A. Lewis, M. Arbib, Biomorphic robots, Autonomous Robots (Special Issue) 77 (1999). [61] P. Maes, Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back, MIT Press, Cambridge, USA, 1991. [62] N. Martin, N. Franceschini, Obstacle avoidance and speed control in a mobile vehicle equipped with a compound eye, in: I. Masaki (Ed.), Intelligent Vehicles, MIT Press, Cambridge, USA, 1994, pp. 381–386. [63] M.J. Mataric, Navigating with a rat brain: a neurobiologically inspired model for robot spatial representation, in: J.A. Meyer, S. Wilson (Eds.), From Animal to Animats, MIT Press, Cambridge, USA (1990). [64] C.A. Mead, Analog VLSI and Neural Systems, Addison-Wesley, Reading, MA, 1989. [65] F.A. Miles, J. Wallman, Visual Motion and its Role in the Stabilization of Gaze, Elsevier, Amsterdam, 1993. [66] R. Möller, D. Lambrinos, R. Pfeifer, T. Labhart, R. Wehner, Modeling ant navigation with an autonomous agent, in: R. Pfeifer, B. Blumberg, J.A. Meyer, S. Wilson (Eds.), From Animals to Animats 5, MIT Press, Cambridge, USA, 1998, pp. 185–194. [67] F. Mura, N. Franceschini, Visual control of altitude and speed in a flying agent, in: D. Cliff, P. Husbands, J.A. Meyer, S.W. Wilson (Eds.), From Animals to Animats, MIT Press, Cambridge, 1994, pp. 91–99. [68] F. Mura, N. Franceschini, Obstacle avoidance in a terrestrial mobile robot provided with a scanning retina, in: M. Aoki, I. Masaki (Eds.), Intelligent Vehicles II, 1996, pp. 47–52. [69] F. Mura, N. Franceschini, Biologically inspired 'retinal scanning' enhances motion perception of a mobile robot, in: A. Bourjault, S. Hata (Eds.), Proc. 1st Europe–Asia Congress on Mechatronics, vol. 3, ENSM, Besançon, 1996, pp. 934–940. [70] F. Mura, I. Shimoyama, Visual guidance of a small mobile robot using active, biologically-inspired eye movements, in: Proc. IEEE Intern. Conf. Rob. Autom. 3, 1998, pp. 1859–1864. [71] W. Nachtigall, Bionik, second ed., Springer, Berlin, 2002. [72] K. Nakayama, J.M. Loomis, Optical velocity patterns, velocity sensitive neurons and space perception: a hypothesis, Perception 3 (1974) 63–80. [73] T. Netter, N. Franceschini, Neuromorphic optical flow sensing for nap-of-the-earth flight, in: Mobile Robots XIV, SPIE vol. 3838, Bellingham, USA, 1999, pp. 208–216. [74] T. Netter, N. Franceschini, A robotic aircraft that follows terrain using a neuromorphic eye, in: Intelligent Robots and Systems, Proc. IROS-2002, EPFL, Lausanne, 2002, pp. 129–134. [75] T.R. Neumann, H.H. Bülthoff, Insect inspired visual control of translatory flight, in: Proc. European Conf. on Artificial Life, ECAL 2001, Springer, Berlin, 2001, pp. 627–636. [76] T.R. Neumann, S. Huber, H.H. Bülthoff, Artificial systems as models in biological cybernetics, Behav. Brain Sci. (2001) 1071–1072. [77] M. O'Shea, C.H.F. Rowell, Complex neural integration and identified interneurons in the locust brain, in: G. Hoyle (Ed.), Identified Neurons and Behaviour of Arthropods, Plenum, New York, 1977, pp. 307–328.
[78] R. Pfeifer, D. Lambrinos, Cheap vision––exploiting ecological niche and morphology, in: V. Hlavac, K.G. Jeffery, J. Wiedemann (Eds.), SOFSEM 2000, 27th Conf., Current Trends in Theory and Practice of Informatics, Milovy, Czech Republic, November 2000, pp. 202–226.
[79] R. Pfeifer, C. Scheier, Understanding Intelligence, MIT Press, Cambridge, 2001.
[80] J.M. Pichon, C. Blanes, N. Franceschini, Visual guidance of a mobile robot equipped with a network of self-motion sensors, in: W.J. Wolfe, W.H. Chun (Eds.), Mobile Robots IV, Proc. SPIE vol. 1195, Bellingham, USA, 1989, pp. 44–53.
[81] W. Reichardt, Movement perception in insects, in: Processing of Optical Data by Organisms and by Machines, Academic Press, New York, 1969, pp. 465–493.
[82] W. Reichardt, Evaluation of optical motion information by movement detectors, J. Comp. Physiol. A 161 (1987) 533–547.
[83] A. Riehle, N. Franceschini, Motion detection in flies: parametric control over ON–OFF pathways, Exp. Brain Res. 54 (1984) 390–394.
[84] F. Ruffier, N. Franceschini, OCTAVE: a bioinspired visuomotor control system for the guidance of Micro-Air-Vehicles, in: A. Gabriel-Vasquez, D. Abbott, R. Carmona (Eds.), Bioengineered and Bioinspired Systems, SPIE vol. 5119, 2003, pp. 1–12.
[85] F. Ruffier, S. Viollet, N. Franceschini, Visual control of two aerial mini-robots by insect-based autopilots, Advanced Robotics, 2004, in press.
[86] F. Ruffier, N. Franceschini, Visually guided micro-aerial robot: take off, terrain following, landing and wind reaction, in: Proc. IEEE Intern. Conf. Robotics and Automation (ICRA 2004), New Orleans, USA, 2004.
[87] F. Ruffier, S. Viollet, S. Amic, N. Franceschini, Bio-inspired optical flow circuits for the visual guidance of micro-air vehicles, in: Proc. IEEE Int. Symp. on Circuits and Systems, ISCAS 03, Bangkok, Thailand, 2003.
[88] G. Sandini, J. Santos-Victor, F. Curotto, S. Garibaldi, Robotic bees, in: Proc. IEEE Conf. on Intelligent Robots and Systems (IROS93), New York, 1993.
[89] R. Sarpeshkar, J. Kramer, C. Koch, Pulse domain neuromorphic circuit for computing motion, United States Patent No. 5,781,648 (1998).
[90] C. Schilstra, J.H. van Hateren, Blowfly flight and optic flow. 1. Thorax kinematics and flight dynamics, J. Exp. Biol. 202 (1999) 1481–1490.
[91] M.V. Srinivasan, How insects infer range from motion, in: F.A. Miles, J. Wallman (Eds.), Visual Motion and its Role in the Stabilization of Gaze, Elsevier, 1993, pp. 139–156.
[92] M. Srinivasan, S. Venkatesh, From Living Eyes to Seeing Machines, Oxford University Press, Oxford, 1997.
[93] M.V. Srinivasan, M. Lehrer, W. Kirchner, S.W. Zhang, Range perception through apparent image speed in freely flying honeybees, Vis. Neurosci. 6 (1991) 519–535.
[94] D.G. Stavenga, R.C. Hardie, Facets of Vision, Springer, Berlin, 1989.
[95] N.J. Strausfeld, Atlas of an Insect Brain, Springer, Berlin, 1976.
[96] N.J. Strausfeld, Beneath the compound eye: neuroanatomical analysis and physiological correlates in the study of insect vision, in: D.G. Stavenga, R.C. Hardie (Eds.), Facets of Vision, Springer, Berlin, 1989, pp. 317–359 (Chapter 16).
[97] L.F. Tammero, M. Dickinson, The influence of visual landscape on the free flight behavior of the fruitfly Drosophila melanogaster, J. Exp. Biol. 205 (2002) 327–343.
[98] S. Ullman, Artificial intelligence and the brain: computational studies of the visual system, Annu. Rev. Neurosci. 9 (1986) 1–26.
[99] D.I. Vaney, S. He, W.R. Taylor, W.R. Levick, Direction-selective ganglion cells in the retina, in: J.M. Zanker, J. Zeil (Eds.), Motion Vision: Computational, Neural and Ecological Constraints, Springer, Berlin, 2001, pp. 13–65.
[100] S. Viollet, N. Franceschini, Biologically-inspired visual scanning sensor for stabilization and tracking, in: Proc. IEEE Intern. Conf. Intelligent Robots and Systems (IROS'99), Kyongju, Korea, 1999, pp. 204–209.
[101] S. Viollet, N. Franceschini, Visual servo-system based on a biologically-inspired scanning sensor, in: Sensor Fusion and Decentralized Control II, SPIE vol. 3839, Bellingham, USA, 1999, pp. 144–155.
[102] S. Viollet, N. Franceschini, Superaccurate visual control of an aerial minirobot, in: U. Rückert, J. Sitte, U. Witkowski (Eds.), Autonomous Minirobots for Research and Edutainment, Heinz Nixdorf Institut, Paderborn, Germany, 2001, pp. 215–224.
[103] E. Vittoz, Analog VLSI signal processing: why, where and how? J. VLSI Signal Proc. 8 (1994) 27–44.
[104] H. Wagner, Flow-field variables trigger landing in flies, Nature 297 (1982) 147–148.
[105] H. Wagner, Flight performance and visual control of flight of the free-flying housefly Musca domestica, I/II/III, Philos. Trans. Roy. Soc. B 312 (1986) 527–600.
[106] W.H. Warren, B.A. Kay, D. Zosh, P. Duchon, S. Sahuc, Optic flow is used to control human walking, Nature Neurosci. 4 (2001) 213–216.
[107] B. Webb, Can robots make good models of biological behavior? Behav. Brain Sci. 24 (2001) 6.
[108] B. Webb, Robots in invertebrate neuroscience, Nature 417 (2002) 359–363.
[109] B. Webb, T. Consi, Biorobotics, MIT Press, Cambridge, USA, 2001.
[110] R. Wehner, Spatial vision in arthropods, in: H.J. Autrum (Ed.), Handbook of Sensory Physiology, vol. VII/6C, Springer, Berlin, 1981, pp. 288–616.
[111] C. Wehrhahn, Sex-specific differences in the chasing behaviour of free-flying houseflies, Biol. Cyb. 32 (1979) 239–241.
[112] T.C. Whiteside, D.G. Samuel, Blur zone, Nature 225 (1970) 94–95.
[113] M. Zaretsky, C.H.F. Rowell, Saccadic suppression by corollary discharge in the locust, Nature 280 (1979) 583–585.