Divergent Stereo in Autonomous Navigation: From Bees to Robots

J. Santos-Victor (1), G. Sandini (2), F. Curotto (2), S. Garibaldi (2)

(1) CAPS / Instituto Superior Tecnico, Instituto de Sistemas e Robotica, Complexo 1, Av. Rovisco Pais, 1096 Lisboa codex, Portugal. e-mail: [email protected]

(2) DIST - University of Genova, Laboratory for Integrated Advanced Robotics (LIRA-Lab), Via Opera Pia 11A, I-16145 Genoa, Italy. e-mail: [email protected]

Abstract

This paper presents experiments with a real-time navigation system driven by two cameras pointing laterally with respect to the navigation direction (Divergent Stereo). Similarly to what has been proposed in [1, 2], our approach [3, 4] assumes that, for navigation purposes, the driving information is not distance (as obtainable from a stereo setup) but motion and, more precisely, qualitative optical flow information computed over non-overlapping areas of the visual fields of two cameras. Following this idea, a mobile vehicle has been equipped with a pair of laterally looking cameras (much like honeybees), and a controller based on fast, real-time computation of optical flow has been implemented. The control of the mobile robot (Robee) is based on the comparison between the apparent image velocities of the left and the right cameras. The solution adopted is derived from recent studies [5] describing the behavior of freely flying honeybees and the mechanisms they use to perceive range. This qualitative information (no explicit measure of depth is performed) is used in many experiments to show the robustness of the approach, and a detailed description of the control structure is presented to demonstrate the feasibility of the approach for driving the mobile robot within a cluttered environment. A discussion of the potential of the approach and its implications in terms of sensor structure is also presented.

The research described in this paper has been supported by the Special Projects on Robotics of the Italian National Council of Research and by the ESPRIT project VAP-II. A fellowship from a bilateral collaboration between Consiglio Nazionale delle Ricerche (CNR) and Junta Nacional de Investigação Científica e Tecnológica (JNICT) is gratefully acknowledged by Jose Santos-Victor.


1 Introduction

Research in computer vision has often been motivated by similarities with solutions adopted in natural living systems. Since the early work in image processing and computer vision, the structure of the human visual system has often been used as the "natural" model of artificial visual systems. Comparatively less effort has been devoted to the study and implementation of artificial vision systems based on so-called "lower-level" animals. The motivation for this biased research activity stems mainly from the idea that by "understanding human vision" one could obtain a "general" solution to the visual process, and eventually be able to develop a "general-purpose visual system". We certainly agree that there is a lot to learn from human vision (and, conversely, a lot to learn by trying to build artificial systems with anthropomorphic features) but, on the other hand, we do not believe that such a "general-purpose vision system" actually exists, and we are convinced that a lot can be gained and understood by looking at much simpler biological systems.

Looking more carefully, in fact, it is evident that the human visual system is not as general purpose as it may look. For example, it is of little use underwater, it has a limited spectral band, and it cannot measure distance, size, or velocity in metric terms. Even its recognition system can be fooled very easily, for example by turning pictures of familiar faces upside-down. The apparent generality of the system comes from the fact that we actually perform a limited number of motor and cognitive "actions" and, within this limited domain, our visual system (or, more generally, the integration of our perceptual systems) performs very efficiently. Following these ideas, one could say that the goal of a vision system in a "living" agent is not generality but specificity: the physical structure and the purpose drive perception [6, 7, 8].

Within the scope of this paper we argue that the frontal position of the eyes (with a very large binocular field) is mainly motivated by manipulation requirements and that, if one restricts the purpose to navigation control, a potentially more effective positioning is the one adopted by flies, bees, and other flying insects, namely with the eyes pointing laterally. The tradeoff between these two extreme situations is that in the latter case the global visual field (i.e., the union of the visual fields of the two eyes) can be enlarged without increasing the computational power, whereas by increasing the binocular part of the visual field, the region where a stereo-based depth measure is possible becomes larger. Looking again at biology, one finds that the position of the eyes in different species becomes more frontal as the manipulative abilities increase (it is no accident that humans and primates have, among all species, the widest binocular field and the most frontal eye positioning).

Footnote 1: And, possibly, from the notion that "human vision is complex while insect vision is simple".
Footnote 2: Of course, one could use "technological tricks" like rear-view mirrors, but this is not a valid argument in this context.

Other aspects worth considering are the facts that stereo acuity is maximal at short distances, which behaviorally correspond to the manipulative workspace, and that stereo is the only visual modality providing depth information in static environments (which is behaviorally relevant particularly in manipulative tasks). Conversely, motion parallax is useful if the "actor" or the environment is dynamic, and its accuracy (and the corresponding range of operation) can be tuned by an active observer by varying its own velocity. In this respect, motion-derived features such as time-to-crash seem more relevant for dynamically controlling posture and other navigation parameters: if one has to jump over an obstacle, the change in posture while approaching it (for example, when to start lifting the leg) may be driven by time-to-crash more than by distance (which would depend on the speed of approach).

The biological model of the navigating actor proposed in this paper comes from insects and the use they make of flow information to solve apparently complex motor tasks like flying in unconstrained environments and landing on surfaces [9, 10, 11, 1]. Particularly relevant to this paper is the experiment reported in [5], where honeybees were trained to navigate along corridors in order to reach a source of food. The behavioral observation is that, even if the corridor was wide enough to allow for "irregular" trajectories, the bees actually flew in the middle of the corridor. This finding is even more surprising if we take into consideration that insects do not have accommodation and that their stereo baseline is so small that disparity cannot be reliably measured. Apparently, then, no depth information can be derived using "traditional" methods. The solution presented in [5] is rather simple and is based on computing the difference between the velocity information obtained from the left and the right eyes: if the bee is in the middle of the corridor the two velocities are the same; if the bee is closer to one wall, the velocity of the ipsilateral eye is larger. A simple control mechanism (the so-called centering effect) may then be based on motor actions that tend to minimize this difference: move to the left if the velocity measured by the right eye is larger than that measured by the left (and vice versa).

Following this line of thought, a mobile robot (Robee) has been equipped with laterally looking cameras, and a controller has been implemented, based upon motion-derived measures, which does not rely on precise geometric information but takes full advantage of the continuous (in time) nature of visual information. Navigation is controlled on the basis of the optical flow field computed over windows of images acquired by a pair of cameras pointing laterally (in analogy with the position of the eyes in honeybees and other insects) and with non-overlapping visual fields. We call this camera placement Divergent Stereo. While working on the experiments presented here, we became aware of a similar implementation (the beebot system) proposed by Coombs and Roberts [2]. A discussion of the differences between the two approaches will be presented later on. The main difference, however, lies in the fact that, while in the system proposed by Coombs and Roberts gaze control is part of the navigation strategy, in the present implementation a simpler setup has been analyzed where gaze

control becomes unnecessary by appropriately positioning the two cameras and by controlling some behavioral variables such as the "turning speed". The experiments reported describe the behavior of robee in tasks such as following a wall, avoiding obstacles, making sharp and smooth turns, and navigating along a funneled corridor with a very narrow exit.

In Section 2, the Divergent Stereo approach proposed for robot navigation is described, along with the experimental setup. The simplified computation of the optical flow is presented in Section 3, and the control aspects in Section 4. Finally, after the presentation of the experimental results (Section 5), a brief discussion summarizes the major points of the paper within the framework of qualitative vision.

2 The Divergent Stereo Approach

The basis of the visually guided behavior of robee is the centering reflex, described in [5] to explain the behavior of honeybees flying between two parallel "walls". The qualitative visual measure used is the difference between the image velocities computed over a lateral portion of the left and the right visual fields. Even if the principle of operation is very simple, a few points are worth discussing before entering into the implementation details, in order to explain the underlying difficulties and the design principles adopted. The first, and possibly major, driving hypothesis is the use of qualitative depth measurements: no attempt is made to actually compute depth in metric terms. The second guideline is simplicity: whenever possible, the tradeoff between accuracy and simplicity has been biased toward the latter criterion. Finally, the goal of our visuo-motor controller is limited to the "reflexive" level of a navigation architecture acting at short range. In spite of these limitations, we will demonstrate a variety of navigation capabilities which are not restricted to obstacle avoidance or to the "centering reflex".

Among the practical problems encountered in the actual implementation, two are worth discussing in order to better appreciate some of the concepts presented later on. The first point regards the relationship between heading direction and optical flow computation. Strictly speaking,

flow information can be reliably used for the centering reflex only when the directions of the two cameras are symmetric with respect to the heading direction. With the camera placement of robee (see Figure 1), this requirement is satisfied only during translational motion of the robot. During rotational motions (as during obstacle avoidance), the flow field is not solely dependent on the scene structure. This problem has been solved by Coombs and Roberts [2] by stabilizing the cameras against robot rotations and by introducing a control loop keeping the gaze aligned with the heading direction. The solution adopted in robee is different, as the two cameras are fixed. Section 3.1 presents a detailed analysis of the rotational field and discusses ways to

overcome this problem, either by using special control strategies or by carefully selecting the camera positions.

A second relevant point is the unilateral or bilateral absence of flow information, caused by an absence of texture or by localized changes in the environmental structure (e.g., an open door along a corridor). (The same situation can occur even in textured environments when the relationship between wall distance and vehicle speed is such that the resolution of the optical flow computation is lower than the image velocity, for example if the robot is moving slowly and the walls are very far away.) By applying the centering reflex crudely, this lack of flow information would produce a rather unsteady behavior of the robot. This problem has been solved by introducing a sustaining mechanism that stabilizes, in time, the unilateral flow amplitude in case of lack of information. This simple qualitative mechanism does not alter the reflexive behavior of robee (in the sense that it is neither based on prior knowledge of the environment nor on metric information) and extends the performance of the system to a much wider range of environmental situations (see Section 4.3 for more details).

The experimental setup is based on a computer-controlled mobile platform, a TRC Labmate, with two cameras pointing laterally. The two cameras, mounted with 4.8 mm auto-iris lenses, are connected to a VDS 7001 Eidobrain image processing workstation. The left and right images are acquired simultaneously during the vehicle motion. The setup is illustrated in Figure 1. During the experiments, the vehicle forward speed is approximately 80 mm/s.

Figure 1: Robee with the divergent stereo camera setup

3 Optical Flow Computation

To compare the image velocities seen by the left and right cameras, the average of the optical flow on each side is computed. The optical flow V = (u, v) is usually obtained by using the fundamental optical flow constraint [12], shown in equation (1), or by imposing second-order


stationarity constraints on the brightness function [13, 14, 15], according to equation (2):

\frac{\partial I}{\partial x} u + \frac{\partial I}{\partial y} v + \frac{\partial I}{\partial t} = 0   (1)

\frac{d}{dt} \nabla I = 0   (2)

By using expression (2), one ends up with the following system of equations, which can be solved to obtain both components of the optical flow:

I_{xx} u + I_{xy} v = -I_{xt}   (3)
I_{xy} u + I_{yy} v = -I_{yt}   (4)

where I_{xx}, I_{yy} and I_{xy} stand for the second spatial derivatives of the image, while I_{xt} and I_{yt} stand for the time-space derivatives. In order to obtain useful flow estimates it is necessary, as in other similar approaches, to smooth the images in both the space and time domains. Usually, this is accomplished through Gaussian smoothing (in space and time). The temporal smoothing is generally computed by centering the filter kernel on the image to be processed and, as a result, this procedure requires not only past images but also images acquired after the time instant under consideration (a non-causal filter). The time delay introduced by this procedure becomes relevant when the visual information is to be used for real-time control, because it introduces a lag in the feedback control loop. In our approach, instead, having the control application in mind, a causal first-order time filter has been used, which recursively updates the time-smoothed image:

I_s(0) = I(0)   (5)

I_s(t) = \alpha\, I_s(t-1) + (1 - \alpha)\, I(t), \qquad 0 \le \alpha \le 1   (6)

where I(t) and I_s(t) stand for the image acquired at time t and the corresponding temporally smoothed image. The parameter α controls the degree of smoothing: the larger α is, the more the images are smoothed in time. With this recursive time-filtering procedure, only present and past images are required (regarding memory storage, it is only necessary to keep I(t), I_s(t) and I_s(t-1)).

The experiments used to demonstrate the Divergent Stereo navigation concept are based upon a mobile platform moving on a flat floor. As the robot motion is constrained to the ground plane, it can be assumed that the flow along the vertical direction is negligible (unless there is significant lateral motion, which is seldom the case). Hence, we can use a simpler computation procedure, which is faster and more robust (since, for example, it does not involve the computation of second derivatives), by simply assuming in equation (1) that the vertical flow component, v, is 0:

\frac{\partial I}{\partial x} u + \frac{\partial I}{\partial y} v + \frac{\partial I}{\partial t} = 0   (7)

v = 0   (8)

and u is given by:

u = -\frac{I_t}{I_x}   (9)

where I_x and I_t stand for the x spatial derivative and the temporal derivative of the image. The spatial smoothing is performed by convolving the time-smoothed images with a Gaussian filter, and the time derivative is simply computed by subtracting two consecutive space- and time-smoothed images. To speed up the optical flow computation, a set of five images is acquired at video rate, and the temporal smoothing of both the left and right image sequences starts concurrently with the acquisition. The last two images (on each side) of the time-smoothed sequence are then used to compute the average left and right optical flow vectors. Finally, the difference between the average flow on the left and right images is used to synthesize the control law. Then a new sequence of five images is acquired, and the whole process is repeated. In this way, the images are sampled at video rate even if the complete system operates at a slower rate (determined by the computation time, which varies with the number of non-zero flow vectors). The images acquired have a resolution of 256x256 pixels, and the optical flow is computed over a window of 32x64 pixels on each side. In the current implementation, the averaged optical flow vectors are computed at a frequency of approximately 1.5 Hz.
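As an illustration only (this is not the authors' implementation), the following sketch shows how the causal temporal filter of equations (5)-(6) and the single-component estimate of equation (9) could be combined over a lateral image window; the smoothing factor, gradient threshold and helper names are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    ALPHA = 0.6        # temporal smoothing factor of eq. (6) (assumed value)
    MIN_GRAD = 1e-2    # assumed threshold rejecting pixels with a weak gradient

    def temporal_smooth(prev_smoothed, new_image, alpha=ALPHA):
        """Causal first-order filter of eqs. (5)-(6): Is(t) = a*Is(t-1) + (1-a)*I(t)."""
        return alpha * prev_smoothed + (1.0 - alpha) * new_image

    def average_horizontal_flow(Is_prev, Is_curr, sigma=1.5):
        """Average |u| over a window, with u = -It/Ix (vertical flow assumed zero)."""
        a = gaussian_filter(Is_prev, sigma)   # spatial (Gaussian) smoothing
        b = gaussian_filter(Is_curr, sigma)
        It = b - a                            # temporal derivative (frame difference)
        Ix = np.gradient(b, axis=1)           # horizontal spatial derivative
        mask = np.abs(Ix) > MIN_GRAD
        u = np.where(mask, -It / np.where(mask, Ix, 1.0), 0.0)
        return np.abs(u[mask]).mean() if mask.any() else 0.0

In the reported implementation the analogous computation runs on a 32x64 window on each side, and the two averages feed the control law of Section 4.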

In order to clarify the Divergent Stereo approach, we performed an open-loop experiment illustrating the obstacle detection capabilities. Figure 2 shows the experimental setup and an image with the flow vectors superimposed. The vehicle moves at a forward speed of 80 mm/s.


Figure 2: Experimental setup of the obstacle detection experiment (dimensions annotated in the original figure: 42 cm and 108 cm). A sample of the computed optical flow is shown on the right.

Along the rectilinear trajectory, the robot passes midway between the left wall and an obstacle placed on the right side. At this point, the robot is centered with respect to the left wall and the obstacle, and the bilateral flow difference should be 0.

The evolution in time of the difference between the left and right average flow fields is shown in Figure 3. As expected, the difference is initially positive, as the robot is closer to the left wall than to the right one; close to the obstacle, the error approaches zero, since the vehicle is approximately centered between the obstacle and the left wall. After passing the obstacle, the difference returns to approximately its initial value, as the robot continues to follow a rectilinear path.


Figure 3: Difference between the left and right average flows during the obstacle detection experiment. The full path takes 20 seconds, covering a distance of about 2.4 m. The filtered signal (dashed line) results from applying a first-order filter to the measured values.

This experiment shows the sensitivity of the proposed perception process in an obstacle detection task, thus motivating its use in a closed-loop fashion. An important remark is that the system does not critically rely on the accuracy of the optical flow computation, because the measurements are used continuously in closed loop.

3.1 Analysis of rotational flow

The vision-based centering reflex relies on the assumption that the optical flow amplitude depends only on the distance between the cameras and the environment. Strictly speaking, this assumption is true if the two cameras point symmetrically with respect to the heading direction and the heading direction does not change. This constraint does not hold during the rotational motion necessary to adjust the heading direction, because the roto-translation of the cameras introduces a component in the flow field which does not depend on distance. This section presents a detailed analysis of the rotational flow of the Divergent Stereo setup, and explains how the disturbances introduced during robot rotations can be reduced, and eventually become irrelevant, by appropriately positioning the cameras with respect to the vehicle rotation center and by tuning some of the centering-reflex control variables.

Figure 4 describes the set of coordinate systems associated with the robot. Let the robot coordinate system be denoted by {X,Y}, and the left/right camera coordinate systems by {x,y}. The setup is symmetric and the right camera optical center has polar coordinates (ρ, φ) with respect to the origin of the robot coordinate system.


Figure 4: The coordinate systems associated with the vehicle for the analysis of the rotational motion. The robot coordinate system is denoted by {X,Y} and the left/right camera coordinate systems are denoted by {x,y}. The setup is symmetric and the center of the right camera has polar coordinates (ρ, φ) with respect to the robot coordinate system origin.

For small rotations, first-order approximations can be used to determine the translational motion affecting the camera optical centers:

\dot{X}_L = -\rho \sin(\phi)\,\dot{\theta} \qquad \dot{Y}_L = -\rho \cos(\phi)\,\dot{\theta}

\dot{X}_R = -\rho \sin(\phi)\,\dot{\theta} \qquad \dot{Y}_R = \rho \cos(\phi)\,\dot{\theta}

where the subindex refers to the left or right camera. Expressing these motion parameters in the camera coordinate systems leads to:

\omega_{yR} = \dot{\theta} \qquad T_{xR} = -\rho\cos(\phi)\,\dot{\theta} - T_M \qquad T_{zR} = -\rho\sin(\phi)\,\dot{\theta}

\omega_{yL} = \dot{\theta} \qquad T_{xL} = -\rho\cos(\phi)\,\dot{\theta} + T_M \qquad T_{zL} = \rho\sin(\phi)\,\dot{\theta}

where T_M is the robot forward speed. The influence of the rotation can be perceived in the following equations describing the horizontal component of the image motion field:

U_L = \frac{1}{Z_L}\left[x_a \rho\sin(\phi)\,\dot{\theta} + \rho\cos(\phi)\,\dot{\theta} - T_M\right] - (1 + x_a^2)\,\dot{\theta}

U_R = \frac{1}{Z_R}\left[-x_a \rho\sin(\phi)\,\dot{\theta} + \rho\cos(\phi)\,\dot{\theta} + T_M\right] - (1 + x_a^2)\,\dot{\theta}

where x_a denotes an image point coordinate expressed in units of focal length. Observing that the left and right flow fields have opposite directions (due to the choice of the camera coordinate systems), the comparison of the left/right lateral flows is given by the sum of U_L and U_R:

e = U_L + U_R   (10)

e = \left[T_M - x_a \rho\sin(\phi)\,\dot{\theta}\right]\left(\frac{1}{Z_R} - \frac{1}{Z_L}\right) + \rho\cos(\phi)\,\dot{\theta}\left(\frac{1}{Z_R} + \frac{1}{Z_L}\right) - 2(1 + x_a^2)\,\dot{\theta}   (11)

In the absence of rotation, this equation simplifies to:

e = T_M \left(\frac{1}{Z_R} - \frac{1}{Z_L}\right)   (12)

This equation shows that, without rotation, the error signal e is directly proportional to the deviation from the centered trajectory. The robot forward speed appears as a scaling factor which plays the role of a sensitivity gain. If the rotational motion is significant, the circumstances under which the lateral flow comparison is still meaningful as a trajectory deviation measurement have to be considered. There are three main contributions to be analyzed:

1. In the first term, [T_M - x_a \rho\sin(\phi)\dot{\theta}](1/Z_R - 1/Z_L), the rotation affects the sensitivity gain. Avoiding large variations of this gain, which would influence the system closed-loop behavior, introduces a constraint between the maximum rotation speed and the robot forward speed T_M. This requirement can be met by suitably selecting the values of ρ and φ, or by increasing the vehicle speed.

2. The term \rho\cos(\phi)\dot{\theta}(1/Z_R + 1/Z_L) depends on the unknown 3D structure but does not convey any information on the deviation from the optimal trajectory. It can be made small enough by installing the cameras closer to the vehicle rotation center, hence reducing ρ.

3. The term -2(1 + x_a^2)\dot{\theta} does not depend on the 3D structure of the environment. A calibration procedure could be designed to compute the correspondence between rotational speed (which is internally controlled) and displacements in the image plane. During normal operation, this term can be compensated.

3.2 Design specifications

The problem of determining a set of design constraints to overcome the problems due to the rotational motion is analyzed in this section. The analysis is restricted to the structure-dependent terms that appear in equation (11). Concerning the first term, it can be seen that the rotation leads to a change in the sensitivity of the error signal to the trajectory deviation (see equation (12)). Very fast rotations could even lead to instability. Therefore, a natural constraint is that the rotation term should not affect the sensitivity gain too significantly:

x_a \rho \sin(\phi)\,\dot{\theta} < \alpha\, T_M   (13)

where α ∈ [0, 1] is the admissible relative change in the sensitivity gain. Similarly, the second term in the error signal should also be kept small relative to T_M:

\rho \cos(\phi)\,\dot{\theta} < \beta\, T_M   (14)

where β ∈ [0, 1] quantifies the admissible relative magnitude of this term. A suitable setup (ρ, φ) can be chosen to meet these specifications. There are two other ways to put these constraints into practice. The first consists in fixing the value of the forward speed and limiting the value of \dot{\theta} by saturating the control variable before it is applied to the robot (in natural systems this may be an intrinsic constraint):

\dot{\theta} < \min\left( \frac{\alpha\, T_M}{x_a \rho \sin(\phi)},\; \frac{\beta\, T_M}{\rho \cos(\phi)} \right)   (15)

Another way of addressing the problem consists in having an extra control loop that dynamically adjusts the vehicle forward speed to fulfill the design constraints:

T_M > \max\left( \frac{x_a \rho \sin(\phi)\,\dot{\theta}}{\alpha},\; \frac{\rho \cos(\phi)\,\dot{\theta}}{\beta} \right)   (16)

As a numerical example, consider the values of the current setup:

\phi = 72^{\circ} \qquad T_M = 0.08\ \mathrm{m/s} \qquad \rho = 0.34\ \mathrm{m}

Since the focal length of the lenses is 4.8 mm, roughly the same order of magnitude as the size of the CCD chip, a reasonable approximation is x_a \simeq 1. The following constraint arises from these settings:

\dot{\theta} < \min( 0.247\,\alpha,\; 0.759\,\beta )\ \mathrm{rad/s}   (17)

For example, setting the design parameters α and β to 0.5 and 0.2, respectively (the choice of these numerical values is simply a way of specifying a threshold relative to some known nominal quantity), yields:

\dot{\theta} < 7.1\ \mathrm{deg/s}   (18)
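As a quick numerical check of equations (17)-(18) (not taken from the paper's software), the bound can be reproduced from the setup values above:

    import math

    rho, phi = 0.34, math.radians(72.0)   # camera offset [m] and angle from the text
    T_M, x_a = 0.08, 1.0                  # forward speed [m/s] and x_a ~ 1
    alpha, beta = 0.5, 0.2                # design parameters chosen in the text

    c1 = T_M / (x_a * rho * math.sin(phi))   # ~0.247 rad/s (first-term factor)
    c2 = T_M / (rho * math.cos(phi))         # ~0.76  rad/s (second-term factor)
    theta_dot_max = min(alpha * c1, beta * c2)
    print(math.degrees(theta_dot_max))       # ~7.1 deg/s, as in eq. (18)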

To conclude, one can say that through careful placement of the camera system and a careful choice of some design parameters, the constraints derived in this section can be met. It is worth stressing, however, that the proposed solution only represents an approximation based on qualitative observations. On the other hand, the experiments performed clearly show that the desired behavior is obtained and that the choice of the design parameters is by no means critical, demonstrating the robustness of the proposed approach.


Obviously, another possibility to overcome the rotation problem is to inhibit the control action whenever the robot is required to rotate faster than the maximum allowable speed (a sort of "saccadic" motion which closely resembles the behavior of insects). During our experiments we tried several approaches, such as using the design specifications explained in this section or adopting the "inhibitory" solution, i.e., actually disregarding the flow measurements during fast rotations. Robee performed equally well in both situations. The solution adopted by Coombs and Roberts is also satisfactory, even if the higher complexity introduced by stabilizing the cameras against head rotations and by keeping the eyes aligned with the heading direction does not seem to improve the behavioral performance.

4 Real-time Control

The overall structure of the robee control system involves two main control loops:

1. Navigation loop - controls the robot rotation speed in order to balance the left and right optical flow.

2. Velocity loop - controls the robot forward speed by accelerating/decelerating as a function of the amplitude of the lateral flow fields (Robee accelerates if the lateral flow is small, meaning that the walls are far away, and slows down whenever the flow becomes larger, meaning that it is navigating in a narrow environment).

Additionally, a sustaining mechanism is implicitly embodied in the control loops to avoid erratic behaviors of the robot as a consequence of localized (in space and time) absence of flow information. These aspects and the analysis of the different control loops are presented in the following sections.

4.1 Navigation Loop

The analysis of the control loops is based on simplified dynamical models of the control-chain components and on the use of linear systems theory (a more accurate control analysis/synthesis is under way). The robot heading direction (controlled variable) is controlled by applying a rotation speed (control variable) superimposed on the forward motion. The simplest dynamic model of the system must account for two important terms: an integral term relating the input angular speed and the output angular position, and the mechanical inertia of the system, which is modeled by a first-order dynamic system. In continuous time, the transfer function relating the control and controlled variables, according to our simple model, is given by:

G_c(s) = \frac{a}{s(s + a)}   (19)

where a is the dominant low-frequency mechanical pole. Since we are using digital control, we must determine how the computer (where the control algorithm is implemented) "sees" the system. As a sample-and-hold mechanism is being used, the step-invariant method is appropriate to determine the discretized system [16]. Considering the low-frequency pole at 5 Hz and a sampling period of 0.7 s, we obtain:

G_d(z) = \frac{0.468\,(z + 0.0318)}{z^2 - (1 + 1.51\cdot 10^{-7})\,z + 1.51\cdot 10^{-7}}   (20)
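For illustration, the step-invariant (zero-order-hold) discretization can be reproduced with standard tools; this sketch is not the authors' code, and the pole value is an assumption (the text's "5 Hz" could also be read as a value in rad/s), so the coefficients printed in equation (20) need not be matched exactly:

    import numpy as np
    from scipy import signal

    a = 2.0 * np.pi * 5.0   # assumed dominant mechanical pole
    dt = 0.7                # sampling period of the visual loop [s]

    num = [a]               # G_c(s) = a / (s^2 + a s), eq. (19)
    den = [1.0, a, 0.0]

    num_d, den_d, _ = signal.cont2discrete((num, den), dt, method='zoh')
    print("G_d(z) numerator:  ", np.squeeze(num_d))
    print("G_d(z) denominator:", den_d)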


Again for simplicity, we may assume that the difference between the left and right flow vectors provides the error between the robot direction and the direction of the lateral walls, with a delay of one sampling period introduced by the flow computation. This approximation is only valid for small course deviations, since the visual process is, in fact, nonlinear. The sensor model is then given by:

e(k) = \theta_{walls}(k - 1) - \theta_{robot}(k - 1)   (21)

Qualitatively, e(k) is positive if the left-side flow is larger than the right-side flow, which means that the left wall is closer than the right one. Hence, the appropriate control action is to turn to the right. The discrete-time PID controller that is used to close the navigation loop implements the following control law:

u(k) = K_p \left[ e(k) + K_i \sum_{n} e(k - n) + K_d \left( e(k) - e(k - 1) \right) \right]   (22)

where u is the control variable (rotation speed in degrees/s) and e(k) stands for the error signal observed at time instant k. The transfer function corresponding to the PID is given by:

G_{PID}(z) = K_p\, \frac{(K_i + K_d + 1)\,z^2 - (1 + 2K_d)\,z + K_d}{z(z - 1)}   (23)

By connecting all these models, we obtain a linear feedback loop as shown in Figure 5.
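A minimal sketch of how the control law of equation (22) could be implemented is given below; it is not the authors' code, the gain values are only examples, and the sign convention (positive command = turn right) is an assumption:

    class FlowPID:
        def __init__(self, kp=1.5, ki=0.0, kd=1.2):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.err_sum = 0.0      # running sum of past errors, sum_n e(k-n)
            self.prev_err = 0.0     # e(k-1)

        def step(self, flow_left, flow_right):
            # e(k) > 0 when the left flow is larger (left wall closer) -> turn right
            e = flow_left - flow_right
            u = self.kp * (e + self.ki * self.err_sum + self.kd * (e - self.prev_err))
            self.err_sum += e
            self.prev_err = e
            return u                # rotation-speed command [deg/s]

    # usage: once per visual sample (about 1.5 Hz in the reported implementation)
    controller = FlowPID()
    omega = controller.step(flow_left=2.3, flow_right=1.8)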


Figure 5: Block diagram of a simple model of the robot navigation system (reference direction, divergent stereo sensor with one-sample delay z^{-1}, error e(k), PID controller, command u, mobile robot G_d(z), motion direction).

Although a thorough analysis of the system behavior with different control settings can hardly be done on the basis of approximate models, a discussion can still provide some insight into, and understanding of, the performance of several classes of controllers.

As a start, the loop could be closed simply by using a proportional gain (by setting the integral and derivative gains to zero). The root locus corresponding to this situation is shown in Figure 6. Even though the proportional controller succeeds in stabilizing the system, fast responses are only attainable with large gain values, which lead to significant oscillatory behavior as the dominant poles become complex conjugate.


Figure 6: Root locus corresponding to the proportional controller. The unit circle is shown as a dotted line for easier stability analysis.

By adding a derivative component to the controller (which basically works as a predictive term, thus coping better with the delay), one can expect to achieve faster responses even for smaller gain values. Figure 7 shows the root locus in this situation; the derivative component is fixed at K_d = 1.5. The effect of adding the extra zero is that the low-frequency pole is attracted toward higher frequencies, thus improving the response time of the system. Also, for large gains the oscillatory behavior of the system is reduced.

Finally, the effect of inserting the integral component of the controller can be analyzed by considering the root locus shown in Figure 8. The parameters used are K_i = 0.1 and K_d = 1.5. In this last situation, adding a discrete integrating effect (an extra pole at z = 1) decreases the stability margin of the system, as well as the response speed. In practice, the system may even become unstable due to various unmodeled components or perturbations, such as the nonlinearities in the visual processing.

This simplified analysis allowed us to gain important insight and understanding of the behavior of the system under the different controller topologies. Further modeling is still necessary for a quantitative analysis or a more systematic control synthesis. Nevertheless, some of the ideas discussed in this section were later verified in the experiments, as described in Section 5.
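The qualitative statements above can also be checked numerically; the sketch below (not from the paper) computes the closed-loop poles of the loop in Figure 5 for a few proportional gains, using the PID transfer function of equation (23), the plant as printed in equation (20), and the one-sample sensor delay:

    import numpy as np

    # plant of eq. (20), with an extra pole at z = 0 for the flow-computation delay
    g_num = 0.468 * np.array([1.0, 0.0318])
    g_den = np.polymul([1.0, -(1.0 + 1.51e-7), 1.51e-7], [1.0, 0.0])

    def closed_loop_poles(kp, ki=0.0, kd=0.0):
        # PID of eq. (23): Kp[(Ki+Kd+1)z^2 - (1+2Kd)z + Kd] / (z(z-1))
        c_num = kp * np.array([ki + kd + 1.0, -(1.0 + 2.0 * kd), kd])
        c_den = np.array([1.0, -1.0, 0.0])
        # characteristic polynomial of the unity-feedback loop: 1 + C(z)G(z) = 0
        char = np.polyadd(np.polymul(c_den, g_den), np.polymul(c_num, g_num))
        return np.roots(char)

    for kp in (0.5, 1.5, 3.0):
        poles = closed_loop_poles(kp)
        stable = np.abs(poles).max() < 1.0
        print(kp, "stable" if stable else "unstable", np.round(poles, 3))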


Figure 7: Root locus corresponding to the proportional-derivative controller. The derivative gain was fixed at Kd = 1.5.


Figure 8: Root-Locus corresponding to the PID controller.


4.2 Velocity Control

This section presents the strategy proposed to control the robot forward speed based on the environment structure. The rationale is that, if the robot is navigating in a narrow environment, it is safer to decrease the forward speed, whereas if it is moving in a wide, clear area it is reasonable to increase its speed. The mean flow vector on each side of the robot gives a qualitative measurement of depth (qualitative in the sense that it is not a metric measure of depth and, moreover, depends on the robot speed). By averaging these bilateral mean flow vectors, a qualitative measurement of width is obtained. The control objective amounts to keeping the average flow close to some specified reference value: if the flow increases, the robot should slow down, thus reducing the observed flow. This behavior, which can be implemented within the purposive approach described in this paper, is not only coherent from the perceptual viewpoint (it agrees with what a human driver, for example, would do) but also increases safety. Qualitatively, this corresponds to saying that the size of the environment is scaled by the robot speed.

Let T_o be the nominal speed at which the robot should move, and let f_o be the corresponding nominal flow. For safety purposes, the robot speed T is constrained to the interval [(1 - β)T_o, (1 + β)T_o], where β ∈ [0, 1] quantifies the permissible excursion. A sigmoid function is used as a smooth saturation, as shown in Figure 9.


Figure 9: The sigmoid function used in the velocity control loop describes how the robot forward speed should change in relation to the measured flow. It is used to introduce a smooth saturation on the robot velocity.

The velocity to be applied to the robot is given by:

T = T_o \left[ 1 - \beta + \frac{2\beta}{1 + e^{\lambda (f - f_o)}} \right]   (24)

where f is the average between the left and right flows, and λ determines how fast the speed should vary with respect to the flow variation. To determine λ, one can require, for example, that 90% of the total velocity excursion be reached for a relative variation of δ around the nominal flow. Using equation (24), λ is then given by:

\lambda = \frac{\ln 19}{\delta\, f_o}   (25)

In the current implementation we have used β = 0.5, δ = 0.3, and a reference flow of f_o = 2.0 pixels/frame, yielding λ ≈ 4.95.
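A minimal sketch of the velocity law of equations (24)-(25) is given below (not the authors' code; variable names follow the reconstruction above):

    import math

    T_o   = 80.0    # nominal forward speed [mm/s]
    beta  = 0.5     # permissible relative speed excursion
    delta = 0.3     # relative flow variation giving 90% of the excursion
    f_o   = 2.0     # reference (nominal) average flow [pixels/frame]

    lam = math.log(19.0) / (delta * f_o)   # eq. (25), about 4.9 for these values

    def forward_speed(f_avg):
        """Map the mean of the left/right average flows to a forward speed, eq. (24)."""
        return T_o * (1.0 - beta + 2.0 * beta / (1.0 + math.exp(lam * (f_avg - f_o))))

    print(forward_speed(2.0))   # nominal flow -> nominal speed (~80 mm/s)
    print(forward_speed(3.0))   # larger flow (narrow surroundings) -> slower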

4.3 "Sustained" behavior

The navigation system described in the previous sections allows the robot to navigate by balancing the flow measurements on the left and right sides. Therefore, it can only be applied as long as there is texture on both sides of a corridor-like environment. This situation is not entirely satisfactory for two reasons:

- It would be nice if the reactive behavior of Robee could be used in environments far more complex than corridors.

- In most mission scenarios, Robee would almost certainly encounter environments with "walls" not uniformly covered with texture. This would cause an illusory perception of infinite distance and, therefore, elicit inappropriate behaviors (for example, if an open door is found while traveling along a corridor).

To overcome these problems, we have introduced into the control strategy a mechanism able to cope with a unilateral lack of flow information. Whenever this occurs, the control system uses a reference flow that should be sustained on the "seeing" camera (i.e. the camera still measuring reliable flow). The mechanism monitors whether significant flow is being measured on both sides of the vehicle, or whether just one (or none) of the cameras is capturing significant flow. Three situations may arise (a minimal illustrative sketch of this mode selection, together with the sustaining filter, is given at the end of this section):

- Bilateral flow - Optical flow is measured by both cameras. Consequently, the robot is locally navigating in a corridor, and the standard navigation strategy can be applied.

- Unilateral flow - Only one camera is capturing flow information. Without the sustaining mechanism, the robot would simply turn towards the side without flow measurements, trying to balance the lateral flows. A more appropriate behavior, instead, is to keep the unilateral flow constant, hence following the ipsilateral wall at a fixed distance. With such a strategy, the robot can cross corridors with open doors, even with partially untextured walls, and, when arriving in a room, follow the walls at a fixed distance.

- Blind - If neither camera is capturing flow information, the robot is virtually blind. The robot should either stop, move straight ahead for a while, or even wander in a random exploratory way until some texture is found again. Currently, robee stops if this situation arises.

Let us now analyze how the sustaining mechanism is implemented, without any prior knowledge of the environment. During normal operation, the reference flows are estimated by filtering over time each of the lateral mean flow vectors. The time filtering takes into account the number of vectors that contributed to the mean computation:

\bar{f}(t) = \frac{\gamma\,\bar{n}(t-1)\,\bar{f}(t-1) + (1-\gamma)\,n(t)\,f(t)}{\gamma\,\bar{n}(t-1) + (1-\gamma)\,n(t)}   (26)

\bar{n}(t) = \gamma\,\bar{n}(t-1) + (1-\gamma)\,n(t)   (27)

where f(t) is the mean flow computed at time t from n(t) flow vectors, \bar{f}(t) and \bar{n}(t) are the corresponding time-filtered values, and γ ∈ [0, 1] is a time decay constant. Functionally, γ determines how much past flow information should be "remembered" in the filtering process. The system enters the sustaining mode whenever it is unable to estimate reliable flow vectors on one side. In the experiments, γ was set to 0.6, which agrees with the parameter used for temporal smoothing.

As a final remark, we would like to stress that the proposed approach considerably extends the performance of the "reactive" behavior, and that using both the corridor-following and the wall-following behaviors in task-driven navigation is straightforward. In particular, two potential applications are worth mentioning:

- The "reactive" control of robee can be used to acquire information about the environmental structure. In fact, odometric information, coupled with the sensory information of robee, would allow building a map of the environment in terms of corridors and walls, which could be used by a planning system to drive navigation at a higher level.

- The planner of a robot moving in a known (but variable) environment could take advantage of "task-level" commands like "navigate to the end of the corridor" or "follow a wall", without relying on geometric information which, ultimately, may be difficult to acquire and maintain in realistic environments.
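The sketch below (not the authors' code) illustrates the mode selection among the three situations listed above and the count-weighted filtering of equations (26)-(27); the reliability threshold and all names are assumptions, with γ = 0.6 as reported:

    class SustainedFlow:
        def __init__(self, gamma=0.6):
            self.gamma = gamma
            self.f_bar = 0.0   # time-filtered mean flow, f-bar(t)
            self.n_bar = 0.0   # time-filtered vector count, n-bar(t)

        def update(self, f_mean, n_vectors):
            """Count-weighted recursive update of the reference flow, eqs. (26)-(27)."""
            g = self.gamma
            num = g * self.n_bar * self.f_bar + (1.0 - g) * n_vectors * f_mean
            den = g * self.n_bar + (1.0 - g) * n_vectors
            if den > 0.0:
                self.f_bar = num / den                            # eq. (26)
            self.n_bar = g * self.n_bar + (1.0 - g) * n_vectors   # eq. (27)
            return self.f_bar

    def select_mode(n_left, n_right, min_vectors=5):
        """Return 'bilateral', 'unilateral' or 'blind' from the per-side vector counts."""
        left_ok, right_ok = n_left >= min_vectors, n_right >= min_vectors
        if left_ok and right_ok:
            return "bilateral"    # balance left/right flows (centering reflex)
        if left_ok or right_ok:
            return "unilateral"   # sustain the reference flow on the seeing side
        return "blind"            # no reliable flow: the current implementation stops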

5 Results

This section presents a set of tests that demonstrate several applications of the Divergent Stereo navigation approach. The goal of the experiments is to study the closed-loop behavior of the visually guided robot. In order to test the performance in a wide range of environmental situations and to analyze in more detail the influence of the different controller settings, three experimental situations of increasing difficulty have been considered. A final experiment illustrates the sustaining mechanism together with the velocity control loop. In all the results presented, the trajectory of the robot was recorded from odometric data during real-time experiments.

5.1 Turn Experiment

In the first set of experiments, the robot was tested in a turning-corridor setup, as shown in Figure 10.


Figure 10: Setup used for the turn experiment. The vehicle is supposed to perform the turn using the divergent stereo navigation strategy.

The whole navigation system has been tested with different controller settings. For these experiments, the integral gain K_i was fixed to zero, as suggested by the discussion on the controller design. In the first three experiments, the influence of the derivative gain on the navigation performance was studied. Figure 11 shows the trajectories followed by the robot with a fixed proportional gain, K_p = 1.5, and using for K_d the values 1.0, 1.2 and 1.4. The trajectories were recorded using the odometry information and are shown superimposed on the experimental setup.

To test the effect of the proportional gain on the controller, another set of trials was performed. The integral and derivative gains were kept constant (K_i = 0 and K_d = 1.2), while using for K_p the values 1.25, 1.5 and 1.75. The results are shown in Figure 12.

The analysis of the trajectories shows that, by increasing the value of K_d, the response becomes faster, even though the trajectories may become less smooth.


Figure 11: Results obtained in closed-loop operation in the turn experiment. The controller settings were Kp = 1.5, Ki = 0, using for Kd the values (from left to right) of 1.0, 1.2 and 1.4. The trajectory was recovered using odometry.


Figure 12: Results obtained in closed-loop operation in the turn experiment. The controller settings were Kd = 1.2, Ki = 0, using for Kp the values (from left to right) of 1.25, 1.5 and 1.75. The trajectory was recovered using odometry.


On the other hand, by increasing K_p the response also becomes faster, but the vehicle does not achieve reactions as fast as those obtained by increasing the derivative component.

5.2 Funnel

In another set of experiments, we used a funneled corridor with an obstacle. The setup is shown in Figure 13. When navigating in this environment, the robot must avoid the obstacle while trying to keep centered in the funneled corridor. This situation is particularly interesting because it forces the robot to react to sudden changes (the obstacle) as well as to smooth changes of the environment structure.


Figure 13: Setup used for the funneled corridor experiment. In this scenario, the robot has to avoid an obstacle while managing to keep its track in the funneled corridor.

Again, different settings of the PID controller were used in order to study the robot behavior. Figure 14 shows the trajectories corresponding to three of those experiments; the robot trajectory, recovered from odometry, is superimposed on the setup layout. The first trial is an experiment with the controller tuned with {K_p = 1.5, K_i = 0.0, K_d = 1.5}, which led to a good trajectory. In the second trial, the integral component was introduced, using {K_p = 1.5, K_i = 0.1, K_d = 1.5}. As suggested by the discussion in Section 4, an unstable behavior was observed, with the vehicle moving towards the left wall. Finally, in the third experiment a good performance was also obtained by slightly increasing the proportional gain while diminishing the derivative gain, {K_p = 1.8, K_i = 0.0, K_d = 1.2}. As expected, a somewhat smoother trajectory was obtained.

The tests performed underline the importance of suitable control system design and clearly show the value of the preceding discussion on modeling and controller design.


Figure 14: Funneled corridor experiment. The leftmost and rightmost plots show the results of increasing the proportional gain while decreasing the derivative gain. The center plot shows the unstable behavior due to the insertion of the integral action.

Even if further modeling is certainly necessary, it is worth noting that the behavior of the robot does not depend upon critical values of the controller settings; therefore, the robustness of the system is very promising.

5.3 Corridor

In this set of experiments, a corridor which is just slightly wider than the robot and has a sharp turn at the end was used (see Figure 15) to test the performance of the system on a combination of different environmental situations.


Figure 15: Setup used for the corridor experiment. The corridor is just slightly wider than the robot and has a tight turn at the end.

Also in this situation, several trials were performed, changing the parameters of the PID controller. Figure 16 shows the trajectories obtained by increasing the value of K_p. The derivative gain is fixed at K_d = 1.2, while the proportional gain increases from left to right (K_p = 1.2, 1.4, 1.6). The integral gain is not used.


Figure 16: Corridor experiment. The derivative gain is fixed at Kd = 1.2, while the proportional gain increases from left to right (Kp = 1.2, 1.4, 1.6). The integral gain is not used.

In the second set of trials, the proportional gain is set to K_p = 1.4 and the derivative gain is increased from left to right (K_d = 1.0, 1.2, 1.4). Again, the integral gain is not used. The trajectories are shown in Figure 17. The results show that increasing either the proportional or the derivative gain makes the response somewhat faster; in general, increasing the derivative gain leads to a faster behavior of the control system.

5.4 Velocity Control

In order to test the velocity control, the robot navigated through a funneled corridor whose width changes from 1.65 m to 1.25 m; the full length of the funneled corridor is about 2.25 m. Since the corridor becomes narrower, the average flow increases. With the velocity control mode enabled, the robot velocity decreases in order to keep a constant flow. Figure 18 shows the average between the left and right flows measured over time. The solid line was obtained without velocity control (showing increasing flow values), while the dotted line shows the action of the velocity control keeping the image flow close to the desired value of 2 pixels/frame.


Figure 17: Corridor experiment. The proportional gain is fixed at Kp = 1.4, while the derivative gain increases from left to right (Kd = 1.0, 1.2, 1.4).


Figure 18: The average between the left and right flows obtained in an experiment with (dotted line) and without (solid line) velocity control. The nominal flow is 2.0 pixels/frame and the nominal velocity is 80 mm/s.

The evolution of the robot velocity along the path is shown in Figure 19. As the corridor narrows, the velocity decreases in order to keep the flow at a lower, safer value. Other experiments were performed using the proposed strategy. In particular, it is interesting to see that, during the corridor experiment, the final turn is performed at a reduced speed, enabling the robot to make a softer, safer turn.


Figure 19: The robot speed in mm/s during the funneled corridor experiment. As the corridor narrows, the velocity also decreases for safer operation.

5.5 Sustained behavior

This experiment was designed to show the influence of the sustaining mechanism on the robot behavior and how it can deal with different environment structures. The first experiment used a corridor which lacks texture on the right side and whose left wall suddenly ends in a room. Figure 20 shows the trajectories obtained by activating (center) and de-activating (left) the sustaining mechanism. In the latter case, the robot turns left trying to balance both lateral flows, while in the former a reference flow value is sustained, leading the robot to follow the right wall.


Figure 20: Sustained mechanism experiment. The left diagram corresponds to the behavior of the robot without the sustaining mechanism. In the center, due to the sustaining behavior, the robot manages to follow the right wall. The rightmost diagram corresponds to an experiment along the corridor with an open door on the left and a lack of texture on the right.

Another experiment was made using the corridor setup, where some texture was removed from the corridor walls and a door, roughly located at the corridor center, was left open. The results are documented in Figure 20.

To conclude, it seems reasonable to claim that the proposed approach, in both its control and visual-perception facets, led to good results and proved the feasibility of a navigation system based on these principles. Furthermore, it should be noticed that, with the introduction of the sustained behavior, the robot is able to navigate in a much wider set of environments: in fact, only one textured wall is needed for the navigation strategy to work. It is also worth noting that the design principles adopted to eliminate the influence of the rotational component of the optical flow proved successful in our experimental conditions; therefore, even if a stabilization mechanism is desirable or even mandatory in other situations, it is not strictly necessary to produce the kind of behaviors described in this paper.

6 Conclusions

A qualitative approach to visually guided navigation based on optical flow has been presented, motivated by studies and experiments performed on freely flying honeybees. The approach is based on the use of two cameras mounted on a mobile robot with the optical axes directed in opposite directions, such that the two visual fields do not overlap (Divergent Stereo). Range is perceived by computing the apparent image speed on images acquired during robot motion. A real-time computation of optical flow has been presented, based upon the constraints imposed by the geometry of the cameras and by the navigation strategy. Furthermore, some suggestions have been given on how to select the design variables in order to make the disturbances due to the rotational motion irrelevant to the reactive behaviors described in the paper.

A PID controller was used to close the visuo-motor control loop. The closed-loop behavior was studied based on models of the different control-system components, and the analysis of the control system design led to a suitable configuration for the PID controller. The approach has been tested in real-time experiments accomplishing different navigation tasks, such as performing a tight turn or navigating through a funneled corridor. The influence of the control parameters on the system behavior was studied, and the results confirmed, within the assumptions made, the discussion on the control system design. A controller for the robot forward velocity was also studied and implemented; experiments showed the improvement achievable by including this control loop in cluttered environments.

Finally, through the insertion of a sustained behavior, the robot is able to operate in environments rather more complex than a simple corridor, showing the capability of operating in

sparsely textured corridors and of following unilaterally textured walls. All the experiments were performed without the need for accurate depth or motion estimation, nor did they require a calibration procedure (besides the manual positioning of the two cameras). The main features of Robee can be summarized as follows:

- Purposive definition of the sensory apparatus and the associated processing. The approach proposed, in fact, cannot be considered general but, with limited complexity, it solves a relevant problem in navigation: the control of the heading direction in a cluttered environment.

- Use of qualitative and direct visual measures. In our opinion this is not only a "religious" issue but, more importantly, a way to achieve reasonable autonomy with limited computational power. Successful examples of this approach have recently appeared in the literature, both with respect to reflex-like behaviors for obstacle avoidance [17, 18, 19, 20] and in relation to more "global" measures of purposive navigation parameters [21].

- Continuous use of visual measures. A further aspect worth mentioning is the attempt to develop a sensory system providing a continuous stream of environmental information. A first advantage is the increased robustness implicit in the use of repeated measures (no single mistake produces catastrophic errors); a secondary, and potentially more important, advantage is the possibility of implementing sensory-motor strategies in which the need for continuous motor control is not bounded by an "intermittent" flow of sensory information. This paradigm is, in our opinion, a non-trivial evolution of some active vision implementations where the motion of the (active) observer is seen "only" as a way of taking advantage of the stability of the environment (e.g. by moving the vehicle along pre-programmed, known trajectories to reduce uncertainty). The use of vision during action [22], on the contrary, may be a very powerful extension of the concept of the active observer, exploiting dynamic visual information not only at the "reflexive" level of motor control.

- Simplicity. This feature is often regarded as an engineering and implementation aspect and, as such, is not explicitly considered a scientific issue. This view, in our opinion, must change if reasonable applications of computer vision are to be addressed. The issue of simplicity, however, should not be considered only within specific aspects of an intelligent actor's design (such as sensory systems, mechanical design, computational architecture, etc.) but must be considered at the system level. Robee is an example of such a holistic view of simplicity, where the purpose is achieved by a comprehensive analysis and integration of visual processing, sensor design, sensor placement, control law, and vehicle structure. In this respect, low-level animals (and insects in particular) are extremely interesting examples of "simple" actors in which all engineering aspects are mixed, exploiting not only "computational" issues but, more importantly, the cooperation of "intelligent" solutions which, if considered separately, may look like mere implementational tricks but, acting together, produce intelligent behaviors.

Before concluding, it is worth mentioning that, at its current level of implementation, robee is not entirely satisfactory (even within the intrinsic limitations of a reflex-like control), because it is blind in the direction of motion (it will bump into an obstacle placed just in front of it). On the other hand, we are currently using only a very limited portion of the visual field. The use of a more frontal part of the visual field is currently being investigated to extract other motion-derived measures (e.g. time-to-crash) which could help robee not only to implement the behaviors described in this paper but also to control the docking speed.

References

[1] N. Franceschini, J. Pichon, and C. Blanes. Real time visuomotor control: from flies to robots. In Fifth Int. Conference on Advanced Robotics, Pisa, Italy, June 1991.
[2] D. Coombs and K. Roberts. Centering behaviour using peripheral vision. In D.P. Casasent, editor, Intelligent Robots and Computer Vision XI: Algorithms, Techniques and Active Vision, pages 714-721. SPIE, Vol. 1825, Nov. 1992.
[3] G. Sandini, J. Santos-Victor, F. Curotto, and S. Garibaldi. Robotic bees. Technical report, LIRA-Lab, University of Genova, October 1992.
[4] J. Santos-Victor, G. Sandini, F. Curotto, and S. Garibaldi. Divergent stereo for robot navigation: Learning from bees. In Proc. CVPR-93, New York, U.S.A., 1993.
[5] M.V. Srinivasan, M. Lehrer, W.H. Kirchner, and S.W. Zhang. Range perception through apparent image speed in freely flying honeybees. Visual Neuroscience, 6:519-535, 1991.
[6] R.K. Bajcsy. Active perception vs passive perception. In Proc. Third IEEE Workshop on Computer Vision: Representation and Control, pages 13-16, Bellaire (MI), 1985.
[7] D.H. Ballard, R.C. Nelson, and B. Yamauchi. Animate vision. Optics News, 15(5):17-25, 1989.
[8] J. Aloimonos. Purposive and qualitative active vision. In Proc. of Int. Workshop on Active Control in Visual Perception, Antibes (France), 1990.
[9] G. Horridge. The evolution of visual processing and the construction of seeing systems. Proc. Royal Soc. London, pages 279-292, 1987.
[10] M. Lehrer, M.V. Srinivasan, S.W. Zhang, and G.A. Horridge. Motion cues provide the bee's visual world with a third dimension. Nature, 332(6162):356-357, 1988.
[11] M.V. Srinivasan. Distance perception in insects. Current Directions in Psychological Science, 1:22-26, 1992.
[12] B.K.P. Horn and B.G. Schunck. Determining optical flow. Artificial Intelligence, 17(1-3):185-204, 1981.
[13] H. Nagel. On the estimation of optical flow: Relations between different approaches and some new results. Artificial Intelligence, 33:299-323, 1987.
[14] S. Uras, F. Girosi, A. Verri, and V. Torre. Computational approach to motion perception. Biological Cybernetics, 60:69-87, 1988.
[15] E. DeMicheli, G. Sandini, M. Tistarelli, and V. Torre. Estimation of visual motion and 3D motion parameters from singular points. In Proc. of IEEE Int. Workshop on Intelligent Robots and Systems, Tokyo, Japan, 1988.
[16] K. Astrom and B. Wittenmark. Computer Controlled Systems: Theory and Design. Prentice-Hall, 1986.
[17] G. Sandini and M. Tistarelli. Robust obstacle detection using optical flow. In Proc. of IEEE Intl. Workshop on Robust Computer Vision, Seattle (WA), Oct. 1-3, 1990.
[18] W. Enkelmann. Obstacle detection by evaluation of optical flow fields from image sequences. In Proc. of First European Conference on Computer Vision, pages 134-138, Antibes (France), 1990. Springer Verlag.
[19] F. Ferrari, M. Fossa, E. Grosso, M. Magrassi, and G. Sandini. A practical implementation of a multilevel architecture for vision-based navigation. In Proceedings of Fifth International Conference on Advanced Robotics, pages 1092-1098, Pisa, Italy, June 1991.
[20] M. Fossa, E. Grosso, F. Ferrari, G. Sandini, and M. Zapendouski. A visually guided mobile robot acting in indoor environments. In Proc. of IEEE Workshop on Applications of Computer Vision, Palm Springs, U.S.A., 1992.
[21] C. Fermuller. Navigation preliminaries. In Y. Aloimonos, editor, Active Perception, pages 103-150. Lawrence Erlbaum Associates, 1993.
[22] G. Sandini, F. Gandolfo, E. Grosso, and M. Tistarelli. Vision during action. In Y. Aloimonos, editor, Active Perception, pages 151-190. Lawrence Erlbaum Associates, 1993.
