

Autonomous Vision-Based Navigation of a Quadrotor in Corridor-Like Environments

Jishnu Keshavan, Greg Gremillion, Hector Alvarez-Escobar and James Sean Humbert
Autonomous Vehicles Laboratory, Department of Aerospace Engineering, University of Maryland, College Park 20742, USA
Email address: [email protected]

ABSTRACT
Vision-based bio-inspired control strategies offer great promise in demonstrating safe autonomous navigation of aerial microsystems in completely unstructured environments. This paper presents an innovative navigation technique that embeds bio-inspired wide-field processing of instantaneous optic flow patterns within the H∞ loop shaping synthesis framework, resulting in a dynamic controller that enables robust stabilization and command tracking behavior in obstacle-laden environments. The local environment is parameterized as a series of simpler corridor-like environments in the optic flow model, and the loop shaping controller is synthesized to provide robust stability across the range of modeled environments. Experimental validation is provided using a quadrotor aerial vehicle across environments with large variation in local structure, with the loop shaping controller demonstrating better tracking performance than other comparable controllers in straight-line corridors of different widths. The current approach is computationally efficient, as it does not involve explicit extraction of an environment depth map, and makes for an attractive paradigm for aerial microsystem navigation in urban environments.

Nomenclature
x        Vehicle state
xref     Reference state
u        Control input
y        Optic flow output, rad/s
w        Measurement noise
A        State matrix
B        Control matrix
C        Output matrix
Cm       Mean output matrix
∆C       Perturbed output matrix
G        Open loop plant transfer function
∆G       Open loop uncertainty plant transfer function
Gs       Shaped plant transfer function
∆Gs      Shaped uncertainty transfer function
Gp       Perturbed plant transfer function
Ml, Nl   Normalized coprime factors
M        Closed loop nominal plant transfer function
∆        Closed loop uncertainty transfer function
∆M, ∆N   Coprime factors of perturbed plant transfer function
KLS      H∞ loop shaping controller
Kµ       µ-theory based controller
Ks       Static gain



Kr, Kϕ, Kθ, Kϕv, Kθu   Controller gains, internal to the avionics board
x            Longitudinal offset, m
y            Lateral offset, m
x1, x2, x3   Internal states of the vehicle
u0           Set forward velocity, m/s
ẏ            Lateral velocity in inertial frame, m/s
v            Lateral velocity, m/s
ϕ            Roll offset, rad
θ            Pitch offset, rad
ψ            Yaw offset, rad
r            Yaw rate, rad/s
ur           Reference forward velocity, m/s
vr           Reference lateral velocity, m/s
rr           Reference yaw rate, rad/s
Q̇            Optic flow, rad/s
q            Vehicle pose
γ            Azimuthal location, rad
d            Radial distance to obstacles in the environment, m
n            Nearness, 1/m
aE,W         Corridor half-width, m

1. INTRODUCTION
The ability to sense and react to clutter within the payload and bandwidth constraints imposed by aerial microsystems is the primary requirement for demonstrating safe autonomous navigation in unstructured environments. This rules out the use of laser rangefinders and GPS-IMU based estimates of velocity and proximity to obstacles in the surrounding environment [1,2], as well as machine-vision based approaches that extract vehicle pose from camera imagery [3,4]. A safe control strategy usually relies on the synthesis of a flight controller that tracks the instantaneous obstacle-symmetric reference trajectory as the vehicle traverses the local environment. In unknown and unstructured environments, implementation becomes difficult, requiring the resolution of the twin issues of local depth map extraction and accurate motion-state estimation at bandwidths suitable for aerial navigation. Thus, most prior efforts have typically considered the simpler problem of navigation in known, well-structured environments [3-12], although onboard implementations of the coupled structure-motion state estimation problem in cluttered, urban environments exist [13,2].

Bio-inspired vision-based control strategies offer great promise in demonstrating autonomous navigation in completely unstructured environments. Insects leverage the information contained in optic flow, the patterns of visual motion that form on the retina as they move, by employing wide-field interneurons that transform large numbers of distributed, local estimates of optic flow into a reduced number of motor commands that stabilize and regulate flight behavior without explicit extraction of the local depth map in cluttered environments [14-18]. The resulting closed loop has been shown to minimize asymmetry (imbalance and shift) in the global optic flow stimulus [19,20], providing an obstacle-symmetric reference trajectory that achieves safe navigation between obstacles. This approach offers an attractive alternate paradigm for microsystems with limited sensory and processing capabilities, and has been demonstrated in several instances [21-36].

The objective of the current study is the experimental demonstration of a visual navigation technique that ensures safe centering and obstacle avoidance behavior by combining bio-inspired information extraction from optic flow with the H∞ controller synthesis framework in environments with varying structure. The reasons for the selection of the current navigation strategy are threefold. First, the strategy leverages control-theoretic tools for the synthesis of closed loop systems with theoretically justified robustness guarantees, in contrast with most prior approaches that have employed empirical control strategies for demonstrating safe navigation in both unstructured and well-structured environments [24-27,37]. Second, as the loop shaping controller is written in exact observer form, the current approach is also used for accurate pose estimation in unstructured environments without resorting to the explicit extraction of a detailed depth map of the local environment, which is a departure from prior studies [38,39]. Third, the approach is computationally efficient as it emulates



the front-end information extraction strategy prevalent in the insect visuomotor system [40]. Furthermore, the controller synthesis framework allows for large variation in environment structure, accounting for which is shown to be crucial in achieving safe center-line tracking behavior. A performance comparison with two different controllers from earlier studies [41,21-23] - a µ-theory based dynamic controller and a static controller - is undertaken, and the loop shaping controller is shown to demonstrate better tracking across environments with large variation in local structure.

The paper is outlined as follows. Section 2 provides a complete description of the vehicle configuration and flight dynamics, including details of the sensory front end. Section 3 describes the modeling of environments with variable structure, followed by the application of wide-field integration techniques for information extraction from optic flow. Section 4 outlines the loop shaping framework for H∞ controller synthesis for achieving safe reflexive obstacle avoidance in unstructured environments, which is followed by experimental validation on a quadrotor in Section 5. Conclusions follow in Section 6.

2. SYSTEM DESCRIPTION
In this section, details of the vehicle setup for the experimental studies that follow are presented, including a description of the mathematical model used to represent the vehicle's lateral-directional dynamics and the vision sensor hardware.

2.1 Vehicle Setup and Dynamics
The vehicle chosen was the DJI Flamewheel quadrotor aerial vehicle with a gross weight of 1012.7 g and a span of 57.5 cm (Figure 1). The onboard avionics package includes rate gyros, accelerometers, a magnetometer and control algorithms for attitude and rate tracking. Altitude regulation was accomplished using an accelerometer and a downward facing sonar located on the underside of the vehicle, which allowed the vehicle motion to remain planar. Longitudinal and lateral velocity tracking was accomplished with the aid of a downward looking ADNS 380 optic flow sensor, also located on the underside of the vehicle. Outer loop control commands, necessary for planar navigation, were generated with the aid of the Eyestrip optic flow sensor ring from Centeye, Inc.™, which provided a 360 degree field of view. A Spektrum transmitter enabled wireless transfer of the control commands to the onboard avionics board.

The control architecture with the attitude tracking and visual navigation loops is shown in Figure 2. ϕ, θ represent the roll and pitch orientation of the vehicle, u, v represent the forward and lateral velocity respectively, and r is the yaw rate. The gains Kr, Kϕ, Kθ, Kϕv, Kθu are adjustable and internal to the avionics board. The forward velocity was set by a commanded reference, which was fixed during testing so that the vehicle attained a constant forward speed of u0 = 0.5 m/s. The vehicle's reference altitude was set to 0.5 m. System identification tests, similar to the procedure described in [42], were performed to obtain a linear model of the vehicle's lateral-directional dynamics. The vehicle state is given by x = {y, x1, x2, v, ψ, x3, r}^T, with x1, x2, x3 being internal states of the system. The vehicle dynamics and kinematics are linearized about the reference flight condition xref = {0, 0, 0, 0, 0, 0, 0}^T for the purpose of controller synthesis.

(1)

The linearized dynamics are then written as ẋ = Ax + Bu, with u = {vr, rr}^T being the reference velocity commands that actuate the lateral and yaw dynamics respectively. For convenience, the outer-loop control computations were implemented offboard in LabVIEW 2011 and were executed at approximately 50 Hz. It is important to note that all sensing used to close the inner loops was accomplished onboard the quadrotor, and no external sensing was used for outer-loop feedback control.

2.2 Vision sensing and computation
Omni-directional vision sensing and processing was accomplished using the Eyestrip, an 8-sensor optic flow ring that interfaces with the GINA Mote [43] (MSP430 1 MHz microcontroller) to generate optic flow covering the 360 degree field of view. Each sensor captured images with 64 x 64 resolution at



approximately 100 frames/s, and the embedded software converted the images to grayscale. A two-dimensional optic flow estimate was computed at every pixel by tracking the movement of successive grayscale image frames using the Lucas-Kanade algorithm [44]. Computing the dot product of the flow vector along the sensor azimuth and spatially averaging over blocks of 64 x 8 target pixels resulted in 64 discrete one-dimensional optic flow estimates that determined the motion field around the azimuth of the vehicle. A minimal sketch of this projection and averaging step is given after Figure 2.

3. WIDE-FIELD INTEGRATION OF PLANAR OPTIC FLOW
In this section, a brief summary of the technique of wide-field integration (WFI) is presented for the extraction of relevant motion cues for navigation in unstructured, urban environments. The structure of the insect visuomotor pathway provides inspiration for using this technique for processing instantaneous optic flow patterns from planar imaging surfaces. An optic flow model is developed based on a set of expected 2D environments. Small perturbation theory is then invoked to linearize the optic flow output, which is a function of the vehicle's relative velocity, pose and proximity with respect to the parameterized environments.

Figure 1. Schematic diagram of quadrotor components

Figure 2. Control loop architecture. Onboard gyro, accelerometer and magnetometer enable inner-loop attitude control and onboard optic flow based outputs close outer loops for navigation.
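As an illustration of the sensor processing described in Section 2.2, the following is a minimal sketch (not the flight code; the array shapes, variable names and calibration assumptions are illustrative only) of projecting the per-pixel flow onto the sensor azimuth and block-averaging it into 64 one-dimensional estimates:

```python
import numpy as np

# Illustrative sketch of the azimuthal projection and block averaging of Section 2.2.
# Array shapes, names and the stitched sensor layout are assumptions, not the flight code.

def tangential_flow_estimates(flow, gamma):
    """
    flow  : (H, W, 2) per-pixel 2-D optic flow from Lucas-Kanade (assumed calibrated to rad/s).
    gamma : (W,) azimuthal viewing angle of each pixel column, rad.
    Returns 64 one-dimensional (tangential) flow estimates around the azimuth.
    """
    H, W, _ = flow.shape
    # Unit tangent along the azimuth for each pixel column (planar imaging ring).
    tangent = np.stack([-np.sin(gamma), np.cos(gamma)], axis=-1)      # (W, 2)
    # Dot product of each flow vector with the local azimuthal tangent direction.
    tangential = np.einsum('hwc,wc->hw', flow, tangent)               # (H, W)
    # Spatial average over blocks of H x (W // 64) pixels -> 64 estimates.
    blocks = tangential.reshape(H, 64, W // 64)
    return blocks.mean(axis=(0, 2))                                   # (64,)

# Example with random data standing in for one stitched ring image pair
# (8 sensors x 64 columns each = 512 columns, 64 rows; blocks of 64 x 8 pixels):
rng = np.random.default_rng(0)
flow = rng.normal(size=(64, 512, 2))
gamma = np.linspace(0.0, 2 * np.pi, 512, endpoint=False)
Qdot = tangential_flow_estimates(flow, gamma)   # 64 azimuthal flow samples
```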

Planar optic flow can be approximated as the relative velocity vector of material points in the surrounding environment projected into the tangential space of the circular imaging surface. In a stationary environment, it is a function of the observer's rotational and translational motion, together with relative proximity to surrounding objects. If the spatial distribution of objects in the environment is modeled as a continuous function of the body-referred viewing angle γ, the optic flow field can be



written as,

(2)

where n(γ, q) = [d(γ, q)]⁻¹ is the nearness function, d(γ, q) is the radial distance to the nearest point in the visual field at the viewing angle γ, and q = (x, y, ψ) is the vehicle pose with respect to the environment, as shown in Figure 3.
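For reference, a hedged sketch of a standard planar optic flow relation of the type given in equation (2), written in the notation of this paper; the exact signs depend on how γ and positive flow are measured and may differ from the paper's convention:

```latex
% Hedged sketch of a standard planar optic flow model (sign conventions assumed):
% rotation (yaw rate r) contributes uniformly around the azimuth, while translation
% (forward velocity u, lateral velocity v) is scaled by the nearness n(gamma, q).
\dot{Q}(\gamma, q) \;=\; -\,r \;+\; n(\gamma, q)\left[\, u \sin\gamma \;-\; v \cos\gamma \,\right]
```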

Figure 3. Planar coordinate definition for the generic corridor environment.
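Under the corridor geometry of Figure 3, and assuming y is measured from the centerline toward the east wall, the following is a hedged geometric sketch of the piecewise nearness function that equation (3) presumably formalizes:

```latex
% Geometric sketch of the corridor nearness function (axis and sign conventions assumed):
% the wall-relative viewing direction is gamma + psi, and each wall is a straight line
% offset a_E - y (east) or a_W + y (west) from the vehicle.
n(\gamma, q) \;=\;
\begin{cases}
  \dfrac{\cos(\gamma + \psi)}{a_E - y},  & \cos(\gamma + \psi) > 0 \quad \text{(east wall)},\\[2ex]
  \dfrac{-\cos(\gamma + \psi)}{a_W + y}, & \cos(\gamma + \psi) < 0 \quad \text{(west wall)}.
\end{cases}
```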

The nearness function is assumed to be a bounded and piecewise-continuous function with a finite number of discontinuities. A generic unstructured environment with obstacles situated laterally relative to the vehicle was treated as a generalization of a corridor-like environment with unequal half-widths, denoted by aE,W (Figure 3). This is a very general assumption that helps simplify the methodology required for ensuring safe navigation in unstructured environments. For flight in this environment, the optic flow field can be completely characterized in closed form, and the nearness function can be shown to be,

(3)

where the states y, ψ represent the lateral displacement and heading offset from the obstacle-symmetric reference trajectory respectively. As flight in a typical urban environment mostly consists of navigating past a laterally-situated obstacle field, parameterization of the environment structure as a family of simplified corridor-like environments is considered sufficient for regulating flight behavior. If the likely minimal and maximal wall clearances are known, the urban environment can then be assumed to be confined to the limits of the simplified environments. The present study is restricted to three specific environments: flight past an east-side obstacle, flight past a west-side obstacle, and flight past equidistant obstacles on both sides.

Optic flow outputs are generated by making a comparison between the preferred sensitivity pattern and the pattern of the visual stimulus, with the cells thus acting as matched filters [17], and can be mathematically modeled as an inner product on a discrete or a continuous spatial domain [45]. For motion restricted to a plane, the patterns are assumed to reside in L2[0, 2π], the space of square-integrable and piecewise-continuous functions, and optic flow outputs take the form,



(4)

Here, γ is the body-fixed viewing angle, F(γ) represents any square-integrable, piecewise-continuous weighting function such that equation (4) exists, and yi(x), the output resulting from the comparison, represents the decomposition of the motion field into state perturbations from the desired pattern. With the appropriate choice of weighting functions, equation (4) can be decomposed into simple representations of proximity and relative velocity with respect to obstacles in the environment. A more complete discussion of the choice of weighting functions for information extraction from optic flow patterns can be found in [23]. For the choice of weighting functions F(γ) = {(1/√π)cos γ, (1/√π)cos 2γ}, the optic flow outputs for planar motion are listed in Table 1.

Table 1. Planar optic flow outputs in a corridor (aE = aW = a)

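As an illustration of the wide-field integration in equation (4), the following is a minimal numerical sketch (not the authors' implementation; the discretization and quadrature rule are assumptions) that projects the 64 discrete azimuthal flow samples onto the cosine weighting functions quoted above:

```python
import numpy as np

# Hedged sketch of the wide-field integration (WFI) outputs of equation (4): an inner
# product of the measured azimuthal optic flow with cosine weighting functions.
# The 64-sample discretization and 1/sqrt(pi) scaling follow the text; the simple
# Riemann-sum quadrature used here is an assumption.

gamma = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)    # azimuthal sample angles, rad
dgamma = 2 * np.pi / 64

# Weighting functions F(gamma) = {cos(gamma)/sqrt(pi), cos(2*gamma)/sqrt(pi)}
F = np.stack([np.cos(gamma), np.cos(2 * gamma)]) / np.sqrt(np.pi)   # (2, 64)

def wfi_outputs(Qdot):
    """Qdot: (64,) optic flow samples around the azimuth, rad/s.
    Returns the two WFI outputs, i.e. the integral of Qdot(gamma)*F_i(gamma) over [0, 2*pi)."""
    return F @ Qdot * dgamma                                # (2,)

# Example: flow generated by a small lateral velocity perturbation in a centered
# corridor of half-width a (using the hedged optic flow and nearness sketches above).
a, u0, v, r = 0.5, 0.5, 0.1, 0.0
n = np.abs(np.cos(gamma)) / a                               # centered corridor, psi = 0, y = 0
Qdot = -r + n * (u0 * np.sin(gamma) - v * np.cos(gamma))
print(wfi_outputs(Qdot))
```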

Linearization

For small perturbations about the reference flight condition xref, the linearized optic flow outputs can then be obtained as y = Cx. Accounting for environment uncertainty and measurement noise, the observation equation becomes

y = (Cm + ∆C)x + w,    (5)

where y are the measured optic flow outputs and the noise w is zero mean, E{w} = 0, with known covariance E{ww^T} = Rw. The quantity ∆C is assumed to be a zero mean random perturbation, E{∆C} = 0, which captures the variation in the nearness function n(γ, q) from the mean Cm. Furthermore, it is assumed that E{w∆C^T} = 0. The quantity Cm is approximated as an unweighted average of the three cases described above,

(6)

where aE,W = 0.5 m defines the minimum nominal wall clearance or half-width of a corridor the vehicle is likely to encounter.

4. H∞ CONTROLLER SYNTHESIS
In this section, the design of the feedback controller is undertaken by embedding the WFI optic flow outputs, obtained in the preceding section, within the loop shaping framework for quadrotor stabilization and navigation in unstructured environments. This framework enables synthesis of a controller that balances the competing requirements of good tracking performance in the nominal environment and robust stability in environments with large variations in local structure. The flight controller was realized by applying H∞ synthesis to the plant composed of the vehicle's lateral-directional dynamics and the sensor output obtained by wide-field integration of optic flow. If the nominal open loop and uncertainty plant transfer functions are written as,

(7)

the loop shaping controller can then be cast in an exact observer form as [46],



Figure 4. Singular value plot of the closed loop incorporating uncertainty corresponding to the three environments discussed in Section 3.

(8)

where x̂ is the state estimate, u and y are the plant input and output respectively, and

(9)

Z and X are solutions of the uncoupled complementary Riccati equations (10)
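For illustration, a minimal numerical sketch of this synthesis step under the standard assumptions of the normalized coprime factor problem with D = 0; the matrices shown are hypothetical placeholders, not the identified vehicle model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hedged sketch of the H-infinity loop shaping (normalized coprime factor) Riccati step.
# With D = 0, the two uncoupled Riccati equations take the standard McFarlane-Glover forms:
#   A Z + Z A^T - Z C^T C Z + B B^T = 0    (filtering ARE, solution Z)
#   A^T X + X A - X B B^T X + C^T C = 0    (control ARE, solution X)
# The matrices below are hypothetical placeholders standing in for the shaped plant.

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(C.shape[0]))   # filtering ARE
X = solve_continuous_are(A,   B,   C.T @ C, np.eye(B.shape[1]))   # control ARE

# Maximal robust stability margin against coprime factor uncertainty:
#   eps_max = 1 / gamma_min,  gamma_min = sqrt(1 + rho(X Z))
gamma_min = np.sqrt(1.0 + np.max(np.abs(np.linalg.eigvals(X @ Z))))
eps_max = 1.0 / gamma_min
print(f"gamma_min = {gamma_min:.3f}, eps_max = {eps_max:.3f}")
```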

The central controller that robustly stabilizes the plant against uncertainties in environment structure is then given by,

(11)

The central controller provides an adequate stability margin (γ1 = 0.32), which should ensure good tracking performance in the nominal two-sided corridor environment. Robust stability to variation in environment structure can then be demonstrated by considering the normalized left coprime factorizations of the nominal and the perturbed plant,

G = Ml⁻¹Nl,    Gp = (Ml + ∆M)⁻¹(Nl + ∆N).    (12)

Here, ∆M and ∆N are stable transfer functions that represent the uncertainty ∆ = [∆M ∆N]^T in the nominal plant G. Robust stability can then be demonstrated if the small gain theorem is satisfied, which requires that

‖M∆‖∞ < 1,    (13)

where the blocks M and ∆ represent the nominal closed loop system and the influence of uncertainty respectively. For each of the limiting cases considered in Section 3, one can then numerically determine ∆M and ∆N from equation (12) and verify equation (13). Satisfaction of equation (13) for each limiting case then ensures robust stability for a more complex urban-like environment. The singular values of the loop gain M∆ for the closed loop incorporating uncertainty for the various limiting cases are shown




in Figure 4. As can be seen from the plot, robust stability is indeed achieved for the family of modeled environments. Thus, this approach leverages control-theoretic tools for the synthesis of closed loop systems with theoretically justified robustness guarantees for safe obstacle avoidance behavior in environments with large variations in local structure. Furthermore, the framework allows for accurate pose estimation without resorting to the explicit extraction of the depth map of the local environment.

5. VALIDATION
In this section, the results of the navigation experiments with quadrotor flight in different environments are presented. The objective was to validate the current bio-inspired navigation strategy and demonstrate safe autonomous navigation of the quadrotor in corridor-like environments. Additionally, the influence of environment structure on controller performance was considered through a performance comparison study with comparable controllers in corridor environments of different widths. The root mean square deviation (RMSD) of the vehicle trajectory about the reference centerline was taken as the performance measure, with lower values indicating better controller performance.

5.1 Results
For the purposes of validation, robust stabilization of the quadrotor by the H∞ controller was demonstrated in environments with significant variation in local structure. Two different straight-line corridors were considered: a nominal baseline environment of width 2 m (Figure 5A) and a partially two-sided corridor of width 2 m. Multiple trials with different lateral and orientation offsets were also considered. The flight tests were conducted under adequate lighting conditions, with the side walls of the environments having sufficient texture for optic flow patterns to be detected (Figure 5B).
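For clarity, a minimal sketch of the RMSD performance measure as described above (assuming the trajectory is sampled as lateral offsets y_k from the reference centerline; the sample values shown are hypothetical):

```python
import numpy as np

def rmsd_about_centerline(y_samples):
    """Root mean square deviation of sampled lateral offsets (m) from the reference
    centerline (taken as y = 0). Lower values indicate tighter tracking of the
    obstacle-symmetric reference trajectory."""
    y = np.asarray(y_samples, dtype=float)
    return float(np.sqrt(np.mean(y ** 2)))

# Example with hypothetical lateral-offset samples (m) from one trial:
print(rmsd_about_centerline([0.10, -0.05, 0.20, 0.15, -0.10]))
```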

Figure 5. A. Corridor test environment; B. Environment side-wall texture.

5.1.1 Corridor width 2 m
Figure 6A shows multiple quadrotor trajectories for flight in the nominal straight-line corridor environment of width 2 m. The trajectory data was acquired using a Vicon™ visual tracking system. For two different initial orientation offsets of -30° and 30°, ten trials each with numerous initial lateral offsets distributed between ±0.2 m were performed. It is seen that the vehicle avoids collision with the walls in all cases.

5.1.2 Partial two-sided corridor
Figure 6B shows multiple quadrotor trajectories for flight in the partially two-sided corridor of width 2 m. As the vehicle moves past the opening to the west, the lateral proximity varies between 1 m ≤ aW ≤ ∞. As the size of the opening (2.18 m) is comparable with the corridor width, this results in a large increase in lateral optic flow asymmetry across large regions of the azimuth (π ≤ γ ≤ 2π), inducing the vehicle to veer away from the reference trajectory. Thus, robust stabilization becomes especially important as the vehicle traverses the length of the corridor.

For two different initial orientation offsets of -30° and 30°, ten trials each with numerous initial lateral offsets distributed between ±0.2 m were again performed. It is seen that the vehicle again avoids collision with the walls in all cases. A large increase in lateral optic flow asymmetry results in a greater



Figure 6. Vehicle trajectories for A. fully two-sided corridor, and B. partially two-sided corridor.

reference velocity command, leading to greater stabilization of the vehicle as it negotiates the opening in the corridor. Thus, the controller demonstrates better tracking (RMSD 0.34 m) of the reference trajectory in this environment when compared with the nominal environment (RMSD 0.43 m).

5.2 Discussion
As is apparent from the results, embedding optic flow output within the H∞ loop shaping framework realized a robust controller that enabled safe reflexive navigation in corridor-like environments with varying structure. A typical urban environment is characterized by large variation in local structure. To demonstrate the importance of incorporating such variation during controller synthesis, a performance comparison study was undertaken with two comparable controllers: a µ-theory based controller (denoted Kµ henceforth), which was synthesized to provide robustness and tracking performance guarantees in corridor-like environments with widths ranging between 1.5 and 2.5 m (Figure 7) [41]; and a static compensator (denoted Ky henceforth), which was synthesized for nominal performance in the 2 m wide baseline corridor in a manner similar to earlier studies [21-23], where infinite horizon output linear quadratic regulation [47] was employed to obtain the set of static feedback gains. Linear stability analysis of the closed loop system with the static controller showed that the eigenvalues lie in the open left half plane for corridor half-width a > 0.5 m (Figure 8). Thus, the controllers were flight tested in two different environments: the baseline 2 m wide corridor with initial angular and lateral offsets as before, and a corridor of width 1 m where both controllers (Kµ, Ky) were expected to perform poorly. As the vehicle spanned more than half the narrow corridor's width, numerous trials with smaller initial angular offsets of ±15° (and lateral offsets as before) were considered in this environment.

Figure 7. Structured singular value upper bound obtained with the µ-theory based controller demonstrating good robust performance in corridor-like environments with width 1.5 ≤ 2a ≤ 2.5 m [41].


Figure 8. Root locus plot for the static output feedback gain with closed loop eigenvalues computed for corridor half-width aE = aW = a ranging from 0.25 to 10 m.


Table 2. RMSD values (in m) for different controllers in a straight-line corridor

Corridor Width    KLS     Kµ      Ky
2 m               0.43    0.32    0.27
1 m               0.09    -       -

The results of the study are summarized in Table 2, which lists the RMSD values attained by the various controllers in the straight-line corridor environments. In the baseline case, both the static compensator and the µ-theory based controller demonstrate better tracking and outperform the loop shaping controller. The nominal static compensator performs best in the environment it is designed for, attaining the lowest RMSD values, while the loop shaping controller performs poorly in comparison. However, in the narrow 1 m wide corridor environment, the loop shaping controller outperforms the other controllers, and provides good stabilization with the vehicle avoiding collision with the walls in all cases (Figure 9A). In contrast, the µ-theory based controller managed to avoid collision in approximately 50% of the trials (Figure 9B), while the static controller performed the worst with crashes recorded in all cases. This demonstrates the importance of accounting for large variation in environment structure during robust controller synthesis for achieving safe autonomous navigation in cluttered and less-structured environments.

Figure 9. Vehicle trajectories in a corridor of width 1 m for the A. loop shaping controller; and B. µ-theory based controller, ‘x’ denotes vehicle collision with walls of the corridor.

Furthermore, as urban environments are also characterized by obstacles in motion, it is important to note that the navigational strategy outlined in this paper remains valid even in non-stationary environments. An obstacle moving in the same (opposite) direction as the observer would induce lower (higher) optic flow in the corresponding region of the azimuth in comparison to the stationary environment. As demonstrated in the experiments above, the controller would then induce the vehicle to steer towards a non-centerline reference trajectory, thereby minimizing lateral optic flow asymmetry. Coupled with the high degree of robustness exhibited by the controller to variation in environment structure, this should result in the loop shaping controller outperforming both the µ-theory based controller and the static controller in such environments.

The current reflexive navigation strategy employs formal closed-loop stability and performance analysis to achieve robust tracking in less-structured environments. In contrast, as most prior studies present results in the absence of such analysis [24,26,36,45], safe obstacle avoidance behavior in such studies cannot be guaranteed. The current strategy is based on the minimization of lateral optic flow asymmetry, which emulates the centering strategy employed by honeybees [19], and does not require prior knowledge of vehicle motion and environment structure. This is again in contrast with most studies that require well-structured environments [27,46,47], or prior knowledge of vehicle motion [36]. Additionally, it is also important to note that by specifying a non-zero reference state, which sets



the desired level of optic flow asymmetry, the current strategy allows for the realization of different navigational behaviors. For instance, wall and terrain-following applications can be realized by setting a non-zero reference lateral offset. Finally, in contrast with centering performance produced by only controlling lateral offset [26], the vehicle in this study demonstrates superior performance through control of both lateral and orientation offset.

It is important to note that while successful depth-extraction strategies employing vision and laser rangefinders exist for path planning applications in urban environments [2], such strategies typically impose a greater payload and computational burden than the strategy considered in this study (with onboard computations performed using the MSP430 1 MHz microcontroller), rendering them less suitable for reflexive obstacle avoidance applications. Thus, it is seen that embedding bio-inspired wide-field processing techniques within the H∞ loop shaping framework results in a computationally efficient strategy that does not require accurate depth map extraction in environments with large variation in local structure. Additionally, the current framework realizes a reduced-order controller (4-state, 2-input, 2-output system) that can be implemented on a microprocessor onboard the quadrotor. As inner-loop attitude control was entirely accomplished onboard, and no offboard sensing was required for outer-loop control, the current strategy offers a practical alternative for microsystem navigation in cluttered environments.

6. CONCLUSION
In this paper, planar navigation experiments are used to validate a novel approach that couples spatial decomposition of optic flow patterns with the H∞ loop shaping framework for the synthesis of a closed loop system with theoretically justified robustness guarantees. Environments with varying structure are considered as a generalization of a nominal corridor environment in the optic flow model, and robust stability is explicitly demonstrated for a family of simple 2-D corridor-like environments, which is shown to be sufficient in ensuring safe navigation behavior in corridor-like environments of varying widths. As explicit estimation of the environment depth map is rendered superfluous, the current approach provides an attractive alternate paradigm for aerial microsystem navigation applications in unstructured environments.

ACKNOWLEDGEMENTS
This research was supported in part by the ONR MURI grant N00014-10-1-0952 and ARO N911NF10-1-0449.

REFERENCES
[1] J. Evers, Biological inspiration for agile autonomous air vehicles, Platform innovations and system integration for unmanned air, land and sea vehicles, Neuilly-sur-Seine, 2007.
[2] A. Bachrach, S. Prentice, R. He, and N. Roy, RANGE-Robust autonomous navigation in GPS-denied environments, J. Field Robotics, 28, 2011, 644-666.
[3] K. Celik, S. J. Chung, and A. Somani, Mono-vision corner SLAM for indoor navigation, IEEE International Conference on Electro/Information Technology, 2008, 343-348.
[4] C. Kemp, Visual control of a miniature quad-rotor helicopter, Ph.D Thesis, University of Cambridge, 2006.
[5] M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart, Vision based MAV navigation in unknown and unstructured environments, IEEE International Conference on Robotics and Automation, 2010, 21-28.
[6] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, Monoslam: Real-time camera slam, IEEE Transactions on Pattern Analysis and Machine Intelligence, 29, 2007, 1052-1067.
[7] N. Johnson, Vision assisted control of a hovering air vehicle in an indoor setting, MS Thesis, Brigham Young University, 2008.
[8] R. He, S. Prentice, and N. Roy, Planning in information space for a quadrotor in GPS-denied environments, IEEE International Conference on Robotics and Automation, 2008, 1814-1820.
[10] G. Angeletti, J. P. Valente, L. Iocchi, and D. Nardi, Autonomous indoor hovering with a quadrotor, Proceedings of SIMPAR Workshop, 2008, 472-481.




[11] S. Ahrens, D. Levine, G. Andrews, and J. How, Vision-based guidance and control of a hovering vehicle in unknown GPS-denied environments, IEEE International Conference on Robotics and Automation, 2002, 72-77.
[12] B. Steder, G. Grisetti, S. Grzonka, C. Stachniss, A. Rottmann, and W. Burgard, Learning maps in 3D using attitude and noisy vision sensors, IEEE Conference on Intelligent Robots and Systems, 2007, 644-649.
[13] S. Weiss, M. Achtelik, L. Kneip, D. Scaramuzza, and R. Siegwart, Intuitive 3D maps for MAV terrain exploration and obstacle avoidance, J. Intelligent and Robotic Systems, 61, 2011, 473-493.
[14] J. Gibson, The perception of the visual world, Houghton Mifflin, 1950.
[15] A. Borst, and J. Haag, Neural networks in the cockpit of the fly, J. Computational Physiology, 188, 2002, 419-437.
[16] M. Egelhaaf, R. Kern, H. Krapp, J. Kretzberg, R. Kurtz, and A. Warzecha, Neural encoding of behaviourally relevant visual-motion information in the fly, Trends in Neurosciences, 25, 2002, 96-102.
[17] H. Dahmen, M. Franz, and H. Krapp, Extracting ego-motion from optic flow: limits of accuracy and neuronal filters, Processing Visual Motion in the Real World — A Survey of Computational, Neuronal and Ecological Constraints, Springer-Verlag, 2001.
[18] H. G. Krapp, B. Hengstenberg, and R. Hengstenberg, Dendritic structure and receptive-field organization of optic flow processing interneurons in the fly, J. Neurophysiology, 79, 1998, 1902-1917.
[19] M. V. Srinivasan, S. W. Zhang, M. Lehrer, and T. S. Collett, Honeybee navigation en route to the goal: visual flight control and telemetry, J. Experimental Biology, 199, 1996, 237-244.
[20] M. V. Srinivasan, and S. W. Zhang, Visual motor computations in insects, Annual Review of Neuroscience, 27, 2004, 679-696.
[21] J. S. Humbert, and A. M. Hyslop, Bio-inspired visuomotor convergence, IEEE Transactions on Robotics, 26, 2010, 121-130.
[22] A. M. Hyslop, H. G. Krapp, and J. S. Humbert, Control theoretic interpretation of directional motion preferences in optic flow processing interneurons, Biological Cybernetics, 103, 2010, 339-352.
[23] J. Conroy, G. Gremillion, B. Ranganathan, and J. S. Humbert, Implementation of wide-field integration of optic flow for autonomous quadrotor navigation, Autonomous Robots, 27, 2009, 189-198.
[24] A. Beyeler, J. C. Zufferey, and D. Floreano, Optic-flow to steer and avoid collisions in 3D, Flying insects and robotics, Springer-Verlag, 2009.
[25] W. E. Green, P. Y. Oh, and G. Barrows, Flying insect inspired vision for autonomous aerial robot maneuvers in near-Earth environments, IEEE International Conference on Robotics and Automation, 2004.
[26] S. Hrabar, G. S. Sukhatme, P. Corke, K. Usher, and J. Roberts, Combined optic-flow and stereo-based navigation of urban canyons for a UAV, IEEE International Conference on Intelligent Robots and Systems, 2005, 3309-3316.
[27] S. Zingg, D. Scaramuzza, S. Weiss, and R. Siegwart, MAV navigation through indoor corridors using optical flow, International Conference on Robotics and Automation, 2010, 3361-3368.
[28] M. V. Srinivasan, J. Chahl, K. Weber, S. V. M. Nagle, and S. W. Zhang, Robot navigation inspired by principles of insect vision, Robotics and Autonomous Systems, 26, 1999, 203-216.
[29] M. Franz, and H. Mallot, Biomimetic robot navigation, Robotics and Autonomous Systems, 30, 2000, 133-153.
[30] J. Santos-Victor, and G. Sandini, Embedded visual behaviors for navigation, Robotics and Autonomous Systems, 19, 2007, 299-313.
[31] D. Coombs, M. Herman, T. Hong, and R. Nashman, Real-time obstacle avoidance using central flow divergence and peripheral flow, IEEE Transactions on Robotics, 14, 1998, 48-59.
[32] T. Netter, and N. Francheschini, A robotic aircraft that follows terrain using a neuromorphic eye, Proceedings IEEE/RSJ IROS Conference on Robotics and Systems, 2002.




[33] L. Muratet, S. Doncieux, Y. Briere, and J. Meyer, A contribution to vision-based autonomous helicopter flight in urban environments, Robotics and Autonomous Systems, 50, 2005, 195-209.
[34] J. Serres, D. Dray, F. Ruffier, and N. Francheschini, A vision-based autopilot for a miniature air vehicle: joint speed control and lateral obstacle avoidance, Autonomous Robots, 25, 2008, 103-122.
[35] S. Griffiths, J. Saunders, A. Curtis, B. Barber, T. Mclain, and R. Beard, Maximizing miniature air vehicles, IEEE Robotics and Automation Magazine, 13, 2006, 34-43.
[36] J. S. Chahl, and M. V. Srinivasan, A complete panoramic vision system, incorporating image ranging and three dimensional navigation, Proceedings IEEE Workshop on Omnidirectional Vision, 2002, 104-111.
[37] N. Francheschini, F. Ruffier, and J. Serres, A bio-inspired flying robot sheds light on insect piloting abilities, Current Biology, 17, 2007, 329-225.
[38] A. X. Miao, G. L. Zacharias, and R. Warren, Passive navigation from image sequences: a practitioner's approach, AIAA Conference on Flight Simulation Technologies, 1996.
[39] J. J. Kehoe, A. S. Watkins, R. S. Causey, and R. Lind, State estimation using optical flow from parallax-weighted feature tracking, AIAA Conference on Guidance, Navigation and Control, 2006.
[40] M. Frye, and M. Dickinson, Fly flight: a model for the neural control of complex behaviour, Neuron, 32, 2001, 385-388.
[41] J. Keshavan, G. Gremillion, H. Alvarez-Escobar, and J. S. Humbert, A µ analysis-based controller-synthesis framework for robust bioinspired visual navigation in less-structured environments, Bioinspiration and Biomimetics, 9, 2014, 25011.
[42] J. Conroy, J. S. Humbert, and D. Pines, System identification of a rotary-wing micro air vehicle, J. American Helicopter Society, 2011, 25001.
[43] A. M. Mehta, Mobility in wireless sensor networks, Ph.D Thesis, University of California Berkeley, 2012.
[44] B. Lucas, and T. Kanade, An iterative image registration technique with an application to stereo vision, Proceedings of the 7th International Joint Conference on Artificial Intelligence, 1981, 674-679.
[45] M. O. Franz, J. S. Chahl, and H. G. Krapp, Insect-inspired estimation of egomotion, Neural Computation, 16, 2004, 2245-2260.
[46] J. Sefton, and K. Glover, Pole-zero cancellations in the general H∞ problem with reference to a two block design, Systems and Control Letters, 14, 1990, 295-306.
[47] H. Kwakernaak, and R. Sivan, Linear Optimal Control Systems, Wiley-Interscience, 1972, 408.
