Visual Obstacle Detection for Automatically Guided Vehicles

K. Storjohann, Th. Zielke, H.A. Mallot and W. von Seelen
Institut für Neuroinformatik, Ruhr-Universität, 4630 Bochum, FRG

Abstract

We have developed a stereo obstacle detection system for automatically guided vehicles (AGVs) that operates on flat (factory) floors. Our system does not attempt to visually reconstruct the 3D environment but simply tries to answer the question "Do I see anything else than the floor?" A novel approach to stereo image processing is presented that uses inverse perspective mappings to facilitate matching of the binocular field of vision against the expected 3D structure of the environment. Assuming a known relative camera model, we compute a geometrical image transformation which essentially compensates the stereo disparities for the image points of the floor. After the mapping operation the images are compared and local mismatches are interpreted as possible obstacle locations. The system has been successfully tested in a factory environment. The implementation runs on standard microprocessor hardware in real-time.

Introduction

Autonomous mobile robots have been a research topic for several years. It has recently been shown that industrial applications of autonomous mobile systems have become feasible. VISOCAR (Fig. 1) is an automatically guided vehicle (AGV) that features an intelligent route planner and a path planner which is supported by a machine vision system. Frohn and v. Seelen [2] describe the general architecture of VISOCAR and its modules for lane tracking and landmark recognition.

AGVs have a broad range of applications in industry. Increasing demands on the flexibility of AGVs as well as safety considerations call for reliable and versatile sensor systems for obstacle detection.

An analysis of the more general task of navigation of a moving observer in a natural environment has led to the proposal of a new paradigm for deriving from visual information certain features of the 3D structure of the environment that are important for obstacle avoidance [5]. Central to this paradigm is a coordinate transform which we call inverse perspective mapping. The method is related to principles of biological information processing such as retinotopic mapping. We have applied the method of inverse perspective mapping to the processing of stereo images. In the following it is shown how this can enormously reduce the computational complexity for a class of problems where the knowledge about the imaging geometry and the expected environment can be represented by means of mapping functions.

Inverse Perspective Mapping

Inverse perspective mappings project the image of a scene onto a plane different from the image plane while retaining the center of projection. Fig. 2 shows a scene point P being viewed by a camera with lens nodal point O and image plane E. The image point p of P is determined by a process of central projection, i.e. p is the point
Fig. 1: VISOCAR, an automatically guided vehicle equipped with a visual navigation system.

Fig. 2: Coordinate systems for the inverse perspective mapping.
where the ray from the object point through the nodal point intersects the image plane. Then the inverse perspective mapping of p onto a plane E' is given by the point of intersection of the same ray with E'. The inverse perspective mapping Q: E -> E' maps a point p = (x, y) in the image plane E onto a point p' = (x', y') in a plane E' slanted by an angle φ with respect to the optical axis:
$$(x', y') \;=\; \frac{h}{-y\cos\varphi + f\sin\varphi}\,\bigl(x,\; y\sin\varphi + f\cos\varphi\bigr) \qquad (1)$$
where φ := ∠(Y', Z), f is the focal length, and h is the distance of the nodal point O from E'. (Note: without loss of generality, for the definition above we have chosen the X and X' axes to coincide.) Inverse perspective mappings as defined here can also be viewed as the class of image transformations that could be achieved by changing the focal length and/or the camera target's orientation relative to the optical axis. In terms of projective geometry, an inverse perspective mapping is a projective collineation (cf. [1]).
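The mapping of Eq. (1) can be sketched numerically. The following Python function is a minimal illustration under the conventions stated above; the function and parameter names are ours, not the paper's, and the y'-numerator follows our reconstruction of Eq. (1):

```python
import numpy as np

def inverse_perspective_map(x, y, f, h, phi):
    """Map an image-plane point (x, y) onto the slanted plane E'.

    Sketch of Eq. (1): both output coordinates share the scale factor
    h / (-y cos(phi) + f sin(phi)), where f is the focal length, h the
    distance of the nodal point O from E', and phi the angle between
    the Y' axis and the optical axis Z.
    """
    denom = -y * np.cos(phi) + f * np.sin(phi)
    x_p = h * x / denom
    y_p = h * (y * np.sin(phi) + f * np.cos(phi)) / denom
    return x_p, y_p
```

As a sanity check, for phi = 90° the plane E' is parallel to the image plane and the mapping degenerates to a pure scaling by h/f, as expected for a projective collineation between parallel planes.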
An important application of the inverse perspective mapping is to undo the perspective distortion with respect to a certain flat visible surface whose relative attitude is known. Fig. 3a depicts a box standing on a flat floor that shows a regular covering of square tiles. After an inverse perspective mapping operation which maps the image onto a plane parallel to the floor, the square texture comes out correctly. In other words, the resulting image is an overhead view of the floor, under the assumption that all pixels in the original image are projections from the floor surface. As a consequence, Fig. 3b contains a region made up of image points that were 'incorrectly' treated just like the floor pixels but in fact belong to the image of the box. When viewing the mapped image, the region caused by the box immediately pops out because the inverse perspective mapping for the box does not have the same functional meaning as it has for the floor.
Fig. 3: a) Camera image of a box standing on a tiled floor. b) The same image after inverse perspective mapping onto a plane parallel to the floor.
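The detection principle described above can be illustrated with synthetic data: once both stereo images have been inverse-perspective-mapped onto the floor plane, floor pixels coincide and only regions violating the floor assumption mismatch. The arrays and the threshold below are illustrative stand-ins, not data or values from the paper:

```python
import numpy as np

# Synthetic stand-ins for the two stereo images after inverse
# perspective mapping onto the floor plane: floor pixels coincide,
# while an obstacle produces a local mismatch between the two views.
left_mapped = np.zeros((8, 8))
right_mapped = np.zeros((8, 8))
right_mapped[2:4, 3:5] = 1.0  # region where an obstacle breaks the floor assumption

# Local mismatches between the mapped images mark possible obstacle locations.
mismatch = np.abs(left_mapped - right_mapped)
obstacle_mask = mismatch > 0.5  # illustrative threshold

print(int(obstacle_mask.sum()))  # number of flagged pixels
```

Note that no 3D reconstruction is performed at any point; the comparison directly answers the question "Do I see anything else than the floor?".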