Obtaining 3D models of indoor environments with a mobile robot by estimating local surface directions
Mariano Martín Nevado, Jaime Gómez García-Bermejo and Eduardo Zalama Casanova
Department of Control System Engineering, University of Valladolid, Paseo del Cauce s/n, 47011 Valladolid, Spain
Email address: [email protected], fax: +34 983 423545 (Eduardo Zalama Casanova)
Preprint submitted to Elsevier Preprint, 25 May 2004
Abstract In this paper, a method for obtaining the location, size and shape of the main surfaces in an environment, from points measured by a laser scanner onboard a mobile robot, is presented. The most likely orientation of the surface normal is first calculated at every point, from the points in an adaptive-radius neighboring region. Then, similar planes are merged under orientation and distance criteria. Finally, points are projected onto the planes they belong to, and the corresponding boundaries are obtained from the Voronoi diagram. A simple representation of the environment is thus obtained, which matches the real topology thanks to the detailed prior analysis of the local geometry at every point. Key words: Mobile robot, 3D model of the environment, spatial variance analysis, adaptive neighboring region size, laser scanner
1 Introduction
Obtaining 3D models from a real environment has been a primary research subject in recent years. New laser scanners have simplified environment measurement with respect to other solutions, such as stereoscopic-vision based approaches. However, work on 3D modeling has focused on small and medium-sized objects (1), and only a few works have addressed building 3D models of the entire environment of a mobile robot (2; 3; 4). Furthermore, usually only 2D map reconstruction has been envisaged, since such maps are adequate for navigation and path planning. On the other hand, 3D maps are suitable for architects, decorators, or even firemen and
rescue teams, who can familiarize themselves with a target building before entering it. Teleoperation of robotic systems through virtual or augmented reality also benefits from 3D models. There are some important differences between the medium-sized object reconstruction problem and the environment reconstruction problem:
• The data amount is usually larger in the case of environment reconstruction than in the case of object reconstruction. This leads to slow handling of environment models when these models are obtained through classical reconstruction methods (especially when working over local area networks).
• Architectonic environments usually contain many planes, while objects may not (at least in the case of free-form objects).
• Common 3D surface simplification methods are adequate for low-noise, high-accuracy measured points. On the other hand, points measured by a mobile robot are usually noisier and less accurate, due to robot localization errors and to the nature of the commonly used scanners (e.g. time-of-flight scanners versus triangulation scanners).
• Environment model reconstruction from mobile robots usually requires dealing with data obtained from many different locations, often not accurately known, while objects are often measured from a few well-known viewpoints.
In this article, the 3D modeling of the inside of buildings from data obtained by a time-of-flight scanner on board a mobile robot is addressed. The scanner provides a few hundred distance readings within a vertical, planar, 180°-wide measuring field, from the left side, through up, to the right side of the robot. The measuring field is roughly perpendicular to the local movement direction of the robot. The robot measures points while navigating, and its location and orientation can be found either from a horizontal laser using probabilistic techniques (5; 6; 7), or from another localization device or technique such as (8). Taken together, the measured points represent the environment; however, these points are of course unorganised, so the high-level structure of the environment is hidden. In order to recover this high-level structure, the most likely normal direction is first obtained at every measured point. The neighboring region is taken into account, to avoid measurement noise leading to erroneous normal direction calculation (as is the case in (9), for example). The size of this neighboring region evolves as the algorithm progresses, so that it adapts to the measurement error and to the local shape of the surface. A test is also performed for deciding whether a point can be assumed to be locally planar. This way, the whole process can be automated, while preventing non locally-planar points from introducing erroneous directions. Finally, a common Delaunay triangulation procedure allows the contours of the points in the corresponding planes to be obtained (10; 11). At the end of the process, an environment representation through a small set of
planes is obtained. This representation has clear advantages over traditional, triangle-based representations:
• The maps obtained are simple and easy to handle. In particular, quick remote handling (e.g. over local area networks) becomes possible.
• The noise level of the models obtained is low.
• The total data amount can be largely reduced, while keeping details in small, non-planar regions.
• High-level features in the environment, such as walls, doors and windows, can be easily identified, thus making the construction of symbolic maps easier.
The rest of this paper is organized as follows. First, base elements and tools are introduced in Section 2. Then, a description of the proposed approach is provided in Section 3. Some results are shown in Section 4, and related works are discussed in Section 5. Finally, some concluding remarks are summarized in Section 6.
2 Elements of the 3D model
In this section, the basic elements of the 3D model, i.e. points, planes and contours, are described, as well as some tools for their manipulation.
2.1 Point analysis
As mentioned above, points are measured by using a sensor, such as a 2D laser scanner, onboard a mobile robot. Some inaccuracy, caused by the sensor noise and the robot localization error, affects the data. This inaccuracy is described in terms of three noise components of unknown standard deviation along the corresponding point coordinates. We analyze sets of points through their spatial variance matrix, a 3 × 3 matrix whose elements are the three coordinate variances and the corresponding covariances. Let X be a set of points,

   X = {xi ∈ ℝ³}, i = 1..N                                 (1)

Then the centroid and the spatial variance matrix of X are

   x̄ = (A[x], A[y], A[z])                                  (2)

             ⎛ σx²  σxy  σxz ⎞
   Var[X] =  ⎜ σxy  σy²  σyz ⎟                              (3)
             ⎝ σxz  σyz  σz² ⎠
Fig. 1. Ellipse obtained from the principal variances of a set of 2-D points (axes u1, u2). The ellipse defines how the points are distributed (along a line, in this example). In the 3-D case, the ellipse turns into an ellipsoid (which defines whether the points are distributed along a line, on a plane, or within a sphere).
respectively. A[ξ] is the average value of ξ, σξ² the corresponding variance, and σξζ the ξ-to-ζ covariance. We will call the three (positive) eigenvalues of the spatial variance matrix the principal variances,

   σ1² ≥ σ2² ≥ σ3²                                         (4)

and the corresponding eigendirections, described by three unit vectors, the principal directions,

   [u1  u2  u3]                                            (5)

It is well known that the principal directions and variances, along with the centroid of a set of points, are closely related to the distribution of these points in space: the region occupied by a set of points can be approximated from the standard deviations (σ1, σ2, σ3), measured along the eigendirections [u1, u2, u3], provided that the distribution law is known. An example in 2-D space is shown in figure 1.
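For illustration, this analysis amounts to an eigendecomposition of the spatial variance matrix. The following sketch (our own illustration, not the authors' implementation; all names are ours) computes the centroid and the principal variances and directions of eqs. (2)-(5):

```python
# Illustrative sketch of the point-set analysis of Section 2.1 (eqs. (2)-(5)).
# Not the authors' code; function and variable names are ours.
import numpy as np

def principal_analysis(points):
    """points: (N, 3) array of measured points.
    Returns the centroid, the principal variances (descending, eq. (4))
    and the principal directions [u1 u2 u3] as matrix columns (eq. (5))."""
    centroid = points.mean(axis=0)                    # eq. (2)
    var = np.cov(points, rowvar=False, bias=True)     # eq. (3): 3x3 matrix
    evals, evecs = np.linalg.eigh(var)                # symmetric eigenproblem
    order = evals.argsort()[::-1]                     # sort descending
    return centroid, evals[order], evecs[:, order]

# Example: noisy points near the plane z = 0; the third principal variance
# is much smaller than the other two, and u3 approximates the plane normal.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 500),
                       rng.uniform(-1, 1, 500),
                       0.01 * rng.standard_normal(500)])
c, variances, directions = principal_analysis(pts)
print(variances)         # third value is much smaller than the other two
print(directions[:, 2])  # approximately (0, 0, +/-1)
```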
2.2 Characterization of the planes
The 3D environment is represented as a set of mathematical planes (unbounded plane regions). Each plane is related to a set of points located close to it. We will say that these points “occupy the plane” or, simply, that they “belong to the plane”. There is a set of points that do not occupy any plane. We will call these points discarded points, and the modeling process will seek to reduce their number. A plane Πj is represented by its normal direction and its distance to the coordinate origin, in the way

   Πj = {nj, dj} ∈ ℝ³ × ℝ⁺                                 (6)

where nj is the unit normal vector and dj is that distance. nj is chosen so that dj is positive. Thus, dj nj gives the position of the plane point closest to the coordinate origin. Furthermore, the distance from a given point x ∈ ℝ³ to Πj is

   d(x, Πj) = |x · nj − dj|                                (7)

and the component of a point along the normal direction to the plane can be directly obtained by means of

   w(x, Πj) = x · nj − dj                                  (8)

The points belonging to a plane can be simply characterized by the corresponding centroid and spatial variance matrix.
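A minimal sketch of this plane representation (eqs. (6)-(8)) might look as follows; the class and the projection helper are our own illustration, with the projection anticipating the point-onto-plane step mentioned in the abstract:

```python
# Sketch of the plane representation of Section 2.2: a plane is the pair
# (n, d) of eq. (6); eq. (7) gives the point-to-plane distance and eq. (8)
# the signed normal component. The `project` helper is an assumed addition.
import numpy as np

class Plane:
    def __init__(self, n, d):
        self.n = np.asarray(n, dtype=float)
        self.n /= np.linalg.norm(self.n)   # unit normal, eq. (6)
        self.d = float(d)                  # distance to the origin, d > 0

    def w(self, x):
        """Signed component along the normal direction, eq. (8)."""
        return np.dot(x, self.n) - self.d

    def distance(self, x):
        """Unsigned point-to-plane distance, eq. (7)."""
        return abs(self.w(x))

    def project(self, x):
        """Orthogonal projection of x onto the plane (assumed helper)."""
        return np.asarray(x, dtype=float) - self.w(x) * self.n

wall = Plane(n=[1.0, 0.0, 0.0], d=2.0)   # the plane x = 2
print(wall.distance([2.3, 1.0, 0.5]))    # 0.3
print(wall.project([2.3, 1.0, 0.5]))     # [2.0, 1.0, 0.5]
```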
2.3 Contour model
Once the set of planes has been defined, and the points belonging to each plane have been determined, the limits of the occupied regions (i.e. the different surfaces lying in a given plane) have to be obtained. Of course, one or more contours may be present on a given plane. A contour in a plane can be expressed as a sequence of indexes pointing to the vertices of a closed polygonal line surrounding the actual surface, plus some other polygonal lines representing holes within this surface. Several surfaces may be present in the same plane whenever the distance between them is over a given threshold.
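The paper derives these boundaries from the Delaunay triangulation/Voronoi diagram of the points projected onto each plane (10; 11). The exact procedure is not spelled out here, but one plausible, alpha-shape-style sketch, in which the radius threshold plays the role of the surface-splitting distance above, is:

```python
# One plausible sketch (not the paper's exact procedure) of contour
# extraction on a plane: triangulate the projected 2-D points, drop
# triangles whose circumradius exceeds a separation threshold (so surfaces
# farther apart than the threshold split into different contours), and keep
# the edges bounding exactly one remaining triangle. Names are ours.
import numpy as np
from scipy.spatial import Delaunay

def contour_edges(points2d, max_radius):
    """Return index pairs (i, j) of the edges on the region boundary."""
    pts = np.asarray(points2d, dtype=float)
    edge_count = {}
    for simplex in Delaunay(pts).simplices:
        a, b, c = pts[simplex]
        ab, ac, bc = b - a, c - a, c - b
        area = 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0])
        # circumradius = |ab| |ac| |bc| / (4 area); skip degenerate or
        # oversized triangles, which bridge separate surfaces
        if area == 0.0 or (np.linalg.norm(ab) * np.linalg.norm(ac) *
                           np.linalg.norm(bc)) / (4.0 * area) > max_radius:
            continue
        for i, j in ((0, 1), (1, 2), (0, 2)):
            edge = tuple(sorted((int(simplex[i]), int(simplex[j]))))
            edge_count[edge] = edge_count.get(edge, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]
```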
3 3D model estimation
The construction of the 3D model of the environment is addressed in this section.
3.1 Local normal directions and “thickness angle”
In order to determine which planes will be present in the 3D model, the most likely normal direction is first calculated at every measured point. This direction is obtained from the points inside a given (cubic) region centered at the current point. The most probable local normal direction is then the direction corresponding to the smallest variance value, σ3. We will call this direction the local normal at a given point. However, some confidence figure related to the local normal should be obtained, in order to ensure that the current region is “plane enough” for the obtained vector to be taken into account, i.e. for the obtained local normal to be considered correct (see figure 2).
Fig. 2. Local normal estimation. The local normal corresponds to the smallest variance value σ3. In figure 2(a) (reliable local normal), points roughly fit a plane (σ3 is clearly smaller than σ1 and σ2), so the obtained local normal is expected to be correct. In figure 2(b) (unreliable local normal), points are far from fitting a plane, so the obtained local normal is not reliable.
If the points belong to a perfect plane, the smallest variance is 0 and the local normal is exact. In practice, the smallest variance should be significantly lower than the other two variances for the local normal to be considered correct. Thus, we will introduce a thickness angle, γ,

   γ = arctan( (2/√3) · σ3/√(σ1 σ2) )                      (9)

In this expression, we will call

   σT = √(σ1 σ2)                                           (10)

the tangential deviation, and we will usually call σ3 the normal deviation, σN. The thickness angle so defined characterizes how far points rise from the plane they belong to (see figure 3).

Fig. 3. The thickness angle quantifies the relation between the radius along the normal direction axis, u3 (the lowest-variance axis), and the radius along the tangential direction axis, u1 (the largest-variance axis).

The normal vector obtained from the spatial variance is then retained whenever the thickness angle of the point set used is under a given threshold (say 10°); normal vectors corresponding to a greater thickness angle are not retained. The term 2/√3 in the thickness-angle definition transforms the standard deviations into the metric scale of the data. Generally speaking, there is a relation between the standard deviation of a set of points and the domain along which these points are distributed: given a set of points distributed within an R-radius interval, the standard deviation σ can be obtained from the point distribution law, and vice versa. For example, let us consider the case of N + 1 points {ζ0, . . . , ζN}, uniformly distributed along a given direction every D metric units. R = (1/2) D N is then the corresponding radius, while the average value and the standard deviation are

   ζ̄ = ζ0 + R,    σζ = R √( (1/3)(1 + 2/N) )              (11)

respectively. Thus,

   R = √( 3N/(N + 2) ) σζ                                  (12)

which, for a large number of points, approaches

   R ≈ √3 σζ                                               (13)
In the case of random distribution laws, the same relation is obtained for large N values. On the other hand, following equation (8), the component of a given point xi normal to plane Πj can be calculated as

   wi = xi · nj − dj                                       (14)

We will assume that wi follows a normal distribution N(0, σ), where σ is the standard deviation of the noise. Given that N is large, it can be assumed that σN ≈ σ. Thus, 95% of the points will be within a radius interval

   R ≈ 1.96 σN ≈ 2 σN                                      (15)

Assuming that a given set of points locally corresponds to a plane, the tangential component of the points (with respect to the plane) should follow a uniform (ideal case) or random (general case) distribution law, while the normal component of the points should follow a normal distribution law. So, points will be distributed within ±2 σN along the normal direction, and within ±√3 σT along the tangential direction.
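Putting eqs. (9)-(10) together, the reliability test for a local normal can be sketched as follows (an assumed helper, not the authors' code; the 10° threshold follows the text):

```python
# Thickness-angle test of eqs. (9)-(10): gamma compares the normal deviation
# sigma3 with the tangential deviation sqrt(sigma1*sigma2); the 2/sqrt(3)
# factor rescales both deviations to interval radii (cf. eqs. (13) and (15)).
import math

def thickness_angle(var1, var2, var3):
    """var1 >= var2 >= var3 are the principal variances of eq. (4)."""
    sigma_t = math.sqrt(math.sqrt(var1) * math.sqrt(var2))      # eq. (10)
    sigma_n = math.sqrt(var3)
    return math.atan2(2.0 * sigma_n, math.sqrt(3.0) * sigma_t)  # eq. (9)

def normal_is_reliable(var1, var2, var3, threshold_deg=10.0):
    return thickness_angle(var1, var2, var3) < math.radians(threshold_deg)

# Flat patch: gamma is about 2 degrees, so the normal is retained.
print(normal_is_reliable(1.0, 0.8, 0.001))   # True
# Blob-like patch: gamma is about 40 degrees, so the normal is rejected.
print(normal_is_reliable(1.0, 0.9, 0.5))     # False
```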
3.2 Local region radius
The local normal direction at a given point is calculated from neighboring points, i.e. points inside a region centered at the current point. The radius of this region should accommodate the local shape, the noise and the spatial density of the points. An automatic, adaptive method has been designed for updating the radius value depending on its current value and the local variances. In particular, the new radius is obtained from:
• The current radius, R(k). A smooth variation of the radius value is sought.
• The number of points inside the current local region, N. A smooth variation of the number of points is also sought.
• The relation between the principal variances. This relation determines the shape of the current region; the radius adapts so that the local region is as plane-shaped as possible.
In particular, the radius value at step (k + 1) is obtained from data at step (k) following

   R(k+1) = (1 − wσ − wN) R(k) + wσ Rσ + wN RN             (16)

In this expression, Rσ and RN are the radius values that would be obtained from the separate application of the last two criteria above, and wσ and wN weight the influence of each term. Appropriate Rσ and RN functions, and constant values for wσ and wN, will now be discussed. Rσ is obtained from

   Rσ = R(k) Λ/λ(k)                                        (17)

where Λ and λ(k) are two coefficients used for parametrizing the shape of the region. λ(k) is defined as

   λ(k) = (σ2/σ3) / (σ1/σ2) = σ2² / (σ1 σ3)                (18)
Table 1 shows the relation between the λ(k) value and the spatial distribution of the points.

Table 1
Relation between λ and the spatial distribution of points in a given region.

   Distribution   λ
   Planes         ∞
   Spheres        1
   Lines          0
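As a purely hypothetical illustration of table 1 (the paper gives only the limiting λ values; the concrete thresholds below are our assumptions), the local distribution could be classified as:

```python
# Hypothetical helper illustrating table 1: classify the local point
# distribution from lambda (eq. (18)). The thresholds are assumptions;
# the paper only gives the limiting values 0, 1 and infinity.
def classify_region(lam, line_max=0.2, plane_min=5.0):
    if lam < line_max:
        return "line"    # sigma2 ~ sigma3 ~ 0: typically a single scan
    if lam > plane_min:
        return "plane"   # sigma3 << sigma1, sigma2: locally planar
    return "sphere"      # comparable variances: blob-like region
```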
Aligned points (σ2 ≈ σ3 ≈ 0) usually correspond to a single laser scan (see figure 4(a)); the points lie in the scanning plane. In this case, points from other scans are required for correctly calculating the local normal vector, so the radius of the region should increase. The spherical case may be caused either by too small a radius (figure 4(b)) or by a fast-varying local geometry, e.g. a corner (figure 4(c)). The radius should also increase (former case), but in a moderate way, because this strategy may not lead to a plane (latter case). The number of points will limit the radius enlargement, through RN (see below).

Fig. 4. Examples of different spatial point distributions: (a) single laser scan; (b) small radius case; (c) fast-varying geometry.
Finally, a highly planar set of points corresponds to too large a region; the radius should decrease in order to obtain a less planar region (without reaching the spherical case). Λ is the target value for λ(k), and is a design parameter whose value should be clearly greater than unity. In this work, Λ = 10 has been chosen. In order to obtain a proper RN value, a function has been used which keeps the number of points between two values, nMIN and nMAX, while pushing the number of points towards a given figure, nO. This function is

   RN = R(k) · b^tan[(mπ/2) α(k)]                          (19)

where α(k) is related to the number of points through

   α(k) = (nO − n(k)) / (nO − nMIN)      whenever n(k) < nO
                                                           (20)
   α(k) = −(n(k) − nO) / (nMAX − nO)     whenever n(k) > nO
Fig. 5. RN adjustment to α (RN/R(k) plotted against α(k)). The closer to unity b and m are, the sharper the curve near the limits is. A smoother curve is obtained by increasing b and decreasing m.
The constant values b and m define how sharp the curve is near the limits, as shown in figure 5. In this work,

   b = 1.05,   m = 0.9                                     (21)

have been used. The constant weights wσ and wN are chosen so that eq. (16) leads to a constant radius value for a point distribution with regular statistical properties, i.e. so that R(k+1) = R(k) in eq. (16) when λ(k) ≈ const. Using eq. (17) and eq. (19), this leads to

   wσ (1 − Λ/λ) = wN ( b^tan[(mπ/2) α] − 1 )               (22)

or

   tan[(mπ/2) α] ln(b) = ln( 1 + (wσ/wN)(1 − Λ/λ) )        (23)

For a solution to this equation to exist,

   1 + (wσ/wN)(1 − Λ/λ) > 0                                (24)

is required, so

   λ > Λ / (1 + wN/wσ) = ε                                 (25)
This condition allows the design parameter ε to be determined. In particular, following table 1, a value 0 < ε < 1 such as ε ≈ 0.5 ensures the convergence of the radius for the planar and spherical cases, but not for the linear case. In fact, in the linear case the radius will increase progressively. This is just the desired behavior for the local region to cover neighboring scans, as referred to above. Furthermore, R(k) should remain within the proposed limits. When this does not happen, the obtained local normal can be retained or rejected, depending on the actual case:
• Points are distributed along a line, far away from other points in the scene (the radius increases in order to get a non line-shaped region). These points should be rejected.
• Points are within small regions, far away from other points in the scene (the radius increases in order to enlarge the number of points). These points can be retained or rejected, depending on the thickness angle.
Suitable values for wN and wσ are obtained upon eq. (25). Selected values are shown in table 2, along with the above-mentioned Λ and ε values; a sketch of the full update rule follows the table.

Table 2
Selected parameter values.

   Magnitude   Value
   Λ           10
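The following sketch assembles eqs. (16)-(20); it is our own illustration, not the authors' implementation. Λ, b and m take the values given in the text, while wσ, wN and the point-count limits are assumed placeholders, with wN/wσ chosen so that ε = Λ/(1 + wN/wσ) = 0.5 as suggested after eq. (25):

```python
# Sketch of the adaptive-radius update of Section 3.2 (eqs. (16)-(20)).
# LAMBDA, B and M follow the text; W_SIGMA, W_N and the point-count limits
# are illustrative assumptions (eps = LAMBDA / (1 + W_N / W_SIGMA) = 0.5).
import math

LAMBDA = 10.0                     # target shape coefficient (eq. (17))
B, M = 1.05, 0.9                  # curve sharpness constants (eq. (21))
W_SIGMA, W_N = 0.01, 0.19         # weights (assumed values)
N_MIN, N_O, N_MAX = 10, 40, 120   # point-count limits (assumed values)

def alpha(n):
    """Eq. (20): maps the point count to [-1, 1], zero at n = N_O."""
    if n < N_O:
        return (N_O - n) / (N_O - N_MIN)
    return -(n - N_O) / (N_MAX - N_O)

def update_radius(r, n, var1, var2, var3):
    """Eq. (16): blend the current radius with the shape-driven radius
    R_sigma and the count-driven radius R_N. var1 >= var2 >= var3 are the
    principal variances, so sigma2^2 = var2."""
    lam = var2 / math.sqrt(var1 * var3)                     # eq. (18)
    r_sigma = r * LAMBDA / lam                              # eq. (17)
    r_n = r * B ** math.tan(M * math.pi / 2.0 * alpha(n))   # eq. (19)
    return (1.0 - W_SIGMA - W_N) * r + W_SIGMA * r_sigma + W_N * r_n

# A near-linear neighborhood (small lambda) with few points grows the radius:
print(update_radius(0.3, 12, 1.0, 0.01, 0.0004))   # > 0.3
```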
4 Results

Table 3
Experimental results from three different real environments.

Corridor (12,000 points)
   Report: Aligned: —; θ > 9°: 12,3%; Average 2σn: 4.0 cm; Detail level: —; Min. occup.: 20 pts; Discarded: 7,2%
   Accum. time: 7 s, 15 s, 36 s, 38 s
   Nb. planes: —

CARTIF hall (15,000 points)
   Report: Aligned: 11,6%; θ > 12°: 19,6%; Average 2σn: 5.0 cm; Detail level: 98,1%; Min. occup.: 39 pts; Discarded: 12,1%
   Accum. time: 78 s, 109 s, 247 s, 266 s
   Nb. planes: 511, 172, 54, 38

CMU Wean hall (85,000 points)
   Report: Aligned: 0,6%; θ > 15°: 32,9%; Average 2σn: 5.5 cm; Detail level: 99%; Min. occup.: 92 pts; Discarded: 10,02%
   Accum. time: 276 s, 348 s, 648 s, 682 s
   Nb. planes: 582, 89, 23, 20
A second experiment was performed on the main hall of the CARTIF Technology Center (see figures 8 and 9). In this experiment, the robot navigated across the hall taking data every 10 to 20 cm (10 cm along short paths, 20 cm along long paths). A 12° instead of 9° thickness-angle threshold was used in this case, because the location noise was supposed to be slightly greater than in the corridor case. A 98% detail level was used. Further details are given in table 3. The number
Fig. 8. Two photographs of the main hall of the CARTIF Technology Center.
Fig. 9. Hall of the CARTIF Technology Center: (a) initial point set; (b) solid view; (c) contour view. Holes in the model mainly correspond to window and door glass. The armchair also produces a shadow region on the wall (and thus a hole in the model).
of aligned points found in this experiment was smaller than in the previous one, due to the complex trajectory followed by the robot, which led to a denser sampling of the environment. 38 planes were found, and only 12.1% of the points were discarded. The total process lasted 266 s. A final experiment was performed on a denser, 83,000-point set (see figure 10). (This point set has also been used in other works, such as (9; 13).) The original point set, along with the corresponding contour view, is shown in figure 10. A 15° thickness angle and a 99% detail level were used in this case. The number of aligned points dropped significantly with respect to the previous experiments, due to the higher sampling rate. Furthermore, the number of points with a thickness angle over the selected threshold is greater than in the previous experiments. This is due to the fact that a denser sampling allows smaller neighbouring regions to
Fig. 10. Data from Wean Hall (Carnegie Mellon University): (a) initial point set; (b) contour view. (Data set provided by S. Thrun.)
be used for normal calculation, which in turn leads to higher thickness angles. At the end of the whole process, 20 planes were found (in 682 s), and only 10.02% of the points were discarded. The results obtained show that the location and shape of the main walls in the environment are correctly recovered, and that the proposed method is suitable for a wide range of environment complexities, measuring conditions and 3D sampling rates. In particular, the algorithm deals suitably with noise in the scanner measurements and in the robot localization. Related information, data sets and a prototype implementation (C source code) of the proposed algorithm can be found at “http://www.eis.uva.es/˜eduzal/3d/3d_planar.html”.
5 Related work
A number of approaches have been applied to the construction of 3D models of real environments, owing to the wide range of applications. Some authors have tried to recover 3D structures from the analysis of image sequences, through computer vision techniques (14; 15; 16; 17). However, the measured scenes are often not well understood, resulting in 3D topological ambiguities and errors in the obtained model. The approach proposed in the present paper is related to the principal component analysis method (18) with regard to the use of spatial variances and the analysis of the principal directions to obtain normal vectors. However, the introduction of the thickness-angle concept allows a direct categorisation of the different planes to be carried out upon the local shape of the surface. Furthermore, the size of the local region over which the spatial variances are calculated is automatically and dynamically adapted, upon the number of points inside the region and the spatial variances. The present approach is also related to (9). In that work, the way noise (largely) affects the determination of the local topology is discussed, and the normal calculation finally has to be dismissed because of the noise. On the other hand, the approach proposed here performs a
local topology analysis so that the principal directions in the environment are correctly determined (in spite of the noise). Furthermore, the procedure adapts to the noise level and allows the quality of the obtained normal vectors to be quantified. In (19; 20), a mobile robot with a servoed laser is used for 3D mapping. While scanning, different online algorithms based on matching techniques and the Hough transform are applied to detect lines. Another algorithm joins lines into surfaces, and finally objects are approximated by boxes. The algorithm requires all the information of the scene to be available, and surface reconstruction is highly sensitive to noise and to the initial 2D segmentation. A probabilistic approach is presented in (13), which fits a number of planes to the measured points. However, the maximisation of a probability index does not guarantee that the number of estimated planes agrees with the number of actual planar surfaces in the environment. In our work, the number of planes is inferred from the information provided by each plane in the model, upon the number of points. Finally, a number of approaches for free-form surface reconstruction through triangles and higher-order analytical surfaces have been proposed, such as (1), (21) and (22) (used in (23)). However, they only work correctly under low noise levels, and do not take advantage of the a priori knowledge of the scene structure (i.e. that most points lie on planar surfaces).
6 Conclusions
A new approach for the modeling of indoor environments with planar surfaces has been presented. 3D points are measured by a laser scanner on board a mobile robot. The proposed algorithm operates by calculating the normal vector at every point, from points in a neighboring region. The size of the neighboring region is automatically adapted upon the spatial distribution of the points. The number of normal vectors is reduced in a second step, under plane-similarity and detail-level criteria. The proposed approach allows a 3D model of the environment to be obtained automatically. The only input parameters are a thickness-angle threshold, which is largely scene-independent, and a level of detail related to the minimum number of points in the resulting planes. Other parameters, such as Λ (related to the expected local planarity of the surface), ε (related to the region radius adaptation) and wN (related to the number of points in the neighboring regions), do not depend on the scene and can thus be assumed to be constant. The algorithm allows the environment to be modeled through a small number of planar patches, instead of thousands of points. Due to this simplicity, the obtained model can be used for augmented reality or virtual navigation through the environment, where different, arbitrary perspectives can be generated quickly. Furthermore, the use of planar patches largely simplifies the subsequent addition of photorealistic textures. Finally, points discarded during the model construction can feed a common free-form surface modeling method, in order to merge the corresponding objects into the obtained environment.
Future work will focus on the use of the obtained model for virtual and augmented reality aided navigation, and for precise robot location and navigation.
Acknowledgment
Partially supported by the Ministry of Science and Technology (project number DPI2002-04377-C02-01), the Fomento Ministry (project number FO-C-2002-4) and the Local Government of Castilla y León (Spain). The authors want to thank Dr. Sebastian Thrun (currently at Stanford University) for the information and the test data provided.
References

[1] H. Hoppe, Surface reconstruction from unorganized points, Ph.D. thesis, University of Washington (1994).
[2] M. Hebert, R. Hoffman, A. Johnson, J. Osborn, Sensor based interior modeling, in: American Nuclear Society 6th Topical Meeting on Robotics and Remote Systems (ANS '95), 1995, pp. 731–737.
[3] Y. Liu, R. Emery, D. Chakrabarti, W. Burgard, S. Thrun, Using EM to learn 3D models with mobile robots, in: Proceedings of the International Conference on Machine Learning (ICML), 2001.
[4] L. Iocchi, K. Konolige, M. Bajracharya, Visually realistic mapping of a planar environment with stereo, in: Proc. of the Seventh International Symposium on Experimental Robotics, Hawaii, 2000.
[5] E. Zalama, G. Candela, J. Gómez, J. R. Perán, Concurrent mapping and localization for mobile robots with segmented local maps, in: Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2002, pp. 546–551.
[6] K. Konolige, Improved occupancy grids for map building, Autonomous Robots 4 (1997) 351–367.
[7] S. Thrun, W. Burgard, D. Fox, A probabilistic approach to concurrent mapping and localization for mobile robots, Machine Learning 31/5 (1998) 1–25.
[8] E. Zalama, S. Dominguez, J. Gómez, J. Perán, A new beacon navigation system, in: Mechatronics and Machine Vision 2002: Current Practice, Research Studies Press Ltd, Baldock, 2002, pp. 263–270.
[9] D. Hähnel, W. Burgard, S. Thrun, Learning compact 3D models of indoor and outdoor environments with a mobile robot, in: Proceedings of the Fourth European Workshop on Advanced Mobile Robots (EUROBOT '01), 2002.
[10] S. Fortune, Voronoi diagrams and Delaunay triangulations, AT&T Bell Laboratories, 1992.
[11] M. Teillaud, Towards Dynamic Randomized Algorithms in Computational Geometry, Rapports de Recherche, ISSN 0249-6399, 1992.
[12] R. Sedgewick, Algorithms in C++, Addison-Wesley Publishing Company, ISBN 0-201-62574-1, 1992.
[13] Y. Liu, R. Emery, D. Chakrabarti, W. Burgard, S. Thrun, Using EM to learn 3D models of indoor environments with mobile robots, in: Eighteenth International Conference on Machine Learning, 2001.
[14] R. Bajcsy, G. Kamberova, L. Nocera, 3D reconstruction of environments for virtual reconstruction, in: Proc. of the 4th IEEE Workshop on Applications of Computer Vision, 2000.
[15] S. Becker, M. Bove, Semiautomatic 3-D model extraction from uncalibrated 2-D camera views, in: Proc. of the SPIE Symposium on Electronic Imaging, San Jose, 1995.
[16] P. E. Debevec, C. J. Taylor, J. Malik, Modeling and rendering architecture from photographs, in: Proc. of the 23rd International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 1996.
[17] H. Shum, M. Han, R. Szeliski, Interactive construction of 3D models from panoramic mosaics, in: Proc. of the International Conference on Computer Vision and Pattern Recognition (CVPR), 1998.
[18] I. T. Jolliffe, Principal Component Analysis, Springer-Verlag, New York, USA, 1986.
[19] H. Surmann, K. Lingemann, A. Nüchter, J. Hertzberg, A 3D laser range finder for autonomous mobile robots, in: Proceedings of the 32nd International Symposium on Robotics, Seoul, Korea, 2001, pp. 153–158.
[20] H. Surmann, K. Lingemann, A. Nüchter, J. Hertzberg, Fast acquiring and analysis of three dimensional laser range data, in: Proceedings of the 6th International Fall Workshop Vision, Modelling, and Visualization 2001, Stuttgart, Germany, 2001, pp. 59–66.
[21] H. Dinh, G. Turk, A sampling of surface reconstruction techniques, Tech. Rep. GVU-00-28, Graphics, Visualization and Usability Center, College of Computing, Georgia Institute of Technology, Atlanta, Georgia (2000). URL: http://www.cs.stevens-tech.edu/˜quynh/papers/surf-rep_techrep.pdf
[22] P. Heckbert, M. Garland, Optimal triangulation and quadric-based surface simplification, Journal of Computational Geometry: Theory and Applications 14(1–3) (1999) 49–65.
[23] S. Thrun, W. Burgard, D. Fox, A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping, in: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE, San Francisco, CA, 2000.
Mariano Martín Nevado received the BSc degree in electronics and automatic control engineering from the University of Valladolid (Spain) in 2002. He is currently an R&D engineer at the ATTICO Foundation. His research interests include mobile robots and expert systems.
Jaime Gómez García-Bermejo received the BSc degree in electronics and automatic control engineering from the University of Valladolid (Spain) in 1990, the MSc degree in image processing from the École Nationale Supérieure des Télécommunications of Paris (France) in 1991, and the PhD degree in engineering from the University of Valladolid in 1995. He is currently an associate professor at the Department of Systems Engineering and Automatic Control at the University of Valladolid, and leads the Artificial Vision Area at the CARTIF Technology Center. His research interests include 3D and color computer vision, and mobile robots.

Eduardo Zalama Casanova received the BSc degree in electronics and automatic control engineering from the University of Valladolid (Spain) in 1988, and the PhD degree in engineering from the University of Valladolid in 1994. He is currently an associate professor at the Department of Systems Engineering and Automatic Control at the University of Valladolid, and leads the Mobile Robots Area at the CARTIF Technology Center. His research interests include mobile robots, microcontrollers, neural networks and 3D computer vision.