Extended Line Map-Based Precise Vehicle Localization Using 3D LIDAR

Jun-Hyuck Im 1, Sung-Hyuck Im 2 and Gyu-In Jee 1,*

1 Department of Electronic Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 05029, Korea; [email protected]
2 Satellite Navigation Team, Korea Aerospace Research Institute, 169-84 Gwahak-ro, Yuseong-gu, Daejeon 305-806, Korea; [email protected]
* Correspondence: [email protected]; Tel.: +82-2-450-3070
Received: 13 August 2018; Accepted: 19 September 2018; Published: 20 September 2018
Abstract: An Extended Line Map (ELM)-based precise vehicle localization method is proposed in this paper and is implemented using 3D Light Detection and Ranging (LIDAR). A binary occupancy grid map, in which grids for road markings or vertical structures have a value of 1 and the rest have a value of 0, was created using the reflectivity and distance data of the 3D LIDAR. From the map, lines were detected using a Hough transform. After the detected lines were converted into node and link forms, they were stored as a map. This map is called an extended line map, whose data size is extremely small (134 KB/km). ELM-based localization is performed through correlation matching: the ELM is converted back into an occupancy grid map and matched to the map generated from the current 3D LIDAR data. Here, a Fast Fourier Transform (FFT) was applied as the correlation matching method, and the matching time was approximately 78 ms (based on MATLAB). The experiment was carried out in the Gangnam area of Seoul, South Korea. The traveling distance was approximately 4.2 km, and the maximum traveling speed was approximately 80 km/h. As a result of localization, the root mean square (RMS) position errors in the lateral and longitudinal directions were 0.136 m and 0.223 m, respectively.

Keywords: extended line map; precise vehicle localization; 3D LIDAR; road marking; vertical structure
1. Introduction

Currently, precise vehicle localization is recognized as a key technology for autonomous driving. Although there are no standards for the localization accuracy required for autonomous driving, the report in [1] requires a localization accuracy of less than 0.7 m in the lateral direction at a 95% confidence level. Recently, a localization accuracy of less than 0.5 m in the lateral direction and less than 1 m in the longitudinal direction has generally been required at a 95% confidence level. In general, the accuracy of the Real-Time Kinematic (RTK) Global Positioning System (GPS) meets such requirements. In urban areas, however, even a costly RTK GPS/Inertial Measurement Unit (IMU) integrated system cannot meet the localization accuracy requirements for autonomous driving. To address this problem, many studies are being conducted to improve localization accuracy through the fusion of the various sensors mounted on autonomous vehicles. In particular, 3D Light Detection and Ranging (LIDAR) is used as a key sensor for precise vehicle localization. As LIDAR provides accurate distance and reflectivity information for surrounding objects, it can estimate the vehicle position relative to those objects very accurately. Therefore, precise vehicle localization is possible using 3D LIDAR if precision maps of the surroundings are available.
Sensors 2018, 18, 3179; doi:10.3390/s18103179
www.mdpi.com/journal/sensors
Precision maps are essential for precise vehicle localization. In general, precision maps for LIDAR-based precise vehicle localization are divided into point maps, line maps, 2D/2.5D plane maps, and 3D maps.

First, there are the most basic localization methods, based on point maps. In [2,3], the line components of a building wall were extracted from LIDAR data, and the end-points of the lines or corners where two lines met were detected and used for localization. In these papers, however, the quantitative localization accuracy was not derived, and only the point landmark detection performance was analyzed. The study in [4] used columns such as street trees and lamps as point landmarks based on their characteristic of being perpendicular to the ground. However, the localization produced position errors higher than 0.5 m in many areas. The study in [5] detected the vertical corners of buildings and used them for precise vehicle localization. Such vertical corners are perpendicular to the ground and can be expressed as point landmarks on a 2D horizontal plane. For this reason, the vertical corner map had a very small data size (28 KB). However, vertical corners were not detected in some areas, and in such areas an increase in the vehicle position error is inevitable. Furthermore, this method cannot be used in areas that have no buildings.

Secondly, there are localization methods that use line detection. In [6,7], the horizontal line components of a hallway were detected using a LIDAR mounted on an indoor mobile robot, and localization was performed using the distance and angle information between the lines and the robot. In an indoor hallway, it is easy to detect the line components of the wall. Outdoors, however, line detection is not easy, as the surrounding buildings and structures have extremely complicated shapes and are hidden by obstacles such as street trees.
For autonomous vehicle localization, road markings such as lanes are typically used. Such road markings can be detected using the reflectivity information of LIDAR and are used to create a line map [8]. In addition, it is possible to correct the position error of a vehicle by matching the detected lane and the line map [9]. In the study of [9], the curb was extracted using LIDAR; from the curb, the area of interest was set and lanes were detected. As a result of localization, the average errors in the lateral and longitudinal directions were 0.14 and 0.20 m, respectively. However, position errors higher than 0.5 m occurred in many sections.

Thirdly, there are localization methods that use plane maps. Hybrid maps were generated by integrating lane and landmark information with 2D occupancy grid maps, and localization was performed using the hybrid maps [10–12]. In [13], a multilevel surface map was used for localization in a multi-floor parking lot. Studies in which 2D LIDAR data were accumulated on a plane and applied to localization were also conducted [14,15]. In [16,17], a road reflectivity map was generated and used to perform localization. However, the road reflectivity map was significantly affected by changes in weather and illumination, as well as by the presence of vehicles. To deal with such shortcomings, methods using vertical structures were researched [18,19]. A multiresolution Gaussian mixture map, in which the height information of vertical structures is stored in a 2D grid map, was proposed. Localization using this method produced RMS position errors of 10 and 13 cm in the lateral and longitudinal directions, respectively. Despite the excellent localization performance, the multiresolution Gaussian mixture map had a very large data size (44.3 MB/km). Most localization methods using plane maps require extremely large amounts of calculation for map matching.

Finally, there are localization methods based on 3D maps.
Recently, localization methods using Normal Distribution Transformation (NDT) scan matching have been researched [20,21]. NDT scan matching exhibits relatively accurate results, with a standard deviation of the horizontal position error of less than approximately 30 cm. However, the calculation time is very long (approximately 2.5 s). In general, localization accuracy and reliability increase with the amount of information. On the other hand, the data file size and the amount of calculation for map matching increase as well. For real-time vehicle localization, the data file size and calculation amount must be small. Therefore,
it is important to ensure the highest localization accuracy and reliability using a map with a small amount of information.

As lanes must exist on roads where vehicles travel, map production companies are producing lane maps most preferentially. Furthermore, such lane maps are stored in line form to minimize the data file size. However, among the existing LIDAR-based localization methods, those with high localization accuracy and reliability mostly use maps with very large data sizes. Therefore, research to ensure high localization accuracy using actually produced line maps is necessary. Consequently, considering the compatibility with actually produced maps as well as the data size, it is deemed most effective that maps for LIDAR-based localization also have a line form.

In this paper, a road reflectivity map and an occupancy grid map for surrounding vertical structures were generated, and line components were extracted from each map. The extracted lines were stored in a map in node and link forms. This map is called an extended line map (ELM) in this paper. As an ELM includes all information for road markings and vertical structures, it can mutually supplement the shortcomings of both, and thus can ensure high localization accuracy, reliability, and availability. Although the road marking information is stored in the ELM in line form, the type of road marking to which each line belongs is not expressed. Likewise, for vertical structures, it is not expressed whether each line belongs to a building or a traffic sign. In the case of ELM-based localization, the ELM is converted into an occupancy grid map and is used for correlation matching. For this reason, whether the lines included in the ELM are actual lines is not important; what matters is that certain road markings or vertical structures containing lines are present in the area. As a result, as no lines can be detected from areas where nothing exists, it is not necessary to verify whether the lines were properly detected when generating an ELM. Figure 1 describes the generation process of an ELM (refer to Section 2).
Figure 1. Extended line map (ELM) generation process.
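The paper reports the ELM's compactness (134 KB/km) but does not specify its file layout beyond "node and link forms". The sketch below shows one plausible compact encoding; the function name, field widths, and byte layout are assumptions for illustration only, not the authors' format. It makes clear why a line map stays small: each shared endpoint is stored once, and each link costs only two indices.

```python
import struct

def encode_elm(segments):
    """Pack line segments into a hypothetical compact node/link byte layout.

    `segments` is a list of ((x1, y1), (x2, y2)) endpoint pairs in metres.
    Each unique endpoint becomes a node (two float32 values); each segment
    becomes a link of two uint32 node indices. The layout is illustrative.
    """
    nodes, index, links = [], {}, []
    for p1, p2 in segments:
        ids = []
        for p in (p1, p2):
            if p not in index:               # deduplicate shared endpoints
                index[p] = len(nodes)
                nodes.append(p)
            ids.append(index[p])
        links.append(tuple(ids))
    blob = struct.pack("<II", len(nodes), len(links))   # header: counts
    for x, y in nodes:
        blob += struct.pack("<ff", x, y)
    for a, b in links:
        blob += struct.pack("<II", a, b)
    return blob

# Two segments sharing a corner node: 3 nodes + 2 links.
segs = [((0.0, 0.0), (10.0, 0.0)), ((10.0, 0.0), (10.0, 5.0))]
blob = encode_elm(segs)
# 8 (header) + 3*8 (nodes) + 2*8 (links) = 48 bytes
```

At roughly 24 bytes per isolated segment, a few thousand segments per kilometre would stay within the reported data size.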
When localization is performed using an ELM generated through the process shown in Figure 1, the ELM is converted back into an occupancy grid map, and correlation matching is performed with the occupancy grid map generated using the currently acquired LIDAR data. This matching result is used as the measurement of a Kalman Filter (KF). Figure 2 describes the localization process using an ELM (refer to Section 3).
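Using the matching result as a KF measurement can be sketched with a standard Kalman update. The 2-D position state, the identity observation model, and all numbers below are illustrative assumptions, not the paper's actual filter design (detailed in Section 3):

```python
import numpy as np

def kf_update(x, P, z, R):
    """Standard Kalman filter measurement update.

    Here the position found by correlation matching plays the role of the
    measurement z of the 2-D vehicle position, so the observation model H
    is simply the identity.
    """
    H = np.eye(2)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)         # correct prediction toward the match
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

# Illustrative numbers: uncertain prediction (P = 1 m^2), sharp match (R = 0.04).
x, P = np.array([10.0, 5.0]), np.eye(2) * 1.0
z, R = np.array([10.5, 5.2]), np.eye(2) * 0.04
x, P = kf_update(x, P, z, R)
```

Because the match covariance is much smaller than the prediction covariance, the updated position moves almost all the way to the matched position.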
Figure 2. Localization process using ELM.
Sensors 2018, x FOR PEER REVIEW The 3D16,LIDAR sensor used
4 of 28 in this paper was a Velodyne HDL-32E, which was installed on top of a vehicle, as shown in Figure 3. In addition, layers 1 through 16 were used to generate an occupancy grid map for vertical structures, and layers 17 through 32 were used to generate a road reflectivity map. The proposed ELM has the following benefits:
• • • •
It includes all information for road markings and vertical structures. It has a very small data file size (approximately 134 KB/km). It can be generated through a map generation algorithm, and no verification procedure is required. It is compatible with actual produced line maps for lanes. In addition, the proposed ELM-based localization method has the following benefits:
• •
It meets the localization accuracy requirements for autonomous driving. As it is used after being converted into an occupancy grid map, line detection and data association processes are not required. • A Fast Fourier TransformFigure (FFT)2.2.can be applied to using the correlation matching of the binary Figure Localization process using ELM. Localization process ELM. occupancy grid map, and the correlation matching time is very short (78 ms on average). The sensor used ininthis was HDL-32E, which was installed on on toptop of The3D 3DLIDAR sensor thispaper paper wasaand aVelodyne Velodyne which was installed Section 2LIDAR describes howused to generate an ELM, Section 3HDL-32E, explains the ELM-based localization aofvehicle, as shown in Figure 3. In addition, layers 1 through 16 were used to generate an occupancy a vehicle, as 4shown in Figure 3. In addition, layersperformance, 1 through 16 were used5 to generatethis an method. Section analyzes the ELM-based localization and Section concludes grid map forgrid vertical structures, and layers 17 through 32 were used to32 generate a road reflectivity map. occupancy map for vertical structures, and layers 17 through were used to generate a road paper. reflectivity map. The proposed ELM has the following benefits: • • • •
It includes all information for road markings and vertical structures. It has a very small data file size (approximately 134 KB/km). It can be generated through a map generation algorithm, and no verification procedure is required. It is compatible with actual produced line maps for lanes. In addition, the proposed ELM-based localization method has the following benefits:
• • •
It meets the localization accuracy requirements for autonomous driving. As it is used after being converted into an occupancy grid map, line detection and data association processes are not required. A Fast Fourier Transform (FFT) can be applied to the correlation matching of the binary Figure 3. (Left) Installation position of Velodyne HDL-32E (Right) vertical specifications. occupancy gridInstallation map, and the correlation matching timeand is very short (78 mslayer on average). Figure 3. (Left) position of Velodyne HDL-32E and (Right) vertical layer specifications.
Section 2 describes howthe to generate ELM, and Section 3 explains the ELM-based localization The proposed ELM has followingan benefits: method. Section 4 analyzes the ELM-based localization performance, and Section 5 concludes this •paper. It includes all information for road markings and vertical structures. • It has a very small data file size (approximately 134 KB/km). • It can be generated through a map generation algorithm, and no verification procedure is required. • It is compatible with actual produced line maps for lanes. • In addition, the proposed ELM-based localization method has the following benefits: • It meets the localization accuracy requirements for autonomous driving. • As it is used after being converted into an occupancy grid map, line detection and data association processes are not required. • A Fast Fourier Transform (FFT) can be applied to the correlation matching of the binary occupancy grid map, and the correlation matching time is very short (78 ms on average). Section 2 describes how to generate an ELM, and Section 3 explains the ELM-based localization method. Section 4 analyzes the ELM-based localization performance, and Section 5 concludes this paper. Figure 3. (Left) Installation position of Velodyne HDL-32E and (Right) vertical layer specifications.
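As a rough illustration of why FFT-based correlation matching is fast, the sketch below aligns two toy binary occupancy grids. The grid contents and the pure-translation search are illustrative simplifications; the paper's matcher operates on real map and scan grids.

```python
import numpy as np

def fft_match(map_grid, scan_grid):
    """Find the (row, col) shift that best aligns scan_grid with map_grid.

    Computing all circular cross-correlation values at once via the FFT
    replaces an exhaustive cell-by-cell sliding search, which is what makes
    correlation matching fast on large occupancy grids.
    """
    corr = np.fft.ifft2(np.fft.fft2(map_grid)
                        * np.conj(np.fft.fft2(scan_grid))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices in the upper half of the range to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Toy binary grids: the same square pattern, shifted by (2, 3) cells.
grid = np.zeros((32, 32))
grid[5:9, 5:9] = 1.0
scan = np.roll(grid, (-2, -3), axis=(0, 1))
print(fft_match(grid, scan))   # → (2, 3)
```

For an N-cell grid, this costs O(N log N) instead of the O(N²) of a brute-force sliding correlation, consistent with the short matching time reported above.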
2. How to Generate an ELM

As described in the paper of [17], when vehicle localization is performed through a road reflectivity map and 2D plane matching, it is not necessary to know what sign a certain road marking represents. In other words, it is not necessary for the map to contain all of the shape information. This means that road markings can be modeled in simple forms and applied to localization when the 2D plane-matching method is used.

All roads where vehicles travel have lanes drawn on them, and they also include many additional road markings, such as stop lines, crosswalks, and arrows. Road maps currently being produced necessarily include information on lanes, typically in the form of lines. Such lane maps have already been applied to vehicle localization [8,9]. However, as most road markings other than lanes also have line-like forms, they can all be expressed as a set of lines. Therefore, line maps with the same form as existing lane maps can be generated, converted into 2D plane maps, and applied to vehicle localization.

It is difficult to use road markings in areas with traffic congestion. In urban areas, however, many tall buildings are present around the roads, and they can always be scanned regardless of traffic congestion. Therefore, it is necessary to use buildings to increase localization availability. Furthermore, the outer walls of urban buildings are mostly composed of planes and are perpendicular to the ground. For this reason, such outer walls are mostly expressed as lines when a 2D occupancy grid map is generated using 3D LIDAR. Such lines have the same form as the lines extracted from road markings. In summary, lines are extracted from the reflectivity map of the road surface and from the occupancy grid map of the buildings, and the extracted lines are converted into node and link forms. Finally, the position of each node is stored in the ELM. Next, the method for ELM generation is explained.

2.1. Vehicle Trajectory Optimization

An experiment was carried out in the Gangnam area of Seoul, South Korea. Figure 4 shows the vehicle trajectory and the street view at the four intersections.
Figure 4. Vehicle trajectory and street view (four intersections).
As shown in Figure 4, the experimental environment is a dense urban area surrounded by tall buildings, and two laps were driven from the starting point to the finishing point. The traveling distance was approximately 4.2 km, and the maximum and average traveling speeds were approximately 80 km/h and 32 km/h, respectively. The position of the vehicle was acquired by using the integrated system of the RTK GPS and Inertial Navigation System (INS) (NovAtel RTK/SPAN
system). where there there are are many many tall tall buildings, buildings,the theposition positionerror errorof ofthe theRTK/INS RTK/INS system). In In an an environment environment where is is about 1–2 m. Therefore, the vehicle trajectory must be corrected to generate the precision about 1–2 m. Therefore, the vehicle trajectory must be corrected to generate the precision map.map. The The vehicle trajectory optimized using theGraphSLAM GraphSLAMmethod. method.Figure Figure55 shows shows the vehicle trajectory waswas optimized byby using the the graph graph optimization result of the vehicle trajectory. optimization result of the vehicle trajectory.
Figure 5. Graph optimization result of vehicle trajectory using GraphSLAM.
As shown at the top left of Figure 5, the road reflectivity maps for the two laps do not match. On the right of Figure 5, the red points represent the corrected vehicle trajectory after the graph optimization. As shown at the bottom left of Figure 5, the road reflectivity map matches exactly. Here, the incremental pose information output by the Iterative Closest Point (ICP) algorithm was used as an edge measurement of the graph. The theory and principles of the GraphSLAM method are well described in [22,23]. Thus, the optimized vehicle trajectory can be obtained by using GraphSLAM. In this paper, this optimized vehicle trajectory was considered the ground truth.

2.2. Line Extraction from Road Reflectivity Map

First, a road reflectivity map with a grid size of 15 cm was generated using the vehicle position optimized in Section 2.1. Figure 6 shows the generated road reflectivity map. The reflectivity map of Figure 6 was generated using layers 17 through 32 of the 3D LIDAR. As seen in the figure, the reflectivity map includes the reflected parts of sidewalks or nearby vehicles.
These parts must be eliminated, as they may degrade the localization performance. Plane extraction was performed to extract only the LIDAR points reflected from the road. Figure 7 shows the road reflectivity map generated after plane extraction.
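The paper does not detail its plane-extraction algorithm; one common choice for isolating the road surface is a RANSAC plane fit, sketched below on a synthetic cloud. The point distributions and thresholds are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, rng=None):
    """Fit the dominant plane of an (N, 3) point cloud with RANSAC.

    Repeatedly fits a plane to 3 random points and keeps the hypothesis with
    the most inliers within `tol` metres. Returns the inlier mask, i.e. the
    points considered to belong to the road surface.
    """
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Synthetic scan: 300 road points near z = 0 plus 60 off-road points.
rng = np.random.default_rng(0)
road = np.column_stack([rng.uniform(-10, 10, (300, 2)),
                        rng.normal(0.0, 0.01, 300)])
clutter = rng.uniform([-10, -10, 0.5], [10, 10, 3.0], (60, 3))
cloud = np.vstack([road, clutter])
mask = ransac_plane(cloud, rng=1)
```

Keeping only the masked points before projecting reflectivity into the grid removes most returns from sidewalks and nearby vehicles.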
Figure 6. Generated road reflectivity map (before plane extraction).
Figure 7. Generated road reflectivity map (after plane extraction).
In Figure 7, the sidewalks have not been completely eliminated, but most of the unnecessary parts have been removed. Now, lines must be extracted from the reflectivity map shown in Figure 7. In the reflectivity map, however, the areas other than road markings are also filled with certain reflectivity values. As lines could be extracted from these areas, it is necessary to eliminate their values. In general, the reflectivity value differs greatly between road markings and other areas, so the values of such areas can be eliminated in a simple manner through binarization. In this paper, binarization was performed using the Otsu thresholding method [24]. Figure 8 shows the results of applying binarization to the reflectivity map of Figure 7.
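Otsu's method picks the threshold that maximizes the between-class variance of the two resulting classes, which suits the strong paint-versus-asphalt contrast described above. A minimal sketch on synthetic reflectivities (the distributions are illustrative, not measured values):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Compute the Otsu threshold of a 1-D sample by maximizing the
    between-class variance over a histogram of the values."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # weight of the "dark" class
    mu = np.cumsum(p * centers)       # cumulative first moment
    mu_t = mu[-1]                     # global mean
    denom = w0 * (1.0 - w0)
    sigma_b = np.zeros_like(denom)
    valid = denom > 0
    # Between-class variance: (mu_t*w0 - mu)^2 / (w0*(1 - w0))
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / denom[valid]
    return centers[np.argmax(sigma_b)]

# Bimodal toy reflectivities: dark asphalt near 20, bright paint near 200.
rng = np.random.default_rng(0)
refl = np.concatenate([rng.normal(20, 5, 5000), rng.normal(200, 10, 500)])
t = otsu_threshold(refl)
binary = refl > t          # True for road-marking samples
```

Because the threshold is derived from the histogram itself, no fixed reflectivity cut-off has to be tuned per map.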
Figure 8. Binarized road reflectivity map.
As shown in Figure 8, some parts of the sidewalks and curbs have not been eliminated, but most of the road markings remain. Now, lines are extracted from the binary map shown in Figure 8. A Hough transform was used as the line extraction algorithm. Figure 9 shows the line extraction results for the road markings.
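The Hough transform step can be sketched as follows: every occupied cell of the binary map votes for all (rho, theta) line parameters it could lie on, and peaks in the vote accumulator correspond to lines. This minimal version (grid size, angle resolution, and peak selection are illustrative, and production code would typically use a library implementation) shows the idea:

```python
import numpy as np

def hough_lines(binary, n_theta=180, top_k=1):
    """Minimal Hough transform for a binary grid.

    Each occupied cell (x, y) votes along rho = x*cos(theta) + y*sin(theta)
    for every angle bin; accumulator peaks give the dominant lines as
    (rho, theta) pairs.
    """
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*binary.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    rhos = np.round(xs[:, None] * np.cos(thetas)
                    + ys[:, None] * np.sin(thetas)).astype(int)
    for col in range(n_theta):            # accumulate votes per angle bin
        np.add.at(acc[:, col], rhos[:, col] + diag, 1)
    flat = np.argsort(acc, axis=None)[::-1][:top_k]
    r, t = np.unravel_index(flat, acc.shape)
    return [(int(ri) - diag, float(thetas[ti])) for ri, ti in zip(r, t)]

# A vertical stripe of "paint" at x = 7 votes strongest at (rho=7, theta=0).
grid = np.zeros((50, 50), dtype=bool)
grid[5:45, 7] = True
print(hough_lines(grid))   # → [(7, 0.0)]
```

Raising `top_k` returns several peaks, which is how multiple parallel lines can emerge from one thick marking, as observed in Figure 9.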
Figure 9. Line extraction results for road markings.
Figure it can seen that lines have been extracted from most the road markings.As Ascan In In Figure 9, 9, it can bebe seen that lines have been extracted from most ofof the road markings. can be seen from the right-hand figure, multiple lines have been extracted from thick road markings. be seen from the right-hand figure, multiple lines have been extracted from thick road markings. Therefore, actualinformation informationon onthe the shape shape can can be be retained as as thethe Therefore, thethe actual retainedas asmuch muchasaspossible. possible.However, However, parts markedwith withblue bluecircles circles show, show, lines lines have have been notnot road parts marked been extracted extractedfrom fromthe theparts partsthat thatareare road markings. It is difficult to eliminate these parts because they have the same reflectivity markings. It is difficult to eliminate these parts because they have the same reflectivity characteristics characteristics as road markings even though they are not road markings. However, the incorrectly as road markings even though they are not road markings. However, the incorrectly detected lines detected lines do not significantly affect the correlation matching results because many clear road do not significantly affect the correlation matching results because many clear road markings are markings are present nearby. present nearby. 2.3. Line Extraction from Occupancy Grid Map 2.3. Line Extraction from Occupancy Grid Map This paper uses the fact that the outer walls of buildings are expressed as lines on the 2D This paper uses the fact that the outer walls of buildings are expressed as lines on the 2D horizontal horizontal plane. To extract highly reliable line information, a 2D probabilistic occupancy grid map plane. To extract highly reliable line information, a 2D probabilistic occupancy grid map for vertical for vertical structures was generated, and lines were extracted from the map. 
Figure 10 shows the generated 2D probabilistic occupancy grid map.
Figure 10. Generated 2D probabilistic occupancy grid map.

As can be seen from Figure 10, the outer walls of buildings appear as lines on the 2D plane map. However, the occupancy probability for the outer walls of the buildings is low owing to the influence of the street trees or building forms, and many street trees around the road remain on the map. As only lines are required on the map to be used for localization, it is necessary to eliminate unnecessary parts as much as possible. For this, line components are extracted from the LIDAR point cloud, and a probabilistic occupancy grid map is generated using only the extracted points. In this case, street trees can be effectively removed from the probabilistic occupancy grid map, and the occupancy probability for the building outer walls can be increased. There are several methods to extract lines from the LIDAR point cloud [2,25–27]. Among these, the Iterative-End-Point-Fit (IEPF) algorithm exhibits the best performance in terms of accuracy and calculation time [28]. Figure 11 shows the results of line extraction using the IEPF algorithm.
Figure 11. Line extraction result using the IEPF algorithm.
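In outline, IEPF fits a chord between the end points of a scan segment, splits the segment at the interior point farthest from that chord, and recurses until every point lies within a distance threshold. A minimal sketch of this idea (the threshold value and function names are illustrative, not taken from the paper):

```python
import numpy as np

def point_line_dist(pts, a, b):
    """Perpendicular distance of each point in pts to the line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = np.hypot(dx, dy)
    return np.abs(dx * (pts[:, 1] - a[1]) - dy * (pts[:, 0] - a[0])) / length

def iepf(pts, thresh=0.1):
    """Iterative-End-Point-Fit: recursively split a point sequence into
    line segments; returns (start, end) index pairs into pts."""
    if len(pts) < 3:
        return [(0, len(pts) - 1)]
    dist = point_line_dist(pts[1:-1], pts[0], pts[-1])
    i = int(np.argmax(dist)) + 1          # farthest interior point
    if dist[i - 1] > thresh:
        left = iepf(pts[:i + 1], thresh)
        right = [(s + i, e + i) for s, e in iepf(pts[i:], thresh)]
        return left + right
    return [(0, len(pts) - 1)]
```

For an L-shaped scan, the recursion splits once at the corner and returns two segments.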
In Figure 11, most of these line segments are a set of scan points from the leaves of the roadside trees. Therefore, the laser data reflected by the leaves of roadside trees must be removed. Figure 12 shows the LIDAR points reflected by the leaves and the outer wall of the building.
Figure 12. Light Detection and Ranging (LIDAR) points reflected by leaves of roadside trees and outer wall of building.
As shown in Figure 12, the features of the LIDAR points that are reflected by the two types of objects are clearly distinguished. For roadside trees, the variance of the distance errors between the extracted line and each point is very large. On the other hand, for the outer wall of a building, the variance of the distance errors is very small. Figure 13 shows the pseudocode for outlier removal.
Figure outlierremoval. removal. Figure13. 13.Pseudocode Pseudocode for for outlier
By using the pseudocode, outliers such as roadside trees can be removed. Figure 14 shows the line extraction result after the outlier removal.
Figure 14. Line extraction result from LIDAR points (after outlier removal).

In Figure 14, the green points represent the line segments that are finally extracted. The probabilistic occupancy grid map is generated again using only the LIDAR points corresponding to the extracted lines. Figure 15 shows the probabilistic occupancy grid map generated by applying the IEPF algorithm.
Figure 15. Probabilistic occupancy grid map generated by applying the IEPF algorithm.
As can be seen from Figure 15, many street trees have been eliminated, but some of them remain. Furthermore, the vertical structures reflecting only some layers have low probability values. However, it can be seen that the probability values for the building outer walls are higher in Figure 15 than in Figure 10. This is because only lines with relatively high reliability have been mapped to the map through the IEPF algorithm. Binarization is performed to remove grids with low probabilities. A value between 0 and 1 is stored in each grid. This paper assumes that grids with values equal to or higher than 0.5 were occupied. Figure 16 shows the final generated occupancy grid map after binarization.
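The binarization step described above is a single threshold comparison on the probability grid; for instance:

```python
import numpy as np

def binarize(prob_grid, thresh=0.5):
    """Cells with occupancy probability >= thresh become 1, the rest 0."""
    return (prob_grid >= thresh).astype(np.uint8)
```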
Figure 16. Generated occupancy grid map.
Figure 16 shows that most street trees have been removed and only the outer walls of buildings or traffic signs remain. Next, lines were extracted from this occupancy grid map. As with Section 2.2, a Hough transform is used as the line extraction algorithm. Figure 17 shows the final line extraction result.
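A Hough transform accumulates, for every occupied cell, votes over all candidate (ρ, θ) line parameters; peaks in the accumulator then correspond to dominant lines. A minimal sketch (one-degree angular bins and one-cell ρ bins are illustrative choices, not the paper's settings):

```python
import numpy as np

def hough_lines(grid, n_theta=180):
    """Accumulate (rho, theta) votes for every occupied cell of a binary grid.
    Returns the accumulator and the rho index offset."""
    ys, xs = np.nonzero(grid)
    thetas = np.deg2rad(np.arange(n_theta))        # 0..179 degrees
    diag = int(np.ceil(np.hypot(*grid.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), evaluated for all thetas at once
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rho + diag, np.arange(n_theta)] += 1
    return acc, diag
```

A horizontal row of occupied cells at y = 3, for example, produces a peak at θ = 90° and ρ = 3.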
Figure 17. Line extraction result for vertical structures (incorrectly detected lines in a blue circle).
In Figure 17, lines have been extracted from parts other than buildings, such as the areas marked with a blue circle. However, as with the case of road markings, the incorrectly detected lines do not significantly affect correlation matching because many other vertical structures are present nearby. In this way, lines for vertical structures can be extracted.
2.4. Generation of ELM

The lines extracted in Sections 2.2 and 2.3 were converted into nodes and links, and stored in a map. Table 1 presents an example of the ELM.

Table 1. Example of ELM.

Link Index | Type         | Node 1 Latitude | Node 1 Longitude | Node 2 Latitude | Node 2 Longitude
1          | Road Marking | 37.5116372276   | 127.0574300149   | 37.5116732539   | 127.0574065740
2          | Building     | 37.5124894367   | 127.0579568743   | 37.5125936427   | 127.0578932581
As shown in Table 1, the position information of each line was stored in a text file. The data size of the created ELM was approximately 134 KB/km. The ELM has a significantly smaller data size than
other maps used for 3D LIDAR-based localization. Figure 18 shows the ELM information displayed on a synthetic map of the 2D occupancy grid and road reflectivity maps.
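Because each ELM record is just a link index, a type label, and two node coordinates (Table 1), loading the map is a few lines of parsing. A sketch assuming one whitespace-separated record per line in the field order of Table 1 (the exact on-disk layout beyond Table 1 is an assumption):

```python
from typing import NamedTuple, Tuple

class ElmLink(NamedTuple):
    index: int
    kind: str                   # e.g., "Road Marking" or "Building"
    node1: Tuple[float, float]  # (latitude, longitude)
    node2: Tuple[float, float]

def parse_elm_record(line: str) -> ElmLink:
    """Parse one record: index, type, lat1, lon1, lat2, lon2 (Table 1 order)."""
    parts = line.split()
    lat1, lon1, lat2, lon2 = map(float, parts[-4:])
    kind = " ".join(parts[1:-4])    # the type label may contain spaces
    return ElmLink(int(parts[0]), kind, (lat1, lon1), (lat2, lon2))
```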
Figure 18. Generated ELM (on synthetic map).
In Figure 18, the yellow star and red star represent the positions of node 1 and node 2, respectively. The green line connects node 1 and node 2. In this study, localization was performed using an ELM created in this way.

3. ELM-Based Localization Method

3.1. Correlation-Based Matching Using ELM
As shown in Figure 2, localization was performed through correlation matching of the occupancy grid map converted from the ELM and the occupancy grid map generated using the current LIDAR data. First, nodes present within a certain distance from the current vehicle position were selected from the ELM. The current position information is acquired from a commercial GPS/Dead Reckoning (DR) sensor. Figure 19 shows the node and link information of the ELM present in the area of interest on the 2D plane. Furthermore, the lines in Figure 19 can be mapped to the grid map. Figure 20 shows the result of converting the line map into an occupancy grid map.
Figure 19. ELM in area of interest.
Figure 20. Result of converting Figure 19 into occupancy grid map.
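Converting the selected links back into a grid amounts to drawing each line segment into a zeroed raster at the map resolution. A sketch using dense sampling along each segment (the grid size and resolution parameters, and the sampling scheme itself, are illustrative choices):

```python
import numpy as np

def lines_to_grid(segments, size=1081, res=0.15):
    """Rasterize 2D line segments ((x0, y0), (x1, y1)), in meters relative to
    the grid origin, into a size x size binary grid with res meters per cell."""
    grid = np.zeros((size, size), dtype=np.uint8)
    for (x0, y0), (x1, y1) in segments:
        n = int(np.hypot(x1 - x0, y1 - y0) / res) + 1   # samples per segment
        for t in np.linspace(0.0, 1.0, n + 1):
            c = int((x0 + t * (x1 - x0)) / res)          # column from x
            r = int((y0 + t * (y1 - y0)) / res)          # row from y
            if 0 <= r < size and 0 <= c < size:
                grid[r, c] = 1
    return grid
```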
Next, an occupancy grid map with the same size as the map of Figure 20 was generated using the current LIDAR data. First, a road reflectivity map was generated and binarized in the same way as the case of ELM generation. Then, a probabilistic occupancy grid map for vertical structures was generated, and an occupancy grid map was generated through binarization. The two generated maps were composed of 0 and 1. Therefore, the two maps were integrated to complete an occupied grid map. Figure 21 shows the generated binary occupancy grid map.
Figure 21. Binary occupancy grid map generated using current LIDAR data.

Subsequently, correlation matching was performed for the two occupancy grid maps of Figures 20 and 21. In this paper, an area of 160 m × 160 m was set as the area of interest, and the grid size was 15 cm. The maps used for matching became 1081 × 1081 matrices. Therefore, it was difficult to use the general serial-search-type correlation matching method. However, as the two occupancy grid maps were composed of only 0 and 1, the matching of the two maps could simply be calculated through multiplying the two matrices. Consequently, it was possible to apply an FFT. When correlation matching is performed using an FFT, the calculation time is approximately 78 ms (based on MATLAB). The time required for coordinate transformation of the 3D LIDAR point cloud and generation of the binary occupancy grid map is approximately 94 ms and 142 ms (based on MATLAB), respectively. Therefore, the localization execution time is less than 350 ms. In this paper, localization was performed by post-processing to verify the performance using all LIDAR data (10 Hz). Figure 22 shows the localization execution process.
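Because both grids are binary, the match score at every integer shift is just the sum of elementwise products, and all shifts can be evaluated at once with FFT-based cross-correlation. A minimal sketch of this computation (cyclic shifts; padding and border handling are omitted):

```python
import numpy as np

def fft_correlate(map_ref, map_cur):
    """Cross-correlate two equal-size binary grids over all cyclic shifts.
    corr[d] = sum over cells of map_ref shifted by d times map_cur, so the
    peak lies at (reference position - current position)."""
    f = np.fft.fft2(map_ref) * np.conj(np.fft.fft2(map_cur))
    corr = np.fft.ifft2(f).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past half the grid wrap around to negative shifts
    shift = tuple(int(p) - n if p > n // 2 else int(p)
                  for p, n in zip(peak, corr.shape))
    return corr, shift
```

For example, if the current map is the reference map translated by (+2, +4) cells, the recovered peak shift is (−2, −4).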
Figure 22. Localization execution process.
As shown in Figure 22, experiments were performed in a downtown area where many other vehicles were present. In the bottom center of the figure, the light green dots represent LIDAR scan data to be matched with the map. After these dots and the ELM in the area of interest were converted into binary occupancy grid maps, the results of the correlation matching (FFT) between the two maps were used as the measurements of the KF. In the bottom right of the figure, the red rectangle represents the currently estimated vehicle position, while the black rectangle represents the actual position (ground truth). The blue and light blue rectangles represent the positions of GPS/DR and RTK/INS, respectively. The fact that the red and black rectangles almost overlap indicates that the position of the vehicle was estimated accurately.

3.2. Kalman Filter Configuration

In this paper, the position error of the GPS/DR sensor was estimated using the map-matching results. For the GPS/DR sensor, a CruizCore DS6200 from Microinfinity (Suwon, South Korea) was
used, and the azimuth accuracy was within 5° in open space. Figures 23 and 24 show the position error and the attitude error of the GPS/DR sensor, respectively.
Figure 23. Position error of GPS/DR.
Figure 24. Attitude error of GPS/DR.
The position error was calculated based on the ground truth mentioned in Section 2.1, and the attitude error was calculated based on the roll, pitch, and yaw output values of the NovAtel RTK/SPAN System. Figure 23 shows that the position error of GPS/DR is up to 10 m. In addition, the sections where the position error changes rapidly are mostly sections where the vehicle rotates. Such rapid changes in the error may lead to large position errors during the vehicle position estimation.
As the position error of GPS/DR is very large in a downtown area, error correction through map matching is essential. In Figure 24, the roll and pitch errors were mostly within 2°, and the yaw error was mostly within 0.5°, indicating comparatively accurate results. Therefore, for the coordinate transformation of the 3D LIDAR data, the attitude information of GPS/DR was used as it was, and the attitude of the vehicle was not separately estimated. The state variable of the filter can be represented as Equation (1):

q = (δx, δy)^T    (1)
where δx and δy refer to the 2D horizontal position errors of a vehicle in the East-North-Up (ENU) coordinate system. The time update is conducted as presented in Equation (2):

q̃_k = F_k · q̂_(k−1),  F_k = [1 0; 0 1]    (2)
As represented in Equation (2), no change in the error of GPS/DR over a short period of time was assumed. The measurement equation is represented as Equation (3):

z_k = H_k · q̃_k + [Δx; Δy],  H_k = [1 0; 0 1]    (3)
where Δx and Δy refer to the result values of the correlation-based matching. The measurement update is conducted as presented in Equation (4):

q̂_k = q̃_k + K(z_k − H_k · q̃_k)    (4)
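With F and H both identity, Equations (1)–(4) reduce to a two-state error filter, where K is the Kalman gain computed from the covariances. A minimal sketch (the covariance recursion and the noise values p0, q_noise, r_noise are illustrative assumptions; the paper specifies only the state and measurement models):

```python
import numpy as np

class PositionErrorKF:
    """Error-state filter of Equations (1)-(4): q = (dx, dy), F = H = I."""
    def __init__(self, q0=(0.0, 0.0), p0=10.0, q_noise=0.1, r_noise=0.05):
        self.q = np.array(q0, dtype=float)   # estimated GPS/DR position error
        self.P = np.eye(2) * p0              # state covariance
        self.Q = np.eye(2) * q_noise         # process noise (error drifts slowly)
        self.R = np.eye(2) * r_noise         # measurement noise

    def time_update(self):
        # Equation (2): F = I, so the state is unchanged; covariance grows
        self.P = self.P + self.Q

    def measurement_update(self, z):
        # Equations (3)-(4): z = (dx, dy) from correlation-based matching
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain, H = I
        self.q = self.q + K @ (np.asarray(z) - self.q)
        self.P = (np.eye(2) - K) @ self.P
        return self.q
```

Fed a steady matching result, the estimate converges to that offset within a few updates.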
where K refers to the Kalman gain.

4. Experimental Results

In this paper, localization was performed for three cases: a case in which only road markings are used, a case in which only vertical structures are used, and a case in which both road markings and vertical structures are used (ELM). Figures 25 and 26 show the localization results of the case in which only road markings were used. Figure 25 shows the lateral position error. The RMS lateral position error is 0.143 m. Figure 26 shows the longitudinal position error. The RMS longitudinal position error is 0.389 m. When only road markings were used, the longitudinal position error was much larger than the lateral position error. As a road must have lanes, the lateral accuracy is high. To correct the longitudinal position error, certain road markings, such as crosswalks or stop lines, are necessary. On a number of roads, however, road markings other than lanes do not exist. Therefore, the longitudinal position error is somewhat higher.

Figures 27 and 28 show the localization results when only vertical structures were used. Figure 27 shows the lateral position error. The RMS lateral position error is 0.163 m. Figure 28 shows the longitudinal position error. The RMS longitudinal position error is 0.233 m. When only vertical structures were used, it was found that the longitudinal position error significantly decreased. This is because building walls and traffic signs perpendicular to the vehicle traveling direction can provide measurements for the longitudinal direction.
Figure 25. Lateral position error (road marking).
Figure 26. Longitudinal position error (road marking).
Likewise, lateral accuracyerror. is high there position are buildings both m. sides of the Figure 25the shows theposition lateral position Thebecause RMS lateral error on is 0.143 Figure 26 road. Vertical structures provide highly reliable measurements in a downtown area. However, as shows the longitudinal position error. The RMS longitudinal position error is 0.389 m. Whenthey only are typically tenswere of meters from the vehicle, the localization accuracy is significantly affected by road markings used,away the longitudinal position error was much larger than the lateral position the roll and pitch errors. As road markings are available closer to the vehicle, the localization accuracy error. As a road must have lanes, the lateral accuracy is high. To correct the longitudinal position inerror, an area withroad a large numbersuch of road markings or will be lines, higher when the road used certain markings, as crosswalks stop are necessary. Onmarkings a numberare of roads, than when vertical structures are than used.lanes Actually, as shown in Figure 25, largeposition positionerror errorsis however, road markings other do not exist. Therefore, thealthough longitudinal occasionally occur, the RMS position error is smaller than that when only vertical structures are used. somewhat higher. ItFigures is difficult to use road markings for localization in areas with traffic congestion. Figure 29 shows 27 and 28 show the localization results when only vertical structures were used. the results of localization using road markings in an area with traffic congestion.
Figure 27. Lateral position error (vertical structure).
Figure 28. Longitudinal position error (vertical structure).
As can be seen from the camera image of Figure 29, most road markings are hidden by surrounding vehicles. In the bottom center of the figure, road markings (light green dots) that can be matched to the map were not significantly detected. As shown at the top of the figure, lateral matching is well-performed owing to some lanes and curbs, but longitudinal matching is not performed well. As a result, in the bottom-right hand figure, the estimated vehicle position (red) has a large error in the longitudinal direction compared to the ground truth (black). On the other hand, in the same area, vertical structures can be scanned without the influence of surrounding vehicles. Figure 30 shows the results of localization using vertical structures in an area with traffic congestion.
Figure 29. Results of localization using road marking in area with traffic congestion.
Figure 30. Results of localization using vertical structures in area with traffic congestion.
As can be seen from Figure 30, longitudinal matching is performed well even in an area with traffic congestion. As a result, the bottom-right hand figure shows that the red rectangle overlaps the black one. In contrast to the area with traffic congestion, in an area with a small number of nearby buildings, the performance of localization using vertical structures can be degraded. Figure 31 shows the results of localization using vertical structures in an area with a small number of nearby buildings.
Sensors 2018, 18, 3179
21 of 28
Figure 31. Results of localization using vertical structures in an area with a small number of nearby buildings.
As can be seen from the camera image of Figure 31, there are not many buildings on the left side of the road. In this area, multiple peaks appear in the correlation shape for the lateral direction. Furthermore, the side peak is larger than the main peak. As a result, as shown in the bottom-right hand figure, the red rectangle has a lateral position error compared to the black one. On the other hand, if road markings are used in the same area, the lateral position error can be reduced. Figure 32 shows the results of localization using road markings in an area with a small number of nearby buildings.
Figure 32. Results of localization using road markings in an area with a small number of nearby buildings.
As shown in Figure 32, the main peak can be clearly detected in the correlation shape for the lateral direction. As a result, it can be seen that the estimated vehicle position has almost no lateral error.
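The multi-peak ambiguity discussed above can be screened numerically by comparing the main correlation peak against the largest side peak. The sketch below is purely illustrative and is not part of the paper's implementation; the function name and the `guard` width excluded around the main lobe are our own assumptions:

```python
import numpy as np

def peak_ambiguity(corr, guard=2):
    """Ratio of the main correlation peak to the largest side peak.

    A ratio near (or below) 1 indicates the ambiguous multi-peak
    situation described in the text; a large ratio indicates a
    clearly detectable main peak. 'guard' cells on each side of the
    main peak are excluded from the side-peak search (assumed value).
    """
    corr = np.asarray(corr, dtype=float)
    k = int(np.argmax(corr))                      # main peak location
    masked = corr.copy()
    masked[max(0, k - guard):k + guard + 1] = -np.inf  # blank out main lobe
    side = masked.max()                            # largest remaining peak
    return corr[k] / side if side > 0 else np.inf
```

A profile like the one in Figure 31 (side peak comparable to the main peak) would yield a ratio near 1, while a profile like Figure 32 yields a large ratio.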
As seen, road markings and vertical structures can supplement each other. Therefore, the use of an ELM, which includes both attributes, can exhibit further improved localization accuracy. Figures 33 and 34 show the results of localization using an ELM (road markings + vertical structures).
Figure 33. Lateral position error (ELM).
Figure 34. Longitudinal position error (ELM).
Figure 33 shows the lateral position error. The RMS lateral position error is 0.136 m. Figure 34 shows the longitudinal position error. The RMS longitudinal position error is 0.223 m. Figures 33 and 34 show that both the lateral and longitudinal position errors were reduced. In particular, the number of sections with large position errors was significantly reduced. Table 2 lists the localization performances of the three maps (road markings, vertical structures, and ELM).
Table 2. Localization performances of three maps.

                           Position Error (m)
Type of Map           RMS                     95% Confidence Level    99% Confidence Level
                      Lateral  Longitudinal   Lateral  Longitudinal   Lateral  Longitudinal
Road Marking          0.143    0.389          0.27     0.82           0.56     1.59
Vertical Structure    0.163    0.233          0.34     0.45           0.49     0.67
ELM                   0.136    0.223          0.29     0.42           0.42     0.60
Table 2 shows that the use of an ELM exhibited the smallest RMS position errors in both the lateral and longitudinal directions. At a 95% confidence level, road markings caused the smallest lateral position error, and the ELM led to the smallest longitudinal position error. At a 99% confidence level, the ELM exhibited the smallest lateral and longitudinal position errors. As seen, ELM-based localization shows higher position accuracy than the use of either road markings or vertical structures. This is because the amount of map information available at the time of localization is increased owing to the mutual complement of road markings and vertical structures. The localization accuracy improves as the amount of map information increases. When the ELM is converted into an occupancy grid map, this map is filled with only 0 and 1. In this case, if 0 and 1 are seen as the elevation of the terrain, the concept of terrain roughness can be used to determine the amount of map information [29–32]. The terrain roughness can be expressed using the two indices of sigma-T and sigma-Z, as follows:

\sigma_T = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(H_i - \bar{H}\right)^2}, \qquad \bar{H} = \frac{1}{N}\sum_{i=1}^{N} H_i   (5)

\sigma_Z = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(D_i - \bar{D}\right)^2}, \qquad D_i = H_{i+1} - H_i, \qquad \bar{D} = \frac{1}{N-1}\sum_{i=1}^{N-1} D_i   (6)
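As an illustration of Equations (5) and (6), the sketch below computes sigma-T and sigma-Z for a 1D elevation profile. This is our own NumPy rendering of the formulas, not the authors' code; the function name is an assumption:

```python
import numpy as np

def terrain_roughness(H):
    """Sigma-T / sigma-Z roughness of a 1D elevation profile.

    Sigma-T (Equation (5)) is the standard deviation of the elevation;
    sigma-Z (Equation (6)) is the standard deviation of the elevation
    differences of neighboring grids.
    """
    H = np.asarray(H, dtype=float)
    # Equation (5): std of elevation around its mean
    sigma_t = np.sqrt(np.mean((H - H.mean()) ** 2))
    # Equation (6): std of neighboring differences D_i = H_{i+1} - H_i
    D = np.diff(H)                       # N-1 differences
    sigma_z = np.sqrt(np.mean((D - D.mean()) ** 2))
    return sigma_t, sigma_z
```

For a binary (0/1) occupancy profile, both indices grow with the density and alternation of occupied cells, which matches the intuition that richer maps carry more matching information.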
In Equation (5), H_i is the elevation information of the i-th grid, and N is the number of grids in the map. Sigma-T is the standard deviation of the elevation, and sigma-Z is the standard deviation of the elevation difference of neighboring grids. In general, the localization accuracy is higher as the values of sigma-T and sigma-Z become higher. The ELM is expressed as a 2D occupancy grid map. Therefore, sigma-T and sigma-Z must be calculated on the 2D plane. In this paper, sigma-T and sigma-Z are calculated as follows:

\sigma_{T,\mathrm{Lateral}} = \frac{1}{N}\sum_{i=1}^{N}\sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(H_{ij} - \bar{H}_i\right)^2}, \qquad \bar{H}_i = \frac{1}{N}\sum_{j=1}^{N} H_{ij}

\sigma_{T,\mathrm{Longitudinal}} = \frac{1}{N}\sum_{j=1}^{N}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(H_{ij} - \bar{H}_j\right)^2}, \qquad \bar{H}_j = \frac{1}{N}\sum_{i=1}^{N} H_{ij}   (7)

\sigma_{Z,\mathrm{Lateral}} = \frac{1}{N}\sum_{j=1}^{N}\sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(D_{ij,\mathrm{Lat}} - \bar{D}_j\right)^2}, \qquad D_{ij,\mathrm{Lat}} = H_{i+1,j} - H_{ij}, \qquad \bar{D}_j = \frac{1}{N-1}\sum_{i=1}^{N-1} D_{ij,\mathrm{Lat}}

\sigma_{Z,\mathrm{Longitudinal}} = \frac{1}{N}\sum_{i=1}^{N}\sqrt{\frac{1}{N-1}\sum_{j=1}^{N-1}\left(D_{ij,\mathrm{Long}} - \bar{D}_i\right)^2}, \qquad D_{ij,\mathrm{Long}} = H_{i,j+1} - H_{ij}, \qquad \bar{D}_i = \frac{1}{N-1}\sum_{j=1}^{N-1} D_{ij,\mathrm{Long}}   (8)
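Equations (7) and (8) reduce to per-row and per-column standard deviations that are then averaged. The function below is an illustrative NumPy reimplementation under our reading of the index conventions (rows i along the longitudinal direction, columns j along the lateral direction); it is not the authors' code:

```python
import numpy as np

def directional_roughness(H):
    """Directional sigma-T / sigma-Z of an N x N grid map (Eqs. (7)-(8))."""
    H = np.asarray(H, dtype=float)
    # Sigma-T: average of per-row / per-column elevation standard deviations
    sigma_t_lat = np.mean(
        np.sqrt(np.mean((H - H.mean(axis=1, keepdims=True)) ** 2, axis=1)))
    sigma_t_long = np.mean(
        np.sqrt(np.mean((H - H.mean(axis=0, keepdims=True)) ** 2, axis=0)))
    # Sigma-Z: average of per-line std of neighboring-grid differences
    D_lat = np.diff(H, axis=0)    # D_ij,Lat  = H_{i+1,j} - H_{ij}
    D_long = np.diff(H, axis=1)   # D_ij,Long = H_{i,j+1} - H_{ij}
    sigma_z_lat = np.mean(
        np.sqrt(np.mean((D_lat - D_lat.mean(axis=0, keepdims=True)) ** 2, axis=0)))
    sigma_z_long = np.mean(
        np.sqrt(np.mean((D_long - D_long.mean(axis=1, keepdims=True)) ** 2, axis=1)))
    return sigma_t_lat, sigma_t_long, sigma_z_lat, sigma_z_long
```

Dividing by N - 1 in Equation (8) corresponds to `np.mean` over the N - 1 difference terms produced by `np.diff`.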
where N is the map size in the area of interest based on the current vehicle position, and this map is an N × N matrix. For a localization performance analysis, this map is rotated using the current vehicle azimuth information so that the longitudinal direction faces north. Therefore, H_ij denotes the elevation of the grid located at the i-th position in the longitudinal direction and the j-th position in the lateral direction on the rotated map. Figures 35 and 36 show sigma-T for the lateral and longitudinal directions, respectively. Figures 37 and 38 show sigma-Z for the lateral and longitudinal directions, respectively.
Figure 35. Lateral roughness (Sigma-T).
Figure 36. Longitudinal roughness (Sigma-T).
Figure 37. Lateral roughness (Sigma-Z).
Figure 38. Longitudinal roughness (Sigma-Z).
Sigma-T is related to the magnitude of the correlation peak, while sigma-Z is related to the sharpness of the main peak. For this reason, both sigma-T and sigma-Z must be large to find the correlation peak more accurately. The above figures show that the roughness of the ELM is the highest. Furthermore, the lateral roughness generally has higher values than the longitudinal roughness. As a result, the lateral accuracy is generally higher than the longitudinal accuracy, as shown in Table 2. In addition, when the RMS position errors for road markings, vertical structures, and the ELM were compared with the roughness values, it was found that as the roughness increased, the RMS position error decreased. In Figures 34, 36, and 38, the points at which the longitudinal position error was 0.5 m or more also show low longitudinal roughness values.
As seen, the ELM has the highest roughness as well as the best localization performance. Although the ELM has a very small data file size (approximately 134 KB/km), it contains sufficient information for accurate localization. As it contains both road marking and vertical structure information, it is possible to continuously perform localization even in areas with traffic congestion or in areas without surrounding buildings. Furthermore, the ELM-based localization method sufficiently meets the localization accuracy requirements for autonomous driving.

5. Conclusions

This paper proposed an ELM-based precise vehicle localization method using 3D LIDAR. The proposed ELM has a very small data file size (134 KB/km). Furthermore, as it contains both road marking and vertical structure information, the ELM-based localization method exhibits better performance in terms of accuracy, reliability, and availability than methods that use either road markings or vertical structures. In addition, the ELM can be generated through the ELM generation algorithm, and no verification is required for the detected lines. Furthermore, as the ELM has a form that is very similar to actually produced line maps of lanes, it is easily compatible with similar maps. As a result of ELM-based localization, the RMS position errors for the lateral and longitudinal directions were 0.136 m and 0.223 m, respectively. These results sufficiently meet the localization accuracy requirements for autonomous driving. In addition, line detection and data association processes are not required because the correlation matching method is used. Furthermore, correlation matching can be performed using a Fast Fourier Transform (FFT) because only simple multiplications of 0s and 1s are required. The matching time using the FFT is approximately 78 ms. As seen, the ELM-based localization method can ensure high position accuracy using a map with a small data size.
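The FFT-based correlation matching mentioned above can be sketched as follows. This is an illustrative NumPy version (the paper's MATLAB implementation is not reproduced here); because both grids are binary, each correlation value simply counts overlapping occupied cells:

```python
import numpy as np

def fft_correlate(live_map, elm_map):
    """Circular cross-correlation of two binary occupancy grids via FFT.

    Returns the correlation surface and the 2D offset (row, column) of
    the correlation peak, i.e. the estimated shift of live_map
    relative to elm_map.
    """
    F = np.fft.fft2(live_map)
    G = np.fft.fft2(elm_map)
    corr = np.real(np.fft.ifft2(F * np.conj(G)))   # correlation theorem
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, (dy, dx)
```

Note that this sketch assumes circular (wrap-around) correlation; in practice the maps would be zero-padded so that wrap-around does not create spurious peaks.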
In the future, research on map generation and localization in more areas is required.

Author Contributions: Conceptualization, J.-H.I. and G.-I.J.; Methodology, J.-H.I.; Software, J.-H.I.; Validation, J.-H.I.; Formal Analysis, J.-H.I.; Investigation, J.-H.I., G.-I.J. and S.-H.I.; Resources, G.-I.J. and S.-H.I.; Data Curation, J.-H.I.; Writing-Original Draft Preparation, J.-H.I.; Writing-Review & Editing, J.-H.I., G.-I.J. and S.-H.I.; Visualization, J.-H.I.; Supervision, G.-I.J.; Project Administration, J.-H.I. and G.-I.J.; Funding Acquisition, G.-I.J.

Acknowledgments: This research was supported by a grant (18TLRP-C113269-03) from the Transportation Logistics Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

Conflicts of Interest: The authors declare no conflict of interest.
References

1. Green, D.; Gaffney, J.; Bennett, P.; Feng, Y.; Higgins, M.; Millner, J. Vehicle Positioning for C-ITS in Australia (Background Document); Austroads: Sydney, Australia, 2013.
2. Arras, K.O.; Siegwart, R. Feature extraction and scene interpretation for map-based navigation and map building. In Proceedings of the Symposium on Intelligent Systems and Advanced Manufacturing, Pittsburgh, PA, USA, 25 January 1997.
3. Li, Y.; Olson, E.B. Extracting general-purpose features from LIDAR data. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 4–8 May 2010.
4. Brenner, C. Vehicle localization using landmarks obtained by a LIDAR mobile mapping system. In Proceedings of the Photogrammetric Computer Vision and Image Analysis, Mandé, France, 1–3 September 2010.
5. Im, J.H.; Im, S.H.; Jee, G.I. Vertical corner feature based precise vehicle localization using 3D LIDAR in urban area. Sensors 2016, 16, 1268.
6. Zhang, X.; Rad, A.B.; Wong, Y.K. Sensor fusion of monocular cameras and laser rangefinders for line-based simultaneous localization and mapping (SLAM) tasks in autonomous mobile robots. Sensors 2012, 12, 429–452.
7. Hadji, S.E.; Hing, T.H.; Khattak, M.A.; Sultan, M.; Ali, M.; Kazi, S. 2D feature extraction in sensor coordinates for laser range finder. In Proceedings of the 15th International Conference on Robotics, Control and Manufacturing Technology (ROCOM '15), Kuala Lumpur, Malaysia, 23–25 April 2015.
8. Gwon, G.P.; Hur, W.S.; Kim, S.W.; Seo, S.W. Generation of a precise and efficient lane-level road map for intelligent vehicle systems. IEEE Trans. Veh. Technol. 2017, 66, 4517–4533.
9. Hata, A.Y.; Wolf, D.F. Feature detection for vehicle localization in urban environments using a multilayer LIDAR. IEEE Trans. Intell. Transp. Syst. 2016, 17, 420–429.
10. Hu, Y.; Gong, J.; Jiang, Y.; Liu, L.; Xiong, G.; Chen, H. Hybrid map-based navigation method for unmanned ground vehicle in urban scenario. Remote Sens. 2013, 5, 3662–3680.
11. Choi, J. Hybrid map-based SLAM using a Velodyne laser scanner. In Proceedings of the 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–10 October 2014.
12. Choi, J.; Maurer, M. Hybrid map-based SLAM with Rao-Blackwellized particle filters. In Proceedings of the 2014 17th International Conference on Information Fusion (FUSION), Salamanca, Spain, 7–10 July 2014.
13. Kummerle, R.; Hahnel, D.; Dolgov, D.; Thrun, S.; Burgard, W. Autonomous driving in a multi-level parking structure. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009.
14. Baldwin, I.; Newman, P. Road vehicle localization with 2D push-broom LIDAR and 3D priors. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012.
15. Chong, Z.J.; Qin, B.; Bandyopadhyay, T.; Ang, M.H.; Frazzoli, E.; Rus, D. Synthetic 2D LIDAR for precise vehicle localization in 3D urban environment. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013.
16. Levinson, J.; Montemerlo, M.; Thrun, S. Map-based precision vehicle localization in urban environments. In Robotics: Science and Systems; MIT Press: Cambridge, MA, USA, 2007.
17. Levinson, J.; Thrun, S. Robust vehicle localization in urban environments using probabilistic maps. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010.
18. Wolcott, R.W.; Eustice, R.M. Fast LIDAR localization using multiresolution Gaussian mixture maps. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015.
19. Wolcott, R.W.; Eustice, R.M. Robust LIDAR localization using multiresolution Gaussian mixture maps for autonomous driving. Int. J. Robot. Res. 2017, 36, 292–319.
20. Ulas, C.; Temeltas, H. A fast and robust feature-based scan-matching method in 3D SLAM and the effect of sampling strategies. Int. J. Adv. Robot. Syst. 2013, 10, 1–16.
21. Akai, N.; Morales, L.Y.; Takeuchi, E.; Yoshihara, Y.; Ninomiya, Y. Robust localization using 3D NDT scan matching with experimentally determined uncertainty and road marker matching. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Redondo Beach, CA, USA, 11–14 June 2017.
22. Grisetti, G.; Kummerle, R.; Stachniss, C.; Burgard, W. A tutorial on graph-based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43.
23. Sunderhauf, N.; Protzel, P. Towards a robust back-end for pose graph SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012.
24. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
25. Borges, G.A.; Aldon, M.J. Line extraction in 2D range images for mobile robotics. J. Intell. Robot. Syst. 2004, 40, 267–297.
26. Siadat, A.; Kaske, A.; Klausmann, S.; Dufaut, M.; Husson, R. An optimized segmentation method for a 2D laser-scanner applied to mobile robot navigation. In Proceedings of the 3rd IFAC Symposium on Intelligent Components and Instruments for Control Applications, Annecy, France, 9–11 June 1997.
27. Harati, A.; Siegwart, R. A new approach to segmentation of 2D range scans into linear regions. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, USA, 29 October–2 November 2007.
28. Nguyen, V.; Martinelli, A.; Tomatis, N.; Siegwart, R. A comparison of line extraction algorithms using 2D laser rangefinder for indoor mobile robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Edmonton, AB, Canada, 2–6 August 2005.
29. Siouris, G.M. Missile Guidance and Control Systems; Springer: New York, NY, USA, 2016.
30. Tsai, D.M.; Tseng, C.F. Surface roughness classification for castings. Pattern Recognit. 1999, 32, 389–405.
31. Weeks, R.J.; Smith, M.; Pak, K.; Li, W.H.; Gillespie, A.; Gustafson, B. Surface roughness, radar backscatter, and visible and near-infrared reflectance in Death Valley, California. J. Geophys. Res. 1996, 101, 23007–23090.
32. Wilson, G.N.; Ramirez-Serrano, A. Terrain roughness identification for high-speed UGVs. In Proceedings of the International Conference of Control, Dynamic Systems, and Robotics, Ottawa, ON, Canada, 15–16 May 2014.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).