
A Practical Point Cloud Based Road Curb Detection Method for Autonomous Vehicle

Rulin Huang 1, Jiajia Chen 2, Jian Liu 3,*, Lu Liu 1, Biao Yu 4 and Yihua Wu 5

1 School of Engineering, Anhui Agriculture University, Hefei 230036, China; [email protected] (R.H.); [email protected] (L.L.)
2 School of Automotive and Transportation Engineering, Hefei University of Technology, Hefei 230009, China; [email protected]
3 Department of Automation, University of Science and Technology of China, Hefei 230026, China
4 Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China; [email protected]
5 Ma’anshan Power Supply Company, Ma’anshan 243000, China; [email protected]
* Correspondence: [email protected]; Tel./Fax: +86-551-6539-3191

Received: 22 May 2017; Accepted: 21 July 2017; Published: 30 July 2017

Abstract: Robust and fast road curb detection under various conditions is critical in the development of intelligent vehicles, yet Lidar-based curb detection is easily disturbed by obstacles inside the road area. This paper reports a practical road curb detection method that uses the point cloud of a three-dimensional Lidar mounted on an autonomous vehicle. First, a multi-feature, loose-threshold, varied-scope ground segmentation method is presented to increase the robustness of ground segmentation, with which obstacles above the ground can be detected. Second, the road curb is detected by applying the global road trend and an extraction-update mechanism. Experiments show the robustness and efficiency of the road curb detection under various environments: the road curb detection method is ten times the speed of the traditional method, and its accuracy is much higher than that of existing methods.

Keywords: multi-beam Lidar; ground segmentation; road curb detection; autonomous vehicle

1. Introduction

Environment perception is an indispensable component of an autonomous vehicle as it provides real-time environment information. The driving safety and intelligence of an autonomous vehicle depend on an accurate and fast understanding of the driving environment. Among the elements that constitute the driving environment, road curbs are especially important because they bound the drivable area. To detect road curbs quickly and accurately, a variety of sensors are used, including cameras and lasers. In the past decades, great achievements have been made in computer vision, and some of the proposed methods have been successfully applied to detect lane marks, traffic participants (vehicles, bicycles, and pedestrians), traffic signs, and traffic lights [1–8]. However, cameras are sensitive to light changes and provide no distance information. Compared with cameras, Lidars provide accurate distance information and are not affected by light conditions. 2D Lidars were widely used to detect obstacles in advanced driver assistance systems [9], but their data are too sparse for the environment perception of autonomous vehicles. In recent research on autonomous vehicles, 3D Lidar plays an increasingly important role as it provides sufficient information, in the form of a point cloud, to model the 3D environment. A 3D Lidar is a kind of multi-beam Lidar which perceives the environment by transmitting and receiving multiple laser rays. In the 2005 DARPA Grand Challenge, only the team from Velodyne, a famous 3D Lidar manufacturer, was equipped with a 3D Lidar, whereas 6 of the 11 teams that entered the final competition of the 2007 DARPA Urban Challenge were equipped with a 3D Lidar [10,11]. Recently, Google, Ford and Baidu have chosen 3D Lidars as the main sensor to obtain environment information [12–16].

Ground segmentation is another way of saying obstacle detection. Height difference based methods were proposed to detect obstacles [17–21]. Besides, a combination of height difference and a convexity criterion was proposed to segment the ground in [22]. Cheng et al. [23] proposed a Ring Gradient-based Local Optimal Segmentation (RGLOS) algorithm to segment the ground based on the gradient feature between the rings. The common problem of the aforementioned methods is that a single or multiple features are adopted in ground segmentation, resulting in over-segmentation with a strict threshold and under-segmentation with a loose threshold. To overcome this shortcoming, a ground segmentation method based on multiple features with loose thresholds is presented in [24]. These methods take advantage of local road features to detect curbs without considering the global road trend. However, our experiments found that the applied scope of different features should be considered in greater detail because the geometric properties vary with different features.

Road curbs can be deemed special obstacles which are the boundaries of the drivable area. Chen et al. [25] selected the initial curb points from individual scan lines by using the distance criterion, the Hough Transform (HT), and iterative Gaussian Process Regression (GPR). Zhao [26] obtained the seed points that meet three criteria and modeled the curb as a parabola while using a random sample consensus (RANSAC) algorithm to remove the false positives that do not match the parabolic model. A least trimmed squares (LTS) regression method is applied to fit a function to the curb candidate points in [27], while a sliding window is applied to obtain curb points in [28]. In the study of Liu et al. [24], a distance transformation is performed on the obstacle map, and threshold segmentation is used to increase the continuity of the map. Thereafter, the drivable area is obtained by region growing, and the road curb points are obtained by searching. All the aforementioned methods share the drawback of being easily affected by obstacles inside the road area, especially for roads with a considerable number of vehicles, because of the lack of overall road shape information. They are based on local features of the road rather than the global road trend.

Based on the aforementioned considerations, a road curb detection method which takes the global trend of the road curb into consideration is proposed in this study and shown in Figure 1. First, a multi-feature, loose-threshold, varied-scope ground segmentation method is presented, and the features used in the segmentation process are discussed in detail. Second, a road curb detection process is proposed. The road curb can be obtained through extraction or tracking, which is switched by evaluating the predicted road curb. If the road curb predicted from the motion of the vehicle is evaluated to be the right curb, then a simple update of the road curb is sufficient. Otherwise, the road curb should be obtained through extraction.

Figure 1. Framework for road curb detection and tracking.

The main contributions of this work are threefold. First, a multi-feature, loose-threshold, varied-scope ground segmentation method is presented to increase the robustness of ground segmentation. Second, a road curb extraction method based on the analysis of the road shape information, which consists of road trend information and road width distribution information, is proposed. Third, a framework for road curb detection is proposed based on a multi-index evaluation of the road curb in three aspects, namely, the shape of the road curb, the fit degree between the road curb and the scene, and the variance between the curve and the history information.

The succeeding portions of this paper are organized as follows: The segmentation of the ground is described in Section 2. The road curb detection method is discussed in Section 3. Experiments and results are introduced in Section 4. Section 5 provides our conclusions on this method and discusses future work that needs to be completed.

2. Ground Segmentation

The experimental platform and a point cloud are shown in Figure 2. A Velodyne 3D Lidar, which contains 64 laser transmitters, is mounted on the top of the vehicle. Each laser transmitter returns 1800 points per frame, and these points form a circle after being projected onto the horizontal plane. As a built-in feature of the Velodyne 3D Lidar, the projected circles will be broken if obstacles are encountered. Thus, ground segmentation can be completed by removing the continuous parts of the circles. However, road surfaces are not entirely flat, and the effects of different laser transmitters differ as a result of different scan ranges. It is difficult to segment the ground with a single feature because of the difficulty in defining the feature threshold: either a strict or a loose threshold will result in low accuracy. Moreover, the scope should be considered, as different features have different geometric properties. Based on the aforementioned considerations, a philosophy for multi-feature based ground segmentation is proposed. Features are analyzed and tested in various driving environments, and the distinctive and robust ones are applied in ground segmentation. Threshold and scope are set to remove as many ground points as possible while ensuring that no obstacle point is removed. The geometric characteristics of the remaining ground points are then analyzed, and new features are constructed to remove them. The features adopted in this study, after extensive experiments under various environments, are discussed in the subsequent subsections.

Figure 2. System introduction. (a) Intelligent Pioneer; (b) Raw point cloud; (c) Local coordinate.

2.1. Maximum Height Difference in One Grid (MaxHD)

A 750 × 500 grid map can be obtained by projecting the point cloud onto the horizontal plane, with a resolution of 20 cm × 20 cm per cell. The local coordinate system is shown in Figure 2c, and the origin of the map is set at its top left point. The Lidar is placed at (500, 250); thus, a range of 150 m × 100 m is covered. The maximum height difference is calculated by comparing the highest point and the lowest point projected onto the same grid cell, and it is small in a flat area. However, our experiments show that this feature is not effective for wet ground and distant low obstacles. The height measurement of a point from wet ground is inaccurate, and many laser points are lost to mirror reflection. When points are projected onto a distant low obstacle, the height change in one grid cell is excessively small, as illustrated in Figure 3.
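As a concrete sketch of this feature (assuming NumPy, an N × 3 point array in the Lidar frame, and our own grid-index mapping; nothing here is code from the paper):

```python
import numpy as np

def max_hd_grid(points, rows=750, cols=500, res=0.2, lidar_cell=(500, 250)):
    """Per-cell maximum height difference (MaxHD) on a 750 x 500 grid map
    with 20 cm x 20 cm cells; the Lidar sits at grid cell (500, 250)."""
    # Map metric x/y to grid indices relative to the Lidar cell (assumed axes).
    r = (lidar_cell[0] - points[:, 0] / res).astype(int)
    c = (lidar_cell[1] - points[:, 1] / res).astype(int)
    ok = (r >= 0) & (r < rows) & (c >= 0) & (c < cols)
    r, c, z = r[ok], c[ok], points[ok, 2]

    z_min = np.full((rows, cols), np.inf)
    z_max = np.full((rows, cols), -np.inf)
    np.minimum.at(z_min, (r, c), z)   # lowest point per cell
    np.maximum.at(z_max, (r, c), z)   # highest point per cell

    max_hd = z_max - z_min            # empty cells become non-finite
    max_hd[~np.isfinite(max_hd)] = 0.0
    return max_hd
```

A cell would then be flagged as an obstacle when this spread exceeds the 3 cm threshold within the [0, 45] m scope later given in Table 1.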

Figure 3. Schematic figure of obstacle scanning.

In Figure 3, A–C are points on a distant low curb. The height difference of the three points is too small to be used to detect obstacles. Figure 4 shows an example of the failure of the feature of maximum height difference in a grid cell. Figure 4a shows that the experimental scene is a wide road with low stone curbs, which are approximately 10 cm high. Figure 4b shows the grid map generated from a frame of Velodyne raw data, whose partially enlarged detail map is shown in Figure 4c. The figure shows that a segment of the distant stone curb is scanned by only one beam consisting of 10 pixels with an accumulated height difference of approximately 10 cm. The average height difference of one pixel is thus approximately 1 cm, so it is difficult to distinguish the curb points from ground points with the maximum height difference feature.

Figure 4. Result of projecting a typical driving scene onto the horizontal plane. (a) Real scene picture; (b) Grid map of raw point cloud; (c) Partially enlarged detail map.

2.2. Tangential Angle Feature (TangenAngle)

This feature is described in [23] and can be obtained from the radial and tangential directions, as shown in Figure 5. The Lidar is positioned at O, P is a point on the road, and Q is an obstacle point. The radial direction of P is represented by the vector OP, and the tangential direction is represented by the vector MN, where M and N are the adjacent points of P on both sides. Thus, the tangential angles for P and Q in Figure 5 are A1 and A2, respectively, and the cosine value of the tangential angle for P is calculated with (1).

AngleFeature = |OP · MN| / (||OP|| · ||MN||)    (1)


Figure 5 shows that, when a point is projected onto the ground, e.g., P, the value of AngleFeature is approximately 0 because the two directions are nearly perpendicular. By contrast, when a point is projected onto an obstacle, e.g., Q, the value of AngleFeature is higher because the angle between the two directions is acute.
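A minimal sketch of Equation (1), assuming NumPy and the points of one beam projected onto the horizontal plane and ordered by rotation angle, with the Lidar at the origin (names are ours):

```python
import numpy as np

def angle_feature(beam_pts):
    """Cosine of the tangential angle (Equation (1)) for each point of one
    beam; beam_pts is an (N, 2) array ordered by rotation angle."""
    feats = np.zeros(len(beam_pts))
    for i in range(1, len(beam_pts) - 1):
        op = beam_pts[i]                        # radial direction O -> P
        mn = beam_pts[i + 1] - beam_pts[i - 1]  # tangential direction M -> N
        denom = np.linalg.norm(op) * np.linalg.norm(mn)
        if denom > 1e-9:
            feats[i] = abs(op @ mn) / denom
    return feats  # ~0 for ground points, larger for obstacle points
```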

Figure 5. Schematic diagram for the construction of the tangential angle. (a) Map of original point cloud; (b) Schematic diagram.

However, our experiment found that this feature is unsuitable for the points on the two sides of the vehicle because the value of AngleFeature is approximately 0 even if the points are projected onto an obstacle. The schematic diagram is shown in Figure 6.

Figure 6. Schematic diagram for the variations between tangential angle and obstacle position.

Figure 6 shows three points on the road curb whose tangential angles are θ1, θ2 and θ3. Point 3 lies to the side of the vehicle, so its AngleFeature is approximately 0 even though it is projected onto the obstacle. The tangential angle becomes smaller as the points move farther away from the LIDAR, e.g., θ3 > θ2 > θ1. Thus, the value of AngleFeature becomes larger, which means that this feature is more significant for faraway points.


2.3. Change in Distance between Neighboring Points in One Spin (RadiusRatio)

When projected onto flat ground, the points generated by one beam form a circle, which means that the distances of the neighboring points from the Lidar are nearly the same. However, the distance becomes smaller when a point is projected onto a positive obstacle (e.g., a bump) and larger when a point is projected onto a negative obstacle (e.g., a pothole).

2.4. Height Difference Based on the Prediction of the Radial Gradient (HDInRadius)

Points A–C are three points obtained by three adjacent beams in one direction. The following equation will be satisfied if A, B and C are on the same plane.

(ZC − ZB) / dBC = (ZB − ZA) / dAB    (2)

ZC = ZB + (ZB − ZA) · dBC / dAB    (3)

where ZA, ZB, and ZC are the heights of A, B, and C, respectively; dBC is the distance between the projections of B and C on the horizontal plane, and dAB is the distance between the projections of A and B on the horizontal plane. Thus, ZC can be predicted with ZA and ZB.

In Figure 7, C is projected onto the obstacle while A and B are projected onto the ground. ΔH is the height difference based on the prediction of the radial gradient of C and can be calculated as follows:

ΔH = ZC − ZB − (ZB − ZA) · dBC / dAB    (4)

where ΔH should be 0 if C is projected onto the ground instead of the obstacle.
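In code, Equations (3) and (4) reduce to a few lines; the sketch below assumes three (x, y, z) points returned by adjacent beams in one direction (names are illustrative):

```python
import numpy as np

def hd_in_radius(a, b, c):
    """Height difference of C against the plane gradient predicted from A
    and B (Equation (4)); a, b, c are (x, y, z) points of adjacent beams."""
    d_ab = np.hypot(b[0] - a[0], b[1] - a[1])  # projected distance A-B
    d_bc = np.hypot(c[0] - b[0], c[1] - b[1])  # projected distance B-C
    zc_pred = b[2] + (b[2] - a[2]) * d_bc / d_ab   # Equation (3)
    return c[2] - zc_pred   # ~0 on the ground; large values flag an obstacle
```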

Figure 7. Schematic diagram for height difference based on the prediction of the radial gradient.

2.5. Selection of the Threshold and Scope for Each Feature

An experiment is performed by analyzing the value of each feature in a crossroad, where the road area is relatively broader than on a normal straight road. The result is shown in Figure 8, where the values of the four features are illustrated in (a–d), respectively.

Figure 8a shows that the feature of maximum height difference in one grid cell is more distinguishable near the vehicle because the number of points projected onto one grid cell is larger there, so the height variation in one grid cell can be reflected. Farther away, on the contrary, the feature becomes less distinguishable, which means that the feature should be applied only to the points near the vehicle.


Figure 8. Value distribution of the four features in a crossroad. (a) Maximum height difference in one grid; (b) Tangential angle feature; (c) Distance change between neighboring points in one spin; (d) Height difference based on the prediction of the radial gradient.

As the road is not entirely flat, a sudden change occurs in the direction of the points generated by one beam in a circle, causing a sudden change of the tangential angle in the middle of the road, as shown in Figure 8b. Moreover, as shown in the red box, some laser beams hit the body of the autonomous vehicle, and the feature values of their tangential neighboring points in one spin change significantly, which can also be seen in Figure 8c. Thus, the criterion for obstacles is satisfied and misdetection occurs.

The points generated by one beam in one spin are shown in Figure 9, where a section of points is blocked by an obstacle. Point O is the position of the LIDAR, point Q is an obstacle point, and points M and P are ground points; P and Q are the adjacent points of M on both sides. The tangential angle of point M, formed by OM and PQ, is now an acute angle instead of the normal right angle for points projected onto the ground. The distance change feature in one spin can also be influenced by the blocking obstacle, as shown in Figure 9: the feature can be calculated as OP/OQ for point M, which should be approximately 1 for points projected onto the ground because of the continuity of the points in one spin, but here the value varies because of the blocking obstacle. Moreover, as mentioned previously, this feature also suffers the drawback of misdetecting low obstacles. As shown in Figure 8d, the height difference based on the prediction of the radial gradient is distinguishable in all scopes. However, because of the bumping of the vehicle, noise may exist in the ground area.

Figure 9. Schematic diagram for the influence of the obstacle on its tangential neighboring point.

The threshold and the applied scope for each feature are shown in Table 1. Defining the thresholds is difficult because of the absence of ground truth for ground segmentation. We adopted an experimental method to find appropriate thresholds: the thresholds were adjusted and tested to obtain better performance in typical scenes, including the urban and field scenes introduced in the experimental part. After defining the thresholds, the four features are used in a cascade fashion, as sketched after Table 1. Points are classified as obstacles if they satisfy all of the four feature criteria applied at their scope; otherwise, they are treated as ground points.

Table 1. The range and threshold for each feature.

Feature      | Beam Range (m) | Rotation Range                           | Threshold
MaxHD        | [0, 45]        | All                                      | >3 cm
TangenAngle  | All            | [0, 300] || [600, 1200] || [1200, 1800]  | >0.6
RadiusRatio  | [0, 50]        | All                                      | [0.9, 0.95] || [1.05, 1.1]
HDInRadius   | [0, 50]        | All                                      | >5 cm
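As an illustration of the cascade, the following sketch combines the four feature tests with the scopes and thresholds of Table 1 (the function and its arguments are our own; per-point feature values are assumed to be precomputed):

```python
def is_obstacle(max_hd, tangen_angle, radius_ratio, hd_in_radius,
                range_m, rot_idx):
    """Cascade test of the four features with the scopes and thresholds of
    Table 1; a point is an obstacle only if every applicable test fires.
    `rot_idx` is the point index within one 1800-point spin."""
    tests = []
    if range_m <= 45:                          # MaxHD scope [0, 45] m
        tests.append(max_hd > 0.03)
    if rot_idx <= 300 or rot_idx >= 600:       # TangenAngle rotation scope
        tests.append(tangen_angle > 0.6)
    if range_m <= 50:                          # RadiusRatio scope [0, 50] m
        tests.append(0.9 <= radius_ratio <= 0.95 or
                     1.05 <= radius_ratio <= 1.1)
    if range_m <= 50:                          # HDInRadius scope [0, 50] m
        tests.append(hd_in_radius > 0.05)
    return bool(tests) and all(tests)
```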

3. Road Curb Detection

The height of curbs and the distance between curbs and the sensor make it difficult to detect curbs. However, the application of the road trend can be helpful in the detection process.

Two typical scenes are shown in Figure 10 to illustrate the difficulties in road curb detection. The green box in the left image shows a car parking on the roadside, which would be detected as a road curb if only local information were applied. The green box in the right image shows that a gap exists between two segments of the road curb, and this gap would cause a missed detection of the road curb if only local information were applied.

However, it is not difficult to learn the road's overall shape despite the existence of obstacles inside the road area and gaps in the curb. The road's overall shape information can be reflected by the road centerline and the road width information. Besides, the real road width can be obtained from the road width distribution.


Figure 10. Two typical scenes illustrating the difficulties for road curb detection. (a) A car parking inside the road; (b) A gap exists in the road curb.

A road curb detection method based on the road's overall shape information is presented in this section, and its flow diagram is shown in Figure 11. The method can be divided into two parts: road shape evaluation, and curb detection and update.

Figure 11. Road curb extraction diagram.

3.1. Road Shape Evaluation

In this part, a two-step method, consisting of centerline extraction and road width estimation, is proposed to estimate the road shape.

3.1.1. Centerline Extraction

After the ground segmentation, a grid map that contains only two values is created: obstacle grids are marked as 0 and all other grids are marked as 255. Each grid cell of the map represents 20 cm. Then a distance transformation, given as Algorithm 1, is conducted, and the grid intensity increases with the distance between the grid and the nearest obstacle grid. An example of the distance transformation performed on the obstacle map is shown in Figure 12. As shown in Figure 12c, the intensities of the points on the road centerline are local maxima, because the distance between a centerline point and its nearest obstacle point is the maximum compared with its local neighboring points. Moreover, the road centerline reflects the trend of the road.


Figure 12. Generation of the distance map. (a) Original point cloud map; (b) Obstacle map; (c) Distance intensity map.


Algorithm 1. Distance Map Generation Process

Distance Transform Algorithm
Input: ObstacleMap
Input: MAP_WIDTH
Input: MAP_HEIGHT
Output: DistanceMap
Function Begin
    PixelValueBinaryzation(ObstacleMap)
    for i ← 1 to MAP_HEIGHT − 2
        for j ← 1 to MAP_WIDTH − 2
            DistanceMap[i + 1][j] = min(ObstacleMap[i][j] + 1, ObstacleMap[i + 1][j])
            DistanceMap[i + 1][j + 1] = min(ObstacleMap[i][j] + 1, ObstacleMap[i + 1][j + 1])
            DistanceMap[i][j + 1] = min(ObstacleMap[i][j] + 1, ObstacleMap[i][j + 1])
            DistanceMap[i − 1][j + 1] = min(ObstacleMap[i][j] + 1, ObstacleMap[i − 1][j + 1])
        end for
    end for
    for i ← MAP_HEIGHT − 2 to 1
        for j ← MAP_WIDTH − 2 to 1
            DistanceMap[i − 1][j] = min(ObstacleMap[i][j] + 1, ObstacleMap[i − 1][j])
            DistanceMap[i − 1][j − 1] = min(ObstacleMap[i][j] + 1, ObstacleMap[i − 1][j − 1])
            DistanceMap[i][j − 1] = min(ObstacleMap[i][j] + 1, ObstacleMap[i][j − 1])
            DistanceMap[i + 1][j − 1] = min(ObstacleMap[i][j] + 1, ObstacleMap[i + 1][j − 1])
        end for
    end for
Function End
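Algorithm 1 amounts to a two-pass, chamfer-style distance transform. A minimal NumPy sketch of the same idea (our own rendering with an explicit distance array initialized from the binarized obstacle map; names are illustrative):

```python
import numpy as np

def distance_map(obstacle_map):
    """Two-pass chessboard distance transform: obstacle cells (value 0)
    stay 0; every free cell gets its distance to the nearest obstacle,
    saturated at 255 (the initial free-space value)."""
    h, w = obstacle_map.shape
    dist = np.where(obstacle_map == 0, 0, 255).astype(np.int32)
    fwd = ((-1, -1), (-1, 0), (-1, 1), (0, -1))   # already-visited neighbours
    bwd = ((1, -1), (1, 0), (1, 1), (0, 1))       # neighbours for reverse pass
    for i in range(h):                            # forward pass
        for j in range(w):
            for di, dj in fwd:
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    dist[i, j] = min(dist[i, j], dist[ni, nj] + 1)
    for i in range(h - 1, -1, -1):                # backward pass
        for j in range(w - 1, -1, -1):
            for di, dj in bwd:
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    dist[i, j] = min(dist[i, j], dist[ni, nj] + 1)
    return dist
```

Where SciPy is available, scipy.ndimage.distance_transform_cdt should produce the same chessboard distances far faster.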

In most cases, the Lidar is not located at the road's center position. Therefore, a seed point should be obtained to generate the entire road centerline. As shown in Figure 12c, the intensity in the distance map varies continuously, which means that the gradient information can be applied to find the nearest local maximum point. The search starts from the vehicle's position and looks for the local maximum point in a 5 × 5 model: a point is set as the seed point if it is the local maximum point of the 5 × 5 model centered on itself; otherwise, the search continues until such a point is found. The result of obtaining the nearest local maximum point is shown in Figure 13, where the upper right box is the position of the Lidar, whereas the lower left box is the position of the nearest local maximum point.
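A sketch of this seed search, assuming a NumPy distance map like the one above (the hill-climbing loop is our own rendering of the 5 × 5 rule):

```python
import numpy as np

def find_seed(dist, start_rc):
    """Climb the distance map from the vehicle cell until the current cell
    is the maximum of the 5 x 5 window centred on it (the seed point)."""
    r, c = start_rc
    h, w = dist.shape
    while True:
        r0, r1 = max(r - 2, 0), min(r + 3, h)
        c0, c1 = max(c - 2, 0), min(c + 3, w)
        win = dist[r0:r1, c0:c1]
        br, bc = np.unravel_index(np.argmax(win), win.shape)
        br, bc = br + r0, bc + c0
        if dist[br, bc] <= dist[r, c]:
            return r, c          # local maximum of its 5 x 5 window
        r, c = br, bc            # step toward the higher-intensity cell
```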

Figure 13. Result of obtaining the nearest local maximum point. (a) Raw point cloud map; (b) Obstacle map; (c) Distance map.

Starting from the nearest local maximum position, a search is performed on the distance map to find the local maximum points in the upward and downward directions. This step is followed by quadratic curve fitting, and the result is in the form of (5).

x = ay² + by + c    (5)

Figure 14a shows the extraction of the centerline in a winding road situation. The third column shows that the extracted centerline reflects the road trend. A situation where the distance map is affected by an obstacle inside the road area is shown in Figure 14b; it can be seen that the extracted centerline is not at the center position of the road area. The centerline should satisfy two requirements. First, the line should generally reflect the road trend. Second, the line should be found inside the road area. Our experiments found that these two requirements are relatively easy to meet if a relatively large model, for example a 10 × 10 model, is selected to lead to the detection of a long centerline. A longer centerline detected inside the road area means a more accurate quadratic curve fit for the road shape. As shown in the green box in Figure 14b, the road centerline is continuous in the distance map despite the existence of the obstacle inside the road area. Thus, a long centerline inside the road area can be extracted.
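In code, the quadratic fit of Equation (5) is a one-line least-squares problem; a minimal sketch (assuming NumPy; the function name is ours):

```python
import numpy as np

def fit_centerline(xs, ys):
    """Fit x = a*y^2 + b*y + c (Equation (5)) to the local-maximum points;
    note that x is modelled as a function of y, matching the road direction."""
    a, b, c = np.polyfit(ys, xs, 2)
    return a, b, c
```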

Figure 14. Results for road centerline extraction of a winding road and a straight road. (a) Centerline extraction for the winding road; (b) Centerline extraction for the straight road.

3.1.2. Road Width Estimation

To increase the continuity of the obstacle map, instead of using a normal morphological close operation on the obstacle map, a threshold segmentation, which is less time consuming, is performed on the distance map. Pixels less than the threshold are set to 255 and the remaining pixels are set to 0. The result is shown in Figure 15. Figure 15c shows that the road curb points are more continuous than in the original obstacle map. Moreover, the road curb is smoother than in the original obstacle map, and the high-frequency noise caused by uneven sampling is reduced.
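This segmentation is a single comparison per pixel of the distance map; a minimal sketch (the threshold value here is illustrative, not the paper's):

```python
import numpy as np

def threshold_segment(dist, thresh=5):
    """Cheap substitute for a morphological close: cells within `thresh`
    cells of an obstacle become obstacle (255), all others ground (0)."""
    return np.where(dist < thresh, 255, 0).astype(np.uint8)
```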

Figure 15. Result of the threshold segmentation. (a) Original point cloud; (b) Obstacle map; (c) Segmentation result.

Based on the aforementioned analysis, the road trend and a more continuous obstacle map are obtained. Points on the road centerline whose y values are in the interval [STARTY, ENDY] are used as seed points, and a search is performed from each seed point in the left and right directions until an obstacle point is met. In this manner, the distances from the point to the nearest obstacles on the left and on the right can be obtained. By summing these two distances for each point on the road centerline, the road width distribution can be obtained, and based on its statistics, the most likely width of the road can be determined. The detailed algorithm is shown in Algorithm 2.

Algorithm 2. Process for the Analysis of Width Distribution

RoadWidthAnalysis
RoadWidthCount[ ] ← {0}
maxWidthCount ← 0
For y ← STARTY to ENDY
    x ← ay² + by + c
    LeftWidth ← 0
    While ThreSegImg[y][x − LeftWidth] == 0
        LeftWidth++
    End While
    RightWidth ← 0
    While ThreSegImg[y][x + RightWidth] == 0
        RightWidth++
    End While
    RoadWidth[y] ← LeftWidth + RightWidth
    Save the Point (x, y) to WidthPoints[RoadWidth[y]/10]
    RoadWidthCount[RoadWidth[y]/10]++
    If RoadWidthCount[RoadWidth[y]/10] > maxWidthCount
        maxWidthCount ← RoadWidthCount[RoadWidth[y]/10]
        MaxRoadWidthPortion ← RoadWidth[y]/10
    End If
End For

In Algorithm 2, a search is performed between STARTY and ENDY to find obstacle points of the road edge on both sides of the centerline. All the width values are assigned to partitions with a difference of 2 m. Thus, the points corresponding to the most possible width, i.e., the points in the largest partition, can be selected as the seed points.

Two typical scenes are shown in Figure 16, where the gray area in the third column shows the width distribution of the road. A winding road is shown in the first row; in spite of the curved shape of the road, the width distribution of the road is continuous. A situation with obstacles inside the road area is shown in the second row. The third column shows that the width distribution is affected by two vehicles. However, based on the maximum possible width analysis, the true road width can be obtained. In this manner, the vehicles inside the road are not regarded as the road curb.
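A compact Python rendering of Algorithm 2 (a sketch assuming the thresholded map of Section 3.1.2 with free space 0 and obstacle 255, and the centerline coefficients of Equation (5); names are ours):

```python
def road_width_distribution(seg, a, b, c, start_y, end_y):
    """Per row, scan left/right from the fitted centerline to the nearest
    obstacle pixel, bin the summed width into 2 m partitions (10 cells of
    20 cm), and keep the points of the largest bin as seed points."""
    h, w = seg.shape
    width_points = {}                          # partition index -> [(x, y)]
    for y in range(start_y, end_y):
        x = min(max(int(a * y * y + b * y + c), 0), w - 1)  # Equation (5)
        left = 0
        while x - left >= 0 and seg[y, x - left] == 0:
            left += 1
        right = 0
        while x + right < w and seg[y, x + right] == 0:
            right += 1
        portion = (left + right) // 10         # 2 m wide width partitions
        width_points.setdefault(portion, []).append((x, y))
    best = max(width_points, key=lambda k: len(width_points[k]))
    return best, width_points[best]            # most likely bin, seed points
```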

Two typical scenes are shown in Figure 16, where the gray area in the third column shows the width distribution of the road. A winding road is shown in the first row; in spite of the curved shape of the road, the width distribution of the road is continuous. A situation with an obstacle inside the road area is shown in the second row. The third column shows that the width distribution is affected by two vehicles. However, based on the maximum possible width analysis, the true road width can be obtained. In this manner, the vehicle inside the road is not regarded as the road curb.

Figure 16. Result of the width distribution in two typical scenes. (a) Width distribution of a winding road; (b) Width distribution of a straight road.

3.2. Curb Detection and Update

The seed points are applied to detect the entire set of road curb points by region growing in the original obstacle map. The candidate road curb points should satisfy the following criterion: the gradient in the x-direction should be negative for the left curb candidates and positive for the right curb candidates. Thereafter, a quadratic curve fitting is performed on the candidate points. The result is shown in Figure 17c.
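A hedged sketch of this step follows. We assume that the obstacle cells reached from the seed points start the growth, that growing proceeds over 8-connected obstacle cells, and that gradient_x holds the x-direction gradient of the height map; none of these details are fixed by the paper, and the helper names are ours. The quadratic fit uses numpy.polyfit.

# Sketch of seed-based region growing with the x-gradient sign test,
# followed by a quadratic fit x = a*y^2 + b*y + c (assumptions ours).
import numpy as np

def grow_curb_points(obstacle_map, gradient_x, seeds, side):
    """Grow curb candidates from seed cells; keep obstacle cells whose
    x-gradient is negative for the left curb, positive for the right."""
    want_negative = (side == 'left')
    stack, visited, curb = list(seeds), set(seeds), []
    h, w = obstacle_map.shape
    while stack:
        x, y = stack.pop()
        if (gradient_x[y, x] < 0) != want_negative:
            continue  # wrong gradient sign: not a curb candidate
        curb.append((x, y))
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h \
                        and (nx, ny) not in visited \
                        and obstacle_map[ny, nx] != 0:
                    visited.add((nx, ny))
                    stack.append((nx, ny))
    return curb

def fit_curb_curve(curb_points):
    """Quadratic fit x = a*y^2 + b*y + c over the candidate points."""
    xs, ys = zip(*curb_points)
    a, b, c = np.polyfit(ys, xs, deg=2)
    return a, b, c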

Figure 17. Result of the region growing and curve fitting. (a) Obstacle map; (b) Width distribution; (c) Curve fitting result.

As shown in the white box in the third column of the first row, a gap exists in the road curb. However, based on the width distribution information, seed points on the two sides of the road curb are obtained, and they are grown to obtain the road curb candidate points on the two sides of the gap. The white boxes in the third column of the first row show two vehicles. The partially enlarged detail map shows that the curb points between the upper vehicles are detected, whereas the curb points between the lower vehicles are leak-detected. This difference is caused by the difference in the gaps between the vehicles, such that the curb points on the left side of the lower vehicle are not regarded as road curb points. However, this difference does not affect the extraction of the road curb because most road curb points are detected and the shape of the road curb is reflected.

Then, a multi-index evaluation of the road curb in three aspects, including the shape of the road curb, the fit degree between the road curb and the scene, and the variance between the curve and history information, is presented in this study. These aspects are described in detail in the subsequent subsections.

3.2.1. Shape of the Road Curb

The shape of the road curb can be partially reflected by its curvature, which can be denoted by the coefficient of the fitted quadratic curve. The road trend changes relatively slowly in a local frame grabbed by the Velodyne Lidar. Thus, the coefficient of the quadratic curve should not exceed a certain threshold.

3.2.2. Fit Degree between the Road Curb and the Scene

The fit degree between the road curb and the scene reflects how well the detected curve represents the true shape of the road. This aspect is denoted by three indices in this study:

1. The number of the detected curb points.
2. The maximum difference of the y value between the detected points.
3. The variance of the width distribution.

Our experiments found that, if the number of the detected road curb points is excessively small, the points cannot reflect the shape of the road curb. Moreover, the more divergent these points are, the more accurately they reflect the road shape. The dispersion of the curb points can be reflected by the maximum difference of their y values. As the road is continuous, the road width distributions along the y-direction should be similar. Thus, the variance of the width distribution should be small.

3.2.3. Variance between the Curve and History Information

During driving, the road curb should be continuous in most cases. Thus, the curb shape between two consecutive frames should not change significantly. The consecutiveness of the road curb is denoted by the difference of the road width between two consecutive frames. The difference should not exceed a certain threshold, which is defined and tested with a large number of experiments conducted on our autonomous vehicle.

In our experiments, the position of the Lidar is (250, 500) in the grid map and each cell represents 20 cm. As the obstacle information is more detailed near the Lidar, STARTY is set to 100 and ENDY is set to 400. The thresholds selected for the aforementioned indices are shown in Table 2. The prediction and update of the road curb are described in detail in [24]: the initial position of the road curb in the current frame is predicted from the previous curb position based on the motion of the vehicle, and the curb is updated on the height difference map generated by the current frame data using the SNAKE model.
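The update itself is deferred to [24]; the snippet below sketches only the prediction step under our assumptions: the previous frame's curb points are rigidly transformed into the current vehicle frame using a planar ego-motion estimate (dx, dy, dyaw), such as the one provided by the SPAN-CPT unit. The SNAKE-based update on the height difference map is not reproduced here.

# Sketch of the curb prediction step only (assumptions ours): express
# the previous frame's curb points in the current vehicle frame given
# planar ego-motion (dx, dy, dyaw) between the two frames.
import numpy as np

def predict_curb(prev_curb_xy, dx, dy, dyaw):
    """Rigidly transform Nx2 curb points from the previous vehicle
    frame into the current one, assuming planar motion."""
    c, s = np.cos(dyaw), np.sin(dyaw)
    # Invert the vehicle's forward motion: shift by -t, rotate by -dyaw.
    rot = np.array([[c, s], [-s, c]])
    return (np.asarray(prev_curb_xy) - np.array([dx, dy])) @ rot.T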

Table 2. Threshold for each index.

Index                                                           Threshold
Road curb curvature                                             < 0.001
Number of points constituting the road curb                     > 150
Maximum difference of the y value between the detected points   > 200
Variance of the width distribution                              < 10
Difference of the road width between two consecutive frames     < 10
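Putting the three subsections together, the acceptance test can be sketched as below with the thresholds of Table 2. Treating the inter-frame difference as a difference of mean widths, and the grid-cell units of the spread and variance thresholds, are our assumptions.

# Sketch of the multi-index acceptance test (Table 2 thresholds).
import numpy as np

def curb_is_valid(a, curb_points, road_widths, prev_mean_width):
    """Accept a fitted curb x = a*y^2 + b*y + c only if all pass."""
    ys = [y for _, y in curb_points]
    widths = np.asarray(road_widths, dtype=float)
    return (abs(a) < 0.001                  # curvature of the quadratic
            and len(curb_points) > 150      # enough supporting points
            and max(ys) - min(ys) > 200     # points spread along y
            and np.var(widths) < 10         # consistent road width
            and abs(widths.mean() - prev_mean_width) < 10)  # history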

4. Experimental Results and Discussion

Our experimental platform is the autonomous vehicle named "Intelligent Pioneer", as shown in Figure 2. Besides the three-dimensional Lidar, it is equipped with a SPAN-CPT, an integrated navigation device combining inertial navigation and the Global Positioning System, which provides the accurate position and moving state of the intelligent vehicle.

To validate the robustness and accuracy of the ground segmentation algorithm presented in this work, experiments are conducted under various environments, including straight road, winding road and crossroad in urban areas as well as off-road areas. Figure 18a shows a heavy traffic situation. The existence of obstacles influences the local geometric characteristics of the tangential neighboring points on the ground, which would be falsely detected based on some features alone. The four features are fused in different scopes and the ground segmentation is complete without over-segmentation. Figure 18b shows ground segmentation with low irregular obstacles inside the road area. As shown in the real scene picture, three rows of cones are in front of the intelligent vehicle, which are low and difficult to detect. The detection of negative obstacles is shown in Figure 18c, where two pits exist in the scene. A round pit and a rectangular pit are marked out in the red box in the second row. As shown in the figure, with the ground cleanly segmented, the points projected onto the round pit are mostly detected as obstacles, and several points projected onto the rectangular pit are detected as obstacles.

Figure 18. Ground segmentation under various environments. (a) Congested viaduct; (b) Urban road with obstacles; (c) Off-road scene with negative obstacles.

The comparison between the proposed ground segmentation method and that of [23] is presented in Figure 19, where a low stone curb situation is shown. The result of the ground segmentation algorithm presented in [23] is shown in Figure 19b, whereas the result of this study is shown in Figure 19c. According to the gradient feature presented in [23], although the curb points near the vehicle are detected continuously, the curb points distant from the vehicle are leak-detected. As mentioned previously, when the detected curb points are more divergent, the curb shape is reflected more accurately. Figure 19c shows that the curb detected by the algorithm presented in this study covers a broader scope despite the existence of gaps, which provides more information on the road trend.

Figure 19. Comparison of the ground segmentation method used in [23] and the proposed method. (a) Original point cloud map; (b) Result of the method used in [23]; (c) Result of the proposed method.

The comparison between the ground segmentation of this study and that of [24] is presented in Figure 20, which contains the real scene image, the raw point cloud, the result obtained by the algorithm presented in [24], and the result obtained by the algorithm presented in this study. The experiment found that the distance-based features are insensitive to the influence exerted by distant obstacles on the local geometric characteristic, whereas the angle-based features are insensitive to the influence exerted by nearby obstacles. Global thresholds are applied in [24] to segment the ground; therefore, leak detection is inevitable, as shown in the red box in Figure 20c. As an in-depth analysis is performed on the threshold and the fitted scope of each feature applied in this study, leak detection is reduced compared with the algorithm presented in [24], as shown in Figure 20d.

Figure 20. Comparison between the ground segmentation method in [24] and the proposed method. (a) Real scene; (b) Original point cloud; (c) Result of the method in [24]; (d) Result of the proposed method.

The results of road curb detection in the three scenes are shown in Figure 21, including the raw point cloud, obstacle map, and detected road curb, respectively. A common difficulty for road curb detection in these scenes is the existence of obstacles inside the road area, which may influence the detection of the road curb.

Figure 21a shows an urban scene where the road width varied and a number of vehicles inside the road generated interference on road curb detection. Based on the analysis of the road's overall trend information, the vehicles inside the road were identified as dynamic obstacles instead of the road curb. Moreover, the existence of the dynamic obstacles inside the road area caused the leak detection of the low curb, as shown in the red box in the second column, which did not affect the detection of the road curb. Figure 21b shows the influence of snowy weather on road curb detection. As shown in the first column, noise points on the road are generated by the snow. However, the road curb was correctly detected without being influenced by these noise points. A field scene is shown in Figure 21c, where the coarse ground on the left of the vehicle was detected as an obstacle. Moreover, the curb is rough. The detection results shown in the third column show that the curb was correctly detected.

Figure 21. Road curb detection result under different scenes. (a) An urban scene; (b) A snowy scene; (c) A field scene.

The comparison between the results of road curb detection in [24] and in this study is shown in Figure 22. The result of [24] is shown in Figure 22b, where the obstacle inside the road was detected as a road curb due to the lack of the road's overall information. The result of this study is shown in Figure 22c. As the road's overall information was applied in this work, the obstacle inside the road was not designated as the road curb.

Figure 22. Comparison between the road detection result of [24] and this work. (a) Raw point cloud; (b) Result of [24]; (c) Result of this work.

The results of an experiment measuring the time needed for road curb detection are shown in Figure 23, where the blue line shows the time consumption based on detection alone and the yellow line shows the time consumption based on the framework presented in this study. The average time of the detection-only method is 112 ms and the peak value is larger than 900 ms as a result of the limit of the multi-threaded framework used in the program. The proposed framework is much faster: the average time is 11 ms, and the value is stable.

Figure 23. Time consumption of the road curb detection experiment.


To evaluate the accuracy of the proposed curb detection method, a comparison between the detection result and the road curb in a high precision map is conducted. The testing path is shown in Figure 24 with red lines. The blue lines are lane marks in the map. The path is about 9.2 km, including 4.9 km of straight road and 4.3 km of winding road.

Figure 24. Testing path.

The curbs in the high precision map can be treated as ground truth, whose error is less than 10 cm. To reduce the impact of positioning errors, an RTK differential positioning system is used and the positioning error is less than 5 cm. The distance between the detected curbs and the ground truth can be worked out in the local coordinate of the intelligent vehicle. The intelligent vehicle moves along the road, and the horizontal distance between a detected curb point and the ground truth point with the same vertical coordinate can be obtained as the curb detection error. The accuracy of the proposed method and the method of [24] is shown in Table 3. The average detection error of our method is smaller than that of the method of [24] on both straight road and winding road. Besides, the accuracy difference of the proposed method between straight road and winding road is smaller, which demonstrates better robustness.
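A minimal sketch of this error metric follows, assuming the detected curb and the ground truth curb are both resampled as mappings from the vertical coordinate y to the lateral coordinate x in the vehicle frame; the resampling itself is not shown.

# Sketch of the curb detection error: mean horizontal (x) distance
# between detected and ground-truth curb points sharing the same y.
def average_curb_error(detected, ground_truth):
    """detected / ground_truth: dicts mapping y (local frame) to curb x."""
    errors = [abs(x - ground_truth[y])
              for y, x in detected.items() if y in ground_truth]
    return sum(errors) / len(errors) if errors else float('nan')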

Table 3. Accuracy of the curb detection method.

Method             Scenes
Proposed method    Straight road
                   Winding road
Method of [24]     Straight road
                   Winding road

Two typical scenes on straight road and winding road are shown in Figure 25. Red lines are road curbs in the high precision map and blue lines are detected road curbs. The proposed method is robust to obstacles on the road and to discontinuity of road curbs, which are shown in the red boxes. The detected results are consistent with the ground truth.

Figure 25. Result of curb detection in typical scenes. (a) Straight road; (b) Winding road.

5. Conclusions

This paper introduced a quick and accurate method for online road curb detection using multi-beam Lidar. In the process of ground segmentation, a multi-feature, loose-threshold, varied-scope method is used to increase the robustness. Then, an extraction-update mechanism based road curb detection method is proposed, and the overall road shape information is used in the extraction process.


Experimental results show that the proposed ground segmentation method performs much better than the methods in [23] and [24]. The robustness is proved by applying the method in different scenes, including a congested viaduct, an urban road with obstacles at night and a road with negative obstacles. Besides, the road curb detection method is 10 times faster than the detection-only method and the detection speed is stable. The accuracy is also evaluated and it is sufficient for autonomous driving.

Our future work will focus on improving the accuracy and rationality of the thresholds used in ground segmentation, and an adaptive method may be adopted according to the road roughness. Another research direction is the fusion of data from Lidar and camera to increase the robustness and accuracy of curb detection.

Acknowledgments: We would like to acknowledge all of our team members, whose contributions were essential for this paper. We would also like to acknowledge the support of the National Nature Science Foundations of China (Grant No. 61503362, 61305111, 61304100, 51405471, 61503363, 91420104) and the National Science Foundation of Anhui Province (Grant No. 1508085MF133, KJ2016A233).

Author Contributions: All six authors contributed to this work. Rulin Huang and Jian Liu were responsible for the literature search, algorithm design and data analysis. Jiajia Chen, Biao Yu, Lu Liu and Yihua Wu made substantial contributions to the planning and design of the experiments. Rulin Huang was responsible for the writing of the paper and modified the paper. Finally, all of the listed authors approved the final manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Dollar, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian Detection: An Evaluation of the State of the Art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743–761.
2. Shu, Y.; Tan, Z. Vision based lane detection in autonomous vehicle. Intell. Control Autom. 2004, 6, 5258–5260.
3. Bertozzi, M.; Broggi, A.; Cellario, M. Artificial vision in road vehicles. Proc. IEEE 2002, 90, 1258–1271.
4. Wen, X.; Shao, L.; Xue, Y. A rapid learning algorithm for vehicle classification. Inf. Sci. 2015, 295, 395–406.
5. Sivaraman, S.; Trivedi, M.M. A General Active-Learning Framework for On-Road Vehicle Recognition and Tracking. IEEE Trans. Intell. Transp. Syst. 2010, 11, 267–276.
6. Arrospide, J.; Salgado, L.; Marinas, J. HOG-Like Gradient-Based Descriptor for Visual Vehicle Detection. IEEE Intell. Veh. Symp. 2012.
7. Sun, Z.H.; Bebis, G.; Miller, R. On-road vehicle detection: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 694–711.
8. Huang, J.; Liang, H.; Wang, Z.; Mei, T.; Song, Y. Robust lane marking detection under different road conditions. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics, Shenzhen, China, 12–14 December 2013; pp. 1753–1758.
9. Bengler, K.; Dietmayer, K.; Farber, B.; Maurer, M.; Stiller, C.; Winner, H. Three decades of driver assistance systems: Review and future perspectives. IEEE Intell. Trans. Syst. Mag. 2014, 6, 6–22.
10. Buehler, M.; Iagnemma, K.; Singh, S. The 2005 DARPA Grand Challenge; Springer: Berlin, Germany, 2007.
11. Buehler, M.; Iagnemma, K.; Singh, S. The DARPA Urban Challenge; Springer: Berlin, Germany, 2009.
12. Baidu Leads China's Self-Driving Charge in Silicon Valley. Available online: http://www.reuters.com/article/us-autos-baidu-idUSKBN19L1KK (accessed on 22 July 2017).
13. Ford Promises Fleets of Driverless Cars within Five Years. Available online: https://www.nytimes.com/2016/08/17/business/ford-promises-fleets-of-driverless-cars-within-five-years.html (accessed on 22 July 2017).
14. What Self-Driving Cars See. Available online: https://www.nytimes.com/2017/05/25/automobiles/wheels/lidar-self-driving-cars.html (accessed on 22 July 2017).
15. Uber Engineer Barred from Work on Key Self-Driving Technology, Judge Says. Available online: https://www.nytimes.com/2017/05/15/technology/uber-self-driving-lawsuit-waymo.html (accessed on 22 July 2017).
16. Brehar, R.; Vancea, C.; Nedevschi, S. Pedestrian Detection in Infrared Images Using Aggregated Channel Features. In Proceedings of the 2014 IEEE International Conference on Intelligent Computer Communication and Processing, Cluj-Napoca, Romania, 4–6 September 2014; pp. 127–132.
17. Von Hundelshausen, F.; Himmelsbach, M.; Hecker, F.; Mueller, A.; Wuensche, H.-J. Driving with Tentacles: Integral Structures for Sensing and Motion. J. Field Robot. 2008, 25, 640–673.
18. Urmson, C.; Anhalt, J.; Bagnell, D.; Baker, C.; Bittner, R.; Clark, M.N.; Dolan, J.; Duggins, D.; Galatali, T.; Geyer, C.; et al. Autonomous Driving in Urban Environments: Boss and the Urban Challenge. J. Field Robot. 2008, 25, 425–466.
19. Kammel, S.; Ziegler, J.; Pitzer, B.; Werling, M.; Gindele, T.; Jagzent, D.; Schroder, J.; Thuy, M.; Goebl, M.; von Hundelshausen, F.; et al. Team AnnieWAY's Autonomous System for the 2007 DARPA Urban Challenge. J. Field Robot. 2008, 25, 615–639.
20. Montemerlo, M.; Becker, J.; Bhat, S.; Dahlkamp, H.; Dolgov, D.; Ettinger, S.; Haehnel, D.; Hilden, T.; Hoffmann, G.; Huhnke, B.; et al. Junior: The Stanford Entry in the Urban Challenge. J. Field Robot. 2008, 25, 569–597.
21. Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The Robot that Won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692.
22. Moosmann, F.; Pink, O.; Stiller, C. Segmentation of 3D Lidar Data in Non-Flat Urban Environments Using a Local Convexity Criterion. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi'an, China, 3–5 June 2009; pp. 215–220.
23. Cheng, J.; Xiang, Z.; Cao, T.; Liu, J. Robust vehicle detection using 3D Lidar under complex urban environment. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 691–696.
24. Liu, J.; Liang, H.; Wang, Z.; Chen, X. A Framework for Applying Point Clouds Grabbed by Multi-Beam LIDAR in Perceiving the Driving Environment. Sensors 2015, 15, 21931–21956.
25. Chen, T.; Dai, B.; Liu, D.; Song, J.; Liu, Z. Velodyne-based curb detection up to 50 meters away. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium, Seoul, South Korea, 28 June–1 July 2015; pp. 241–248.
26. Zhao, G.; Yuan, J. Curb detection and tracking using 3D-LIDAR scanner. In Proceedings of the 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 437–440.
27. Hata, A.Y.; Osorio, F.S.; Wolf, D.F. Robust Curb Detection and Vehicle Localization in Urban Environments. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 1264–1269.
28. Zhang, Y.; Wang, J.; Wang, X.; Li, C.; Wang, L. A real-time curb detection and tracking method for UGVs by using a 3D-LIDAR sensor. In Proceedings of the IEEE Conference on Control Applications, Sydney, Australia, 21–23 September 2015; pp. 1020–1025.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
