International Journal of Geo-Information

Article

Recognition and Reconstruction of Zebra Crossings on Roads from Mobile Laser Scanning Data

Lin Li 1,2,3, Da Zhang 1,4,*, Shen Ying 1,3,* and You Li 1

1 School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China; [email protected] (L.L.); [email protected] (Y.L.)
2 Collaborative Innovation Center of Geospatial Technology, Wuhan University, Wuhan 430079, China
3 The Key Laboratory for Geographical Information System, Ministry of Education, Wuhan 430079, China
4 Power China Zhongnan Engineering Corporation Limited, Changsha 410014, China
* Correspondence: [email protected] (D.Z.); [email protected] (S.Y.); Tel.: +86-131-6329-5128 (D.Z.)

Academic Editor: Wolfgang Kainz Received: 16 May 2016; Accepted: 8 July 2016; Published: 19 July 2016

Abstract: Zebra crossings provide guidance and warning to pedestrians and drivers, thereby playing an important role in traffic safety management. Most previous studies have focused on detecting zebra stripes but have not provided full information about the areas, which is critical to both driver assistance systems and guide systems for blind individuals. This paper presents a stepwise procedure for recognizing and reconstructing zebra crossings using mobile laser scanning data. First, we propose adaptive thresholding based on road surface partitioning to reduce the impact of intensity unevenness and improve the accuracy of road marking extraction. Then, dispersion degree filtering is used to reduce the noise. Finally, zebra stripes are recognized according to their rectangular shape and fixed size, which is followed by area reconstruction according to arrangement patterns. We test our method on three datasets captured by an Optech Lynx mobile mapping system. The total recognition rate of 90.91% demonstrates the effectiveness of the method.

Keywords: mobile laser scanning data; zebra crossings; adaptive thresholding; principal component analysis

1. Introduction

Road markings, as critical transportation infrastructure, provide drivers and pedestrians with information about traffic regulations, warnings, and guidance [1]. The recognition and extraction of road markings have been seen as important functions in many fields, such as traffic safety management [2,3], driver assistance [4–6], and intelligent transportation [7,8]. Traditional studies have mostly focused on digital images and videos [9–15]. The extraction results are sometimes incomplete or insufficient due to poor weather conditions, lighting conditions, and complex shadowing from trees. In addition, the results fail to provide accurate three-dimensional (3D) coordinates of objects, which are crucial inputs to intelligent transportation systems and 3D city modelling.

Recent years have seen the emergence of mobile laser scanning (MLS) as a leading technology for extracting information about the surfaces of urban objects. MLS systems, which integrate laser scanners, a global positioning system (GPS), an inertial navigation system (INS), and charge-coupled device (CCD) cameras [16], collect information, such as 3D geospatial, texture, and laser intensity data, from complex urban areas while a vehicle is on the move. Such systems have become a promising and cost-effective solution for rapid road environment modelling. Most methods proposed in previous studies are designed for point cloud classification [17–20], building footprint extraction, façade reconstruction [21–23], and detection of vertical pole-like objects [24–26] in a road environment. Only a few studies have explored the recognition and extraction of road markings.

ISPRS Int. J. Geo-Inf. 2016, 5, 125; doi:10.3390/ijgi5070125

www.mdpi.com/journal/ijgi


Jaakkola et al. [27] first generated georeferenced feature images according to elevation and intensity by applying an interpolation method and then segmented road markings and curbstones by applying thresholding and morphological operations to the elevation and intensity images. Yang et al. [28] calculated the weights and pixel values of grey images based on spatial distributions (e.g., planar distance, elevation difference, and point density) of laser scanning points, which improved their algorithm for generating feature images. Then, they applied an intensity filter and an elevation filter, followed by constraints on shape and distribution patterns. The above-mentioned methods transform 3D points into 2D images because addressing a large volume of unorganized points is time-consuming and complex. The transformation improves the computational efficiency and enables one to capitalize on well-established image processing methods. However, this transformation also causes roughness in detail, especially when extracting small objects, such as road markings.

Kammel [29] and Chen [30] applied the Radon transform and Hough transform to extract solid-edge-line and dashed-lane-line markings, respectively, from MLS points. These methods are effective when extracting straight markings; however, they exhibit a weakness in extracting curved markings. Since curved markings are usually irregular, it is difficult to choose a suitable curve model. Simple global intensity thresholding is often used to extract road markings [27,28,31]; however, the markings' non-uniform intensity makes this method less effective in some cases because intensity values are affected by material, laser incidence angle, and range. Guan et al. [32] proposed a novel method that segments the intensity images with multiple thresholds related to the point density. Using their method, an image was partitioned into several blocks in accordance with the point density distribution characteristics.
Within different blocks, local optimal thresholds were estimated to extract road markings. However, substantial noise was introduced in this method.

In addition to the extraction of road markings, the recognition of marking types is also a necessary and challenging task, especially for zebra crossings, which are located at urban road intersections and have important functions in traffic safety management. Mancini et al. [33] identified zebra crossings with area, perimeter, and length-width ratios following connected component labelling. Riveiro et al. [34] used a Canny edge detector and the standard Hough transform to detect a set of parallel lines that have similar directions as the road centreline. Yu et al. [35] distinguished zebra crossings from other rectangular-shaped markings according to the geometric perpendicularity of their distribution directions and road centrelines. These studies mostly focused on detecting stripes and did not provide specific information about the areas. For guide systems for blind individuals, mobile robot navigation, etc., it is impossible to confirm whether the frontal area is a zebra crossing without such information. Another problem is that such methods are invalid when the distribution directions of zebra crossings and road centrelines are not perpendicular. To overcome the aforementioned limitations, we propose a stepwise procedure for recognizing and reconstructing zebra crossings using mobile laser scanning data in this paper.
The contributions of this paper are as follows: (1) an adaptive thresholding method based on road surface partitioning is designed to compensate for non-uniformities in intensity data and extract all types of road markings; (2) a dispersion degree filtering method is applied to reduce the noise; and (3) zebra crossings are recognized and reconstructed according to geometrical features, so that we obtain more specific information about the area, including start positions, end positions, distribution directions of zebra crossings, and road centreline directions.

The remainder of this paper is organized as follows: the stepwise description of the proposed method is presented in Section 2; in Section 3, we test the proposed method on MLS data captured in Wuhan, China, and discuss the results; and, finally, conclusions are drawn in Section 4.

2. Method

The method consists of three main procedures: road surface segmentation, road marking extraction, and zebra crossing recognition and reconstruction. Figure 1 illustrates the complete experimental procedures used in this study.

Figure 1. The flowchart of the method.

2.1. Road Surface Segmentation

In an urban environment, road surfaces are generally flat, with small elevation jumps caused by curbstones on the road boundaries, as shown in Figure 2. The elevations of road boundary points change substantially more quickly than do those of road surface points. The gradient of a scalar field reflects the rate of change of the scalar; therefore, we attempt to separate roads from other points via elevation gradients and apply a region-growing algorithm to the elevation-gradient feature image for precise road surface segmentation.

Figure 2. A sample of a road profile.

2.1.1. Preprocessing

Correspondingly large data volumes and the complexity of urban street scenes increase the difficulty of creating a unified road model. Therefore, we use vehicle trajectory data (L) to section the point clouds into a set of blocks at an interval (d), as shown in Figure 3. To ensure that the road in each block is as flat and straight as possible, the value of d should be set smaller under undulating and winding road conditions.
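As an illustration of this sectioning step, the sketch below assigns each laser point to a block by arc length along the trajectory. The function name, the nearest-vertex assignment, and the default d are our own illustrative choices; the paper does not prescribe an implementation.

```python
import numpy as np

def section_by_trajectory(points, trajectory, d=20.0):
    """Assign each laser point to a block along the vehicle trajectory L.

    points:     (N, 3) array of laser scanning points
    trajectory: (M, 3) array of ordered trajectory vertices
    d:          block interval in metres; set smaller on undulating or
                winding roads, as recommended in Section 2.1.1.
    Returns an (N,) array of block indices.
    """
    # Cumulative arc length s at each trajectory vertex (XY plane).
    seg = np.linalg.norm(np.diff(trajectory[:, :2], axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])

    # Nearest trajectory vertex for every point (brute force; a KD-tree
    # scales better for full-size clouds).
    d2 = ((points[:, None, :2] - trajectory[None, :, :2]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)

    # Block index = number of whole intervals of length d travelled.
    return (s[nearest] // d).astype(int)
```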

Figure 3. An illustration of sectioning road data.

2.1.2. Elevation Filtering

Trees next to roads, street lamps, etc., may cause difficulties in extracting accurate road surface points. For instance, tree crowns can cover road boundaries when laser scanning points are projected onto the XY plane. Therefore, we extend the histogram concavity analysis algorithm proposed by Rosenfeld [36] and use the extended algorithm to select an appropriate threshold for elevation filtering. The algorithm is applicable to both unimodal and bimodal histograms.

As shown in Figure 4, we first draw a histogram hs based on F, one of the laser scanning points' feature properties. The class width is w, and each rectangle is numbered i = 1, ..., n. We added two points, (f1, 0) and (fn, 0), for defining region hs more conveniently and accurately.

Figure 4. Histogram concavity analysis.

For a rectangle i, let the feature value fi be the X coordinate of the upper side's midpoint, and let hi be the height:

fi = Fmin + (i − 1/2)w    (1)

To find the concavities of hs, we first construct its convex hull HS. This is the smallest convex polygon containing (f1, 0), (fn, 0) and (fi, hi) (i = 1, 2, ..., n), the midpoints of all the rectangles' upper sides. Hi is the height of HS when the feature value is fi. The depth of the concavity Di is determined as follows:

Di = Hi − hi    (2)

Any concavity in the histogram may be the location of a threshold; however, not all points of the concavity are good candidates. The deeper the concavity depth is, the larger the differences in objects' feature values between both sides; thus, we shall consider those points for which Di is a local maximum as candidates for positions of the threshold T:


T = fi, when Di = Dmax    (3)

We select elevation information of point clouds as the feature property for histogram concavity analysis in this study. In urban road environments, there are usually a large number of road surface points, and their distribution is concentrated. Trees, street lamps, and other objects that cause serious interference in road segmentation have greater heights, and their points are dispersed. Therefore, based on the elevation distribution characteristics of point clouds, we conclude that the peak of the histogram corresponds to the road surface and that "shoulders" on the right side of the peak correspond to those objects that could cause interference in road segmentation. As shown in Figure 5c, the possible threshold locates in the "shoulder" on the right. First, let hf be the elevation corresponding to the largest frequency, and let hmax be the highest elevation of all points. Then, calculate the threshold T in the range [hf, hmax] by analysing the histogram concavity. Finally, we filter out points with an elevation higher than T and use the remaining points for subsequent procedures.

Figure 5. A sample of elevation filtering: (a) point clouds before elevation filtering (coloured by elevation); (b) point clouds after elevation filtering (coloured by elevation); and (c) elevation histogram.

2.1.3. Segmentation by Region-Growing

To improve the computation speed of the proposed method and apply image processing algorithms, laser scanning points are projected onto an XY plane to generate a georeferenced feature image I. The grey value of each cell equals the central point's elevation calculated using the Inverse Distance Weighted (IDW) interpolation method [37] and normalization. Then, bilinear interpolation is used to smooth the image because there are no points in some cells, which causes image noise. Finally, the elevation feature image I is converted into an elevation-gradient feature image G as follows:

Gx(i, j) = (I(i, j + 1) − I(i, j − 1)) / 2
Gy(i, j) = (I(i + 1, j) − I(i − 1, j)) / 2
G(i, j) = √(Gx(i, j)² + Gy(i, j)²)    (4)

Since the elevation gradients of the road surface and road boundary are sufficiently distinct, we use a single threshold TG for image binarization. In the binarized image B, the pixel value is set to 1 if the corresponding pixel value in G is greater than TG; otherwise, it is set to 0. The result is shown in Figure 6b. For bridging gaps on road boundaries, dilation is applied to binarized images. Figure 6c shows the results of dilating the image B with a 3 × 3 structuring element, and the road surface is the black area surrounded by a clear white road boundary.

Figure 6. (a) Elevation-gradient image; (b) image after binarization; and (c) image after dilation.

Region-growing is an effective method for extracting the road surface in an image. The first step is to choose a point in the trajectory data and confirm that the value of the pixel in which the point locates is 0. Then, the pixel can be determined as a seed point. All of the pixels that are 8-connected to the seed point and whose values are also 0 are appended to the seed point to form a larger region. 8-connected pixels are neighbours to every pixel that touches one of their edges or corners. Then, we dilate the result of the region-growing process using a 3 × 3 structuring element to compensate for the error caused by the previous dilation. Finally, the point clouds of road surfaces are converted based on the optimized region-growing results.

2.2. Road Marking Extraction

2.2.1. Adaptive Thresholding Based on Road Surface Partitioning

Usually, a road surface is composed of asphalt and concrete and exhibits low or diffuse reflection properties when subject to an incident laser. Road markings are highly reflective white or yellow coatings painted on the road surface. Objects with higher reflectances correspond to stronger laser signals; therefore, the laser intensity value is a key feature for distinguishing road markings from road surfaces. However, the intensity values are also affected by the laser incidence angles and the distances between the target and the scanner centre, which makes single global thresholding less effective for segmentation. Therefore, adaptive intensity thresholding based on road surface partitioning is proposed to solve the problems caused by non-uniform intensities.

Generally, the farther away from the trajectory data, the lower the intensity of road markings. The materials that constitute different road sections also vary considerably. Thus, the road surface is partitioned into non-overlapping rectangles Recti, as shown in Figure 7. The X axis is the direction of the vehicle trajectory, and the Y axis is perpendicular to the X axis in the horizontal plane. The length and width of the rectangle are Rx and Ry, respectively. The size of the rectangle is related to the extent of intensity evenness. Rx and Ry should be set smaller when the intensity distributes more unevenly in order to ensure a uniform intensity distribution in each rectangle.

Figure 7. Road surface partitioning.

There are two possibilities concerning the number of point types in a rectangle: (a) only one type, i.e., road surface points, or (b) two types, i.e., road surface points and road marking points. Otsu's algorithm [38] is first used to find the optimal intensity threshold in each rectangle. Then, the two cases are separated based on the thresholding results. The point set PA = {p1, p2, ..., pm} represents the points whose intensities are larger than the threshold, and PB = {p1, p2, ..., pn} represents the remaining points in the rectangle. The 3D coordinates of pi are (xi, yi, zi), and its intensity value is Ii.

For case (a), the points in PA and PB are both road surface points. For case (b), the points in PA are road marking points, and the points in PB are road surface points. The distance of cluster centres between PA and PB in case (b) is far larger than that in case (a). This distance dI is calculated as follows:

dI = |(Σpi∈PA Ii)/m − (Σpi∈PB Ii)/n|    (5)

The ratio of the number of points between PA and PB is also a critical element to the judgement of cases. In Figure 7, Rect1 and Rect2 are examples of case (a) and case (b). Their thresholding results are shown in Figure 8. In case (b), the number of points in PA is substantially larger than that in PB, which leads to a high ratio. The ratio is defined as follows:

ratio = m/n    (6)

According to the above analysis, the two cases can be distinguished using the following formula:

∀Recti: if (dI < Td) and (ratio > Tr), case (a); otherwise, case (b)    (7)

where Td and Tr are the thresholds of the cluster-centre distance and the ratio of the number of points, respectively.
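Equations (5)–(7) combined with Otsu's algorithm [38] amount to the per-rectangle decision sketched below. The default Td and Tr values are placeholders (the paper tunes its thresholds per dataset), and the function names are ours.

```python
import numpy as np

def otsu_threshold(I, n_bins=64):
    """Otsu's method [38]: choose the histogram split that maximizes
    the between-class variance of the intensities."""
    h, edges = np.histogram(I, bins=n_bins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    w = h / h.sum()
    best_t, best_var = mids[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = w[:k].sum(), w[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:k] * mids[:k]).sum() / w0
        mu1 = (w[k:] * mids[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, edges[k]
    return best_t

def classify_rectangle(I, Td=15.0, Tr=0.5):
    """Decide case (a)/(b) for one rectangle via Eqs. (5)-(7); for case
    (b), return the marking candidates P_A. Td and Tr are illustrative."""
    I = np.asarray(I, float)
    t = otsu_threshold(I)
    PA, PB = I[I > t], I[I <= t]
    dI = abs(PA.mean() - PB.mean())            # Eq. (5)
    ratio = len(PA) / len(PB)                  # Eq. (6)
    if dI < Td and ratio > Tr:                 # Eq. (7): case (a)
        return "a", np.array([])
    return "b", PA                             # coarse marking points
```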

Figure 8. Intensity histogram: (a) Rect1; and (b) Rect2.

Finally, for case (b), all the points are reserved as the coarse results obtained from road Finally,for forcase case(b), (b),all allofof ofthe thepoints pointsinin inPPPAAare arereserved reservedas asthe thecoarse coarseresults resultsobtained obtainedfrom fromroad road Finally, A marking extraction. marking extraction. marking extraction. 2.2.2. Dispersion Degree Filtering 2.2.2.Dispersion DispersionDegree DegreeFiltering Filtering 2.2.2. Some parts road surfaces have similar material properties as those road markings, which Someparts partsofof ofroad roadsurfaces surfaceshave havesimilar similarmaterial materialproperties propertiesas asthose thoseofof ofroad roadmarkings, markings,which which Some causes noise in the coarse extraction results, as shown in Figure 9. The road marking points are more causesnoise noiseininthe thecoarse coarseextraction extractionresults, results,asasshown shownininFigure Figure9.9.The Theroad roadmarking markingpoints pointsare aremore more causes concentrated than the noise; therefore, noise is proposed to be removed according to the difference concentrated than the noise; therefore, noise is proposed to be removed according to the difference concentrated than the noise; therefore, noise is proposed to be removed according to the difference in in degrees. in dispersion dispersion degrees. dispersion degrees.

Figure 9. Dispersion degree filtering.

The dispersion degree D_p of a point p(x, y, z) is defined as follows:

D_p = (1/N_p) * Σ_{i=1}^{N_p} sqrt((x_i − x)^2 + (y_i − y)^2)    (8)

where N_p denotes the number of local neighbourhood points. By removing the points whose dispersion degrees are larger than the threshold T_D, accurately extracted road markings can be obtained.
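A minimal sketch of Equation (8) and the dispersion filter, assuming the Table 2 values N_P = 5 and T_D = 0.007 m; the brute-force neighbour search is for illustration only (a spatial index such as a k-d tree would be used on real point clouds):

```python
import math


def dispersion_degree(p, points, n_p=5):
    """Eq. (8): mean planar distance from p to its n_p nearest neighbours."""
    dists = sorted(math.hypot(q[0] - p[0], q[1] - p[1])
                   for q in points if q is not p)
    return sum(dists[:n_p]) / n_p


def filter_by_dispersion(points, t_d=0.007, n_p=5):
    """Discard points whose dispersion degree exceeds the threshold T_D."""
    return [p for p in points
            if dispersion_degree(p, points, n_p) <= t_d]
```

Points inside a dense marking patch have small mean neighbour distances and survive; isolated noise points have large dispersion degrees and are dropped.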

2.3. Zebra Crossing Recognition and Construction

2.3.1. The Model of Zebra Crossings

A zebra crossing is an area that consists of a group of broad white stripes painted on the road. As shown in Figure 10, our model of a zebra crossing contains the following four elements: L1 and L2 define the start and end positions, respectively; Vr is the direction of the road centreline, which is also the direction along which vehicles travel; and Vz is the distribution direction of the zebra crossing, which guides pedestrians to cross the road safely.


Figure 10. The model of zebra crossings: (a) the distribution direction is perpendicular to the road centreline; and (b) the distribution direction is oblique to the road centreline.

2.3.2. Detection of Zebra Stripes

Design standards in most countries provide regulations on the exact size and shape of road markings. It is advantageous to recognize different types of road markings based on their different sizes and shapes. The stripe is actually a rectangle with a fixed size. Therefore, there are two factors we can use to recognize stripes: (a) rectangular features and (b) fixed lengths Lz and widths Wz.

To cluster neighbouring road marking points, intensity feature images transformed from point clouds are binarized, followed by 8-connected component labelling. For each connected region, an ellipse with the same second moments as the 8-connected region is calculated. e is the eccentricity of the ellipse, and its value ranges from 0 to 1. The closer this value is to 1, the more likely the region is a rectangle. Based on experience, regions whose e values are larger than 0.99 are candidate stripes, and the corresponding points are denoted as the point set P = {p1, p2, ..., pk}.

To calculate the length and width of the candidate stripes, principal component analysis (PCA) is performed to judge the principal distribution direction of the points in P on the XY plane. For all the points in P, the correlation between xi and yi can be determined through their variances as follows:

cov(x, y) = (1/(k − 1)) * Σ_{i=1}^{k} (x_i − x̄)(y_i − ȳ)    (9)

where x̄ and ȳ are the average values of xi and yi. The covariance matrix C can be established using Equation (10):

C = [ cov(x, x)  cov(x, y) ]
    [ cov(y, x)  cov(y, y) ]    (10)

As shown in Figure 11, through the eigenvalue decomposition (EVD) of C, the eigenvector associated with the larger eigenvalue, V1, is defined as the first principal direction, which is parallel to the long side of the rectangle; the eigenvector associated with the smaller eigenvalue, V2, is defined as the second principal direction, which is parallel to the short side of the rectangle. One-dimensional matrices M1 and M2 are established by projecting all points' planar coordinates to V1 and V2, respectively. Then, the length Lz and width Wz of the region can be calculated as follows:

L_z = max(M_1) − min(M_1),  W_z = max(M_2) − min(M_2)    (11)
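The PCA computation of Equations (9)–(11), together with the size check of Equation (12), can be sketched in plain Python for planar points; this is a sketch under the stated equations (standard sizes 6 m × 0.4 m from Section 3.3), not the authors' code:

```python
import math


def stripe_dimensions(points):
    """Eqs. (9)-(11): PCA of the planar coordinates of one candidate stripe;
    returns (L_z, W_z) as extents along the two principal directions."""
    k = len(points)
    mx = sum(x for x, _ in points) / k
    my = sum(y for _, y in points) / k
    cxx = sum((x - mx) ** 2 for x, _ in points) / (k - 1)
    cyy = sum((y - my) ** 2 for _, y in points) / (k - 1)
    cxy = sum((x - mx) * (y - my) for x, y in points) / (k - 1)
    # Larger eigenvalue of the 2x2 covariance matrix C (Eq. (10)).
    lam1 = (cxx + cyy) / 2 + math.sqrt(((cxx - cyy) / 2) ** 2 + cxy ** 2)
    if abs(cxy) > 1e-12:
        v1 = (lam1 - cyy, cxy)          # eigenvector for lam1
    else:
        v1 = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    norm = math.hypot(*v1)
    v1 = (v1[0] / norm, v1[1] / norm)   # first principal direction V1
    v2 = (-v1[1], v1[0])                # second principal direction V2
    m1 = [x * v1[0] + y * v1[1] for x, y in points]
    m2 = [x * v2[0] + y * v2[1] for x, y in points]
    return max(m1) - min(m1), max(m2) - min(m2)


def is_zebra_stripe(lz, wz, ls=6.0, ws=0.4):
    """Eq. (12): accept stripes within +/-20% of the standard size."""
    return ls * 0.8 <= lz <= ls * 1.2 and ws * 0.8 <= wz <= ws * 1.2
```

Because the extents are measured along the eigenvectors, the result is invariant to the stripe's orientation on the XY plane.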


Figure 11. Principal component analysis of road markings.

The regions whose length and width satisfy the design standards can be reserved as zebra stripes. Considering wear on road markings and calculation errors in real-world cases, the value ranges of Lz and Wz are set as follows:

L_s × 0.8 ≤ L_z ≤ L_s × 1.2,  W_s × 0.8 ≤ W_z ≤ W_s × 1.2    (12)

where Ls and Ws are the standard length and width of zebra stripes.

2.3.3. Reconstruction of Zebra Crossings

The centroids of all stripes in a zebra crossing locate along a straight line. This line lies on the centre axis of the zebra crossing, which is important to area reconstruction.

Random sample consensus (RANSAC) [39] is an effective iterative algorithm for mathematical model fitting, such as linear fitting. By adjusting the number of iterations nR and the residual threshold TR, optimal model parameters can be estimated from a set of observed data containing noise. This is adopted to solve the problem of fitting the zebra crossings' centre axes in this paper.

We first calculate the coordinates of all stripe centroids on the XY plane. Then, the RANSAC algorithm is applied to these centroids. To ensure the accuracy of the results, there should be at least three points used for fitting with the estimated linear model. Finally, we directly obtain some important information: (a) the number of zebra crossings: the number of iterations; (b) the centre axes of zebra crossings: the lines fitted by RANSAC; and (c) the stripes belonging to the same zebra crossing. The distribution direction Vz is the same as the direction of the centre axis. The direction of the road's centreline Vr is calculated by averaging the stripes' first principal direction in a zebra crossing. L1 and L2 are obtained by translating the centre axis along Vr, where the translational distance is ±Ls/2. This completes the recognition and reconstruction of zebra crossings.

3. Results and Discussion

The point clouds used in the experiment were captured by an Optech Lynx mobile mapping system, which consists of two laser scanners, one GPS receiver, and an inertial measurement unit. The original data are given in the WGS-84 coordinate system and are transformed from longitude and latitude coordinates to planar X and Y coordinates using the Gauss projection. The survey area is in Guanggu, a part of the City of Wuhan, which is a major city in central China.

Figure 12 shows the three datasets selected for the evaluation of the performance of the proposed method. These areas contain vegetation (e.g., trees and bushes), street lamps, power lines, and cars. The roads in these datasets consist of straight sections, curved sections, and crossroads. Detailed information, including road length and the number of points, is presented in Table 1.
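A compact RANSAC line fit over stripe centroids, in the spirit of Section 2.3.3 (n_R = 5000 and T_R = 0.25 as reported in Section 3.3); the perpendicular-residual formulation and helper names are assumptions:

```python
import math
import random


def ransac_line(centroids, n_r=5000, t_r=0.25, min_inliers=3):
    """Fit one centre axis to stripe centroids with RANSAC.
    Returns (anchor point, unit direction, inliers), or None when no
    candidate line gathers at least min_inliers centroids."""
    best = None
    for _ in range(n_r):
        a, b = random.sample(centroids, 2)
        dx, dy = b[0] - a[0], b[1] - a[1]
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue
        dx, dy = dx / norm, dy / norm
        # Perpendicular residual of each centroid to the candidate line.
        inliers = [p for p in centroids
                   if abs((p[0] - a[0]) * dy - (p[1] - a[1]) * dx) <= t_r]
        if len(inliers) >= min_inliers and (best is None
                                            or len(inliers) > len(best[2])):
            best = (a, (dx, dy), inliers)
    return best
```

The returned inlier set groups the stripes belonging to one crossing, and the fitted direction gives the distribution direction Vz; removing the inliers and refitting would recover further crossings.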


Figure 12. An overview of the experimental data.

Table 1. Description of the datasets.

Dataset    Length/(m)    Number of Points
1          1200          45,175,744
2          1350          43,407,389
3          600           16,624,370

3.1. Segmentation of Road Surfaces

To section the experimental data into a number of blocks, we chose d = 50 m in dataset 2. In the other two datasets, we used d = 30 m because there are more curves and ramps. For each block, histogram concavity analysis was used to obtain elevation thresholds. Then, following elevation filtering, point clouds were converted into elevation-gradient feature images. The grid size is a critical parameter in image generation. When the size is too small, only a few points, or, possibly, no points, fall inside the grids, whereas a large size may result in low image quality. Taking dataset 1 as an example, a block of data was selected to generate elevation-gradient images with different grid sizes of 0.05, 0.07, 0.09, and 0.11 m. Figure 13 presents the comparison results. Visual inspection suggests that there are few noise points on the road surface and that the details are clear when the grid size is 0.09 m; therefore, this value was applied in the experiment. The grid sizes used in datasets 2 and 3 were set to 0.12 m and 0.10 m, respectively, in the same way.
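A much-simplified sketch of rasterising points into an elevation grid and reading an elevation jump per cell; the paper's feature-image generation follows the cited methods [19], so the per-cell statistic (lowest z) and 4-connected neighbourhood here are illustrative assumptions:

```python
def elevation_image(points, grid=0.09):
    """Rasterise (x, y, z) points into cells of size `grid` (the GSD),
    keeping the lowest elevation per cell."""
    cells = {}
    for x, y, z in points:
        key = (int(x // grid), int(y // grid))
        cells[key] = min(z, cells.get(key, float("inf")))
    return cells


def gradient_magnitude(cells, key):
    """Largest elevation jump from a cell to its 4-connected neighbours;
    large values flag road boundaries such as curbs."""
    i, j = key
    nbrs = [cells[k] for k in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
            if k in cells]
    return max((abs(cells[key] - z) for z in nbrs), default=0.0)
```

With a tiny grid most cells are empty and gradients become undefined; with a large grid the curb jump is averaged away, which is the trade-off the text describes.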

Figure 13. Different elevation-gradient images with different grid sizes: (a) GSD = 0.03 m; (b) GSD = 0.06 m; (c) GSD = 0.09 m; and (d) GSD = 0.12 m.

For the binarization of elevation-gradient images, a threshold should be determined. The grey values of road surfaces generally range from 0 to 0.005, and the grey values of road boundaries are approximately 0.015; therefore, any value between 0.005 and 0.015 could be set as the threshold, and we selected 0.005.

Finally, we segmented the road surfaces using a region-growing method; then, the 3D points associated with road surfaces could be extracted easily, as shown in Figure 14. The close-up views in the black rectangles indicate that the road surfaces are basically extracted accurately and completely.
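An 8-connected region-growing sketch on a binary grid, standing in for the segmentation step; the sparse-dict grid representation and the seed selection are assumptions, not the paper's implementation:

```python
from collections import deque


def region_grow(binary, seed):
    """Grow an 8-connected region from `seed` on a sparse binary grid
    (dict mapping (row, col) -> 0/1); returns the set of region cells."""
    region, queue = {seed}, deque([seed])
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                k = (i + di, j + dj)
                if k not in region and binary.get(k, 0) == 1:
                    region.add(k)
                    queue.append(k)
    return region
```

Seeding inside the binarized road area grows the connected road-surface region while leaving disconnected components (e.g., a parallel sidewalk) untouched; the same labelling idea serves the 8-connected component step of Section 2.3.2.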

Figure 14. The results of road surface segmentation.

3.2. Extraction of Road Markings

Several parameters and their values used in the extraction of road markings are listed in Table 2. They were mainly selected through a set of tests or based on prior knowledge. Then, road markings were extracted directly with adaptive thresholding and dispersion degree filtering from the road surface points. All types of road markings could be extracted fairly well. However, a few road markings were abraded by cars and pedestrians, leading to some of the extraction results being incomplete. Figure 15 shows a part of the extracted road markings, including solid lines, dotted lines, arrow markings, and diamond markings.

Table 2. Parameters of road marking extraction.

Name    Value      Name    Value
Rx      2 m        Tr      0.8
Ry      1 m        NP      5
Td      40         TD      0.007 m

Figure 15. The extracted road markings.



3.3. Recognition and Reconstruction of Zebra Crossings

The standard length Ls and width Ws of the zebra stripes are 6 m and 0.4 m, respectively, in the three datasets, which satisfy the design standards of zebra crossings in China. After recognizing stripes according to the above standards, the RANSAC algorithm was applied to the centroids of the stripes with an nR of 5000 and a TR of 0.25 to obtain comprehensive information about zebra crossing areas.

A comparative study was conducted to compare our proposed zebra crossing recognition method with a recently published method, Riveiro's method [34]. As listed in Table 3, a total of eleven zebra crossings were recognized at a recognition rate of 90.91% with our method, which outperforms the other method. One zebra crossing was not detected due to the low reflectivity of road markings caused by serious abrasion, which decreases the completeness of road marking extraction, as shown in Figure 16.

Table 3. Recognition results of zebra crossings.

Dataset    Number of Zebra Crossings    Recognition Rate of the Proposed Method (%)    Recognition Rate of Riveiro's Method (%)
1          3                            100.00                                         66.67
2          6                            100.00                                         66.67
3          2                            50.00                                          50.00
Total      11                           90.91                                          63.64

Figure 16. The unrecognized zebra crossing.

To further quantitatively evaluate the performance of our method, four measures were computed for each zebra crossing based on manually-extracted results. θz and θr represent the angle deviations of a zebra crossing's distribution direction and a road centreline's direction, respectively. The completeness r is used to describe how complete the detected zebra crossing areas are, and the correctness p is used to indicate what percentage of the detected zebra crossing areas are valid. r and p are defined as follows:

r = TP/AP    (13)

p = TP/VP    (14)

where TP, AP, and VP are the number of road surface points belonging to (1) the correctly detected zebra crossing areas using the proposed method; (2) the zebra crossing areas collected using manual visual interpretation; and (3) the whole detected zebra crossing areas using the proposed method, respectively.
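Equations (13) and (14) reduce to set arithmetic once road surface points are identified by indices; a minimal sketch:

```python
def completeness_correctness(detected, reference):
    """Eqs. (13)-(14): r = TP/AP and p = TP/VP, where detected (VP) and
    reference (AP) are sets of point indices and TP is their overlap."""
    tp = len(detected & reference)
    return tp / len(reference), tp / len(detected)
```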


As shown by the quality evaluation results in Table 4, the completeness and correctness of recognized zebra crossings are both greater than 90%, the value of θz is no greater than 2.5°, and the maximum value of θr is 1.2°. In summary, our proposed method exhibits good performance in recognizing and reconstructing zebra crossings.

Table 4. Qualitative evaluation results.

Dataset    Zebra Crossing    r (%)    p (%)    θz (°)    θr (°)
1          1                 95.97    96.25    2.50      1.20
1          2                 96.60    99.57    0.87      0.24
1          3                 99.08    95.45    0.08      0.06
2          1                 94.55    94.20    1.00      0.02
2          2                 91.70    97.87    0.28      0.15
2          3                 92.68    98.67    0.33      0.03
2          4                 95.69    91.94    0.50      0.44
2          5                 96.56    98.84    1.51      0.17
2          6                 96.40    98.93    1.51      0.46
3          1                 97.03    94.54    0.74      0.05
3          2                 /        /        /         /

4. Conclusions

In this paper, we have proposed an effective method for recognizing and reconstructing zebra crossings using mobile laser scanning data. The proposed method first converts point clouds into elevation-gradient images and subsequently applies region-growing-based road surface segmentation. Second, road marking points are extracted with adaptive intensity thresholding based on road surface partitioning and dispersion degree filtering. Finally, the zebra crossing areas are recognized and reconstructed according to geometrical features.

Three datasets acquired by an Optech Lynx mobile mapping system were used to validate our zebra crossing recognition and reconstruction method. The experimental results demonstrate that the proposed method performs well and obtains high completeness and correctness values. The experiment has indicated three main advantages of the method: (1) the method is effective even when the points of zebra crossings are incomplete; (2) the method can be effective when the distribution directions of zebra crossings and road centrelines are at arbitrary angles; and (3) more comprehensive information about zebra crossing areas, such as the extent of the area, is obtained. These research findings could contribute to a more rapid, cost-effective, and comprehensive approach to traffic management and help ensure maximum safety conditions for road users.

However, our method can currently only be applied in post-processing rather than in real time, because some parameters need to be selected based on prior knowledge or a set of tests. In the future, we will further study algorithms for selecting optimal parameters automatically. It is also important to enhance the computing efficiency of our method, because point clouds with better resolution and higher density are needed to obtain more detailed information about urban objects.
Acknowledgments: This study is funded by the Scientific and Technological Leading Talent Fund of the National Administration of Surveying, Mapping and Geoinformation (2014), the National 863 Plan of China (2013AA12A202), and the Wuhan 'Yellow Crane Excellence' (Science and Technology) program (2014).

Author Contributions: Lin Li provided the main idea of the study and designed the experiments. Shen Ying and Da Zhang performed the experiments together. You Li contributed to analyzing the experimental results. Da Zhang wrote the first version of the paper, and all the authors improved it.

Conflicts of Interest: The authors declare no conflict of interest.


References

1. China National Standardization Management Committee. GB 5768.3-2009: Road Traffic Signs and Markings. Part 3: Road Traffic Markings; China Standards Press: Beijing, China, 2009.
2. Carnaby, B. Poor road markings contribute to crash rates. In Proceedings of the Australasian Road Safety Research Policing Education Conference, Wellington, New Zealand, 14–16 November 2005.
3. Horberry, T.; Anderson, J.; Regan, M.A. The possible safety benefits of enhanced road markings: A driving simulator evaluation. Transp. Res. Part F Traf. Psychol. Behav. 2006, 9, 77–87.
4. Zheng, N.-N.; Tang, S.; Cheng, H.; Li, Q.; Lai, G.; Wang, F.-Y. Toward intelligent driver-assistance and safety warning system. IEEE Intell. Syst. 2004, 19, 8–11.
5. McCall, J.C.; Trivedi, M.M. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 2006, 7, 20–37.
6. Vacek, S.; Schimmel, C.; Dillmann, R. Road-marking analysis for autonomous vehicle guidance. In Proceedings of the European Conference on Mobile Robots, Freiburg, Germany, 19–21 September 2007.
7. Zhang, J.; Wang, F.-Y.; Wang, K.; Lin, W.-H.; Xu, X.; Chen, C. Data-driven intelligent transportation systems: A survey. IEEE Trans. Intell. Transp. Syst. 2011, 12, 1624–1639.
8. Bertozzi, M.; Bombini, L.; Broggi, A.; Zani, P.; Cerri, P.; Grisleri, P.; Medici, P. Gold: A framework for developing intelligent-vehicle vision applications. IEEE Intell. Syst. 2008, 23, 69–71.
9. Se, S.; Brady, M. Road feature detection and estimation. Mach. Vision Appl. 2003, 14, 157–165.
10. Chiu, K.-Y.; Lin, S.-F. Lane detection using color-based segmentation. In Proceedings of the IEEE Intelligent Vehicles Symposium, Las Vegas, NV, USA, 6–8 June 2005; pp. 706–711.
11. Sun, T.-Y.; Tsai, S.-J.; Chan, V. HSI color model based lane-marking detection. In Proceedings of the Intelligent Transportation Systems Conference, Toronto, ON, Canada, 17–20 September 2006; pp. 1168–1172.
12. Liu, W.; Zhang, H.; Duan, B.; Yuan, H.; Zhao, H. Vision-based real-time lane marking detection and tracking. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; pp. 49–54.
13. Soheilian, B.; Paparoditis, N.; Boldo, D. 3D road marking reconstruction from street-level calibrated stereo pairs. ISPRS J. Photogramm. Remote Sens. 2010, 65, 347–359.
14. Foucher, P.; Sebsadji, Y.; Tarel, J.-P.; Charbonnier, P.; Nicolle, P. Detection and recognition of urban road markings using images. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems, Washington, DC, USA, 5–7 October 2011; pp. 1747–1752.
15. Wu, P.-C.; Chang, C.-Y.; Lin, C.H. Lane-mark extraction for automobiles under complex conditions. Pattern Recogn. 2014, 47, 2756–2767.
16. Fang, L.; Yang, B. Automated extracting structural roads from mobile laser scanning point clouds. Acta Geod. Cartogr. Sin. 2013, 2, 260–267.
17. Zhao, H.; Shibasaki, R. Updating a digital geographic database using vehicle-borne laser scanners and line cameras. Photogramm. Eng. Remote Sens. 2005, 71, 415–424.
18. Vosselman, G. Advanced point cloud processing. In Proceedings of the Photogrammetric Week, Stuttgart, Germany, 7–11 September 2009; pp. 137–146.
19. Yang, B.; Wei, Z.; Li, Q.; Mao, Q. A classification-oriented method of feature image generation for vehicle-borne laser scanning point clouds. Acta Geod. Cartogr. Sin. 2010, 39, 540–545.
20. Wu, B.; Yu, B.; Huang, C.; Wu, Q.; Wu, J. Automated extraction of ground surface along urban roads from mobile laser scanning point clouds. Remote Sens. Lett. 2016, 7, 170–179.
21. Manandhar, D.; Shibasaki, R. Auto-extraction of urban features from vehicle-borne laser data. In Proceedings of the Symposium on Geospatial Theory, Processing and Applications, Ottawa, ON, Canada, 2002; pp. 650–655.
22. Li, B.; Li, Q.; Shi, W.; Wu, F. Feature extraction and modeling of urban building from vehicle-borne laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 934–939.
23. Tournaire, O.; Brédif, M.; Boldo, D.; Durupt, M. An efficient stochastic approach for building footprint extraction from digital elevation models. ISPRS J. Photogramm. Remote Sens. 2010, 65, 317–327.
24. Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H. Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data. Remote Sens. 2010, 2, 641–664.
25. Wu, B.; Yu, B.; Yue, W.; Shu, S.; Tan, W.; Hu, C.; Huang, Y.; Wu, J.; Liu, H. A voxel-based method for automated identification and morphological parameters estimation of individual street trees from mobile laser scanning data. Remote Sens. 2013, 5, 584–611.
26. Li, L.; Li, Y.; Li, D. A method based on an adaptive radius cylinder model for detecting pole-like objects in mobile laser scanning data. Remote Sens. Lett. 2016, 7, 249–258.
27. Jaakkola, A.; Hyyppä, J.; Hyyppä, H.; Kukko, A. Retrieval algorithms for road surface modelling using laser-based mobile mapping. Sensors 2008, 8, 5238–5249.
28. Yang, B.; Fang, L.; Li, Q.; Li, J. Automated extraction of road markings from mobile LiDAR point clouds. Photogramm. Eng. Remote Sens. 2012, 78, 331–338.
29. Kammel, S.; Pitzer, B. Lidar-based lane marker detection and mapping. In Proceedings of the IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 1137–1142.
30. Chen, X.; Kohlmeyer, B.; Stroila, M.; Alwar, N.; Wang, R.; Bach, J. Next generation map making: Geo-referenced ground-level LiDAR point clouds for automatic retro-reflective road feature extraction. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 4–6 November 2009; pp. 488–491.
31. Smadja, L.; Ninot, J.; Gavrilovic, T. Road extraction and environment interpretation from LiDAR sensors. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Saint-Mande, France, 1–3 September 2010; pp. 281–286.
32. Guan, H.Y.; Li, J.; Yu, Y.T.; Wang, C.; Chapman, M.; Yang, B.S. Using mobile laser scanning data for automated extraction of road markings. ISPRS J. Photogramm. Remote Sens. 2014, 87, 93–107.
33. Mancini, A.; Frontoni, E.; Zingaretti, P. Automatic road object extraction from mobile mapping systems. In Proceedings of the IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications (MESA), Suzhou, China, 8–10 July 2012; pp. 281–286.
34. Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P. Automatic detection of zebra crossings from mobile LiDAR data. Opt. Laser Technol. 2015, 70, 63–70.
35. Yu, Y.; Li, J.; Guan, H.; Jia, F.; Wang, C. Learning hierarchical features for automated extraction of road markings from 3-D mobile LiDAR point clouds. IEEE J. Sel. Top. Appl. 2015, 8, 709–726.
36. Rosenfeld, A.; De La Torre, P. Histogram concavity analysis as an aid in threshold selection. IEEE Trans. Syst. Man Cybern. 1983, SMC-13, 231–235.
37. Franke, R.; Nielson, G. Smooth interpolation of large sets of scattered data. Int. J. Numer. Meth. Eng. 1980, 15, 1691–1704.
38. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27.
39. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
