Multi-lane Detection Based on Original Omni-Directional Images*

Chuanxiang Li, Bin Dai, and Tao Wu

College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, P.R. China
[email protected]

Abstract. A lane detection method is presented that operates on original omni-directional images. By analyzing the projection model of the omni-directional multi-camera system, a parameterized representation of curves in panoramic images is derived: the curves traced by lane markings in the image can be described by the parameters of the corresponding lines in the world coordinate system. The fitted line parameters in the world coordinate system are then used directly to fit the lane model, for both parallel and non-parallel lanes. The performance of the feature extractor and the effectiveness of the proposed lane detection method are verified on real-world data.

Keywords: Intelligent vehicle, lane detection, omni-directional image, projection model, non-parallel lane, curve fitting.

1 Introduction

Lane detection is an important component of intelligent vehicles. Lanes provide vital information about the road and enhance the ego-vehicle's localization capability [1]. Plentiful vision-based lane detection methods have been reported in recent works such as [2] and [3]. However, most existing works use conventional cameras, which have a relatively small field of view compared with an omni-directional camera. In this paper, a lane detection method is proposed based on an omni-directional multi-camera system, which can acquire information from more than 80% of the full 360-degree sphere.

According to the projection model of the omni-directional camera, both straight lines and curves on the road plane are projected as complex curves in the panoramic image. Because the panoramic image is distorted, previous feature extraction and lane model fitting methods, which perform efficiently on conventional camera images, are not stable here. Some methods for lane detection using omni-directional cameras do exist [4, 5, 6, 7]. Those methods operate on bird's-eye view images, which are transformed from the original panoramic images with knowledge of the camera's intrinsic and extrinsic parameters. In the bird's-eye view images, the widths of lane markings and lanes

* This work was supported by the National Natural Science Foundation of China under grants 61075043 and 61375050.

© Springer-Verlag Berlin Heidelberg 2015 M. Gen et al. (eds.), Industrial Engineering, Management Science and Applications 2015, Lecture Notes in Electrical Engineering 349, DOI: 10.1007/978-3-662-47200-2_77


are approximately constant. This advantageous property facilitates feature extraction and lane model fitting, but the transformation incurs computational cost and loss of information. In this paper, our algorithm operates on the original panoramic images. A new feature extractor is proposed, based mainly on the photometric features of lane markings. Given the feature maps, the central question is how to fit the curves in the panoramic image. Many algorithms for extracting curves from panoramic images have been proposed [8, 9, 10]. Their curve-fitting results are the parameters of curves in the panoramic image or in a three-dimensional spherical coordinate system. For intelligent vehicle applications, the extracted curves must then be transformed to the world coordinate system for lane model fitting. In this paper, by analyzing the projection model of the omni-directional multi-camera system, the curves are described directly by the parameters of lines in world coordinates. The parameters of a curve can be calculated from two points on the curve and used to fit the lane model directly.

The paper is organized as follows. Section 2 introduces the projection model of the omni-directional multi-camera system and gives the parameterized representation of curves in the panoramic image. Section 3 proposes the lane detection algorithm based on panoramic images. Section 4 provides experimental results of feature extraction and lane detection on real-world data. Section 5 concludes the paper.

2 Lines in Panoramic Image

Our Ladybug3 camera is composed of six synchronized cameras (five in a horizontal ring and one on top), each delivering 1624×1232 pixels. The six raw images are mapped onto a sphere of fixed radius, yielding three-dimensional mapping coordinates on the sphere. The three-dimensional spherical coordinate system of the camera, the spherical coordinates of the panoramic image, and the image axes are shown in Fig. 1. Several projection methods exist for transforming the image on the sphere into a two-dimensional image: radial projection, cylindrical projection, dome projection, and cubic projection. Radial projection is one of the most popular, and its output is easy to use: the resulting panoramic image is two-dimensional and can be displayed and processed directly. A panoramic image has two spherical coordinates, θ for the horizontal direction and φ for the vertical direction. The projection equations are:

\[
\theta = \operatorname{atan2}(Y, X), \qquad
\varphi = \arccos\!\left(\frac{Z}{\sqrt{X^2 + Y^2 + Z^2}}\right)
\tag{1}
\]

where (X, Y, Z) is a point in the spherical coordinate system and (θ, φ) are the spherical coordinates of the panoramic image as shown in Fig. 1. The value of θ ranges over [−π, π] and the value of φ over [0, π].


Fig. 1. Definition of coordinate system. (a) Three-dimensional spherical coordinate system; (b) Bird's-eye view of spherical coordinate system; (c) Spherical coordinate of panoramic image and the image axis

The image axes (u, v) are related to the spherical coordinates (θ, φ) of the panoramic image as follows:

\[
u = \frac{\pi - \theta}{2\pi}\, nCols, \qquad
v = \frac{\varphi}{\pi}\, nRows
\tag{2}
\]

where nCols × nRows is the size of the panoramic image. Combining equations (1) and (2), we obtain the projective relation between a point (X, Y, Z) in the spherical coordinate system and (u, v) in the image coordinate system:

\[
u = \frac{\pi - \operatorname{atan2}(Y, X)}{2\pi}\, nCols, \qquad
v = \frac{1}{\pi}\arccos\!\left(\frac{Z}{\sqrt{X^2 + Y^2 + Z^2}}\right) nRows
\tag{3}
\]
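As a concrete illustration of Eq. (3), the following Python sketch projects a point from the camera's spherical coordinate frame into panoramic pixel coordinates. The default image size of 2048×1024 matches the panoramas used in Section 4; the function name is ours and not part of any Ladybug API.

```python
import numpy as np

def project_to_panorama(X, Y, Z, n_cols=2048, n_rows=1024):
    """Radial projection of a 3D point onto the panoramic image, Eq. (3)."""
    theta = np.arctan2(Y, X)                              # horizontal angle, Eq. (1)
    phi = np.arccos(Z / np.sqrt(X**2 + Y**2 + Z**2))      # vertical angle, Eq. (1)
    u = (np.pi - theta) / (2.0 * np.pi) * n_cols          # Eq. (2)
    v = phi / np.pi * n_rows
    return u, v
```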

A line in the camera coordinate frame can be represented as:

\[
\frac{X - X_1}{X_2 - X_1} = \frac{Y - Y_1}{Y_2 - Y_1} = \frac{Z - Z_1}{Z_2 - Z_1}
\tag{4}
\]

where (X₁, Y₁, Z₁) and (X₂, Y₂, Z₂) are two points on the line. The line segments of lane markings and lane boundaries lie on the road plane, so the Z-coordinate of those segments is a constant determined by the height of the camera. Line segments on the road plane can therefore be simplified to:

\[
Y = aX + b, \qquad Z = -Z_0
\tag{5}
\]

where (a, b) are the parameters of the line and Z₀ is the height of the camera system. A straight line on the road plane is projected into the panoramic image as:

\[
\left(\frac{-b}{a + \tan\!\left(\frac{2\pi u}{nCols}\right)} + \frac{ab}{a^2 + 1}\right)^{\!2}
+ \left(\frac{b}{a^2 + 1}\right)^{\!2}
+ \frac{(-Z_0)^2}{a^2 + 1}
= \frac{(-Z_0)^2}{a^2 + 1} \cdot \frac{1}{\cos^2\!\left(\frac{\pi v}{nRows}\right)}
\tag{6}
\]

From the above equations, a curve in the image coordinate system can be represented by the parameters (a, b) of the corresponding straight line in the camera coordinate system.
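To make this correspondence tangible, the sketch below samples the panoramic curve traced by a road-plane line (a, b): it walks along Y = aX + b at Z = −Z₀ and projects each sample with the project_to_panorama helper sketched above. The ±30 m sampling range is an assumption for illustration.

```python
def line_to_panoramic_curve(a, b, z0, n_samples=200, x_range=(-30.0, 30.0)):
    """Sample the image curve of the road-plane line Y = a*X + b, Z = -z0.
    Parametric counterpart of the implicit curve in Eq. (6)."""
    xs = np.linspace(x_range[0], x_range[1], n_samples)
    return [project_to_panorama(x, a * x + b, -z0) for x in xs]
```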

3 Multi-lane Detection Algorithms

In this section, we detail the lane detection method based on original panoramic images. First, lane-marking features are extracted from the original panoramic images using a novel extractor. Then, with the feature map, the line parameters of lane markings and lane boundaries are calculated from two points on each curve. Finally, non-parallel and parallel lanes are fitted with a uniform lane model.

3.1 Feature Extraction in Panoramic Image

In the radial projection model, pixels near the top and bottom of the image are stretched, so the straight lines and curves of lane markings appear stretched as curves in panoramic images. Therefore, most of the lane-marking feature extractors surveyed in [11], which perform efficiently on conventional camera images, fail here. A feature extractor designed for panoramic images is proposed in this paper. For each pixel (x, y) in the panoramic image, horizontal and vertical intensity averages are computed as follows:

\[
I_{horizontal}(x, y) = \frac{1}{8S_M + 1} \sum_{y_h = y - 4S_M}^{y + 4S_M} I(x, y_h)
\tag{7}
\]

\[
I_{vertical}(x, y) = \frac{1}{8S_M + 1} \sum_{x_v = x - 4S_M}^{x + 4S_M} I(x_v, y)
\tag{8}
\]

where S_M denotes the maximal width of lane markings, which is approximately a linear function of the vertical image coordinate. Given a threshold T, if the intensity I(x, y) of a pixel is higher than either T + I_{horizontal}(x, y) or T + I_{vertical}(x, y), the pixel is


selected as a candidate lane-marking feature. Sets of connected candidate pixels wider than the minimal lane-marking width S_m are then accepted as lane markings.
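A minimal NumPy/SciPy sketch of this extractor follows. For simplicity it treats S_M as a constant window half-size, although the paper models S_M as a linear function of the image row; the default threshold values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, label

def extract_lane_features(img, s_max=4, s_min=2, t=20):
    """Candidate lane-marking pixels per Eqs. (7)-(8): a pixel is kept if
    it is brighter than its horizontal or vertical neighbourhood mean by T."""
    img = img.astype(np.float32)
    win = 8 * s_max + 1
    mean_h = uniform_filter1d(img, win, axis=0)   # Eq. (7): average over y_h
    mean_v = uniform_filter1d(img, win, axis=1)   # Eq. (8): average over x_v
    cand = (img > mean_h + t) | (img > mean_v + t)
    # keep only connected components wider than the minimal marking width S_m
    labels, n = label(cand)
    out = np.zeros_like(cand)
    for i in range(1, n + 1):
        comp = labels == i
        if np.any(comp, axis=0).sum() >= s_min:   # component width in columns
            out |= comp
    return out
```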

3.2 Curve Fitting

According to equation (6), each curve c_i (i = 1, 2, ..., n) in panoramic image space can be represented by parameters a_i and b_i, which are the parameters of a straight line in the camera coordinate system. The parameters of each curve segment in image space can be calculated from two points on the curve. Let P₁ = (u₁, v₁) and P₂ = (u₂, v₂) be two points on a curve. Four pairs of a_i and b_i can be calculated from P₁ and P₂ according to equation (6), but only one pair is correct. To determine the correct pair, four virtual curves are generated in the panoramic image using the four candidate pairs; these virtual curves are compared with the real curve and the true pair is selected. Another point worth mentioning is that the accuracy of the parameters a_i and b_i of curve c_i depends on the locations of P₁ and P₂. In Fig. 2, the colored curves are calculated using the correspondingly colored endpoints: the red and blue endpoints lie on the edges of the lane marking, while the green endpoints lie on its centre line. As seen at the bottom of Fig. 2, the green curve coincides with all curve segments belonging to the discontinuous lane marking, whereas the red and blue curves do not. Compared with the marking edges, the centre line of the lane marking yields more accurate parameters. Accordingly, the endpoints of the centre lines of lane markings are chosen to calculate the curve parameters.
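The sketch below recovers (a, b) from two curve points. Rather than solving Eq. (6) algebraically and pruning the four candidate pairs as the paper does, it back-projects each pixel through the inverse of Eqs. (1)-(2) onto the road plane Z = −Z₀, which is equivalent under the same projection model and sidesteps the ambiguity; the helper names are ours.

```python
def backproject_to_road(u, v, z0, n_cols=2048, n_rows=1024):
    """Invert Eqs. (1)-(2): cast a ray through pixel (u, v) and intersect
    it with the road plane Z = -z0 (valid only below the horizon)."""
    theta = np.pi - 2.0 * np.pi * u / n_cols
    phi = np.pi * v / n_rows
    d = np.array([np.sin(phi) * np.cos(theta),
                  np.sin(phi) * np.sin(theta),
                  np.cos(phi)])                 # unit ray direction
    if d[2] >= 0:
        raise ValueError("pixel lies above the horizon")
    s = -z0 / d[2]                              # ray/plane intersection scale
    return s * d[0], s * d[1]                   # (X, Y) on the road plane

def curve_params_from_points(p1, p2, z0):
    """Line parameters (a, b) of Y = a*X + b from two curve pixels,
    ideally endpoints of the marking's centre line (cf. Fig. 2)."""
    x1, y1 = backproject_to_road(*p1, z0)
    x2, y2 = backproject_to_road(*p2, z0)
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1
```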

Fig. 2. Curve fitting. Colored points are endpoint pairs on lane markings; each colored curve is fitted using the correspondingly colored endpoints

3.3 Lane Model Fitting

The lane model encodes assumptions about the lane structure in the real-world environment. In this paper, a straight lane model is used, and each lane is delimited by approximately straight lane markings or boundaries. Considering the splitting and merging of lanes, a single lane may not be parallel to the others on the road plane. Each single lane L_j (j = 1, 2, ..., N) is described by three parameters:

\[
L_j = \{A_j, B_j, w_j\}
\tag{9}
\]

where A_j and B_j are the parameters of the centre line of lane L_j and w_j denotes the width of the lane. In urban environments, non-parallel lanes must be considered at intersections and at splitting or merging lanes. The parameters A_j of parallel lanes are approximately equal in the world coordinate system, whereas non-parallel lanes show distinct differences in A_j. From the parameters A_j of the candidate curves, several cluster centers C_k (k = 1, 2, ..., C_M) are generated using the K-means clustering algorithm. The lane model is then fitted as shown in Algorithm 1.

Algorithm 1. Lane Model Fitting
Input: the curves c_i = (a_i, b_i), i = 1, 2, ..., n
 1: r ← 1
 2: for k ← 1 to C_M do
 3:   r_l ← 0
 4:   for i_x ← 1 to n do
 5:     for i_y ← 1 to n do
 6:       if dist(c_ix, c_iy, C_k) < φ and |width(c_ix, c_iy) − W| < γ then
 7:         r_l ← r_l + 1
 8:         A_rl ← a_m
 9:         B_rl ← b_m
10:         w_rl ← width(c_ix, c_iy)
11:       end if
12:     end for
13:   end for
14:   if r_l > 0 then
15:     r ← r + 1
16:   end if
17: end for
18: return {A_rl, B_rl, w_rl}


An array of parallel lanes shares a common road direction; non-parallel lanes imply multiple road directions on the road plane. In Algorithm 1, r is the number of road directions on the road plane and r_l denotes the number of parallel lanes sharing a common direction. c_ix and c_iy are two curves among all candidate curves. Lane L_rl contains the two boundaries c_ix and c_iy, and width(c_ix, c_iy) is the width of lane L_rl. W is the predefined lane width, and a_m and b_m are the parameters of the centre line of lane L_rl. The function dist(c_ix, c_iy, C_k) gives the average distance between the cluster center C_k and the directions of the two curves. When this distance is below the threshold φ and the lane width falls within [W − γ, W + γ], the lane is accepted as a reasonable result.

4 Experimental Results

In this section, experimental results of the proposed lane detection system on real-world data are presented. First, the performance of the proposed feature extractor is evaluated by comparing its feature maps with ground-truth images. To compare the feature extractor on original panoramic images against bird's-eye view images, 599 road images were selected from varied scenes to construct the dataset, and the lane markings in all images were labeled manually. The bird's-eye view images and their ground truth were generated using the same extrinsic camera parameters, and the extractor's performance on these flat-plane images was measured as well. The original panoramic images are 2048×1024 pixels, and the corresponding bird's-eye view images are 301×501 pixels. The average number and percentage of lane-marking pixels in the original images and bird's-eye view images are shown in Table 1. According to the table, 60.76% of the lane-marking pixels are lost in transforming the original panoramic images to bird's-eye view images. The average percentage of lane-marking pixels in the bird's-eye view images is nevertheless higher, since those images contain only the road-plane scene.

Table 1. Lane-marking pixels in original images and bird's-eye view images

              Original Image    Bird's-eye View Image
Number        10158.17          3985.49
Percentage    0.0048            0.0264

Receiver Operating Characteristic (ROC) curves and Dice Similarity Coefficient (DSC) curves are chosen as evaluation metrics to quantify the performance of the feature extractor. The area under the ROC curve reflects the extractor's overall performance, while the maximum of the DSC curve gives the optimal threshold and the best achievable performance of the algorithm. For the same feature extractor applied to original images and bird's-eye view images, the ROC and DSC curves on our dataset are shown in Fig. 3 and Fig. 4. As seen from the areas under the ROC curves and the maxima of the DSC curves, the proposed feature extractor performs much better on the original panoramic images. At the same time,


the width of the DSC curve peak shows that the extractor is more robust on original images than on bird's-eye view images. In terms of feature extraction from panoramic imagery, operating on the original image outperforms operating on the bird's-eye view image. One reason for the performance decline on bird's-eye view images is the information loss during the transformation; another is the bright stripe noise that the transformation introduces.
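For reference, a minimal sketch of the DSC computation used to sweep out curves like those in Fig. 4 (our formulation of the standard metric, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice Similarity Coefficient between a binary feature map and the
    manually labeled ground-truth mask."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# One point of the DSC curve per extractor threshold T (illustrative sweep):
# dsc = [dice_coefficient(extract_lane_features(img, t=t), gt) for t in range(5, 60, 5)]
```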

Fig. 3. ROC curves of the same feature extractor on original image and bird's-eye view image

Fig. 4. DSC curves of the same feature extractor on original image and bird's-eye view image

Some qualitative results of the proposed lane detection method are given in Fig. 5. They cover different road scenes: lane change, entering a tunnel, driving in a tunnel, rain, lane splitting, lane merging, and tree shadow.


Fig. 5. Results of lane detection. The white lines denote the centre lines of the lanes. (1) Lane change; (2) entering a tunnel; (3) in a tunnel; (4) rain; (5) splitting lane; (6) merging lane; (7) tree shadow

5 Conclusion

In this paper, a multi-lane detection method based on omni-directional images was proposed. Instead of transforming the panoramic images into bird's-eye view images for lane detection, our algorithm detects lanes in the original omni-directional images. Based on the projection model of the omni-directional multi-camera system, each curve is represented by the parameters of a line in the world coordinate system. With the curve segments expressed in world coordinates, the lane model is fitted to obtain the lane parameters; the number of road directions on the road plane and the number of lanes in each direction are determined during lane model fitting. The proposed lane detection method was tested in real-vehicle experiments. Experimental results show that the same feature extractor performs better on original images than on bird's-eye view images, and real-data tests demonstrate the effectiveness of the proposed lane detection method, especially under adverse lighting conditions.


Future work will focus on developing a comprehensive lane model for both straight and curved roads. In addition, lane-marking feature extractors operating on panoramic images still need further investigation.

References

1. Tao, Z., Bonnifait, P., Frémont, V., Guzman, J.I.: Lane marking aided vehicle localization. In: 16th International IEEE Conference on Intelligent Transportation Systems, The Netherlands (2013)
2. Yenikaya, S., Yenikaya, G., Düven, E.: Keeping the vehicle on the road - a survey on on-road lane detection systems. ACM Computing Surveys 46(1), 1-43 (2013)
3. Hillel, A.B., Lerner, R., Levi, D., Raz, G.: Recent progress in road and lane detection: a survey. Machine Vision and Applications 25(3), 727-745 (2014)
4. Amemiya, M., Ishikawa, K., Kobayashi, K., Watanabe, K.: Lane detection for intelligent vehicle employing omni-directional camera. In: SICE Annual Conference, pp. 2166-2170 (2004)
5. Ishikawa, K., Odagiri, K., Kobayashi, K., Watanabe, K.: Lane detection by using omni-directional camera for outdoor terrains. In: SICE Annual Conference, Osaka, Japan (2002)
6. Cheng, S.Y., Trivedi, M.M.: Lane tracking with omni-directional cameras: algorithms and evaluation. EURASIP Journal on Embedded Systems 2007(46972), 1-8 (2007)
7. Li, C.X., Dai, B., Wu, T., Nie, Y.M.: Multi-lane detection in urban driving environments employing omni-directional camera. In: Proceedings of the IEEE International Conference on Information and Automation, pp. 284-289 (2014)
8. Mei, C., Malis, E.: Fast central catadioptric line extraction, estimation, tracking and structure from motion. In: International Conference on Intelligent Robots and Systems, pp. 4774-4779 (2006)
9. Bazin, J.-C., Demonceaux, C., Vasseur, P., Kweon, I.: Rotation estimation and vanishing point extraction by omnidirectional vision in urban environment. International Journal of Robotics Research 31(1), 63-81 (2012)
10. Barreto, J.P., Araujo, H.: Fitting conics to paracatadioptric projections of lines. Computer Vision and Image Understanding 101(3), 151-165 (2006)
11. Veit, T., Tarel, J.-P., Nicolle, P., Charbonnier, P.: Evaluation of road marking feature extraction. In: Proceedings of the 11th International IEEE Conference on Intelligent Transportation Systems, pp. 174-181 (2008)