2016 IEEE International Conference on Systems, Man, and Cybernetics • SMC 2016 | October 9-12, 2016 • Budapest, Hungary
Fast Lane Boundary Recognition by a Parallel Image Processor

Chinthaka Premachandra, Member, IEEE, Ryo Gohara, and Kiyotaka Kato, Member, IEEE

Abstract—Much past research related to road domain detection has focused on lane detection. Commonly, this is performed by applying edge detection to a road image, then applying a Hough transform to perform straight-line detection. Detected straight lines are then analyzed to extract lane boundaries. However, the Hough transform is calculation intensive, requiring long processing times. This paper applies a parallel image processor to lane detection and investigates a Hough transform suited to parallel processing to realize faster lane detection.
I. INTRODUCTION

Vehicle-mounted cameras play a central role in both driving support and automated driving systems, typically with driving-environment recognition performed by processing the images obtained from the camera. Generally, image processing is used for automatic recognition of important information that the driver must be aware of, such as that related to lanes [1]-[13], road obstacles [14]-[18], oncoming traffic [19]-[21], road signs [22]-[25], and traffic signals [26]. Previous studies have mostly performed image processing using personal computers or similarly powerful hardware. However, there is ongoing development of small, light vehicles adapted to various needs, so when introducing driving support and automated driving technologies that use vehicle-mounted cameras, the development of not just cameras designed for small vehicles, but also miniaturized hardware for image processing, will be of high concern.

Toward that goal, we are investigating image processing suitable for miniaturized hardware, applied to images captured by vehicle-mounted cameras. This work has focused primarily on the detection of suddenly appearing road obstacles [18]. We have realized decreased processing times for various processing tasks such as binarization, smoothing, and feature extraction, allowing for the detection of road obstacles.

This paper addresses rapid lane boundary detection using miniaturized hardware. Lane detection is an important aspect of road boundary detection, and will be absolutely necessary to achieve automated driving in particular. Rapid lane detection will furthermore be important for performing high-speed lane changes while driving on expressways. Many lane-boundary detection methods first perform straight-line detection on images from cameras installed pointing in the direction of travel, and extract lane boundary lines from the detected lines. The Hough transform is commonly used for straight-line detection [1]-[13]. An outstanding problem, however, is that the parameters of the formula describing the recognition target are determined by a voting process, leading to long processing times. While there has been research on performing the Hough transform on a GPU to speed straight-line detection [27], the present research has the goal of realizing miniaturized, lightweight hardware.
Chinthaka Premachandra is an assistant professor with the School of Engineering, Shibaura Institute of Technology, 135-8548 Tokyo, Japan (phone: +81-03-5859-8308; fax: +81-03-5859-8308; e-mail: [email protected]).
978-1-5090-1897-0/16/$31.00 ©2016 IEEE
Fig. 1: General-purpose processor
Fig. 2: Memory array-like processor
Ryo Gohara is a Master's course student with the Graduate School of Engineering, Tokyo University of Science, 125-8585 Tokyo, Japan. Kiyotaka Kato is a professor with the Graduate School of Engineering, Tokyo University of Science, 125-8585 Tokyo, Japan.

We therefore aim at applying a lightweight parallel processor for image recognition. This processor, called the
Integrated Memory Array for Cars (IMAPCAR2, Renesas Electronics Corp.), is approximately the size of a business card and is applied here to increase the speed of lane-extraction processing, including the Hough transform.

In general image processing using a personal computer, the pixels of an entire image are searched in turn as the desired processing is applied. Multiprocessors, in contrast, can split images into multiple parts and process those parts simultaneously, allowing simultaneous searches over image pixels. Figures 1 and 2 illustrate the structure of a general-purpose processor and the structure of the IMAPCAR2 processor array. The IMAPCAR2 has 64 parallel-aligned processor elements (PEs), arranged like the elements in a memory array (Fig. 2). To realize decreased processing times, this study investigates implementation of the complex aspects of the Hough transform, including the voting process, in a way suited to these characteristics of parallel processors.

The remainder of this paper is organized as follows. Section 2 describes research related to lane detection. Section 3 presents our investigations regarding lane detection and implementation of the Hough transform in a manner suited to the parallel processor. Section 4 describes the experiments performed in this study, and Section 5 gives a summary of the research and areas for future research.
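As an illustration of this column-parallel organization, the strip split across 64 PEs can be emulated in ordinary Python. The thread pool, strip layout, and binarization threshold below are illustrative assumptions for the sketch, not the IMAPCAR2 implementation:

```python
from concurrent.futures import ThreadPoolExecutor

N_PE = 64  # the IMAPCAR2 has 64 processor elements (PEs)

def split_columns(image, n=N_PE):
    """Split a row-major image (list of rows) into n vertical strips,
    mimicking the left-to-right column assignment to PEs.
    Assumes the width is divisible by n (e.g. 640 / 64 = 10)."""
    step = len(image[0]) // n
    return [[row[i * step:(i + 1) * step] for row in image] for i in range(n)]

def binarize_strip(strip, thresh=128):
    """Per-PE work: binarize one strip (255 for bright pixels, else 0).
    The threshold value here is an arbitrary illustrative choice."""
    return [[255 if p >= thresh else 0 for p in row] for row in strip]

def process_parallel(image):
    """Run the per-strip work concurrently, then stitch the strips
    back together row by row."""
    with ThreadPoolExecutor(max_workers=N_PE) as pool:
        strips = list(pool.map(binarize_strip, split_columns(image)))
    return [sum((s[r] for s in strips), []) for r in range(len(image))]
```

On the real device each strip resides in one PE's local memory; in CPython this sketch only models the data layout and task split, since Python threads do not give true CPU parallelism.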
Fig. 3: Straight line in the Cartesian plane

Fig. 4: Hough space

II. RELATED WORK ON HOUGH TRANSFORM-BASED LANE DETECTION

Use of the Hough transform allows for more accurate detection from camera images, leading to accurate lane boundary detection. Wang et al. proposed a model-based lane detection method that performs lane extraction primarily through dynamic programming and the Hough transform [1]. Low et al. proposed a similar method [2]. Kuan et al. proposed a system for determining whether a running vehicle is within a lane [3]. In that system, the vehicle position is determined through comparison with lane boundaries, with the Hough transform performing the predominant role in lane boundary detection. Umamaheswari et al. investigated steering wheel angle estimation for automated driving, in which lane boundary detection is performed using the Hough transform, the central point of the lane is measured, and the steering angle is estimated based on that central point [4]. Li et al. proposed a lane detection method using inverted geometry and the Hough transform [5]. In this method, circle inversion is used to form all lines in a road image other than those representing lane boundaries into curves, and the Hough transform is used to detect lane boundaries. Khalifa et al. proposed a lane detection algorithm for the realization of automated driving [6]. This algorithm, too, primarily performs lane boundary detection by applying the Hough transform after edge detection. Chang et al. investigated lane detection in situations where shadows from peripheral trees and similar objects cause variations in road surface coloration [7]. In that method, the assumed lane direction is used to reduce noise in images to which edge detection has been applied. Processing is then performed to emphasize lane boundary edges, then the Hough transform is used to realize lane boundary detection. Yoo et al. similarly proposed a method for lane detection on roads with mixed coloration, and that method partially uses the Hough transform [8]. In the method of Wang et al., edge detection is performed, then the Hough transform is used to detect the centerline between two adjacent lanes. The Hough transform is then reapplied to detect the boundary between left and right lanes by comparing their edge information [9]. As described above, in many examples of lane boundary detection, edge detection is performed on a road image, after which the Hough transform is used to extract boundaries through straight-line detection. Phueakjeen et al. introduced a method for detecting complex edges and analyzed appropriate edge detection methods [10]. Collado et al. proposed an algorithm capable of not only lane boundary detection, but also extraction of the number of lanes on a road [11]. Here again, part of this algorithm uses the Hough transform. Daigavane investigated lane boundary detection on straight roads [12]. This method is characterized by using an improved Canny edge detector for edge detection in road
images. Ghazali et al. proposed a method for lane boundary detection using Hough space reduction and processing of only local domains in road images [13]. When implemented on a personal computer, this method is faster than conventional methods.

As shown above, many examples of lane boundary detection apply the Hough transform, which is calculation intensive and requires long processing times when performed on a personal computer. Meanwhile, small, lightweight vehicles are becoming increasingly popular. In consideration of application to such vehicles, we use a miniaturized parallel processor for image recognition, and investigate implementation of a Hough transform suited to the features of that processor, with the goal of faster lane boundary detection. We also conduct a comparative investigation between lane boundary detection using the conventional Hough transform and that using a reduced Hough space [13]. The following section describes the parallel processor used for image recognition, and Section 4 describes a Hough transform appropriate for that processor.

III. FAST HOUGH TRANSFORM IN PARALLEL PROCESSING

A. Hough Transform

The Hough transform is generally applied to extracting image features such as straight lines and curves. The following describes the straight-line detection method used in this study. A straight line in a Cartesian plane (Fig. 3) can be described by Eq. (1):

ρ = x cos θ + y sin θ    (1)

Fig. 5: Hough space creation

Fig. 6: 64 PEs load image

Fig. 7: Storing feature point coordinates within loaded image

Fig. 8: Voting to Hough space

Using Eq. (1) to plot Cartesian-plane coordinates as ρ and θ in a Hough space gives a plot like that shown in Fig. 4, and since the ρ and θ values of points on a line in the Cartesian plane will be the same in Hough space, their curves will intersect at a point. To use this method to detect straight lines, we first use Eq. (1) to plot image feature points in Hough space, and after all have been plotted, determine the ρ and θ values receiving the maximum number of votes. The determined ρ and θ values and Eq. (1), in the modified form of Eq. (2), are then used to plot a straight line in the Cartesian plane.
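The voting procedure just described can be sketched in plain Python. The accumulator resolution (64 θ steps, matching the later PE split, and 180 ρ bins) is an illustrative choice for this sketch, not a value prescribed by the paper:

```python
import math

def hough_line_detect(points, width, height, n_theta=64, n_rho=180):
    """Vote edge feature points into a (rho, theta) accumulator using
    Eq. (1), rho = x*cos(theta) + y*sin(theta), then return the
    (rho, theta) cell with the most votes."""
    rho_max = math.hypot(width, height)          # largest possible |rho|
    acc = [[0] * n_theta for _ in range(n_rho)]  # rows: rho bins, cols: theta
    for x, y in points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)  # Eq. (1)
            r = int((rho + rho_max) / (2 * rho_max) * (n_rho - 1))
            acc[r][t] += 1
    # the peak of the voting surface identifies the dominant line
    votes, r, t = max((acc[r][t], r, t)
                      for r in range(n_rho) for t in range(n_theta))
    rho = (2 * rho_max) * r / (n_rho - 1) - rho_max
    return rho, t * math.pi / n_theta, votes
```

For feature points on the diagonal y = x, for example, the peak appears at θ = 3π/4 with ρ ≈ 0; the detected line is then drawn using the Eq. (2) form of the relation.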
y = (ρ − x cos θ) / sin θ    (2)

B. Hough Transform Process on a Parallel Image Processor

This section explains straight-line detection and graphing using the Hough transform with parallel processing on the IMAPCAR2. As Fig. 5 shows, a ρ–θ Hough space is prepared for each of the 64 IMAPCAR2 PEs, with the horizontal axis 0–π split into 64 parts for distribution to the PEs. The pixels in a 640 × 360 edge-detected image are likewise split into 64 columns, which are assigned left-to-right to the PEs (Fig. 6). These edge-detected images are binarized, with edge pixels in white and all other pixels black. Figure 7 shows the image in Fig. 6 as read into the PEs. The pixels in this image are processed in parallel by column, extracting white effective pixels determined to be feature points by edge detection and loading their x–y coordinates into the IMAPCAR2 controller (CP). After searching through the loaded pixels, each PE calculates ρ values using Eq. (1), based on the x–y coordinates of effective pixels stored in the CP and the θ values assigned to that PE's Hough space. Curves are graphed by voting according to ρ values in the Hough space created for each PE (Fig. 8). To process the entire image in Fig. 6, this process, from input/output of the 64 PEs through voting to the Hough space, occurs ten times for the full input image. After the entire image has been processed and all voting completed, we determine the ρ values and associated θ values that maximize the voting values in each PE's Hough space (Fig. 9). Using the determined ρ and θ values, y values are calculated in 64 columns, starting from the left as in the original reading, and the resulting line is graphed. In this way, voting to Hough spaces, maximal-value determination, and straight-line output are performed in simultaneous coordination between the 64 PEs, realizing decreased processing times.

Implementation of a Hough transform on this processor can also be performed without parallel processing, but processing times increase. This paper investigates methods for decreasing processing times for complex Hough transforms through the use of a multiprocessor for image processing; the next section also presents experiments on lane boundary extraction using a conventional Hough transform. These methods include general image processing techniques other than the Hough transform, such as edge detection and binarization, which can likewise be performed more rapidly through multiprocessing. We have previously investigated methods for speeding such general image processing [18][28]. Table 1 shows the results of general image processing using a 64-PE multiprocessor designed for image recognition.

Fig. 9: Determination of ρ and θ

Table 1: Processing time comparison

Image processing    OpenCV (ms)    IMAPCAR2 (ms)
Binarization           2.81           0.973
Averaging filter      14.11           1.094
Gaussian filter       10.38           1.083
Laplacian filter      12.35           1.071

IV. EXPERIMENTS
A. Experiment Description

To validate the effectiveness of the proposed method, we mainly conducted two kinds of experiments:

1. We compared Hough transform processing times between OpenCV on a personal computer and the parallel image processor. Here, we performed experiments using the proposed method with parallel processing (PIP-prop) and without parallel processing (PIP-orig). In each experiment we performed edge detection on the original image, coloring detected edges white and all other pixels black, then performed Hough transform processing. Note that only Hough transform processing times were evaluated.

2. We performed a comparative investigation between the proposed Hough transform method and conventional lane detection methodologies that incorporate Hough transform processing. In particular, we investigated lane boundary detection using the general Hough transform as proposed by Umamaheswari [4] and using the Hough space reduction proposed by Ghazali [13]. In each method, processing other than the Hough transform was also performed in a manner suited to parallel processing.

All images used for lane boundary detection were acquired from vehicle-mounted cameras. Images acquired under varying environmental conditions were used for Hough transform processing. In all cases, images had dimensions of 640 × 480 pixels.
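A minimal sketch of how per-transform processing times can be isolated, as in experiment 1: the harness below times only the call under test with time.perf_counter, using a hypothetical stand-in workload (the paper's OpenCV and IMAPCAR2 figures come from their own environments, not from a harness like this):

```python
import time

def time_ms(fn, *args, repeats=10):
    """Average wall-clock time of fn(*args), in milliseconds.
    Only the call itself sits inside the timed region, mirroring the
    paper's evaluation of Hough transform time alone."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - t0) * 1000 / repeats

def toy_workload(n):
    """Hypothetical stand-in for the Hough transform stage being timed."""
    return sum(i * i for i in range(n))

elapsed = time_ms(toy_workload, 10_000)
```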
B. Results

Tables 2 and 3 present the results for experiments 1 and 2, respectively. These results show that the proposed Hough transform method reduces processing time, and consequently also reduces the processing time of lane boundary detection.
Table 2: Average computational time comparison

Implementation method    Average computation time (ms)
OpenCV                   51.2
PIP-orig                 28.4
PIP-prop                  5.2

Table 3: Average computational time comparison of road lane boundary detection

Method               OpenCV (ms)    PIP-orig (ms)    PIP-prop (ms)
Umamaheswari [4]        158.4           36.8             7.1
Ghazali [13]            168.2           37.3             7.6
V. CONCLUSION

We investigated lane boundary detection using miniaturized vehicle-mounted cameras and hardware. Camera images were converted to a form suited to the hardware's characteristics with the goal of more rapid boundary detection. We succeeded in decreasing the computation times for the Hough transform, a computationally intensive processing method that is well suited to lane detection. We verified that the proposed method can detect lane boundaries within a few milliseconds.

The present research proposed a method for lane boundary detection, but did not consider applications of the method. Future research should investigate applications to vehicle control, such as by connecting the parallel processor and miniaturized hardware to control devices.

REFERENCES

[1] J. Wang, Y. Chen, J. Xie, and H. Lin, "Model-based Lane Detection and Lane Following for Intelligent Vehicles," Proc. of 2010 Second International Conference on Intelligent Human-Machine Systems and Cybernetics, pp. 170-175, Aug. 2010.
[2] C. Y. Low, H. Zamzuri, and S. A. Mazlan, "Simple Robust Road Lane Detection Algorithm," 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1-4, June 2014.
[3] L. K. Kuan, N. H. Ismail, T. S. A. Rehman, and P. P. D. Saadon, "Lane Guidance Warning System," Proc. of International Conference on Computer and Communication Engineering (ICCCE 2012), pp. 864-868, July 2012.
[4] V. Umamaheswari, S. Amarjyoti, T. Bakshi, and A. Singh, "Steering Angle Estimation for Autonomous Vehicle Navigation Using Hough and Euclidean Transform," Proc. of 2015 IEEE International
Conference on Signal Processing, Informatics, Communication and Energy Systems, pp. 1-5, 2015.
[5] J. Li, X. J. An, E. K. Shang, and H. G. He, "Lane Detection Using Inversion Transform," Proc. of 2011 International Conference on Wavelet Analysis and Pattern Recognition, pp. 109-114, July 2011.
[6] O. H. Khalifa and A. H. Hashin, "Vision-Based Lane Detection for Autonomous Artificial Intelligent Vehicles," Proc. of 2009 IEEE International Conference on Semantic Computing, pp. 636-641, Sept. 2009.
[7] C. Y. Chang and C. H. Lin, "An Efficient Method for Lane-Mark Extraction in Complex Conditions," Proc. of 9th International Conference on Ubiquitous Intelligence and Computing and 9th International Conference on Autonomic and Trusted Computing, pp. 330-336, Sept. 2012.
[8] H. Yoo, U. Yang, and K. Sohn, "Gradient-Enhancing Conversion for Illumination-Robust Lane Detection," IEEE Transactions on Intelligent Transportation Systems, Vol. 14, No. 3, pp. 1083-1094, Sept. 2013.
[9] X. Wang, Y. Wang, and C. Wen, "Robust Lane Detection Based on Gradient-Pairs Constraint," Proc. of the 30th Chinese Control Conference, pp. 3181-3185, July 2011.
[10] W. Phueakjeen, N. Jindapetch, L. Kuburat, and N. Suvanvorn, "A Study of the Edge Detection for Road Lane," Proc. of 8th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, pp. 995-998, May 2011.
[11] J. M. Collado, C. Hilario, A. D. L. Escalera, and J. M. Armingol, "Detection and Classification of Road Lanes with a Frequency Analysis," Proc. of IEEE Intelligent Vehicles Symposium, pp. 78-83, June 2005.
[12] P. M. Daigavane and P. R. Bajaj, "Road Lane Detection with Improved Canny Edges Using Ant Colony Optimization," Proc. of Third International Conference on Emerging Trends in Engineering and Technology, pp. 76-80, Nov. 2010.
[13] K. Ghazali, R. Xiao, and J. Ma, "Road Lane Detection Using H-Maxima and Improved Hough Transform," Proc. of 2012 Fourth International Conference on Computational Intelligence, Modeling and Simulation, pp. 205-208, Sept. 2012.
[14] Z. Khaid, E. A. Mohamed, and M. Abdenbi, "Stereo vision-based road obstacles detection," Proc. of 8th International Conference on Intelligent Systems: Theories and Applications (SITA), pp. 1-6, May 2013.
[15] I. Cabani, G. Toulminet, and A. Bensrhair, "Color Stereoscopic Steps for Road Obstacle Detection," Proc. of 32nd Annual Conference on IEEE Industrial Electronics (IECON 2006), pp. 3255-3260, 2006.
[16] X. Wang, L. Xu, H. Sun, J. Xin, and N. Zheng, "Bionic vision inspired on-road obstacle detection and tracking using radar and visual information," Proc. of IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), pp. 39-44, Oct. 2014.
[17] P. Santana, M. Guedes, L. Coreia, and J. Barata, "A saliency-based solution for robust off-road obstacle detection," IEEE International Conference on Robotics and Automation (ICRA), pp. 3096-3101, May 2010.
[18] R. Gohara, C. Premachandra, and K. Kato, "Smooth Automatic Vehicle Stopping Control System for Unexpected Obstacles," Proc. of 10th Asia-Pacific Symposium on Information and Telecommunication Technologies, Aug. 2015.
[19] A. Barth and U. Franke, "Tracking oncoming and turning vehicles at intersections," 13th International IEEE Conference on Intelligent Transportation Systems, pp. 861-868, Sept. 2010.
[20] H. Namba and M. Muneyasu, "A detection and tracking method based on POC for oncoming cars," Proc. of International Symposium on Intelligent Signal Processing and Communications Systems, pp. 1-4, Feb. 2009.
[21] S. Sivaraman and M. M. Trivedi, "Real-time vehicle detection using parts at intersections," Proc. of 15th IEEE International Conference on Intelligent Transportation Systems (ITSC), pp. 1519-1524, Sept. 2012.
[22] E. Shoba and A. Suruliandi, "Performance analysis on road sign detection, extraction and recognition techniques," Proc. of International Conference on Circuits, Power and Computing Technologies, pp. 1167-1173, March 2013.
[23] W. Liu, X. Chen, B. Duan, H. Dong, P. Fu, H. Yuan, and H. Zhao, "A system for road sign detection, recognition and tracking based on multi-cues hybrid," Proc. of IEEE Intelligent Vehicles Symposium, June 2009.
[24] Y. R. Huang and R. H. Cao, "Fast road signs recognition using contour features," Proc. of 6th International Congress on Image and Signal Processing (CISP), pp. 101-106, Dec. 2013.
[25] S. S. M. Sallah, F. A. Hussin, and M. Z. Yusoff, "Road sign detection and recognition system for real-time embedded applications," International Conference on Electrical, Control and Computer Engineering (INECCE), pp. 213-218, June 2011.
[26] J. Balcerek, A. Konieczka, T. Marciniak, A. Dabrowski, K. Mackowiak, and K. Piniarski, "Automatic detection of traffic lights changes from red to green and car turn signals in order to improve urban traffic," Proc. of Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), pp. 110-115, Sept. 2014.
[27] T. Vladimir, J. Dongwoon, and D. H. Kim, "Hough Transform with Kalman Filter on GPU for Real-Time Line Tracking," Proc. of Seventh International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, pp. 212-216, July 2013.
[28] Y. Okamoto, C. Premachandra, and K. Kato, "A Study on Computational Time Reduction of Road Obstacle Detection by Parallel Image Processor," Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 18, Issue 5, pp. 849-855, Aug. 2014.