Long distance moving vehicle detection using rear lamp at night

Suvendu Chandan Nayak a,*, Nilmadhab Dash b, N. K. Kamila a, and Alok Ranjan Tripathy c

a Department of Computer Science & Engineering, C.V. Raman College of Engineering, Bhubaneswar, India
b Information Technology, C.V. Raman College of Engineering, Bhubaneswar, India
c Computer Science & Engineering, College of Engineering Bhubaneswar, Bhubaneswar, India

* Corresponding author. E-mail: [email protected]

Abstract

This paper proposes a vehicle detection system for identifying moving vehicles at night that are a long distance away, by locating their rear lamps on the road. Many advanced driver-assistance systems (ADAS), such as collision mitigation, automatic cruise control (ACC), and automatic headlamp dimming, have been proposed for driver assistance. We present a novel image processing system to detect and track distant vehicle rear-lamp pairs in forward-facing color video at night. Vehicle lights are the salient visual feature for detecting vehicles at night, and tail-lamp detection is based purely on red-light intensity, which changes slowly while the vehicle is moving. Unlike previous work, the color threshold is derived by adaptive thresholding and applied to real-world conditions in the RGB color space. The results demonstrate the system's high detection rates and robustness to different lighting conditions and road environments.

Keywords: Driver assistance, rear-lamp, detection, intensity, tracking, vehicle detection.

1. Introduction

Providing driver assistance at night is an emerging research area. Accordingly, many researchers have developed valuable techniques for recognizing vehicles and obstacles in images of the road environment, to support camera-assisted systems that help the driver understand possible dangers on the road, whether a vehicle is moving or not. The World Health Organization (WHO) estimates that 1.17 million deaths occur each year worldwide due to road traffic accidents, and 70 percent of these deaths occur in developing countries [1]. The human eye requires light to see, and night driving is a leading cause of car accidents: an estimated 90 percent of all driving decisions are based on what the driver sees. Although eyes are capable of seeing in limited light, darkness causes several problems for vision. In this context, the rear lamps of a vehicle can confuse the driver. It is difficult for a car driver on a national highway at night to judge the motion of a forward vehicle from its rear lamps alone; the driver cannot tell whether the vehicle ahead is moving or static. Therefore, car drivers must take extra precaution to avoid an accident during the night, especially when additional visual obstructions are present.

While driving at night, vehicles on the road ahead are primarily visible by their red, rear-facing tail and brake lamps. Although vehicles differ in appearance, with different styles and designs of rear-facing lamps, they must adhere to automotive regulations. Worldwide regulations [2] specify limits for the color and brightness of rear vehicle lamps. In previous rear-lamp detection systems, the color threshold was derived directly from automotive regulations and adapted to real-world conditions in the hue-saturation-value (HSV) color space [3]. Our color threshold is instead derived by adaptive thresholding from the characteristics of red-pixel intensity. The regulations state that rear lamps must be placed symmetrically and in pairs; however, there is no specification restricting the shape of rear lamps, and with light-emitting diode (LED) technology, lamp manufacturers are departing from conventional rear-lamp shapes. Detection of rear lamps is a core component


of each of the outlined potential target applications. The requirements of a moving vehicle detection system are primarily the following: robust detection and tracking of vehicles, achieved within a reasonable distance, with a low rate of false positive detections, particularly at night. The remainder of this paper is organized as follows. Section 2 reviews the literature on automotive vehicle detection, with particular emphasis on lighting-object segmentation approaches for adaptively extracting possible vehicle rear lights in dark conditions. The rear-lamp detection algorithm is described in Section 3, and the object detection system elements in Section 4. Experimental results are discussed in Section 5, and conclusions and possibilities for future work are given in Section 6.

2. State of Art

Developed countries are now on their way to developing intelligent and smart cars that will help humanity avoid accidents at night. Advanced driver-assistance systems (ADAS) are expected to grow, as consumers become increasingly safety conscious, and insurance companies and legislators begin to recognize the positive impact such systems could have on accident rates. To detect a vehicle by its lamp pair, concepts such as morphological processing, light edge detection, and tracking are used. This paper presents effective nighttime vehicle detection, tracking, and identification approaches for moving vehicles by locating and analyzing the spatial and temporal features of vehicle rear lights. To efficiently detect and classify moving vehicles at night, a fast bright-object segmentation process based on automatic multilevel histogram thresholding is first performed to extract pixels of bright objects from the captured image sequences of nighttime scenes. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various illumination conditions at night. Then, to locate the connected components of these bright objects, a connected-component analysis procedure is applied to the bright pixels obtained in the previous stage [4]. A spatial clustering process then groups these bright components to obtain groups of vehicle lights for potential moving cars and motorbikes. Recently, vehicle lights have been used as salient features for nighttime vehicle detection applications and driver assistance [5]-[6]. The techniques commonly employed for daylight vehicle detection have limited use under dark conditions and at night. Most recent vehicle detection methods use edge maps, frame differencing, and background subtraction techniques to extract the features of moving vehicles; these are effective for vehicle detection at daytime but become invalid under nighttime illumination conditions.

This is because the background scenes are greatly affected by the varying lighting of moving vehicles. The rear lamps of a moving vehicle appear as some of the brightest regions in a frame of nighttime automotive video; therefore, it is common for lamp-detection image processing techniques to begin with some form of thresholding. Grayscale or brightness thresholding is a common starting point [7]-[8]. A red color filter is applied, and the resulting pixels are grouped and labeled to analyze characteristics such as area, position, and shape. Filtering is then required, since there are many potential light sources that are not rear vehicle lamps, such as street lamps, headlamps of oncoming vehicles, and reflections from signs. Many different color spaces with widely varying parameters have been used to segment red-light regions from images. The most common approach uses the red-green-blue (RGB) color space [9]-[10]; separate RGB thresholds for brightness and redness are implemented in [11], and the RGB color space has also been used for detection of vehicle lamps under night conditions [12]. Although RGB is a natural space for color thresholding, the R, G, and B channels are highly correlated; to overcome this difficulty, red-color thresholding is used. To deal efficiently with slowly moving or stationary vehicles in nighttime traffic scenes, researchers have developed model- and feature-based techniques [13]-[14] to detect and track vehicles. The Kalman filter has been used for multivehicle tracking [15]. A vehicle lamp's trajectory has been used to distinguish it from static lights such as street lamps and reflective road signs [16], and Bayesian templates, in conjunction with a Kalman filter, are used in [17] for tracking vehicles during daylight conditions. In [18], a mean-shift estimator is used for tracking vehicles during the daytime. At night, however, detection is far more difficult because a distant vehicle is not visible to the naked eye, or even to the camera. Different methods have been proposed so far, but in all of them the vehicle body itself is visible. In our proposed system we detect the vehicle from its bright regions alone and determine whether it is static or in motion.
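To make the red-color thresholding idea above concrete, the following sketch marks pixels whose red channel clearly dominates green and blue and exceeds a brightness floor. The paper's experiments were run in MATLAB; Python/NumPy is used here purely for illustration, and the margin and floor values are assumptions, not parameters taken from this paper or the cited works.

```python
import numpy as np

def red_lamp_mask(rgb, margin=40, floor=120):
    """Boolean mask of candidate rear-lamp pixels in an RGB frame.

    rgb    : H x W x 3 uint8 image.
    margin : how far R must exceed G and B (assumed value).
    floor  : minimum red brightness to count as a lamp (assumed value).
    """
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    # Redness: R dominates both other channels, mitigating the strong
    # R/G/B correlation noted above.  Brightness: the pixel must be
    # bright enough to be a lamp rather than a dim reflection.
    return (r - g > margin) & (r - b > margin) & (r > floor)
```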

Fig. 1. A traffic scene at night.


Fig. 1 shows a nighttime road scene containing different objects under different nighttime conditions. The rear lamp is the salient feature of a moving vehicle at night, but other night objects such as street lamps, traffic lights, and ground-level road reflector plates make detecting vehicle rear lamps computationally complex; red-color thresholding is therefore used. Fig. 2 and Fig. 3 show more complex traffic scenes in which the objects themselves are not visible and only some bright regions remain. It is very difficult for the driver to identify what these regions are and, if they are the rear lamps of a vehicle, whether the vehicle is moving or not. For Fig. 2 and Fig. 3 we extract the bright regions by red-color thresholding, plot the intensity graphs at the two instants, and compare these graphs to decide whether the vehicle is moving.

Fig. 2. A real-time traffic scene at night at instant 1.

Fig. 3. A real-time traffic scene at night at instant 2.

3. Rear Lamp Detection

The rear lamp is the salient feature of a vehicle at night, and rear lamps consist of bright regions. Although there are other bright regions in the nighttime road environment, such as street lamps, traffic lights, and ground-level road reflector plates, the rear-lamp regions are brighter than these. So the first task is to extract these bright objects from the road scene image.

In this paper, we propose an efficient vehicle detection system for identifying moving vehicles by locating their rear lights. The system comprises two stages. The first stage applies thresholding to separate the bright objects from the grabbed image sequences of the nighttime road scene; in the second stage, we compare these bright regions between two frames and analyze the intensity graph. Figures 1 and 2 show two sample nighttime road scenes taken from the vision system. In the sample scene of Fig. 1, several vehicles appear on the road, but only one car is clearly visible; its motion can be detected easily, whereas it is very difficult to detect the motion of the remaining, distant vehicles from their rear lights alone. In Fig. 2 there are two vehicles whose rear lights are not discernible to the human eye, so detecting the motion of these vehicles is very challenging. Fig. 4 sketches the flow diagram of the proposed nighttime vehicle rear-lamp detection method. The first task is to extract the bright objects from the road scene image sequence for further analysis. Fig. 2 and Fig. 3 are taken as Frame-1 and Frame-2 at different time instants from the actual video sequence; both frames may contain various objects, as in Fig. 1. In the nighttime environment the road scene may contain other bright objects such as street lamps, traffic lights, and ground-level road reflector plates. We apply a background subtraction mechanism to save computation cost when extracting bright objects. We first extract the gray-intensity image by performing an RGB-to-gray transformation. To extract the bright objects from the transformed gray-intensity image, they must be separated from other objects of different illumination; for this, an adaptive thresholding mechanism is used. By evaluating separability under the thresholding, the number of objects into which the image should be segmented can be determined automatically, so the bright objects are appropriately separated from other illuminated objects. Not all bright objects are vehicle rear lamps, however. According to vehicle regulations, the rear lamps must be symmetrical, so symmetric rear lamps are detected using normalized cross-correlation [19], calculated along the line joining the centers of each light, as in [20]. Symmetry is commonly used to filter potential candidates [21] and to create lamp pairs for vehicle detection, because the rear of a vehicle is symmetrical under all lighting conditions. In [22], symmetry is calculated within a candidate bounding box and considered with several other features in a weighted fusion process. To remove the unwanted regions, the image is converted to a binary image for low computation cost. Only paired bright objects are retained, as in Fig. 5, and the pixel values of the remaining bright objects are set to 0 (although in some cases, as in Fig. 2 and Fig. 3, the unwanted regions are already removed during background subtraction by adaptive thresholding). From the paired bright regions, the target vehicle is identified. A median filter is applied to both


frames to smooth the images. At the end of the pipeline in Fig. 4, Frame-1 and Frame-2 contain only the rear lamps of the vehicle, as shown in Fig. 8 and Fig. 9.
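As a sketch of the symmetry check described above, the following Python/NumPy fragment computes the normalized cross-correlation [19] between one candidate lamp patch and the horizontal mirror of the other. The two patches are assumed to have been cropped to equal size beforehand, and the 0.6 acceptance threshold is an illustrative assumption, not a value given in the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized grayscale patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def is_lamp_pair(left_patch, right_patch, thresh=0.6):
    """Accept two bright regions as a rear-lamp pair when the left patch
    correlates strongly with the mirrored right patch: by regulation,
    rear lamps are placed symmetrically, so a true pair should be close
    to mirror images of each other."""
    return ncc(left_patch, right_patch[:, ::-1]) >= thresh
```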

Fig. 6. Simulation result of Fig. 2 with unwanted regions removed.

Fig. 7. Simulation result of Fig. 3 with unwanted regions removed.

Fig. 4. Block diagram for rear-lamp detection.

Fig. 8. Grayscale image of Fig. 6.

Fig. 5. Simulation result of Fig. 1 with unwanted regions removed and the rear-lamp regions paired.


Fig. 9. Grayscale image of Fig. 7.

Fig. 11. Intensity difference of Frame-1 and Frame-2 with respect to the timeline.

4. Object Detection

Different mechanisms have been proposed for detecting the motion of an object. In this paper we detect the object in two ways: by spatial analysis and by intensity-level analysis. For spatial analysis we add the two resultant frames. If the vehicle is moving, the rear lamps appear at different positions in the two frames:
if (Frame-1 + Frame-2) leads to duplication of the bright object pair, then the vehicle is moving;
if (Frame-1 + Frame-2) = Frame-1 or Frame-2, then the vehicle is static.
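A minimal Python/NumPy sketch of this spatial test, assuming the two frames have already been reduced to binary rear-lamp masks: the union of the masks grows relative to either single frame only if the lamp pair has moved. The 5 percent tolerance is an assumed allowance for noise, not a value from the paper.

```python
import numpy as np

def vehicle_is_moving(mask1, mask2, tol=0.05):
    """Spatial motion test on two boolean lamp masks.

    If the lamps moved between frames, combining the frames duplicates
    the bright object pair, so the union covers noticeably more pixels
    than either frame alone; if the vehicle is static, the union is
    essentially identical to each frame."""
    union = (mask1 | mask2).sum()
    base = max(mask1.sum(), mask2.sum())
    return (union - base) > tol * base
```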


For intensity-level analysis, the highest intensity of each of the two frames is obtained from its histogram. By plotting the intensity graphs and comparing them, we determine whether the vehicle is moving or static.
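A companion sketch for the intensity-level test: it reads the highest occupied bin of each frame's 256-bin grayscale histogram and flags motion when the peaks differ by more than an assumed tolerance delta.

```python
import numpy as np

def highest_intensity(gray):
    """Highest occupied bin of the 256-bin histogram of a uint8 frame."""
    hist = np.bincount(gray.ravel(), minlength=256)
    return int(np.flatnonzero(hist).max())

def intensity_changed(gray1, gray2, delta=5):
    """Intensity-level motion test: compare the highest intensities of
    the two frames; a shift larger than `delta` (an assumed tolerance)
    suggests the lamp brightness changed because the vehicle moved."""
    return abs(highest_intensity(gray1) - highest_intensity(gray2)) > delta
```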

Fig. 12. Histogram of Fig. 6.

Fig. 10. Addition of Frame-1 and Frame-2.

Figure 10 is obtained by adding the two frames; hence we obtain a duplicate of the bright object region. Simultaneously, the histograms of Frame-1 and Frame-2 are plotted after removing the unwanted regions and pairing the rear lamps. Fig. 12 and Fig. 13 are the histograms of Fig. 6 and Fig. 7, respectively, and Fig. 14 is the histogram of the resultant image after adding the two frames, which contains more pixels.

Fig. 13. Histogram of Fig. 7.


After adding the two resultant frames, the result is the same as Frame-1 or Frame-2: there is no duplication of the bright object pair in Fig. 17. Similarly, the intensity-difference graph plotted in Fig. 18 is parallel to the x-axis; there is no change in intensity because the vehicle is static.

Fig. 14. Histogram of Fig. 6 and Fig. 7 after addition.

Fig. 18. Intensity difference of Fig. 15 and Fig. 16.

Algorithm (rear-lamp detection)
Step-1: Extract two different frames from the video sequence.
Step-2: Call backsub() for background subtraction.
Step-3: Pair the symmetric bright objects.
Step-4: Suppress unwanted regions, keeping only the paired bright objects.
Step-5: Apply a median filter to smooth the frames.
Step-6: Combine the two frames to get the resultant image.
Step-7: Analyze the resulting image for changes in intensity or pixel location.
Step-8: If a change is found, the object is assumed to be in motion; otherwise the object is static.
Step-9: Stop.
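The steps above can be strung together as in the following sketch, which reuses the hypothetical helpers from the earlier fragments (red_lamp_mask, vehicle_is_moving) and stands in for Step-5 with a plain 3x3 median filter. This illustrates the control flow only; it is not the authors' MATLAB implementation.

```python
import numpy as np

def median3(img):
    """3x3 median filter in pure NumPy (Step-5), edge-padded."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    shifts = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0).astype(img.dtype)

def detect_motion(frame1, frame2):
    """End-to-end flow of the algorithm above for two RGB frames."""
    masks = []
    for frame in (frame1, frame2):
        mask = red_lamp_mask(frame)                 # Steps 2-4 (simplified)
        mask = median3(mask.astype(np.uint8)) > 0   # Step-5: smoothing
        masks.append(mask)
    # Steps 6-8: combine the frames and test for spatial change.
    return "moving" if vehicle_is_moving(*masks) else "static"
```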

Fig. 15. New frame like Frame-1.

Fig. 16. New frame like Frame-2.

For background subtraction, each frame calls backsub():
Step-1: Read the image.
Step-2: Find the maximum, minimum, and average intensity.
Step-3: Apply adaptive thresholding to find the threshold value.
Step-4: Return.
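The paper does not give the exact thresholding formula inside backsub(), so the sketch below uses one plausible adaptive rule built from the statistics named in Step-2; the bias toward the bright end of the range is an assumption.

```python
import numpy as np

def backsub(gray):
    """Adaptive bright-object threshold from frame statistics (a sketch).

    Returns the bright-object mask and the threshold used.  Placing the
    threshold between the average and the maximum intensity keeps only
    the brightest regions, which at night are dominated by lamps."""
    mx = float(gray.max())
    mn = float(gray.min())    # computed per Step-2; unused by this rule
    avg = float(gray.mean())
    t = avg + 0.5 * (mx - avg)   # assumed rule: halfway from mean to max
    return gray >= t, t
```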

5. Simulation and Result Analysis

The simulation was carried out using MATLAB R2010a on a Core 2 Duo system.

Fig. 17. Frame-1 + Frame-2.



Fig. 2 and Fig. 3 are two real-time road scenes at two different instants; these two figures are used as Frame-1 and Frame-2 for the experiment. The real challenge is that the regions of interest appear very small and with low intensity, because the object is far away from the camera. The unwanted bright regions are removed and the rear lamps of the vehicle are identified for result analysis, as depicted in Fig. 6 and Fig. 7. The images in Fig. 6 and Fig. 7 are then converted to grayscale, as shown in Fig. 8 and Fig. 9. Fig. 10 is obtained by adding Fig. 8 and Fig. 9, which leads to duplicate regions of Frame-1 or Frame-2. Fig. 11 shows the intensity-difference graph plotted against the timeline: the intensity for Frame-1 is high initially and decreases slowly over time for subsequent frames. Figures 12 and 13 are the histograms of Frame-1 and Frame-2, and Fig. 14 depicts the result of adding them. From Fig. 14 it is observed that the number of pixels is larger than in Figs. 12 and 13; the increase in pixels indicates a larger bright region due to the motion of the vehicle, so the vehicle is in motion. When the vehicle is very far from the camera, the intensity vanishes, meaning the camera is no longer able to capture the lamps. In the case of Fig. 15 and Fig. 16, however, the vehicle is not moving: the output in Fig. 17 shows no duplicate regions of the input frames, and in Fig. 18 the intensity difference of the two frames remains the same over the time interval. This indicates that the vehicle is static and no movement takes place.

6. Conclusion and Future Work

This paper has proposed an effective nighttime vehicle motion detection system for a forward-facing vehicle that appears at a far distance. We have implemented adaptive thresholding for bright-object detection and a median filter for unwanted-region removal. The technique is robust and adaptive when dealing with varying lighting conditions at night, which is very useful for detecting moving vehicles. In this work we have considered two cases, static and in motion. The work can be extended to detect moving objects using a moving camera under different nighttime illumination conditions, and the methodology can further be extended to identify the vehicle itself during night hours.

References

[1] A. O. Atubi, "Determinants of road traffic accident occurrences in Lagos State: Some lessons for Nigeria," International Journal of Humanities and Social Science, vol. 2, no. 6 (Special Issue), Mar. 2012.
[2] "Regulation No. 7: Uniform provisions concerning the approval of front and rear position (side) lamps, stop-lamps and end-outline marker lamps for motor vehicles (except motor cycles) and their trailers," UN World Forum for Harmonization of Vehicle Regulations, 1968.
[3] R. O'Malley, E. Jones, and M. Glavin, "Rear-lamp vehicle detection and tracking in low-exposure color video for night conditions," IEEE Trans. Intell. Transp. Syst., vol. 11, no. 2, Jun. 2010.
[4] Y.-L. Chen, B.-F. Wu, and C.-J. Fan, "Real-time vision-based multiple vehicle detection and tracking for nighttime traffic surveillance," in Proc. IEEE Int. Conf. SMC, San Antonio, TX, pp. 3452–3458, 2009.
[5] M. Y. Chern and P. C. Hou, "The lane recognition and vehicle detection at night for a camera-assisted car on highway," in Proc. IEEE Int. Conf. Robot. Autom., vol. 2, pp. 2110–2115, 2003.
[6] A. M. López, J. Hilgenstock, A. Busse, R. Baldrich, F. Lumbreras, and J. Serrat, "Nighttime vehicle detection for intelligent headlight control," ser. Lecture Notes in Computer Science, vol. 5259, Berlin, Germany: Springer-Verlag, pp. 113–124, 2008.
[7] S.-Y. Kim, S.-Y. Oh, J.-K. Kang, Y.-W. Ryu, K.-S. Kim, S.-C. Park, and K.-H. Park, "Front and rear vehicle detection and tracking in the day and night times using vision and sonar sensor fusion," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 2173–2178, Aug. 2005.
[8] P. Alcantarilla, L. Bergasa, P. Jimenez, M. Sotelo, I. Parra, D. Fernandez, and S. Mayoral, "Night time vehicle detection for driving assistance light beam controller," in Proc. IEEE Intell. Vehicles Symp., pp. 291–296, Jun. 2008.
[9] M. Betke, E. Haritaoglu, and L. S. Davis, "Real-time multiple vehicle detection and tracking from a moving vehicle," Mach. Vis. Appl., vol. 12, no. 2, pp. 69–83, Aug. 2000.
[10] C.-C. Wang, S.-S. Huang, and L.-C. Fu, "Driver assistance system for lane detection and vehicle recognition with night vision," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 3530–3535, Aug. 2005.
[11] R. Sukthankar, "RACCOON: A real-time autonomous car chaser operating optimally at night," in Proc. IEEE Intell. Vehicles Symp., pp. 37–42, Jul. 1993.
[12] L. Gao, C. Li, T. Fang, and Z. Xiong, "Vehicle detection based on color and edge information," ser. Lecture Notes in Computer Science, Berlin, Germany: Springer-Verlag, pp. 142–150, 2008.
[13] W. F. Gardner and D. T. Lawton, "Interactive model-based vehicle tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 11, pp. 1115–1121, Nov. 1996.
[14] L.-W. Tsai, J.-W. Hsieh, and K.-C. Fan, "Vehicle detection using normalized color and edge map," IEEE Trans. Image Process., vol. 16, no. 3, pp. 850–864, Mar. 2007.
[15] D. Koller, J. Weber, and J. Malik, "Robust multiple car tracking with occlusion reasoning," ser. Lecture Notes in Computer Science, Berlin, Germany: Springer-Verlag, pp. 189–196, 2006.
[16] C. Julià, A. Sappa, F. Lumbreras, J. Serrat, and A. López, "Motion segmentation through factorization: Application to night driving assistance," in Proc. Int. Conf. Comput. Vis. Theory Appl., pp. 270–277, 2006.
[17] F. Dellaert and C. Thorpe, "Robust car tracking using Kalman filtering and Bayesian templates," in Proc. SPIE Conf. Intell. Transp. Syst., vol. 3207, pp. 72–83, 1998.
[18] K. She, G. Bebis, H. Gu, and R. Miller, "Vehicle tracking using on-line fusion of color and shape features," in Proc. IEEE Int. Conf. Intell. Transp. Syst., pp. 731–736, Oct. 2004.
[19] J. P. Lewis, "Fast normalized cross-correlation," San Rafael, CA: Industrial Light and Magic, 1995.
[20] R. Cucchiara and M. Piccardi, "Vehicle detection under day and night illumination," in Proc. Int. ICSC Symp. Intell. Ind. Autom., pp. 789–794, 1999.
[21] W. Liu, X. Wen, B. Duan, H. Yuan, and N. Wang, "Rear vehicle detection and tracking for lane change assist," in Proc. IEEE Intell. Vehicles Symp., pp. 252–257, Jun. 2007.
[22] Y.-M. Chan, S.-S. Huang, L.-C. Fu, and P.-Y. Hsiao, "Vehicle detection under various lighting conditions by incorporating particle filter," in Proc. IEEE Intell. Transp. Syst. Conf., pp. 534–539, Sep. 2007.
