Proceedings of the 12th International IEEE Conference on Intelligent Transportation Systems, St. Louis, MO, USA, October 3-7, 2009
TuDT1.4
Tracking and Detection of Lane and Vehicle Integrating Lane and Vehicle Information Using PDAF Tracking Model

Ssu-Ying Hung, Yi-Ming Chan, Bin-Feng Lin, Li-Chen Fu, Fellow, IEEE, Pei-Yung Hsiao, and Shin-Shinh Huang, Member, IEEE
Abstract— We propose a robust system for multi-vehicle and multi-lane detection that integrates lane and vehicle information. Most existing work detects lanes or vehicles separately; however, lane information and vehicle information are interdependent and can support each other to achieve more reliable results. We use a probabilistic data association filter (PDAF) to integrate lane and vehicle information. In the PDAF, the cumulative history of each target is kept in the data association probability. Target tracking improves the detection results through regions of interest, and at the same time a high-level traffic model combines the lane and vehicle information. Detection and tracking thus benefit each other through iterations. Experimental results show that our approach detects multiple vehicles and multiple lanes reliably.
I. INTRODUCTION
DEVELOPING a driver assistance system is always a necessary item in intelligent transportation systems. In a driver assistance system, sensing lanes and sensing vehicles are two essential elements. Most studies detect lane boundaries or vehicles independently; their outcomes may therefore be unsatisfactory, because lanes and vehicles act as noise for each other. We observe that lanes and vehicles are closely related, and modeling this relation can overcome many challenges in the lane and vehicle detection process. Many works in recent years focus on lane detection [1-4]. They use different strategies and features, but they suffer from the same issue: the features they rely on may also appear on vehicles, leading to false results. Three main features are used to extract lane-marking pixels from the image. The first is the edge feature [5, 6], which assumes that lane boundaries are almost straight lines with clear edges; such methods may fail when there are vehicles on the road. The second is the color feature [4, 6-8], which assumes that lane boundaries have distinct colors. Color, however, is easily influenced by illumination or weather. The colors of lane markings are mostly white, yellow, or red, but these colors also
Ssu-Ying Hung, Yi-Ming Chan, and Bin-Feng Lin are with the Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan (e-mail: [email protected]).
Li-Chen Fu is with the Department of Electrical Engineering and the Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan (e-mail: [email protected]).
Shin-Shinh Huang is with the Department of Computer and Communication Engineering, National Kaohsiung First University of Science and Technology, Kaohsiung, Taiwan (e-mail: [email protected]).
Pei-Yung Hsiao is with the Department of Electrical Engineering, National University of Kaohsiung, Kaohsiung, Taiwan (e-mail: [email protected]).
appear on vehicles, so detection may fail when the same colors occur there. The third is the intensity-based feature [7, 9], which assumes that lane markings are brighter than the road; it suffers when parts of a vehicle are brighter than the road surface. To address this vehicle-interference issue, two studies take vehicle information into consideration. In [10], the edge feature and the vertical delimiter of the front vehicle are used to avoid erroneous fitting. In [11], size and shape information is used to filter out the effects of vehicles. Both approaches handle vehicles in advance by filtering out possible vehicle regions, making the lane detection results more accurate. Nevertheless, they treat vehicle information merely as noise and do not reuse it to improve the lane detection results. Among vehicle detection methods, two common feature-extraction approaches have been proposed: knowledge-based [5, 12, 13] and region-based [14]. Few of them use lane information. Knowledge-based approaches use domain knowledge to extract characteristics of the vehicle, such as vertical and horizontal edges [14], the underneath of the vehicle [15], or the symmetry of the vehicle [16]. Some works combine knowledge-based features with motion information to filter out noise on the road [17]. The false positives of these approaches are usually generated near rails or groves beside the road. Region-based approaches model the appearance of the vehicle with one kind of feature, e.g., principal component analysis [14], Gabor wavelets [16], or Haar-like features [18], and use machine learning or template matching to classify a region as vehicle or non-vehicle [18-23]. Both knowledge-based and appearance-based approaches suffer from the same problem: the image always contains non-vehicle regions whose features resemble a vehicle. Techniques such as stereo vision [2, 23, 24] are powerful and robust for vehicle detection, but they are time consuming. The aim of this work is to exploit the relation between lanes and vehicles to improve the reliability of the system. In contrast to previous approaches, our system not only overcomes the bad influence of vehicles on lane detection but also uses vehicle information to improve it; for vehicle detection, we combine lane information to reduce false positives. To fuse this relation into the detection system, we adopt the probabilistic data
Fig. 1. The block diagram of our system.
association filter (PDAF) [25]. Through the PDAF we can estimate the likelihood of every measurement via a data association probability, and we design the model of this probability using the relation between lanes and vehicles. Figure 1 shows the block diagram of the proposed system. The system has two parts, a detection stage and a tracking stage. In the detection stage, regions of interest (ROIs) combine lane and vehicle information in the image. In the tracking stage, a traffic model combines lane and vehicle information in the real world. After capturing images from the camera, we use the tracked lanes to generate the ROI for vehicle detection and the tracked vehicles to generate the ROI for lane detection. After obtaining the candidates, we track the targets with the PDAF; these candidates are used to update the tracked targets. This paper is organized as follows. Section II introduces the integration approach. Section III describes detection with integration. Section IV describes tracking with integration using the PDAF. Section V presents the experimental results, and Section VI concludes.

II. INTEGRATION APPROACH

Our system has three components, namely the traffic model, the detection process, and the tracking process, which integrate tracked lanes and tracked vehicles in different ways. We model our problem as a maximum a posteriori probability problem:

$$\arg\max_{X_t} P(X_t \mid X_{t-1}, I_{z_t}, Z_t) \qquad (1)$$
where $X_t$ is the set of targets at time $t$, $X_{t-1}$ is the set of targets at the previous time, $I_{z_t}$ denotes the observational features, and $Z_t$ is the set of candidate sets, one per target. The detection results are recorded in the trajectory history $Z_t$. For each target, we calculate its probability as:
$$P(x_t \mid x_{t-1}, I_{z_t}, Z_t) = E[\,x_t \mid I_{z_t}, Z_t\,], \quad x_t \in X_t \qquad (2)$$
We use a Gaussian distribution to compute the probability of each candidate, because the motion of a target is usually smooth, and we define the target state as the mean of that distribution. The state of a target is updated from the trajectory history and the observed features of the current image.
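As a rough illustration of this smooth-motion assumption (our sketch, not the paper's implementation; `gaussian_weight` and `update_state` are hypothetical names), each candidate can be scored with an isotropic Gaussian around the predicted state, and the expectation in (2) approximated by the weighted mean:

```python
import numpy as np

def gaussian_weight(candidate, predicted, sigma=1.0):
    """Gaussian likelihood of a candidate given the predicted target state."""
    d2 = np.sum((candidate - predicted) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def update_state(predicted, candidates, sigma=1.0):
    """Approximate E[x_t | I_zt, Z_t] as the likelihood-weighted mean of the candidates."""
    w = np.array([gaussian_weight(c, predicted, sigma) for c in candidates])
    if w.sum() == 0:
        return predicted                      # no plausible candidate: keep the prediction
    w /= w.sum()
    return np.sum(w[:, None] * np.asarray(candidates, dtype=float), axis=0)

# Example: predicted lane end point vs. three noisy detections
state = update_state(np.array([2.0, 50.0]),
                     [np.array([2.1, 49.5]), np.array([1.9, 50.4]), np.array([6.0, 48.0])])
```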
A. Traffic Model

The traffic model is the part of the tracking process that encodes the physical constraints on the lanes and the vehicles and maintains the relations between the tracked lanes and the tracked vehicles. According to the physical rules of real traffic, lanes are smooth, have constant width, and are parallel to neighboring lanes; the trajectory of a vehicle typically follows the lanes on the road; and in the bird-view image there is no overlap between two vehicles. The traffic model probability evaluates the relation between a target $x_t$ and the other targets under these constraints:

$$x_t \in X_t, \quad X_t = \{x_{t,L_1}, \ldots, x_{t,L_N}, x_{t,V_1}, \ldots, x_{t,V_M}\}$$
where $x_t$ denotes a target state belonging to $X_t$, $x_{t,L_i}$ is the state of tracked lane $i$, and $x_{t,V_j}$ is the state of tracked vehicle $j$. We define the probability of the traffic model as follows:

$$P(x_t \mid x_{t,L_1}, \ldots, x_{t,L_N}, x_{t,V_1}, \ldots, x_{t,V_M}) \propto \prod_i p(x_t \mid x_{t,L_i}) \prod_j p(x_t \mid x_{t,V_j}),$$
$$p(x_t \mid x_{t,L_i}) = f(x_t, x_{t,L_i}), \qquad p(x_t \mid x_{t,V_j}) = g(x_t, x_{t,V_j}) \qquad (3)$$

where $P(x_t \mid x_{t,L_1}, \ldots, x_{t,V_M})$ is the traffic model probability; the relation is defined by the function $f$ for lane targets and by the function $g$ for vehicle targets. The more positive relations target $x_t$ holds, the higher its traffic model probability; hence the probability is proportional to the product of the probabilities of the individual relations between $x_t$ and the other targets.
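To make (3) concrete, the sketch below composes the traffic-model probability as a product of pairwise relation terms. The paper does not give closed forms for f and g, so the Gaussian direction term and the bird-view overlap penalty here are our illustrative assumptions:

```python
import math

def f_lane(target_angle, lane_angle, sigma=0.1):
    # Relation to a tracked lane: reward direction agreement (assumed Gaussian form).
    return math.exp(-(target_angle - lane_angle) ** 2 / (2 * sigma ** 2))

def g_vehicle(target_pos, vehicle_pos, min_gap=2.0):
    # Relation to a tracked vehicle: penalize overlap in the bird-view plane (assumed form).
    dist = math.hypot(target_pos[0] - vehicle_pos[0], target_pos[1] - vehicle_pos[1])
    return 1.0 if dist > min_gap else dist / min_gap

def traffic_model_prob(target, lanes, vehicles):
    """P(x_t | lanes, vehicles) proportional to prod f(x_t, lane_i) * prod g(x_t, vehicle_j)."""
    p = 1.0
    for lane in lanes:
        p *= f_lane(target["angle"], lane["angle"])
    for veh in vehicles:
        p *= g_vehicle(target["pos"], veh["pos"])
    return p

# Example: a vehicle candidate aligned with the lane but close to a tracked vehicle
p = traffic_model_prob({"angle": 0.03, "pos": (1.5, 15.0)},
                       lanes=[{"angle": 0.02}], vehicles=[{"pos": (0.0, 15.0)}])
```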
B. Detection Process

For lane detection, we use the lane detector in [26]. This method models the left and right lane boundaries with peak pixels: pixels on the lane boundaries whose positions appear as peaks in the row intensity histogram. The lane boundaries modeled by these peaks serve as the lane target measurements. For vehicle detection, we use the detector in [27]: different features are extracted individually and fused by mean-shift. We use the vertical edges and the underneath as the main features to update the position of a tracked vehicle.

C. Tracking Process

The tracking process finds the optimal solutions for lanes and vehicles, and then updates the tracked lanes and tracked vehicles in the traffic model in every frame. In the PDAF, data association probabilities estimate the weights of all measurements in the validation gate; we use these probabilities to integrate lane information into vehicle detection. Huang [28] proposed the visual probabilistic data association filter (VPDAF) to support target tracking using images.
We redefine the data association probability for each validated measurement as:

$$\beta_{i,t} = P(I_{Z_t} \mid \theta_{i,t}, Z^t)\, P(\theta_{i,t} \mid Z^t) \qquad (4)$$
where $t$ denotes the time index, $Z_t = \{z_{i,t},\ i = 1, \ldots, m_t\}$ are the measurements at time $t$ returned by the detection process, and $Z^t = \{Z_j,\ j = 1, \ldots, t\}$ denotes the cumulative measurement history. The event $\theta_{i,t}$ means that $z_{i,t}$ originates from the target, and $I_{Z_t}$ is the observational density of the validated measurement set $Z_t$. The factor $P(\theta_{i,t} \mid Z^t)$ corresponds to the original PDAF estimate, while the observational density conditioned on the event $\theta_{i,t}$ measures how strongly the $i$-th measurement $z_{i,t}$ supports that event. The PDAF assumes that exactly one measurement in the validation gate is correct and the others are false; hence the conditional observational density can be defined as follows:
$$P(I_{Z_t} \mid \theta_{i,t}, Z^t) = \prod_{j=1}^{m_t} P(I_{Z_{j,t}} \mid z_{j,t}, \theta_{j,t}, Z^{t-1}) \qquad (5)$$
Figure 2. The flowchart of the lane detection: image, set region of interest, detect lane marking, generate lane boundary model, convert into lane model, lane candidates.
with the per-measurement density given by

$$P(I_{Z_{i,t}} \mid z_{i,t}, \theta_{i,t}, Z^{t-1}) = \frac{I_i(z_i)}{I_0(z_i)} \prod_{j=1}^{m_t} I_0(z_j), \quad i = 1, \ldots, m_t$$
Here $I_i(z_i)$ is the observational density of the validated measurement $z_{i,t}$, and $I_0(z_j)$ is the probability distribution of measurement $j$ when it is assumed to be false. In our system, $I_0(z_j)$ is defined as a uniform distribution, while $I_i(z_i)$ is designed from different image features and integration relations; it is introduced separately for lanes and for vehicles in the following sections.
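Combining (4) and (5) with the density ratio above, the association weights can be computed as in the following sketch. The uniform clutter density $I_0 = 1/V$ over a validation gate of volume $V$ and the final normalization are our simplifying assumptions, and `association_probs` is an illustrative name:

```python
import numpy as np

def association_probs(I, prior, gate_volume):
    """Data association weights: beta_i proportional to (I_i(z_i) / I_0(z_i)) * P(theta_i | Z^t).

    I           : observational densities I_i(z_i) of the m_t validated measurements
    prior       : P(theta_i | Z^t) from the standard PDAF recursion
    gate_volume : I_0 is taken uniform, i.e. I_0(z) = 1 / gate_volume
    The common factor prod_j I_0(z_j) cancels in the normalization.
    """
    I = np.asarray(I, dtype=float)
    prior = np.asarray(prior, dtype=float)
    beta = I * gate_volume * prior            # likelihood ratio times the PDAF prior
    s = beta.sum()
    # Normalize so the weights sum to one (our addition)
    return beta / s if s > 0 else np.full_like(beta, 1.0 / len(beta))
```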
Figure 3. The flowchart of the vehicle detection: image, set region of interest, detect underneath and detect vertical edge, combine the two features, vehicle candidates.
III. DETECTION WITH INTEGRATION

A. Detecting Lanes

We select the peak-finding method of [3] to extract the lane-marking pixels, called peak pixels. These pixels are the essential data for the lane model, but they often appear on vehicles and then become noise in the lane detection process. Here the tracked vehicles in the traffic model are used to reduce this kind of noise: we delete the peak pixels that lie in the regions of the tracked vehicles (a code sketch appears at the end of this section). Figure 2 shows the steps of lane detection with the traffic model.

B. Detecting Vehicles

In the vehicle detection process, we modify the method of [27]; Figure 3 shows each step of vehicle detection with the traffic model. The main idea of the particle filter is to predict the candidates using the detection results at previous time instants and to update the candidates using observations. In the data-driven technique, four features, namely vertical edge, underneath, symmetry, and taillight, are used to generate samples in the particle filter, and we employ the traffic model to limit the positions of the samples. The traffic model records the lanes and vehicles being tracked, as well as their relations, at the current time. Therefore, when a vehicle is already tracked in a lane, we do not generate further samples within that lane. This reduces the detection region and increases the performance of the system.
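As a minimal sketch of the vehicle-aware peak filtering of Section III-A (the box representation and function names are our assumptions):

```python
def filter_peak_pixels(peaks, vehicle_boxes):
    """Drop lane-marking peak pixels that fall inside any tracked vehicle region.

    peaks         : iterable of (u, v) image coordinates of peak pixels
    vehicle_boxes : iterable of (u_min, v_min, u_max, v_max) tracked vehicle boxes
    """
    kept = []
    for (u, v) in peaks:
        inside = any(u0 <= u <= u1 and v0 <= v <= v1
                     for (u0, v0, u1, v1) in vehicle_boxes)
        if not inside:
            kept.append((u, v))
    return kept

# Peaks lying on a tracked vehicle are removed before lane-model fitting
peaks = filter_peak_pixels([(100, 120), (160, 130)], [(150, 110, 200, 160)])
```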
Fig. 4. The circle coordinate.

IV. TRACKING WITH INTEGRATION USING PDAF

A. Tracking Lanes

1) Lane State: A mathematical model is generally used to represent the road. Here we choose the clothoid model [29], because it is efficient for short lane segments of less than 80 meters. In the clothoid model the road curvature is $c(x) = c_0 + c_1 x$. In our system the tracked vehicles are within 55 meters, so we use the simplified model with $c_1 = 0$ (constant curve radius in every frame). Since the left and right boundaries are parallel, we can find a curve parallel to them in the middle of the lane and use it to track the lane. This curve is an arc of a circle with radius $r = 1/c_0$ in each frame; the curvature may change in the next frame. We record the beginning and the end of the curve as the state of the lane, and we define a circle coordinate whose origin is the beginning of the curve. Fig. 4 shows the circle coordinate; the bold arc represents the lane curve. We track the lane curve in the circle coordinate because the distribution of the end point is assumed to be Gaussian there. We transform the positions of points from the image coordinate to the circle coordinate.
Figure 5. The relation between the vehicle coordinate and the circle coordinate. Orange lines are the vehicle coordinate and green lines are the circle coordinate; the pink line has the same direction as the y-axis of the circle coordinate.
We first transform points from the image coordinate to the vehicle coordinate, and then into the circle coordinate. The transformation between the image coordinate and the vehicle coordinate uses the camera calibration parameters; the transformation from the vehicle coordinate to the circle coordinate uses the pan angle and the displacement between the center of the lane and the position of the camera. Figure 5 shows the circle coordinate and the vehicle coordinate. The pan angle $\theta$ in Figure 5 is the difference between the directions of the host vehicle and the lane, and the displacement is the x-axis distance from the camera position to the beginning of the lane curve. While tracking the lane, the pan angle and the displacement vary; these parameters maintain the relation between the circle coordinate and the vehicle coordinate and invert the lane state back to the lane boundaries in the image.
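The paper does not state the vehicle-to-circle transform explicitly; under the description above it amounts to translating by the displacement and rotating by the pan angle, roughly as follows (our reading of Fig. 5, not the paper's code):

```python
import math

def vehicle_to_circle(x_v, y_v, pan_angle, displacement):
    """Map a point from the vehicle coordinate to the circle coordinate:
    translate so the origin sits at the beginning of the lane curve,
    then rotate by the pan angle so the y-axis follows the lane direction."""
    x_s, y_s = x_v - displacement, y_v              # translate to the curve's start
    c, s = math.cos(pan_angle), math.sin(pan_angle)
    return c * x_s + s * y_s, -s * x_s + c * y_s    # rotate by -pan_angle

# Example: a point 20 m ahead, with a small pan angle and 1.8 m lateral offset
x_c, y_c = vehicle_to_circle(3.5, 20.0, pan_angle=0.05, displacement=1.8)
```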
The lane state model is:

$$l_t = [x, y, \theta, \Delta x, w]_t^T$$

where $(x, y)$ is the position of the end of the lane curve in the circle coordinate, $\theta$ is the pan angle, $\Delta x$ is the displacement of the beginning of the lane curve in the vehicle coordinate, and $w$ is the width of the lane.

2) Measurements of a Lane Target: At the beginning of lane tracking, we use the edge feature of the lane markings in [3] as metadata for the measurements and model them with a piecewise-linear lane model. A measurement consists of the points of the lane segment in the circle coordinate, the pan angle, the displacement, and the width of the lane. Our method of extracting measurements consists of three steps. The first is to extract lane-marking pixels: we find the lane-marking pixels near the current lane state in the image. The second is to solve the lane boundary equation: after grouping the lane-marking pixels into several line segments, we use the piecewise-linear lane model to construct the lane configuration, obtaining lane candidates represented by the equations of the lane model. The third is to generate the measurements: we transform the lane equation model into lane measurements. The intersection of the left and right boundary equations is the vanishing
point in the image, if the ground is flat. Using this intersection point, we can find the direction of the lane and calculate the pan angle of the lane candidate. We then translate the lane candidate into the vehicle coordinate and calculate the center point of the left and right lane boundaries where the y-coordinate is zero; the x-coordinate of this center point is the displacement from the origin of the vehicle coordinate.

3) Data Association Probabilities of Lane Measurements: In the PDAF, we estimate a probability for each validated measurement. Here we use the following three features to estimate the probabilities:

a) Lane pixel likelihood:
After fitting line segments to the lane model equation, some lane-marking pixels do not lie on the lane candidate. Lane-marking pixels, however, are the most important evidence for lane detection, so the more lane-marking pixels lie on the lane candidate, the higher the probability we assign.

b) Boundary direction likelihood:
On the highway, the left and right lane boundaries are parallel, but faulty lane-marking pixels can make a boundary deviate. Hence the more parallel the boundaries of a lane candidate are, the better the measurement derived from it.

c) Lane direction likelihood:
Because a moving vehicle usually follows the lane, the direction of the lane is almost the same as that of a vehicle moving in it. If there are moving vehicles on the road, we can compare the direction of the vehicle's movement with the direction of the lane and measure their similarity. According to the above characteristics, we formulate the observational density of a validated lane measurement as

$$I_i(z_i) = P(I_t \mid z^L_{t,i}, x^V_{t,j}) = \frac{np_i}{\sum_{j=1}^{m_t} np_j}\, \exp(-|\theta^L_{i,l} - \theta^L_{i,r}|)\, \exp(-|\theta^L_i - \theta^V_j|)$$
The first term is the ratio of matching peak pixels, where $np_i$ is the number of matching pixels for candidate $i$. The second term penalizes the difference between the directions of the left and right lane boundaries, $\theta^L_{i,l}$ and $\theta^L_{i,r}$. The third term penalizes the difference between the direction of the lane center and the direction of the vehicle moving within this lane.

4) Multiple Lane Extension: In a traffic environment, we also care about the behavior of vehicles in the lanes to the left and right. We can use the road width and the host lane boundaries to extend to the left and right lanes: if a boundary is broken, there is a lane on the other side; if it is solid, there is not. We judge whether a boundary is broken by the two criteria of [26]: first, the peak line segments are not long enough; second, the ratio of the number of peaks to the length of the line in pixels is less than a threshold.
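For concreteness, the lane observational density defined above can be transcribed directly; the code below is a sketch, with the unit weighting of the exponential terms taken from the formula as printed:

```python
import math

def lane_density(np_i, np_all, theta_l, theta_r, theta_lane, theta_vehicle):
    """I_i(z_i) for a lane candidate: peak-pixel support, boundary parallelism,
    and agreement with the direction of a vehicle moving in the lane."""
    support  = np_i / sum(np_all)                          # matching peak-pixel ratio
    parallel = math.exp(-abs(theta_l - theta_r))           # left/right boundary agreement
    follow   = math.exp(-abs(theta_lane - theta_vehicle))  # lane vs. vehicle direction
    return support * parallel * follow

# Example: a well-supported candidate with nearly parallel boundaries
d = lane_density(120, [120, 40, 15], 0.01, 0.02, 0.015, 0.02)
```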
B. Tracking Vehicles

1) Vehicle State: We define the vehicle state as follows:
$$x_t = [x, y, w, \dot{x}, \dot{y}]_t^T$$

where $(x, y)$ is the position of the vehicle in the vehicle coordinate, $w$ is the width of the vehicle, and $(\dot{x}, \dot{y})$ is the velocity of the vehicle. We use this definition to track a vehicle.
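With this state, one constant-velocity prediction step for the PDAF validation gate looks roughly as follows (a sketch; carrying the width w over unchanged is our assumption):

```python
import numpy as np

def predict_vehicle(state, dt):
    """One constant-velocity prediction step for the vehicle state
    [x, y, w, vx, vy]; the width is carried over unchanged."""
    F = np.array([[1, 0, 0, dt, 0],
                  [0, 1, 0, 0, dt],
                  [0, 0, 1, 0,  0],
                  [0, 0, 0, 1,  0],
                  [0, 0, 0, 0,  1]], dtype=float)
    return F @ state

# Example at the system's 120 ms frame period
state = predict_vehicle(np.array([1.2, 30.0, 1.8, 0.1, -2.0]), dt=0.12)
```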
2) Measurements of a Vehicle Target: The measurements are the positions of vehicle candidates, and they come from two features: the underneath and the vertical edges. The underneath is the shadow under a vehicle; it forms a dark region in the image whose intensity is much lower than that of the road surface, and it provides the distance between the host vehicle and the front vehicle. The other important feature is the vertical edge: there are many strong vertical edges along the left and right sides of a vehicle in the image, and they supply the horizontal position of the vehicle, whose center lies between the two line segments formed by the vertical edge pixels. These vertical segments usually connect with the underneath, so we combine the two features to generate measurements. The width of the lane is then used to set the threshold of the validation gate and filter out unreasonable measurements.
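A possible transcription of this feature combination, with deliberately simplified inputs (the row/column representations and the 0.3-1.0 lane-width gate are our illustrative choices, not the paper's exact data structures):

```python
def make_vehicle_measurements(underneath_rows, edge_columns, lane_width_px):
    """Pair left/right vertical-edge columns that touch an underneath row and
    whose spacing is plausible for a vehicle, gated by the lane width.

    underneath_rows : list of (v, u_min, u_max) dark rows under vehicles
    edge_columns    : sorted list of u positions of strong vertical edges
    """
    measurements = []
    for (v, u_min, u_max) in underneath_rows:
        # Edges that touch this underneath region (small tolerance in pixels)
        touching = [u for u in edge_columns if u_min - 5 <= u <= u_max + 5]
        for i, left in enumerate(touching):
            for right in touching[i + 1:]:
                width = right - left
                if 0.3 * lane_width_px <= width <= 1.0 * lane_width_px:
                    # Measurement: horizontal center, row of the underneath, width
                    measurements.append(((left + right) / 2.0, v, width))
    return measurements

ms = make_vehicle_measurements([(130, 148, 204)], [150, 200, 310], lane_width_px=120)
```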
3) Data Association Probabilities of Vehicle Measurements: We add an association to each vehicle measurement. Because a moving vehicle usually follows the lanes, the direction of the vehicle resembles the direction of the lane; we therefore calculate the direction angle of every vehicle measurement to estimate its probability:

$$I_i(z_i) = P(I_t \mid z^V_{t,i}, x^L_{t,j}) = \exp(-|\theta^L_j - \theta^V_i|)$$

where $x^L_{t,j}$ is the lane predicted from the previous frame, and $\theta^L_j, \theta^V_i$ are the direction angles of the lane and of the vehicle measurement.
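A sketch of this direction-based weighting; the hard gate that rejects measurements far off the lane direction is our illustrative addition:

```python
import math

def vehicle_direction_weight(theta_lane, theta_meas, gate=math.radians(20)):
    """I_i(z_i) = exp(-|theta_lane - theta_meas|) for a vehicle measurement;
    measurements far off the lane direction are rejected outright (our addition)."""
    diff = abs(theta_lane - theta_meas)
    return 0.0 if diff > gate else math.exp(-diff)

# A measurement nearly aligned with the lane keeps a weight close to 1
w = vehicle_direction_weight(theta_lane=0.02, theta_meas=0.05)
```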
V. EXPERIMENTAL RESULTS

Table I lists the specification of the platform used in the experiments. We test 3113 images to analyze the performance of our system. The processing time is 120 ms per image, and the tracking range is constrained to within 50 meters. The average detection rate is 90% and the accuracy is 83%. There are six scenarios: highway, a light slope, tunnel, a rainy day, setting sun, and a cloudy day; Fig. 6 displays some of them. In Fig. 6(a), false vehicles often appear in the grove. The challenge of Fig. 6(d) is the rail, because vertical edges are abundant on it. Fig. 6(f) has strips on the road, which are strong noise for lane detection. Fig. 6(b) and Fig. 6(e) are scenes in a tunnel; Fig. 6(b) is darker than Fig. 6(e), so good features are hard to extract there. Fig. 6(c) is a rainy day, on which the underneath is harder to find. There are 700 test images with a single vehicle; the detection rate is 98% and the accuracy is 88%, so single-vehicle detection is robust. Our tracking model enhances the stability by restricting feature extraction to a validated region. There are 2413 test images with multiple vehicles; the detection rate is 88.97% and the accuracy is 82.2%. Fig. 7 compares the results with and without integration: Fig. 7(a) shows an erroneous lane obtained without using the vehicle direction, while in Fig. 7(b) the lane is correctly detected; Fig. 7(c) shows a false vehicle on the railings, which disappears in Fig. 7(d) once the lane information is integrated. Table III compares the performance of our system with Huang's system [3], whose vehicle detection is knowledge-based and handles only a single vehicle; our system is more reliable.

TABLE I
PLATFORM OF THE EXPERIMENT
Processor: Intel Pentium 4, 3.0 GHz
Memory: 0.99 GB
Operating system: Microsoft Windows XP
Image resolution: 320 x 240

TABLE II
SYSTEM PERFORMANCE OF VEHICLE DETECTION
                 Single Vehicle   Multiple Vehicles
Detection Rate   98%              88.97%
Accuracy         88%              82.20%
Detection Rate = Hit / (Hit + Miss); Accuracy = Hit / Positive
Figure 6. Testing scenarios: (a) highway, (b) tunnel, (c) rainy day, (d) highway, (e) tunnel, (f) highway.
Figure 7. Comparison with and without integration: (a) erroneous lane, (b) correct lane, (c) false vehicle, (d) correct vehicle.

TABLE III
COMPARISON WITH HUANG'S SYSTEM
                 Our System   Huang [3]
Detection Rate   98%          97.4%
VI. CONCLUSION

In this paper, we propose a system for detecting and tracking multiple lanes and vehicles using the probabilistic data association filter. We use the association probability to encode the relation between lanes and vehicles. For lane detection, we use the clothoid model to describe the lane state, and by exploiting the similarity with the direction of moving vehicles we obtain more reliable peak pixels for modeling the lane boundaries. For vehicle detection and tracking, we use the lane boundaries to limit the detection region and the lane direction to filter out false measurements. The experiments show an average detection rate of 90% and confirm that our system is more reliable.

REFERENCES
[1] Y. Wang, E. K. Teoh, and D. Shen, "Lane detection and tracking using B-Snake," Image and Vision Computing, vol. 22, pp. 269-280, 2004.
[2] M. Bertozzi and A. Broggi, "GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Transactions on Image Processing, vol. 7, pp. 62-81, 1998.
[3] S.-S. Huang, C.-J. Chen, P.-Y. Hsiao, and L.-C. Fu, "On-board vision system for lane recognition and front-vehicle detection to enhance driver's awareness," in IEEE Conference on Robotics and Automation, pp. 2456-2461, 2004.
[4] Z. Kim, "Robust lane detection and tracking in challenging scenarios," IEEE Transactions on Intelligent Transportation Systems, vol. 9, pp. 16-26, 2008.
[5] K.-Y. Chiu and S.-F. Lin, "Lane detection using color-based segmentation," in IEEE Intelligent Vehicles Symposium, pp. 706-711, 2005.
[6] Y. He, H. Wang, and B. Zhang, "Color-based road detection in urban traffic scenes," IEEE Transactions on Intelligent Transportation Systems, vol. 5, pp. 309-318, 2004.
[7] R. Labayrade, J. Douret, J. Laneurit, and R. Chapuis, "A reliable and robust lane detection system based on the parallel use of three algorithms for driving safety assistance," IEICE Transactions on Information and Systems, vol. E89-D, pp. 2092-2100, 2006.
[8] C. Hoffman, T. Dang, and C. Stiller, "Vehicle detection fusing 2D visual features," in IEEE Intelligent Vehicles Symposium, pp. 280-285, 2004.
[9] K. Kluge and S. Lakshmanan, "A deformable-template approach to lane detection," in IEEE Intelligent Vehicles Symposium, pp. 54-59, 1995.
[10] W. Enkelmann, "Video-based driver assistance--from basic functions to applications," International Journal of Computer Vision, vol. 45, pp. 201-221, 2001.
[11] H.-Y. Cheng, B.-S. Jeng, P.-T. Tseng, and K.-C. Fan, "Lane detection with moving vehicles in the traffic scenes," IEEE Transactions on Intelligent Transportation Systems, vol. 7, pp. 571-582, 2006.
[12] M. Betke, E. Haritaoglu, and L. S. Davis, "Real-time multiple vehicle detection and tracking from a moving vehicle," Machine Vision and Applications, pp. 69-83, 2000.
[13] S. M. Smith and J. M. Brady, "ASSET-2: real-time motion segmentation and shape tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, pp. 814-820, 1995.
[14] Z. Sun, G. Bebis, and R. Miller, "Monocular precrash vehicle detection: features and classifiers," IEEE Transactions on Image Processing, vol. 15, pp. 2019-2034, 2006.
[15] W. Liu, X. Wen, B. Duan, H. Yuan, and N. Wang, "Rear vehicle detection and tracking for lane change assist," in IEEE Intelligent Vehicles Symposium, pp. 252-257, 2007.
[16] H. Cheng, N. Zheng, and C. Sun, "Boosted Gabor features applied to vehicle detection," in International Conference on Pattern Recognition, pp. 662-666, 2006.
[17] R. Okada, Y. Taniguchi, K. Furukawa, and K. Onoguchi, "Obstacle detection using projective invariant and vanishing lines," in International Conference on Computer Vision, vol. 1, pp. 330-337, 2003.
[18] H. Bai, J. Wu, and C. Liu, "Motion and Haar-like features based vehicle detection," in International Multi-Media Modelling Conference, 2006.
[19] M. P. Dubuisson Jolly, S. Lakshmanan, and A. K. Jain, "Vehicle segmentation and classification using deformable templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, pp. 293-308, 1996.
[20] H. Schneiderman and T. Kanade, "A statistical method for 3D object detection applied to faces and cars," in IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 746-751, 2000.
[21] T. Kato, Y. Ninomiya, and I. Masaki, "Preceding vehicle recognition based on learning from sample images," IEEE Transactions on Intelligent Transportation Systems, vol. 3, pp. 252-260, 2002.
[22] A. Khammari, F. Nashashibi, Y. Abramson, and C. Laurgeau, "Vehicle detection combining gradient analysis and AdaBoost classification," in IEEE Conference on Intelligent Transportation Systems, pp. 66-71, 2005.
[23] A. Bensrhair, M. Bertozzi, A. Broggi, A. Fascioli, S. Mousset, and G. Toulminet, "Stereo vision-based feature extraction for vehicle detection," in IEEE Intelligent Vehicles Symposium, vol. 2, pp. 465-470, 2002.
[24] M. Suwa, Y. Wu, M. Kobayashi, M. Kimachi, and S. Ogata, "A stereo-based vehicle detection method under windy conditions," in IEEE Intelligent Vehicles Symposium, pp. 246-248, 2000.
[25] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association. New York: Academic Press, 1988.
[26] C.-Y. Huang, S.-S. Huang, Y.-M. Chan, Y.-H. Chiu, L.-C. Fu, and P.-Y. Hsiao, "Driver assistance system using integrated information from lane geometry and vehicle direction," in IEEE Intelligent Transportation Systems Conference, pp. 986-991, 2007.
[27] Y.-M. Chan, S.-S. Huang, L.-C. Fu, and P.-Y. Hsiao, "Vehicle detection under various lighting conditions by incorporating particle filter," in IEEE Intelligent Transportation Systems Conference, pp. 534-539, 2007.
[28] C.-M. Huang, D. Liu, and L.-C. Fu, "Visual tracking in cluttered environments using the visual probabilistic data association filter," IEEE Transactions on Robotics, vol. 22, pp. 1292-1297, 2006.
[29] A. Eidehall and F. Gustafsson, "Obtaining reference road geometry parameters from recorded sensor data," in IEEE Intelligent Vehicles Symposium, pp. 256-260, 2006.