2012 International Conference on Computer Communication and Informatics (ICCCI -2012), Jan. 10 – 12, 2012, Coimbatore, INDIA

Vehicle position estimation for driving pattern recognition using a monocular vision system

Pavan Nagendra, Manjunath R Venkatesh

Delphi Automotive Systems, Technical Center India, Kalyani Platina, Whitefield, Bangalore - 560066, India

[email protected]

Abstract— The purpose of this paper is to propose a method to estimate the location of a vehicle within a road, along the horizontal axis, from the bird's eye perspective. The method uses a road heuristic - the edge length of the road - in order to localize the vehicle within the road. It aims to detect erratic vehicle steering using a single camera in places where lanes and road discipline are absent. It also proposes a system that could help standardize lane discipline in such driving conditions without the existence of an actual lane.

Keywords- Computer Vision; OpenCV; Road geometry; Road detection; Driver assistance

I. INTRODUCTION

Driver assistance systems today help reduce accidents using state-of-the-art technology. They employ smart electronics that assist the driver during unintended lane departures and unavoidable head-on collisions, typically by means of RADARs, LIDARs and vision systems [1]. These systems are mainly deployed on vehicles and assume a certain amount of road infrastructure and lane standards for successful operation. Systems that address erratic driving in undeveloped road conditions are absent, yet there is a need to identify driving patterns in order to reduce erratic driving and improve lane discipline. Employing all of the above sensors in a vehicle purely for driving pattern detection may not be feasible; a low cost system is a better fit for such an application. CMOS vision sensors are therefore well suited, being low cost and consuming less power than the other sensors. A modern solution to the position estimation problem would be stereoscopic vision, but it is computationally intensive and increases mounting space and alignment constraints. Single camera systems today mainly target lane departure warning and vehicle detection on roads with standardized lanes [2]. This paper presents incremental research on the above problem using a single camera. Section II describes the specific erratic driving scenarios commonly encountered on urban roads and highways. Section III describes the edge detection method used to detect the road boundaries. Section IV describes the position estimation algorithm that uses the detected road boundaries. Section V presents experimental results, Section VI discusses steering and curving scenarios, including integration with the yaw rate sensor output, and Section VII concludes with future work.

II. DRIVING PATTERN RECOGNITION

A driving pattern can be recognised from a combination of road scenarios and driver reactions to road conditions that determine the movement of the vehicle. In the case of a uniform lane, the driving pattern is constant, with predetermined conditions that would drive a knowledge based system. In realistic driving conditions, the driving pattern can be analysed based on erratic road conditions and lane keeping performance, as described below.

A. Erratic Road Conditions

As a general definition, a road is a pathway carrying traffic, pedestrians and other objects. On roads without well defined, lane based uniformity, the driving conditions result in vehicle congestion: vehicles drive from the left part of the road to the right part, vehicles of varying speed block the vehicles behind them, and, in the absence of lanes, vehicles drive at different angles and away from the centre line of the road. In such conditions there is a need for orderly manoeuvring of vehicles based on their position relative to the road edges and the central lane. This is one of the conditions considered in this paper for vehicle position.

B. Lane Keeping Performance

Lane keeping performance is an application that helps determine how stably a driver drives in highway and urban scenarios. The algorithm below needs to count the number of erratic lane changes performed by the driver without the use of an indicator.
This number could serve as a parameter of driver performance and could be stored in the memory of an ECU on the vehicle in order to identify the erratic driver behaviour that caused a recorded accident. These scenarios are encountered in countries where lane discipline is absent; thus, even without the actual presence of a lane, the manoeuvring actions could be logged to characterize the driving performance of a driver.

III. ROAD DETECTION

A road is a more general problem than a lane: lanes exhibit uniformity in their line markers, while road edges do not. In this paper we present a method to differentiate lane markers from road edges, which helps determine the exact position of the vehicle with respect to the road edges. Several image analysis methods can be considered for road edge detection, including the Canny edge detection algorithm, Hough transform based feature extraction and morphology based region extraction. The Hough transform [3] detects lanes and road edges in an image efficiently. OpenCV [4] is a versatile tool for experimenting with and implementing various image processing algorithms, with the freedom to modify the libraries as per our application. We have built an edge detection application based on the Hough transform libraries and experimented with various images for road detection. The application also provides a track bar for dynamically changing the edge detection threshold, as shown in Fig 1.

Fig. 1 OpenCV edge detection application with track bar

We have experimented with several images for road detection using this application. Fig 2 shows the original image of the road used for complete road detection.

Fig. 2 Original image

Using edge detection with the threshold track bar we arrived at two results. First, as shown in Fig 3, a threshold of half the maximum identifies the most significant parts of the road: the lane markings and edges approaching the road boundaries.

Fig. 3 Mid threshold value

In Fig 4 below, we use the maximum threshold for edge detection in our application.


Fig. 4 Maximum threshold

'R' - the length of the road edge towards the right within the FOV of the vehicle.
'X' - the distance of the central axis of the vehicle from the right road edge.

Fig 5 shows these results in a video frame, with the extracted road overlaid. A clear distinction can be made between the main lanes of the road and the road edges. Based on the extracted road detection images, we use eccentricity and connected pixel information to distinguish the road edge from the lane markings: the connected pixels around the road edge are non-uniform and spread out, whereas the lane markings have compact, well defined pixel clusters.

Fig. 5 Overlay of the road extraction algorithm on the video frame

IV. POSITION ESTIMATION ALGORITHM

The above results allow us to extract the road edges from a road. The camera perspective gives a view of the road ahead and allows us to estimate the position of the vehicle within the road using the position estimation algorithm. The algorithm is best understood by viewing the vehicle within the road from the bird's eye perspective. Let us study the position estimation task using three parameters: 'L' - the length of the road edge towards the left within the FOV of the vehicle - together with 'R' and 'X' as defined above.

Fig. 6 Scenario where the vehicle is towards the right

Fig 6 shows a scenario where the vehicle is more towards the right. As we can see, the length R is much greater than the length L, and the value of X tends towards its minimum.

Fig. 7 Scenario where the vehicle is towards the left


Fig 7 above shows a scenario where the vehicle is more towards the left. In this scenario, the length L is much greater than the length R, and the value of X tends towards its maximum, the width of the road. From this geometric relationship between the vehicle and the road edge lengths, we conclude that X is a function of R and L: X is directly proportional to L and inversely proportional to R. Mathematically, X is a function of L and R as denoted below.

F(X) ∝ L / R

V. RESULTS

In the analysis of vehicle position estimation using the above relation, we used images obtained from video captured with a camera at a resolution of 1280×720. Fig 8 shows a video frame where the vehicle is towards the left side of the road.

Fig. 8 Image from the vehicle camera, towards the left side of the road

In this image, the number of pixels composing the road edge towards the left side of the road is greater than the number composing the road edge towards the right side; thus L > R and the value of F(X) is greater than 1. Fig 9 shows a video frame where the vehicle is towards the right side of the road. Here the number of pixels composing the right road edge is greater than the number composing the left road edge, so L < R and the value of F(X) is less than 1.

Fig. 9 Image from the vehicle camera, towards the right side of the road

Experimental results correlate closely with this heuristic. When the vehicle was near the centre, the value of F(X) was approximately 1, and F(X) varies from a very small value to a very large value as the vehicle moves from right to left. These value plots can therefore be used to estimate the position of the vehicle within a road.
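The heuristic above reduces to a small helper, sketched here with pixel counts standing in for the measured edge lengths L and R; the 0.2 centre tolerance is an assumed value, not one reported in the paper.

```python
# F(X) = L / R: the ratio of left to right road-edge length (here, pixel counts).
def position_ratio(left_edge_pixels, right_edge_pixels):
    # As the vehicle drifts left, R shrinks and F(X) grows without bound;
    # as it drifts right, L shrinks and F(X) falls towards zero.
    if right_edge_pixels == 0:
        return float("inf")
    return left_edge_pixels / right_edge_pixels

def classify_position(fx, tolerance=0.2):
    # F(X) near 1 means the vehicle is close to the road centre.
    if abs(fx - 1.0) <= tolerance:
        return "near centre"
    # F(X) > 1 corresponds to L > R, i.e. the vehicle is towards the left edge.
    return "towards left edge" if fx > 1.0 else "towards right edge"

print(classify_position(position_ratio(480, 250)))   # L > R: towards left edge
print(classify_position(position_ratio(300, 310)))   # L ~ R: near centre
```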

VI. STEERING AND CURVING SCENARIOS

The above algorithm estimates the position assuming that the vehicle's central axis is parallel to the left and right road edges. Complexities arise when the vehicle steers from one position to another: an inclination towards a particular direction leads the position estimation algorithm to a wrong result. However, the lane keeping performance application only requires the number of erratic lane switches, and we can make use of the fact that in driveways the vehicle always returns to a stable driving condition after a lane switching manoeuvre. This means the steering angle varies during the lane switch and settles to a stable value at a certain point. Using this information, a threshold for the change of steering angle is set: under a stable steering angle the system estimates the vehicle position, and when the change of steering angle exceeds the threshold the estimate is discarded. In this way the number of erratic lane changes can be detected. The same solution applies to a curving scenario. A better way of handling these scenarios is to fuse the vision data with a yaw rate sensor [5]. The yaw rate information provides a much better interpretation of the steering actions of the vehicle, and can be used with the above estimation algorithm even during curving and steering to estimate the current position. Such fused systems make use of the road geometry heuristic calculated by the vision system to provide accurate outputs for a lane keeping application on a road without lanes.
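A minimal sketch of this gating logic follows; the function name, the 0.05 rad/s threshold and the per-frame (yaw_rate, F(X)) input format are all assumptions for illustration, not the paper's implementation.

```python
YAW_RATE_THRESHOLD = 0.05  # rad/s; illustrative stability threshold

def gate_and_count(samples, threshold=YAW_RATE_THRESHOLD):
    """samples: per-frame (yaw_rate, fx) pairs, where fx = F(X) = L/R.
    Returns the accepted position estimates and the number of lane switches."""
    accepted, switches = [], 0
    manoeuvring = False
    for yaw_rate, fx in samples:
        if abs(yaw_rate) > threshold:
            manoeuvring = True           # unstable steering: discard estimates
            continue
        if manoeuvring and accepted and (fx - 1.0) * (accepted[-1] - 1.0) < 0:
            switches += 1                # position flipped sides of the centre
        manoeuvring = False
        accepted.append(fx)
    return accepted, switches

# A drift from the left half to the right half through a steering manoeuvre:
frames = [(0.00, 1.4), (0.01, 1.5), (0.20, 1.1),
          (0.30, 0.9), (0.02, 0.6), (0.00, 0.7)]
print(gate_and_count(frames))  # -> ([1.4, 1.5, 0.6, 0.7], 1)
```

Estimates produced mid-manoeuvre are dropped, and a manoeuvre that ends on the other side of the road centre (F(X) crossing 1) is counted as one lane switch.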


VII. CONCLUSIONS AND FUTURE WORK

In this paper we studied the use of the road edge heuristic in determining the movement of a vehicle within a road. Further work needs to address the complexity of integrating the yaw rate sensor with the vision-based estimation. Scenarios where adjacent vehicles block the road edges also pose a challenge, although using information from prior frames [6] a road model could be stored in such cases to estimate the position. The effects of misalignment and incorrect mounting on the vehicle also need to be addressed. Future work should apply the localized position of the vehicle within a road to the problem of Visual SLAM [7], where the local information can be fed into a global map for accurate localization during autonomous navigation.

REFERENCES

[1] Bento, L.C.; Nunes, U.; Moita, F.; Surrecio, A., "Sensor fusion for precise autonomous vehicle navigation in outdoor semi-structured environments", Proc. IEEE Intelligent Transportation Systems, 2005.
[2] Pei-Yung Hsiao; Kuo-Chen Hung; Shih-Shinh Huang; Wen-Chung Kao; Chia-Chen Hsu; Yao-Ming Yu, "An embedded lane departure warning system", IECON '01, The 27th Annual Conference of the IEEE Industrial Electronics Society, 2001.
[3] Yu, B.; Jain, A.K., "Lane boundary detection using a multiresolution Hough transform", Proc. International Conference on Image Processing, 1997.
[4] OpenCV wiki, http://opencv.willowgarage.com/wiki/
[5] Ju Yong Choi; Chang Sup Kim; Sinpyo Hong; Man Hyung Lee; Jong Il Bae; Harashima, F., "Vision based lateral control by yaw rate feedback", IECON '01, The 27th Annual Conference of the IEEE Industrial Electronics Society, 2001.
[6] Long Chen; Qingquan Li; Qingzhou Mao; Qin Zou, "Block-constraint line scanning method for lane detection", IEEE Intelligent Vehicles Symposium (IV), 2010.
[7] Lategahn, H.; Geiger, A.; Kitt, "Visual SLAM for autonomous ground vehicles", IEEE International Conference on Robotics and Automation (ICRA), 2011.
