An Algorithm of Real-Time Vehicle Detection with Low-Altitude Aerial Video

Wenlong WANG *a,b, Luliang TANG a,b, Qingquan LI a,b
a Transportation Research Center, Wuhan University, 129 Luoyu Road, Wuhan 430079, China;
b State Key Laboratory for Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
* Corresponding author: WANG Wenlong; [email protected]; phone 13995657273; 027-51855660; fax 027-68778043

ABSTRACT
The conflict between vehicles and roads is becoming increasingly serious, and how to apply advanced technology to obtain traffic information quickly and accurately has become a key point in upgrading the level of transportation management and services. Acquiring dynamic traffic information rapidly with a low-altitude aircraft is an important extension of conventional techniques: it is low in cost and well suited to collecting traffic information over a wide area. This paper uses a low-altitude airship as the platform; several sensors (GPS, CCD camera, video encoder, and COFDM wireless transmission equipment) are integrated into the aircraft to form a low-altitude remote sensing platform that acquires high-definition traffic video. For this video, the paper proposes a vehicle detection method that works in a complex and varying background. The method is capable of detecting moving and static vehicles on the road accurately in real time without any supplementary information.

Keywords: low-altitude RS platform, airship, traffic video, moving and static vehicle detection
1. INTRODUCTION
In recent years, several on-going research projects have been working on technologies based on low-altitude remote sensing platforms that improve surveillance techniques for traffic management. The aerial view provides a better perspective and the ability to cover a large area. In 2001, the Florida Department of Transportation (FDOT) organized the ATSS proof-of-concept project [1]. A UAV was chosen as the flying platform; it carried a Sony XC555 video camera that captured video of the traffic on the highway, and the video was transmitted over a 2.4 GHz wireless link. The proof-of-concept test was intended to show that the UAV could fly a certain distance, collect traffic information, and transmit it to the base stations successfully. In the same year, research at the University of Arizona set the goal of using aerial video from a helicopter platform to enhance existing data sources and improve traffic management [2]. In the course of that research, a series of data collection efforts took place, using aerial video as the primary source of traffic data. In 2003, Professor Alejandro Angel of the University of Arizona put forward a representative method for detecting moving vehicles directly from video images in near real time [3]. Firstly, image registration was applied to carry out motion compensation of the traffic image sequence; then the frame difference method was used to detect moving vehicles in the complex and varying background. This method suffers from two limitations: image registration is time-consuming, and static vehicles cannot be detected. The German Aerospace Center (DLR), in
2004, was involved in two projects proving the benefit of using aerial images for traffic monitoring: LUMOS [4] and "Eye in the Sky" [5]. A high-resolution digital camera, combined with an inertial measurement unit (IMU) and a GPS receiver onboard an airborne platform, was used to provide video, attitude data, and positioning data. The video and data were transmitted to a ground station, georeferenced, and processed in order to extract traffic information for users. In 2006, Prof. Peter Reinartz came up with another effective vehicle detection method [6]. Firstly, a priori knowledge about roads and their parameters (width, lanes, and directions), together with attitude data, was used to find the road region in the images; then vehicle detection was performed on the ROI (region of interest) defined in the previous step. The whole procedure was realized in real time. However, the method needs a digital road map (produced by Navtech) of the test-relevant area as well as accurate positioning and attitude data of the airborne platform, which greatly increases cost and complexity.
This paper selects an airship as the flying platform and integrates GPS, a CCD camera, a video encoder, and a COFDM wireless link into the platform, which is designed to acquire large-scale dynamic traffic video. The paper then describes a vehicle detection method applied to the airborne video. The method is capable of detecting moving and static vehicles on the road accurately in real time without any supplementary information.
2. THE METHOD OF VEHICLE DETECTION
2.1 Wipe off Road Lane Marks
Road lane marks often affect the accuracy of vehicle detection, so it is necessary to remove them first. A geometric characteristic of road lane marks in the traffic video is that they are very slender, so this paper exploits this characteristic and applies the gray-scale morphological opening operation to wipe them off [7]. The gray morphology operations are defined as follows:
$(f \oplus b)(x, y) = \max\{\, f(x-i, y-j) + b(i, j) \mid f(x-i, y-j) \in f,\ b(i, j) \in b \,\}$    (1)

$(f \ominus b)(x, y) = \min\{\, f(x+i, y+j) - b(i, j) \mid f(x+i, y+j) \in f,\ b(i, j) \in b \,\}$    (2)

$(f \circ b)(x, y) = ((f \ominus b) \oplus b)(x, y)$    (3)
where equation (1) is the gray dilation, equation (2) is the gray erosion, f(x, y) is the original image, and b(i, j) is a morphological structuring element. Equation (3) is the gray morphological opening, which is suitable for eliminating small bright details. Analysis of the physical size of the road lane marks and the spatial resolution of the traffic video shows that the lane marks are only 2-3 pixels wide in the video, so a structuring element ({ [-1 0] [0 0] [1 0] }) was designed for the opening operation to eliminate them. Fig. 1(a) is an original image frame taken from the video, and Fig. 1(b) is the result, free of road lane marks, after the opening operation.
Fig. 1. Wiping off the road lane marks with morphology: (a) original image; (b) result image.
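As an illustration of the opening step above, the following is a minimal sketch in Python, assuming OpenCV and NumPy are available; the 3x1 linear kernel mirrors the {[-1 0] [0 0] [1 0]} structuring element described in the text, and the file name is a placeholder rather than part of the original workflow.

```python
import cv2
import numpy as np

def remove_lane_marks(gray_frame):
    """Gray-scale morphological opening that suppresses slender (2-3 pixel wide) lane marks."""
    # 3-pixel linear structuring element corresponding to offsets (-1,0), (0,0), (1,0);
    # the orientation depends on the image coordinate convention.
    kernel = np.ones((3, 1), dtype=np.uint8)
    # Opening = erosion followed by dilation (equations (1)-(3))
    return cv2.morphologyEx(gray_frame, cv2.MORPH_OPEN, kernel)

# Hypothetical usage on a single frame grabbed from the traffic video
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
if frame is not None:
    cleaned = remove_lane_marks(frame)
```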
2.2 The Detection Method of Road Scope
The traffic video is constantly affected by changing light and the unstable attitude of the camera, so the gray values of the road are not consistent across the frames of the traffic video. It is therefore impossible to use the same gray-value thresholds to separate the road scope in every image, and a valid method for identifying the road scope is needed.
2.2.1 The Road Gray Value Statistical Method
The road gray value statistics consist of the mean and variance of the gray values. To calculate them, a group of statistical templates containing a certain number of pixels is selected within the road scope, as shown in Fig. 2. These rectangular templates are extracted manually.
Fig. 2. Selecting the statistical templates.
The pixels in the red rectangular templates are used to calculate the mean and variance of the road gray values by formulas (4) and (5).
$M = \frac{1}{N} \sum_{i=0}^{N-1} G_i(x, y)$    (4)

$\sigma^2 = \frac{1}{N} \sum_{i=0}^{N-1} \left( G_i(x, y) - M \right)^2$    (5)
where M is the road gray mean value of an image, σ² is the road gray variance of the image, N is the total number of pixels in the gray statistical templates of the image, and Gi(x, y) is the gray value of pixel (x, y).
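As a concrete illustration of formulas (4) and (5), the sketch below computes the road gray mean and variance from a set of manually chosen rectangular templates; the template coordinates and the NumPy-based implementation are assumptions made only for illustration.

```python
import numpy as np

def road_gray_statistics(gray_frame, templates):
    """templates: list of (x, y, w, h) rectangles selected manually inside the road scope."""
    # Gather all template pixels into one vector
    pixels = np.concatenate(
        [gray_frame[y:y + h, x:x + w].ravel() for (x, y, w, h) in templates]
    ).astype(np.float64)
    mean = pixels.mean()                       # formula (4)
    variance = ((pixels - mean) ** 2).mean()   # formula (5)
    return mean, variance

# Hypothetical usage with two small rectangles lying on the road surface:
# M, var = road_gray_statistics(frame, [(100, 200, 30, 20), (400, 220, 30, 20)])
```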
2.2.2 The Road Gray Value Statistical Characteristics
To study the road gray statistical characteristics, a group of test image frames is randomly selected from a traffic video. The gray statistics of these test images show that the road gray variance does not change greatly as long as the flying altitude of the airship platform is relatively stable. This is an important property for road extraction based on road gray values. As shown in Fig. 3, for a group of image frames collected at the same altitude, the road gray mean value changes considerably, whereas the road gray variance changes only slightly.
Fig. 3. Road gray statistical properties: across the sample frames the mean M ranges from 160 to 178, while σ stays between 14 and 16.
2.2.3 Road Extraction
1) Extract the road scope from the first video image
Equation (6) is used to determine which pixels belong to the non-road region Fbackground(x, y) and which belong to the road region Froad(x, y). In this equation, M and σ are the gray mean value and gray standard deviation of the road in the first frame image.
$F_i(x, y) \in \begin{cases} F_{background}(x, y), & \text{if } F_i(x, y) \le M - 3\sigma \ \text{or} \ F_i(x, y) \ge M + 3\sigma \\ F_{road}(x, y), & \text{else} \end{cases}$    (6)
The result of removing the non-road region is shown in Fig. 4(a); there are some small discrete speckles besides several larger subblocks. These very small subblocks are usually not road, although their gray values fall within the road range. After the subblocks with very small areas are removed, the true road scope is retained, as shown in Fig. 4(b).
Fig. 4. Road extraction from the first image frame: (a) after gray-value thresholding; (b) after removing small non-road subblocks.
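The first-frame road extraction described above can be sketched as follows, assuming OpenCV; the threshold follows equation (6), while the minimum-area value used to discard small speckles is an illustrative assumption, since the paper does not state an exact threshold.

```python
import cv2
import numpy as np

def extract_road_first_frame(gray_frame, mean, sigma, min_area=500):
    """Threshold by the 3-sigma rule of equation (6), then drop small non-road subblocks."""
    low, high = mean - 3.0 * sigma, mean + 3.0 * sigma
    road_mask = ((gray_frame >= low) & (gray_frame <= high)).astype(np.uint8)

    # Remove small speckles whose gray values happen to fall inside the road range
    num, labels, stats, _ = cv2.connectedComponentsWithStats(road_mask, connectivity=8)
    cleaned = np.zeros_like(road_mask)
    for label in range(1, num):  # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == label] = 255
    return cleaned
```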
2) Extract the road scope from the following video images
The time interval between frames of the traffic video is very short (25 frames per second), so the change of the road scope between two consecutive images is very small; the algorithm for road extraction from the following images is based on this characteristic of the traffic video. Firstly, the road scope of the previous of two consecutive images is extended slightly to serve as the road detection scope of the current image, as shown in Fig. 5(a). Secondly, the gray histogram of Fig. 5(a) is generated as Fig. 5(b); because road is the main component in Fig. 5(a), the maximum peak of the gray histogram gives the road gray mean value of the current image. Lastly, the 3σ rule is used to determine the road gray value range, which is used to extract the road contour of the current image, as shown in Fig. 5(c).
Fig. 5. Extracting the road scope from the following video images: (a) road detection scope derived from the previous frame; (b) gray histogram of (a); (c) extracted road contour of the current frame.
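A minimal sketch of this step is given below, assuming OpenCV; the dilation size used to extend the previous road scope is an illustrative assumption, and the histogram peak is taken as the road gray mean exactly as described above.

```python
import cv2
import numpy as np

def extract_road_next_frame(gray_frame, prev_road_mask, sigma, dilate_px=15):
    """Road extraction for the following frames (Fig. 5)."""
    # Extend the previous road scope slightly to form the detection scope (Fig. 5a)
    kernel = np.ones((dilate_px, dilate_px), dtype=np.uint8)
    scope = cv2.dilate(prev_road_mask, kernel)

    # The maximum peak of the gray histogram inside the scope is taken
    # as the road gray mean of the current frame (Fig. 5b)
    scope_pixels = gray_frame[scope > 0]
    hist, _ = np.histogram(scope_pixels, bins=256, range=(0, 256))
    road_mean = float(np.argmax(hist))

    # 3-sigma rule, restricted to the detection scope (Fig. 5c)
    within = (gray_frame >= road_mean - 3 * sigma) & (gray_frame <= road_mean + 3 * sigma)
    return ((scope > 0) & within).astype(np.uint8) * 255
```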
2.3 Vehicle Identification
The vehicle identification procedure is shown in Fig. 6. The sub-image inside the road contour is shown in Fig. 6(a); it contains the road as well as the vehicles on the road. Fig. 6(b) is the result of the road scope extraction step above, and Fig. 6(c) is the difference between Fig. 6(a) and Fig. 6(b). In theory only the vehicles should remain in Fig. 6(c), but in practice some vehicles are split into several adjacent small blocks, and some noise speckles are still retained. In order to exclude these disturbances and identify the vehicles accurately, the adjacent blocks are merged, and non-vehicle speckles are deleted based on the differences in contour geometric parameters (such as area, width, and aspect ratio) between noise speckles and vehicles. The vehicle identification result is shown in Fig. 6(d).
Fig. 6. The flow of vehicle identification: (a) sub-image inside the road contour; (b) extracted road scope; (c) difference between (a) and (b); (d) identified vehicles.
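The identification step can be sketched as follows, assuming OpenCV; the closing kernel and the area and aspect-ratio thresholds used to reject non-vehicle speckles are illustrative assumptions, since the paper does not give the exact geometric parameters.

```python
import cv2
import numpy as np

def identify_vehicles(road_contour_mask, road_gray_mask,
                      min_area=40, max_area=2000, max_aspect=4.0):
    """road_contour_mask: filled road contour; road_gray_mask: pixels classified as road gray."""
    # Difference of the two masks (Fig. 6c): candidate vehicle pixels lie inside the
    # road contour but were not classified as road by the gray-value test.
    candidates = cv2.bitwise_and(road_contour_mask, cv2.bitwise_not(road_gray_mask))

    # Merge adjacent fragments of the same vehicle with a small morphological closing
    kernel = np.ones((5, 5), dtype=np.uint8)
    merged = cv2.morphologyEx(candidates, cv2.MORPH_CLOSE, kernel)

    # Geometric filtering: keep only blobs whose area and aspect ratio look like a vehicle
    vehicles = []
    num, labels, stats, _ = cv2.connectedComponentsWithStats(merged, connectivity=8)
    for label in range(1, num):
        x, y, w, h, area = stats[label]
        aspect = max(w, h) / max(min(w, h), 1)
        if min_area <= area <= max_area and aspect <= max_aspect:
            vehicles.append((x, y, w, h))
    return vehicles
```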
3. EXPERIMENTS
The experiment used the airship as a carrier with a high-resolution camera, a video encoder, and a wireless link to collect a set of traffic videos over Guanshan Avenue and the Middle Ring Road in Wuhan. The flight altitude was about 150 meters, and the flight speed was about 40 km/h. The analog video captured onboard was encoded by an H.264 digital encoder, and the digital video was sent to the ground station in real time via a multi-carrier radio transmission link with a maximum transmission rate of 8 Mbps. The image resolution of the video received at the ground station was D1 (width: 720, height: 576), and the spatial resolution of the video was about 0.2 meters per pixel. This high-resolution video makes it possible to distinguish vehicles on the road. To verify the accuracy and robustness of the algorithm, one hour of video was first selected as test data from the field video data sets captured over the Wuhan expressway. Fig. 7(a) compares the automatic vehicle counts with manual counts, and Fig. 7(b) shows that the rate of falsely detected vehicles does not exceed 10%.
Fig. 7. Precision analysis of the proposed vehicle detection method: (a) automatic vehicle counting vs. manual counting per image (vehicle count over time); (b) detection accuracy rate per five-minute interval (y-axis range 86%-100%).
The next test demonstrates that the proposed method is capable of identifying moving and static vehicles at the same time. Firstly, a rectangular block was added manually to the test images as a static car. Then the proposed method and the method of Alejandro Angel of the University of Arizona [3] were used separately to detect vehicles. The experimental results are shown in Fig. 8. The method proposed in this paper accurately identifies all of the moving and static vehicles, as in Fig. 8(a); however, the other method detects only the moving vehicles, as in Fig. 8(b), because the frame difference method is sensitive only to moving vehicles.
Fig. 8. Comparison of the two vehicle identification methods: (a) the proposed method detects the manually added static car as well as the moving vehicles; (b) the frame-difference-based method misses the static car.
To verify that the proposed method is efficient, the proposed method and the classical method were applied to the same sets of experimental data in the same computing environment. Their processing times are shown in Fig. 9; the processing time of the proposed method is less than one third of that of the classical method.
Fig. 9. Efficiency comparison between the classical and proposed methods (processing time in seconds vs. number of images, 100 to 500).
4. CONCLUSION
This paper described a low-altitude remote sensing platform for traffic data collection over expressways and proposed a method, based on road gray statistics, for detecting moving and static vehicles quickly and accurately in a complex and varying background. The method is capable of detecting moving and static vehicles on the road accurately in real time without any supplementary information. Future research will target urban roads, with the main focus on how to detect vehicles on the road accurately and in real time when the traffic flow density is very high.
REFERENCES
[1] Anuj Puri. A Survey of Unmanned Aerial Vehicles (UAV) for Traffic Surveillance.
[2] Pitu Mirchandani, Mark Hickman. Application of Aerial Video for Traffic Flow Monitoring and Management. Pecora 15/Land Satellite Information IV/ISPRS Commission I/FIEOS, 2002.
[3] Alejandro Angel, Mark Hickman, Pitu Mirchandani. Methods of Analyzing Traffic Imagery Collected from Aerial Platforms. IEEE Transactions on Intelligent Transportation Systems, 2003, 4(2): 99-107.
[4] I. Ernst, S. Sujew. LUMOS - Airborne Traffic Monitoring System. IEEE Transactions on Intelligent Transportation Systems, 2003, 1(1): 753-759.
[5] A. Börner, I. Ernst, M. Ruhé. Airborne Camera Experiments for Traffic Monitoring. ISPRS Proceedings, Workshop WG I/6, 2006.
[6] Peter Reinartz, Marie Lachaise. Traffic Monitoring with Serial Images from Airborne Cameras. ISPRS Journal of Photogrammetry & Remote Sensing, 2006, 61: 149-158.
[7] Liu Z.F., You Z.SH., Cao G. A Multi-scale Color Vector Morphological Edge Detection. Journal of Image and Graphics, 2002, 7(9): 888-894.