
ScienceDirect Procedia Computer Science 00 (2018) 000–000 www.elsevier.com/locate/procedia

The First International Conference On Intelligent Computing in Data Sciences

Speed estimation using simple line

Omar Bourja a,b, Abdelilah Maach a, Yahya Zennayi a,b, François Bourzeix b, Timothée Guerin b

a Ecole Mohammedia d'ingénieurs, Avenue Ibn Sina B.P. 765 Agdal, Rabat 10100, Morocco
b MAScIR Foundation, Rue Mohamed El Jazouli, Madinat Al Irfane, Rabat 10100, Morocco


Abstract

Several algorithms have already been developed to estimate a vehicle's speed using a single camera. The main problem is the efficiency of these systems: processing is done on the whole image while moving objects occupy only a specific part of it. This work presents a speed estimation technique based on processing a line only one pixel wide. This line image can be extracted from a full-size image or acquired directly from a line camera, and is then processed with background subtraction, morphological operations, binarization and finally blob detection to allow tracking. This new approach enables implementation on low-cost platforms with low computing power: the amount of data to process is roughly divided by the image width of a standard-resolution camera, allowing a low data rate from acquisition.

© 2018 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/). Selection and peer-review under responsibility of International Neural Network Society Morocco Regional Chapter.

Keywords: Radar; Speed estimation; Efficiency; Object tracking; Lines of pixels; Moving object detection

1. Introduction

Most current radars on the market use an active system based on signal transmission [1] to estimate vehicle speed, and have to add a camera to identify the vehicle, using for example a fusion method on the license plate images [2]. Our approach allows passive detection with a single component: a camera. In addition, this paper presents a new, efficient method that performs detection on a one-pixel-wide image. The algorithm was first prototyped in Matlab and then implemented in a C++ solution in order to achieve a finished product.

* Corresponding author. Tel.: +212-6-71305426; fax: +212-5-30279827. E-mail address: [email protected].



The remainder of the paper is organized as follows. Section 2, "Methods", describes our algorithm and the processing performed: Section 2.1 introduces the general architecture, Section 2.2 presents line extraction and the pre-processing applied to enable speed estimation, Section 2.3 covers line reintegration and the tracking method used to compute speeds, and Section 2.4 deals with road calibration, i.e. how image information is associated with metric distances so that the speed of each vehicle can be calculated. Section 3, "Effectiveness of the 1D solution", compares the 1D approach (one line of pixels) with the 2D approach (an image of fixed height and width) and gives the processing cost of each. Section 4, "Results", reports the results obtained on real videos, comparing the speeds estimated by our method with the true vehicle speeds measured by GPS. Section 5, "Conclusion", discusses the results and gives perspectives for this work.

2. Methods

2.1. Architecture of the Algorithm

The general architecture of our method is shown in Fig. 1 below:

Fig. 1. General architecture of the algorithm

2.2. Line extraction and pre-processing

Original images come from the sensor in Bayer format as raw, uncompressed data, so they must first be converted to color and then to grayscale before line extraction can be applied (Fig. 2 and Fig. 3).


Fig. 2. Bayer images

Fig. 3. Grayscale images
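As an illustration, a minimal sketch of this conversion using OpenCV is given below. The Bayer pattern (here BG) and the function name are assumptions, since the paper does not specify them; the actual pattern depends on the sensor.

```cpp
#include <opencv2/opencv.hpp>

// Convert a raw Bayer frame to grayscale before line extraction.
// The Bayer pattern (here BG) depends on the sensor and is an assumption.
cv::Mat bayerToGray(const cv::Mat& rawBayer)
{
    cv::Mat color, gray;
    cv::cvtColor(rawBayer, color, cv::COLOR_BayerBG2BGR);  // demosaicing to color
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);         // keep luminance only
    return gray;
}
```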

Once the images are converted to grayscale, we extract the lines. To allow the detection of all vehicles, the user should define multiple lines, one per lane of the road (Fig. 4). Alternatively, an algorithm can be used to detect the road lanes automatically in the frame.

Fig. 4. Lines placement

In order to obtain the right line orientations automatically, it is important to calculate the vanishing point formed by the road edges (Fig. 5) [3].


Fig. 5. Vanishing point selection

This vanishing point is calculated by solving the following system (1), in which the left and right road edges are modeled as straight lines in the image:

y_left = a_left · x + b_left,    y_right = a_right · x + b_right    (1)

The vanishing point is the intersection of these two lines.
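A minimal C++ sketch of that intersection is shown below; the function name and the way the slopes and intercepts are obtained from the user-selected edge points are illustrative assumptions.

```cpp
#include <cmath>
#include <stdexcept>
#include <utility>

// Intersect the two road-edge lines y = a*x + b to obtain the vanishing point.
// The slopes and intercepts are those fitted to the user-selected edge points.
std::pair<double, double> vanishingPoint(double aLeft, double bLeft,
                                         double aRight, double bRight)
{
    if (std::abs(aLeft - aRight) < 1e-9)
        throw std::runtime_error("road edges are parallel in the image");
    double x = (bRight - bLeft) / (aLeft - aRight);  // aL*x + bL = aR*x + bR
    double y = aLeft * x + bLeft;
    return {x, y};
}
```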

Every line defined by the user passes through the vanishing point. This gives a perfect alignment with the road and improves the reliability of the detection. Line extraction works with an affine function determined by two points defined by the user: for each vertical pixel (row), an abscissa coordinate is calculated, and the corresponding pixel value is extracted and stored in a one-dimensional image (Fig. 6).

Fig. 6. Extracted line
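The following is a sketch of the extraction loop described above, using OpenCV types. It assumes the line is defined by two user points with distinct rows and that the result is stored as an H x 1 grayscale image; the function name and the clamping behaviour are our own additions.

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Extract a one-pixel-wide line from a grayscale frame along the affine
// path x = a*y + b defined by two user-selected points p0 and p1
// (assumed to lie on a ray through the vanishing point, with p0.y != p1.y).
cv::Mat extractLine(const cv::Mat& gray, cv::Point p0, cv::Point p1)
{
    CV_Assert(gray.type() == CV_8UC1 && p0.y != p1.y);
    double a = static_cast<double>(p1.x - p0.x) / (p1.y - p0.y);  // x variation per row
    double b = p0.x - a * p0.y;                                   // intercept
    int yStart = std::min(p0.y, p1.y), yEnd = std::max(p0.y, p1.y);
    cv::Mat line(yEnd - yStart + 1, 1, CV_8UC1);
    for (int y = yStart; y <= yEnd; ++y) {
        int x = cvRound(a * y + b);                   // abscissa for this row
        x = std::max(0, std::min(x, gray.cols - 1));  // clamp to the image
        line.at<uchar>(y - yStart, 0) = gray.at<uchar>(y, x);
    }
    return line;
}
```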

The extraction is done twice: on the current frame and on the background image. Background subtraction is then applied, which includes binarization through a threshold (Fig. 7), following rule (2):

I_D(y) = 0    if  I_B(y) − (I_B(y) · thresh)/100 < I(y) < I_B(y) + (I_B(y) · thresh)/100
I_D(y) = 255  otherwise    (2)

where I_D is the resulting binary image, I_B is the background image, I is the current extracted line, and thresh is a percentage.


Fig. 7. Result of background subtraction
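Below is a minimal per-pixel transcription of rule (2). It assumes the background line I_B is maintained elsewhere (for example by a running average) and that thresh is expressed as a percentage (e.g. 10); names and types are illustrative.

```cpp
#include <opencv2/opencv.hpp>

// Binarize an extracted line against its background line, following rule (2):
// a pixel within +/- thresh% of the background value is background (0),
// anything else is foreground (255). thresh is a percentage, e.g. 10.
cv::Mat subtractBackground(const cv::Mat& line, const cv::Mat& bgLine, double thresh)
{
    CV_Assert(line.size() == bgLine.size() && line.type() == CV_8UC1
              && bgLine.type() == CV_8UC1);
    cv::Mat out(line.size(), CV_8UC1);
    for (int y = 0; y < line.rows; ++y) {
        double I  = line.at<uchar>(y, 0);
        double IB = bgLine.at<uchar>(y, 0);
        double band = IB * thresh / 100.0;
        bool isBackground = (I > IB - band) && (I < IB + band);
        out.at<uchar>(y, 0) = isBackground ? 0 : 255;
    }
    return out;
}
```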

The final processing applied to the line is morphological filtering: an opening followed by a closing, both using a 3x3 kernel (Fig. 8).

Fig. 8. Result of morphological operation (closing)
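A sketch of this filtering step with OpenCV morphology follows; the 3x3 rectangular kernel matches the text, while the function name is ours. Note that on a one-pixel-wide image the kernel effectively acts vertically.

```cpp
#include <opencv2/opencv.hpp>

// Clean the binary line: opening removes isolated noise pixels, closing fills
// small gaps inside a vehicle's silhouette. The 3x3 rectangular kernel matches
// the text; on a one-pixel-wide image it effectively acts as a vertical 3x1.
cv::Mat cleanBinaryLine(const cv::Mat& binaryLine)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::Mat opened, closed;
    cv::morphologyEx(binaryLine, opened, cv::MORPH_OPEN, kernel);
    cv::morphologyEx(opened, closed, cv::MORPH_CLOSE, kernel);
    return closed;
}
```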

2.3. Line reintegration and tracking

In order to apply blob detection and object tracking [4], every line is reintegrated into a full-size image. This step is necessary to allow simple object tracking in 2D space. Line reintegration is the exact inverse of line extraction (Fig. 9).

Fig. 9. Result of lines integration
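A sketch of this reintegration step is given below: the processed line is written back into a full-size mask along the same affine path used for extraction. The helper name and the assumption that the mask is a pre-allocated 8-bit image are ours.

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Inverse of line extraction: write the processed one-pixel-wide line back
// into a pre-allocated full-size 8-bit mask along the same affine path
// x = a*y + b, so that ordinary 2D blob detection and tracking can follow.
void reintegrateLine(const cv::Mat& binaryLine, cv::Mat& fullMask,
                     cv::Point p0, cv::Point p1)
{
    CV_Assert(binaryLine.type() == CV_8UC1 && fullMask.type() == CV_8UC1);
    double a = static_cast<double>(p1.x - p0.x) / (p1.y - p0.y);
    double b = p0.x - a * p0.y;
    int yStart = std::min(p0.y, p1.y);
    for (int i = 0; i < binaryLine.rows; ++i) {
        int y = yStart + i;
        int x = cvRound(a * y + b);
        if (x >= 0 && x < fullMask.cols && y >= 0 && y < fullMask.rows)
            fullMask.at<uchar>(y, x) = binaryLine.at<uchar>(i, 0);
    }
}
```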

This full image can now be used for blob detection and object tracking. Each white line represents a foreground object that can be associated with a vehicle. Blob detection returns a labeled foreground object, a bounding box and an associated centroid (Fig. 10).


Fig. 10. Result of blob detection
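The paper does not name a particular blob detector; the sketch below uses OpenCV connected components, which directly provides the labels, bounding boxes and centroids mentioned above.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

struct Blob {
    cv::Rect box;        // bounding box of the labeled foreground object
    cv::Point2d center;  // centroid used for tracking and speed estimation
};

// Label the foreground regions of the reintegrated image and return one blob
// (bounding box + centroid) per detected vehicle candidate.
std::vector<Blob> detectBlobs(const cv::Mat& foregroundMask)
{
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(foregroundMask, labels, stats, centroids);
    std::vector<Blob> blobs;
    for (int i = 1; i < n; ++i) {  // label 0 is the background
        cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
                     stats.at<int>(i, cv::CC_STAT_TOP),
                     stats.at<int>(i, cv::CC_STAT_WIDTH),
                     stats.at<int>(i, cv::CC_STAT_HEIGHT));
        cv::Point2d c(centroids.at<double>(i, 0), centroids.at<double>(i, 1));
        blobs.push_back({box, c});
    }
    return blobs;
}
```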

A static offset is added to each bounding box to simulate the real width of the vehicle in the full image. This step is not required for speed estimation, but we added it in order to have a complete system with image extraction: the final solution outputs an image of each vehicle together with its corresponding speed.

2.4. Calibration

The calibration of the system is done in situ by setting the rectification matrix and the real-distance correspondences in the software [5]. The advantage of this approach is that no external tools or measurements are needed. This step associates a metric value with each pixel coordinate, which ultimately makes it possible to calculate the speed of the vehicles (Fig. 11).

Fig. 11. Rectification matrix selection

This selection provides information on the road orientation and the real distances; this information is stored in the rectification matrix, which transforms the perspective image into a sky-view (bird's-eye) image. For example, in Europe, road marking is subject to regulations that impose accurate markings with defined distances and dimensions [6] (Fig. 12).
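As a sketch of how the rectification matrix and metric correspondences can be used: four image points with known road-plane positions give a homography, and a centroid tracked over a known time interval gives the speed. The point coordinates, function names and the km/h conversion below are illustrative assumptions, not the paper's exact implementation.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Build the rectification (homography) matrix from four points selected in
// the image and their corresponding positions on the road plane, in metres.
// The metric coordinates come from the in-situ calibration (e.g. the official
// marking distances of Fig. 12).
cv::Mat buildRectification(const std::vector<cv::Point2f>& imagePts,
                           const std::vector<cv::Point2f>& roadPtsMeters)
{
    CV_Assert(imagePts.size() == 4 && roadPtsMeters.size() == 4);
    return cv::getPerspectiveTransform(imagePts, roadPtsMeters);
}

// Estimate a vehicle speed in km/h from two tracked centroid positions
// (image coordinates) and the elapsed time between the two observations.
double estimateSpeedKmh(const cv::Mat& H, cv::Point2f p1, cv::Point2f p2,
                        double dtSeconds)
{
    std::vector<cv::Point2f> img{p1, p2}, road;
    cv::perspectiveTransform(img, road, H);   // pixels -> metres on the road plane
    double dx = road[1].x - road[0].x;
    double dy = road[1].y - road[0].y;
    double metres = std::sqrt(dx * dx + dy * dy);
    return (metres / dtSeconds) * 3.6;        // m/s -> km/h
}
```

With a frame-accurate timestamp for each extracted line, dtSeconds is simply the number of frames separating the two observations divided by the camera frame rate.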


Fig. 12. Official distances between road markings

These distances are used to convert pixel distances to meters.

3. Effectiveness of the 1D solution

This new approach to speed estimation has been developed in order to run on low-resource systems (i.e. low-cost boards with low computing power). To estimate the computing cost of our algorithm, we count the floating-point operations of each processing step per pixel. In addition, we evaluate the number of pixels used, for each received frame, to perform line extraction, processing, background estimation and line reintegration. This evaluation is done for both the 1D and 2D solutions, on the basis of a high-definition input image (1400x1024 pixels).

3.1. Calculation cost of the 2D solution

The 2D solution has a high resource consumption: all processing steps leading to blob detection are applied to the whole image. Table 1 lists the processing steps, the number of operations per pixel and the estimated number of pixels to process.

Table 1. Operations and number of pixels for the 2D solution

Operation             | Operations per pixel | Number of pixels
Bayer to Gray         | 8                    | 1400x1024
Background estimation | 92                   | 1400x1024
Subtraction           | 1                    | 1400x1024
Graythresh            | 4                    | 1400x1024
Binarization          | 4                    | 1400x1024
Structuring element   | 128                  | 30% x (1400x1024)
Dilation              | 130                  | 30% x (1400x1024)
Filling holes         | 43                   | 30% x (1400x1024)
Erosion               | 130                  | 30% x (1400x1024)
Dilation (20x1)       | 28                   | 30% x (1400x1024)
Total                 | 568                  | 9318.4 Kpixels

We consider that the morphological operations process only foreground (white) pixels, which represent at most 30% of the image. As a result, for each received frame, a total of 716.8 MFlops is computed by the system. As a reference, an Intel Atom N270 clocked at 1.6 GHz delivers about 0.31 GFlops.


3.2. Calculation cost of the 1D solution

The 1D solution reduces both the number of processing steps and the number of pixels to treat. In fact, processing is applied to a one-dimensional image with a resolution of 1400x1. Table 2 shows the processing applied to the image before tracking.

Table 2. Operations and number of pixels for the 1D solution

Operation             | Operations per pixel | Number of pixels
Line extraction       | 3                    | 1400x3
Bayer to Gray         | 4                    | 1400x3
Background estimation | 92                   | 1400x1
Subtraction           | 9                    | 1400x1
Erosion (20x1)        | 4                    | 1400x1
Dilation (20x1)       | 4                    | 1400x1
Total                 | 116                  | 14 Kpixels

As with the 2D solution, we estimate the calculation cost of processing one frame, which amounts to 0.77 MFlops.

3.3. Comparison of the calculation costs

The two solutions have very different calculation costs, with a ratio of roughly one to one thousand. This difference is the main criterion for obtaining a system that is more efficient than existing ones (Table 3).

Table 3. Computing cost of the 2D and 1D solutions

Solution | Computing cost
2D       | 716.8 MFlops
1D       | 0.77 MFlops

4. Results

We carried out a set of measurements comparing real speeds (verified by GPS) with the speed estimates supplied by our 1D system. The system consists mainly of a DALSA HC 1400 camera and a VECOW EC7710 PC. Table 4 compares our results with those of a stereoscopic algorithm [7] on the same test videos.

Table 4. Speeds estimated by the 1D solution

Vehicle   | Speed measured by GPS | Speed estimated with the stereoscopic algorithm | Speed estimated with 1D (one line of pixels)
Vehicle 1 | 117                   | 117.17                                          | 116.81
Vehicle 2 | 115                   | 114.92                                          | 114.43
Vehicle 3 | 105                   | 106.51                                          | 104.67
Vehicle 4 | 100                   | 100.87                                          | 100.34
Vehicle 5 | 95                    | 94.79                                           | 95.54


5. Conclusion

The method proposed in this article, which estimates vehicle speed in real time by processing a single line of pixels, makes it possible to implement these algorithms on low-cost embedded boards. This will help reduce the cost of the products installed on roads.

References

1. E. Odat, J. S. Shamma and C. Claudel, Vehicle Classification and Speed Estimation Using Combined Passive Infrared/Ultrasonic Sensors, IEEE Transactions on Intelligent Transportation Systems, pp. 1-14, doi: 10.1109/TITS.2017.2727224 (2017).
2. O. Bourja, O. Naggar, F. Bourzeix and A. Maach, Sharpness improvement of license plates using a fusion method, Intelligent Systems and Computer Vision (ISCV), IEEE, doi: 10.1109/ISACV.2017.8054939 (2017).
3. S. Lee, J. Kim, J. S. Yoon, S. Shin, O. Bailo, N. Kim, T.-H. Lee, H. S. Hong, S.-H. Han and I. S. Kweon, VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition (2017).
4. W. Min, J. Liangwei, L. Wenjie and M. Qing, Detection and Tracking of Vehicles Based on Video and 2D Radar Information, 53, pp. 205-214, doi: 10.1007/978-981-10-2398-9_19 (2017).
5. R. Tomás, Practical camera calibration and image rectification in monocular road traffic applications, Machine Graphics & Vision International Journal, 15, pp. 51-71 (2006).
6. http://www.equipements-routiers-et-urbains.com/content/que-signifient-les-modulations-des-marquages
7. F. Bourzeix, O. Bourja, M. A. Boukhris and N. Es-Sbai, Speed Estimation Using Stereoscopic Effect, Tenth International Conference on Signal-Image Technology and Internet-Based Systems (SITIS), IEEE, ISBN: 978-1-4799-7978-3 (2014).
