An Approach for Building an Intelligent Parking Support System

Nguyen Thai-Nghe
Can Tho University, 3-2 Street, Ninh Kieu District, Can Tho City, Vietnam
[email protected]

Nguyen Chi-Ngon
Can Tho University, 3-2 Street, Ninh Kieu District, Can Tho City, Vietnam
[email protected]
ABSTRACT
In this study, we propose a solution for building an Intelligent Parking Support System. This system not only has the characteristics of other information systems but also acts as an intelligent system that integrates three recognition techniques: automatic recognition of motorcycle license plates (using a Cascade of Boosting with Haar-like features and Support Vector Machines - SVM), barcode recognition, and semi-automatic recognition via surveillance cameras. Experimental results show that the proposed system is robust and that the models work well in all three recognition stages. In the license plate area recognition stage, the model achieves 99% accuracy using 750 images for training and 243 images for testing. In the letter area recognition stage, the model achieves 95.88% accuracy using 11,866 and 4,755 images for training and testing, respectively. In the classification stage, we train the SVM model on 2,603 records and test it on 1,550 records, reaching an accuracy of 98.99%. The proposed system is full-featured and can be deployed in practice.
Categories and Subject Descriptors
H.4 [Information Systems Applications]: Decision support systems; I.4 [Image Processing and Computer Vision]: Scene Analysis—Object recognition

General Terms
Design, Performance, Experimentation

Keywords
Intelligent parking support system, motorcycle license plate recognition, Haar-like feature, barcode

1. INTRODUCTION
In underdeveloped countries, public transport systems cannot meet the demands of densely populated areas; consequently, personal vehicles, especially motorcycles, are growing very fast. This causes many problems, such as traffic jams, pollution, and a lack of parking places. With such a large number of motorcycles, keeping and managing them safely is also a problem. Moreover, in underdeveloped countries, most parking places currently use paper tickets (examples in Figure 1) to keep and manage customers' motorcycles. The paper ticket has many drawbacks: it is easily damaged, easy to counterfeit, and prone to mistakes in writing and checking. With paper tickets, we cannot store, search, retrieve, or compile statistics on parking information. Although several computer systems can support parking places, they work only via surveillance cameras, and security guards still have to keep an eye on them.
Figure 1: Examples of paper tickets (picture source: Internet)

In this study, we propose a solution for building an Intelligent Parking Support System (IPSS). The IPSS not only has the characteristics of other information systems but also acts as an intelligent system which integrates three recognition techniques: automatic recognition of motorcycle license plates, barcode recognition, and semi-automatic recognition via surveillance cameras. Moreover, in this work we introduce a three-stage process for automatic license plate recognition. First, we locate and extract the license plate area. Then, within that area, we locate and extract the images of the letters. Next, for each letter image, we propose a simple but effective method to convert the image to binary values so that it can be used as input for classification with Support Vector Machines (SVM). Finally, using those data, we build the SVM model to classify the letters.

The rest of this paper is organized as follows: Section 2 summarizes related work. Section 3 introduces the three main concepts used to build the proposed system. Section 4 describes how to train the models and build the IPSS. Section 5 presents experimental results as well as snapshots of the IPSS, followed by the conclusion.
2. RELATED WORKS
In the object recognition area, several studies have been published, e.g., [5], [4], [6]. In the area of license plate recognition, there are also several published works. For example, [3] presents a license plate detection algorithm using both global edge features and local Haar-like features [13, 14]. Global classifiers using global statistical features are constructed through simple learning processes. After applying these global classifiers, most license plate background regions are excluded from further training or testing. Then an AdaBoost [2] learning procedure is used to obtain local classifiers based on selected local Haar-like features. The authors in [15] studied how to choose good samples for Haar-like cascade classifiers and image post-processing methods to achieve good localization results, while the authors in [16] introduced a classifier trained on histogram of oriented gradients (HOG) features to judge the likelihood of candidate plates detected by a Haar-like classifier, selecting the candidate with the highest likelihood as the final plate in order to reduce false positives. This method was tested on 3,000 images, obtaining a recall rate of 95.2% and an accuracy of 94.0%, as opposed to 66.4% without HOG features. The authors in [10] presented a method for license plate extraction from car images, followed by character segmentation and recognition, and also developed an electronic parking fee collection system based on number plate information. Moreover, in [7], the authors proposed an algorithm based on local structure patterns for license plate detection; it includes post-processing methods to reduce the false positive rate using positional and color information of license plates. Many tutorials (e.g., http://www.licenseplaterecognition.com/) and works can be found on the Web (e.g., http://en.wikipedia.org/wiki/Automatic_number_plate_recognition). However, most previous works focused on testing algorithms or building systems for automobiles and other vehicles [9]. As aforementioned, in this study we propose a solution for building an Intelligent Parking Support System to support the parking of motorcycles. On the market, there are also other systems supporting motorcycle parking (e.g., http://www.aiojsc.com and http://www.phuongthinhphat.com), which are installed at supermarkets, hospitals, etc. However, they work via surveillance cameras and are tracked manually by security guards. In contrast, the proposed IPSS uses three recognition techniques: automatic license plate recognition, barcode recognition, and semi-automatic recognition via surveillance cameras.

3. HAAR-LIKE FEATURES AND CASCADE CLASSIFICATION
We briefly summarize the three main concepts that have been used for recognition and classification.

3.1 Haar-like features
Haar-like features [13, 14] are rectangles divided into black and white areas, as presented in Figure 2.

Figure 2: Haar-like features (Source: opencv.org)
Figure 3: Integral image calculation (left) and D's intensity area (right)

The value of a Haar-like feature is determined by the difference between the pixel intensities in the black and white areas:

f(x) = \sum_{\mathrm{BlackArea}} \mathrm{Intensities} - \sum_{\mathrm{WhiteArea}} \mathrm{Intensities}

To compute the values of the Haar-like features, one has to compute sums of pixels over the image. However, computing the Haar-like features at all positions of the image is expensive and thus cannot be used for real-time applications. Therefore, [14] proposed the Integral Image (as presented on the left of Figure 3 for the point P(x, y)), which speeds up the computation. After computing the Integral Image, one can calculate the intensity of any area. For example, D's intensity on the right side of Figure 3 can be calculated as follows:

D = (A + B + C + D) - (A + B) - (A + C) + A

where (A+B+C+D) is the value of the Integral Image at point P4; A+B is the value at P2; A+C at P3; and A at P1. This is equivalent to

D = \mathrm{ii}(x_4, y_4) - \mathrm{ii}(x_2, y_2) - \mathrm{ii}(x_3, y_3) + \mathrm{ii}(x_1, y_1)

where \mathrm{ii}(\cdot) denotes the Integral Image value at the corresponding point. After computation, the Haar-like feature values are used as the input for the classifiers in the cascade classification model.
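To make the integral-image shortcut concrete, here is a minimal Python/OpenCV sketch (not part of the original system, which uses EmguCV); the synthetic image and the rectangle coordinates are placeholders.

```python
# A minimal sketch of the integral-image trick behind Haar-like features.
# The grayscale "frame" below is synthetic; in the IPSS it would come from a camera.
import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in image

# cv2.integral returns a (h+1) x (w+1) summed-area table ii, where ii[y, x]
# is the sum of all pixels above and to the left of (x, y).
ii = cv2.integral(frame)

def rect_sum(x, y, w, h):
    """Sum of intensities inside rectangle (x, y, w, h) in O(1):
    ii(P4) - ii(P2) - ii(P3) + ii(P1), exactly as in Figure 3."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# A two-rectangle Haar-like feature: black (left half) minus white (right half).
x, y, w, h = 100, 120, 24, 12
feature_value = rect_sum(x, y, w // 2, h) - rect_sum(x + w // 2, y, w // 2, h)
print(feature_value)
```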
3.2 Cascade classifier
In the cascade classifier model [13, 8], Ci is the classifier of stage i (e.g., a Decision Tree, Naive Bayes, etc.), as presented in Figure 4. For an input image, if it does not look like a target object (e.g., a license plate or letter image), as determined from its features, it leaves the model; otherwise it has to pass all of the internal classifiers before being accepted as the target object [14]. The classifiers in later stages are trained on the objects that are misclassified by the classifiers in earlier stages.

Figure 4: Cascade classifier

3.3 Support Vector Machines (SVM)
SVM [1] has been successfully applied to prediction/classification in many areas since it is robust and rests on a strong mathematical model. Given a data set D consisting of n examples (x_i, y_i), where x_i \in X are the input features and y_i is the target class; for binary classification, y_i \in Y = \{-1, +1\}. SVM predicts a new example x using the function

f(x) = \operatorname{sign}\left( \sum_{i=1}^{n} \alpha_i y_i k(x, x_i) + b \right)    (1)

where k(x, x_i) is a kernel function, b is a bias, and \alpha_i is determined by solving the Lagrangian optimization problem

L_p = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i - \sum_{i=1}^{n} \alpha_i \{ y_i (x_i \cdot w + b) - 1 + \xi_i \} - \sum_{i=1}^{n} \mu_i \xi_i    (2)

where \xi_i is a slack variable, \mu_i is a Lagrange multiplier, and C is a user-specified parameter representing the penalty of misclassification. The above concepts are used to build the IPSS as described in the following.

4. BUILDING THE IPSS
The "intelligence" of the IPSS lies in its recognition and classification stages. In this section, we present the process for license plate recognition and classification, followed by the models used in system analysis and design.

4.1 Motorcycle license plate recognition and classification
The process for motorcycle license plate recognition and classification is proposed as in Figure 5. In this process, when the IPSS receives images from the cameras, it locates and extracts the license plate area. From this area, the IPSS applies the same method to locate and extract the images of the letters. These letter images are then converted to binary values so that they can be used by the SVM model. Finally, the SVM model classifies these binary values into letters (there is a maximum of 36 classes, i.e., [0..9, A..Z]).

Figure 5: License plate recognition and classification process

4.1.1 License plate area recognition
Figure 6 describes the license plate area recognition and extraction: from the image captured by the camera, the system recognizes and locates the area of the license plate and then extracts that area. Technically, in this stage we use a cascade of Boosting classifiers (i.e., each classifier in Figure 4 is a Boosting model [2]) with Haar-like features [14] to recognize the license plate area. Fortunately, these algorithms are available in the EmguCV library (www.emgu.com), so we do not need to re-implement them, but we should know how to use them as well as how to pre-process the inputs and post-process the results to get the license plate area. Each step is described in detail in the following.
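Before detailing the training steps, the following is a hedged sketch of how such a trained cascade is applied at run time to locate the plate area. It is written in Python with OpenCV rather than the EmguCV (.NET) API actually used by the IPSS, and it assumes a cascade file ("plate.xml", produced by Step 5 below) in a format cv2.CascadeClassifier can load, plus a captured frame on disk; the file names and detection parameters are illustrative.

```python
# A hedged run-time sketch of Section 4.1.1 (Python/OpenCV; the IPSS itself uses
# the equivalent EmguCV calls). "plate.xml" and "capture.jpg" are placeholders.
import cv2

plate_cascade = cv2.CascadeClassifier("plate.xml")   # cascade trained in Step 5 below
frame = cv2.imread("capture.jpg")                    # frame from the rear camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Slide the cascade over the image at several scales; each returned box is a
# candidate license plate area (x, y, width, height).
plates = plate_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

for i, (x, y, w, h) in enumerate(plates):
    plate_roi = frame[y:y + h, x:x + w]              # extracted license plate area
    cv2.imwrite(f"plate_roi_{i}.jpg", plate_roi)
```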
Figure 6: License plate area recognition and extraction

Step 1: Preparing a set of images containing the license plate for training. In this work, we use 2,250 images, including 750 license plate images (called positive images) and 1,500 background images that do not contain a license plate (called negative images). These two sets of images should have the same size (e.g., 640 x 480 pixels in this work), and they should be rotated by several angles so that they simulate real situations in practice as closely as possible.

Step 2: Determining the coordinates of the license plate area and saving them to a file for later use, e.g., "location.txt". The contents of this file look like the following:
<path to image 1> x11 y11 w11 h11 x12 y12 w12 h12 ...
<path to image 2> x21 y21 w21 h21
...
where xij and yij are the coordinates of the j-th license plate in image i, and wij and hij are its width and height, respectively, as described in Figure 7. For example (the number after the file name is the number of license plates in that image):
D:/LicensePlate/1.jpg 1 132 112 303 216
D:/LicensePlate/2.jpg 1 164 122 288 209
...

Figure 7: Coordinates of the license plate

It is important to note that for this stage we have developed a tool for loading the images and clicking-and-dragging on them to determine the coordinates (which are then saved to the file location.txt as mentioned above).

Step 3: Creating samples for training the model (saving them as vectors for the next steps). Here, we use opencv_createsamples in the EmguCV library, as follows:
opencv_createsamples -vec vec.vec -info D:/licensePlate/location.txt -num 750 -w 40 -h 30
where -vec: output file name containing the positive samples for training; -info: description file of the marked-up image collection; -num: number of positive samples to generate; -w: width (in pixels) of the output samples; and -h: height (in pixels) of the output samples.

Step 4: Creating an index file (e.g., "bg.txt") listing the background images, which do not contain the license plate. These are the negative images and have the same size as the positive images. For example:
path/imageName
path/imageName
...

Step 5: Training the model. Here, we use opencv_haartraining (in the EmguCV library) to train the model. This model is the cascade of Boosting classifiers with Haar-like features [14], as aforementioned. After training, we save the model to a file, e.g., plate.xml. For example:
opencv_haartraining -data D:/licensePlate -vec vec.vec -bg D:/licensePlate/bg.txt -numpos 750 -numneg 1500 -nstages 20 -mem 2000 -w 40 -h 30 -nonsym -minhitrate 0.995 -maxfalsealarm 0.5
where -data D:/licensePlate: where the trained classifier should be stored; -vec vec.vec: path to the vector file from Step 3; -bg D:/licensePlate/bg.txt: background description file from Step 4; -numpos: number of positive samples; -numneg: number of negative samples; -nstages: number of cascade stages to be trained; -w and -h: sample width and height; -minhitrate: minimal desired hit rate for each stage of the classifier; -maxfalsealarm: maximal desired false alarm rate for each stage of the classifier; -nonsym: the objects are non-symmetric.

4.1.2 Letter area recognition and extraction
Similar to the license plate area recognition stage in Section 4.1.1, in this stage the system recognizes and extracts each letter image, as presented in Figure 8. Please note that the results of this stage are still images (of letters).

Figure 8: Letter area recognition and extraction

To train the model for this stage, we used 5,200 color images, including 1,500 images of letters and 3,700 background images (without letters). The training process is similar to the license plate recognition stage presented in Section 4.1.1.

Figure 9: Images of letters after extraction

After extracting the images of letters, the system resizes them to a smaller size; the new size of each image is 20x48 pixels, as introduced in related work and as presented in Figure 9. For each letter image, we then convert it to binary values so that it can be used by the SVM classifier.
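To make the letter extraction step concrete, here is a similar hedged sketch that applies a second cascade inside the extracted plate area and resizes each detection to the fixed 20x48 size. The cascade file name "letters.xml", the left-to-right sorting, and the detection parameters are illustrative assumptions, not taken from the paper.

```python
# A hedged sketch of Section 4.1.2: detect letter regions inside the plate area
# and resize them to 20x48 pixels. "letters.xml" is a hypothetical cascade file.
import cv2

letter_cascade = cv2.CascadeClassifier("letters.xml")
plate_roi = cv2.imread("plate_roi_0.jpg", cv2.IMREAD_GRAYSCALE)  # from the previous sketch

letters = letter_cascade.detectMultiScale(plate_roi, scaleFactor=1.1, minNeighbors=3)

# Sort detections left to right so the plate string keeps its reading order,
# then normalize every letter image to the 20x48 size expected by the classifier.
letter_images = [
    cv2.resize(plate_roi[y:y + h, x:x + w], (20, 48))   # (width, height)
    for (x, y, w, h) in sorted(letters, key=lambda box: box[0])
]
```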
4.1.3 Convert images of letters to binary values

Figure 10: Converting an image of a letter to binary values

In this step, we convert the images of letters (20x48 pixels) to binary values, as presented in Figure 10. For this, we propose a simple but effective solution: for each pixel, if its intensity is greater than a threshold, it is converted to 1, otherwise to 0 (in this work, the threshold is the average intensity of that image). However, in practice, the letter image may not be in an ideal condition (e.g., it may not stand perfectly upright at 90 degrees); it may lean to the left or to the right. Thus, each image should be rotated by the possible angles it may have. After testing, we selected rotation angles from -10 to +10 degrees, as shown in Figure 11.

Figure 11: Rotation angles for the letter '1'

After converting the image to binary values, we transform the result into the input format of the classifier. In this case, we use the SVM implementation in the LibSVM library (www.csie.ntu.edu.tw/~cjlin/libsvm/), whose format is the following:
<label> <index>:<value> <index>:<value> ...
In this format, the first column is the target class (e.g., the letter 3), followed by the positions which have non-zero values. For example, in Figure 10, the image of size 20x48 pixels has 960 values, and the letter '3' is represented as
3 7:1 8:1 9:1 ... 951:1 952:1 953:1
We repeat this process for all of the letter images presented in Figure 9, and finally we obtain the input data set for the SVM classifier.

4.1.4 Training the SVM model
We use the data set produced in the previous step to train the SVM model. For now, we use the default hyper-parameter values in LibSVM; however, we could perform a hyper-parameter search [11] to obtain a better model in the future.

4.1.5 New motorcycle license plate recognition and classification
To recognize and classify a new motorcycle, we repeat the same procedure as presented in Figure 5: the system recognizes and extracts the license plate area, recognizes and extracts the images of the letters, converts these images to binary values, and finally passes these values to the SVM model for classification.
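As a concrete illustration of the conversion and classification just described (Sections 4.1.3-4.1.5), the sketch below binarizes a letter image at its mean intensity, builds the small rotation candidates, writes a LibSVM-style sparse line, and classifies with scikit-learn's SVC, which wraps the same LibSVM library used in the paper. The training data here is random placeholder data, and the 1-based sparse indices, rotation step, and function names are assumptions for illustration only.

```python
# A hedged sketch of Sections 4.1.3-4.1.5: binarize a 20x48 letter image, try small
# rotations, emit a LibSVM-style line, and classify with an SVM (default parameters).
import cv2
import numpy as np
from sklearn.svm import SVC   # SVC wraps LibSVM; used here as a stand-in

def binarize(letter_img):
    """960 binary values: 1 where a pixel is brighter than the image mean."""
    return (letter_img > letter_img.mean()).astype(np.uint8).flatten()

def rotate(letter_img, angle):
    """Rotate around the centre; used for the -10..+10 degree candidates."""
    h, w = letter_img.shape
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(letter_img, m, (w, h))

def to_libsvm_line(label, bits):
    """'3 7:1 8:1 ... 953:1' style line (1-based indices of the non-zero values)."""
    return f"{label} " + " ".join(f"{i + 1}:1" for i, v in enumerate(bits) if v)

# --- random placeholder data instead of the paper's 2,603 real training records ---
rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 960))
y_train = rng.choice(list("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"), size=200)

model = SVC()                       # default hyper-parameters, as in Section 4.1.4
model.fit(X_train, y_train)

letter_img = rng.integers(0, 256, size=(48, 20)).astype(np.uint8)   # fake 20x48 letter
rotated_variants = [rotate(letter_img, a) for a in (-10, -5, 0, 5, 10)]  # candidates
print(to_libsvm_line(y_train[0], X_train[0]))      # one LibSVM-formatted record
print(model.predict([binarize(letter_img)]))       # one of the 36 classes [0..9, A..Z]
```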
4.2 IPSS System Analysis and Design
After training the recognition and classification models (the intelligent part), we analyze, design, and implement the whole IPSS system so that the intelligent part can be plugged into it. After examining the user requirements, the IPSS is expected to have the following main functions:
• Motorcycle park in/out management (storing the images of the license plate, the driver's face, the date-time, etc.)
• Parking-card management (creating cards with barcodes, printing cards, tracking cards)
• Employee management (schedules, timekeeping)
• Account management (groups, roles, privileges)
• Searching (searching for a license plate, for a period of in/out time, etc.)
• Statistics (number of motorcycles in/out per day/week/month/year, revenue, etc.)

4.2.1 Overall model of the IPSS
Figure 12: Overall model of the IPSS

The IPSS's overall model is presented in Figure 12, and the main activities are described in the following. When a driver arrives, he/she should stop at the allowed "yellow line" (a stop position in Figure 15) so that the cameras can capture the images. At that time, the security guard scans the barcode card to input the id-number on the card (the driver has to keep this card for check-out), and the front camera captures the driver's face (including his/her shape, clothes, etc.; this image helps the security guard identify the driver manually later), while the rear camera captures the image of the license plate. These two images are stored in the database. Meanwhile, the recognition and classification algorithm is activated to extract and classify the letters on the license plate image. These letters are also stored in the database for later retrieval.

When checking out, the security guard scans the barcode card obtained from the driver to get the id-number. The system loads all of the information in the database that matches this id-number, e.g., the driver's face, license plate images, license plate letters, and timestamp. At that point, the security guard can manually compare the in/out images (loaded from the database and from the current cameras), while the IPSS automatically recognizes the license plate and compares it with the information retrieved from the database. If the information does not match, the system raises a beep as well as a warning message. As described, three pieces of information must match before the driver can check out: first, the barcode number must match (automatically); second, the license plate number must match (automatically); finally, the driver's face and the license plate images must match (checked manually on the screen by the security guard); please refer to Figure 19 for details. The IPSS is a semi-automatic system, which means that the security guard is still needed to keep an eye on the information; thus, fake cards are not a concern. With this approach, the system strongly supports the security guard, so he/she can check all of the information easily. For bicycles, the process is the same but without the plate number.
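The check-out decision described above boils down to two automatic comparisons plus one manual check; a minimal sketch of that logic is given below. The record fields and function names are hypothetical, not taken from the IPSS source code.

```python
# A hedged sketch of the check-out decision above: the scanned barcode id and the
# plate recognized at the exit must match the stored check-in record automatically;
# faces and images are still compared manually by the guard. Names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParkingRecord:            # one record stored at check-in
    card_id: str
    plate_text: str
    face_image_path: str
    plate_image_path: str
    checkin_time: str

def verify_checkout(record: Optional[ParkingRecord],
                    scanned_card_id: str,
                    recognized_plate: str) -> bool:
    """Return True when both automatic checks pass; otherwise the IPSS would
    raise a beep and a warning so the guard inspects the images on screen."""
    if record is None or record.card_id != scanned_card_id:
        return False            # unknown card or mismatched barcode number
    if record.plate_text != recognized_plate:
        return False            # plate recognized at the gate differs from check-in
    return True                 # the guard still confirms the face/plate images
```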
4.2.2 Use Case and Class diagrams
We summarize the main use cases in Figure 13. The two main actors are the Customer and the Security guard, and the two main use cases are the check-in and check-out activities. The class diagram is presented in Figure 14. We have also designed other diagrams, such as sequence diagrams and activity diagrams; however, they are not shown here because of the paper's length limitation.

We also found a solution to optimize the database size. Each time a motorcycle checks in/out, four images are captured and stored. Thus, instead of storing the images directly in the database, we only store the paths of the images (the images themselves are stored on disk). This reduces the database size significantly while the retrieval speed remains acceptable.
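The path-instead-of-blob storage choice can be sketched as follows; sqlite3 stands in for the MS SQL Server database actually used, and the table layout, file naming, and function name are illustrative assumptions.

```python
# A minimal sketch of the storage choice above: images go to disk and only their
# paths are stored in the database. sqlite3 is a stand-in for MS SQL Server.
import os
import sqlite3
from datetime import datetime

conn = sqlite3.connect("ipss.db")
conn.execute("""CREATE TABLE IF NOT EXISTS checkin (
                  card_id TEXT, plate_text TEXT,
                  face_image_path TEXT, plate_image_path TEXT,
                  checkin_time TEXT)""")

def store_checkin(card_id, plate_text, face_img_bytes, plate_img_bytes, img_dir="images"):
    os.makedirs(img_dir, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    face_path = os.path.join(img_dir, f"{card_id}_{stamp}_face.jpg")
    plate_path = os.path.join(img_dir, f"{card_id}_{stamp}_plate.jpg")
    with open(face_path, "wb") as f:          # the images themselves live on disk
        f.write(face_img_bytes)
    with open(plate_path, "wb") as f:
        f.write(plate_img_bytes)
    conn.execute("INSERT INTO checkin VALUES (?, ?, ?, ?, ?)",   # only paths in the DB
                 (card_id, plate_text, face_path, plate_path, stamp))
    conn.commit()
```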
4.2.3 Interface Design
Figure 15 describes an interface with eight frames for the check-in/check-out activities. Four frames are used for check-in (1-4) and the rest (5-8) for check-out. For the check-in part, frames 1 and 2 display the live images from the cameras, and the other two (3-4) display the images after storing/retrieving them from the database. Similarly, for the check-out part, frames 5 and 6 display the live images from the cameras, and frames 7-8 display the images retrieved from the database. Thus, when the driver checks out, the images in frames 5 and 7 should match the driver's face, and the images in frames 6 and 8 should match the license plate. Please note that this interface is used for check-in/check-out at the same place. If the system is installed with one gate for check-in and another gate for check-out, this interface is reduced to 4 frames.
Figure 13: Use case diagram

Figure 14: Class diagram

4.2.4 Deployment model
The IPSS is built using the .NET framework, the MS SQL Server database management system, EmguCV (www.emgu.com), and LibSVM (www.csie.ntu.edu.tw/~cjlin/libsvm). To run the IPSS, the following devices are required:
• 01 desktop computer with a wide screen (at least 19 inches)
• 01 barcode scanner
• 04 cameras (or high-resolution webcams)
• other protection devices (e.g., protection lanes, stop positions, etc.)
The deployment model is presented in Figure 16. When a driver arrives, he/she should stop at the "yellow line" (stop position) so that the cameras can capture the images correctly.
Figure 15: Interface design

Figure 16: Deployment model

5. RESULTS

5.1 Accuracy of the models
To evaluate the accuracy of the models, we use a 3-fold cross-validation scheme for all of the stages. In the license plate area recognition stage, the model achieves 99.0% accuracy when we use 750 images for training and 243 images for testing. In the letter area recognition stage, the model achieves 95.88% accuracy when using 11,866 and 4,755 images for training and testing, respectively. In the classification stage, we train the SVM model on 2,603 records and test it on 1,550 records, reaching an accuracy of 98.99%. These results show that the proposed solution, especially the classification stage, is acceptable.

Moreover, after building the system, we also checked the accuracy of the whole system across the three stages above. In this overall experiment, we used the models to test 215 images of new motorcycles. The results show that 99% of the license plates are correctly recognized. Among these, 190 images (out of 215) are correctly recognized and classified (≈ 88.37%) at the first capture by the camera, and 25 license plates (out of 215, equivalent to 11.62%) are correctly recognized but have one misclassified letter (e.g., the system confuses E with F, or 1 with 7). To overcome this, we designed a shortcut key (e.g., →) that the user can press to reactivate the recognition and classification module. Using this function, the 25 misclassified images at the first capture are reduced to 10 at the second capture and to 0 at the third capture. Thus, in practice, when a misclassification happens, the user can press the shortcut to re-classify the letters. Experiments show that each keypress costs only about 1 second and at most three presses are needed, so this solution is feasible.

5.2 Typical snapshots

Figure 17: Screen of check-in/out at different places: check-in part

Figure 18: Screen of check-in/out at different places: check-out part

Figures 17 and 18 introduce the screens for one gate in and one gate out separately. In this deployment model, the driver checks in at one gate and checks out at another gate; these gates are connected via a local area network. Figure 19 presents the main screen for the combined one-gate in/out model, in which the system is installed at one place and the check-in/out process happens at the same gate. This screen is the same as the interface we designed in Figure 15. Figure 20 demonstrates the card management function (with barcode generation). This form supports designing the title, color, etc. on the cards so that they can be used at any deployment place (company, institute, etc.).
Furthermore, the IPSS also provides many other functions for the admin group and the security-guard group.
• For the security-guard group: users can track and manage in/out motorcycles, select cameras, configure the system, search, and compile statistics.
• For the admin group: users have all of the privileges of the security-guard group plus additional functions such as employee management, schedules, time management, and card management.
Figure 19: Screen for in/out at the same place

Figure 20: Card management (this system is designed for deployment in Vietnam, so most of the information, e.g., on the card, is printed in Vietnamese)

Figure 21 shows the search function. The IPSS supports several search criteria, such as searching for a specific license plate or for the motorcycles checked in/out during a given time period. The retrieved results contain detailed information, e.g., the in/out images of the driver's face, the in/out images of the license plate, and the in/out date-times, so that the security guard can easily retrieve the information when necessary (e.g., for security reasons or when a theft happens).

Figure 21: Search for motorcycle in/out information by several criteria

6. CONCLUSIONS
In this work, we have proposed an approach for building an Intelligent Parking Support System (IPSS). The proposed approach uses three recognition techniques to strengthen the security of the parking system: automatic license plate recognition, barcode recognition, and semi-automatic recognition via surveillance cameras. We have trained the recognition models and then integrated them into the system. Moreover, the IPSS also supports the usual functions of other information systems, e.g., system configuration, searching, administration tools, and statistics. Experimental results show that the models' accuracy ranges from 95.88% to 99%, so the proposed IPSS is reliable; it is a full-fledged system and can be applied in practice. In the future, we will further improve the models' accuracy, especially for the letter recognition stage. We will also continue to develop the system so that it becomes fully automatic, e.g., automatically opening the in/out barriers after a successful check.
Acknowledgments
This work was funded by a research project at Can Tho University under grant number T2014-08 [12] and by the Intelligent Data Processing Lab - CICT. We would like to thank Nguyen Van Dong, Vo Hung Vi, Mai Quoc Truong, and Nguyen Minh Hong for their experimental support, and also thank Dr. Pham Nguyen Khang for his fruitful comments.
7. REFERENCES
[1] C. Cortes and V. Vapnik. Support-vector networks. Mach. Learn., 20(3):273–297, Sept. 1995.
[2] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In P. Vitányi, editor, Computational Learning Theory, volume 904 of Lecture Notes in Computer Science, pages 23–37. Springer Berlin Heidelberg, 1995.
[3] X. He, H. Zhang, W. Jia, Q. Wu, and T. Hintz. Combining global and local features for detection of license plates in a video. In Conference on Image and Vision Computing New Zealand 2007, pages 288–293, Dec 2007.
[4] J.-W. Hsieh, S.-H. Yu, and Y.-S. Chen. Morphology-based license plate detection from complex scenes. In Pattern Recognition, 2002. Proceedings. 16th International Conference on, volume 3, pages 176–179, 2002.
[5] V. Kamat and S. Ganesan. An efficient implementation of the Hough transform for detecting vehicle license plates using DSPs. In Proceedings of the Real-Time Technology and Applications Symposium, RTAS '95, pages 58–, Washington, DC, USA, 1995. IEEE Computer Society.
[6] K. Kim, K. Jung, and J. Kim. Color texture-based object detection: An application to license plate localization. In S.-W. Lee and A. Verri, editors, Pattern Recognition with Support Vector Machines, volume 2388 of Lecture Notes in Computer Science, pages 293–309. Springer Berlin Heidelberg, 2002.
[7] Y. Lee, T. Song, B. Ku, S. Jeon, D. Han, and H. Ko. License plate detection using local structure patterns. In Advanced Video and Signal Based Surveillance (AVSS), 2010 Seventh IEEE International Conference on, pages 574–579, Aug 2010.
[8] R. Lienhart and J. Maydt. An extended set of Haar-like features for rapid object detection. In Proceedings of the International Conference on Image Processing, pages 900–903, 2002.
[9] V. D. Mai, D. Miao, and R. Wang. Building a license plate recognition system for Vietnam tollbooth. In Proceedings of the Third Symposium on Information and Communication Technology, SoICT '12, pages 107–114, New York, NY, USA, 2012. ACM.
[10] M. M. Rashid, A. Musa, M. A. Rahman, N. Farahana, and A. Farhana. Automatic parking management system and parking fee collection based on number plate recognition. International Journal of Machine Learning and Computing, 2(2):93–98, May 2012.
[11] N. Thai-Nghe, Z. Gantner, and L. Schmidt-Thieme. Cost-sensitive learning methods for imbalanced data. In The 2010 International Joint Conference on Neural Networks (IJCNN), pages 1–8, July 2010.
[12] N. Thai-Nghe, H.-V. Vo, and V.-D. Nguyen. Intelligent parking support system (to appear). Journal of Science - Can Tho University, 2014.
[13] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages I-511–I-518, 2001.
[14] P. Viola and M. J. Jones. Robust real-time face detection. Int. J. Comput. Vision, 57(2):137–154, May 2004.
[15] Y. Zhao, J. Gu, C. Liu, S. Han, Y. Gao, and Q. Hu. License plate location based on Haar-like cascade classifiers and edges. In Intelligent Systems (GCIS), 2010 Second WRI Global Congress on, volume 3, pages 102–105, Dec 2010.
[16] K. Zheng, Y. Zhao, J. Gu, and Q. Hu. License plate detection using Haar-like features and histogram of oriented gradients. In Industrial Electronics (ISIE), 2012 IEEE International Symposium on, pages 1502–1505, May 2012.