PROCEEDINGS OF SPIE SPIEDigitalLibrary.org/conference-proceedings-of-spie
A localization algorithm of adaptively determining the ROI of the reference circle in image
Zeen Xu, Jun Zhang, Daimeng Zhang, Xiaomao Liu, Jinwen Tian
Zeen Xu, Jun Zhang, Daimeng Zhang, Xiaomao Liu, Jinwen Tian, "A localization algorithm of adaptively determining the ROI of the reference circle in image," Proc. SPIE 10611, MIPPR 2017: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications, 1061110 (8 March 2018); doi: 10.1117/12.2283126
Event: Tenth International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR2017), 2017, Xiangyang, China
A localization algorithm of adaptively determining the ROI of the reference circle in image

Zeen Xu^a, Jun Zhang*^a, Daimeng Zhang^b, Xiaomao Liu^c, Jinwen Tian^a

^a State Key Laboratory for Multispectral Information Processing Technologies, School of Automation, Huazhong University of Science and Technology (HUST), Wuhan, Hubei 430074, China
^b Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742-3285, USA
^c School of Mathematics and Statistics, HUST, Wuhan, Hubei 430074, China

*Corresponding author, email:
[email protected]

ABSTRACT

Aiming at the problem of accurately positioning detection probes underwater, this paper proposes a method based on computer vision that can effectively solve this problem. The idea of the method is as follows. First, because the shape of a heat tube is similar to a circle in the image, we can find a circle whose physical location is well known in the image; we set this circle as the reference circle. Second, we calculate the pixel offset between the reference circle and the probes in the picture, and adjust the steering gear through the offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus know the precise location of the probes underwater. However, how to choose the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that can adaptively determine the area of the reference circle. In this area there is only one circle, and that circle is the reference circle. The test results show that the accuracy of extracting the reference circle from the whole picture without using the ROI (region of interest) of the reference circle is only 58.76%, while that of the proposed algorithm is 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.

Key words: heat tube detection; probes; reference circle; adaptive ROI; image positioning
MIPPR 2017: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications, edited by Nong Sang, Jie Ma, Zhong Chen, Proc. of SPIE Vol. 10611, 1061110 · © 2018 SPIE · CCC code: 0277-786X/18/$18 · doi: 10.1117/12.2283126

1. INTRODUCTION

In nuclear industry applications, it is difficult to accurately locate the position of the detection probes because of mechanical error. To accurately position the detection probes, we use a method based on computer vision. Because the shape of a heat tube is similar to a circle in the image, we can find a circle whose physical location is well known in the image, and we set this circle as the reference circle. We then calculate the pixel offset between the reference circle and the probes in the picture, and adjust the steering gear through the offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes, and thus know the precise location of the probes. However, how to choose the reference circle in the image is a difficult problem. As we know, there are two families of circle extraction methods: non-Hough-transform methods and Hough-transform methods[1,2]. The non-Hough-transform methods include shape analysis, the loop integro-differential method[3], and so on. At present, the Hough transform is the mainstream method of circle detection, and most approaches use a step-by-step Hough transform: first find the center of the circle, then calculate the radius[4,5]. In recent years, many scholars have extended and improved the Hough transform for specific situations. Chiu[6] gives a new voting method for the standard Hough transform in which each pixel corresponds to only one candidate circle parameter; as a result, the computational cost is greatly reduced. Li[7] uses a constrained random Hough transform, making reasonable matches and filters on the point pairs used to calculate the parameters, which solves the invalid-sampling problem of the random transform. Ramirez[8] uses a genetic algorithm to detect incomplete circles, which can reach sub-pixel accuracy. Although the improved Hough transforms work well for circle extraction under some conditions, they are not well suited to the application scenario of this article. The difficulty of this paper is how to find the correct reference circle in the image. In this paper, we propose an algorithm that can adaptively determine the area of the reference circle; there will be only one circle in this area, and that circle is the reference circle. The following sections discuss how to adaptively determine the area of the reference circle.
2. CIRCLE EXTRACTION ALGORITHM FOR ADAPTIVELY DETERMINING THE REFERENCE CIRCLE REGION

First, we show the original figure captured by the camera in Fig.1 below. From the original figure, we know that there are many circles in the image. To decrease the number of candidate circles, we set a region as the reference circle ROI region. In this area there is only one circle, which we name the reference circle[9].
Fig.1 a) Original figure from the camera. b) Reference circle ROI region
In theory, the Hough gradient method can then extract the only circular structure in the region. So what remains is how to calculate the ROI area of the reference circle adaptively.

2.1 Adaptive computation of the ROI region of the reference circle

We now describe in detail how to determine the column numbers of the two red lines in Fig.1(b). Suppose the size of the original figure is M*N. The image is first transformed into a grayscale image, which can be represented by formula (2.1)[10]:

$$ f(x,y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & \ddots & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix} \quad (2.1) $$
By observing the gray image f(x, y), it is obvious that the gray value of the white probe is much larger than that of the other regions in the image. The accumulated gray difference between adjacent columns at the edges of the white probes is therefore larger than elsewhere, so we first calculate the summed gray value of each column[11], as shown in Eq. (2.2):

$$ s(j) = \sum_{i=0}^{M-1} f(i,j), \quad j = 0, 1, \ldots, N-1 \quad (2.2) $$

Thus the cumulative gray histogram of each column is obtained[12]; then we subtract the cumulative gray values of adjacent columns, as shown in Eq. (2.3):

$$ \Delta s(j) = s(j+1) - s(j), \quad j = 0, 1, \ldots, N-2 \quad (2.3) $$
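The per-column accumulation in Eq. (2.2) and the adjacent-column difference in Eq. (2.3) are straightforward to compute. The following Python/NumPy sketch (variable names are ours, not from the paper) illustrates both on a toy image with one bright column:

```python
import numpy as np

def column_profile(gray):
    """Return per-column gray sums s(j) and adjacent differences
    delta_s(j) = s(j+1) - s(j) for an M x N grayscale image."""
    s = gray.astype(np.int64).sum(axis=0)  # s(j), j = 0..N-1  (Eq. 2.2)
    delta_s = s[1:] - s[:-1]               # delta_s(j), j = 0..N-2  (Eq. 2.3)
    return s, delta_s

# A toy 3 x 5 "image" with a bright (white) column at j = 2:
img = np.array([[0, 0, 255, 0, 0],
                [0, 0, 255, 0, 0],
                [0, 0, 255, 0, 0]], dtype=np.uint8)
s, ds = column_profile(img)
print(s.tolist())   # [0, 0, 765, 0, 0]
print(ds.tolist())  # [0, 765, -765, 0]
```

Note the cast to int64 before summing: summing uint8 columns directly in smaller integer types could overflow for tall images.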
If a white probe appears completely in the image, it produces both a positive maximum and a negative minimum of Δs(j). If only part of a probe appears in the image, it produces either a positive maximum or a negative minimum. As there are two probes in the image, we get three different situations, as shown in Figure 2.
Fig.2 Three situations of the probes in the image. a) Part of the left probe and all of the right probe in the figure. b) All of the left probe and part of the right probe in the figure. c) Part of the left probe and part of the right probe in the image.
Obviously, the locations of the two probes in the image differ between the situations, and the reference circle ROI region is necessarily different for each. For every original image, we must therefore determine which situation it belongs to. Based on the accumulated histogram, we propose a method to determine the situation. The specific algorithm is as follows:

First, get the four largest |Δs(j)| among all the Δs(j). From the above, the accumulated gray difference between adjacent columns at the edges of the white probes is certainly larger than in other regions, so the edges of the probes belong to the four largest |Δs(j)|.

Second, compute the mean of the four largest |Δs(j)|, then compare each of them with the mean. If two |Δs(j)| are larger than the mean and two are smaller, the image is in situation (c) of Fig.2. If three |Δs(j)| are larger than the mean and one is smaller, it is situation (a) or situation (b) of Fig.2.

Third, use the sign of Δs(j) to distinguish between (a) and (b) in Fig.2. From the previous step we have the three largest |Δs(j)|. If two of the three related Δs(j) are positive and the other one is negative, it is situation (b); if two of the three are negative and the other one is positive, it is situation (a).

After distinguishing situations (a), (b) and (c) of Figure 2, we discuss how to calculate the reference circle ROI region. Since we already know which situation the image belongs to, we can get the column numbers j of the four |Δs(j)|. In situation (a), the smallest and second smallest j of the four |Δs(j)| give the two blue columns in Fig.1; then, based on the blue columns, we add redundant pixels to obtain the red columns, that is, the reference circle ROI region for situation (a). The reference circle ROI regions for situations (b) and (c) are acquired in the same way.

2.2 Reference circle ROI circle extraction

Based on OpenCV (Open Source Computer Vision Library), we use the method described in reference [13] to extract the reference circle in the ROI. In the circle extraction process, three OpenCV functions are used: cvSmooth(), cvEqualizeHist() and cvHoughCircles()[14].
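The steps above can be sketched in Python with NumPy and OpenCV's modern cv2 API (which replaces the legacy cvSmooth/cvEqualizeHist/cvHoughCircles calls named in the text). This is a sketch, not the authors' code: the margin value, the Hough parameters, and the choice of column pair for situations (b) and (c) are our assumptions, since the paper only spells out the column pair for situation (a):

```python
import numpy as np

def classify_and_roi(gray, margin=20):
    """Classify the probe situation (a/b/c in Fig. 2) from the four
    largest adjacent-column differences, then return ROI column bounds."""
    s = gray.astype(np.int64).sum(axis=0)   # s(j), Eq. (2.2)
    ds = s[1:] - s[:-1]                     # delta_s(j), Eq. (2.3)
    top4 = np.argsort(np.abs(ds))[-4:]      # columns of the 4 largest |delta_s|
    mags = np.abs(ds[top4])
    if (mags > mags.mean()).sum() == 2:     # two large + two small edges
        situation = 'c'
    else:                                   # three large edges: (a) or (b)
        top3 = top4[np.argsort(mags)[-3:]]
        situation = 'b' if (ds[top3] > 0).sum() == 2 else 'a'
    cols = np.sort(top4)
    if situation == 'a':                    # paper: two smallest column indices
        lo, hi = cols[0], cols[1]
    elif situation == 'b':                  # assumption: two largest indices
        lo, hi = cols[2], cols[3]
    else:                                   # assumption: the middle pair
        lo, hi = cols[1], cols[2]
    # Add redundant pixels (blue columns -> red columns in Fig. 1):
    return situation, max(lo - margin, 0), min(hi + margin, gray.shape[1] - 1)

def find_reference_circle(gray, lo, hi):
    """Hough-gradient circle extraction restricted to the ROI columns."""
    import cv2  # modern replacement for the legacy C API named in the text
    roi = cv2.equalizeHist(cv2.GaussianBlur(gray[:, lo:hi + 1], (9, 9), 2))
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, 1, roi.shape[0],
                               param1=100, param2=30)
    if circles is None:
        return None
    x, y, r = circles[0, 0]
    return x + lo, y, r                     # back to full-image coordinates
```

Because the ROI is built to contain a single circle, minDist can safely be set to the ROI height so cv2.HoughCircles returns at most one dominant candidate.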
3. EXPERIMENTAL RESULTS AND ANALYSIS

3.1 Analysis of extraction results of the reference circle ROI region

We tested 97 frames captured on site, and the results were all correct. The results of 3 typical images are shown in Figure 3.
Fig.3 The experimental results for the region of the target circle (the first column is the local initial area of the original figure from the camera after projection; the second column is the cumulative gray histogram of the first column; the third column is the area of the target circle ROI in the figure; the fourth column is the reference circle ROI)
As can be seen from Figure 3, the experimental results are ideal for situations (a), (b) and (c), which verifies the correctness of the reference circle ROI region algorithm used in the test; its robustness is also very good.

3.2 Analysis of extraction results of the reference circle in the reference circle ROI

We denote extracting the reference circle in the ROI region as method one, and extracting the reference circle in the original figure directly as method two. The reference circle extraction results of the two methods are shown in Table 1.
Tab.1 Comparison of results of extracting the reference circle using the ROI region method (one) and the full-image method (two)

|                                        | Method one | Method two |
|----------------------------------------|------------|------------|
| Total number of frames                 | 97         | 97         |
| Frames of situation (a) in Fig.2       | 23         | 23         |
| Frames of situation (b) in Fig.2       | 34         | 34         |
| Frames of situation (c) in Fig.2       | 40         | 40         |
| Right circles in situation (a)         | 21         | 17         |
| Right circles in situation (b)         | 33         | 19         |
| Right circles in situation (c)         | 39         | 21         |
| False circles in situation (a)         | 2          | 6          |
| False circles in situation (b)         | 1          | 15         |
| False circles in situation (c)         | 1          | 19         |
| Total accuracy                         | 95.88%     | 58.76%     |
Partial results are shown in Figure 4.
Fig.4 Comparison of results between method one and method two. Panels (a) and (b) are the case of Figure 2(a), (c) and (d) the case of Figure 2(b), and (e) and (f) the case of Figure 2(c)
Figure 4 shows the experimental results of method one and method two. As can be seen from Fig.4, if we extract the reference circle in the original figures, (b), (c), (d) and (f) are extracted incorrectly, the extraction in (a) is correct, and the extraction in (e) is not accurate. If we extract the reference circle in the ROI region, (a), (b), (d), (e) and (f) are all correct, and (c) has a slight deviation, but it is within the scope of industrial error. Combined with Table 1, the following conclusion can be drawn: for any of the cases in Figure 2, adaptively finding the reference circle ROI region first and then extracting the reference circle is better than extracting the reference circle directly in the original figure.
4. CONCLUSION

This paper addresses the problem of automatic, accurate positioning of the double probe used in heat pipe detection in industrial applications. A visual method is used to transform the problem into the extraction of a reference circle in the image. To realize rapid and efficient extraction of the reference circle, this paper proposes a method that first finds the ROI region in the original figure and then finds the reference circle in that region. The experimental results show that the method has high accuracy and robustness, and it has high reference value for similar applications. If the reference circle is severely occluded, the current detection method may fail; how to detect the reference circle accurately when a probe occludes it more seriously is a problem that remains to be solved in follow-up work.
5. ACKNOWLEDGEMENT

This work is supported by CASC, CASIC, and Key Laboratory funding (61422080401).
REFERENCES
[1] Wu S, Liu X. Parallelization research of circle detection based on Hough transform [J]. International Journal of Computer Science Issues, 2012, 9(3): 481-486.
[2] Xie L, Chen S, Zhang J, et al. Purifying algorithm for rough matched pairs using Hough transform [J]. Chinese Journal of Image and Graphics, 2015, 20(8): 1017-1025.
[3] Daugman J G. High confidence visual recognition of persons by a test of statistical independence [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15(11): 1148-1161.
[4] Chen H F. The study and application of the basic geometric shapes detection algorithm in the digital image [D]. Hangzhou: Zhejiang University, 2007.
[5] Sun H, Mao Y, Yang N, et al. A real-time and robust multi-circle detection method based on randomized Hough transform [C]. 2012 International Conference on Computer Science and Information Processing (CSIP). IEEE, 2012, pp. 175-180.
[6] Chiu S H, Liaw J J. An effective voting method for circle detection [J]. Pattern Recognition Letters, 2005, 26(2): 121-133.
[7] Li Z Q, Teng H F. The generalized Hough transform: multiple circles fast random extracting [J]. Journal of Computer Aided Design and Graphics, 2006(18): 27-33.
[8] Ayala-Ramirez V, Garcia-Capulin C H, Perez-Garcia A, et al. Circle detection on images using genetic algorithms [J]. Pattern Recognition Letters, 2006, 27(6): 652-657.
[9] Fu S Y. Attitude parameter correction in image projection transformation [D]. Wuhan: Huazhong University of Science and Technology.
[10] Stark J A. Adaptive image contrast enhancement using generalizations of histogram equalization [J]. IEEE Transactions on Image Processing, 2000, 9(5): 889-896.
[11] Baxes G A. Digital image processing: principles and applications [J]. Trends in Food Science & Technology, 2006, 17(7): 387.
[12] Zhang C, Zhang J, Tian J. A line detection algorithm based on direction filter and regional growth [C]. Ninth International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR2015). SPIE, 2015, pp. 98120T1-5.
[13] Liu Y, Zhang J, Tian J. An image localization system based on gradient Hough transform [C]. Ninth International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR2015). SPIE, 2015, pp. 98151F1-5.
[14] Li Z H, Li X G. Infrared small moving target detection and tracking based on OpenCV [J]. Infrared and Laser Engineering, 2013, 42(9): 2561-2565.