Mutual Information-Based Image Template Matching with Small Template Size

Hossein Soleimani
Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
[email protected]

Mohammadali Khosravifard
Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
[email protected]
Abstract— Mutual information plays a crucial role as a similarity measure in applications such as image registration and image template matching. Estimating the joint probability distribution of the underlying image or template is therefore the main problem. The Non-Parametric window (NP) method treats the images as continuous two-dimensional signals and yields an appropriate joint probability distribution. In this paper we employ a triangular distribution with a large support instead of the uniform distribution used in the original NP method. This gives a more precise estimate of the joint probability distribution; as a result, the computed mutual information is more robust and reliable. Experimental results show the superiority of the proposed method in image template matching with small window size.

Keywords— template matching; joint probability distribution; mutual information; non-parametric window method
I. INTRODUCTION
Image template matching is the process of finding the location of a given sub-image in an image. In other words, for a template with a certain window size, we look for the part of the image of the same size that has the maximum similarity to the template (a minimal exhaustive search of this kind is sketched at the end of this section). Image template matching can be used in manufacturing as part of quality control, as a way to navigate a mobile robot, or as a way to detect edges in images [1].

There are many measures, such as mutual information and correlation criteria [2], for determining the similarity between two images or sub-images. Mutual information is the most popular measure because of its robustness to intensity variation and noise, and its high accuracy. It captures both linear and non-linear dependence between two random variables. It also has other applications in multi-modal medical image registration [3], moving object tracking [4], and anywhere the similarity between two images is required.

The most important and critical step in calculating the mutual information of two random variables or two images is finding their joint probability distribution. Since images are actually samples of two-dimensional continuous signals, calculation of their joint probability distribution is not error-free. Moreover, the number of samples affects the resulting estimate. Therefore, selecting a reliable method for estimating the joint probability distribution is of great importance.

The joint histogram, or co-occurrence matrix, was the first solution for calculating the joint probability distribution
function (JPDF). To improve the reliability of JPDF estimation, Parzen windowing [5] and PVE interpolation [6] were also proposed. Parzen windowing improves the estimate by filling empty nodes in the joint histogram (smoothing the histogram). A disadvantage of these methods is that some parameters must be adjusted. They show acceptable performance when the number of available samples is sufficient (i.e., the number of bins is large enough), so that the mutual information function is not discontinuous. Discontinuity of the mutual information function makes it difficult to optimize.

NP windowing is one of the best methods for estimating the joint probability distribution of two random variables (or two images). Since it treats the image as a continuous 2D signal, the number of bins or samples matters less than in other methods. In contrast to the previously mentioned methods, with NP windowing the interpolation or smoothing is performed in the signal domain rather than the probability domain. The NP method was first presented by Kadir and Brady for estimating the PDF of 1D signals [7], and was later extended to estimating the joint probability distribution of 2D signals (images) [8]. It estimates signal statistics by directly calculating the distribution of each piecewise section of the signal for a given interpolation model. One of the most important advantages of this method is that it has no adjustable parameter. Since it considers both the intensity and the location of the pixels, its output joint histogram is smooth and continuous. The price of this quality is its time complexity [8].

In this paper, we extend the basic idea of NP windows and propose Modified Non-Parametric windows (MNP). Experimental results show that the estimated PDF is better than that derived by the NP method. The paper is organized as follows: Section 2 describes the basic concept of mutual information. In Section 3, NP windows and the proposed algorithm are explained. Finally, in Section 4, the performance of these methods is compared.
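To make the search concrete, the following sketch (a generic illustration, not the authors' implementation; the function and parameter names are ours) scans every candidate window of the template's size and returns the location with the largest similarity score. In this paper the similarity argument would be the mutual information estimator discussed in the next section.

import numpy as np

def match_template(image, template, similarity):
    # Exhaustive template matching: slide a window of the template's size
    # over the image and keep the position with the highest similarity.
    h, w = template.shape
    H, W = image.shape
    best_score, best_loc = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = image[r:r + h, c:c + w]
            score = similarity(window, template)
            if score > best_score:
                best_score, best_loc = score, (r, c)
    return best_loc, best_score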
II. MUTUAL INFORMATION
Mutual information is one of the basic concepts in information theory and indicates the dependency between two random variables [10]. This similarity measure was first used in medical image registration by Viola and Wells in 1995 [9].
Two random variables $A$ and $B$ with marginal distributions $p_A(a)$ and $p_B(b)$ and joint probability distribution $p_{A,B}(a,b)$ are independent iff $p_{A,B}(a,b) = p_A(a)\,p_B(b)$. Mutual information $I(A,B)$ is in fact the Kullback-Leibler distance between $p_{A,B}(a,b)$ and $p_A(a)\,p_B(b)$, i.e.,

$$I(A,B) = \sum_{a,b} p_{A,B}(a,b)\,\log\!\frac{p_{A,B}(a,b)}{p_A(a)\,p_B(b)} \qquad (1)$$
Mutual information takes its maximum value when the two images or two discrete random variables are completely dependent (i.e., one of them is a function of the other). In this case the joint probability matrix, or joint histogram, is diagonal. If the images are independent, the mutual information is zero.
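As a concrete illustration of (1), the following sketch (our own illustrative code in plain NumPy; the bin count is an arbitrary choice) estimates the mutual information of two equal-sized patches from their joint histogram. This histogram-based estimate is exactly the quantity that the windowing methods of the next section aim to make more reliable through a better JPDF estimate.

import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    # Joint histogram (co-occurrence matrix) of the two intensity patches.
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()        # joint distribution p_{A,B}(a, b)
    p_a = p_ab.sum(axis=1)            # marginal p_A(a)
    p_b = p_ab.sum(axis=0)            # marginal p_B(b)
    p_a_p_b = np.outer(p_a, p_b)      # product of the marginals
    nz = p_ab > 0                     # empty bins contribute nothing to the sum
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / p_a_p_b[nz])))

When one patch is a deterministic function of the other, the joint histogram concentrates on a curve (the diagonal for identical patches) and the estimate is maximal; for independent patches it tends to zero, matching the discussion above.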
III. NP WINDOWS AND THE PROPOSED ALGORITHM
Several methods exist for estimating the JPDF of two random variables from samples of each variable, such as Parzen windowing and histogram calculation. Most of these methods are sensitive to the number of samples or to the number of bins in the histogram, and the PDFs they estimate are not very reliable or precise. NP windows solve this problem.

A. Estimation of a 1D signal PDF by the proposed method
Assume that the signal $f$ takes the values $f_1, f_2, \dots, f_n$ at the times $t_1, t_2, \dots, t_n$, respectively. For two consecutive times $t_i$ and $t_{i+1}$, the following linear interpolation model is used to estimate the PDF of $f$ between the two samples $f_i$ and $f_{i+1}$:

$$f(x) = ax + b, \qquad 0 < x < 1 \qquad (2)$$
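The PDF of $f$ on a segment then follows from the standard change-of-variables relation for a monotone interpolant, which appears to be the role of the relation referenced as (3) below. With $x$ uniform on $(0,1)$, as in the original NP method, each segment contributes a flat density:

$$p_F(f) = \frac{p_X\!\left(\frac{f-b}{a}\right)}{|a|} = \frac{1}{|a|}, \qquad b < f < a + b .$$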
With this treatment of the signal as a continuous function, the estimated PDF will be more reliable and precise. Another notable aspect of the NP method is its use of a uniform distribution for $x$. We propose instead to use the triangular distribution defined by (5) for the variable $x$ and to extend the interval to $\left[\, f_i - \frac{f_{i+1}-f_i}{2},\ f_{i+1} + \frac{f_{i+1}-f_i}{2} \,\right]$ (the interval is doubled), i.e.,
$$p_X(x) = \begin{cases} 4x & 0 < x < 0.5 \\ 4(1-x) & 0.5 < x < 1 \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$
Therefore, if the linear interpolation (2) is used, then $a = 2(f_{i+1} - f_i)$ and $b = \frac{3 f_i - f_{i+1}}{2}$. Using (3), the probability distribution of $f$ between the two samples is obtained as
$$p_F(f) = \frac{4}{a^2} \begin{cases} f - b & b < f < b + \frac{a}{2} \\ a + b - f & b + \frac{a}{2} < f < a + b \\ 0 & \text{otherwise} \end{cases}$$
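To illustrate the difference between the two per-segment contributions, the sketch below (our own illustrative code, assuming $f_{i+1} > f_i$ and, for the original NP case, the standard choice $a = f_{i+1} - f_i$, $b = f_i$) evaluates the flat contribution of the original NP method and the triangular contribution of the proposed modification. Both integrate to one, but the proposed contribution is spread over a doubled support and peaks at the segment midpoint.

import numpy as np

def np_segment_pdf(f, fi, fi1):
    # Original NP windows: uniform x on (0,1), linear model f = a*x + b with
    # a = fi1 - fi and b = fi; the contribution is flat on (fi, fi1).
    a, b = fi1 - fi, fi
    return np.where((f > b) & (f < a + b), 1.0 / abs(a), 0.0)

def mnp_segment_pdf(f, fi, fi1):
    # Proposed modification: triangular x as in (5) with doubled support, so
    # a = 2*(fi1 - fi) and b = (3*fi - fi1)/2; the contribution is a triangle
    # peaking at the segment midpoint (fi + fi1)/2.
    a, b = 2.0 * (fi1 - fi), (3.0 * fi - fi1) / 2.0
    rising = (f > b) & (f <= b + a / 2.0)
    falling = (f > b + a / 2.0) & (f < a + b)
    out = np.zeros_like(f, dtype=float)
    out[rising] = 4.0 * (f[rising] - b) / a**2
    out[falling] = 4.0 * (a + b - f[falling]) / a**2
    return out

# Both per-segment contributions integrate to 1 over the intensity axis.
f = np.linspace(0.0, 255.0, 100001)
print(np.trapz(np_segment_pdf(f, 100.0, 120.0), f))   # ~1.0
print(np.trapz(mnp_segment_pdf(f, 100.0, 120.0), f))  # ~1.0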