Improving the performance of image classification by Hahn moment invariants

Mhamed Sayyouri,1,2 Abdeslam Hmimid,1,3 and Hassan Qjidaa1,4

1CED-ST; LESSI; Faculty of Sciences Dhar el Mehraz, Sidi Mohamed Ben Abdellah University, BP 1796 Fez-Atlas 30003, Fez, Morocco
2e-mail: [email protected]
3e-mail: [email protected]
4e-mail: [email protected]

Received June 24, 2013; revised August 23, 2013; accepted September 23, 2013; posted September 24, 2013 (Doc. ID 192338); published October 25, 2013

The discrete orthogonal moments are powerful descriptors for image analysis and pattern recognition. However, the computation of these moments is a time-consuming procedure. To solve this problem, a new approach that permits the fast computation of Hahn's discrete orthogonal moments is presented in this paper. The proposed method is based, on the one hand, on the computation of Hahn's discrete orthogonal polynomials using the recurrence relation with respect to the variable x instead of the order n, together with the symmetry property of Hahn's polynomials, and, on the other hand, on the application of an innovative image representation in which the image is described by a number of homogeneous rectangular blocks instead of individual pixels. The paper also proposes a new set of Hahn's moment invariants under translation, scaling, and rotation of the image. This set of invariant moments is computed as a linear combination of geometric moment invariants from a finite number of image intensity slices. Several experiments are performed to validate the effectiveness of our descriptors in terms of computation time, image reconstruction, invariability, and classification. The performance of Hahn's moment invariants used as pattern features for a pattern classification application is compared with Hu [IRE Trans. Inform. Theory 8, 179 (1962)] and Krawchouk [IEEE Trans. Image Process. 12, 1367 (2003)] moment invariants. © 2013 Optical Society of America

OCIS codes: (100.0100) Image processing; (100.3010) Image reconstruction techniques; (100.4994) Pattern recognition, image transforms; (100.5760) Rotation-invariant pattern recognition.
http://dx.doi.org/10.1364/JOSAA.30.002381

1. INTRODUCTION

The theory of moments was introduced into the field of image analysis and pattern recognition in 1962, when Hu [1] proposed a complete system of geometric moment invariants under translation, scaling, and rotation of the image. Because of their nonorthogonal kernel polynomials, these moments present several drawbacks, such as information redundancy and sensitivity to noise, especially for the higher-order moments [2]. In 1980, Teague [2] introduced the Legendre and Zernike moments, which are based on continuous orthogonal polynomials. The continuous orthogonal moments provide a better representation of the image than the geometric moments and are more robust to image noise [3–5]. Their computation, however, requires a suitable transformation of the image coordinates and an appropriate approximation of the integrals, which increases the computational complexity and introduces a discretization error [5–8]. To eliminate this error, a set of discrete orthogonal moments such as Tchebichef [9], Krawchouk [10], Meixner [11], Charlier [12], and Hahn [11–14] has recently been introduced in the field of image analysis. The use of discrete orthogonal polynomials as basis functions for the computation of image moments eliminates the need for numerical approximation and exactly satisfies the orthogonality property in the discrete domain of the image space coordinates [9–15]. This property makes the discrete orthogonal moments superior to the continuous orthogonal moments in terms of image representation capability [9–13].

However, the use of these discrete orthogonal moments raises two major problems: a high computational cost and the propagation of numerical errors in the computation of the discrete orthogonal polynomials [11,12]. Indeed, the values of the discrete orthogonal polynomials calculated using the hypergeometric function or the three-term recurrence relations with respect to the order n are complex and require a high computation time, especially for higher orders, which causes the propagation of numerical errors [11]. To limit this error and to improve the accuracy of image reconstruction, Zhu et al. [11] applied the recurrence relation with respect to the variable x instead of the order n in the computation of the discrete orthogonal polynomials. To reduce the computational cost of the moments, several algorithms have been introduced in [12,14–20]. Spiliotis and Mertzios [16] presented a fast algorithm to calculate the geometric moments of binary images using the image block representation (IBR). Hosny [17] gave a fast algorithm to calculate the geometric moments of gray-scale images using the image slice representation. Lim et al. [20] presented a fast technique to compute exact Zernike moments by using cascaded digital filter outputs, without the need to compute geometric moments. Papakostas et al. [21] gave an algorithm that permits the fast and accurate computation of Legendre moments based on the image slice representation method. Shu et al. [18] introduced an approach to accelerate the computation of Tchebichef moments by using the IBR for binary images and the image slice representation for gray-scale images and by deriving some properties of the Tchebichef polynomials.

Sayyouri et al. proposed in [12] and [14] a fast method to accelerate the computation of Hahn and Charlier moments using the IBR.

The theory of invariant moments has been widely used in pattern recognition problems by several researchers [1,22–29]. In the literature, the invariant moments have been computed by two methods. The first, called image normalization, computes the invariant moments directly from the orthogonal polynomials. The second computes the moment invariants as a linear combination of geometric moment invariants [22–28]. Chong et al. [23,24] introduced an effective method to construct the translation and scale invariants of Legendre and Zernike moments. Zhu et al. [25] proposed a method based directly on the Tchebichef polynomials to construct the translation and scale invariants of Tchebichef moments. Papakostas et al. [26] introduced a set of Krawchouk moment invariants computed over a finite number of image intensity slices, extracted by applying the image slice representation. Karakasis et al. [28] proposed a generalized expression of the weighted dual Hahn moment invariants, up to any order and for any value of their parameters, based on a linear combination of geometric moments. While most work has focused on the moments of Tchebichef [25], Krawchouk [26], and dual Hahn [27,28], no attention has been paid to accelerating the computation of Hahn's discrete orthogonal moments, and no report has been published on how to make Hahn's discrete orthogonal moments invariant to translation, scale, and rotation.

In this paper, we propose a new method to accelerate the computation of Hahn's discrete orthogonal moments based on two notions. The first is the use of the recurrence relation with respect to the variable x instead of the order n, together with the symmetry property of the Hahn polynomials, in the computation of Hahn's discrete orthogonal polynomials. The second is the description of an image by a set of blocks instead of individual pixels, obtained by applying the IBR algorithm for binary images and the image slice representation (ISR) algorithm for gray-scale images. In this approach the binary image is decomposed into a set of blocks and Hahn's moments are computed from these blocks; the gray-scale image is decomposed into a series of slices, each slice is decomposed into a set of blocks, and Hahn's moments are computed from the blocks of each slice. The paper also introduces a novel set of Hahn's moment invariants under translation, scaling, and rotation of the image. This set of Hahn's invariant moments is formed by a linear combination of geometric moment invariants (GMI) from a finite number of image intensity slices. The improved image classification performance obtained with our proposed set of Hahn's moment invariants is compared with the Hu [1] and Krawchouk [10] moment invariants.

The rest of the paper is organized as follows. Section 2 presents the definition of the Hahn polynomials and shows how the recurrence and symmetry properties of Hahn's discrete orthogonal polynomials are used to facilitate their computation. Section 3 presents a fast method to compute Hahn's discrete orthogonal moments for binary and gray-scale images. Section 4 addresses the reconstruction of images using Hahn's discrete orthogonal moments. The proposed new set of Hahn's invariant moments is presented in Section 5. Section 6 provides experimental validations of the theoretical framework developed in the previous sections: the first subsection compares the computation time of Hahn's discrete orthogonal moments by the direct method and the proposed fast method; the second subsection examines how well an image can be reconstructed from Hahn's discrete orthogonal moments; the third subsection shows the invariability of Hahn's moment invariants under translation, scale, and rotation of the image; and the last subsection compares the classification accuracy of Hahn's moment invariants with the Hu and Krawchouk moment invariants in an object recognition task. Section 7 concludes this work.

2. HAHN DISCRETE ORTHOGONAL POLYNOMIALS

In this section, we present a brief introduction to the theoretical background of Hahn's discrete orthogonal polynomials. The nth Hahn discrete orthogonal polynomial in one variable, h_n^{(a,b)}(x;N), satisfies the following difference equation of hypergeometric type [30,31]:

\sigma(x)\,\Delta\nabla h_n^{(a,b)}(x;N) + \tau(x)\,\Delta h_n^{(a,b)}(x;N) + \lambda_n h_n^{(a,b)}(x;N) = 0,   (1)

with n = 0, 1, 2, ..., N-1, x \in [0, N-1], a > -1, and b > -1, where \sigma(x), \tau(x), and \lambda_n are defined by

\sigma(x) = x(N + a - x), \quad \tau(x) = (b+1)(N-1) - (a+b+2)x, \quad \lambda_n = n(a+b+n+1).   (2)

\Delta P_n(x) = P_n(x+1) - P_n(x) and \nabla P_n(x) = P_n(x) - P_n(x-1) denote the forward and backward finite difference operators, respectively. The solution of the difference equation defined in Eq. (1) can be expressed by the Rodrigues formula [30]

h_n^{(a,b)}(x;N) = \frac{(-1)^n}{n!\,w(x)} \nabla^n [w_n(x)],   (3)

with

w_n(x) = \frac{\Gamma(N + a - x)\,\Gamma(n + b + 1 + x)}{\Gamma(N - n - x)\,\Gamma(x + 1)},   (4)

where w(x) is the weight function, i.e., w_n(x) in the case n = 0, and \Gamma denotes the gamma function, with \Gamma(n) = (n-1)! and \Gamma(1) = 1 for every positive integer n. For the backward difference operator \nabla we have the following property:

\nabla^n f(x) = \sum_{k=0}^{n} \binom{n}{k} (-1)^k f(x - k).   (5)

By combining the Rodrigues formula defined in Eq. (3) with Eq. (5), we can obtain an explicit expression for Hahn's discrete orthogonal polynomials in terms of a hypergeometric function:

h_n^{(a,b)}(x;N) = \frac{(-1)^n (b+1)_n (N-n)_n}{n!}\; {}_3F_2(-n, -x, n+1+a+b;\; b+1, 1-N;\; 1) = \sum_{k=0}^{n} \alpha_{k,n}^{(a,b)} x^k.   (6)

The hypergeometric function {}_3F_2 is defined as

{}_3F_2(a_1, a_2, a_3;\; b_1, b_2;\; x) = \sum_{k=0}^{\infty} \frac{(a_1)_k (a_2)_k (a_3)_k}{(b_1)_k (b_2)_k} \frac{x^k}{k!}.   (7)

The Pochhammer symbol (x)_k is defined as

(x)_0 = 1 \quad \text{and} \quad (x)_k = x(x+1)\cdots(x+k-1), \quad k \ge 1.   (8)

More explicitly, Hahn's discrete orthogonal polynomials of the nth order are defined as follows:

h_n^{(a,b)}(x;N) = \sum_{k=0}^{n} \frac{(-1)^k}{(n-k)!\,k!} \frac{(b+n)!}{(b+k)!} \frac{(a+b+k+n)!}{(a+b+n)!} (-x)_k.   (9)

Hahn's discrete orthogonal polynomials h_n^{(a,b)}(x;N) satisfy the following orthogonality condition:

\sum_{x=0}^{N-1} h_n^{(a,b)}(x;N)\, h_m^{(a,b)}(x;N)\, w(x) = \rho(n)\,\delta_{nm},   (10)

where \rho(n) denotes the squared norm of Hahn's discrete orthogonal polynomials, defined as

\rho(n) = \frac{\Gamma(a+n+1)\,\Gamma(b+n+1)\,(a+b+n+1)_N}{(a+b+2n+1)\, n!\, (N-n-1)!},   (11)

and \delta_{nm} denotes the Kronecker delta. The normalized Hahn discrete orthogonal polynomials are defined as

\tilde{h}_n^{(a,b)}(x;N) = h_n^{(a,b)}(x;N) \sqrt{\frac{w(x)}{\rho(n)}}.   (12)

To accelerate the computation of the weight function and the squared norm, we use the recurrence relation of the gamma function, \Gamma(x+1) = x\,\Gamma(x) for x > 0. Substituting this relation into Eq. (4), we obtain the following recurrence relation, which calculates the weight function w(x+1) from w(x):

w(x+1) = \frac{(N-1-x)(b+1+x)}{(N+a-x-1)(x+1)}\, w(x), \quad \text{with} \quad w(0) = \frac{\Gamma(N+a)\,\Gamma(b+1)}{\Gamma(N)}.   (13)

Substituting the same relation into Eq. (11), we obtain the following recurrence relation, which calculates the squared norm \rho(n+1) from \rho(n):

\rho(n+1) = \frac{(a+n+1)(b+n+1)(a+b+n+N+1)(N-n-1)(a+b+2n+1)}{(n+1)(a+b+n+1)(a+b+2n+3)}\, \rho(n), \quad \text{with} \quad \rho(0) = \frac{\Gamma(a+1)\,\Gamma(b+1)\,\Gamma(a+b+N+1)}{(a+b+1)\,(N-1)!\,\Gamma(a+b+1)}.   (14)

Therefore, the orthogonality property of the normalized Hahn discrete orthogonal polynomials defined in Eq. (10) can be rewritten as

\sum_{x=0}^{N-1} \tilde{h}_n^{(a,b)}(x;N)\, \tilde{h}_m^{(a,b)}(x;N) = \delta_{nm}.   (15)
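To make Eqs. (13) and (14) concrete, the following Python sketch (ours, not part of the paper; the function names hahn_weight and hahn_sq_norm are placeholders) propagates the weight function and the squared norm from their initial values, so that the gamma function is evaluated only once per sequence. For large N, a, or b these quantities grow quickly, and a log-domain variant would be preferable.

```python
# Sketch of the recurrences in Eqs. (13)-(14); assumes a, b >= 0 as in the experiments (a = b = 10).
from math import lgamma, log, exp

def hahn_weight(N, a, b):
    """w(x) for x = 0..N-1 via the recurrence of Eq. (13)."""
    w = [0.0] * N
    # w(0) = Gamma(N+a) Gamma(b+1) / Gamma(N), evaluated in log-space
    w[0] = exp(lgamma(N + a) + lgamma(b + 1) - lgamma(N))
    for x in range(N - 1):
        w[x + 1] = (N - 1 - x) * (b + 1 + x) / ((N + a - x - 1) * (x + 1)) * w[x]
    return w

def hahn_sq_norm(N, a, b):
    """rho(n) for n = 0..N-1 via the recurrence of Eq. (14)."""
    rho = [0.0] * N
    rho[0] = exp(lgamma(a + 1) + lgamma(b + 1) + lgamma(a + b + N + 1)
                 - log(a + b + 1) - lgamma(N) - lgamma(a + b + 1))
    for n in range(N - 1):
        rho[n + 1] = ((a + n + 1) * (b + n + 1) * (a + b + n + N + 1)
                      * (N - n - 1) * (a + b + 2 * n + 1)
                      / ((n + 1) * (a + b + n + 1) * (a + b + 2 * n + 3))) * rho[n]
    return rho
```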

A. Computation of Hahn's Discrete Orthogonal Polynomials
This section discusses the computational aspects of Hahn's discrete orthogonal polynomials. The first subsection recalls the recurrence relation with respect to the order n. The second subsection shows how the recurrence relation with respect to the variable x is used instead to reduce the computation time. The last subsection gives the symmetry property of Hahn's discrete orthogonal polynomials.

1. Recurrence Relation with Respect to n
Since the calculation of the normalized Hahn discrete orthogonal polynomials by combining Eqs. (6) and (12) has a high computational cost, we use the following three-term recurrence relation with respect to the order n [31]:

\tilde{h}_n^{(a,b)}(x;N) = \frac{B \times D}{A}\, \tilde{h}_{n-1}^{(a,b)}(x;N) - \frac{C \times E}{A}\, \tilde{h}_{n-2}^{(a,b)}(x;N).   (16)

The parameters A, B, C, D, and E are defined as follows:

A = \frac{n(a+b+n)}{(a+b+2n-1)(a+b+2n)},

B = x - \frac{a-b+2N-2}{4} - \frac{(b^2-a^2)(a+b+2N)}{4(a+b+2n-2)(a+b+2n)},

C = -\frac{(a+n-1)(b+n-1)}{a+b+2n-2} \times \frac{(a+b+N+n-1)(N-n+1)}{a+b+2n-1},

D = \sqrt{\frac{n(a+b+n)(a+b+2n+1)}{(N-n)(a+n)(b+n)(a+b+2n-1)(a+b+n+N)}},

E = \sqrt{\frac{n(n-1)(a+b+n)}{(a+n)(a+n-1)(b+n)(b+n-1)(N-n+1)(N-n)}} \times \sqrt{\frac{(a+b+n-1)(a+b+2n+1)}{(a+b+2n-3)(a+b+n+N)(a+b+n+N-1)}}.   (17)

The zero-order and first-order normalized Hahn discrete orthogonal polynomials can be calculated as follows:

\tilde{h}_0^{(a,b)}(x;N) = \sqrt{\frac{w(x)}{\rho(0)}}, \qquad \tilde{h}_1^{(a,b)}(x;N) = -\big[(b+1)(N-1) - (a+b+2)x\big] \sqrt{\frac{w(x)}{\rho(1)}}.   (18)

2. Recurrence Relation with Respect to x
When the normalized Hahn discrete orthogonal polynomials are computed with the recurrence formula with respect to the order n, numerical errors propagate. To limit this error, we use the recurrence relation with respect to the variable x. From the properties of the forward and backward finite difference operators \Delta and \nabla defined previously, we have

\Delta\nabla P_n(x) = P_n(x+1) - 2P_n(x) + P_n(x-1).   (19)

Thus, the recurrence relation of Hahn's discrete orthogonal polynomials with respect to x can be obtained from Eqs. (1) and (19) as follows:

\tilde{h}_n^{(a,b)}(x;N) = \frac{2\sigma(x-1)+\tau(x-1)-\lambda_n}{\sigma(x-1)+\tau(x-1)} \sqrt{\frac{w(x)}{w(x-1)}}\, \tilde{h}_n^{(a,b)}(x-1;N) - \frac{\sigma(x-1)}{\sigma(x-1)+\tau(x-1)} \sqrt{\frac{w(x)}{w(x-2)}}\, \tilde{h}_n^{(a,b)}(x-2;N).   (20)

The initial values of this recurrence with respect to x are

\tilde{h}_n^{(a,b)}(0;N) = (1-N)_n \binom{n+b}{n} \sqrt{\frac{w(0)}{\rho(n)}}, \qquad \tilde{h}_n^{(a,b)}(1;N) = \frac{(n+b+1)(N-n-1) - n(N+a-1)}{(b+1)(N-1)} \sqrt{\frac{w(1)}{w(0)}}\, \tilde{h}_n^{(a,b)}(0;N).   (21)
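The x-direction recurrence of Eqs. (20) and (21) can be turned into a small routine that fills one row per order n. The sketch below is ours, not the paper's code; it repeats the hahn_weight and hahn_sq_norm helpers from the previous sketch so that it runs on its own, and it checks the orthonormality of Eq. (15) as a sanity test. All quantities are kept in double precision, so for large N a log-domain formulation would be needed.

```python
# Sketch of the recurrence over x, Eqs. (20)-(21); assumes a, b >= 0.
import numpy as np
from math import lgamma, log, exp

def hahn_weight(N, a, b):
    w = np.empty(N)
    w[0] = exp(lgamma(N + a) + lgamma(b + 1) - lgamma(N))          # Eq. (13)
    for x in range(N - 1):
        w[x + 1] = (N - 1 - x) * (b + 1 + x) / ((N + a - x - 1) * (x + 1)) * w[x]
    return w

def hahn_sq_norm(N, a, b):
    rho = np.empty(N)
    rho[0] = exp(lgamma(a + 1) + lgamma(b + 1) + lgamma(a + b + N + 1)
                 - log(a + b + 1) - lgamma(N) - lgamma(a + b + 1))  # Eq. (14)
    for n in range(N - 1):
        rho[n + 1] = ((a + n + 1) * (b + n + 1) * (a + b + n + N + 1)
                      * (N - n - 1) * (a + b + 2 * n + 1)
                      / ((n + 1) * (a + b + n + 1) * (a + b + 2 * n + 3))) * rho[n]
    return rho

def hahn_polynomials(N, a, b, order):
    """Rows 0..order of the normalized Hahn polynomials h~_n(x; N)."""
    w, rho = hahn_weight(N, a, b), hahn_sq_norm(N, a, b)
    sigma = lambda x: x * (N + a - x)                      # Eq. (2)
    tau = lambda x: (b + 1) * (N - 1) - (a + b + 2) * x    # Eq. (2)
    H = np.zeros((order + 1, N))
    for n in range(order + 1):
        lam = n * (a + b + n + 1)
        # Eq. (21): initial values at x = 0 and x = 1
        poch = 1.0
        for k in range(n):
            poch *= (1.0 - N + k)                          # Pochhammer (1 - N)_n
        binom = exp(lgamma(n + b + 1) - lgamma(b + 1) - lgamma(n + 1))   # C(n+b, n)
        H[n, 0] = poch * binom * np.sqrt(w[0] / rho[n])
        H[n, 1] = (((n + b + 1) * (N - n - 1) - n * (N + a - 1))
                   / ((b + 1) * (N - 1))) * np.sqrt(w[1] / w[0]) * H[n, 0]
        # Eq. (20): propagate over x
        for x in range(2, N):
            s, t = sigma(x - 1), tau(x - 1)
            H[n, x] = ((2 * s + t - lam) * H[n, x - 1] / np.sqrt(w[x - 1])
                       - s * H[n, x - 2] / np.sqrt(w[x - 2])) * np.sqrt(w[x]) / (s + t)
    return H

if __name__ == "__main__":
    H = hahn_polynomials(N=32, a=10, b=10, order=15)
    print(np.abs(H @ H.T - np.eye(16)).max())   # orthonormality check of Eq. (15)
```

When a = b, the symmetry relation of the next subsection allows the loop over x to stop at N/2 and the remaining values to be filled by mirroring, roughly halving the work.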

3. Symmetry Property
The computation time of Hahn's discrete orthogonal polynomials for the special case a = b can be reduced considerably by using the symmetry property. The symmetry relation of Hahn's discrete orthogonal polynomials can be derived from Eq. (9) as

\tilde{h}_n^{(a,b)}(N-1-x;N) = (-1)^n\, \tilde{h}_n^{(b,a)}(x;N).   (22)

If we replace x by N-1-x, a by b, and b by a, then Eq. (1) retains its form. Since, under this replacement, the polynomial h_n^{(b,a)}(N-1-x;N) remains a polynomial of the same degree, the uniqueness of the polynomial solutions of the difference equations of hypergeometric type defined in Eq. (1) yields the relation

\tilde{h}_n^{(a,b)}(x;N) = (-1)^n\, \tilde{h}_n^{(b,a)}(N-1-x;N).   (23)

The computation of Hahn's discrete orthogonal polynomials can therefore be terminated at x = N/2: the symmetry condition gives the polynomial values for x in the range [N/2, N-1]. This greatly reduces the computation time and the storage space for Hahn's discrete orthogonal polynomials. Indeed, when N is even, only the values of h_n^{(a,b)}(x;N) for x ≤ N/2 - 1 need to be calculated; the symmetry relation determines h_n^{(a,b)}(x;N) for x > N/2 - 1. Furthermore, if the N × N image is subdivided into four equal quadrants, only the polynomials in the first quadrant, 0 ≤ x, y ≤ N/2 - 1, need to be calculated. When N is odd, the image can be zero-padded to an even N.

3. FAST COMPUTATION OF HAHN'S DISCRETE ORTHOGONAL MOMENTS

The two-dimensional (2D) Hahn discrete orthogonal moment of order (n+m) of an image intensity function f(x,y) with size M × N is defined as

H_{nm} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \tilde{h}_n^{(a,b)}(x;M)\, \tilde{h}_m^{(a,b)}(y;N)\, f(x,y),   (24)

where \tilde{h}_n^{(a,b)}(x;M) is the nth-order normalized Hahn polynomial. The computation of Hahn's discrete orthogonal moments by using Eq. (24) is a time-consuming task mainly because of two factors: first, the need to compute a set of complicated quantities for each moment order, and second, the need to evaluate the polynomial values for each pixel of the image. While the recurrence relation with respect to x provides an efficient algorithm for the first issue, by recursively computing the orthogonal polynomials, little attention has been given to the second. The computation of Hahn's moments with fewer mathematical operations on the image pixels is achieved by describing the image by a set of blocks instead of individual pixels. The computation of Hahn's discrete orthogonal moments can thus be accelerated by using the IBR methodology [16] and the image slice representation [19]. In the following two subsections, we propose new formulas for the fast computation of Hahn's discrete orthogonal moments using the IBR for binary images and the image slice representation for gray-scale images.
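As a baseline, Eq. (24) can be written as two matrix products. The short sketch below is ours; it assumes a matrix of normalized polynomial values such as the one returned by the hahn_polynomials helper sketched in Section 2.

```python
import numpy as np

def hahn_moments_direct(f, Hx, Hy):
    """Direct evaluation of Eq. (24).

    f  : (M, N) image array
    Hx : (orderx+1, M) array with Hx[n, x] = h~_n(x; M)
    Hy : (ordery+1, N) array with Hy[m, y] = h~_m(y; N)
    Returns H with H[n, m] = H_nm.
    """
    return Hx @ f.astype(float) @ Hy.T   # sum_x sum_y h~_n(x) h~_m(y) f(x, y)
```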

A. Binary Images
In order to accelerate the computation of Hahn's moments, we apply the IBR algorithm to the binary image [16]. In this approach, the binary image is represented as a set of blocks, where each block corresponds to an object and is defined as a rectangular area that includes a set of connected pixels. The binary image is therefore described by the following relation:

f(x,y) = \{ b_i,\; i = 0, 1, \ldots, K-1 \},   (25)

where b_i is the ith block and K is the total number of blocks. Each block is described by the coordinates of its upper left and lower right corners along the vertical and horizontal axes. By applying the IBR, the Hahn moment defined in Eq. (24) can be rewritten as

H_{nm} = \sum_{i=0}^{K-1} \sum_{x=x_{1,b_i}}^{x_{2,b_i}} \sum_{y=y_{1,b_i}}^{y_{2,b_i}} \tilde{h}_n^{(a,b)}(x;M)\, \tilde{h}_m^{(a,b)}(y;N) = \sum_{i=0}^{K-1} H_{nm}^{b_i},   (26)

where (x_{1,b_i}, y_{1,b_i}) and (x_{2,b_i}, y_{2,b_i}) are, respectively, the coordinates of the upper left and lower right corners of the block b_i, and H_{nm}^{b_i} is the Hahn moment of the block b_i, given by

H_{nm}^{b_i} = \sum_{x=x_{1,b_i}}^{x_{2,b_i}} \sum_{y=y_{1,b_i}}^{y_{2,b_i}} \tilde{h}_n^{(a,b)}(x;M)\, \tilde{h}_m^{(a,b)}(y;N) = \left( \sum_{x=x_{1,b_i}}^{x_{2,b_i}} \tilde{h}_n^{(a,b)}(x;M) \right) \left( \sum_{y=y_{1,b_i}}^{y_{2,b_i}} \tilde{h}_m^{(a,b)}(y;N) \right) = S_n(x_{1,b_i}, x_{2,b_i})\, S_m(y_{1,b_i}, y_{2,b_i}),   (27)

with

S_n(x_{1,b_i}, x_{2,b_i}) = \sum_{x=x_{1,b_i}}^{x_{2,b_i}} \tilde{h}_n^{(a,b)}(x;M) \quad \text{and} \quad S_m(y_{1,b_i}, y_{2,b_i}) = \sum_{y=y_{1,b_i}}^{y_{2,b_i}} \tilde{h}_m^{(a,b)}(y;N).   (28)
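The block-based formulas of Eqs. (26)–(28) translate directly into the following sketch (ours). The extraction of the blocks themselves follows the IBR algorithm of [16] and is not shown; the routine simply consumes a list of block corners. Prefix sums over the polynomial matrices make each partial sum S_n(x1, x2) of Eq. (28) an O(1) lookup.

```python
import numpy as np

def hahn_moments_blocks(blocks, Hx, Hy):
    """Evaluation of Eqs. (26)-(28) for a binary image given as IBR blocks.

    blocks : iterable of (x1, x2, y1, y2), inclusive corner coordinates
    Hx, Hy : normalized Hahn polynomial matrices, as in the direct sketch
    """
    Cx = np.cumsum(Hx, axis=1)          # Cx[n, x] = sum_{u <= x} h~_n(u; M)
    Cy = np.cumsum(Hy, axis=1)
    H = np.zeros((Hx.shape[0], Hy.shape[0]))
    for x1, x2, y1, y2 in blocks:
        Sx = Cx[:, x2] - (Cx[:, x1 - 1] if x1 > 0 else 0.0)   # S_n(x1, x2), Eq. (28)
        Sy = Cy[:, y2] - (Cy[:, y1 - 1] if y1 > 0 else 0.0)   # S_m(y1, y2), Eq. (28)
        H += np.outer(Sx, Sy)           # block moment of Eq. (27), accumulated as in Eq. (26)
    return H
```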

B. Gray-Scale Images
In this subsection, we propose the use of the image slice representation to accelerate the computation of Hahn's discrete orthogonal moments for gray-scale images. This approach decomposes a gray-scale image f(x,y) into a series of two-level image slices f_i(x,y):

f(x,y) = \{ f_i(x,y),\; i = 1, 2, \ldots, L \}, \quad \text{with} \quad f(x,y) = \sum_{i=1}^{L} f_i(x,y),   (29)

where L is the number of slices and f_i(x,y) is the intensity function of the ith slice. After the decomposition of the gray-scale image into several two-level slices, the gray-scale image f(x,y) can be redefined in terms of blocks of different intensities as

f_i(x,y) = \{ b_{ij},\; j = 1, 2, \ldots, K_i - 1 \},   (30)

where b_{ij} is the jth block of the slice i with intensity f_i and K_i is the number of blocks of that slice. By using the ISR defined by Eq. (30), the fast method to compute the Hahn discrete orthogonal moments of a gray-scale image f(x,y) is defined as

H_{nm} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \tilde{h}_n^{(a,b)}(x;M)\, \tilde{h}_m^{(a,b)}(y;N) \sum_{i=1}^{L} f_i(x,y) = \sum_{i=1}^{L} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \tilde{h}_n^{(a,b)}(x;M)\, \tilde{h}_m^{(a,b)}(y;N)\, f_i(x,y) = \sum_{i=1}^{L} f_i\, H_{nm}(i),   (31)

where H_{nm}(i) is the (n+m)th-order Hahn discrete orthogonal moment of the ith binary slice.
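A minimal version of the slice decomposition of Eqs. (29)–(31) is sketched below (ours). Each distinct non-zero gray level defines one binary slice; in the full method of [19] every slice would in turn be block-decomposed, whereas here the slices are fed to the direct matrix product to keep the example short.

```python
import numpy as np

def hahn_moments_slices(f, Hx, Hy):
    """Slice-based evaluation of Eq. (31) for a gray-scale image f."""
    H = np.zeros((Hx.shape[0], Hy.shape[0]))
    for level in np.unique(f):
        if level == 0:
            continue
        mask = (f == level).astype(float)        # binary slice f_i(x, y) of intensity `level`
        H += float(level) * (Hx @ mask @ Hy.T)   # f_i * H_nm(i), summed as in Eq. (31)
    return H
```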

4. IMAGE RECONSTRUCTION USING HAHN'S DISCRETE ORTHOGONAL MOMENTS

In this section, the image representation capability of Hahn's discrete orthogonal moments is shown. The Hahn discrete orthogonal moments of the image are first calculated, and the representation power is then verified by reconstructing the image from the moments. An objective measure based on the mean squared error (MSE) is used to characterize the error between the original image and the reconstructed image. The Hahn discrete orthogonal moments of order (n+m), in terms of the normalized Hahn discrete orthogonal polynomials, for an image of size M × N with intensity function f(x,y), are defined as

H_{nm} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, \tilde{h}_n^{(a,b)}(x;M)\, \tilde{h}_m^{(a,b)}(y;N).   (32)

By solving Eqs. (15) and (24) for f(x,y), the image intensity function can be written completely in terms of the Hahn discrete orthogonal moments as

f(x,y) = \sum_{n=0}^{M-1} \sum_{m=0}^{N-1} H_{nm}\, \tilde{h}_n^{(a,b)}(x;M)\, \tilde{h}_m^{(a,b)}(y;N).   (33)

The image intensity function can thus be represented as a series of normalized Hahn discrete orthogonal polynomials weighted by the Hahn discrete orthogonal moments. If only moments up to a maximum order are used, the series is truncated to

\hat{f}(x,y) = \sum_{n=0}^{\max} \sum_{m=0}^{n} H_{n-m,m}\, \tilde{h}_{n-m}^{(a,b)}(x;M)\, \tilde{h}_m^{(a,b)}(y;N).   (34)

The difference between the original image f(x,y) and the reconstructed image \hat{f}(x,y) is measured using the MSE, defined as follows:

\mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \big[ f(x_i, y_j) - \hat{f}(x_i, y_j) \big]^2.   (35)
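For completeness, the reconstruction of Eq. (33) and the MSE of Eq. (35) can be sketched as follows (ours, not the paper's code). The truncation here keeps all moments with n ≤ n_max and m ≤ m_max, which is a rectangular cut rather than the triangular cut of Eq. (34).

```python
import numpy as np

def reconstruct(Hmom, Hx, Hy):
    """Truncated reconstruction, Eq. (33): f^(x, y) = sum_n sum_m H_nm h~_n(x) h~_m(y)."""
    return Hx.T @ Hmom @ Hy

def mse(f, f_rec):
    """Mean squared error of Eq. (35)."""
    f = np.asarray(f, dtype=float)
    return np.mean((f - f_rec) ** 2)
```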

5. HAHN’S DISCRETE ORTHOGONAL MOMENT INVARIANTS To use the Hahn’s discrete orthogonal moments for object classification, it is indispensable that Hahn’s discrete orthogonal moments be invariant under rotation, scaling, and translation of the image. Therefore, to obtain the translation, scale, and rotation invariants of Hahn’s discrete orthogonal moments, we adopt the same strategy used by Papakostas et al. for Krawchouk’s invariant moments [28]. That is, we derive the Hahn’s discrete orthogonal moment invariants through the geometric moments using the direct method and the fast method based on IBR.

V nm 

B. Fast Computation of Hahn’s Discrete Orthogonal Invariant Moments In order to accelerate the computation time of Hahn’s discrete orthogonal invariant moments, we will apply the same strategy for fast computation of Hahn’s discrete orthogonal moments. By using the binomial theorem, the GMI defined in Eq. (37) can be calculated as follows: GMInm  GM−γ 00

GMnm 

xn ym f x; y:

(36)

n X m  n  m  X

i

i0 j0

j

× −1m−j ηmi−j;nj−i ;

GMInm  GM−γ 00

N −1 X N −1 X

ηnm 

¯ cos θ − x − x ¯ sin θm f x; y; × y − y

(37)

ηnm 

with nm GM10  1; x¯  ; 2 GM00   GM01 1 2μ11 : ; and θ  tan−1 y¯  2 GM00 μ20 − μ02 γ

μnm 

¯ n y − y ¯ m f x; y: x − x

(43)

N−1 M−1 X μnm 1 X ¯ n y − y ¯ m f x; y x − x γ  γ GM00 GM00 x0 y0



X  N−1 M−1 s X 1 X ¯ n y − y ¯ m x − x f k x; y γ GM00 x0 y0 k1



S N−1 M−1 X X 1 X ¯ n y − y ¯ m fk × x − x γ GM00 k1 x 0 y 0 k

(38) 

k

S X

1 f GMγ00 k1 k

2;bj k  X X

x

The n  mth approximated central geometric moments are defined in [1] by N−1 M−1 X X

μnm : GMγ00

By applying the IBR, the normalized central moment defined in Eq. (43) can be rewritten as follows:

¯ cos θ  y − y ¯ sin θn x − x

x0 y0

(42)

where ηnm are normalized central geometric moments

x0 y0

The set of GMI by translation, scaling, and rotation can be written as [1]

cos θij sin θnm−i−j

× −1m−j μmi−j;nj−i n X m  n  m  X cos θij sin θnm−i−j  i j i0 j0

A. Computation of Hahn’s Discrete Orthogonal Invariant Moments Given a digital image f x; y with size M × N, the geometric moments are defined using the discrete sum approximation as N−1 M−1 X X

   n−p n X m   X N × M pq∕21 N n m × p q 2 2 q0 p0  m−p M × × GMIpq : (41) 2

×

j0

 (39)
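The geometric moment invariants of Eqs. (36)–(39) can be computed directly as below (our sketch; the array indexing convention, with x as the row index and y as the column index, is an assumption, and arctan2 is used in place of tan⁻¹ to keep the orientation angle in the correct quadrant).

```python
import numpy as np

def geometric_moment_invariants(f, order):
    """GMI_nm of Eq. (37) for n, m = 0..order."""
    f = np.asarray(f, dtype=float)
    x, y = np.indices(f.shape)
    m00 = f.sum()
    xb, yb = (x * f).sum() / m00, (y * f).sum() / m00          # Eq. (38)
    mu11 = ((x - xb) * (y - yb) * f).sum()
    mu20 = ((x - xb) ** 2 * f).sum()                           # central moments, Eq. (39)
    mu02 = ((y - yb) ** 2 * f).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)            # Eq. (38)
    xr = (x - xb) * np.cos(theta) + (y - yb) * np.sin(theta)
    yr = (y - yb) * np.cos(theta) - (x - xb) * np.sin(theta)
    gmi = np.zeros((order + 1, order + 1))
    for n in range(order + 1):
        for m in range(order + 1):
            gamma = (n + m) / 2.0 + 1.0
            gmi[n, m] = (xr ** n * yr ** m * f).sum() / m00 ** gamma   # Eq. (37)
    return gmi
```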

¯ n x − x

xk x1;bj

y2;bj  X

 ¯ m y − y

yk y1;bj

S 1 X f × ηknm ; γ GM00 k1 k

(44)

x0 y0

where The Hahn’s moment invariants can be expanded in terms of geometric moments invariants as follows: HMInm  ρnρm−1∕2

n X m X

2;bj k  X X

x

ηknm 

j0 a;b αa;b i;n αj;n V i;j ;

xk x1;bj

¯ n x − x

y2;bj  X

¯ m y − y

 ;

(45)

yk y1;bj

(40)

i0 j0

where αa;b i;n are the coefficients relative to Eq. (7) and V i;j are the parameters defined as

and f k ; k  1; 2; …S are the slice intensity functions, S is the number of slices in image f i , bj ; j  1; 2; …k is the block in each slice. x1;bi ; y1;bi  and x2;bi ; y2;bi  are, respectively, the coordinates in the upper left and lower right block bj .

Sayyouri et al.

Vol. 30, No. 11 / November 2013 / J. Opt. Soc. Am. A

Using Eqs. (42) and (44), the GMI of Eq. (36) can be rewritten as follows: n X m  n  m  X cos θij sin θnm−i−j GMInm  i j i0 j0 × −1m−j ηmi−j;nj−i n X m  n  m  1 X cos θij sin θnm−i−j  γ GM00 i0 j0 i j × −1m−j

S X

f k × ηkmi−j;nj−i

k1



S n X m  n  m  X 1 X cos θij sin θnm−i−j fk γ GM00 k1 i0 j0 i j

× −1m−j × ηkmi−j;nj−i :

(46)

Therefore, Hahn’s discrete orthogonal moment invariants HMI can be obtained from Eqs. (40), (41), and (46).
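Putting Eqs. (40) and (41) together, the Hahn moment invariants can be assembled from the GMIs as in the sketch below (ours). The exponent convention of Eq. (41) follows the reconstruction given above, and the coefficients alpha[k, n] = α_{k,n}^{(a,b)} of Eq. (6) and the norms rho(n) of Eq. (11) are assumed to be precomputed.

```python
import numpy as np
from math import comb

def hahn_moment_invariants(gmi, alpha, rho, M, N):
    """HMI_nm of Eq. (40) from the GMIs of Eq. (37), via the parameters of Eq. (41).

    gmi   : (K, K) array of GMI_pq (e.g. from the previous sketch)
    alpha : (K, K) array with alpha[k, n] = alpha_{k,n}^{(a,b)} of Eq. (6)
    rho   : length-K array of squared norms rho(n), Eq. (11)
    """
    K = gmi.shape[0]
    V = np.zeros((K, K))
    for n in range(K):
        for m in range(K):
            V[n, m] = sum(comb(n, p) * comb(m, q)
                          * (N * M / 2.0) ** ((p + q) / 2.0 + 1.0)
                          * (N / 2.0) ** (n - p) * (M / 2.0) ** (m - q)
                          * gmi[p, q]
                          for p in range(n + 1) for q in range(m + 1))   # Eq. (41)
    hmi = np.zeros((K, K))
    for n in range(K):
        for m in range(K):
            hmi[n, m] = (rho[n] * rho[m]) ** -0.5 * sum(
                alpha[i, n] * alpha[j, m] * V[i, j]
                for i in range(n + 1) for j in range(m + 1))             # Eq. (40)
    return hmi
```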

6. RESULTS AND SIMULATIONS

In this section, we give experimental results to validate the theoretical results developed in the previous sections. The section is divided into four subsections. In the first subsection, we compare the computation time of Hahn's discrete orthogonal moments by the direct method and the proposed fast method for binary and gray-scale images. In the second, we test the ability of Hahn's discrete orthogonal moments to reconstruct binary and gray-scale images with and without noise. In Subsection C, we evaluate the invariability of Hahn's invariant moments under translation, scaling, and rotation of the image. In the last subsection, the recognition accuracy of Hahn's discrete orthogonal moment invariants is tested and compared with that of the Hu and Krawchouk moment invariants in object classification.

A. Computational Time of Hahn's Moments
In this subsection, we compare the computation time of Hahn's discrete orthogonal moments by the direct method and the proposed fast method. Figure 1 shows a flowchart of the two methods.

Fig. 1. Flowchart of the two methods for the computation of Hahn's moments. (a) The proposed fast method and (b) the direct method.

Fig. 2. Set of test binary images. Spoon, Jar, Horse, Car, and Device.

In the first example, a set of five binary images with size 128 × 128 pixels (Fig. 2), selected from the well-known MPEG-7 CE-shape-1 Part B database [32], has been used as test images. After extracting the blocks of each binary image with the IBR algorithm [16], the numbers of blocks of these images are NB = 247 for Spoon, NB = 223 for Jar, NB = 329 for Horse, NB = 85 for Car, and NB = 551 for Device. The computational process is performed 19 times for each of the five images, and the average times obtained with the proposed fast method and the direct method are plotted against the moment order, up to (100, 100), in Fig. 3. The figure shows that the proposed fast method is faster than the direct method. Notice that the computation time for extracting the blocks of each image is about 0.1 ms, which is much less than the time required to calculate Hahn's discrete orthogonal moments.

Fig. 3. Average computation time for binary images using the direct method and the proposed method.

In the second example, a set of five gray-scale images with a size of 256 × 256 pixels, shown in Fig. 4, has been used. After extracting the blocks of each gray-scale image with the ISR [19], the numbers of blocks NB and numbers of slices NS of these images are (NB = 47,345, NS = 250) for House, (NB = 55,887, NS = 209) for Woman, (NB = 61,605, NS = 208) for Mandrill, (NB = 54,527, NS = 224) for Pepper, and (NB = 56,338, NS = 231) for Lake. The computation time to extract the blocks of each image is about 1 ms. The computational process is performed 18 times for each of the five images, and the average times obtained with the proposed fast method and the direct method are plotted against the moment order, up to (180, 180), in Fig. 5. The result indicates again that our fast method has a better performance than the direct method. Note that the experiments were implemented in Matlab on a PC with a Dual Core 2.10 GHz processor and 2 GB of RAM.

Fig. 4. Set of test gray-scale images. House, Woman, Pepper, Mandrill, and Lake.

Fig. 5. Average computation time for gray-scale images using the direct method and the proposed method.

The two figures show that the proposed fast method is faster than the direct method because the computation of Hahn's moments by the proposed method depends only on the number of blocks in the image, and the number of blocks is smaller than the image size.

B. Image Reconstruction from Hahn's Discrete Orthogonal Moments

In this section, we discuss the capacity of Hahn's moments to reconstruct binary and gray-scale images, with and without noise, using the proposed fast method and the direct method. To evaluate the two methods, we calculate the MSE between the original and the reconstructed image; the MSE is widely used in image analysis as a quantitative measure of accuracy. A numerical experiment is conducted in which the binary image Horse of size 128 × 128, shown in Fig. 2, is used with a maximum moment order ranging from 0 to 100, and the gray-scale image Woman of size 256 × 256, shown in Fig. 4, is used with a maximum moment order ranging from 0 to 200. Figures 7(a) and 7(b) show the MSE for the two images obtained by the two methods. The MSE decreases as the maximum moment order increases and approaches zero; when the maximum moment order reaches a certain value, the reconstructed images become very close to the original ones. The figures also show that the proposed fast method matches the direct method in terms of reconstruction quality for binary and gray-scale images while being faster.

In a second experiment, we test the robustness of Hahn's moments against different types of noise. Two numerical experiments are performed using two types of noise: "salt and pepper" noise and strong white Gaussian noise. Images contaminated with the two types of noise are shown in Fig. 6. The two contaminated images are reconstructed using Hahn's discrete orthogonal moments of order 0 to 200. Figure 7(c) shows the MSE curves for the noise-contaminated images; for easier comparison, the three MSE curves are plotted in the same figure. As expected, the MSE of the noisy images is greater than the corresponding MSE of the noise-free image. Figure 7(c) also shows that the MSE of the noise-contaminated images approaches zero as the moment order increases. The results of these experiments show the robustness of Hahn's moments against different types of noise.

Fig. 6. (a) Woman gray-scale image, (b) noisy image by salt and pepper, and (c) noisy image by white Gaussian noise.

Fig. 7. (a) MSE for the binary image of Horse by the two methods, (b) MSE for the gray-scale image of Woman by the two methods, and (c) MSE for the gray-scale image of Woman with salt and pepper and white Gaussian noise.

C. Invariability
To evaluate the invariability of Hahn's discrete orthogonal invariant moments under translation, scaling, and rotation of the image, we use a binary image of the character "R" of size 64 × 64 pixels, transformed by translation vectors TV ∈ {(−5,−5), (−5,5), (5,−5), (0,0), (5,5)}, scaling factors SF ∈ {0.8, 0.9, 1, 1.1, 1.2}, and rotation angles RA ∈ {0°, 45°, 90°, 135°, 180°} within the frame of the image (Fig. 8). All Hahn invariant moments are calculated up to order two (n, m ≤ 2) for each transformation. The results are presented in Tables 1–4 for the case a = 10, b = 10. Finally, in order to measure the ability of the proposed invariants to remain unchanged under different image transformations, the following relation is used: α = (σ/|μ|)%, where σ denotes the standard deviation of Hahn's invariant moments over the different factors of each transformation and μ is the corresponding mean value. It is clear from the tables that the ratio σ/|μ| is very low, indicating that the Hahn moment invariants are very stable under the different types of image transformations. Therefore, the Hahn invariant moments derived in this paper could be a useful tool in pattern recognition tasks that require translation, scaling, and rotation invariance.

D. Classification
In this section, we provide experiments to validate the recognition and classification accuracy obtained with the proposed set of Hahn's invariant moments. For this, we use the characteristic vector defined by

V = [HMI_{00}, HMI_{01}, HMI_{10}, HMI_{11}, HMI_{02}, HMI_{20}, HMI_{21}, HMI_{12}, HMI_{22}],   (47)

where HMI_{nm} are the Hahn moment invariants defined in Section 5. We use simple classifiers based on plain distances; these measures compute the distance between each object to be classified and the objects that represent the problem's classes. Two well-known distances from the literature, the Euclidean distance [33] and the correlation coefficient [33], are selected and presented in the following. The Euclidean distance is

d_1(x, y) = \sqrt{ \sum_{i=1}^{n} (x_i - y_i)^2 },   (48)

and the correlation coefficient measure is

d_2(x, y) = \frac{ \sum_{i=1}^{n} x_i y_i }{ \sqrt{ \sum_{i=1}^{n} x_i^2 }\, \sqrt{ \sum_{i=1}^{n} y_i^2 } }.   (49)

These formulas measure the distance between two vectors x = (x_1, x_2, x_3, ..., x_n) and y = (y_1, y_2, y_3, ..., y_n) defined in the space R^n. They are quick to calculate and easy to implement. The idea is that each image with N features is a point in a space of dimension N, each feature being a coordinate of this space, and the distance between two images is the distance between the two corresponding points. If the two vectors x and y are equal, then d_1 tends to 0 and d_2 tends to 1. Therefore, to classify the images, one takes the minimum value of d_1 and the maximum value of d_2. The recognition accuracy is defined as

\eta = \frac{\text{Number of correctly classified images}}{\text{Total number of images used in the test}} \times 100\%.   (50)
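A minimum-distance classifier built on Eqs. (48)–(50) is sketched below (ours, not the paper's code); d1 is minimized and d2 is maximized when assigning a test vector to the nearest training vector.

```python
import numpy as np

def d1(x, y):
    """Euclidean distance, Eq. (48)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.sum((x - y) ** 2))

def d2(x, y):
    """Correlation-coefficient measure, Eq. (49)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def classify(test_vectors, train_vectors, train_labels, measure=d1):
    """Assign each test vector the label of its closest training vector."""
    best = min if measure is d1 else max
    predicted = []
    for v in test_vectors:
        scores = [measure(v, t) for t in train_vectors]
        predicted.append(train_labels[scores.index(best(scores))])
    return predicted

def recognition_accuracy(predicted, truth):
    """Recognition accuracy of Eq. (50), in percent."""
    correct = sum(p == t for p, t in zip(predicted, truth))
    return 100.0 * correct / len(truth)
```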

In order to validate the recognition and classification accuracy obtained with Hahn's invariant moments, we use two image databases. The first is a set of binary English characters and numbers; these characters were chosen because the elements of the subsets {B, E, R}, {M, N}, {q, 8, 9}, and {I, J, 1, L} can easily be misclassified due to their similarity (Fig. 9). The test set is generated by translation, scaling, and rotation of the training set, with scaling factors SF ∈ {1.2, 1.1, 1, 0.9, 0.8}, translations TV ∈ {(−5,−5), (0,0), (−5,5), (5,−5), (5,5)} in the horizontal and vertical directions, and rotation angles θ ∈ {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}, forming a test set of 2,400 images, as shown in Fig. 10. The second image database is the Columbia Object Image Library (COIL-20) database [34]. The total number of images is 1,440, distributed as 72 images per object, and all images of this database have size 128 × 128. Figure 11 displays a collection of the 20 objects. The test set is also degraded by salt and pepper noise with noise densities of 1%, 2%, 3%, and 4%.

Fig. 9. Binary images as a training set for invariant character recognition in the experiment.

Fig. 10. Part of the images of the testing set in the experiment.

Fig. 11. Collection of the COIL-20 objects.

Fig. 8. Set of the transformed binary image character "R" by translation, scaling, and rotation.

Table 1. Hahn's Moment Invariants (a = 10, b = 10), Translation of Character "R"
Table 2. Hahn's Moment Invariants (a = 10, b = 10), Scaling of Character "R"
Table 3. Hahn's Moment Invariants (a = 10, b = 10), Rotation of Character "R"
Table 4. Hahn's Moment Invariants (a = 10, b = 10), Mixed Transformation of Character "R"

Each of Tables 1–4 lists the nine invariants HMI00 through HMI22 computed for five instances of the corresponding transformation (the five translation vectors, scaling factors, rotation angles, or mixed transformations given in Subsection C), together with the performance ratio σ/|μ|% for each invariant. In every table the invariant values remain essentially constant across the transformations: the performance ratio stays below 10^-2 % for all nine invariants, with the lowest-order invariants HMI00, HMI01, and HMI10 reproduced essentially exactly (ratios of order 10^-13 % or zero) and the remaining invariants reaching at most a few 10^-3 %.

The feature vector based on Hahn's moment invariants given in Eq. (47) is used to classify these images, and its recognition accuracy is compared with that of Hu's moment invariants [1] and Krawchouk's moment invariants [10] for the two databases. The classification results using all features are presented in Tables 5–8. The results show the efficiency of Hahn's moment invariants in terms of recognition accuracy for noisy images compared with those of Hu and Krawchouk. Note that the recognition rate for noise-free binary images by our method is 100%, and the recognition accuracy decreases with increasing noise.

Table 5. Classification Results of the Set of Binary English Characters and Numbers by Using d1 Distance

Invariant Moments | Noise Free | 1% | 2% | 3% | 4% (Salt and Pepper Noise)
Hu        | 100%   | 95.5%  | 86.23% | 76.26% | 72.25%
Krawchouk | 100%   | 97.12% | 94.32% | 91.25% | 89.56%
Hahn      | 100%   | 97.5%  | 94.26% | 92.45% | 92.23%

Table 6. Classification Results of the Set of Binary English Characters and Numbers by Using d2 Distance

Invariant Moments | Noise Free | 1% | 2% | 3% | 4% (Salt and Pepper Noise)
Hu        | 100%   | 94.28% | 84.12% | 75.03% | 71.24%
Krawchouk | 100%   | 96.85% | 92.64% | 90.26% | 89.95%
Hahn      | 100%   | 97.56% | 93.94% | 92.15% | 92.14%

Table 7. Classification Results of the COIL-20 Objects Database by Using d1 Distance

Invariant Moments | Noise Free | 1% | 2% | 3% | 4% (Salt and Pepper Noise)
Hu        | 94.58% | 92.26% | 85.27% | 74.86% | 70.01%
Krawchouk | 96.48% | 95.17% | 90.87% | 85.98% | 77.25%
Hahn      | 97.35% | 96.71% | 91.45% | 86.58% | 78.56%

Table 8. Classification Results of the COIL-20 Objects Database by Using d2 Distance

Invariant Moments | Noise Free | 1% | 2% | 3% | 4% (Salt and Pepper Noise)
Hu        | 93.56% | 91.25% | 85.21% | 73.42% | 68.87%
Krawchouk | 95.79% | 94.18% | 90.64% | 84.76% | 76.54%
Hahn      | 96.27% | 96.11% | 91.15% | 88.44% | 77.86%

Finally, we compare the computation time of Hahn's discrete orthogonal moment invariants by two methods: the direct method described in Subsection A of Section 5 and the proposed fast method based on the IBR algorithms described in Subsection B of Section 5. For this, we measure the computation time of the characteristic vector defined in Eq. (47) by the two methods. To compare them, the execution time improvement ratio (ETIR) is used as a criterion [35]. This ratio is defined as ETIR = (1 − Time1/Time2) × 100, where Time1 and Time2 are the execution times of the first and the second method, respectively; ETIR = 0 if both execution times are identical.

In the first experiment, the set of five binary images Spoon, Jar, Horse, Car, and Device with size 128 × 128 pixels (Fig. 2), selected from the well-known MPEG-7 CE-shape-1 Part B database [32], was used as test images. The computational process is performed for each of the five images, and the average times and ETIR obtained with the proposed fast method and the direct method are given in Table 9, which shows that our proposed method is faster than the direct method. In the second experiment, a set of five gray-scale images with a size of 128 × 128 pixels, shown in Fig. 12, selected from the COIL-20 database [34], was used as test images. The computational process is again performed for each of the five images, and the average times and ETIR are included in Table 9. The result indicates once more that our method outperforms the direct method: the proposed method is faster because the computation of Hahn's discrete orthogonal invariant moments by the fast method depends only on the number of blocks in the image.

Fig. 12. Set of test gray-scale images (a) Cup, (b) Duck, (c) Box, (d) Cat, and (e) Object.

Table 9. Average Times and Reduction Percentage of Hahn's Invariant Moments for Binary and Gray-Scale Images

Image Set | Direct Method | Fast Method | ETIR%
Set of binary images | 0.1418 | 0.0485 | 65.80%
Gray-scale images    | 0.2562 | 0.1025 | 59.59%

7. CONCLUSION

In this paper, we have presented a new fast method for the computation of Hahn's discrete orthogonal moments for binary and gray-scale images using some properties of Hahn's discrete orthogonal polynomials and the IBR. The computation of Hahn's discrete orthogonal moments with this method eliminates the propagation of numerical errors and depends only on the number of blocks, which significantly reduces the computation time because the number of blocks in the image is smaller than the size of the image. Experimental results also show the effectiveness of Hahn's discrete orthogonal moments for the reconstruction of binary and gray-scale images with and without noise. Moreover, we have proposed a new set of Hahn's moment invariants computed by the proposed fast method. The recognition accuracy of Hahn's moment invariants in object classification is higher than that of the Hu and Krawchouk moment invariants, especially for noisy images.

ACKNOWLEDGMENTS This work was supported in part by a grant from Moroccan pole of Competence STIC (Science and Technology of Information and Communication). We also thank the anonymous referees for their helpful comments and suggestions.

REFERENCES M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Trans. Inform. Theory 8, 179–187 (1962). 2. M. R. Teague, “Image analysis via the general theory of moments,” J. Opt. Soc. Am. 70, 920–930 (1980). 3. C. H. Teh and R. T. Chin, “On image analysis by the method of moments,” IEEE Trans. Pattern Anal. Mach. Intell. 10, 496–513 (1988). 4. A. Khotanzad and Y. H. Hong, “Invariant image recognition by Zernike moments,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 489–497 (1990). 5. S. X. Liao and M. Pawlak, “On image analysis by moments,” IEEE Trans. Pattern Anal. Mach. Intell. 18, 254–266 (1996). 6. C. W. Chong, R. Paramesran, and R. Mukundan, “Translation and scale invariants of Legendre moments,” Pattern Recogn. 37, 119–129 (2004). 7. H. Lin, J. Si, and G. P. Abousleman, “Orthogonal rotationinvariant moments for digital image processing,” IEEE Trans. Image Process. 17, 272–282 (2008). 1.

Vol. 30, No. 11 / November 2013 / J. Opt. Soc. Am. A

2393

8. B. Yang, G. Li, H. Zhang, and M. Dai, “Rotation and translation invariants of Gaussian–Hermite moments,” Pattern Recogn. Lett. 32, 1283–1298 (2011). 9. R. Mukundan, S. H. Ong, and P. A. Lee, “Image analysis by Tchebichef moments,” IEEE Trans. Image Process. 10, 1357–1364 (2001). 10. P. T. Yap, R. Paramesran, and S. H. Ong, “Image analysis by Krawchouk moments,” IEEE Trans. Image Process. 12, 1367–1377 (2003). 11. H. Zhu, M. Liu, H. Shu, H. Zhang, and L. Luo, “General form for obtaining discrete orthogonal moments,” IET Image Process. 4, 335–352 (2010). 12. M. Sayyouri, A. Hmimd, and H. Qjidaa, “A fast computation of Charlier moments for binary and gray-scale images,” in The 2nd Edition of the IEEE Colloquium on Information Sciences and Technology (CIST’12), Fez, Morocco, October 22–24, 2012. 13. P. T. Yap, P. Raveendran, and S. H. Ong, “Image analysis using Hahn moments,” IEEE Trans. Pattern Anal. Mach. Intell. 29, 2057–2062 (2007). 14. M. Sayyouri, A. Hmimd, and H. Qjidaa, “A fast computation of Hahn moments for binary and gray-scale images,” in IEEE International Conference on Complex Systems (ICCS’12), Agadir, Morocco, November 5–6, 2012. 15. L. Yang and F. Albregtsen, “Fast and exact computation of Cartesian geometric moments using discrete Green’s theorem,” Pattern Recogn. 29, 1069–1073 (1996). 16. I. M. Spiliotis and B. G. Mertzios, “Real-time computation of two-dimensional moments on binary images using image block representation,” IEEE Trans. Image Process. 7, 1609–1615 (1998). 17. K. M. Hosny, “Exact and fast computation of geometric moments for gray level images,” Appl. Math. Comput. 189, 1214–1222 (2007). 18. H. Z. Shu, H. Zhang, B. J. Chen, P. Haigron, and L. M. Luo, “Fast computation of Tchebichef moments for binary and gray-scale images,” IEEE Trans. Image Process. 19, 3171– 3180 (2010). 19. G. A. Papakostas, E. G. Karakasis, and D. E. Koulourisotis, “Efficient and accurate computation of geometric moments on gray-scale images,” Pattern Recogn. 41, 1895– 1904 (2008). 20. C. Lim, B. Honarvar, K. H. Thung, and R. Paramesran, “Fast computation of exact Zernike moments using cascaded digital filters,” Inf. Sci. 181, 3638–3651 (2011). 21. G. A. Papakostas, E. G. Karakasis, and D. E. Koulouriotis, “Accurate and speedy computation of image Legendre moments for computer vision applications,” Image Vision Comput. 28, 414–423 (2010). 22. J. Flusser, “Pattern recognition by affine moment invariants,” Pattern Recogn. 26, 167–174 (1993). 23. C.-W. Chong, P. Raveendran, and R. Mukundan, “Translation invariants of Zernike moments,” Pattern Recogn. 36, 1765–1773 (2003). 24. C.-W. Chong, P. Raveendran, and R. Mukundan, “Translation and scale invariants of Legendre moments,” Pattern Recogn. 37, 119–129 (2004). 25. H. Zhu, H. Shu, T. Xia, L. Luo, and J. L. Coatrieux, “Translation and scale invariants of Tchebichef moments,” Pattern Recogn. 40, 2530–2542 (2007). 26. G. A. Papakostas, E. G. Karakasis, and D. E. Koulouriotis, “Novel moment invariants for improved classification performance in computer vision applications,” Pattern Recogn. 43, 58–68 (2010). 27. H. Zhu, H. Shu, J. Zhou, L. Luo, and J. L. Coatrieux, “Image analysis by discrete orthogonal dual Hahn moments,” Pattern Recogn. Lett. 28, 1688–1704 (2007). 28. E. G. Karakasis, G. A. Papakostas, D. E. Koulouriotis, and V. D. Tourassis, “Generalized dual Hahn moment invariants,” Pattern Recogn. 46, 1998–2014 (2013). 29. H. Zhang, H. Z. Shu, G. N. Han, G. Coatrieux, L. M. Luo, and J. L. 
Coatrieux, “Blurred image recognition by Legendre moment invariants,” IEEE Trans. Image Process. 19, 596–611 (2010). 30. R. Koekoek, P. A. Lesky, and R. F. Swarttouw, Hypergeometric Orthogonal Polynomials and Their q-Analogues, Springer

2394

J. Opt. Soc. Am. A / Vol. 30, No. 11 / November 2013

Monographs in Mathematics, Library of Congress Control Number: 2010923797 (Springer-Verlag, 2010). 31. A. F. Nikiforov, S. K. Suslov, and V. B. Uvarov, Classical Orthogonal Polynomials of a Discrete Variable (Springer, 1991). 32. http://www.dabi.temple.edu/~shape/MPEG7/dataset.html.

Sayyouri et al. 33. R. Mukundan and K. R. Ramakrishnan, Moment Functions in Image Analysis (World Scientific, 1998). 34. http://www.cs.columbia.edu/CAVE/software/softlib/coil‑20.php. 35. K. M. Hosny, “Fast computation of accurate Gaussian–Hermite moments for image processing applications,” Digit. Signal Process. 22, 476–485 (2012).