2013 6th International Congress on Image and Signal Processing (CISP 2013)

Fingerprint Image Watermarking Approach Using DTCWT without Corrupting Minutiae

Mohammed Alkhathami, Fengling Han and Ron Van Schyndel
School of Information Technology and Computer Science, RMIT University, Australia
Email: [email protected]

Abstract—This paper proposes a new digital watermarking technique for fingerprint images using the Dual-Tree Complex Wavelet Transform (DTCWT). The watermark is embedded into the real and imaginary parts of the DTCWT wavelet coefficients. This work focuses on watermarking techniques for fingerprint images collected from different angles without corrupting minutiae points. We investigate the effect of the watermark on the fingerprint features after the embedding process. VeriFinger V5.0 is used to determine the matching score between the template and the watermarked images. The user's identity is linked with the fingerprint features to add an authentication factor to the authentication process. The SHA2 hash function is used to encode the user identification number by generating a hash value and converting it into a binary image to construct the watermark data. The original fingerprint image is not required to extract the watermark data. The proposed method has been tested using the CASIA fingerprint image database with 500 fingerprint images from 100 persons.

Keywords—Fingerprint, DTCWT, Watermark, Matching.

I. INTRODUCTION

A fingerprint is represented by the impression of the pattern of ridges and valleys on the surface of a fingertip. The uniqueness of a fingerprint is determined by the combination of this ridge-and-valley pattern and the minutiae points [1]. Due to their uniqueness, fingerprint images are widely used for user authentication, so their protection has become an extremely important issue. Digital watermarking is one of the most researched methods for protecting fingerprint images, and there are several characteristics that a good watermarking technique should have [2], [3]: for example, it should be perceptually invisible and resistant to common image processing operations. There are two main challenges for fingerprint watermarking algorithms. First, the fingerprint features may be affected by the embedded watermark. Second, displacements and rotations of fingerprints lead to different sets of features.

Based on the domain in which the watermark pattern is embedded, watermarking techniques can be classified into spatial-domain [3], [4] and transform-domain [5], [6] algorithms. In spatial-domain methods, the watermark pattern is embedded directly into the fingerprint image pixels. An example is the least significant bit (LSB) method, where the watermark data is embedded into the least significant bit [7]. This technique has advantages such as low complexity and ease of implementation; however, LSB methods are not robust to some image attacks, in particular lossy compression [8].

978-1-4799-2764-7/13/$31.00 ©2013 IEEE

Transform-domain methods, on the other hand, are widely used for robust watermarking, where the watermark is embedded by modifying frequency coefficients [9]. Popular examples are the Discrete Cosine Transform (DCT) [10] and the Discrete Wavelet Transform (DWT) [11]; methods based on other transforms, such as the Discrete Fourier Transform (DFT), have also been proposed [12]. Determining the frequency band in which to embed the watermark pattern is an important issue in transform-domain algorithms. In general, the most significant part is not preferred for embedding a robust watermark because the watermark may become perceptually visible [13]. Thus, choosing where to embed the watermark is a tradeoff between imperceptibility and robustness.

Recently, [14] proposed a watermarking algorithm for fingerprint image protection without corrupting minutiae points. This method embeds a watermark into a fingerprint image using the DCT: the watermark is embedded into DCT blocks that contain two minutiae points or fewer. The template and the host fingerprint are exactly the same image, and the watermark's effect is determined by comparing the total number of minutiae points before and after embedding.

Of the various transform-domain methods, those based on the DWT have become the most researched in image processing applications, including image watermarking. In the DWT, the image is divided at level 1 into four sub-bands called LL, LH, HL, and HH, where LL contains the low-frequency coefficients of the input signal and LH, HL, and HH contain the high-frequency coefficients with the detailed information. The LL band is decomposed again until the optimal decomposition level specified by the implemented technique is reached.
The aim of this paper is to embed watermarks into fingerprint images collected from different angles without corrupting minutiae points. DWT features are separable and suffer from weak directional selectivity for diagonal features [15]. To address this problem, a complex extension of the DWT called the Dual Tree Complex Wavelet Transform (DTCWT) [16] is used. The DWT offers selectivity in only three directions, while the 2D DTCWT offers 12 directional wavelets, 6 for the real tree and 6 for the imaginary tree, oriented at approximately ±15°, ±45° and ±75° [17].


This paper extends the DCT watermarking algorithm presented in [14], where the template and the tested image were the same image. Unlike the DCT algorithm [14], the algorithm proposed in this paper embeds the watermark into different images collected from different angles of the same finger. Our watermarking algorithm utilizes the DTCWT to embed the watermark into all parts of the fingerprint image without corrupting its features. In addition, it retains mostly the same set of fingerprint features even after rotations have been applied to the fingerprint images.

To evaluate our watermarking algorithm, five fingerprint images of the same finger are investigated for each person. One image is set as a template without a watermark; the other four, called tested images, contain the watermark information. VeriFinger V5.0 [16] is used to determine the matching score between the template and the tested fingerprint images based on their features. To determine the watermark's effect on the fingerprint images, VeriFinger first compares the template with the other fingerprint images without the watermark; we then compare the template against the watermarked images. Comparing the results of these two tests determines the influence of the watermark on the fingerprint features. To test the robustness of the proposed algorithm against rotation, we investigate the matching score between the template and the watermarked images after rotations have been applied to the tested images.

The experimental results show that the proposed watermarking algorithm does not affect the fingerprint features (minutiae points). The embedded watermark is invisible to the human visual system (HVS) and robust to various kinds of image attacks and rotations.

This paper is organized as follows: Section II presents background information about the DTCWT. The watermark embedding algorithm is described in section III.
Experimental results of the proposed method are discussed in section IV. The conclusion is presented in the last section.

II. DUAL TREE COMPLEX WAVELET TRANSFORM

A. Complex Wavelet Transform

A complex wavelet transform (CWT) solves four problems present in the discrete wavelet transform (directionality, shifting, oscillation and aliasing) by using complex-valued basis functions [16]. This change is inspired by the basis functions of the Fourier transform. The CWT is represented in terms of complex-valued scaling functions [17] and complex-valued wavelet functions as follows:

ψo(s) = ψa(s) + jψb(s)   (1)

where ψa(s) is the real part and jψb(s) is the imaginary part. ψa(s) and ψb(s) form a Hilbert transform pair, and ψo(s) is the analytic signal [18], [19]. The complex scaling function is defined in a similar way. The CWT is obtained by projecting the signal onto the complex basis functions as follows:

dx(s, t) = da(s, t) + jdb(s, t)   (2)

where s is the scaling factor and t is the time shift. The CWT is shift invariant and free of aliasing. For perfect reconstruction purposes, the dual tree complex wavelet transform is used.

B. 2D Dual Tree Complex Wavelet Transform

The dual-tree complex wavelet transform employs two real DWTs, which jointly give an overall transform. The first DWT gives the real part and the second gives the imaginary part. The two real wavelets associated with the two real wavelet transforms are denoted ψi(s) and ψj(s). After the filters are generated, the complex wavelet is approximately estimated as follows:

ψ(s) = ψi(s) + jψj(s)   (3)

In the 2D dual-tree complex wavelet transform, the 2D wavelet function ψ(a, b) = ψ(a)ψ(b) is associated with the row-column implementation of the wavelet transform, where ψ(a) is a complex wavelet given by ψ(a) = ψi(a) + jψj(a). ψ(a, b) is obtained from:

ψ(a, b) = [ψi(a) + jψj(a)] [ψi(b) + jψj(b)]
        = ψi(a)ψi(b) − ψj(a)ψj(b) + j[ψj(a)ψi(b) + ψi(a)ψj(b)]   (4)

Taking the real part of this complex wavelet gives the sum of two separable wavelets:

Real{ψ(a, b)} = ψi(a)ψi(b) − ψj(a)ψj(b)   (5)

The watermark has a checkerboard-type pattern. A directional filter of the DTCWT is applied to the watermark image for decomposition purposes; the resultant images are the outputs of one directional filter from the low-pass and high-pass filters. The real and imaginary parts of the DTCWT decomposition of the fingerprint and watermark images are shown in figures 1 and 2.

Fig. 1: (a) Fingerprint image (b) Fingerprint imaginary part (c) Fingerprint real part.

Fig. 2: (a) Watermark image (b) Watermark imaginary part (c) Watermark real part.
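The separable expansion in equations (4) and (5) can be checked numerically. The sampled 1D wavelets below are random stand-ins, not the actual DTCWT filters; only the algebraic identity is being verified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sampled 1D wavelets of the two real trees (stand-ins for psi_i, psi_j)
psi_i_a, psi_j_a = rng.standard_normal(16), rng.standard_normal(16)
psi_i_b, psi_j_b = rng.standard_normal(16), rng.standard_normal(16)

# Complex 1D wavelets: psi(a) = psi_i(a) + j psi_j(a)
psi_a = psi_i_a + 1j * psi_j_a
psi_b = psi_i_b + 1j * psi_j_b

# 2D separable product, eq. (4): psi(a, b) = psi(a) psi(b)
psi_2d = np.outer(psi_a, psi_b)

# Eq. (5): the real part is the difference of two separable real wavelets
real_expanded = np.outer(psi_i_a, psi_i_b) - np.outer(psi_j_a, psi_j_b)
assert np.allclose(psi_2d.real, real_expanded)

# and the imaginary part matches the bracketed term in eq. (4)
imag_expanded = np.outer(psi_j_a, psi_i_b) + np.outer(psi_i_a, psi_j_b)
assert np.allclose(psi_2d.imag, imag_expanded)
```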


The DTCWT coefficients obtained from the first filter bank are called the real part, and the coefficients obtained from the other filter bank are called the imaginary part. The real part of the image contains less important data than the imaginary part, which carries more information. Unlike the algorithm introduced in [20], which embeds the watermark into the real part of the image only, our algorithm embeds the watermark into the whole image, that is, both the real and imaginary parts. This is preferable because an attack during transmission can then be easily identified. In other words, if only the real part is used for watermark embedding and an unauthorized person tries to extract the watermark, the minutia point information will not be changed, because the real part contains less information. When both the real and imaginary parts are used, an attack can be easily identified, since the minutia distribution will be changed.

III. WATERMARKING ALGORITHM

In the proposed algorithm, user identity is combined with biometric identification during the authentication process. The user identification number consists of 12 decimal digits. It is encoded using the SHA2 hash function, which generates a unique hash value. SHA2 is a secure one-way hash function, so it is not possible to recover the user identification number from the hash value, and it is infeasible to change a message without modifying its hash value. The hash value is then converted into binary and then into a watermark image of size 256×256, equal to the size of the fingerprint image.

The Dual-Tree Complex Wavelet Transform (DTCWT) domain is used to embed the watermark data into the fingerprint images. The DTCWT removes the directionality and shift-variance problems present in wavelet transforms by using complex basis functions. For decomposition purposes, the DTCWT uses directional filters. These directional filters are able to extract the same information, such as minutia locations, even after the watermark has been embedded into the image. Multiplicative fusion is used to distribute the watermark evenly over the whole fingerprint image, including the real and imaginary parts, without affecting the information present in the fingerprint image. We use the multiplicative fusion rule to combine the fingerprint image coefficients and the watermark image coefficients after decomposition; in this way, we embed the watermark into the fingerprint images. The algorithm relies on an information fusion-based approach and comprises the following steps:

1) Generate the hash value of the user identification number using the SHA2 hash function.
2) Convert the hash value into a binary image to construct the watermark image.
3) Decompose the fingerprint and watermark images using the dual-tree complex wavelet transform.
4) Apply the multiplicative fusion rule to combine the watermark and fingerprint image coefficients.
5) Apply the inverse DTCWT to obtain the watermarked image.
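Steps 1 and 2 can be sketched as follows. The paper does not state which SHA-2 variant is used or how the hash bits are expanded to fill a 256×256 image, so SHA-256 and cyclic bit tiling are assumptions made here for illustration.

```python
import hashlib
import numpy as np

def watermark_from_id(user_id: str, size: int = 256) -> np.ndarray:
    """Build a size x size binary watermark image from a user ID.

    Steps 1-2 of the algorithm: hash the 12-digit ID with SHA-2
    (SHA-256 assumed), turn the 256-bit digest into bits, and tile
    the bits to fill the image (the tiling is an assumption; the
    paper does not specify how 256 bits become a 256x256 image).
    """
    digest = hashlib.sha256(user_id.encode("ascii")).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))  # 256 bits
    tiled = np.resize(bits, size * size)  # repeat bits cyclically
    return tiled.reshape(size, size)      # values in {0, 1}

wm = watermark_from_id("123456789012")
print(wm.shape, wm.min(), wm.max())  # (256, 256) 0 1
```

Because SHA-2 is one-way, the watermark reveals nothing about the identification number, yet any change to the number produces a completely different watermark.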

Figure 3 shows the block diagram of the watermarking algorithm.

Fig. 3: Watermarking algorithm diagram

A. Algorithm Description

First, the DTCWT is performed on both the fingerprint and watermark images up to four levels. At each level, the DTCWT coefficients are sorted by magnitude in ascending order. F(x, y) is the image, and X, Y and Z are the real, imaginary and complex coefficients at each level of decomposition, respectively. Here m = 1…4 indexes the decomposition level, and n = 1…N indexes the coefficients present at each level.

Figure 4 shows the histogram of the DTCWT coefficients at the first level of decomposition. Most of the coefficients are centered around zero and range between −100 and +100. In this histogram, high-value coefficients represent frequency transitions. In fingerprint image decomposition, high-value coefficients represent ridges, short ridges and other transition points; these points are usually called minutia points. The watermark image is a black and white image. When the watermark is decomposed using the DTCWT, its coefficients lie in the range −80 to +80; the coefficient distribution is mainly below 10, with very few coefficients above 10. These high-value coefficients represent the edges from black to white or white to black. In fingerprint decomposition, the wavelet coefficient range increases drastically, from −300 to +350 at level 2 and from −600 to +600 at level 3; at the 4th level, the DTCWT coefficients are in the range −700 to 700. In the watermark image, the DTCWT coefficients vary from about −150 to +130 at level 2 and from −300 to +300 at level 3. The multiplicative fusion rule does not greatly affect the fingerprint image coefficients because the number of high-value coefficients in the watermark image is much smaller. In order to keep the coefficient values low, we normalize the coefficients after decomposition: at each level, we divide the DTCWT coefficients by the average value of the coefficients at that level.
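The normalization step might look as follows, under the assumption that "the average value of coefficients" means the mean coefficient magnitude at that level (the paper does not pin the definition down):

```python
import numpy as np

def normalize_level(Z: np.ndarray) -> np.ndarray:
    """Scale one level's complex DTCWT coefficients by dividing by
    their mean magnitude (one possible reading of 'the average
    value of coefficients')."""
    avg = np.mean(np.abs(Z))
    return Z / avg if avg > 0 else Z

# Toy complex coefficients standing in for one decomposition level
Z = np.array([[3 + 4j, 0 + 1j], [2 - 2j, -1 + 0j]])
Zn = normalize_level(Z)
print(np.mean(np.abs(Zn)))  # mean magnitude is 1 after normalization
```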
Moreover, when we decompose a fingerprint image using the DTCWT, the high-value coefficients correspond to the minutia points, while the continuous ridge lines correspond to low-value DTCWT coefficients. By multiplying the watermark image coefficients with the fingerprint image coefficients, these high values remain high and the distribution of minutiae points is not changed. Fingerprint minutiae points are usually distributed in the middle of the finger.

In the DTCWT, the image is decomposed using two 2D DWTs, so we have extra coefficients. These extra coefficients come from the two 2D DWT decompositions: one set represents the real part and the other set represents the imaginary part. The advantage of the extra coefficients is that if a change occurs in one coefficient, the other accommodates this change, so the final result does not change. Also, the watermark image is a black and white image, which has large DTCWT coefficients at the edges and lower-value coefficients elsewhere. In the proposed algorithm, the watermark does not affect the minutia points because the DTCWT has an oversampling property, which means dense sampling. By multiplying the watermark coefficients with the image coefficients, only the edge coefficients change greatly, while the changes to the other coefficients are much smaller. In this case, the change to the coefficients does not appear in the image domain.

Fig. 4: DTCWT coefficients' histogram of fingerprint

Fig. 5: DTCWT coefficients of watermarked image

The DTCWT coefficients of the fingerprint image are denoted as:

Zmn = Xmn + jYmn   (6)

The DTCWT coefficients of the watermark image are denoted as:

Wmn = Umn + jVmn   (7)

where U, V and W are the real, imaginary and complex coefficients at each level of decomposition, respectively. After this decomposition is performed, the fusion rule is applied to embed the watermark into the fingerprint image. For embedding, the multiplicative rule is used at each scale and orientation to combine the real and imaginary parts using their coefficients, so the watermark is evenly distributed over the whole fingerprint image; the resultant image is shown in figure 5. The multiplicative fusion rule stretches the histogram of all the bands, and due to this stretching the resultant coefficients (after multiplying the DTCWT coefficients) are distributed evenly over the whole range. After applying the inverse DTCWT, the resultant fingerprint image is sharp due to the histogram stretching, and the watermark does not appear in the watermarked image. Some statistical measures of the image, such as the mean and standard deviation, are also changed by the multiplicative fusion rule: if the mean and standard deviation are lower, the image looks sharp and bright. Due to multiplicative fusion, the mean and standard deviation become smaller, so the host fingerprint image looks bright and sharp and the watermark is not visible.

Since the watermark image coefficients contain many zeros, multiplying by these zeros would set the corresponding fingerprint coefficients to zero. To avoid this, we add the original coefficient values (after decomposition) to the values after multiplication, as shown in equation 8. At each level of decomposition, the multiplicative rule is applied as follows:

Znew = Zmn + Xmn · Umn + j(Ymn · Vmn)   (8)

where Znew is the modified DTCWT coefficient. In the presented method, we multiply the real part coefficients of the fingerprint and watermark images, and the imaginary part coefficients of the fingerprint and watermark images, separately. These values are added to the original fingerprint coefficients Zmn = Xmn + jYmn. After all the DTCWT coefficients have been modified, the inverse wavelet transform is applied to return to the image domain and obtain the watermarked image.
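The fusion rule of equation (8) can be sketched on toy coefficient arrays. The random arrays below stand in for one level of DTCWT coefficients; the final division step is a simplified inverse (the paper's extraction rule, eq. (9) in section III-B, additionally uses an empirical α and an offset in the denominator).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for one level of DTCWT coefficients (random, for illustration)
X, Y = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))  # fingerprint
U, V = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))  # watermark

Z = X + 1j * Y  # fingerprint coefficients, eq. (6)
W = U + 1j * V  # watermark coefficients, eq. (7)

# Multiplicative fusion rule, eq. (8): real and imaginary parts are
# multiplied separately, and the original coefficients are added back
# so that zero watermark coefficients leave the fingerprint untouched.
Z_new = Z + X * U + 1j * (Y * V)

# Where the watermark coefficients are zero, the fingerprint
# coefficient passes through unchanged:
U0, V0 = np.zeros_like(U), np.zeros_like(V)
assert np.allclose(Z + X * U0 + 1j * (Y * V0), Z)

# The watermark's real part can be recovered by inverting the fusion:
# Re(Z_new) = X (1 + U)  =>  U = Re(Z_new) / X - 1   (where X != 0)
U_rec = Z_new.real / X - 1
assert np.allclose(U_rec, U)
```

The imaginary part V is recovered the same way from Im(Z_new)/Y − 1; the template coefficients X and Y are needed, which is why extraction uses the stored template image.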

The redundancy of information (the real and imaginary parts) present in the DTCWT assists the perfect reconstruction of the image. This property is very useful because even if we make small changes to the DTCWT coefficients, we still obtain an almost perfect reconstruction of the original image. The wavelet coefficients of the fingerprint image are changed during the embedding step, but we still obtain almost the original fingerprint image. In this way, the important features present in the original image, such as the structure of the minutia set, are preserved and are not affected by the watermark.

B. Watermark Extraction

To extract the watermark at the receiver end, the following steps are applied:

1) Decompose the template image and the watermarked image using the DTCWT.
2) Apply the division equation at each level of the DTCWT to invert the multiplicative fusion rule used in the embedding stage:

W(i, j) = (1/α) [E′(i, j) / (F′(i, j) + 1) − 1]   (9)

where W(i, j) is the watermark coefficient, E′(i, j) is the DTCWT coefficient of the watermarked image at this level, F′(i, j) is the DTCWT coefficient of the template image stored in the database at this level, and α is an empirical parameter, set to 0.2 by experimentation.
3) After obtaining the watermark coefficients of each level, perform the inverse DTCWT to obtain the watermark.

IV. EXPERIMENTAL RESULTS

Fingerprint features are very important for recognition purposes. The uniqueness of a fingerprint image is determined by the minutiae locations. Usually, fingerprint recognition algorithms compare the features of the input image against the template image in the database to find positive matching results. The matching score is based on the matched pairs of minutiae points and the minutiae locations. VeriFinger V5.0 is used to determine the matching between the template and watermarked images. It has a threshold that the matching score must exceed to obtain a positive matching result. During the matching process, the watermarked image is compared with the template image and the matching score is generated. If the matching score exceeds the matching threshold, the score is returned; otherwise a zero value is returned, which means the watermarked and template images are not matched. Figure 6 shows the matching process diagram of VeriFinger.

The performance of the watermarking algorithm has been evaluated using the CASIA fingerprint image database V5.0, which is publicly available. In the experiment, 500 fingerprint images from 100 persons, with 5 images of one finger per person, are tested. One fingerprint image for each person is set as the template and the watermark is embedded into the other four images. We perform two sets of experiments: the first investigates the effect of the embedded watermark on the fingerprint features; the second tests the robustness of the proposed watermarking algorithm against image rotation and different types of image noise. The experimental results are presented in the following two subsections.

A. Watermark Effect Investigation

To evaluate the influence of the watermark on the fingerprint images, two verification tests are performed using VeriFinger.
The objective of these two tests is to show that our watermarking algorithm does not affect the minutiae points present in the fingerprint images. In the first test, we compare the template image against all input images for each person without the watermark. In the second test, we compare the template image with all the watermarked images for each person. The matching score of each corresponding comparison is then compared to determine the influence of the watermark on the fingerprint features. VeriFinger extracts the minutiae points from both the template and the tested images and returns the matching score based on the similarity between the two images. In addition, it returns the matched pairs in both images; matching between minutiae point pairs is based on the corresponding minutiae structure in both fingerprint images. In both tests, we investigate the matching score and the total number of matched pairs. VeriFinger is still able to verify each person even after the watermark is embedded: a matching score is returned for all matching processes, which means it passes the matching threshold.

Fig. 6: VeriFinger matching process

Figure 7 presents the extracted features in the template and the input image before and after the watermark has been embedded. These three images, collected from one finger, show the effect of the watermark on the fingerprint features during the matching process. Figure 7(a) is the template image, while figures 7(b) and 7(c) show the tested image: (b) without the watermark and (c) with the watermark. The circles indicate the extracted features in the images. VeriFinger extracts all possible features, but this does not mean all of them are matched in the corresponding image. The aim of this comparison is to determine the matched pairs between the template and the tested image without the watermark, and to compare them with the matched pairs between the template and the input image after adding the watermark. We found a difference of at most one matched pair between the two comparisons in some images, which means the proposed watermarking algorithm does not affect the fingerprint features.

In our experiment, we compare all four input images for each person with the template using the 1:1 verification method, and calculate the mean matching score for each person before and after watermark embedding. The average matching score over the four comparisons for each person is calculated before watermark embedding, and in the same way after watermark embedding. Figure 8 shows the comparison of the matching scores before and after watermark embedding. The introduced watermarking method does not affect the fingerprint features, and each person is verified as genuine.

For further evaluation, we compare the matched minutiae pairs of the template and tested images before and after watermark embedding. We determine the mean number of matched pairs between the template and all four tested images without the watermark, and then compare the results with the corresponding results after adding the watermark to all four tested images. Figure 9 shows the total number of matched minutiae pairs for 10 persons. We found almost identical results before and after watermark embedding, which means the proposed watermarking method does not corrupt the fingerprint features.
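VeriFinger's minutiae matcher is proprietary, so the two-test protocol above can only be sketched with a stand-in similarity score. The `match_score` function below is a hypothetical normalized-correlation substitute, and the "watermark" perturbation is synthetic; only the evaluation structure (mean score per person, before vs. after embedding) mirrors the experiment.

```python
import numpy as np

def match_score(template: np.ndarray, probe: np.ndarray) -> float:
    """Stand-in for VeriFinger's matcher: normalized correlation.
    (VeriFinger matches minutiae structures; this toy score only
    illustrates the evaluation protocol, not the real matcher.)"""
    t = template - template.mean()
    p = probe - probe.mean()
    return float((t * p).sum() / (np.linalg.norm(t) * np.linalg.norm(p)))

def mean_scores(template, probes):
    """Mean matching score of one person's tested images (1:1 tests)."""
    return float(np.mean([match_score(template, p) for p in probes]))

rng = np.random.default_rng(2)
template = rng.random((64, 64))
# Four "tested images": the template plus small acquisition noise
probes = [template + 0.05 * rng.standard_normal((64, 64)) for _ in range(4)]
# "Watermarked" versions: add a faint checkerboard-like perturbation
wm = 0.01 * (np.indices((64, 64)).sum(axis=0) % 2)
probes_wm = [p + wm for p in probes]

before = mean_scores(template, probes)
after = mean_scores(template, probes_wm)
# A faint watermark barely moves the mean matching score
print(round(before, 3), round(after, 3))
```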

B. The Robustness of the Watermarking Algorithm

To evaluate the robustness of the proposed watermarking technique against rotation, we add some rotations to the fingerprint images. The original image is set as the template for comparison purposes, and it is rotated by six different angles: ±5°, ±10°, ±15°. The watermark is embedded only into the rotated images. For example, we choose a fingerprint image and embed the watermark into it; we then determine the matching score between the original and the watermarked image. The watermarked image is rotated by the six angles and, for each angle, the rotated watermarked image is compared with the template image. Table I shows the matching scores between the template image and the rotated images after the watermark embedding process. Angle 0 refers to the matching score between the template image and the watermarked image without rotation; the other angles (±5°, ±10°, ±15°) give the matching scores of the rotated watermarked images against the template. The matching scores are compared to determine the robustness of the watermarking method against rotation. In this step, 100 fingerprint images from 100 persons are tested.

Table I shows stable matching scores between the template images and the rotated watermarked images. If two exact copies of one image are compared, VeriFinger returns 1770 as the matching score, which is the maximum. In our experiment, the template image and the host images are different. Five random comparison results are shown in table I. To illustrate, the first watermarked image without rotation is compared against the template image and the matching score is 0.86; after different rotations of the watermarked image, the matching score against the template remains almost the same as the score without rotation. If the image were affected by rotation, there would be a large difference in the matching score. According to the matching results presented in table I, the proposed algorithm shows high robustness against rotation.

Fig. 7: (a) Minutiae of template image (b) Minutiae of tested image without watermark (c) Minutiae of tested image with watermark.

Fig. 8: Matched score in template vs. watermarked images

Fig. 9: Matched pairs in template vs. watermarked images

TABLE I: Matching score between the template and rotated images

Image No. |   0  |  +5  | +10  | +15  |  -5  | -10  | -15
    1     | 0.86 | 0.86 | 0.85 | 0.85 | 0.86 | 0.85 | 0.85
    2     | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78
    3     | 0.80 | 0.80 | 0.80 | 0.80 | 0.80 | 0.80 | 0.80
    4     | 0.83 | 0.83 | 0.83 | 0.83 | 0.83 | 0.83 | 0.83
    5     | 0.85 | 0.85 | 0.85 | 0.85 | 0.85 | 0.85 | 0.85
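The PSNR and MSE quality measures used in the noise experiments below (equations (10) and (11)) can be computed directly for 8-bit grey images. The salt-and-pepper corruption here is an illustrative toy, not the exact noise settings of table II.

```python
import numpy as np

def mse(I: np.ndarray, K: np.ndarray) -> float:
    """Mean squared error between two equal-size grey images, eq. (11)."""
    return float(np.mean((I.astype(np.float64) - K.astype(np.float64)) ** 2))

def psnr(I: np.ndarray, K: np.ndarray, R: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, eq. (10); R is the peak
    grey level (255 for an 8-bit image)."""
    e = mse(I, K)
    return float("inf") if e == 0 else float(10.0 * np.log10(R**2 / e))

# Toy 8-bit image plus salt-and-pepper-style corruption
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
noisy = img.copy()
flips = rng.random(img.shape) < 0.05  # corrupt 5% of the pixels
noisy[flips] = rng.choice([0, 255], size=int(flips.sum())).astype(np.uint8)

print(round(psnr(img, noisy), 2), "dB")  # higher PSNR = closer to original
```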

We test our algorithm against different types of noise attacks. We use Gaussian noise with high noise variance (35db), Salt and Pepper noise (with different noise density), Speckle and Poisson noise. Every type of noise exhibits different properties in the image. Salt and Pepper noise is very common while transmitting an image from the source and destination. It randomly distributes white and black pixels over the whole fingerprint image. Usually, fingerprint images have short ridges or broken ridges which appear similar to Salt and Pepper noise. Gaussian and Poisson noise follow the Gaussian and Poisson distributions, respectively and Speckle noise is a granular type noise. According to table II, the proposed algorithm shows good performance under noisy conditions. To evaluate our algorithm, we use PSNR (peak signal to noise ratio) as the measure. PSNR computes the peak signal to noise ratio in

1711

decibels (db) between two images. This ratio is used as the quality measure between the original image and the noisy image. High PSNR means the noisy image quality is high and the noisy image is similar to the original image. Our algorithm results can preserve high PSNR values of watermarked images which means noise attacks do not affect the fingerprint image much. PSNR is defined as  P SN R = 10 log10

R2 M SE

 (10)

where R is the maximum grey level of the image (for an 8-bit image R=255) and MSE is defined as: M SE =

angles and different kinds of noise attacks and a high level of robustness is achieved. R EFERENCES [1]

[2]

[3] [4]

m−1 n−1 1 X X [I (k , l) − K (ki , l) ]2 mn k=0 l=0

(11)

where I (k , l) is the original image and K (k , l) is the noisy image.

[5]

[6]

TABLE II: PSNR for different types of image noises. Image Gaussian No. (35Db) 1 2 3 4 5 6

26.28 26.27 26.44 26.33 26.20 26.22

Salt & Pepper (0.2 noise density) 36.23 35.76 36.02 34.89 35.72 34.42

Salt & Pepper (0.6 noise density) 31.98 30.87 31.67 31.45 30.45 30.54

Speckle (0.04 Parameter value) 18.64 18.61 18.30 18.42 18.75 18.82

Poisson

27.21 27.20 27.06 27.09 27.26 27.34

Table II shows the results of different noise attacks on the watermarked image. According to Table II, the proposed algorithm is effective, as the PSNR value remains high under all types of noise attack. A high PSNR means that a noise attack on a watermarked image does not affect features such as lines and ridges, and that a noise-attacked watermarked image is visually similar to the watermarked image without noise. Column 2 shows the PSNR values under a Gaussian noise attack; the PSNR values are high even under a strong (35 dB) attack. Similarly, columns 3, 4, 5 and 6 show the PSNR values under Salt & Pepper, Speckle and Poisson noise of different strengths. These PSNR values are high, which means the different noise attacks do not affect the fingerprint features.

V. CONCLUSION

In this paper, a new watermarking algorithm for fingerprint images using DTCWT is proposed. The proposed technique adds an authentication factor to the authorization process by combining the user's identity with biometric features. The watermark image is constructed from a hash value of the user's identification number. The fingerprint and watermark images are both decomposed into real and imaginary wavelet coefficients using DTCWT. The watermark is embedded into the whole fingerprint image, that is, into both the real and imaginary parts. The fingerprint features are not affected by the embedded watermark. VeriFinger is used to verify the performance of the proposed method by matching the template and watermarked fingerprint images. The matching results show that the fingerprint features after watermark embedding remain as they were before the embedding process. In our experiments, the CASIA database is used to evaluate the proposed approach. The robustness of our algorithm has been evaluated under different rotation angles and different kinds of noise attacks, and a high level of robustness is achieved.
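The conclusion describes constructing the watermark from a hash of the user's identification number. A minimal sketch of that step is shown below, assuming SHA-256 (a member of the SHA-2 family) and a 16×16 binary watermark image holding the 256 digest bits; the watermark dimensions, bit ordering, and function name are our assumptions, not the paper's specification.

```python
import hashlib
import numpy as np

def watermark_from_id(user_id: str) -> np.ndarray:
    # SHA-256 yields 256 bits, exactly enough for a 16x16 binary image
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bits.reshape(16, 16)

wm = watermark_from_id("user-0042")
print(wm.shape, wm.dtype)  # (16, 16) uint8
```

Because the hash is deterministic, the same user ID always regenerates the same watermark, so the original fingerprint image is not needed at extraction time to verify the embedded identity.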

REFERENCES

[1] Anil K. Jain, Salil Prabhakar, Lin Hong, and Sharath Pankanti. Filterbank-based fingerprint matching. IEEE Transactions on Image Processing, 9(5):846–859, 2000.
[2] Ingemar J. Cox, Joe Kilian, F. Thomson Leighton, and Talal Shamoon. Secure spread spectrum watermarking for multimedia. IEEE Transactions on Image Processing, 6(12):1673–1687, 1997.
[3] Chiou-Ting Hsu and Ja-Ling Wu. Hidden digital watermarks in images. IEEE Transactions on Image Processing, 8(1):58–68, 1999.
[4] Olivier Bruyndonckx, Jean-Jacques Quisquater, and Benoit Macq. Spatial method for copyright labeling of digital images. In Proc. IEEE Nonlinear Signal and Image Processing, pages 456–459, 1995.
[5] Jiwu Huang, Yun Q. Shi, and Yi Shi. Embedding image watermarks in DC components. IEEE Transactions on Circuits and Systems for Video Technology, 10(6):974–979, 2000.
[6] Qiang Cheng and Thomas S. Huang. Robust optimum detection of transform domain multiplicative watermarks. IEEE Transactions on Signal Processing, 51(4):906–924, 2003.
[7] Mehmet Utku Celik, Gaurav Sharma, A. Murat Tekalp, and Eli Saber. Lossless generalized-LSB data embedding. IEEE Transactions on Image Processing, 14(2):253–266, 2005.
[8] Bo Tao and Bradley Dickinson. Adaptive watermarking in the DCT domain. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-97), volume 4, pages 2985–2988, 1997.
[9] Vidyasagar M. Potdar, Song Han, and Elizabeth Chang. A survey of digital image watermarking techniques. In Proc. 3rd IEEE International Conference on Industrial Informatics (INDIN'05), pages 709–716, 2005.
[10] Juan R. Hernandez, Martin Amado, and Fernando Perez-Gonzalez. DCT-domain watermarking techniques for still images: detector performance analysis and a new structure. IEEE Transactions on Image Processing, 9(1):55–68, 2000.
[11] Mauro Barni, Franco Bartolini, and Alessandro Piva. Improved wavelet-based watermarking through pixel-wise masking. IEEE Transactions on Image Processing, 10(5):783–791, 2001.
[12] Mauro Barni, Franco Bartolini, Alessia De Rosa, and Alessandro Piva. A new decoder for the optimum recovery of nonadditive watermarks. IEEE Transactions on Image Processing, 10(5):755–766, 2001.
[13] Deepa Kundur and Dimitrios Hatzinakos. Toward robust logo watermarking using multiresolution image fusion principles. IEEE Transactions on Multimedia, 6(1):185–198, 2004.
[14] Mohammed Alkhathami, Fengling Han, and Ron Van Schyndel. Fingerprint minutiae protection using two watermarks. In Proc. 8th IEEE Conference on Industrial Electronics and Applications (ICIEA 2013), 2013.
[15] Nick Kingsbury. Image processing with complex wavelets. Philosophical Transactions of the Royal Society of London A, 357(1760):2543–2560, 1999.
[16] Ivan W. Selesnick, Richard G. Baraniuk, and Nick C. Kingsbury. The dual-tree complex wavelet transform. IEEE Signal Processing Magazine, 22(6):123–151, 2005.
[17] Manesh Kokare, Prabir K. Biswas, and Biswanath N. Chatterji. Texture image retrieval using new rotated complex wavelet filters. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 35(6):1168–1178, 2005.
[18] Ivan W. Selesnick. The design of approximate Hilbert transform pairs of wavelet bases. IEEE Transactions on Signal Processing, 50(5):1144–1152, 2002.
[19] Ivan W. Selesnick. Hilbert transform pairs of wavelet bases. IEEE Signal Processing Letters, 8(6):170–173, 2001.
[20] Samira Mabtoul, Elhassan Ibn Elhaj, and Driss Aboutajdine. Robust semi-blind digital image watermarking technique in DT-CWT domain. International Journal of Computer Science, 5(1), 2009.