Volume 5, Issue 8, August 2015
ISSN: 2277 128X
International Journal of Advanced Research in Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com
Partial Encryption for Colored Images Based on Face Detection
Haidar R. Shakir, Loay E. George, Ghada K. Tuma
College of Science, University of Baghdad, Iraq

Abstract— Today, the widespread use of digital images that are either transferred over networks or stored on disk has raised concerns over information security. Image confidentiality and privacy protection are most effectively achieved through image encryption. Digital images are large and complex, so the computational overhead and processing time required for total image encryption are the limiting factors that prevent it from being used more intensively in real-time applications. The complexity of digital images can alternatively be handled by selectively encrypting only the sensitive parts, such as human faces. This study proposes an encryption scheme that combines the Haar wavelet transform with a linear feedback shift register to conceal faces. The method has three major steps: face detection, image encryption, and decryption. Security experiments, including entropy and PSNR analyses, show that the proposed method offers high security, with entropy and PSNR values close to the ideal ones.

Keywords— Partial image encryption, Face detection, Face scrambling, Skin color

I. INTRODUCTION
In recent times, owing to improvements in computer performance, high-speed communication, and the rapid growth of Internet usage, sharing large amounts of digital content among many people has become very simple. However, this situation has raised concerns about protecting digital content, particularly the digital images published on social networking sites. A prominent example is the compromise of user accounts on the two largest social networks, Facebook and Twitter: every day more than 600,000 Facebook accounts are hacked [1], and in February 2013 more than 250,000 Twitter accounts were compromised [2].

Protecting digital content is the prime focus of recent studies. However, applying traditional encryption algorithms to digital image/video content is inefficient because of the long encryption time required for such a large amount of information. The latest trend is therefore to reduce the encryption time through partial encryption, in which only certain parts of the content are encrypted.

Partial encryption to minimize encryption and decryption time in image processing was suggested by Cheng and Li [3]. They identified two groups of algorithms, wavelet compression and quadtree compression, as appropriate for partial encryption. These two classes of algorithms are suited to low-bit-rate applications, and partial encryption designs were proposed for them. Quadtree compression produces two logical parts: the quadtree structure and the parameters describing each region, and the quadtree partial encryption scheme can be applied to both lossy and lossless compression. Pommer and Uhl [4] used the wavelet-packet tree structure for finding the feature region and for compressing the image; their paper shows that the encryption technique is related to the one based on the quadtree structure and that it provides confidentiality for retrieval-based visual communications. NaveenKumar and Panduranga [5] developed two distinctive methods for partial encryption of digital images.
The first technique divides an image, and the chosen parts are manually selected for encryption. The second technique automatically detects where objects are located, and the chosen objects are encrypted; the locations of the objects in the images are detected through morphology techniques. These two techniques were devised specifically for the encryption of medical and satellite images. Droogenbroeck and Benedett [6] encrypt the bits indicating the sign and magnitude of the non-zero DCT coefficients, while Suchindran and Nikolaos [7] operate on the low-frequency components in the frequency domain. By encrypting the feature bits of certain low-frequency components, these two methods can enhance the protection strength. Kester [8] suggested an image encryption method that selects facial parts of an (m x n) image and encrypts them by rearranging its RGB pixels. This method makes it difficult to restore an image after it has been encrypted, which is important for law-enforcement agencies that need to reconstruct a face from images or footage linked to abuse lawsuits. Nevertheless, from a human perspective, the features used by these earlier studies to reduce the encryption time are still low-level and do not contain significant information.

This paper proposes a partial encryption technique that uses the face region as the feature to be encrypted, since the face carries significant information and is the most important element of a digital image. By encrypting only the face region, the encryption time is reduced, and by adding the face-region location as a second key, the protection strength is increased. Section II gives the details of the proposed Digital Images Protection System (DIPS), and Section III explains the detection of the ROI area (i.e., the face region).

II. DIPS METHOD
Based on their complexity, the features categorized in the previous section were used for partial encryption. Because these features do not contain significant information from a human perspective, the strength of the encryption becomes weaker as the feature size is reduced further. The proposed method increases the efficiency of the encryption algorithm and reduces the encryption time through partial encryption, while a face detection algorithm locates the relevant parts of colored digital images. Since the face region is the most significant component of a digital image, the face region alone is used for encrypting the digital content.

III. DETECTION OF THE FACE REGION
The proposed system uses the face region as the main feature of the digital image to be encrypted. The HSV skin-color technique is used to detect the face region precisely; it separates skin regions from non-skin regions and uses edge detection to separate face regions from other objects or the background. Although face detection requires additional computation time, it reduces the size of the feature, gives exact results, and minimizes the encryption time, so only a small difference is observed in the total detection-and-encryption time. The digital images are assumed to be colored images for face-region detection. Under this assumption, the face detection process consists of two stages: a training stage and a detection stage. Figure (1) below shows the layout of the proposed face detection system.
Fig. (1): Layout of the face detection system

IV. HSV SKIN COLOR MODEL
To perform the face-region extraction process based on skin color, the development of the proposed system passed through two phases: (1) extraction of face-area samples and (2) clustering of the HSV color data.

A. Extraction of Training Images
To develop a skin color model, the distributions and attributes of different color subspaces were analyzed using a set of training image samples. The training set consists of 120 patches extracted from 89 color images acquired from the Caltech face dataset [9]. The set covers a wide range of variations (e.g., different ethnicities and skin colors). The images contain skin-color regions exposed to different forms of illumination, such as daylight illumination (outdoors), non-uniform illumination, and flashlight illumination (under dark conditions). The 89 training images were manually cropped in order to examine the color distribution of their skin regions. Some of the cropped skin training samples are shown in Figure (2) below.
Fig. (2): Cropped skin samples for training
B. HSV K-Means Clustering
The HSV color components of the training images are fed to a k-means clustering algorithm to identify the skin colors of the input images. To determine the initial centroid of each cluster, the mean (μ) and standard deviation (σ) of each color band (i.e., H, S, and V) are calculated over all skin patches. As a starting step, 8 initial color centroids are determined. The initial centroid vector, the mean vector, and the corresponding standard deviation vector are denoted Centroid_HSV, μ_HSV, and σ_HSV, respectively. Table (1) below presents the initial centroid values, which are produced using the following equation:

Centroid_{HSV} = \mu_{HSV} \pm \sigma_{HSV}    (1)

TABLE (1): K-MEANS INITIAL CENTROIDS FOR HSV COLOR

Cluster Number (k)   Hue (H)        Saturation (S)   Value (V)
0                    μ_H + σ_H      μ_S + σ_S        μ_V + σ_V
1                    μ_H + σ_H      μ_S - σ_S        μ_V + σ_V
2                    μ_H - σ_H      μ_S + σ_S        μ_V + σ_V
3                    μ_H - σ_H      μ_S - σ_S        μ_V + σ_V
4                    μ_H + σ_H      μ_S + σ_S        μ_V - σ_V
5                    μ_H + σ_H      μ_S - σ_S        μ_V - σ_V
6                    μ_H - σ_H      μ_S + σ_S        μ_V - σ_V
7                    μ_H - σ_H      μ_S - σ_S        μ_V - σ_V

The purpose of applying the k-means clustering algorithm is to establish a set of k clusters with minimum within-cluster distance. The objects are assembled into k clusters; the k cluster points {m_j} (j = 1, 2, …, k) of the face colors are found by minimizing D using the minimum-distance criterion:

D = \frac{1}{n} \sum_{i=1}^{n} \min_j d^2(x_i, m_j)    (2)

where d(x_i, m_j) is the Euclidean distance between x_i and m_j, and the points {m_j} (j = 1, 2, …, k) are the cluster centroids. The k-means clustering stops when the k cluster centroids are found such that the mean squared Euclidean distance between each training sample and its closest cluster centroid is minimized. Convergence is reached by repeating the k-means iterations several times (in this work, 7 iterations).

V. FACE DETECTION
A. Brightness Balance
In a face image, lighting usually affects a person's skin color and leads to a deviation from the person's real skin color. Color correction is performed using the Gray World Theory (GWT) [10], a lighting-compensation algorithm. The GWT method works as follows: the magnitudes of the red (R), green (G), and blue (B) stimuli in the recorded scene are identified, and the averages of all color channels (i.e., R_avg, G_avg, and B_avg) are calculated as illustrated in equations (3) and (4). A sample result is shown in Figure (3).

Gray_{avg} = \frac{Red_{avg} + Green_{avg} + Blue_{avg}}{3}    (3)

R' = R \times \frac{Gray_{avg}}{Red_{avg}}    (4a)
G' = G \times \frac{Gray_{avg}}{Green_{avg}}    (4b)
B' = B \times \frac{Gray_{avg}}{Blue_{avg}}    (4c)
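For illustration, the following minimal sketch (not the authors' implementation) reproduces the training steps above in NumPy: gray-world balancing per equations (3)-(4), then k-means on HSV skin samples starting from the eight mean +/- standard-deviation centroids of equation (1) and Table (1). The helper names, the synthetic data, and the fixed iteration count are assumptions.

import numpy as np

def gray_world(rgb):
    """Scale each channel so its mean equals the overall gray average (Eqs. 3-4)."""
    avg = rgb.reshape(-1, 3).mean(axis=0)             # Red_avg, Green_avg, Blue_avg
    gray_avg = avg.mean()                             # Eq. (3)
    return np.clip(rgb * (gray_avg / avg), 0, 255)    # Eqs. (4a)-(4c)

def initial_centroids(hsv_samples):
    """Eight centroids: every +/- sigma combination around the HSV mean (Eq. 1, Table 1)."""
    mu, sigma = hsv_samples.mean(axis=0), hsv_samples.std(axis=0)
    signs = np.array([[ 1,  1,  1], [ 1, -1,  1], [-1,  1,  1], [-1, -1,  1],
                      [ 1,  1, -1], [ 1, -1, -1], [-1,  1, -1], [-1, -1, -1]])
    return mu + signs * sigma

def kmeans(samples, centroids, iters=7):
    """Plain k-means with a fixed iteration count, as stated in the text."""
    for _ in range(iters):
        d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = samples[labels == j].mean(axis=0)
    return centroids

# Synthetic stand-in for the HSV values of the 120 cropped skin patches.
rng = np.random.default_rng(0)
skin_hsv = rng.normal([0.08, 0.45, 0.70], [0.02, 0.10, 0.15], size=(120, 3))
skin_centroids = kmeans(skin_hsv, initial_centroids(skin_hsv))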
Fig. (3): Original image (left) and result of color balance (right)

B. Skin Color Detection
After the color balance is applied, the image is converted into the HSV color space. At this stage, the k-means clustering result is used to create a black-and-white binary image. The binary image is produced using the equation below:
Minimum\ distance = \min_{k=0..7} \left( |I_H - C_k.H| + |I_S - C_k.S| + |I_V - C_k.V| \right)    (5)

where I denotes a pixel of the input image. The pixel identified with the minimum distance is set to white or black using a threshold T according to the formula below:

Bin = \begin{cases} 1 & \text{if } Minimum\ distance < T \\ 0 & \text{if } Minimum\ distance \geq T \end{cases}    (6)

C. Combining the Edge Image with the Skin Color Image
Most face detection algorithms based only on skin color work well if the image has a non-skin-tone background or if the people in the image wear non-skin-tone clothing. If the image contains skin-tone clothing or background, the algorithm identifies the entire region as a skin region, as illustrated in Figure (4); in such a case, the skin region of the candidate face may merge with the background, as shown in the figure. This necessitates a mechanism that separates the background regions from the candidate face to allow easy localization of the face. The problem is solved by taking the edges of the input image, particularly those of the skin regions, into consideration and combining the edge results with the results of the previous step. The most common edge detection methods include the Roberts cross, Sobel, Prewitt, Canny, and Laplacian-of-Gaussian operators. The method used in this study is the Roberts cross. The Roberts cross edge detector performs a simple and quick computation of the spatial gradient of a 2-D image, and the pixel values of the output image correspond to the estimated absolute magnitude of the spatial gradient of the input image. As illustrated in Figure (5), the Roberts cross edge detector consists of two 2×2 convolution kernels, each of which is a 90° rotation of the other.
Fig. (4): Original images (left) and result of skin color segmentation (right)

Gx:            Gy:
 1  0           0 -1
 0 -1           1  0

Fig. (5): Roberts 2×2 convolution kernels

D. Erosion
Edge detection facilitates the separation of the face region from other objects such as the background and clothing. However, in some instances, fusion with the edge image cannot fully separate the face region from other objects because of the color resemblance among neighboring objects. For this reason, erosion was also included in the proposed face detection system to increase its separation effectiveness. Face segmentation results before and after erosion are shown in Figure (6) below. In erosion, the binary image segments are shrunk: the pixels are scanned, and when a white pixel is detected its 8-connected neighbors are checked. If at least one of the 8-connected neighbors is black, the white pixel is removed.
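The following sketch combines the nearest-centroid skin mask of equations (5)-(6), the Roberts cross edges of Figure (5), and the 8-connected erosion just described. It is an illustration under stated assumptions (NumPy arrays, the threshold values, and the helper names are not from the paper), not the authors' code.

import numpy as np

def skin_mask(hsv, centroids, T):
    """hsv: HxWx3 float array; centroids: 8x3 array of (H, S, V) cluster centers."""
    # Sum of absolute channel differences to every centroid (Eq. 5).
    dists = np.abs(hsv[:, :, None, :] - centroids[None, None, :, :]).sum(axis=3)
    min_dist = dists.min(axis=2)
    return (min_dist < T).astype(np.uint8)           # Eq. (6): 1 = skin, 0 = non-skin

def roberts_edges(gray, edge_thr=0.1):
    """Absolute gradient magnitude from the two 2x2 Roberts kernels of Fig. (5)."""
    gx = gray[:-1, :-1] - gray[1:, 1:]                # kernel [[1, 0], [0, -1]]
    gy = gray[:-1, 1:] - gray[1:, :-1]                # kernel [[0, -1], [1, 0]]
    mag = np.zeros_like(gray)
    mag[:-1, :-1] = np.abs(gx) + np.abs(gy)
    return (mag > edge_thr).astype(np.uint8)

def erode_8connected(mask):
    """Remove any white pixel that has at least one black 8-connected neighbor."""
    padded = np.pad(mask, 1, constant_values=0)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy: 1 + dy + mask.shape[0],
                          1 + dx: 1 + dx + mask.shape[1]]
    return out & mask

# Example on synthetic data: skin pixels not lying on strong edges, then eroded.
rng = np.random.default_rng(0)
hsv_img = rng.random((32, 32, 3))
gray_img = hsv_img.mean(axis=2)
centroids = rng.random((8, 3))
candidates = erode_8connected(skin_mask(hsv_img, centroids, T=0.3)
                              & (1 - roberts_edges(gray_img)))
print(candidates.sum(), "candidate skin pixels")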
Fig. (6): Illustration of erosion: (A) face segment before erosion, (B) face segment after erosion

E. Hole Filling
The purpose of the hole-filling process is to remove the major holes that may appear in the image after the thresholding stage. Hole filling turns each face into one connected region free of the numerous facial cavities and holes. The process relies on labeling the background segments: during labeling, each connected background region is assigned a unique ID. The steps of the method are highlighted below [11]:
1. The regions that correspond to the background intensity are found and labeled.
2. The labeled regions are classified as background or holes according to the following condition: a region is regarded as background if some of its pixels lie on the image boundary; otherwise, it is regarded as a hole.
Step 2 is repeated over all regions identified to have the same intensity as the background. To fill the holes, the intensity of the pixels collected in each hole is changed to match the intensity of the surroundings. Figure (7) below presents an example that illustrates the effect of the hole detection and filling stage.
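A minimal sketch of this labeling-based procedure follows. It is one possible implementation (scipy.ndimage and the function name are assumptions, not the authors' code): background regions are labeled, any background component that does not touch the image border is treated as a hole, and its pixels are set to the foreground value.

import numpy as np
from scipy import ndimage

def fill_holes(mask):
    """mask: 2-D binary array (1 = face/skin candidate, 0 = background)."""
    background_labels, n = ndimage.label(mask == 0)           # step 1: label the background
    filled = mask.copy()
    for lab in range(1, n + 1):
        region = background_labels == lab
        touches_border = (region[0, :].any() or region[-1, :].any() or
                          region[:, 0].any() or region[:, -1].any())
        if not touches_border:                                 # step 2: a hole, not background
            filled[region] = 1                                 # match the surrounding intensity
    return filled

# Example: a solid segment whose single interior hole gets filled.
demo = np.zeros((7, 7), dtype=np.uint8)
demo[1:6, 1:6] = 1
demo[3, 3] = 0
print(fill_holes(demo)[3, 3])   # -> 1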
Fig. (7): Example of hole filling: (A) image before hole filling, (B) image after hole filling

F. Connected Component Labeling
The image generated by the hole-filling stage is searched for connected components using the 8-neighbor adjacency test. The adopted procedure is as follows [12]:
1. The procedure begins by labeling the 1-D connected components of every row.
2. The labels of row-adjacent components are then merged by means of an associative memory scheme.
3. Finally, the consecutive sets of positive component labels are relabeled.
This procedure identifies all connected components. The next step is to decide which connected components correspond to a face and which do not.

G. Candidate Face Verification
At this stage, the remaining components are subjected to a set of shape-based connected operators, namely area, solidity, centroid, orientation, and ellipse area, to decide whether each tested shape corresponds to a face. These operators encode basic assumptions about the shape of a face; a component that does not correspond to a face shape is removed. The criteria used in these simple and effective decisions depend on combinations of the perimeter P, the area A, and the dimensions Dx and Dy of the connected component's min-max box, so these features are computed only once for all the operators.

The area represents the number of pixels of a given shape. Small components, any one of which covers less than 15% of the total image pixels, are eliminated. Solidity is the ratio of the component's area to the area of its min-max box (i.e., the rectangular bounding box):

Solidity = \frac{A}{D_x D_y}    (8)

The solidity value represents how much of the min-max box is occupied by the connected component; face components usually have high solidity values. If the tested shape has a solidity value lower than the 0.5 threshold or greater than the 0.9 threshold, it is discarded; otherwise, it is preserved for further analysis.

Orientation is the aspect ratio of the min-max box that surrounds the component [13]:

Orientation = \frac{D_y}{D_x}    (9)

The orientation of face components is assumed to lie within a certain range, determined by observations on numerous images. If the orientation of a component is less than 0.5 or larger than 0.9, it is discarded.

Human faces are usually distributed around the image center, so the sides of images rarely contain face components. The centroid of a face region is usually located in a small window at the central point of the bounding box. The average centroid location is determined by taking the center of mass of the shape after the above criteria are applied. If the centroid (Y coordinate) of a skin region lies far above or below the central window (by more than 20%), it is rejected as a face candidate area.

The ellipse area is calculated as follows [14]:

Ellipse\ area\ (S_e) = \frac{4A}{\pi D_x D_y}    (10)

The ellipse-area criterion makes it possible to estimate, for every nominated skin-color region, the probability that it is a face. For a tested region to be considered a human face, its S_e value must be greater than the 0.7 threshold. After all the above operators are applied, the connected components that remain contain faces.
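A sketch of this shape-based filtering is given below. It assumes scipy.ndimage for the 8-connected labeling and uses the thresholds quoted above (15% area, 0.5-0.9 solidity and orientation, 0.7 ellipse area, 20% central band); the function name and the exact reading of the centroid-window rule are assumptions, not the authors' code.

import numpy as np
from scipy import ndimage

def face_candidates(mask, min_area_frac=0.15, central_band=0.20):
    """Return bounding boxes (y0, y1, x0, x1) of components that pass the tests."""
    labels, n = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))  # 8-connected
    H, W = mask.shape
    boxes = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        A = ys.size                                   # area in pixels
        if A < min_area_frac * mask.size:             # discard small components
            continue
        Dy = ys.max() - ys.min() + 1
        Dx = xs.max() - xs.min() + 1
        solidity = A / (Dx * Dy)                      # Eq. (8)
        orientation = Dy / Dx                         # Eq. (9)
        ellipse_area = 4 * A / (np.pi * Dx * Dy)      # Eq. (10)
        cy = ys.mean()                                # centroid (Y coordinate)
        centered = abs(cy - H / 2) <= central_band * H
        if (0.5 <= solidity <= 0.9 and 0.5 <= orientation <= 0.9
                and ellipse_area > 0.7 and centered):
            boxes.append((ys.min(), ys.max(), xs.min(), xs.max()))
    return boxes

# Example: an elliptical blob near the image center passes all tests.
yy, xx = np.mgrid[0:100, 0:100]
demo = ((((yy - 50) / 20.0) ** 2 + ((xx - 50) / 30.0) ** 2) <= 1).astype(np.uint8)
print(face_candidates(demo))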
VI. ENCRYPTION AND DECRYPTION
After the face is detected, the proposed system encrypts the face region of the image and then, with the help of a steganography technique, hides the location information in the image. The location information is hidden in order to enhance the protection strength of the content. In this paper, the face area is described by a binary image that indicates which detected face pixels are used. The content is split into blocks and encrypted using a linear feedback shift register; the Haar wavelet transform combined with the LFSR is used to transform the face region into an unintelligible form. In the proposed system, the encryption process differs from the decryption process.

The implemented encryption process consists of the following steps (see Fig. 8a):
1. Detect the face regions.
2. Split the facial area into N×N blocks.
3. Apply the Haar wavelet transform to each block.
4. Quantize the Haar coefficients (LL, LH, HL, and HH).
5. Apply the inverse Haar transform to the quantized coefficients.
6. Apply the linear feedback shift register (LFSR) to encrypt the resulting pixels.
7. Add the location information of the encrypted region.

The decryption process, on the other hand, consists of the following steps (see Fig. 8b):
1. Retrieve the location information of the encrypted face region.
2. Split the area into blocks.
3. Apply the LFSR to decrypt the pixel blocks.
4. Apply the Haar wavelet transform to each block.
5. De-quantize the Haar coefficients (LL, LH, HL, and HH).
6. Apply the inverse Haar transform to the de-quantized coefficients.
Fig. (8): Flowchart of the DIPS system: (a) encryption, (b) decryption

A. Blocking
The encryption system is based on a fixed partitioning scheme that divides the detected face region into non-overlapping blocks. This approach offers a more flexible representation and produces more powerful results, while the simplicity of the fixed-partitioning method balances these benefits. Figure (9) shows the localized face area partitioned into non-overlapping blocks, each of size 4×4 pixels.
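For illustration, a small NumPy sketch of the 4×4 non-overlapping partition and its inverse is given below; the helper names are assumptions, not the authors' code.

import numpy as np

def to_blocks(region, b=4):
    """Split a 2-D array (one channel of the face region) into b x b blocks.
    The region is cropped so that its sides are multiples of b."""
    h, w = (region.shape[0] // b) * b, (region.shape[1] // b) * b
    r = region[:h, :w]
    return r.reshape(h // b, b, w // b, b).swapaxes(1, 2)   # shape: (h/b, w/b, b, b)

def from_blocks(blocks):
    """Inverse of to_blocks: reassemble the blocks into the image region."""
    nby, nbx, b, _ = blocks.shape
    return blocks.swapaxes(1, 2).reshape(nby * b, nbx * b)

# Round trip on a dummy 12x8 region:
face = np.arange(96).reshape(12, 8)
assert np.array_equal(from_blocks(to_blocks(face)), face)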
Fig. (9): Image partitioning (left: original face image, right: partitioned face image)

B. Haar Transform
Because of its sufficient decomposition ability and simplicity, the Haar wavelet transform was used as the image signal decomposer for fast partial encryption [15]. The Haar wavelet transform computes the coefficients of the scaling (approximation) subband and of the three wavelet (detail) subbands. If the image block has dimension n×n, then the size of each subband is (n/2)×(n/2) coefficients. The scaling subband coefficients, {LL}, are kept in the upper-left quarter of the Haar block, and the three wavelet subband coefficients, {LH, HL, HH}, occupy the other three quarters, as shown in Figure (10) below.
Fig. (10): Haar wavelet coefficients

The scaling coefficient (s_n) and the wavelet coefficient (w_n) are calculated from the nth pair of adjacent block pixels [16]:

s_n = \frac{1}{2}\left(d_n + d_{n+1}\right)    (11)
w_n = \frac{1}{2}\left(d_n - d_{n+1}\right)    (12)

C. Color Quantization
The task of reducing the number of colors in an image is called color quantization. The proposed encryption system uses a scalar quantization technique, which is a simple and fast quantization method [17]. Each of the Haar wavelet coefficients (i.e., approximation and detail) is divided by a quantization step value. The following paragraphs illustrate the implemented quantization process.

First, the quantization step value (Q_LL) is set. The approximation-band coefficients hold the most energetic part of the image block. The quantized LL value is calculated using the following equation:

LL_{new}(x, y) = \mathrm{round}\left(\frac{LL_{original}(x, y)}{Q_{LL}}\right)    (13)

The Haar detail subbands (LH, HL, and HH) hold the edge-related information, so the step value used to quantize the detail subband coefficients is taken higher than the quantization value Q_LL. The following equation is used to quantize the detail subbands:

XX_{new} = \frac{XX_{original}}{1 + Q\,|XX_{original}|}    (14)

where XX denotes the detail coefficients of the Haar transform (LH, HL, and HH). Since the detail coefficients of the Haar wavelet are small, the quantization applied to them helps reduce the subjective distortions in the reconstructed image.

D. Linear Feedback Shift Register
The linear feedback shift register (LFSR) algorithm is used as the first level of encryption. This algorithm takes a seed as input; the seed forms the first number of the produced sequence. The LFSR uses a linear feedback function (e.g., x^16 + x^14 + x^13 + x^11 + 1) that indicates the feedback bits to be XORed. In the polynomial, the left-most bit is represented by 1 (x^0), which is where the value is placed after the XOR operation. The bits are then shifted right by one position, and the XOR value is kept as the first bit. This becomes the next number in the sequence [18].
Fig. (11): Linear feedback function: x^16 + x^14 + x^13 + x^11 + 1

The bits at these positions are XORed because the powers in the linear feedback function are 16, 14, 13, and 11. The bits are then shifted right by one step, and the XORed value is kept as the first bit.

ALGORITHM: Linear Feedback Shift Register
Input: seed
Output: sequence of pseudo-random numbers
Initialize: Num ← seed
1. While Num != 0 and Num is not repeated do
2.   bin ← binary pattern of Num
3.   Pad bin with leading zeros until it has 16 bits.
4.   XOR the bits at the positions given by the linear feedback function and store the result in m.
5.   bin ← right-shift bin by 1.
6.   Pad bin with leading zeros until it has 16 bits.
7.   Replace the first bit of bin with m.
8.   Num ← decimal value of the resulting binary pattern.
9. End
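A runnable rendering of the listing is given below, written in the standard Fibonacci LFSR form for the polynomial x^16 + x^14 + x^13 + x^11 + 1. The exact bit ordering is one reading of the listing, not taken verbatim from the paper.

def lfsr_sequence(seed, width=16, taps=(16, 14, 13, 11)):
    """Yield pseudo-random register states starting from `seed` until the state
    becomes zero or repeats (steps 1-9 of the listing)."""
    seen = set()
    num = seed & ((1 << width) - 1)
    while num != 0 and num not in seen:
        seen.add(num)
        yield num
        m = 0
        for k in taps:                          # XOR the tapped bits (step 4)
            m ^= (num >> (width - k)) & 1
        num = (num >> 1) | (m << (width - 1))   # right shift, m becomes the first bit (steps 5-7)

# Example: the first few numbers of the keystream for an arbitrary seed.
stream = lfsr_sequence(0xACE1)
print([next(stream) for _ in range(5)])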
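Putting the pieces of this section together, the sketch below encrypts a single 4×4 block: the Haar pair transform of equations (11)-(12), the quantization of equations (13)-(14), the inverse transform, and the XOR with the LFSR keystream (reusing lfsr_sequence from the previous sketch). The quantization step values and the helper names are assumptions, not values given in the paper.

import numpy as np

Q_LL, Q_DETAIL = 4.0, 8.0        # assumed quantization steps; the paper gives no numeric values

def fwd(rows):
    """Pairwise Haar along the last axis: s = (d_n + d_{n+1})/2, w = (d_n - d_{n+1})/2."""
    a, b = rows[..., 0::2], rows[..., 1::2]
    return (a + b) / 2.0, (a - b) / 2.0

def inv(s, w):
    """Invert fwd: d_n = s + w, d_{n+1} = s - w, interleaved back."""
    out = np.empty(s.shape[:-1] + (2 * s.shape[-1],))
    out[..., 0::2], out[..., 1::2] = s + w, s - w
    return out

def haar2(block):
    """One-level 2-D Haar: LL, LH, HL, HH quarters of an n x n block."""
    lo, hi = fwd(block)                  # rows
    ll, lh = fwd(lo.T)                   # columns of the low-pass half
    hl, hh = fwd(hi.T)                   # columns of the high-pass half
    return ll.T, lh.T, hl.T, hh.T

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    lo = inv(ll.T, lh.T).T
    hi = inv(hl.T, hh.T).T
    return inv(lo, hi)

def quantize(ll, lh, hl, hh, q_ll=Q_LL, q=Q_DETAIL):
    ll_q = np.round(ll / q_ll)                                               # Eq. (13)
    lh_q, hl_q, hh_q = (xx / (1.0 + q * np.abs(xx)) for xx in (lh, hl, hh))  # Eq. (14)
    return ll_q, lh_q, hl_q, hh_q

def encrypt_block(block, keystream):
    """Haar -> quantize -> inverse Haar -> XOR the rounded pixels with LFSR bytes."""
    scrambled = ihaar2(*quantize(*haar2(block.astype(float))))
    pixels = np.clip(np.round(scrambled), 0, 255).astype(np.uint8)
    key = np.array([next(keystream) & 0xFF for _ in range(pixels.size)],
                   dtype=np.uint8).reshape(pixels.shape)
    return pixels ^ key

# Example on one 4x4 block (the keystream comes from lfsr_sequence above).
block = np.arange(16, dtype=np.uint8).reshape(4, 4) * 10
print(encrypt_block(block, lfsr_sequence(0xBEEF)))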
VII. EXPERIMENTAL RESULTS
The Peak Signal-to-Noise Ratio (PSNR), the entropy coefficient, and an encryption-time analysis were used to measure the performance of the proposed partial image encryption. For illustration, the original image shown in Figure (12) below was passed through the proposed system; the produced encrypted image is shown in Figure (13), while Figure (14) shows the reconstructed (i.e., decrypted) image.
Fig. (12): Original image. Fig. (13): Partially encrypted image.
Fig. (14): Reconstructed image

A. Time Analysis
The time analysis of the proposed cryptosystem has been studied; the encryption times for the original images are shown in Table (2) below. For each test image, the following two encryption modes were applied:
1. Full encryption (i.e., encryption applied to all wavelet subbands), and
2. Selective encryption (i.e., encryption applied to the LL subband only).

Table (2): Encryption time analysis for the proposed cryptosystem

Image          Ratio    Encryption Time (Full)    Encryption Time (Partial)
Test image 1   27.83    1.03                      0.49
Test image 2   27.21    1.22                      0.63
Test image 3   13.87    0.91                      0.51
Test image 4    3.46    0.42                      0.19
Test image 5   21.34    0.75                      0.43
B. Entropy Coefficient
Entropy is a measure of the uncertainty associated with a random variable. Encryption reduces the mutual information among pixel values, particularly in images, and this has the effect of raising the entropy value. Since the encrypted image must not reveal any information about the original image, a safe encryption system should maximize the information entropy of the encrypted image [19]. The information entropy is calculated using the equation given below:
Entropy = \sum_i P(i)\, \log_2 \frac{1}{P(i)}    (15)

where P(i) is the probability of occurrence of the i-th gray value in the image. For an image with 256 gray levels, the maximum possible value is 8, which indicates complete randomness in the tested image, as is desired for an encrypted image. This is only possible when the individual image pixels have an equal probability of taking any value within the range [0, 255]. The resulting entropy was examined for various types of images. Table (3) below shows samples of the entropy values of the encrypted variants of different original face images.

Table (3): Image entropy analysis for the proposed cryptosystem
Image          Encrypted Image Entropy
Test image 1   7.40
Test image 2   7.34
Test image 3   7.46
Test image 4   7.23
Test image 5   7.39
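For reference, the entropy of equation (15) can be computed as in the minimal sketch below; the function name and the random test image are illustrative only.

import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of the gray-level histogram of a uint8 image (Eq. 15)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                         # ignore empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

# A uniformly random image approaches the ideal value of 8 bits.
rng = np.random.default_rng(0)
print(round(image_entropy(rng.integers(0, 256, (256, 256), dtype=np.uint8)), 3))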
C. Image Fidelity
Because the scalar quantization step is a lossy operation, the reconstructed image contains errors with respect to the original image. The degree of closeness between the original and reconstructed images is tested either objectively (numerically) or subjectively (visually). The most common objective measure is the peak signal-to-noise ratio (PSNR) of the decrypted image, determined using the following equation [20]:

PSNR = 10 \log_{10} \frac{M \times N \times 255^2}{\sum_{m=1}^{M}\sum_{n=1}^{N} |f(m, n) - f_d(m, n)|^2}    (16)

where f_d(m, n) is the decrypted image and f(m, n) is the original image.

Table (4): PSNR analysis for the proposed cryptosystem
Image          PSNR of Encrypted Samples    PSNR of Decrypted Samples
Test image 1   18.10                        83.74
Test image 2   20.39                        80.38
Test image 3   19.29                        85.17
Test image 4   20.53                        80.38
Test image 5   19.88                        85.17
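Similarly, a minimal sketch of the PSNR of equation (16), written via the equivalent 10·log10(255²/MSE) form, is given below; the function name and the test data are illustrative only.

import numpy as np

def psnr(original, test):
    """PSNR in dB between two uint8 images of identical shape (Eq. 16)."""
    err = (original.astype(float) - test.astype(float)) ** 2
    mse = err.mean()
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Example: a slightly perturbed copy of an image gives a high PSNR.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(round(psnr(img, np.clip(img.astype(int) + 1, 0, 255).astype(np.uint8)), 2))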
VIII. CONCLUSION AND FUTURE WORK
This paper presented a facial encryption technique and proposed a new color image encryption algorithm based on the Haar wavelet transform and a linear feedback shift register. Through a detection module, the parts of interest (faces), which are believed to carry confidential and sensitive details, are identified and subsequently encrypted. The encryption process splits the face region into blocks, applies the Haar wavelet transform, and XORs the quantized Haar coefficients with pseudo-random numbers generated by the LFSR. The experimental results show that the suggested encryption can effectively conceal the face parts of interest in images. The effectiveness of the suggested technique was first assessed through a security evaluation comprising entropy and PSNR analyses. The entropy analysis shows that the algorithm's entropy is close to the ideal value, so the algorithm is safe from information leakage. The encrypted images have low PSNR values, which makes them resistant to statistical attacks. Furthermore, the selective encryption technique reduces the overhead of encrypting the non-sensitive parts and improves the execution time.

REFERENCES
[1] E. N. Nfuka, C. Sanga, and M. Mshangi, "The rapid growth of cybercrimes affecting information systems in the global: is this a myth or reality in Tanzania?", International Journal of Information Security Science, vol. 3, no. 2, pp. 182-199, 2014.
[2] T. Parwani, R. Kholoussi, and P. Karras, "How to hack into Facebook without being a hacker", Proceedings of the 22nd International Conference on World Wide Web Companion, pp. 751-754, 2013.
[3] L. Cheng and X. Li, "Partial encryption of compressed images and video", IEEE Transactions on Signal Processing, vol. 48, no. 8, pp. 2439-2451, 2000.
[4] A. Pommer and A. Uhl, "Selective encryption of wavelet-packet encoded image data: efficiency and security", Multimedia Systems, vol. 9, no. 3, pp. 279-287, 2003.
[5] H. Panduranga and S. NaveenKumar, "Selective image encryption for medical and satellite images", International Journal of Engineering Science & Technology, vol. 5, no. 2, pp. 32-43, 2013.
[6] S. M. Suchindran and G. B. Nikolaos, "SCAN based lossless image compression and encryption", IEEE International Conference on Information Intelligence and Systems, pp. 490-499, 1999.
[7] A. Pommer and A. Uhl, "Wavelet packet methods for multimedia compression and encryption", IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Victoria, Canada, pp. 1-4, 2001.
[8] Q. A. Kester, "A cryptographic image encryption technique for facial-blurring of images", International Journal of Advanced Technology and Engineering Research (IJATER), 2013.
[9] P. Kakumanu, S. Makrogiannis, R. Bryll, S. Panchanathan, and N. Bourbakis, "Image chromatic adaptation using ANNs for skin color adaptation", 16th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2004), pp. 478-485, 2004.
[10] K. Somasundaram and T. Kalaiselvi, "A method for filling holes in objects of medical images using region labeling and run length encoding schemes", National Conference on Image Processing (NCIMP), pp. 110-115, 2010.
[11] K. B. Eum, "Effective face detection and robust face tracking using hybrid filter", Journal of Korean Society for Imaging Science and Technology, vol. 18, no. 2, pp. 9-19, 2012.
[12] P. Kuchi, P. Gabbur, and P. S. Bhat, "Human face detection and tracking using skin color modeling and connected component operators", IETE Journal of Research, vol. 48, pp. 289-293, 2002.
[13] G. Yang, H. Li, L. Zhang, and Y. Cao, "Research on a skin color detection algorithm based on self-adaptive skin color model", 2010 International Conference on Communications and Intelligence Information Security (ICCIIS), pp. 266-270, 2010.
[14] X. Zhang, S. Xie, and C. Yu, "Watermarking algorithm for resisting cropping attack based on Haar wavelet transform", Journal of Convergence Information Technology, vol. 7, no. 7, pp. 174-181, 2012.
[15] R. K. Dehankar, S. C. Bhivgade, and A. R. Khan, "Wavelet based edge detection technique for iris recognition using MATLAB", International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 1, 2012.
[16] V. Sadasivam, Compression of Gray Scale Images Using Bandelets and Neuro-Statistical Methods, PhD thesis, Manonmaniam Sundaranar University, 2011.
[17] V. Kapur, S. T. Paladi, and N. Dubbakula, "Two level image encryption using pseudo random number generators", International Journal of Computer Applications, vol. 115, no. 12, 2015.
[18] S. Rakesh, Ajitkumar A. Kaller, B. C. Shadakshari, and B. Annappa, "Multilevel image encryption", Cornell University Library, 2012.
[19] F. E. A. El-Samie, H. E. H. Ahmed, I. F. Elashry, M. H. Shahieen, O. S. Faragallah, E. S. M. El-Rabaie, and S. A. Alshebeili, Image Encryption: A Communication Perspective, CRC Press, 2013.