Hybrid (2, n) Visual Secret Sharing Scheme for Color Images

Jung-San Lee¹ and T. Hoang Ngan Le²,*

¹ Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan. [email protected]
² Department of Computer Science, Faculty of Information Technology, University of Science, 227 Nguyen Van Cu, District 5, HCMC, Vietnam. [email protected]
* Correspondence address: T. Hoang Ngan Le, Department of Computer Science, Faculty of Information Technology, University of Science, 227 Nguyen Van Cu, District 5, HCMC, Vietnam. E-mail: [email protected]. Phone: (083) 8324467-500. Fax: (84.8) 8 350. 096.

Abstract—In recent decades, many secret sharing schemes for digital images have been developed. At the beginning stage, traditional schemes typically had to deal with the problem of computational complexity; later visual secret sharing schemes suffered from either high storage cost or low accuracy. In this paper, a new (2, n) secret sharing scheme for color images is proposed using hybrid techniques: the gradual search algorithm for a single bitmap BTC (GSBTC), the discrete wavelet transform (DWT), and the vector quantization (VQ) technique. Experimental results confirm that the proposed scheme not only generates reconstructed secret color images of high quality but also provides a set of noise-like grayscale shadows that are much smaller than those in existing schemes.

Keywords—(2, n) VSS scheme, Shamir's scheme, GSBTC coding, DWT, VQ technique, secret color images

I. INTRODUCTION

Nowadays, protecting secret data during transmission over the Internet has become an important requirement because of hackers' ability to steal or tamper with secret information. To prevent secret data from being lost or altered, traditional cryptology techniques have generally been considered in recent years. However, their decoding phases have very high computational complexity. To surmount this deficiency and further secure important data, in 1979 George Blakley [3] and Adi Shamir [23] independently introduced secret sharing mechanisms, called (k, n) threshold schemes. Their schemes first divide the secret data into n pieces, each of which is called a shadow or a share. Then, the set of shadows is distributed to n participants, each of whom

holds one shadow. Later, the secret data can be reconstructed given complete knowledge of at least k shadows, where k ≤ n. In 1995, Naor and Shamir first extended this concept to the domain of binary images [20], naming the result visual secret sharing (VSS). Naor and Shamir's VSS scheme eliminates the complex computation problem: the secret image can be restored by a stacking operation, and the reconstructed image can be identified directly by the human visual system.

A (k, n) VSS scheme is evaluated by four criteria: security, accuracy, computational complexity, and pixel expansion. The first criterion is satisfied if each shadow leaks no information about the original image and the original image cannot be reconstructed when fewer than k shadows are collected. The second criterion concerns the quality of the reconstructed secret image and is evaluated by the peak signal-to-noise ratio (PSNR) measure; a high PSNR implies high accuracy of the secret image sharing scheme. The computational complexity concerns the total number of operations required both to generate the set of n shadows and to reconstruct the original secret image. The last criterion, which affects data transmission speed, is also called the shadow size; a large shadow size implies high transmission and storage costs. An ideal VSS scheme must offer high security, high accuracy, low computational complexity, and small shadows.

Over the past decade, most VSS schemes for images have been implemented by simply stacking the collected shadows. These schemes avoid computational complexity, but pixel expansion and low accuracy are their weaknesses [1, 8, 11, 14, 20]. Later, other VSS schemes were proposed to resolve these deficiencies, but they suffered from high computational complexity [4, 9, 24]. To deal with these problems, Ito et al. [13], Wang et al. [25], and Yang et al. [26] applied probability concepts to design probabilistic visual secret sharing schemes for binary images, called (2, n) ProbVSS. However, all these existing (2, n) ProbVSS schemes were designed for binary images and achieve low reconstructed secret image quality. In 2008, Chang et al. [5, 6] extended the application of ProbVSS to encompass grayscale images and color images, respectively. Their experimental results show that Chang et al.'s (2, n) ProbVSS schemes are superior to the existing ProbVSS schemes with respect to all four criteria. In this paper, a


new (2, n) VSS scheme based on hybrid techniques is proposed to achieve a higher quality reconstructed image and a much smaller shadow size than those offered by Chang et al. To achieve these objectives, Shamir's scheme is combined with three well-known image compression techniques, namely, Chang and Wu's gradual search algorithm for a single bitmap BTC (GSBTC), the discrete wavelet transform (DWT), and vector quantization (VQ) [12, 17].

The rest of this paper is organized as follows: Section II briefly describes Shamir's secret sharing scheme and the relevant techniques. Section III presents the proposed scheme in detail. The experimental results are presented in Section IV. Finally, conclusions are drawn in Section V.

II. RELATED WORKS

Our objectives of reducing the shadow size and increasing the quality of the reconstructed images are achieved by combining Shamir's secret sharing scheme with vector quantization (VQ), the discrete wavelet transform (DWT), and Chang and Wu's gradual search algorithm for a single bitmap BTC (GSBTC). These background techniques are briefly introduced as follows.

A. Shamir's scheme

Based on polynomial interpolation [23], Shamir proposed a secret sharing scheme called the (k, n) threshold scheme. In his scheme, secret data s is divided into n pieces s1, s2, …, sn, and s can be reconstructed with the cooperation of at least k pieces. The secret s is the constant term of a polynomial function of degree (k − 1) whose coefficients are recovered with the Lagrange interpolation formula. The procedure of this scheme is briefly described as follows.

In general, assume the secret data s is a number. In Shamir's scheme, to divide s into n pieces, a sharing polynomial of degree (k − 1) is defined as in (1):

f(x) = (a0 + a1·x + … + a_{k−1}·x^{k−1}) mod g,        (1)

where a1, a2, …, a_{k−1} are random numbers, a0 = s, and g is a prime number. Each share si, which is given to the ith participant, is calculated by (2):

si = (i, f(i)).        (2)

In the revealing phase, s can be reconstructed from at least k shares chosen from the n shares. The coefficients of the polynomial f(x) are obtained from the Lagrange interpolation formula defined in (3):

li = ∏_{1 ≤ j ≤ k, j ≠ i} (x − xj) / (xi − xj).        (3)

The function f(x) is then determined as in (4):

f(x) = (Σ_{i=1}^{k} si · li) mod g,        (4)

where si here denotes the share value f(xi). The secret data is retrieved as s = f(0).

The following example gives a better explanation of this scheme.

Share construction phase: Input: n = 6, k = 3, and s = 206.
Step 1: Arbitrarily choose two random numbers a1 = 166 and a2 = 94, and a prime number g = 257.
Step 2: Construct the polynomial function f(x) = (206 + 166x + 94x²) mod 257.
Step 3: Compute six shares: {(i, f(i))} = {(1, 209), (2, 143), (3, 8), (4, 61), (5, 45), (6, 217)}, where (i, f(i)) is the ith share si.

Revealing phase: Consider three shares chosen randomly: (2, 143), (4, 61), and (5, 45).
Step 1: Compute the Lagrange basis polynomials:

l1 = (x − x2)/(x1 − x2) × (x − x3)/(x1 − x3) = (x − 4)/(2 − 4) × (x − 5)/(2 − 5) = (−2)⁻¹(x − 4) · (−3)⁻¹(x − 5),
l2 = (x − x1)/(x2 − x1) × (x − x3)/(x2 − x3) = (x − 2)/(4 − 2) × (x − 5)/(4 − 5) = (2)⁻¹(x − 2) · (−1)⁻¹(x − 5),
l3 = (x − x1)/(x3 − x1) × (x − x2)/(x3 − x2) = (x − 2)/(5 − 2) × (x − 4)/(5 − 4) = (3)⁻¹(x − 2) · (1)⁻¹(x − 4).

Therefore, f(x) = (Σ_{i=1}^{3} si · li) mod 257
= [143 × (128(x − 4) × 171(x − 5)) + 61 × (129(x − 2) × 256(x − 5)) + 45 × (86(x − 2) × (x − 4))] mod 257
= (5148318x² − 42294324x + 82775280) mod 257
= 94x² + 166x + 206.

Step 2: Obtain the secret data s = f(0) = 206.
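The sharing and revealing computations above translate directly into code. The following is a minimal sketch of Shamir's (k, n) scheme over the prime g = 257, reproducing the worked example; it is illustrative only, not the authors' implementation, and the helper names are ours. It relies on Python 3.8+ for the modular inverse via three-argument pow.

```python
import random

def make_shares(s, k, n, g=257):
    # f(x) = (s + a1*x + ... + a_{k-1}*x^{k-1}) mod g, with a0 = s as in (1)
    coeffs = [s] + [random.randrange(g) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, e, g) for e, c in enumerate(coeffs)) % g
    return [(x, f(x)) for x in range(1, n + 1)]     # shares (i, f(i)) as in (2)

def reveal(shares, g=257):
    # Lagrange interpolation (3)-(4) evaluated at x = 0, so the result is f(0) = s
    secret = 0
    for xi, yi in shares:
        li = 1
        for xj, _ in shares:
            if xj != xi:
                li = li * (-xj) * pow(xi - xj, -1, g) % g
        secret = (secret + yi * li) % g
    return secret

# Worked example from the text: f(x) = (206 + 166x + 94x^2) mod 257
f = lambda x: (206 + 166 * x + 94 * x * x) % 257
shares = [(x, f(x)) for x in range(1, 7)]
# shares == [(1, 209), (2, 143), (3, 8), (4, 61), (5, 45), (6, 217)]
assert reveal([(2, 143), (4, 61), (5, 45)]) == 206
```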


B. Vector quantization technique

Vector quantization (VQ) is considered one of the most efficient lossy compression techniques based on block coding. The basic idea of VQ is that an image is represented by a set of representative vectors, called codewords; the set of codewords forms a codebook, which is shared between the parties. Generally, there are three phases in a VQ scheme, namely, codebook generation, VQ encoding, and VQ decoding. Both the VQ encoding and VQ decoding phases need a codebook; therefore, the codebook generation phase must be executed in advance. In our proposed scheme, to speed up the codebook generation phase, Lai et al.'s scheme [17] is adopted. Moreover, to cut down the computational cost of the VQ encoding phase without incurring any further image distortion, we further employ Hu et al.'s algorithm [12]. Their scheme utilizes two test conditions [22] to avoid impossible codewords: codewords are first classified into static and active groups, and the information of codeword displacement is then applied to reject impossible candidates in the partition step.

For a quantizer Q and a codebook CB, a vector quantizer can be defined as a mapping function in the k-dimensional space R^k. That is, Q: R^k → CB, where CB = {Ci | i = 1, 2, …, N} is the set of codewords. The quantizer Q partitions the training vectors into N clusters Si (i = 1, 2, …, N), where Si = {X | Q(X) = Ci} and X = (x1, x2, …, xk)^t ∈ R^k is a training vector. Note that the set of training vectors is denoted S = {X}. The three phases of a VQ scheme are described as follows.

Codebook generation phase: This phase, performed with Lai et al.'s scheme, is shown in Fig. 1.

Figure 1. Lai et al.’s codebook generation algorithm

Step 1: Choose a set of t random vectors from the set of training images. Each vector is considered a codeword of size k = m×m.
Step 2: Perform a partition operation on the codebook CB_p to obtain the new codebook CB_{p+1}. For each training vector X, let the distances to the nearest and second-nearest codewords in the previous partition be d′1 and d′2, respectively. The values d′1 and d′2 are determined by the Euclidean distance between each codeword of the codebook and the input vector.
Step 3: Let the ith codewords used in the current and previous partitions be Ci and C′i, respectively, and let Di be the displacement between Ci and C′i. If Di = 0, codeword Ci is considered a static codeword; otherwise, it is an active one. The codebook CB is thus classified into two groups, corresponding to a set of static codewords and a set of active ones.
Step 4: Consider a training vector X and the set of active codewords, called CBact. If X is a static vector, its closest codeword is determined by searching in CBact. If X is an active vector, let di be the distance between the center Ci of the ith cluster and X, di = |Ci − X|. When di < d′2, the closest codeword for X is determined in CBact; otherwise, it is decided in CB. After updating d′1, d′2, CB, and CBact, we generate the search structures for CB and CBact. This step is repeated until CB converges.
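For orientation, the following is a simplified codebook-generation sketch in the classical iterative (LBG/k-means) style. Lai et al.'s algorithm [17] additionally classifies codewords as static or active and uses codeword displacement to skip impossible candidates, as in Steps 3 and 4 above; that speedup is omitted here for brevity, so this is a baseline sketch rather than the cited algorithm.

```python
import numpy as np

def generate_codebook(training_vectors, n_codewords, iters=50, seed=0):
    # training_vectors: float array of shape (N, k), k = m*m
    rng = np.random.default_rng(seed)
    # Step 1: pick random training vectors as the initial codewords
    idx = rng.choice(len(training_vectors), n_codewords, replace=False)
    cb = training_vectors[idx].astype(np.float64)
    for _ in range(iters):
        # partition: assign each vector to its nearest codeword (Euclidean)
        d = ((training_vectors[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        new_cb = cb.copy()
        for i in range(n_codewords):
            members = training_vectors[nearest == i]
            if len(members):
                new_cb[i] = members.mean(axis=0)   # move codeword to cluster center
        if np.allclose(new_cb, cb):                # converged: all codewords static
            break
        cb = new_cb
    return cb
```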

Encoding phase: The given grayscale image is partitioned into a set of non-overlapping vectors of m×m pixels. Each vector is treated as a k-dimensional vector, where k = m×m. As usual, the closest codeword in the codebook for a given vector is the one with the minimal squared Euclidean distance to that vector. By applying Hu et al.'s scheme, however, the encoder first employs the mean-distance-ordered partial codebook search (MPS) algorithm to sort the codewords in the codebook and then decides the initial searched codeword for each vector. After that, a simple prediction technique, instead of a binary search, is used to find the codeword whose mean value is closest to the mean value of the image vector. The compression codes of VQ are the set of indices of the closest codewords in the codebook for all image vectors.

Decoding phase: This phase is performed as in the traditional VQ technique, based on the compression codes.
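The two phases can be sketched as follows, using plain exhaustive nearest-codeword search; the MPS-based fast search of Hu et al. [12] is omitted here for clarity, and the block size m = 2 is just an example parameter.

```python
import numpy as np

def vq_encode(image, codebook, m=2):
    # image: (h, w) array; codebook: (nc, m*m) array of codewords
    h, w = image.shape
    indices = []
    for r in range(0, h, m):
        for c in range(0, w, m):
            block = image[r:r + m, c:c + m].reshape(-1).astype(np.float64)
            d = np.sum((codebook - block) ** 2, axis=1)  # squared Euclidean distance
            indices.append(int(np.argmin(d)))            # index of closest codeword
    return indices

def vq_decode(indices, codebook, h, w, m=2):
    # rebuild the image by pasting the codeword of each index back in place
    out = np.empty((h, w))
    it = iter(indices)
    for r in range(0, h, m):
        for c in range(0, w, m):
            out[r:r + m, c:c + m] = codebook[next(it)].reshape(m, m)
    return out
```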

C. Discrete wavelet transform

The discrete wavelet transform (DWT) has been successfully applied in fields ranging from pure mathematics to the applied sciences. Because of its high compression ratio, which makes it easy to analyze the temporal and spectral properties of signals, and its flexibility in representing nonstationary signals, the DWT is used in many image processing and data compression applications. Although there are various wavelet families, the Haar wavelet is used in the proposed scheme thanks to its high compression ratio and simple implementation [19]. To compute a one-level 2D-DWT of an image, we first apply a one-level 1D-DWT along the rows of the grayscale image. Second, we apply the one-level 1D-DWT along the columns of the transformed image generated in the previous step. Finally, we obtain an image with four sub-bands: LL, HL, LH, and HH. The LL sub-band is a down-sampled, low-resolution version of the original image; the other three sub-bands are low-resolution versions of the residual image.
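The row-then-column procedure just described can be sketched as below. This sketch assumes an unnormalized averaging/differencing Haar filter pair (other scalings differ only by constant factors) and an image with even dimensions; sub-band naming conventions vary in the literature, and here they follow the row-then-column order of the text.

```python
import numpy as np

def haar_1d(a):
    # pairwise averages (low-pass) and differences (high-pass) along the last axis
    return (a[..., 0::2] + a[..., 1::2]) / 2, (a[..., 0::2] - a[..., 1::2]) / 2

def haar_2d(image):
    lo, hi = haar_1d(image)        # 1D-DWT along the rows
    ll, lh = haar_1d(lo.T)         # 1D-DWT along the columns of the low band
    hl, hh = haar_1d(hi.T)         # ... and of the high band
    # each sub-band is half the original size in each dimension
    return ll.T, hl.T, lh.T, hh.T
```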

D. Gradual search algorithm for one single bitmap BTC

This technique is a variation of block truncation coding (BTC), used as an efficient and simple color image compression method. The BTC technique can be expressed as follows. First, a grayscale image is segmented into non-overlapping grayscale blocks; the size of each block can be 4×4, 8×8, etc. Second, the mean grayscale value x̄ of a 4×4 block is calculated as in (5):

x̄ = (1/16) Σ_{i=1}^{4} Σ_{j=1}^{4} x(i, j).        (5)

Pixels in each grayscale block are classified into two categories according to whether their values are larger or smaller than the mean value x̄. Let G be the number of pixels whose intensities are greater than x̄. The mean values of the higher and lower categories are calculated as in (6):

X_H = (1/G) Σ_{x(i,j) ≥ x̄} x(i, j),   X_L = (1/(16 − G)) Σ_{x(i,j) < x̄} x(i, j).        (6)

The binary block is then composed by replacement: pixels whose values are larger than x̄ are replaced by bit '1'; otherwise, by bit '0'. To extend the BTC method to a color image composed of three color channels, we use Chang and Wu's GSBTC technique to compress one secret color image into one binary image and mean values. The detailed description of GSBTC is shown in Fig. 2.


Figure 2. Flowchart of GSBTC technique

Step 1: Generate three bitmaps and three pairs of mean values for each block by applying the traditional BTC method to each color channel of the color image. First, decompose the color image into three channels: Red, Green, and Blue. Second, split each channel into non-overlapping color blocks of the same size (m×m). After applying the traditional BTC scheme to each single-color block, one bitmap and one pair of mean values are derived. Let (RX_H, RX_L), (GX_H, GX_L), and (BX_H, BX_L) be the pairs of mean values of the Red, Green, and Blue channels, respectively. The corresponding bitmaps for the Red, Green, and Blue channels are defined as in (7):

B_R = {r1, r2, …, r_{m×m} | ri = 1 or 0},
B_G = {g1, g2, …, g_{m×m} | gi = 1 or 0},        (7)
B_B = {b1, b2, …, b_{m×m} | bi = 1 or 0},

where ri, gi, and bi are the ith elements of the Red (B_R), Green (B_G), and Blue (B_B) bitmaps, respectively.

Step 2: Initialize the common bitmap sCB. According to the definition given by Chang and Wu, if sCB is the best common bitmap, its initial status is defined as sCB = {c1, c2, …, c_{m×m}}, where

ci = ri if ri = gi = bi; otherwise, ci is non-determinate.        (8)

Step 3: Replace the non-determinate elements in the initial sCB by bit '1' or '0'. Let MSE_L and MSE_H be the mean squared errors once the non-determinate element ci is replaced by bit '0' and bit '1', respectively. MSE_L and MSE_H are determined as in (9):

MSE_L = (ri − RX_L)² + (gi − GX_L)² + (bi − BX_L)²,
MSE_H = (ri − RX_H)² + (gi − GX_H)² + (bi − BX_H)².        (9)

The value of the non-determinate element ci is then specified as in (10):

ci = 0 if MSE_L < MSE_H; otherwise, ci = 1.        (10)

After the above three steps are completed, each color block, composed of three single-color blocks (Red, Green, and Blue), is represented by one common bitmap and six mean values, (RX_H, RX_L), (GX_H, GX_L), and (BX_H, BX_L).
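The following sketch illustrates this common-bitmap construction for one block triple. It is a simplified reading of Steps 1–3: per-channel BTC yields bitmaps and mean pairs, agreeing bitmap positions are copied, and each disputed position is resolved with the MSE rule of (9)–(10) applied to the channel pixel values; edge cases such as flat blocks are handled in an assumed, not prescribed, way.

```python
import numpy as np

def btc_block(block):
    mean = block.mean()
    bitmap = block > mean                  # '1' pixels: values above the block mean
    if not bitmap.any() or bitmap.all():   # flat block: no split, both means equal
        return bitmap, mean, mean
    return bitmap, block[bitmap].mean(), block[~bitmap].mean()   # (X_H, X_L)

def gsbtc(red, green, blue):
    # per-channel BTC: three bitmaps plus six mean values, as in Step 1
    br, rxh, rxl = btc_block(red)
    bg, gxh, gxl = btc_block(green)
    bb, bxh, bxl = btc_block(blue)
    common = np.zeros(red.shape, dtype=bool)
    agree = (br == bg) & (bg == bb)        # determinate positions, as in (8)
    common[agree] = br[agree]
    # resolve non-determinate positions with the MSE test of (9)-(10)
    for idx in zip(*np.nonzero(~agree)):
        r, g, b = red[idx], green[idx], blue[idx]
        mse_l = (r - rxl) ** 2 + (g - gxl) ** 2 + (b - bxl) ** 2
        mse_h = (r - rxh) ** 2 + (g - gxh) ** 2 + (b - bxh) ** 2
        common[idx] = not (mse_l < mse_h)  # ci = 0 iff MSE_L < MSE_H
    return common, (rxh, rxl), (gxh, gxl), (bxh, bxl)
```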

III. THE PROPOSED SCHEME

A new (2, n) VSS scheme for color images based on hybrid techniques is presented in this section. Our scheme consists of three procedures, namely, codebook generation, share construction, and revealing. The relationship among the three procedures is illustrated in Fig. 3.

Figure 3. The relationship among the three procedures

A. Codebook generation procedure

Two codebooks are generated during this procedure. The first, the HL codebook, is produced from the coefficients of the HL sub-bands; the other, the LH codebook, is generated from the coefficients of the LH sub-bands. The flowchart of the codebook generation procedure is depicted in Fig. 4.

Figure 4. Codebook generation phase of our proposed scheme

First, each image in the set of training images is processed by the DWT. From the set of training images, we thus obtain a set of LH sub-bands and a set of HL sub-bands. Then, we apply the codebook generation phase described in Section II.B to the LH and HL sub-bands to generate the LH codebook and the HL codebook, respectively. These codebooks are used in the VQ encoding of the share construction procedure and in the VQ decoding of the revealing procedure. Let the number of codewords in each codebook be nc and the number of elements in each codeword be ne. The value of ne equals the block size in bytes derived from the set of training images in the VQ encoding: if there are m×m bytes in each block, then ne = m×m. Based on the number of codewords nc, each codeword index ranges from 0 to 2^⌈log₂ nc⌉ − 1, and its size is ⌈log₂ nc⌉ bits.

B. Share construction procedure

In the share construction procedure, a secret color image is divided into n shadows in three phases, as shown in Fig. 5.

Figure 5. Flowchart of the share construction phase


Phase 1: This phase breaks the input color image into a set of indices and a set of common bitmaps together with mean values. The set of indices consists of LH indices and HL indices corresponding to the LH and HL sub-bands. This phase is illustrated in Fig. 6 and described in the following steps.

Figure 6. Flowchart of Phase 1

Step 1: Divide the original color image into non-overlapping color blocks of the same size. If there are m×m elements in each codeword, as assumed above, the size of each color block derived from the original image must be (2×m)×(2×m) = 4×m×m pixels (each pixel consists of three bytes). Suppose that X and Y are the width and height of the original secret color image and that the number of derived blocks is b; then b = X×Y/(4×m×m). Obviously, the size of the secret image is 3×X×Y bytes.

Step 2: Decompose each color block into three single-color blocks corresponding to the three color channels. Under the above assumption, each color block contains 4×m×m pixels; thus, each single-color block is 4×m×m bytes in size.

Step 3: Apply the DWT to the blocks obtained from Step 2. Taking the DWT of each single-color block yields four sub-bands, namely, the LL, HL, LH, and HH sub-bands. In this scheme, the HH sub-band is abandoned because its coefficients are the least significant; only the LL, HL, and LH sub-bands are retained. Since the size of each block is 4×m×m bytes, the size of each sub-band is m×m bytes.

Step 4: Apply the GSBTC technique to the LL sub-bands to generate common bitmaps and mean values. Each color block is decomposed into three single-color blocks, which provide three LL sub-bands. Applying the GSBTC procedure to these three LL sub-bands, considered as three input blocks, we obtain one common bitmap and six mean values. From the b color blocks, we have b sets of three LL sub-bands; thus, b common bitmaps and 6×b mean values are obtained.

Step 5: Encode the coefficients of the HL and LH sub-bands with the HL and LH codebooks generated during the codebook generation procedure. Based on the HL and LH codebooks, one index is selected to encode each HL sub-band and one index to encode each LH sub-band; thus, each sub-band is encoded by an index of ⌈log₂ nc⌉ bits. From the b blocks of each color channel, we get 3×b HL indices and 3×b LH indices. Let us call R_HL1, R_HL2, …, R_HLb the HL indices of the Red channel; G_HL1, G_HL2, …, G_HLb the HL indices of the Green channel; and B_HL1, B_HL2, …, B_HLb the HL indices of the Blue channel. Similarly, R_LH1, R_LH2, …, R_LHb; G_LH1, G_LH2, …, G_LHb; and B_LH1, B_LH2, …, B_LHb are the LH indices of the Red, Green, and Blue channels, respectively.

Phase 2: This phase is responsible for two tasks. The first task is to transform each common bitmap into a set of grayscale values. By transforming the common bitmaps into a byte-stream presentation, we get a set of ⌈m×m/8⌉ grayscale values, called GSs. Let us call R_XHi, R_XLi, G_XHi, G_XLi, B_XHi, and B_XLi the high and low mean values of the Red, Green, and Blue channels, respectively, and GSsi the set of grayscale values derived from the ith color block. The second task is to reduce the size of the set of indices, which is done by average estimation. Let the average values of the HL indices and LH indices be A_HL1, A_HL2, …, A_HLb and A_LH1, A_LH2, …, A_LHb, respectively. Each representative HL index is calculated by averaging the HL indices of the three channels. Namely, with A_HLi as the representative value of R_HLi, G_HLi, and B_HLi,

A_HLi = (R_HLi + G_HLi + B_HLi) / 3.

Similarly, with A_LHi as the representative value of R_LHi, G_LHi, and B_LHi,

A_LHi = (R_LHi + G_LHi + B_LHi) / 3.

For short, we call these representative values the HL index and the LH index. From one secret color image of 3×X×Y bytes (width X and height Y), we obtain a set of b groups. Each group consists of six mean values, ⌈m×m/8⌉ grayscale values, one HL index, and one LH index. We call each group of these values a compression code.

Phase 3: Apply Shamir's scheme to the set of b compression codes to generate n shadows. Let CPi denote the ith compression code, including the two indices A_HLi and A_LHi, the six mean values R_XHi, R_XLi, G_XHi, G_XLi, B_XHi, and B_XLi, and the set of grayscale values GSsi. Here, Shamir's scheme is applied to each CPi, for i = 1, 2, …, b, to create compression code shadows before the shadows are generated. Assuming that SP1i, SP2i, …, SPni are the compression code shadows of CPi, the details of Phase 3 are demonstrated in Fig. 7. Each shadow is generated by appending b compression code shadows together: letting SPlk be the lth compression code shadow of the kth compression code, for l = 1, 2, …, n and k = 1, 2, …, b, the shadow Sl is composed of SPl1, SPl2, …, SPlb, for l = 1, 2, …, n.

Figure 7. Procedure for generating n shadows
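To make Phase 3 concrete, the sketch below assumes that Shamir's (2, n) sharing is applied byte-wise to each compression code CPi with the prime g = 257, as in the example of Section II. The paper does not spell out this low-level encoding, so the helpers (shamir_share_byte, make_shadows) are illustrative assumptions, and the handling of the edge share value 256 (which does not fit in a byte when g = 257) is glossed over here.

```python
import random

def shamir_share_byte(value, n, g=257):
    # k = 2: a degree-1 polynomial f(x) = (value + a1*x) mod g, evaluated at x = 1..n
    a1 = random.randrange(g)
    return [(value + a1 * x) % g for x in range(1, n + 1)]

def make_shadows(compression_codes, n):
    # compression_codes: b byte sequences, each holding A_HL, A_LH,
    # the six mean values, and the GSs bytes of one color block
    shadows = [[] for _ in range(n)]
    for cp in compression_codes:          # CP_1 ... CP_b
        for byte in cp:
            pieces = shamir_share_byte(byte, n)
            for l in range(n):            # SP_l of this code is appended to shadow S_l
                shadows[l].append(pieces[l])
    return shadows                        # n noise-like grayscale value sequences
```

Because a fresh random coefficient is drawn for every shared byte, each shadow on its own is statistically noise-like, which is what makes the grayscale shadows leak no information individually.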


Based on the size of each part of one compression code, the size of each compression code shadow SPlk is (2×⌈log₂ nc⌉ + 6 + ⌈m×m/8⌉) bytes. Therefore, the size of each shadow is b×(2×⌈log₂ nc⌉ + 6 + ⌈m×m/8⌉) bytes, with b = X×Y/(4×m×m).

Each shadow will be sent to one participant.
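As a worked check of these formulas (using the parameters reported later in Table III), consider a 512×512 secret color image with nc = 128 and m = 8. Then b = (512×512)/(4×8×8) = 1024, and each compression code shadow occupies 2×⌈log₂ 128⌉ + 6 + ⌈64/8⌉ = 14 + 6 + 8 = 28 bytes, so each shadow is 1024×28 = 28,672 bytes. The original image occupies 3×512×512 = 786,432 bytes, giving a ratio of 786,432/28,672 ≈ 27.43, which matches the Ratio = 27.42 reported for this setting.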

C. Revealing procedure

In this procedure, the reconstructed color image is revealed when at least two participants release their own shadows. Assume that S = {S1, S2, …, Sn} is the set of n shadows distributed to the n participants and that two arbitrary shadows, Si and Sj, are used to generate the reconstructed image. This procedure consists of three steps, shown in Fig. 8.

Figure 8. Flowchart of the revealing procedure

Step 1: Divide shadows Si and Sj into b non-overlapping compression code shadows of the same size. Because the shadow size is b×(2×⌈log₂ nc⌉ + 6 + ⌈m×m/8⌉) bytes, each compression code shadow is (2×⌈log₂ nc⌉ + 6 + ⌈m×m/8⌉) bytes. Let SPxi and SPxj, for x = 1, 2, …, b, be the compression code shadows of Si and Sj, respectively.

Step 2: Recover the coefficients of the HL, LH, and LL sub-bands. The coefficients are retrieved from the set of pairs (SPxi, SPxj), for x = 1, 2, …, b; the details of this step are shown in Fig. 9. First, we apply the revealing phase of Shamir's scheme to the pairs of compression code shadows in succession: from one pair (SPxi, SPxj), we get one compression code CPx. Then, we divide each compression code, of size (2×⌈log₂ nc⌉ + 6 + ⌈m×m/8⌉), into four groups. Each of the first two groups is ⌈log₂ nc⌉ bytes, corresponding to the HL index and the LH index, respectively. The third group has six bytes, corresponding to the six mean values R_XHi, R_XLi, G_XHi, G_XLi, B_XHi, and B_XLi. The last group, which is used to generate a common bitmap, is ⌈m×m/8⌉ bytes. Applying the VQ decoding phase to the HL and LH indices, we restore the coefficients of the HL and LH sub-bands. After transforming the last group into a bit stream, one common bitmap of size m×m is obtained. Moreover, applying the GSBTC decoding phase to the derived common bitmap and the six mean values (the third group), we obtain three LL sub-bands, one each for the Red, Green, and Blue channels. Based on the b pairs of compression code shadows (SPxi, SPxj), the set of b compression codes CPx is reconstructed, from which we further obtain a set of b HL sub-bands, a set of b LH sub-bands, and a set of 3×b LL sub-bands, for x = 1, 2, …, b.

Figure 9. Detailed procedure for Step 2 in the revealing phase

Step 3: Generate the reconstructed image by applying the inverse DWT (IDWT). Since the HL index and LH index are generated by average estimation, the HL and LH sub-bands are shared in the IDWT to generate the Red, Green, and Blue channels. To reconstruct the Red channel, for instance, we apply the IDWT to four sub-bands: the first LL sub-band, the HL sub-band, the LH sub-band, and an HH sub-band set to 0 as the default value. Similarly, the Green channel is reconstructed by applying the IDWT to the second LL sub-band, the HL sub-band, the LH sub-band, and the 0-HH sub-band, and the Blue channel is produced by applying the IDWT to the last LL sub-band, the HL sub-band, the LH sub-band, and the 0-HH sub-band.

IV. EXPERIMENTAL RESULTS

The experimental results presented in this section demonstrate the performance of the proposed scheme. The codebooks are generated from a set of 100 real training images, namely "Barbara", "Baboon", "Lena", "Peppers", "Jet", "Tiffany", "Toy", "Sailboat", and many pictures from the "Kodak" set. To conduct the experiments, four 512×512 color images, "Barbara", "Baboon", "Lena", and "Peppers", are used as the test images. All computing is performed on a PC with a 1.83 GHz Intel(R) Core™2 CPU and 1 GB of RAM; the operating system is Windows XP Professional, and our scheme is programmed in Matlab 7.0.

In the proposed secret sharing scheme, the quality of the reconstructed color images depends on two parameters: the number of generated codewords (nc) and the codeword size (m×m). The peak signal-to-noise ratio (PSNR) is used to evaluate the quality of the reconstructed secret color image; it is defined in (11):

PSNR = 10 × log₁₀ (255² / MSE) (dB),        (11)

where MSE is the mean square error between the original image and the reconstructed one. For an original color image of size X×Y, the MSE is defined in (12).


MSE = (1/(X×Y)) Σ_{i=1}^{X} Σ_{j=1}^{Y} [(Rij − R′ij)² + (Gij − G′ij)² + (Bij − B′ij)²] / 3,        (12)

where Rij, Gij, Bij and R′ij, G′ij, B′ij are the pixel values at position (i, j) of the Red, Green, and Blue channels in the original color image and the reconstructed color image, respectively. A higher PSNR indicates that the reconstructed secret color image is more similar to the original one.
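Equations (11) and (12) translate directly into a few lines of code. The sketch below assumes uint8 RGB arrays of shape (X, Y, 3); averaging the squared differences over all positions and the three channels is exactly the per-channel averaged MSE of (12).

```python
import numpy as np

def psnr_color(original, reconstructed):
    # (12): MSE averaged over all X*Y positions and the three channels
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    # (11): PSNR in dB
    return 10 * np.log10(255.0 ** 2 / mse)
```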

The reconstructed image quality, together with the ratio of the original color secret image size to the shadow size, is presented in Table I for different numbers of codewords and codeword sizes.

TABLE I. RECONSTRUCTED IMAGE QUALITY AND THE RATIO OF THE ORIGINAL IMAGE SIZE TO THE SHADOW SIZE FOR VARIANT VALUES OF nc AND m

  nc     m     Lena    Baboon   Barbara   Pepper   (PSNR, dB)
  128    2     42.89   34.15    37.80     40.12    (Ratio = 2.28)
  128    16    34.32   28.59    32.17     34.80
  256    2     43.24   35.07    38.12     40.46
  256    4     40.34   34.39    35.50     39.12
  256    8     38.24   30.14    34.85     37.69
  256    16    35.27   29.14    33.06     35.76
  512    2     44.67   35.53    39.37     41.78
  512    4     39.98   34.96    38.24     40.59
  512    8     39.16   30.62    36.71     38.07
  512    16    35.90   29.75    34.05     36.83
  1024   2     46.37   36.79    40.87     43.38
  1024   4     43.56   35.02    38.39     40.75
  1024   8     40.81   31.15    35.50     38.40
  1024   16    37.22   30.56    34.89     37.70

To demonstrate that the security of this scheme is guaranteed, that is, that each shadow reveals no information about the secret color image even though each shadow is a grayscale image, Tables III and IV show the sets of shadows generated by our scheme for the test images "Barbara", "Baboon", "Lena", and "Peppers". The number of shadows is set to 3, the number of codewords is set to 128 and 1024, respectively, and the size of each block derived from the original image is 16×16 (m = 8).

CORRESPONDING SHADOWS WHEN N=3

TABLE IV.

CORRESPONDING SHADOWS WHEN N= 3

Original Image

nc=128, m=8, Ratio=27.42

Shadow 1

Barbara PSNR=34.47dB

Lena PSNR=36.77 dB

Baboon PSNR=30.42 dB

Pepper PSNR=36.07 dB

Shadow 2

nc=1024, m=8, Ratio=22.58

Shadow 3 PSNR=35.50 dB

PSNR=40.81 dB

PSNR=31.15 dB

PSNR=38.40 dB

From Table I we can see that the average quality of the reconstructed images is satisfactory. The experimental results show that the larger the value of nc, the better the quality of the reconstructed color image, at the cost of a larger shadow size. When an original image is divided into 4×4 blocks (m = 2), the shadow size is about 12 times greater than when it is divided into 16×16 blocks (m = 8), while the quality of the reconstructed image increases by 1.7 dB to 6.2 dB. Therefore, there exists a tradeoff between the quality of the reconstructed image and the shadow size: a larger shadow size leads to higher transmission cost. The relationship among the reconstructed image quality, the number of codewords, and the codeword size is summarized in Table II.

m

128 128 128

2 4 8

RELATIONSHIP AMONG PSNR, NC AND M

Lena 42.89 39.27 36.77

PSNR (dB) Baboon Barbara 34.15 37.8 33.45 35.61 30.42 34.47

Pepper 40.12 38.8 37.2


From Tables III and IV, we can see that each shadow is a random-like grayscale image; therefore, the security of the proposed scheme is guaranteed. Note that n is the number of shadows generated from one secret color image. In the share construction procedure, the size of each shadow is computed as b×(2×⌈log₂ nc⌉ + 6 + ⌈m×m/8⌉) bytes,

with b = X×Y/(4×m×m), while the size of the original image is 3×X×Y bytes. These results show that the shadow size can be efficiently minimized by the proposed scheme. The relationship among the ratio of the original color image size to the shadow size, the number of codewords, and the codeword size is summarized in Table V.

TABLE V. RELATIONSHIP AMONG RATIO, nc, AND m

  nc      m=2    m=4    m=8    m=16
  128     2.28   9.6    38.4   153.6
  256     2.18   8.7    34.9   139.6
  512     2      8      32     128
  1024    1.8    7.3    29.5   118

V. CONCLUSIONS

In this paper, we propose a new secret sharing scheme for color images. Our scheme is based on three existing techniques: the gradual search algorithm for a single bitmap BTC (GSBTC), the discrete wavelet transform (DWT), and vector quantization (VQ). In this scheme, we replace an original color image with a set of much smaller grayscale shadows. The reconstructed color image can be obtained by combining at least two shadows. Experimental results confirm that the proposed scheme not only gives high reconstructed image quality, with PSNRs ranging from 30.42 dB to 40.81 dB, but also provides smaller shadow sizes without causing significantly high computational complexity. Moreover, each shadow reveals no information about the original image; therefore, the security of the proposed scheme is guaranteed.

REFERENCES

[1] A. Adhikari and S. Sikdar, "A New (2, n)-Visual Threshold Scheme for Color Images," INDOCRYPT 2003, Lecture Notes in Computer Science, Vol. 2904, pp. 148-161, Dec. 2003.
[2] T. ElGamal, "A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms," IEEE Transactions on Information Theory, Vol. 31, pp. 469-472, 1985.
[3] G. R. Blakley, "Safeguarding Cryptographic Keys," Proceedings of the National Computer Conference, American Federation of Information Processing Societies, pp. 313-317, Jun. 1979.
[4] C. C. Chang and I. C. Lin, "A New (t, n) Threshold Image Hiding Scheme for Sharing a Secret Color Image," Proceedings of ICCT 2003, Vol. 1, pp. 196-202, 2003.
[5] C. C. Chang, C. C. Lin, T. H. N. Le, and B. H. Le, "A New Probabilistic Visual Secret Sharing Scheme for Color Images," International Journal of Intelligent Information Technology Application (IJIITA), Vol. 1, No. 1, pp. 1-9, Jul. 2008.
[6] C. C. Chang, C. C. Lin, T. H. N. Le, and B. H. Le, "A Probabilistic Visual Secret Sharing Scheme for Grayscale Images with Voting Strategy," Intelligent Information Hiding and Multimedia Signal Processing, pp. 184-188, Aug. 2008.
[7] Y. F. Chen, Y. K. Chan, C. C. Huang, C. C. Tsai, and Y. P. Chu, "A Multiple-Level Visual Secret-Sharing Scheme without Image Size Expansion," Information Sciences, Vol. 177, No. 21, pp. 4696-4710, Nov. 2007.
[8] S. Cimato, R. De Prisco, and A. De Santis, "Probabilistic Visual Cryptography Schemes," The Computer Journal, Vol. 49, No. 1, pp. 97-107, 2006.
[9] J. B. Feng, H. C. Wu, C. S. Tsai, and Y. P. Chu, "A New Multi-Secret Images Sharing Scheme Using Lagrange's Interpolation," The Journal of Systems and Software, Vol. 76, No. 3, pp. 327-339, 2005.
[10] J. Foster, R. M. Gray, and M. O. Dunham, "Finite State Vector Quantization for Waveform Coding," IEEE Transactions on Information Theory, Vol. 31, No. 3, pp. 348-359, 1985.
[11] Y. C. Hou, "Visual Cryptography for Color Images," Pattern Recognition, Vol. 36, No. 7, pp. 1619-1629, 2003.
[12] Y. C. Hu, B. H. Su, and C. C. Tsou, "Fast VQ Codebook Search Algorithm for Grayscale Images," Image and Vision Computing, Vol. 26, pp. 657-666, 2008.
[13] R. Ito, H. Kuwakado, and H. Tanaka, "Image Size Invariant Visual Cryptography," IEICE Transactions on Fundamentals, No. 10, pp. 2172-2177, 1999.
[14] M. Iwamoto and H. Yamamoto, "The Optimal n-out-of-n Visual Secret Sharing Scheme for Grayscale Images," IEICE Transactions on Fundamentals, Vol. 10, pp. 2238-2247, 2002.
[15] T. Kim, "Side Match and Overlap Match Vector Quantizers for Images," IEEE Transactions on Image Processing, Vol. 1, No. 2, pp. 170-185, 1992.
[16] J. Z. C. Lai, Y. C. Liaw, and W. Lo, "Artifact Reduction of JPEG Coded Images Using Mean-Removed Classified Vector Quantization," Signal Processing, Vol. 82, No. 10, pp. 1375-1388, 2002.
[17] J. Z. C. Lai, Y. C. Liaw, and J. Liu, "A Fast VQ Codebook Generation Algorithm Using Codeword Displacement," Pattern Recognition, Vol. 41, No. 1, pp. 315-319, Jan. 2008.
[18] Y. C. Liaw, J. Z. C. Lai, and W. Lo, "Image Restoration of Compressed Images Using Classified Vector Quantization," Pattern Recognition, Vol. 35, No. 2, pp. 329-340, 2002.
[19] F. Murtagh, "The Haar Wavelet Transform of a Dendrogram," Journal of Classification, Vol. 24, No. 1, pp. 3-32, Jun. 2007.
[20] M. Naor and A. Shamir, "Visual Cryptography," Advances in Cryptology - Eurocrypt'94, Lecture Notes in Computer Science, Vol. 950, pp. 1-12, 1995.
[21] N. M. Nasrabadi and Y. Feng, "Image Compression Using Address-Vector Quantization," IEEE Transactions on Communications, Vol. 38, No. 12, pp. 2166-2173, Dec. 1990.
[22] K. N. Ngan and H. C. Koh, "Predictive Classified Vector Quantization," IEEE Transactions on Image Processing, Vol. 1, No. 3, pp. 269-280, 1992.
[23] A. Shamir, "How to Share a Secret," Communications of the ACM, Vol. 22, No. 11, pp. 612-613, 1979.
[24] R. Z. Wang and C. H. Su, "Secret Image Sharing with Smaller Shadow Images," Pattern Recognition Letters, Vol. 27, No. 6, pp. 551-555, 2006.
[25] D. Wang, L. Zhang, N. Ma, and X. Li, "Two Secret Sharing Schemes Based on Boolean Operations," Pattern Recognition, Vol. 40, pp. 2776-2785, 2007.
[26] C. N. Yang, "New Visual Secret Sharing Schemes Using Probabilistic Method," Pattern Recognition Letters, Vol. 25, No. 4, pp. 481-494, 2004.
