Advances in Cryptology - ASIACRYPT'96: LNCS 1163, pp. 185-195, November 1996

Limiting the Visible Space Visual Secret Sharing Schemes and their Application to Human Identification

Kazukuni Kobara and Hideki Imai
Institute of Industrial Science, The University of Tokyo
Roppongi, Minato-ku, Tokyo 106, Japan
E-mail: [email protected]

Abstract. In this paper, we propose a new use of visual secret sharing schemes: we use visual secret sharing schemes to limit the space from which one can see the decoded image. (We call this scheme limiting the visible space visual secret sharing schemes (LVSVSS).) We investigate the visibility of the decoded image when the viewpoint is changed, and categorize the space where the viewpoint belongs according to the visibility. Finally, we consider the application of LVSVSS to human identification, and propose a secure human identification scheme. The proposed human identification scheme is secure against peeping, and can detect simple fake terminals. Moreover, it can be realized easily at a small cost.

1 Introduction

It is very dangerous to trust only one person or only one organization to manage very important information. To deal with such situations, a scheme to share a secret among several members, called a secret sharing scheme or a (k, n) threshold scheme, was proposed by A. Shamir [1]. In a (k, n) threshold scheme, a secret is divided into n pieces, and each single piece looks like random data by itself. In order to decode the secret, members have to gather k pieces; that is, k persons' permission is required to decode the secret. Since then, various studies on secret sharing schemes have been carried out. In particular, visual secret sharing schemes (VSS), originally proposed by M. Naor and A. Shamir [2], are very interesting: members who have shared a secret can decode it without the help of computers in the decoding process. Shared secrets (images) are printed on transparencies as patterns, and members can decode the secret image and see it by stacking some of the transparencies. However, even if one uses these secret sharing schemes, once an attacker peeps at the decoded image, it may easily be leaked. To deal with this, some people may decode the secret only after confirming that no attacker is around. However, even when that can be confirmed, the worry about peeping remains: a video camera may be set up somewhere secretly. Some people may decode the secret under a cover such as a piece of cloth, a corrugated carton, or their hands. However, it is troublesome to cover it while worrying about others' eyes and cameras every

Fig. 1. The principle of limiting the visible space.

Fig. 2. The relation between the viewpoint and the distortion.

time a secret is decoded. Moreover, watching something so secretively is enough to arouse suspicion of immoral behavior. In this paper, we propose a new use of visual secret sharing schemes, which we call limiting the visible space visual secret sharing schemes (LVSVSS): we use VSS to limit the space from which one can see a decoded image. We investigate the visibility of the decoded image when the viewpoint is changed, and categorize the space where the viewpoint belongs according to the visibility. Finally, we consider the application of LVSVSS to human identification, and propose a secure human identification scheme. The proposed human identification scheme is secure against peeping, and can detect simple fake terminals. Moreover, it can be realized easily at a small cost.

2 The principle of limiting the visible space

The principle of limiting the visible space is very simple (see Fig. 1). The patterns are printed on the transparencies so that an image can be decoded when a space is left between the transparencies. The separation of the transparencies makes the space from which the decoded image can be seen smaller. We call this space "the visible space". Attackers outside the visible space cannot see the image. Let the point from which the decoded image can be seen correctly be the origin of the coordinate axes, and set the x, y, z axes there. Transparency 1 is located on the plane x = x1 and transparency 2 on x = x2. Let the points overlapping each other be (x1, y1, z1) (transparency 1) and (x2, y2, z2) (transparency 2) when the viewpoint is at the origin. Then the following equations hold:

    y1 = (x1/x2) y2,    z1 = (x1/x2) z2.    (1)
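The alignment condition (1) is easy to check numerically. A minimal sketch (the function name is ours; the distances x1 = 30 cm and x2 = 27 cm are the ones used later in Fig. 6):

```python
def aligned_point(x1, x2, y2, z2):
    """Point (y1, z1) on transparency 1 (plane x = x1) that lines up with
    (x2, y2, z2) on transparency 2 when viewed from the origin, per Eq. (1)."""
    return (x1 / x2 * y2, x1 / x2 * z2)

# With x1 = 30 and x2 = 27, the point (27, 1.8, 0.9) on transparency 2
# overlaps (30, 2.0, 1.0) on transparency 1 for a viewer at the origin.
y1, z1 = aligned_point(30.0, 27.0, 1.8, 0.9)
```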

3 The relation between the viewpoint and the distortion

For simplicity, we consider the x-y plane where z = 0 (see Fig. 2). Let f(vx', vy', x2, y2, x1) be the function that calculates the point y1' on x = x1 overlapping with (x2, y2, 0) when the point (x2, y2, 0) is watched from the viewpoint (vx', vy', 0), and let g(vx', vy', x2, y2, x1) be the function that calculates the difference gy = y1' - y1. The functions are defined as follows:

    f(vx', vy', x2, y2, x1) = a(vx', x2, x1) y1 + b(vx', x2, x1, vy')
                            = a(vx', x2, x1)(y1 - b'(vx', x1, vy')) + b'(vx', x1, vy'),    (2)

    g(vx', vy', x2, y2, x1) = (a(vx', x2, x1) - 1) y1 + b(vx', x2, x1, vy')
                            = a'(vx', x2, x1)(y1 - b'(vx', x1, vy')),    (3)

where

    a(vx', x2, x1) = (x1 - vx') x2 / ((x2 - vx') x1),    (4)

    a'(vx', x2, x1) = (x1 - x2) vx' / ((x2 - vx') x1),    (5)

    b(vx', x2, x1, vy') = (x2 - x1) vy' / (x2 - vx'),    (6)

    b'(vx', x1, vy') = x1 vy' / vx'.    (7)

As a matter of course,

    y1' = f(vx', vy', x2, y2, x1),    (8)
    gy = g(vx', vy', x2, y2, x1).    (9)

In the same way as on the x-y plane where z = 0, z1' and gz on the x-z plane where y = 0 can be calculated:

    z1' = f(vx', vz', x2, z2, x1),    (10)
    gz = g(vx', vz', x2, z2, x1).    (11)

Therefore, the point on x = x1 overlapping with a point (x2, y2, z2) is (x1, y1', z1'), and the vector g from (x1, y1, z1) to (x1, y1', z1') is (0, gy, gz).
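The functions above can be transcribed directly. The sketch below (the names a_dash and b_dash for a' and b' are ours) also checks that the two forms of g in Eq. (3) agree:

```python
def a(vx, x2, x1):
    # Eq. (4); vx stands for vx'
    return (x1 - vx) * x2 / ((x2 - vx) * x1)

def b(vx, x2, x1, vy):
    # Eq. (6)
    return (x2 - x1) * vy / (x2 - vx)

def a_dash(vx, x2, x1):
    # Eq. (5), equal to a - 1
    return (x1 - x2) * vx / ((x2 - vx) * x1)

def b_dash(vx, x1, vy):
    # Eq. (7); requires vx != 0
    return x1 * vy / vx

def f(vx, vy, x2, y2, x1):
    # Eq. (2): the point y1' on x = x1 overlapping (x2, y2, 0)
    # seen from (vx, vy, 0); y1 = (x1/x2) y2 by Eq. (1).
    y1 = x1 / x2 * y2
    return a(vx, x2, x1) * y1 + b(vx, x2, x1, vy)

def g(vx, vy, x2, y2, x1):
    # Eq. (3): the shift gy = y1' - y1
    y1 = x1 / x2 * y2
    return (a(vx, x2, x1) - 1) * y1 + b(vx, x2, x1, vy)

# The factored form a'(y1 - b') of Eq. (3) gives the same shift.
vx, vy, x1, x2, y2 = 5.0, 2.0, 30.0, 27.0, 1.8
y1 = x1 / x2 * y2
same = abs(g(vx, vy, x2, y2, x1)
           - a_dash(vx, x2, x1) * (y1 - b_dash(vx, x1, vy))) < 1e-9
```

Note that a(0, x2, x1) = 1, consistent with Section 3.1 below.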

3.1 When the viewpoint is on the y-z plane where x = 0

When the viewpoint is on the y-z plane where vx' = 0, a(vx', x2, x1) in equations (2) and (3) becomes 1. Therefore,

    g = (0, b(0, x2, x1, vy'), b(0, x2, x1, vz'))
      = (0, ((x1 - x2)/x2)(-vy'), ((x1 - x2)/x2)(-vz')).    (12)

It can be seen that the distortion vector g is independent of the points on the transparencies. So it looks as if all the points on transparency 2 drifted from the corresponding points on transparency 1 by the same length.
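For instance (a small numeric check under the same sample distances, with vx' = 0):

```python
x1, x2, vy = 30.0, 27.0, 4.0

def shift_y(y1):
    """gy for a point y1 on transparency 1 viewed from (0, vy, 0):
    Eq. (3) with a = 1, so gy reduces to b(0, x2, x1, vy)."""
    a = (x1 - 0.0) * x2 / ((x2 - 0.0) * x1)   # = 1 when vx' = 0
    b = (x2 - x1) * vy / (x2 - 0.0)
    return (a - 1.0) * y1 + b

# Every point appears drifted by the same amount, ((x1 - x2)/x2)(-vy)
# as in Eq. (12), regardless of y1:
shifts = {shift_y(y1) for y1 in (-2.0, 0.0, 1.5, 3.0)}
```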

3.2 When the viewpoint is in the space x < x2 but x ≠ 0

When the viewpoint is in the space x < x2 but vx' ≠ 0, the distortion vector g is as follows:

    g = (0, a'(vx', x2, x1)(y1 - b'(vx', x1, vy')), a'(vx', x2, x1)(z1 - b'(vx', x1, vz')))
      = (0, ((x1 - x2)vx' / ((x2 - vx')x1))(y1 - x1 vy'/vx'), ((x1 - x2)vx' / ((x2 - vx')x1))(z1 - x1 vz'/vx')),    (13)

where vx' ≠ 0. Therefore, it looks as if all the points on transparency 2 drifted radially from the corresponding points on transparency 1. The center is (x1, x1 vy'/vx', x1 vz'/vx'), and the length of the drift is (x1 - x2)vx' / ((x2 - vx')x1) times the distance between the center and the point on transparency 1.
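The radial-drift claim can be sketched as follows (the sample numbers are ours): each drift vector of Eq. (13) is collinear with the ray from the center (x1 vy'/vx', x1 vz'/vx') through the point on transparency 1.

```python
x1, x2 = 30.0, 27.0
vx, vy, vz = 5.0, 2.0, 1.0

def drift(y1, z1):
    """(gy, gz) of Eq. (13) for the point (x1, y1, z1) on transparency 1."""
    k = (x1 - x2) * vx / ((x2 - vx) * x1)      # a'(vx', x2, x1)
    cy, cz = x1 * vy / vx, x1 * vz / vx        # center of the radial drift
    return (k * (y1 - cy), k * (z1 - cz))

# Collinearity with the ray from the center: the 2D cross product vanishes.
cy, cz = x1 * vy / vx, x1 * vz / vx
gy, gz = drift(3.0, 4.0)
cross = gy * (4.0 - cz) - gz * (3.0 - cy)
```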

4 The relation between the shift of two corresponding cells and its density (visibility)

We consider (2, 2) threshold schemes. Transparencies consist of square cells with sides c. Each cell has 2 × 2 square pixels with sides d. A pixel is black or transparent. Therefore, when the two transparencies are stacked on each other, a cell looks black if 4 pixels in it are black, and white if 2 pixels are black. We call the types of black cells and white cells Bi and Wi (1 ≤ i ≤ 6) respectively. We show all the kinds of cells in Fig. 3 together with the situation of the shift. In order to measure the visibility, we first define the density as the rate of the black area in a cell. Then let the lengths of the shift be gy and gz respectively, and let the expected values of the density where the shift is (gy, gz) be EGBi(gy, gz) and EGWi(gy, gz) respectively. EGBi(gy, gz) and EGWi(gy, gz) (0 ≤ gy, gz < 2d) can be expressed as follows:

    EGB1(gy, gz) = { 1 - gz/(4d) - gy/(8d) + gz gy/(8d^2)      (0 ≤ gz < d, 0 ≤ gy < 2d)
                   { 3/4                                       (d ≤ gz, 0 ≤ gy < 2d)          (14)

    EGB2(gy, gz) = { 1 - 3(gz + gy)/(8d) + 5 gz gy/(8d^2)      (0 ≤ gz < d, 0 ≤ gy < d)
                   { 1/2 + gz/(8d) + gy/(2d) - gz gy/(4d^2)    (d ≤ gz < 2d, 0 ≤ gy < d)
                   { 1/2 + gy/(8d) + gz/(2d) - gz gy/(4d^2)    (0 ≤ gz < d, d ≤ gy < 2d)
                   { 5/4 - (gz + gy)/(4d) + gz gy/(8d^2)       (d ≤ gz < 2d, d ≤ gy < 2d)     (15)

    EGB3(gy, gz) = { 1 - 3(gz + gy)/(8d) + gz gy/(2d^2)        (0 ≤ gz < d, 0 ≤ gy < d)
                   { 1/2 + gy/(4d) + gz/(8d) - gz gy/(8d^2)    (d ≤ gz < 2d, 0 ≤ gy < d)
                   { 1/2 + gz/(4d) + gy/(8d) - gz gy/(8d^2)    (0 ≤ gz < d, d ≤ gy < 2d)
                   { 3/4                                       (d ≤ gz < 2d, d ≤ gy < 2d)     (16)

    EGB4(gy, gz) = { 1 - gy/(8d) - gz/(2d) + gz gy/(4d^2)      (0 ≤ gz < d, 0 ≤ gy < 2d)
                   { 1/4 + (gy + gz)/(4d) - gz gy/(8d^2)       (d ≤ gz, 0 ≤ gy < 2d)          (17)

    EGB5(gy, gz) = EGB1(gz, gy),    (18)

    EGB6(gy, gz) = EGB4(gz, gy),    (19)

    EGWi(gy, gz) = 3/2 - EGBi(gy, gz).    (20)

Fig. 3. All the kinds of cells Bi and Wi and the situation of the shift (gy, gz).

EGBi(gy, gz) is shown in Fig. 4. If 2d ≤ gz or 2d ≤ gy, EGBi and EGWi take the same value 3/4.

Fig. 4. The expected value of the density of the black cells versus the length of the shift, EGBi(gy, gz).

Let the density in a black part and in a white part of a

decoded image be EGB(gy, gz) and EGW(gy, gz) respectively. If all kinds of cells are used uniformly, EGB(gy, gz) and EGW(gy, gz) can be expressed as follows:

    EGB(gy, gz) = (1/6) Σ_{i=1}^{6} EGBi(gy, gz),    (21)

    EGW(gy, gz) = (1/6) Σ_{i=1}^{6} EGWi(gy, gz) = 3/2 - EGB(gy, gz).    (22)

The visibility of a part of a decoded image depends on the difference of the density between the black cells and the white cells of which that part of the image consists. Therefore, we use the normalized value 2|EGB(gy, gz) - EGW(gy, gz)| as a measure of the visibility EG(gy, gz):

    EG(gy, gz) = 2|EGB(gy, gz) - EGW(gy, gz)|.    (23)

EG(gy, gz) is shown in Fig. 5. Attention should be paid to the regions around (gy, gz) = (±d, 0) and (gy, gz) = (0, ±d): in these regions the visibility is a little higher than in the neighborhood, and black and white are reversed.

Fig. 5. The visibility of a part of a decoded image where the shift of the cells is (gy, gz), EG(gy, gz). (The right side shows the contour map of EG(gy, gz).)
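The densities and the visibility can be tabulated numerically. The sketch below transcribes the piecewise formulas of Eqs. (14)-(20) (our reading of the branches, with d = 1 by default) together with Eqs. (21)-(23); it reproduces the behavior noted for Fig. 5: around (gy, gz) = (d, 0) the visibility exceeds that of the neighborhood, and EGB < EGW there, i.e. black and white are reversed.

```python
def egb1(gy, gz, d=1.0):
    # Eq. (14)
    if gz >= d or gy >= 2 * d:
        return 0.75
    return 1 - gz / (4 * d) - gy / (8 * d) + gy * gz / (8 * d * d)

def egb2(gy, gz, d=1.0):
    # Eq. (15)
    if gy >= 2 * d or gz >= 2 * d:
        return 0.75
    if gz < d and gy < d:
        return 1 - 3 * (gy + gz) / (8 * d) + 5 * gy * gz / (8 * d * d)
    if gz >= d and gy < d:
        return 0.5 + gz / (8 * d) + gy / (2 * d) - gy * gz / (4 * d * d)
    if gz < d and gy >= d:
        return 0.5 + gy / (8 * d) + gz / (2 * d) - gy * gz / (4 * d * d)
    return 1.25 - (gy + gz) / (4 * d) + gy * gz / (8 * d * d)

def egb3(gy, gz, d=1.0):
    # Eq. (16)
    if gy >= 2 * d or gz >= 2 * d or (gy >= d and gz >= d):
        return 0.75
    if gz < d and gy < d:
        return 1 - 3 * (gy + gz) / (8 * d) + gy * gz / (2 * d * d)
    if gz >= d:
        return 0.5 + gy / (4 * d) + gz / (8 * d) - gy * gz / (8 * d * d)
    return 0.5 + gz / (4 * d) + gy / (8 * d) - gy * gz / (8 * d * d)

def egb4(gy, gz, d=1.0):
    # Eq. (17)
    if gy >= 2 * d or gz >= 2 * d:
        return 0.75
    if gz < d:
        return 1 - gy / (8 * d) - gz / (2 * d) + gy * gz / (4 * d * d)
    return 0.25 + (gy + gz) / (4 * d) - gy * gz / (8 * d * d)

def egb5(gy, gz, d=1.0):
    return egb1(gz, gy, d)          # Eq. (18)

def egb6(gy, gz, d=1.0):
    return egb4(gz, gy, d)          # Eq. (19)

CELLS = (egb1, egb2, egb3, egb4, egb5, egb6)

def egb(gy, gz, d=1.0):
    # Eq. (21): all cell types used uniformly
    return sum(e(gy, gz, d) for e in CELLS) / 6

def egw(gy, gz, d=1.0):
    # Eqs. (20)/(22): average white-cell density
    return 1.5 - egb(gy, gz, d)

def eg(gy, gz, d=1.0):
    # Eq. (23): visibility
    return 2 * abs(egb(gy, gz, d) - egw(gy, gz, d))
```

With d = 1, eg(0, 0) = 1 (perfect alignment), while eg(1.0, 0.0) = 1/6 exceeds eg(0.8, 0.0) ≈ 0.07, the small secondary peak visible in Fig. 5.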

5 Categorization of the space

In this section, we categorize the space where the viewpoint belongs according to the visibility of the decoded image. For simplicity, we consider the x-y plane where z = 0, and suppose the visibility of a cell is categorized as follows:

    0 ≤ |gy| < g0 : clearly visible
    g0 ≤ |gy| < c : slightly visible
    c ≤ |gy|      : invisible

where gy, g0 and c denote lengths measured on transparency 1: c is the length of a side of a cell, and g0 depends on the sensitivity of the viewer. When vx' ≠ 0, we should also consider the difference in size of the corresponding two cells. However, if a(vx', x2, x1) ≈ 1, or if the corresponding two cells do not overlap at all, the effect is very small or absent, so we can ignore the difference in size under those conditions. The visibility of the whole image can be estimated from the visibility of the image at the boundary (x1, ±r1, 0) and the size of the clearly visible or slightly visible region on the image. The size of the region can be derived from the length from the most visible point to the point where the corresponding two points are shifted by gy on x = x1 and z = 0. Let this length be sy. sy is given by the following equation:

    sy = { gy (x2 - vx') x1 / ((x1 - x2) vx')     (vx' > 0)
         { infinite                               (vx' = 0)
         { -gy (x2 - vx') x1 / ((x1 - x2) vx')    (vx' < 0)    (24)
By substituting g0 or c for gy, the sizes of the clearly visible region and the slightly visible region on the image can be derived. We show sy versus vx' in Fig. 6.
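Eq. (24) transcribes directly (the function name is ours; the defaults are the parameters of Fig. 6):

```python
import math

def s_y(gy, vx, x1=30.0, x2=27.0):
    """Eq. (24): distance, measured on transparency 1, from the most visible
    point to the point whose shift reaches gy, for viewpoint x-coordinate vx
    (standing for vx')."""
    if vx == 0:
        return math.inf                  # uniform drift: no such point exists
    s = gy * (x2 - vx) * x1 / ((x1 - x2) * vx)
    return s if vx > 0 else -s

# With x1 = 30 cm, x2 = 27 cm and gy = c = 0.25 cm as in Fig. 6:
span = s_y(0.25, 20.0)                   # 0.875 cm at vx' = 20 cm
```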

Fig. 6. sy versus vx' (x1 = 30 cm, x2 = 27 cm, c = 0.25 cm, g0 = c/4).

Then the viewpoints where the difference between y1' and y1 is gy can be derived by the following equation:

    vy' = (gy/(x1 - x2) + y1/x1) vx' - x2 gy/(x1 - x2).    (25)
Therefore, by substituting ±g0 or ±c for gy, and ±r1 for y1 respectively, the space where the viewpoint belongs can be categorized as follows (see Fig. 7):

visible space: One (or an attacker) can see the whole decoded image clearly.

partly visible space: One (or an attacker) cannot see the whole decoded image clearly, but can see a part of it.
  space 1: One (or an attacker) may see the region around the center of the decoded image clearly, but cannot see the region around the boundary at all.
  space 2: One (or an attacker) may see a region somewhere between the center and one boundary of the decoded image clearly, but cannot see the region around the opposite boundary at all.
  space 3: One (or an attacker) may see a region around one boundary of the decoded image clearly, but cannot see the region around the opposite boundary at all.
  space 4: One (or an attacker) may see the region around the center of the decoded image clearly, and may also see the region around the boundary slightly.
  space 5: One (or an attacker) may see a region somewhere between the center and one boundary of the decoded image clearly, and may also see the region around the opposite boundary slightly.

slightly visible space: One (or an attacker) cannot see the decoded image clearly, but may see the whole decoded image or a part of it slightly.
  space 6: One (or an attacker) may see a region around one boundary of the decoded image slightly, but cannot see the opposite boundary at all.
  space 7: One (or an attacker) may see the whole decoded image slightly, but cannot see it clearly.

invisible space: One (or an attacker) cannot see the decoded image at all.

Fig. 7. Visible space properties.

Fig. 8. Visible space properties in practice.

If the size of the section of the slightly visible space and the visible space is designed to be smaller than the size of one's head or face, attackers cannot see the decoded image at all from anywhere: when an attacker looks from behind the person, the part of the decoded image that the attacker could see is hidden behind the person's head, and when from in front of the person, the attacker can be detected before the image is decoded (see Fig. 8). The size of the section of the slightly visible space and the visible space can be changed by controlling the length of the sides of the cells, c. Let l be the length from the origin (0, 0, 0) to the border between the invisible space and the slightly visible space on x = 0, z = 0. The relation between c and l is given by the following equation:

    c = ((x1 - x2)/x2) l.    (26)

6 Applications to Human Identification

Current human identification schemes using secret codes or passwords are not secure enough against peeping at the input process. To overcome this problem, several schemes have been proposed. Some of these schemes use a zero-knowledge interactive proof [3]-[6] or a one-time password [7]. These schemes are robust against wiretapping. However, in these schemes the verifiers do not verify the human provers themselves; they only verify whether the devices are identical. Therefore we call these schemes "indirect human identification schemes", to tell them apart from "direct human identification schemes" in which the verifiers can verify the provers themselves. On the other hand, direct human identification schemes which are somewhat robust against

Fig. 9. Application of LVSVSS to challenge-response type direct human identification.

peeping have been proposed [8] [9]. These schemes use simple challenge-response protocols so that human provers can compute the responses by themselves. (We call these schemes "challenge-response type direct human identification schemes" (CRDHI).) These schemes are certainly secure against peeping at either the challenges or the accepted responses, but not so secure against peeping at both of them [10], because attackers can guess a prover's secret from several pairs of challenges and their corresponding responses. This is a serious problem for these schemes. However, if we can prevent attackers from peeping at either the challenges or the accepted responses, we can make these schemes extremely secure against peeping. That is why we propose to apply LVSVSS to CRDHI. In the proposed scheme, the verifier displays challenges by using LVSVSS. The details are as follows (see Fig. 9). First, the verifier makes a transparency 2 and gives it to the prover secretly. The prover selects a secret sp and informs the verifier of it secretly. In the identification process, the verifier makes a pattern such that a challenge is decoded when transparency 2 is stacked on it, and displays the pattern. The prover stacks his/her transparency (transparency 2) on the display and sees the decoded challenge. Then he/she makes the response from sp and the decoded challenge, and returns it. Finally, the verifier verifies the response. The prover can see the decoded challenges, but attackers cannot peep at them; therefore, the security against peeping becomes exceedingly higher. Moreover, with the proposed scheme it is possible to detect simple fake terminals before a response is input, because a simple fake terminal cannot display a proper pattern from which a proper challenge is decoded by stacking the prover's transparency on it (although a high-grade fake terminal may be able to do so). Another advantage of the proposed identification scheme is that it can be realized easily at a small cost.
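The flow above can be mimicked in toy form. The response rule below (answering with the challenge symbols at the prover's secret positions) is only an illustrative stand-in of our own, not the scheme of [8] or [9]; in the real protocol, the challenge string would be readable only through the stacked transparencies.

```python
import secrets

def make_challenge(n=8):
    """Verifier's challenge: a random o/x string, displayed via LVSVSS so
    that only the prover (inside the visible space) can read it."""
    return "".join(secrets.choice("ox") for _ in range(n))

def respond(sp, challenge):
    """Prover's response: the challenge symbols at the secret positions sp
    (a hypothetical, human-computable response rule)."""
    return "".join(challenge[i] for i in sp)

def verify(sp, challenge, response):
    """Verifier checks the response against the displayed challenge."""
    return respond(sp, challenge) == response

sp = (1, 4, 6)                      # shared secret: a tuple of positions
challenge = make_challenge()
ok = verify(sp, challenge, respond(sp, challenge))
```

A peeping attacker who sees only the response learns nothing about which challenge it answers, which is the property LVSVSS provides.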

7 Conclusions

We proposed a new use of visual secret sharing schemes to limit the space from which one can see the decoded image, and named this scheme limiting the visible space visual secret sharing schemes (LVSVSS). We then investigated the visibility of the decoded image when the viewpoint is changed, and categorized the space where the viewpoint belongs according to the visibility. Finally, we considered the application of LVSVSS to human identification, and proposed a secure human identification scheme. The proposed human identification scheme is secure against peeping, and can detect simple fake terminals.

References

1. A. Shamir. "How to share a secret". Communications of the ACM, 22(11):612-613, 1979.
2. M. Naor and A. Shamir. "Visual cryptography". In Proc. of EUROCRYPT '94, LNCS 950, pages 1-12. Springer-Verlag, 1994.
3. A. Fiat and A. Shamir. "How to prove yourself". In Proc. of CRYPTO '86, LNCS 263, pages 186-194. Springer-Verlag, 1986.
4. A. Shamir. "An efficient identification scheme based on permuted kernels". In Proc. of CRYPTO '89, LNCS 435, pages 606-609. Springer-Verlag, 1990.
5. J. Stern. "A new identification scheme based on syndrome decoding". In Proc. of CRYPTO '93, LNCS 773, pages 13-21. Springer-Verlag, 1994.
6. J. Stern. "Designing identification schemes with keys of short size". In Proc. of CRYPTO '94, LNCS 839, pages 164-173. Springer-Verlag, 1994.
7. N. Haller. "The S/KEY(TM) one-time password system". In Proc. of the Internet Society Symposium on Network and Distributed System Security, pages 151-158, 1994.
8. T. Matsumoto and H. Imai. "Human identification through insecure channel". In Proc. of EUROCRYPT '91, LNCS 547, pages 409-421. Springer-Verlag, 1991.
9. H. Ijima and T. Matsumoto. "A simple scheme for challenge-response type human identification (in Japanese)". In Proc. of Symposium on Cryptography and Information Security (SCIS94-13C), 1994.
10. K. Kobara and H. Imai. "On the properties of the security against peeping attacks on challenge-response type direct human identification schemes using uniform mapping (in Japanese)". IEICE Trans. (A), J79-A(8), 1996.
