A Neural-Network Approach for Visual Cryptography and Authorization

Tai-Wen Yue
Computer Science and Engineering, Tatung University, Taiwan
[email protected]

Suchen Chiang
Computer Science and Engineering, Tatung University, Taiwan
[email protected]
Abstract. In this paper, we propose a neural-network approach for visual authorization. It is an application of visual cryptography. The scheme contains a key-share and a set of user-shares. The administrator owns the key-share, and each user owns a user-share issued by the administrator from the user-share set. The shares in the user-share set are visually indistinguishable, i.e., they have the same pictorial meaning. However, stacking the key-share with different user-shares reveals significantly different images. Therefore, the administrator (in fact, only the administrator) can visually recognize the authority assigned to a particular user by viewing the information appearing in the superposed image of the key-share and the user-share.
1 Introduction
Visual cryptography is a cryptographic scheme to achieve visual secret sharing [5, 7]. Specifically, a set of informative images, also called target or secret images, is separated into a set of shares, also called shadow images, which are recorded on transparencies. Each share alone does not reveal any piece of the secret residing in the target images. However, if one is able to acquire a number of legal shares associated with the application’s access scheme, e.g., one described using an access structure as in [1, 3], and stack them together, the hidden secret distributed among these shares can be disclosed from the superposed image with the naked eye. Visual cryptography finds many applications [4, 6, 9] in the cryptographic field, such as key management, message concealment, authorization, authentication and entertainment.

This paper proposes a neural-network approach for applying visual cryptography to authorization. Authentication and authorization are always addressed in cryptography. Authentication establishes who you are; authorization establishes what you are allowed to do. Sometimes, however, one can prove authorization without exposing one’s identity. For example, whoever holds the pass of a mansion will be allowed to pass through, no matter who he is. People holding different passes possess different authorities.

Figure 1 schematically shows the access scheme defined for visual authorization in this research. Note that all shares shown in the figure are different binary (halftone) images, although the pictorial meaning of some of them is the same. Two types of resources are shown in the figure. To access a particular resource, a user must show a share with a particular visually recognizable pattern to the administrator. To access resource 1, for example, the share must read as “Lena”; for resource 2, it must read as “Mona Lisa”. The administrator owns a key-share, the “Marilyn Monroe” image shown in the figure. By visually reading out the pattern appearing in the superposed image of the key-share and a user-share, the administrator can verify the legality of the share and, then, assign the coincident access right to its legal owner according to the authorizing rules associated with the access scheme.

Traditionally, visual cryptography was fulfilled using a codebook approach [5]. Typically, the shares’ pixels are generated by looking up a codebook indexed by each pixel value (0 or 1) in the target image. Because the code length is larger than one, the image size of the shares will be a multiple of that of the target.
Figure 1: The access scheme, defined using graytone images, for visual authorization. (The key share is stacked with the user shares of resources 1 and 2; different images represent different authorities on each resource.)

Figure 2: The Q’tron model.

Figure 3: The abstract Q’tron NN. (Interface Q’trons may run in clamp mode or free mode; hidden Q’trons run in free mode.)
The Q’tron NN (neural network) model has been successfully applied to many complex applications of visual cryptography, even to cases that are, in fact, theoretically unrealizable using traditional approaches, such as the full access scheme described in [9]. Using the Q’tron NN approach, an access scheme can be described completely using a set of graytone images (see Figure 1). This set of images is also the only requisite information to be fed into the NN to produce shares. The shares, as a result, are halftone images that mimic the graytone share-images, and stacking each subset of shares described in the access scheme produces a halftone image that mimics the corresponding graytone target-image. Using this approach, the image sizes of shares and targets are the same.

This paper is organized as follows. Section 2 gives a brief review of the Q’tron NN model, and introduces how the model can serve as a question-answering machine and/or as an associative memory. Section 3 applies the NN model to image halftoning, and investigates its auto-invertibility, i.e., image restoration. We combine several image-halftoning NN’s by adding new energy terms to build a Q’tron NN for visual cryptography in Section 4. Its applications and experimental results appear in Sections 5 and 6. Finally, we draw some conclusions in Section 7. More information related to this research, including a Java applet and source code, can be found at http://www.cse.ttu.edu.tw/twyu/vc.

2 The Q’tron NN Model
In this model of NN, the basic processing elements are called Q’trons. The number of output levels of each Q’tron can be greater than two. Specifically, let µ_i represent the i-th Q’tron in a Q’tron NN, where the output of µ_i, denoted by Q_i, takes its value in the finite integer set {0, 1, ..., q_i - 1}, with q_i (≥ 2) being the number of output levels. In addition, Q_i is weighted by a specific positive value a_i, called the active weight, which stands for the unit excitation strength of that Q’tron. The term a_i Q_i is thus referred to as the active value of µ_i, which represents the total excitation strength of that Q’tron. In a Q’tron NN, each pair of connected Q’trons µ_i and µ_j has a single connection strength, i.e., T_{ij} = T_{ji}. In the Q’tron NN model, each Q’tron is allowed to be noise-injected. Thus, the noise-injected stimulus \hat{H}_i for the Q’tron µ_i in the NN is defined as

    \hat{H}_i = H_i + N_i = \sum_{j=1}^{n} T_{ij} (a_j Q_j) + I_i + N_i,   (1)
where H_i denotes the noise-free net stimulus of µ_i, which is equal to the sum of the internal stimulus, namely \sum_{j=1}^{n} T_{ij} (a_j Q_j), and the external stimulus I_i. The term N_i denotes the piece of random noise fed into µ_i, and n denotes the number of Q’trons in the NN. The schematic diagram of a Q’tron is shown in Figure 2. The noise-injection mechanism for Q’tron NN’s was thoroughly investigated in [8]. The applications discussed in this paper do not need noise injection. Therefore, N_i = 0 and \hat{H}_i = H_i hold throughout the following discussion unless otherwise stated. At each time step, only one Q’tron is selected for level transition, subject to the following rule:

    Q_i(t + 1) = Q_i(t) + \Delta Q_i(t),   (2)

with

    \Delta Q_i(t) = +1 if \hat{H}_i(t) > \frac{1}{2} |T_{ii} a_i| and Q_i(t) < q_i - 1;
                    -1 if \hat{H}_i(t) < -\frac{1}{2} |T_{ii} a_i| and Q_i(t) > 0;
                     0 otherwise.   (3)
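To make the transition rule concrete, the following Python sketch implements Eqs. (1)-(3) for a single Q’tron in the noise-free case (N_i = 0). The array names T, a, Q, I and q_levels are our own; this is an illustrative sketch, not the authors’ implementation.

```python
import numpy as np

def qtron_update(i, T, a, Q, I, q_levels):
    """Apply the Q'tron level-transition rule (Eqs. (1)-(3)) to Q'tron i.

    T: (n, n) symmetric connection strengths, a: active weights,
    Q: current output levels, I: external stimuli, q_levels: numbers of levels.
    Noise-free case (N_i = 0), so the stimulus is H_i = sum_j T_ij a_j Q_j + I_i.
    """
    H = T[i] @ (a * Q) + I[i]                # noise-free net stimulus, Eq. (1)
    threshold = 0.5 * abs(T[i, i] * a[i])
    if H > threshold and Q[i] < q_levels[i] - 1:
        Q[i] += 1                            # first case of Eq. (3)
    elif H < -threshold and Q[i] > 0:
        Q[i] -= 1                            # second case of Eq. (3)
    # otherwise Q[i] stays unchanged
    return Q[i]
```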
Operating Modes of Q’trons
Each Q’tron can be operated either in clamp mode, i.e., its output level is clamped fixed at a particular level, or in free mode, i.e., its output level is allowed to be updated according to the level-transition rule specified in Eq. (2). Furthermore, we categorize the Q’trons in an NN into two types: interface Q’trons and hidden Q’trons, see Figure 3. The former provide an environment to interface with the external world, whereas the latter are functionally necessary for solving certain problems. Hidden Q’trons usually run in free mode only. However, the NN discussed in this paper does not need any hidden Q’tron; some examples that require hidden Q’trons are given in [9] and [10]. Interface Q’trons operated in clamp mode are used to feed the available or affirmative information into the NN. The other, free-mode interface Q’trons are used to perform association to ‘fill in’ the missing or uncertain information.

System Energy — Stability
The system energy E embedded in a Q’tron NN, called the Liapunov energy, is defined by

    E = -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} (a_i Q_i) T_{ij} (a_j Q_j) - \sum_{i=1}^{n} I_i (a_i Q_i) + K,   (4)

where n is the total number of Q’trons in the NN, and K can be any suitable constant. It was shown that the energy E defined above decreases monotonically with time. Therefore, if a problem can be mapped into one that minimizes the function E given in the above form, the corresponding NN will autonomously solve the problem after E reaches a global or local minimum. This reveals that a Q’tron NN, in fact, performs association by ‘releasing’ its internal energy. Therefore, to solve a problem using a Q’tron NN, one clamps the available or affirmative information onto the corresponding Q’trons and frees all other Q’trons. The Q’tron NN then reports a feasible solution after all of the free interface Q’trons settle down.
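The clamp/free mechanism described above can likewise be sketched as a simple relaxation loop around the single-Q’tron rule. The clamped mask and the update argument (e.g., the qtron_update function sketched earlier) are assumed names, and the convergence handling is deliberately simplified.

```python
import numpy as np

def settle(T, a, Q, I, q_levels, clamped, update, max_sweeps=1000, seed=None):
    """Relax a noise-free Q'tron NN until no free Q'tron changes level.

    clamped: boolean mask; clamped Q'trons keep their output levels fixed.
    update:  single-Q'tron transition rule, e.g. qtron_update() above.
    """
    rng = np.random.default_rng(seed)
    free = np.flatnonzero(~np.asarray(clamped))
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(free):      # asynchronous, random order
            old = Q[i]
            update(i, T, a, Q, I, q_levels)
            changed = changed or Q[i] != old
        if not changed:                      # all free Q'trons have settled down
            break
    return Q
```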
3 Image Halftoning
Halftoning is a process that converts gray (or graytone) images into binary (halftone) images. It is hoped that, when displayed and viewed with blurred eyes, the halftone image will appear similar to the original gray image.
Figure 4: The Q’tron NN for image halftoning and restoration. (Plane-G holds the graytone image and Plane-H the halftone image. For halftoning, Plane-G is in clamp mode and Plane-H in free mode; for restoration, the modes are reversed.)
Image Representation
In the sequel, we use an integer value between 0 and 255 to represent a pixel value in a graytone image. Unconventionally, however, we use 0 to represent pure white and 255 to represent the darkest black. Similarly, we use 0 and 1 to represent a white (uninked) pixel and a black (inked) pixel in a halftone image, respectively. Consider two M×N images, say G and H, which represent a graytone and a halftone image, respectively. We use two M×N Q’tron planes, Plane-G and Plane-H, to represent images G and H, respectively, as shown in Figure 4. We use Q^G_{ij} ∈ {0, 1, ..., 255} and Q^H_{kl} ∈ {0, 1} to represent the ij-th and kl-th pixel values in Plane-G and Plane-H, respectively. This suggests taking a^G_{ij} = a^G = 1 and q^G_{ij} = q^G = 256 for each Q’tron in Plane-G, and a^H_{kl} = a^H = 255 and q^H_{kl} = q^H = 2 for each Q’tron in Plane-H. Clearly, the values of a^G Q^G_{ij} and a^H Q^H_{kl} now represent the darknesses (the complements of the luminances) of the pixels in Plane-G and Plane-H, respectively.

Energies — Halftoning and Restoration
Given a graytone image, image halftoning can be done by constructing a binary image that, in an average sense, preserves the luminance (or darkness) everywhere within each small area of the original image. We define the energy function E_1 for this purpose as follows:

    E_1 = \frac{1}{2} \sum_{1 \le i \le M, 1 \le j \le N} \left( \sum_{(k,l) \in N^r_{ij}} a^H Q^H_{kl} - \sum_{(k,l) \in N^r_{ij}} a^G Q^G_{kl} \right)^2,   (5)

where N^r_{ij} denotes the r-neighborhood of the ij-th Q’tron in a Q’tron plane, defined by

    N^r_{ij} = \{ (k, l) : |i - k| \le r and |j - l| \le r \}.   (6)

The term in the parentheses of Eq. (5) represents the error in total luminance between a pair of small rectangular areas located at the same place in images G and H. Therefore, minimizing the total sum of such squared errors indeed fulfills the goal of halftoning. Although image restoration is not strictly needed for visual cryptography, we include it here to highlight the auto-invertibility of a Q’tron NN. Given a halftone image, we simply reconstruct its ‘original’ graytone image by passing it through a rectangular filter, which is described by the following energy function:

    E_2 = \frac{1}{2} \sum_{1 \le i \le M, 1 \le j \le N} \left( a^G Q^G_{ij} - \frac{1}{(2r+1)^2} \sum_{(k,l) \in N^r_{ij}} a^G Q^G_{kl} \right)^2.   (7)
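As a small illustration, the r-neighborhood of Eq. (6) can be written out directly; the boundary clipping below is our own assumption, since the paper leaves boundary indices unnoted.

```python
def neighborhood(i, j, r, M, N):
    """r-neighborhood N^r_ij of Eq. (6), clipped to an M x N Q'tron plane."""
    return [(k, l)
            for k in range(max(0, i - r), min(M, i + r + 1))
            for l in range(max(0, j - r), min(N, j + r + 1))]

# Example: the 1-neighborhood (3 x 3 window) of an interior Q'tron.
print(neighborhood(5, 5, 1, 512, 512))   # 9 coordinate pairs around (5, 5)
```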
The Q’tron NN for Halftoning and Restoration
Using the 1-neighborhood (3 × 3) locality, the total energy function E for such a dual-function NN is

    E = \lambda_h E_h + \lambda_r E_r,   (8)

with

    E_h = \frac{1}{2} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \sum_{k=i-1}^{i+1} \sum_{l=j-1}^{j+1} a^H Q^H_{kl} - \sum_{k=i-1}^{i+1} \sum_{l=j-1}^{j+1} a^G Q^G_{kl} \right)^2;   (9)

    E_r = \frac{1}{2} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( a^G Q^G_{ij} - \frac{1}{9} \sum_{k=i-1}^{i+1} \sum_{l=j-1}^{j+1} a^G Q^G_{kl} \right)^2,   (10)

where \lambda_h, \lambda_r > 0. For simplicity, the indices corresponding to boundary pixels are left unnoted in Eq. (9) and Eq. (10). The connection strengths of the Q’tron NN can then be found by mapping Eqs. (8) to (10) onto the energy function for the two-dimensional Q’tron planes shown in Figure 4, namely

    E = -\frac{1}{2} \sum_{x \in P} \sum_{i=1}^{M} \sum_{j=1}^{N} \sum_{y \in P} \sum_{k=1}^{M} \sum_{l=1}^{N} (a^x Q^x_{ij}) T^{xy}_{ij,kl} (a^y Q^y_{kl}) - \sum_{x \in P} \sum_{i=1}^{M} \sum_{j=1}^{N} I^x_{ij} (a^x Q^x_{ij}),   (11)
where P = {H, G}, T^{xy}_{ij,kl} represents the connection strength between the ij-th and the kl-th Q’trons on Q’tron planes x and y, and I^x_{ij} represents the external stimulus fed into the ij-th Q’tron on Q’tron plane x. It is easily seen that I^H_{ij} = I^G_{ij} = 0. To perform image halftoning, input a graytone image into Plane-G by clamping its Q’trons to the output levels that represent the pixel values of the image, and set all Q’trons in Plane-H free, i.e., allow their states to be updated after they have been randomly initialized. Conversely, one can perform the inverse function, i.e., restoration, by clamping a binary image onto Plane-H and setting all Q’trons in Plane-G free.
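For concreteness, the halftoning energy of Eq. (9) can be evaluated directly as below. This is a deliberately plain sketch: the NumPy array arguments and the clipped boundary handling are our assumptions, not part of the paper.

```python
import numpy as np

def halftoning_energy(G, H, a_G=1, a_H=255, r=1):
    """Eq. (9)-style energy: squared difference between (2r+1)x(2r+1) box sums
    of darkness in Plane-G (graytone, 0..255) and Plane-H (halftone, 0/1).
    Boundary windows are simply clipped, which the paper leaves unspecified."""
    M, N = G.shape
    E = 0.0
    for i in range(M):
        for j in range(N):
            ks = slice(max(0, i - r), min(M, i + r + 1))
            ls = slice(max(0, j - r), min(N, j + r + 1))
            diff = a_H * H[ks, ls].sum() - a_G * G[ks, ls].sum()
            E += 0.5 * float(diff) ** 2
    return E
```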
4 The Q’tron NN for Visual Cryptography
In this section, we first describe how to combine three Q’tron NN’s built in the last section to fulfill the (2, 2) access scheme, read as the two-out-of-two scheme, of visual cryptography. Later, we state the method for setting the operating modes of the Q’trons to produce shares for visual authorization. The histogram reallocation of the graytone images involved in the scheme will also be addressed.

The Q’tron NN structure for (2, 2)
In (2, 2), three binary images are involved. One is the target, denoted by T, which is the secret we want to conceal; the other two are shares, denoted by S1 and S2, respectively, which are printed on transparencies. The pictorial meanings of S1 and S2 are totally unrelated to the secret depicted in T. The only way to uncover the secret is to view the superposed image of S1 and S2. Since graytone images can be converted into halftone (binary) ones using the Q’tron NN constructed in the last section, we can describe the images involved in (2, 2) purely using graytone images. We will use the notations Gx and Hx, where x ∈ {T, S1, S2}, to represent the graytone image and the halftone image of x, respectively. For example, GT and HT represent the graytone and halftone images of target T, respectively. Figure 5(a) shows the structure of the Q’tron NN for (2, 2). There are three Q’tron-plane pairs for the images (GT, HT), (GS1, HS1) and (GS2, HS2) in the NN. Without any connection among these Q’tron-plane pairs, they are, in fact, three machines that perform image halftoning independently. In the following subsections, we incorporate extra ‘rules’ for (2, 2) into the three NN’s.
Figure 5: (a) The Q’tron NN architecture for (2, 2); and (b) its connection strengths. (Panel (a) shows the six Q’tron planes: Plane-GT (target), Plane-HT, Plane-GS1 (user share 1/key), Plane-HS1, Plane-GS2 (user share 2) and Plane-HS2. Panel (b) lists, among others, T^{GT,GT}_{ij,kl} = -λ_h A(i,j,k,l), T^{GT,HT}_{ij,kl} = λ_h A(i,j,k,l), T^{HT,HT}_{ij,kl} = -λ_h A(i,j,k,l) - 2.25 λ_s δ_{ij,kl}, T^{HT,HS2}_{ij,kl} = 1.5 λ_s δ_{ij,kl}, T^{HS1,HS2}_{ij,kl} = -λ_s δ_{ij,kl}, T^{HS2,HS2}_{ij,kl} = -λ_h A(i,j,k,l) - λ_s δ_{ij,kl}, T^{GS2,HS2}_{ij,kl} = λ_h A(i,j,k,l) and T^{GS2,GS2}_{ij,kl} = -λ_h A(i,j,k,l); the strengths involving the S1 planes follow by symmetry.)

Table 1: Cost function (C) for share pixels (s1 and s2) and their target (t).

    s1  s2  t  |  C            s1  s2  t  |  C
    0   0   0  |  0            0   0   1  |  2.25
    0   1   1  |  0.25         0   1   0  |  1
    1   0   1  |  0.25         1   0   0  |  1
    1   1   1  |  0.25         1   1   0  |  4
Energy Functions for (2, 2)
Consider the following operation of (2, 2). Suppose that we are given three graytone images GT, GS1 and GS2, where the first describes the target, i.e., the secret, and the other two describe the shares. Clamping these three images onto Plane-GT, Plane-GS1 and Plane-GS2 in Figure 5(a), respectively, and setting all Q’trons in Plane-HT, Plane-HS1 and Plane-HS2 free, we hope that, as the NN settles down, their states satisfy the following conditions:
1. Halftone Rule — The binary images HT, HS1 and HS2 appearing in Plane-HT, Plane-HS1 and Plane-HS2, respectively, are visually similar to GT, GS1 and GS2, respectively.
2. Stacking Rule — The superposed image of HS1 and HS2 must be equal to HT.
Apparently, the first rule describes the image-halftoning process. Referring to Eq. (9), one sees that the rule can be described using the following energy function:

    E_h^{(2,2)} = \frac{1}{2} \sum_{x \in P} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \sum_{k=i-1}^{i+1} \sum_{l=j-1}^{j+1} a^H Q^{Hx}_{kl} - \sum_{k=i-1}^{i+1} \sum_{l=j-1}^{j+1} a^G Q^{Gx}_{kl} \right)^2,   (12)
where P = {T, S1, S2}. Next, let’s consider the stacking rule. Table 1 lists the possible combinations of the three pixel values, where s1, s2 ∈ {0, 1} represent the values of two share pixels, and t ∈ {0, 1} represents their stacking result. It can easily be seen that the four entries in the left half of the table are correct stackings, while the others are not. Now, consider the following cost function C:

    C(s_1, s_2, t) = [1.5 t - (s_1 + s_2)]^2.   (13)
The costs, computed using Eq. (13), for all possible combinations of (s1, s2, t) also appear in Table 1. With a little investigation, one can see that each correct stacking bears a relatively lower cost than the corresponding false one. For example,

    C(0, 1, 1) = 0.25 < 1 = C(0, 1, 0),

where the left-hand side of the inequality corresponds to a correct stacking, whereas the right-hand side corresponds to a false one. Referring to Eq. (13), we then use the following energy function to cope with the stacking rule:

    E_s^{(2,2)} = \frac{1}{2} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ 1.5 (a^H Q^{HT}_{ij}) - (a^H Q^{HS1}_{ij} + a^H Q^{HS2}_{ij}) \right]^2.   (14)
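As a quick sanity check, a few lines of Python reproduce Table 1 from Eq. (13) and confirm that every correct stacking costs less than the corresponding false one; the snippet is illustrative only.

```python
from itertools import product

def stacking_cost(s1, s2, t):
    """Cost function of Eq. (13): low when t equals the OR-stacking of s1 and s2."""
    return (1.5 * t - (s1 + s2)) ** 2

# Reproduce Table 1: a correct stacking (t == s1 | s2) always costs less
# than the corresponding false one (t == 1 - (s1 | s2)).
for s1, s2 in product((0, 1), repeat=2):
    correct, wrong = s1 | s2, 1 - (s1 | s2)
    print(s1, s2, correct, stacking_cost(s1, s2, correct),
          "<", stacking_cost(s1, s2, wrong))
```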
For simplicity, we do not add the image-restoration capability to this NN. The total energy function for (2, 2) is thus defined as

    E^{(2,2)} = \lambda_h E_h^{(2,2)} + \lambda_s E_s^{(2,2)},   (15)

where \lambda_h, \lambda_s > 0. Because the NN does not incorporate any noise-injection mechanism [8], the values of \lambda_h and \lambda_s must be carefully chosen. One good candidate is \lambda_h = 1 and \lambda_s = 10.

The Q’tron NN for (2, 2)
For clarity, the Q’tron NN parameters for (2, 2) are summarized as follows:
1. a^G = a^{Gx}_{ij} = 1 and a^H = a^{Hx}_{ij} = 255 for all i, j and x ∈ {T, S1, S2};
2. q^G = q^{Gx}_{ij} = 256 and q^H = q^{Hx}_{ij} = 2, i.e., Q^{Gx}_{ij} ∈ {0, 1, ..., 255} and Q^{Hx}_{ij} ∈ {0, 1}, for all i, j and x ∈ {T, S1, S2};
The energy function for the NN in Figure 5(a) has the same form as Eq. (11), but with P = {GT, HT, GS1, HS1, GS2, HS2}. Therefore, the external stimulus for each Q’tron and the connection strength between each pair of Q’trons can be found by mapping Eq. (15) onto Eq. (11). Accordingly,
3. I^x_{ij} = 0 for all i, j and x ∈ {GT, HT, GS1, HS1, GS2, HS2}; and
4. the connection strength between each pair of Q’trons is shown in Figure 5(b). For simplicity, each Q’tron in the figure represents its corresponding Q’tron plane. In the figure, T^{x,y}_{ij,kl} represents the connection strength between the ij-th Q’tron in Plane-x and the kl-th Q’tron in Plane-y, A(i, j, k, l) is the area function defined by

    A(i, j, k, l) = (3 - |i - k|) × (3 - |j - l|)  if |i - k| ≤ 2 and |j - l| ≤ 2, and 0 otherwise,

and \delta_{ij,kl} is the Kronecker delta function defined by

    \delta_{ij,kl} = 1 if ij = kl, and 0 otherwise.

Due to symmetry, some connection strengths are not shown in the figure.
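To make the mapping concrete, the sketch below evaluates the connection strengths of Figure 5(b) from A and δ. The strengths involving the S1 planes are filled in by the S1/S2 symmetry the text alludes to; treat this as an illustrative reading of the figure rather than a verified implementation.

```python
def area(i, j, k, l):
    """Area function A(i, j, k, l); zero when the pixels are more than 2 apart."""
    if abs(i - k) <= 2 and abs(j - l) <= 2:
        return (3 - abs(i - k)) * (3 - abs(j - l))
    return 0

def delta(i, j, k, l):
    """Kronecker delta over Q'tron positions."""
    return 1.0 if (i, j) == (k, l) else 0.0

def strength(x, y, i, j, k, l, lam_h=1.0, lam_s=10.0):
    """Connection strength T^{x,y}_{ij,kl} of Figure 5(b); symmetric in (x, y).
    Strengths for the S1 planes are assumed to mirror those for S2."""
    a, d = area(i, j, k, l), delta(i, j, k, l)
    pair = {x, y}
    if pair in ({'GT'}, {'GS1'}, {'GS2'}):                       # within a graytone plane
        return -lam_h * a
    if pair in ({'GT', 'HT'}, {'GS1', 'HS1'}, {'GS2', 'HS2'}):   # graytone-halftone pair
        return lam_h * a
    if pair == {'HT'}:
        return -lam_h * a - 2.25 * lam_s * d
    if pair in ({'HS1'}, {'HS2'}):
        return -lam_h * a - lam_s * d
    if pair in ({'HT', 'HS1'}, {'HT', 'HS2'}):
        return 1.5 * lam_s * d
    if pair == {'HS1', 'HS2'}:
        return -lam_s * d
    return 0.0                                                   # uncoupled planes
```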
Histogram Reallocation
We describe the (2, 2) access scheme using a set of graytone images, see Figure 1. This does not imply that the scheme can be fulfilled for an arbitrary set of graytone images. For example, by stacking two shares together it is impossible to obtain an image that is brighter than either of the shares. However, the problem can be resolved by reallocating their gray-level histograms. We call the problem of determining the set of feasible gray-level ranges for the images involved in an access scheme the histogram allocation problem. The problem is not difficult for (2, 2); it is nontrivial for more complex access schemes [9], however.
We now consider the histogram allocation for (2, 2). Suppose that [g^T_{min}, g^T_{max}], [g^{S1}_{min}, g^{S1}_{max}] and [g^{S2}_{min}, g^{S2}_{max}] are the ranges of gray levels assigned to images T, S1 and S2, respectively. They must satisfy the following constraints:
1. g^T_{min} ≥ max(g^{S1}_{max}, g^{S2}_{max}) — the lightest pixel (area) in the target image must be darker than the darkest pixel (area) anywhere in the two shares.
2. g^T_{max} ≤ min(255, g^{S1}_{min} + g^{S2}_{min}) — the darkest pixel (area) in the target must be able to be synthesized even by overlapping the two lightest pixels (areas) in the two shares.
Note that in the above we have used the convention that a darker pixel has a higher pixel value.
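A tiny helper, written under the paper’s darkness convention (0 = white, 255 = black), can check whether a proposed allocation satisfies the two constraints above; the function name and interface are our own.

```python
def feasible_ranges(t_range, s1_range, s2_range):
    """Check the (2, 2) histogram-allocation constraints.

    Each range is (g_min, g_max) in darkness units, where darker pixels
    carry higher values. Illustrative only; the paper states the
    constraints, not this function.
    """
    t_min, t_max = t_range
    (s1_min, s1_max), (s2_min, s2_max) = s1_range, s2_range
    c1 = t_min >= max(s1_max, s2_max)         # lightest target darker than darkest share area
    c2 = t_max <= min(255, s1_min + s2_min)   # darkest target synthesizable from lightest share areas
    return c1 and c2

# Ranges used in the experiment of Section 6:
print(feasible_ranges((180, 255), (128, 180), (128, 180)))   # True
```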
5 Applications
In the following subsections, we describe possible applications of the NN we have constructed. We assume that the histograms of the graytone images involved in these applications have been properly reallocated. We also assume that all free-mode Q’trons are randomly initialized before the NN starts running.

Application — (2, 2)
For (2, 2), the input to the Q’tron NN is a set of graytone images, say GT, GS1 and GS2, which are clamped onto Plane-GT, Plane-GS1 and Plane-GS2, respectively. Therefore, all Q’trons in Plane-GT, Plane-GS1 and Plane-GS2 are operated in clamp mode, and all of the other Q’trons are in free mode. As the NN settles down, the binary images HT, HS1 and HS2 produced in Plane-HT, Plane-HS1 and Plane-HS2 will be the halftone versions of GT, GS1 and GS2, respectively, and the superposition of HS1 and HS2 will be HT. This, hence, fulfills (2, 2).

Application — Visual Authorization
Suppose that there are n different access rights, say R_1, ..., R_n, for accessing a resource. To access the resource, a user must show a user-share, say HS, which carries the same pictorial meaning as a graytone image, say GS, to the administrator. The administrator has a key-share, say HK, which is a halftone version of a graytone image, say GK. Users have no information about HK and GK. A user is allowed to access the resource if and only if the pictorial meaning retrieved by stacking HS and HK is in the authority set, say GR = {GR_1, ..., GR_n}, which is a set of graytone images that the administrator uses to identify the access rights of users. Furthermore, if stacking HS and HK reveals the meaning of GR_i ∈ GR, the user is given access right R_i to the resource.

To make life easier, we always assign Plane-GS1 and Plane-HS1 to the key-share, and Plane-GS2 and Plane-HS2 to the user-share. One convenient method to generate the key-share is as follows. We arbitrarily choose a target from the authority set, say GR_i ∈ GR. By letting GT = GR_i, GS1 = GK and GS2 = GS, and applying the operation for (2, 2) described in the last subsection, two shares are produced in Plane-HS1 and Plane-HS2 after the NN settles down. The administrator then keeps the image appearing in Plane-HS1 as the key-share HK. Clearly, HK is visually similar to GK.

In the following, we describe an efficient method to generate user-shares by taking advantage of available knowledge. Clearly, overlapping a black pixel in one share with a pixel (black or white) in another share can only produce a black pixel. Therefore, when HK is stacked with any share, the pixels at the positions where HK’s pixels are black must also be black. With this knowledge, we can use the following method, sketched in the code below, to produce user-shares. Suppose that we want to produce a user-share with access right R_j. First, we copy GS and GR_j to Plane-GS2 and Plane-GT, respectively, and copy HK both to Plane-HS1 and Plane-HT. All Q’trons in Plane-HS1, Plane-GS2 and Plane-GT are set to clamp mode. Additionally, the Q’trons in Plane-HT whose output levels are now one, i.e., black pixels, are also set to clamp mode. All other Q’trons are set to free mode. With such an initial setting, we obtain the desired user-share from Plane-HS2 when the NN settles down. Note that Plane-GS1 plays no role in this application.
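The sketch below spells out the operating-mode bookkeeping for the user-share generation procedure just described. The plane container, the settle_fn interface and all names are assumptions; only the clamp/free assignments mirror the text.

```python
import numpy as np

def make_user_share(GS, GRj, HK, settle_fn):
    """Illustrative mode setup for producing a user-share with access right R_j.

    GS: graytone user-share image, GRj: graytone target GR_j, HK: binary
    key-share. settle_fn relaxes the (2, 2) Q'tron NN given, for every
    plane, its initial output levels and a boolean clamp mask.
    """
    rng = np.random.default_rng()
    planes = {
        # plane:  (initial output levels,        clamp mask)
        'GT':  (GRj.copy(),                      np.ones(GRj.shape, bool)),
        'GS2': (GS.copy(),                       np.ones(GS.shape, bool)),
        'HS1': (HK.copy(),                       np.ones(HK.shape, bool)),
        'HT':  (HK.copy(),                       HK.astype(bool)),              # clamp only black pixels
        'GS1': (rng.integers(0, 256, HK.shape),  np.zeros(HK.shape, bool)),     # free; plays no role
        'HS2': (rng.integers(0, 2, HK.shape),    np.zeros(HK.shape, bool)),     # free
    }
    settle_fn(planes)            # relax until all free Q'trons settle down
    return planes['HS2'][0]      # the desired user-share
```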
Figure 6: An experimental result of visual authorization, see text.
6 Experimental Results
Figure 6 shows an experiment on visual authorization. In this experiment, the size of the authorization set is 3, as shown in Figures 6(a) to (c). The graytone versions of the key-share and the user-share are shown in Figures 6(d) and (e), respectively. All graytone images shown in the figure are the original ones, i.e., their histograms have not been modified. The ranges of gray levels assigned to the images in the authorization set, the key-share and the user-share were [180, 255], [128, 180] and [128, 180], respectively. For contrast enhancement, histogram equalization was performed on these images before their histograms were mapped into these ranges. We used the method described in the last section to produce the key-share, shown in Figure 6(f), and three binary user-shares, shown in Figures 6(g) to (i), to account for the three authorities defined in the authorization set. One can see that, short of carefully examining their pixel distributions, these images are visually indistinguishable. However, the superposed images of these user-shares with the key-share are quite different, as shown in Figures 6(j) to (l).
7 Conclusions

In this paper, we have proposed a novel approach for visual authorization using the Q’tron NN model, which is a generalized version of the Hopfield NN model [2]. Intrinsically, a Q’tron NN releases its internal energy monotonically. This allows us to solve a problem by reformulating it as one that minimizes the energy function of a Q’tron NN. Furthermore, there are two operating modes for Q’trons, namely clamp mode and free mode, which allow the model to serve as a question-answering machine and/or an associative memory simply by setting up the operating modes of the Q’trons in the NN.

We described the access schemes of visual cryptography using graytone images. This is completely different from the traditional approaches, which deal with binary images directly. Two main rules, namely the halftone rule and the stacking rule, were adopted to ensure the feasibility of solutions. Each of them was reformulated as an energy term of a Q’tron NN. Initially, the Q’tron NN was constructed to fulfill the (2, 2) access scheme of visual cryptography. Effortlessly, the NN can also be used for another application by simply switching the operating modes of its Q’trons. We demonstrated such an auto-association capability, also called auto-invertibility, by applying the NN to produce shares for authorization, and the results are surprisingly good.

Acknowledgement
This research is supported by Tatung University under grant B91-I02-025.

References
[1] G. Ateniese, C. Blundo, A. De Santis and D. R. Stinson, “Visual Cryptography for General Access Structures,” Information and Computation, 129(2):86-106, 1996.
[2] J. J. Hopfield, “Neural Networks and Physical Systems with Emergent Collective Computational Abilities,” Proc. Nat. Acad. Sci. USA, 79:2554-2558, Apr. 1982.
[3] L. A. MacPherson, Grey Level Visual Cryptography for General Access Structures, Master’s thesis, Mathematics in Computer Science, Waterloo, Ontario, Canada, 2002.
[4] M. Naor and B. Pinkas, “Visual Authentication and Identification,” CRYPTO 1997, pp. 322-336.
[5] M. Naor and A. Shamir, “Visual Cryptography,” Advances in Cryptology, EUROCRYPT ’94, Lecture Notes in Computer Science, 950:1-12, 1995.
[6] N. Paul, D. Evans, A. Rubin and D. Wallach, “Authentication for Remote Voting,” Workshop on Human-Computer Interaction and Security Systems, 6 April 2003.
[7] E. Verheul and H. van Tilborg, “Constructions and Properties of k Out of n Visual Secret Sharing Schemes,” Designs, Codes and Cryptography, 11(2):179-196, 1997.
[8] T. W. Yue and S. C. Chiang, “Quench, Goal-Matching and Converge — The Three-Phase Reasoning of a Q’tron Neural Network,” Proceedings of the IASTED International Conference on Artificial and Computational Intelligence, pp. 54-59, Sep. 2002.
[9] T. W. Yue and S. C. Chiang, “The General Neural-Network Paradigm for Visual Cryptography,” IWANN 2001, LNCS 2048, pp. 196-206, 2001.
[10] T. W. Yue and Z. Z. Lee, “A Goal-Driven Approach for Combinatorial Optimization Using Q’tron Neural Networks,” Proceedings of the IASTED International Conference on Artificial and Computational Intelligence, pp. 60-65, Sep. 2002.