Tree Parity Machine-based One-Time Password Authentication Schemes
Tieming Chen and Samuel H. Huang
Abstract—One-Time Password (OTP) is widely regarded as the strongest of the password-based authentication schemes. Consumer devices such as smart cards now implement OTP-based two-factor authentication for secure access control. Such solutions are economically sound without the support of timestamp mechanisms; the key challenge is therefore synchronizing the internal OTP parameters, such as the moving factor or counter, between client and server. Recently, a novel phenomenon has shown that two interacting neural networks, called Tree Parity Machines (TPM), with common inputs can synchronize their weight vectors through a finite number of steps of output-based mutual learning. The improved secure TPM can thus be utilized to synchronize parameters for OTP schemes. In this paper, the TPM mutual learning scheme is introduced, and two novel TPM-based OTP solutions are proposed. One is a full implementation model including initialization and rekeying, while the other is lightweight and efficient, suitable for resource-constrained embedded environments. The security and performance of the proposed protocols are discussed at the end.
I. INTRODUCTION
USING static passwords for authentication has quite a few security drawbacks: passwords can be guessed, forgotten, stolen, or eavesdropped. A better and more secure way of authenticating is so-called two-factor or strong authentication based on one-time passwords. Instead of authenticating with a simple password, each user carries a device, or token, that generates passwords valid only one time. One-Time Password (OTP) is the most widely used two-factor authentication scheme [1, 2, 3]. It is preferred over stronger authentication models such as PKI- or biometrics-enabled mechanisms because the air-gap device employed in OTP does not require the installation of any desktop software on the client machine. It allows authentication to be implemented easily, anywhere and anytime, with tokens that can be embedded into personal objects such as bank cards [4].
This work was partially supported by the National Nature Science Foundation of China under grants No. 60673080 and No. 60773115, the National 863 High-Tech Project of China under grant No. 2006AA01Z235, and the Zhejiang Province Nature Science Foundation under grant No. Y106290.
Tieming Chen is with the College of Software Engineering, Zhejiang University of Technology, Hangzhou, 310032, China; he is now a doctoral candidate at the College of Computer Science, Beihang University, Beijing, 100083, China (phone: 86-571-8529-0034; e-mail: [email protected]).
Samuel H. Huang is with the University of Cincinnati, Cincinnati, OH 45521 USA. He is now the director of the Intelligent System Laboratory (e-mail: [email protected]).
Recently, a type of smart device called the USBKey has been used for secure user identification and authentication. USBKey-based OTP authentication depends on a two-factor implementation built in hardware [5, 6]. In this case, a counter C, known as the moving factor, is kept both on the USB token and on the authentication server, and must be synchronized before each round of the OTP protocol. The shared secret, denoted K, is first initialized by the authentication server and distributed to the corresponding client token. Let H denote a generic hash algorithm; USBKey-based OTP authentication can then be described simply as follows. A one-time password is first calculated as pwd1 = H(K, C) by the USB token and sent to the server, after which the counter C is incremented by one. Meanwhile, the authentication server calculates the one-time password in the same way and compares it with the one received from the client. If the passwords match, authentication succeeds and the server counter C is likewise incremented, keeping it synchronized with the client's. However, resynchronizing the counter every time the client fails to connect to the server is awkward: the client counter may advance while the server is unaware of it. In other cases, updating the shared secret K may also be compromised. Although existing techniques such as timestamps can improve the stability of the OTP mechanism, they are expensive and hard to implement. How can we accomplish this easily and at low cost? Resorting to schemes based on non-classical cryptography could be a good solution. Recent research on Brain-Computer Interfaces introduces an innovative topic called Pass-Thoughts, which explores performing secure authentication using human minds [7]. It could be a significant breakthrough if conversation between human beings and machines can be achieved using neural network-based artificial intelligence.
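The counter-based scheme above can be sketched as follows. The HMAC-SHA-1 construction and 6-digit dynamic truncation are borrowed from RFC 4226 as one concrete choice of H; the paper itself only assumes a generic hash H(K, C), so these details are illustrative.

```python
import hashlib
import hmac

def one_time_password(key: bytes, counter: int, digits: int = 6) -> str:
    """Derive pwd = H(K, C), truncated to a short decimal code.

    The counter is hashed as an 8-byte big-endian value under HMAC-SHA-1
    (an RFC 4226-style choice, not mandated by the scheme described here).
    """
    digest = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    # Dynamic truncation: 31 bits starting at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

class Token:
    """Client token: shares K with the server and keeps its own counter C."""
    def __init__(self, key: bytes):
        self.key, self.counter = key, 0

    def generate(self) -> str:
        pwd = one_time_password(self.key, self.counter)
        self.counter += 1          # counter advances after every generation
        return pwd

class Server:
    """Authentication server: recomputes H(K, C) and compares."""
    def __init__(self, key: bytes):
        self.key, self.counter = key, 0

    def verify(self, pwd: str) -> bool:
        ok = hmac.compare_digest(pwd, one_time_password(self.key, self.counter))
        if ok:
            self.counter += 1      # stay in step with the token only on success
        return ok
```

With this structure the desynchronization problem is easy to reproduce: if the token generates a password that never reaches the server, the token's counter advances alone, and every later password is rejected until the counters are realigned.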
However, it is far from practical at present. Fortunately, the latest research on a novel neural network model called the Tree Parity Machine [8, 9, 10] has shown that the weight vectors of two neural networks with common inputs can be trained to synchronize through output-bit-based mutual learning. This is a promising direction for non-classical cryptographic applications, especially key exchange. We therefore believe that novel and efficient OTP authentication schemes can be designed based on the weight synchronization property of the TPM. The rest of the paper is organized as follows. Section 2
978-1-4244-1821-3/08/$25.00 © 2008 IEEE
introduces the Tree Parity Machine and its applications in cryptography. Two TPM-based OTP protocols are proposed in Section 3. The security and performance of our schemes are analyzed in Section 4. Finally, a conclusion is drawn.
II. TREE PARITY MACHINE FOR CRYPTOGRAPHY
A. Mutual Learning Neural Network
Fig. 1 illustrates a monolayer perceptron with a common N-dimensional input vector X. X follows a Gaussian distribution with values located between 0 and 1. W is the weight space, identified by a normalized N-dimensional vector. σ is the output bit of the neural network, which takes only the two values +1 and -1. The conventional perceptron learning process begins from many X/σ sample pairs and updates the weight vector W iteratively; finally, the perceptron is able to forecast the correct output for a given input sample.
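In code, the output bit of such a perceptron is just the sign of the inner product of the weight and input vectors. The tie-breaking convention at w · x = 0 is our own assumption; the text does not specify one.

```python
def perceptron_output(w, x):
    """Output bit of a monolayer perceptron: sigma = sign(w . x) in {+1, -1}.

    w and x are length-N sequences of numbers; the tie w . x == 0 is mapped
    to +1, which is one common convention (an assumption, not from the paper).
    """
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1
```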
Fig.1 Monolayer Perceptron
Fig.2 Mutual Learning Model
Now let us consider the mutual learning of two neural networks based on the exchange of output bits, as illustrated in Fig. 2. The two neural networks receive the common Gaussian input X^A = X^B, whose values change randomly but are kept equal at every learning step. W^A and W^B denote the respective weight vectors of the two networks, and are updated only if the two output bits are equal, according to the following updating rules.
members in the input vector X with binary values +1 and -1. (3) Modify the weight updating rule to achieve parallel synchronization as follows:
w^A(t+1) = w^A(t) - x·σ^B
w^B(t+1) = w^B(t) - x·σ^A
Here, weight updating takes place only under the condition σ^A = σ^B, and each updated weight value must satisfy the boundary limitation: w = L if w > L, and w = -L if w < -L.
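The updating rule above, together with the ±L boundary condition, can be sketched as a minimal simulation. The network size n = 100, bound L = 3, binary ±1 inputs, and random integer initial weights are illustrative assumptions; the paper does not fix these values. The intuition is that whenever the output bits agree, both parties move each weight component by the same amount, so the per-component differences are preserved until the clipping at ±L gradually absorbs them and the two weight vectors become identical.

```python
import random

def mutual_learning(n=100, weight_bound=3, max_steps=100_000, seed=0):
    """Mutual learning of two perceptrons under the updating rule above.

    Weights are integers clipped to [-L, L]; each step both networks see a
    common random +/-1 input and update only when their output bits agree.
    Returns the two weight vectors and the step at which they synchronized.
    (All numeric parameters are illustrative choices, not from the paper.)
    """
    rng = random.Random(seed)
    L = weight_bound
    wA = [rng.randint(-L, L) for _ in range(n)]
    wB = [rng.randint(-L, L) for _ in range(n)]

    def sigma(w, x):
        # Output bit: sign of the inner product, with the tie at 0 sent to +1.
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

    def clip(v):
        # Boundary limitation: w = L if w > L, w = -L if w < -L.
        return max(-L, min(L, v))

    for step in range(1, max_steps + 1):
        x = [rng.choice((-1, 1)) for _ in range(n)]   # common public input
        sA, sB = sigma(wA, x), sigma(wB, x)
        if sA == sB:                                  # update only on agreement
            wA = [clip(w - xi * sB) for w, xi in zip(wA, x)]
            wB = [clip(w - xi * sA) for w, xi in zip(wB, x)]
        if wA == wB:
            return wA, wB, step
    return wA, wB, max_steps
```

Once the vectors coincide they stay identical, since both parties then apply exactly the same clipped update at every subsequent step; the synchronized weights can serve as shared secret material.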