Continuous Authentication and Identification for Mobile Devices: Combining Security and Forensics

Soumik Mondal and Patrick Bours
Norwegian Information Security Laboratory (NISlab), Gjøvik University College, Gjøvik, Norway
{soumik.mondal,patrick.bours}@hig.no

Abstract—In this paper we consider an additional functionality for continuous authentication: the identification of an impostor. We use continuous authentication to protect a mobile device. Once it is detected that the person using the mobile device is not the genuine user, it is important to lock it, but in a closed user group valuable information can also be gained from determining who actually operated the device. We term this new concept continuous identification, and in this paper we show that we can identify impostors with almost 98% accuracy when the security settings are such that an impostor is detected after 15 actions on average. With higher security settings, we can already detect impostors after 4 actions on the mobile device, but in that case the recognition rate of the correct impostor drops to almost 83%.

I. INTRODUCTION

Technological advances in mobile devices make people more dependent on such devices every day. Nowadays a mobile device (i.e. a smart phone or tablet) can be treated as a pocket computing device, and people use it for banking transactions, health monitoring, social networking, etc. These devices therefore contain highly sensitive and private information, and securing them against illegitimate access to such information is a primary concern for the security research community. Access control on mobile devices is generally implemented as a one-time proof of identity (i.e. a password, a defined swipe pattern, a face or a fingerprint) during the initial log-on procedure [5], [6], [13], [16]. The legitimacy of the user is then assumed to remain the same during the full session. Unfortunately, if the device is left unlocked and unattended, any person has access to the same information as the genuine user. This type of access control is referred to as static authentication or static login. On the other hand, we have Continuous Authentication (also called Active Authentication), where the genuineness of a user is continuously monitored based on the biometric signature left on the device. When doubt arises about the genuineness of the user, the system can lock, and the user has to revert to the static authentication access control mechanism to continue working. Continuous authentication is not an alternative to static authentication; it provides an added security measure alongside static login.

978-1-4673-6802-5/15/$31.00 ©2015 IEEE

In our research, we look not only at Continuous Authentication (CA) but also at Continuous Identification (CI), which can be used for forensic evidence. We address two issues. The first is related to CA (is an impostor using the system?) while the second is related to CI (can the impostor be identified once the CA system detects that an impostor uses the system?). To the best of our knowledge, this is the first time that the issue of CI is addressed in research. Furthermore, only little research has been done on CA [7], [8], [11], [15], [20] for mobile devices. CA by analysing the user's swipe behaviour on a mobile device is challenging due to the limited amount of information that is available and the large intra-class variations. As a result, all previous research has in fact been periodic authentication, where the analysis was based on a fixed number of actions or a fixed time period. This limits the system: if the system could detect an impostor before that fixed number of actions is completed, then there is no reason to allow the impostor to perform more actions. Bo et al. [3] have mentioned continuous user identification in their research. They followed traditional authentication and identification processes with continuous biometric data, but such an approach is unable to address the above research questions.

The contributions of this paper are as follows:

• Use a publicly available swipe gesture database [1], which makes this research completely reproducible;

• Introduce two novel identification schemes using pairwise coupling for CI;

• Introduce the (closed user group) concept of CI;

• Analyse the data for optimal performance both from a security perspective and a forensic perspective.

The remainder of this paper is organized as follows. In Section II, we discuss the overall system architecture. Data description, feature extraction and the profile creation process are given in Section III. The description of the Continuous Authentication Module can be found in Section IV, while the Continuous Identification Module is discussed in Section V. In Section VI, we discuss the experimental results of our research. Finally, we conclude this research in Section VII.

II. SYSTEM ARCHITECTURE

2015 IEEE International Workshop on Information Forensics and Security (WIFS)

In this section, we describe our proposed system architecture. The main idea of this research is to address the fundamental question of whether we can establish the identity of an impostor once the system has detected that an impostor is using the system. Figure 1 shows the block diagram of the complete system. Our proposed system is divided into two major components (see the dotted areas in Figure 1): the Continuous Authentication Module (CAM) and the Continuous Identification Module (CIM). We describe these two modules in more detail in Sections IV and V respectively.

In Figure 1, we see that after Static Login (i.e. password, fingerprint, swipe pattern, etc.) the user is accepted as genuine and obtains permission to use the device. During the use of the device, each and every action performed by the user is processed by the CAM, and this module returns the current system Trust in the genuineness of the user. The Trust value is used in the decision module, where it is compared with a predefined threshold (T_lockout) to determine whether the user can continue to use the device or, if the trust is too low, whether the device will be locked. After detecting that the present user is an impostor (i.e. Trust < T_lockout), all the activity performed by the current user is used by the CIM to try to identify the impostor from a known user database, and the system returns to the Static Login part.
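The loop just described can be sketched as follows. Here `classify_action`, `update_trust`, and `identify_adversary` are hypothetical placeholders standing in for the CAM classifier, the trust model of Section IV-B, and the CIM; the fixed threshold value matches the T_lockout = 90 setting used later in the paper.

```python
# Sketch of the CAM/CIM decision loop of Section II.

T_LOCKOUT = 90  # lockout threshold (fixed-threshold setting of the paper)

def continuous_session(actions, classify_action, update_trust, identify_adversary):
    """Process actions until the trust drops below the lockout threshold.

    Returns (n_actions_processed, adversary_id); adversary_id is None if
    the session ended without a lockout.
    """
    trust = 100.0          # trust is reset to 100 after static login
    performed = []         # actions seen so far (fed to the CIM on lockout)
    for action in actions:
        performed.append(action)
        score = classify_action(action)      # CAM comparison module
        trust = update_trust(trust, score)   # dynamic trust model
        if trust < T_LOCKOUT:                # decision module: lock out
            return len(performed), identify_adversary(performed)  # CIM
    return len(performed), None
```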

Fig. 1. Block diagram of the proposed system. (Flow: Static Login → Continuous Authentication Module → decision Trust > T_lockout: Yes → Continue; No → Lock out → Continuous Identification Module → Adversary ID → back to Static Login.)

III. DATA DESCRIPTION AND PROFILE CREATION

In our research, we have used a publicly available swipe gesture dataset collected from mobile devices [1]. To the best of our knowledge, this dataset contains the largest number of users among the publicly available datasets. The description of the dataset, the profile creation, and the feature selection technique are given below.

A. Data Description

During the data collection process a client-server application was deployed on 8 different Android mobile devices (screen resolutions ranging from 320x480 to 1080x1205) and touch gesture data was collected from 71 volunteers (56 male and 15 female, with ages from 19 to 47) [1]. The data was collected in four different sessions with two different tasks: one task was reading an article and answering some questions about it, and the other was surfing an image gallery. The dataset is divided into two sets, and in this research only Set-1 is used in the analysis. Every swipe action S is encoded by a sequence of vectors s_k = (x_k, y_k, t_k, o_k^pl, p_k, a_k), k ∈ {1, 2, ..., N}, where x_k, y_k are the coordinates of the swipe position, t_k is the time stamp, o_k^pl is the orientation of the phone (i.e. portrait or landscape), p_k is the finger pressure on the screen, and a_k is the area covered by the finger during the swipe. In the analysis, we divided the sequence of consecutive tiny movements into actions (i.e. strokes). From the encoded raw data, 15 features were calculated for each swipe action s_k. The details of this feature extraction are described by Antal et al. [1].

B. Profile Creation

In this section we describe the profile creation process for the CAM and the CIM. Let U_i denote the data of user i; this data is split into a part for training (denoted by M_i) and a part for testing (T_i). At most 50% of the data of a user is used for training, i.e. |M_i| ≤ |T_i|. The CAM training model for user i is built with the training data M_i of user i as well as training data taken from the other users, where the total amount of impostor training data is approximately the same size as M_i. For classification in the CIM we used pairwise coupling [9]. For each combination of genuine user i and impostor user j we created a training set CIM_i^j. This classifier was trained with the training data M_i of user i and the training data M_j of user j, where, to avoid bias, equal amounts of training data of both users are taken. Given that we have n = 71 users, we have n × (n − 1) = 4970 different classifier models.

C. Feature Selection

Before building the classifier models we first apply a feature selection technique, for both the CAM and the CIM. Our feature selection process is based on maximization of the separation (in the sense of the Kolmogorov–Smirnov test [17]) between two multivariate cumulative distributions. We also tested the feature selection technique proposed by Ververidis et al. [18], but decided not to use it due to the lower learning accuracy of the resulting classifier models. Let F = {1, 2, 3, ..., m} be the total feature set, where m is the number of feature attributes. A feature subset A ⊆ F is selected by maximizing FS, using a Genetic Algorithm as the feature subset search technique, where

FS = sup | MVCDF(x_i^A) − MVCDF(x_j^A) |,

MVCDF() is the Multivariate Cumulative Distribution Function, x_i^A is the feature subset data of the i-th user, and x_j^A is the feature subset data of the impostor user(s). Figure 2 shows the Empirical Cumulative Distribution Function plots of the features selected for the classifier of users 31 and 35 (CIM^31_35).
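As an illustration of this criterion, the following sketch (assuming NumPy arrays of per-user feature vectors; the Genetic Algorithm search over subsets is omitted) evaluates the separation FS for one candidate subset via the empirical multivariate CDFs:

```python
import numpy as np

def mvcdf(data, points):
    """Empirical multivariate CDF of `data` evaluated at each row of
    `points`: the fraction of samples <= the point in every coordinate."""
    return np.array([(data <= p).all(axis=1).mean() for p in points])

def fs_separation(genuine, impostor, subset):
    """KS-style separation FS = sup |MVCDF_i - MVCDF_j|, approximated
    over the pooled sample points, restricted to a feature subset."""
    g, m = genuine[:, subset], impostor[:, subset]
    pts = np.vstack([g, m])
    return np.abs(mvcdf(g, pts) - mvcdf(m, pts)).max()
```

A GA would call `fs_separation` as its fitness function, searching over candidate subsets.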

IV. CONTINUOUS AUTHENTICATION MODULE

Figure 3 shows the complete block diagram of the CAM. In this section we discuss its major components in more detail. The feature extraction and feature selection techniques were already discussed in Section III.

A. Comparison Module

We found that two regression models performed best for the CAM, given the nature of the dataset. We applied an Artificial Neural Network (ANN) and a Counter Propagation Artificial Neural Network (CPANN) [12] in a Multi Classifier Fusion (MCF) architecture [10]. We build two classifier models for each CAM_i (see Section III-B). We also tested a prediction model (a Support Vector Machine (SVM)), but due to its lower learning accuracy we decided not to use it. The score vector we use is (f_1, f_2) = (Score_ann, Score_cpann). From these two classifier scores we calculate the score that will be used in the Trust Model (see Section IV-B) in the following way: sc = W_ca × f_1 + (1 − W_ca) × f_2, where W_ca is the weight of the weighted fusion technique.

B. Trust Model

Human behaviour is generated from motor-skills [19]. No person can behave in exactly the same manner all the time, and for a biometric system this means that a user cannot always behave exactly as defined in his/her template [2]. There will sometimes be deviations in the genuine user's behaviour, which motivates the use of a Trust Model for continuous authentication [4]. The fundamental concept of the Trust Model is that the system determines its trust in the genuineness of the current user based on the similarity to, or deviation from, the stored template of the genuine user. We analyse every single action performed by the user. If an action is performed in a manner similar to that described in the stored template, then the system's trust in the genuineness of the user increases; this is called a Reward. On the other hand, if the manner in which the action is performed deviates from what is expected according to the template, then the system's trust decreases, which is called a Penalty [4]. If the trust drops below a pre-defined threshold T_lockout, the system locks itself and requires static authentication of the user to continue working. In our research, we implement the Dynamic Trust Model [14], where the penalty/reward is calculated from the resultant classifier score sc according to:

ΔT(sc_i) = min{ −D + (D × (1 + 1/C)) / (1 + exp(−(sc_i − A)/B)), C }    (1)

In the above equation the parameter A represents the threshold value that decides between a penalty and a reward: if the classification score of the current action (i.e. sc_i) is above the threshold A, then a reward is given, otherwise a penalty. Furthermore, the value B represents the width of the sigmoid function, and C and D are the upper limits on the amount of reward and penalty, respectively. If the trust value after i actions is denoted by Trust_i, then we have the following relation between the trust Trust_{i−1} after i − 1 actions and the trust Trust_i after i actions, where the i-th action had classification score sc_i:

Trust_i = min{ max{ Trust_{i−1} + ΔT(sc_i), 0 }, 100 }    (2)

Fig. 2. Empirical Cumulative Distribution Function plots of important features (Features 4, 7, 11, 12, 14 and 15) for users 31 and 35, marked in red and blue.

Fig. 3. Continuous Authentication Module. (Training phase: Swipe Gesture → Feature Extraction → Data Separation for Training → Feature Selection → Build Classifier Models (ANN and CPANN) → Store Profile. Testing phase: Swipe Gesture → Feature Extraction → Feature Selection → Comparison Module (ANN and CPANN) → Trust Model → System Trust.)
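A minimal sketch of equations (1) and (2). The parameter values below are illustrative assumptions; the paper optimizes A, B, C and D by linear search.

```python
import math

# Dynamic Trust Model of Section IV-B (equations (1) and (2)).
# A: reward/penalty threshold, B: sigmoid width,
# C: reward cap, D: penalty cap. Values are illustrative only.

def delta_t(sc, A=0.5, B=0.05, C=1.0, D=2.0):
    """Penalty/reward for one action with classifier score sc (eq. 1)."""
    return min(-D + (D * (1 + 1 / C)) / (1 + math.exp(-(sc - A) / B)), C)

def update_trust(trust, sc, **params):
    """Trust after one action, clamped to [0, 100] (eq. 2)."""
    return min(max(trust + delta_t(sc, **params), 0.0), 100.0)
```

For scores far above A the reward saturates at C; for scores far below A the penalty saturates at −D; at sc = A the change is zero.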

V. CONTINUOUS IDENTIFICATION MODULE

Figure 4 shows the complete block diagram of the CIM. In this section we discuss its major components in more detail. The feature extraction and feature selection techniques were discussed in Section III.

Fig. 4. Continuous Identification Module. (Training phase: Swipe Gesture → Feature Extraction → Pairwise Training Data Preparation → Feature Selection → Build Classifier Models (SVM and CPANN) → Store Profile. Testing phase: Swipe Gestures before Lockout → Feature Extraction → Feature Selection → Comparison Module (SVM and CPANN) → Decision → Adversary ID.)
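The pairwise training-data preparation used by the CIM (Section III-B) can be sketched as follows. The `train_data` mapping and the truncation-based balancing are illustrative assumptions; what matters is one balanced set per ordered (genuine, impostor) pair, giving n × (n − 1) models.

```python
from itertools import permutations

def pairwise_training_sets(train_data):
    """Build one balanced (genuine, impostor) training set per ordered
    user pair, as in Section III-B. `train_data` maps user id -> list of
    training samples M_i."""
    sets = {}
    for i, j in permutations(train_data, 2):
        # equal amounts of data from both users, to avoid bias
        m = min(len(train_data[i]), len(train_data[j]))
        sets[(i, j)] = (train_data[i][:m], train_data[j][:m])
    return sets
```

With 71 users this yields the 71 × 70 = 4970 classifier models mentioned in Section III-B.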

A. Comparison and Decision

We found that one regression model and one prediction model performed best for the CIM. We applied pairwise coupling [9], using CPANN and SVM in a MCF architecture. We build two classifier models for each CIM_i^j (see Section III-B).

1) Proposed Scheme 1 (PS1): During comparison we first randomly choose k classifier models (both for SVM and for CPANN) for the i-th user and calculate the score vector for each of the m test actions (where m is the number of actions before lockout) for each of these k classifier models. This gives k × m score values for SVM and the same amount for CPANN. Let γ_j, for j = 1, 2, ..., k × m, denote the score values for SVM and λ_j, for j = 1, 2, ..., k × m, the score values for CPANN. Then let sc_j = W_ci × γ_j + (1 − W_ci) × λ_j denote the weighted score vector, where W_ci is the weight of the classifier fusion technique. Now we define the average score for user i as

SC_i = (1 / (k × m)) × Σ_{j=1}^{k×m} sc_j.

We repeat this procedure for all users i and select the user with the largest average score SC_i as the identified impostor. Note that in this research the CAM and the CIM work independently of each other (except that the data from the CAM is also used in the CIM).

We have tested the above procedure for various values of k = 5, 10, ..., 25 and numbers of actions m = 2, 4, ..., 20. Figure 5 shows the performance accuracies. We clearly see that increasing the value of k also increases the recognition accuracy. We also found that there is a large difference in accuracy between Rank-1 and Rank-8 for given k and m values, which motivated us to formulate an additional identification scheme, described in Section V-A2.

Fig. 5. Results obtained from PS1. (Panels: Rank-1, Rank-2, Rank-4 and Rank-8 accuracy (%) versus number of actions, for k = 5, 10, 15, 20, 25.)

2) Proposed Scheme 2 (PS2): Let U = [u_1, u_2, ..., u_8] be the set of Rank-8 users after applying PS1, where U ⊂ {1, 2, ..., 71}. Now we repeat PS1 on the user set U with a fixed k = 7, i.e. we consider all impostor users in the reduced set of 8 users. Figure 6 shows the results obtained after applying PS2 on our dataset for different numbers of actions (i.e. m = 2, 4, ..., 20) and different k values (i.e. k = 5, 10, ..., 25 for PS1). We can see the improvement in the results for Rank-1, Rank-2, and Rank-4. The Rank-8 classification accuracy is the same for PS1 and PS2 (because PS2 is derived from the same Rank-8 users as PS1).

Fig. 6. Results obtained from PS2. (Same panel layout as Figure 5.)

VI. RESULT ANALYSIS

In this section, we analyse the results we obtained from our research.

A. Performance Measure

We found that the current research on CA reports results in terms of Equal Error Rate (EER), or in terms of False Match Rate (FMR) and False Non-Match Rate (FNMR), over either the whole test set or over chunks of m actions. This means that an impostor can perform at least m actions before being detected as an impostor, even if the system achieves 0% EER. This is then in fact no longer CA, but at best Periodic Authentication (PA). In our research, we focus on actual CA that reacts to every single action of a user. Therefore, we use the Average Number of Genuine Actions (ANGA) and the Average Number of Impostor Actions (ANIA) as performance evaluation metrics [14]. A detailed explanation of the performed actions is given in Section III.

Fig. 7. Continuous Authentication and Identification experiment with genuine user 22 and impostor user 41. (System trust versus event number over 319 actions, with the lockout threshold marked in red and the adversary ID determined at each lockout shown in green or red.)
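The two identification schemes of Section V-A can be sketched as follows. `svm_score` and `cpann_score` are placeholders for the trained pairwise classifiers CIM_i^j, and the fusion weight default is an assumption (the paper optimizes W_ci by linear search).

```python
import random

def ps1_scores(users, test_actions, svm_score, cpann_score, k, w_ci=0.7):
    """PS1: average fused score SC_i per candidate user i, over k randomly
    chosen pairwise models and the m test actions seen before lockout."""
    sc = {}
    for i in users:
        others = [j for j in users if j != i]
        models = random.sample(others, min(k, len(others)))
        vals = [w_ci * svm_score(i, j, a) + (1 - w_ci) * cpann_score(i, j, a)
                for j in models for a in test_actions]
        sc[i] = sum(vals) / len(vals)
    return sc

def identify(users, test_actions, svm_score, cpann_score, k):
    # PS1: rank all users by SC_i and keep the Rank-8 shortlist.
    sc = ps1_scores(users, test_actions, svm_score, cpann_score, k)
    top8 = sorted(sc, key=sc.get, reverse=True)[:8]
    # PS2: re-run PS1 on the shortlist with k = 7 (all remaining impostors).
    sc2 = ps1_scores(top8, test_actions, svm_score, cpann_score, k=7)
    return max(sc2, key=sc2.get)
```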

In Figure 7, we see how the system trust changes when we compare the model of a genuine user against the test data of an impostor user. In this example the trust drops below the lockout threshold (T_lockout = 90, marked with a red line) 21 times within 319 user actions. The value of ANIA in this example is ⌊319/21⌉ = 15. We can calculate ANGA in a similar manner when the genuine user is locked out on his own test data. The goal is obviously to have ANGA as high as possible (to ensure that the system is user friendly, and ideally a genuine user is never locked out), while at the same time the ANIA value must be as low as possible (to ensure that little harm can be done by the impostor). We also want all impostors to in fact be detected as impostors. In our analysis, whenever a user is locked out, we reset the trust value to 100 to simulate a new session starting (i.e. after Static Login). We set an upper limit of 100 on the system trust (see Equation 2) to prevent an impostor from taking advantage of a high system trust built up before taking over the device.

Every time a user is locked out by the CAM, the adversary ID is determined by the CIM. In Figure 7, we display these adversary IDs in green (a successfully identified adversary) or red (identification of a person other than the actual adversary). When the system is unsuccessful in identifying the correct adversary ID, we display the Rank-1 identity along with the rank of the correct impostor. In Figure 7 one such case is present, where the system identifies the impostor as user 56, while the correct adversary ID 41 is found at Rank-2 (R-2, marked in blue). During this experiment, the CIM identified the correct adversary in 20 out of the 21 lockouts, so the recognition accuracy for this example is 20/21, or 95.2%. The complete results are given below.

One of the challenges in this research is that the number of data samples used by the CIM is variable: it equals the number of actions that the current user could perform before he/she was locked out by the CAM.

B. Results

We performed our experiment with one-hold-out testing. Therefore, we have in total 71 × 71 = 5041 tests, of which 71 are genuine tests and the remainder are impostor tests. We report the results with a user-specific lockout threshold (T_us) [14], where the lockout threshold satisfies 50 ≤ T_us < min(Trust_genuine), and with a fixed lockout threshold T_lockout = 90 (denoted T_90). The average number of times that any of the 70 impostors is detected for a given genuine user is 1039 in case T_90 is used and 4778 when the user-specific threshold T_us is used. These numbers represent the number of tests done by the CIM for each genuine user. In our research, all algorithmic parameters are optimized using linear search.

Interpretation of the tables: Based on the CA performance, a genuine user can be categorized into 4 possible categories:
We report the results with the user specific lockout threshold (Tus ) [14], where the threshold for lockout will satisfy 50 ≤ T rus < min(T rustgenuine ) and with a fixed lockout threshold, where Tlockout = 90. The average number of times that any of the 70 impostors is detected by for a given genuine user is 1039 in case Tlockout = 90 and 4778 when the user specific threshold T rus is used. These numbers represent the number of tests done by CIM for each genuine user. In our research, all the algorithmic parameters are optimized using linear search. Interpretation of the tables: Based on the CA performance a genuine user can be categorize into 4 possible categories: •

• (+ / +): In this category, the genuine user is never locked out of the system (i.e. ANGA will be ∞), and all 70 impostors are detected as impostors. This can be considered the best category.



• (+ / -): In this category, ANGA will be ∞, but some impostors are not detected by the system.



• (- / +): In this category, the genuine user is locked out by the system, but all impostors are detected.



• (- / -): In this category, the genuine user is locked out by the system and some of the impostors are not detected. This can be considered the worst category.

In Table I, the column # Users shows how many users fall within each of the 4 categories based on the CA performance (i.e. the values sum up to 71). The column ANGA indicates the Average Number of Genuine Actions in case genuine users are locked out by the system; if the genuine users are never locked out, then ANGA is ∞. The column ANIA displays the Average Number of Impostor Actions and is based on all impostors that are detected. The actions of the impostors that are not detected are not used in this calculation; instead, the rate of undetected impostors is given in the column Imp. ND (%). This number should be seen in relation to the number of users in that particular category. For example, for the T_us lockout threshold and the '+ / -' category, # Users equals 3 (i.e. there are 3 × 70 = 210 impostor test sets) and only 4 impostors are not detected by the system, so the Imp. ND rate is 1.9%.

In the CI Rate (%) section of Table I, the identification accuracies obtained from the CIM for the different analysis techniques are shown. Columns PS1+WtF and PS2+WtF show the results obtained from Proposed Scheme 1 and Proposed Scheme 2 when using the weighted classifier score fusion technique, and columns PS1+AvgF and PS2+AvgF show the results when using the average classifier score fusion technique (see Section V-A, with W_ci = 0.5). We can clearly see that Proposed Scheme 2 with the weighted classifier score fusion technique performs better than the other techniques. Due to the lower value of ANIA, the continuous identification accuracy is lower for the '+ / +' category than for the '+ / -' category, but this observation does not hold when we use average classifier score fusion. The summary line in Table I shows the weighted average for the full system.
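The bookkeeping described above can be sketched as follows; the rounding of ANIA to the nearest integer mirrors the ⌊·⌉ notation used with the Figure 7 example.

```python
# Performance bookkeeping of Section VI: ANIA from the number of
# lockouts in an impostor test, and the undetected-impostor rate
# for a category of genuine users.

def ania(total_actions, lockouts):
    """Average Number of Impostor Actions: actions per lockout, rounded
    to the nearest integer."""
    return round(total_actions / lockouts)

def undetected_rate(undetected, n_users_in_category, impostors_per_user=70):
    """Imp. ND (%): undetected impostors over all impostor test sets."""
    return 100.0 * undetected / (n_users_in_category * impostors_per_user)
```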
For example, for T_90 the total number of undetected impostors is 20, hence the impostor undetected rate is 20/(71 × 70), or 0.4%.

C. Comparison with Previous Research

Due to the novelty of this research, we did not find any previous work that is directly related to it. Therefore, we are unable to compare our continuous identification results with previous results, but we can compare our identification technique with previous research done on the same dataset. Figure 8 shows the identification accuracy compared with the previous research by Antal et al. [1] for different numbers of actions. We see that the proposed method outperforms the existing research. More specifically, there is a large difference in accuracy for small numbers of actions, which benefits our approach when the impostor is locked out after only a few actions.

VII. CONCLUSION

In this research we found that, for a closed user group, we can identify the impostor with high accuracy once our CA system detects that the current user is not the genuine user. The identification of the impostor is a valuable addition to a CA system, because not only is the data of the genuine user protected, but with high probability also the impostor is

TABLE I. RESULTS OBTAINED FROM OUR ANALYSIS.

Tlockout | Category | # Users | ANGA | ANIA   | Imp. ND (%) | PS1+WtF    | PS2+WtF    | PS1+AvgF   | PS2+AvgF
T_us     | + / +    | 68      | ∞    | 4 ± 2  | –           | 69 ± 5.2   | 82.3 ± 5.4 | 47.8 ± 1.4 | 70.7 ± 3.4
T_us     | + / -    | 3       | ∞    | 14 ± 3 | 1.9         | 81.8 ± 1.7 | 93 ± 2.4   | 40.4 ± 2.9 | 62.9 ± 3.8
T_us     | Summary  | 71      | ∞    | 4.4    | 0.08        | 69.5       | 82.7       | 47.5       | 70.4
T_90     | + / +    | 63      | ∞    | 14 ± 3 | –           | 87.3 ± 1.4 | 97.8 ± 0.6 | 53 ± 4.4   | 80.5 ± 3.3
T_90     | + / -    | 8       | ∞    | 25 ± 7 | 3.6         | 90.8 ± 2.4 | 98.5 ± 0.6 | 49.4 ± 1.7 | 76.9 ± 1.9
T_90     | Summary  | 71      | ∞    | 15.2   | 0.4         | 87.7       | 97.9       | 52.6       | 80.1

(ANGA, ANIA and Imp. ND form the CA section; the four rightmost columns form the CI Rate (%) section. No users fall in the '- / +' or '- / -' categories for either threshold.)

Fig. 8. Comparison with previous research: identification accuracy (%) versus number of actions for PS2 with k = 15 and for Antal et al. [1] with SVM.

detected. This opens up new research directions; one direction could be to look not only at closed user groups, but also at "half-open" user groups, where not all users are known to the system. In that case the CIM could determine whether a detected impostor is one of the known users or an unknown user. From the results we can see that we can shift the main focus between better security (lower ANIA values, but also a lower CI rate) and better forensic properties (higher ANIA, but also a higher identification rate). In this research we still focussed on protecting the system, i.e. locking the system as fast as possible, with the downside that the identification accuracy is not as high as possible. In future research we will consider not locking out the impostor immediately when he/she is detected, but allowing the impostor a few more actions so as to identify him/her with a higher probability. We will also validate our results on different databases, both for mobile devices and for computers (using keystroke and mouse dynamics information).

REFERENCES

[1] M. Antal, Z. Bokor, and L. Z. Szabó. Information revealed from scrolling interactions on mobile devices. Pattern Recognition Letters, 56:7–13, 2015.
[2] R. M. Bergner. What is behavior? And so what? New Ideas in Psychology, 29(2):147–155, 2011.
[3] C. Bo, L. Zhang, T. Jung, J. Han, X.-Y. Li, and Y. Wang. Continuous user identification via touch and movement behavioral biometrics. In 2014 IEEE Int. Performance Computing and Communications Conference, pages 1–8, 2014.
[4] P. Bours. Continuous keystroke dynamics: A different perspective towards biometric evaluation. Information Security Technical Report, 17:36–43, 2012.


[5] Z. Cai, C. Shen, M. Wang, Y. Song, and J. Wang. Mobile authentication through touch-behavior features. In Biometric Recognition, volume 8232 of Lecture Notes in Computer Science, pages 386–393. Springer, 2013.
[6] A. De Luca, A. Hang, F. Brudy, C. Lindner, and H. Hussmann. Touch me once and I know it's you!: Implicit authentication based on touch screen patterns. In SIGCHI Conf. on Human Factors in Computing Systems, CHI '12, pages 987–996. ACM, 2012.
[7] T. Feng, J. Yang, Z. Yan, E. M. Tapia, and W. Shi. TIPS: Context-aware implicit user identification using touch screen in uncontrolled environments. In 15th Workshop on Mobile Computing Systems and Applications, pages 9:1–9:6. ACM, 2014.
[8] M. Frank, R. Biedert, E. Ma, I. Martinovic, and D. Song. Touchalytics: On the applicability of touchscreen input as a behavioral biometric for continuous authentication. IEEE Trans. on Information Forensics and Security, 8(1):136–148, 2013.
[9] T. Hastie and R. Tibshirani. Classification by pairwise coupling. The Annals of Statistics, 26(2):451–471, 1998.
[10] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas. On combining classifiers. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(3):226–239, 1998.
[11] L. Li, X. Zhao, and G. Xue. Unobservable re-authentication for smartphones. In 20th Annual Network & Distributed System Security Symposium, pages 1–16. The Internet Society, 2013.
[12] W. Melssen, R. Wehrens, and L. Buydens. Supervised Kohonen networks for classification problems. Chemometrics and Intelligent Laboratory Systems, 83(2):99–113, 2006.
[13] Y. Meng, D. S. Wong, and L.-F. Kwok. Design of touch dynamics based user authentication with an adaptive mechanism on mobile phones. In 29th Annual ACM Symposium on Applied Computing, SAC '14, pages 1680–1687. ACM, 2014.
[14] S. Mondal and P. Bours. A computational approach to the continuous authentication biometric system. Information Sciences, 304:28–53, 2015.
[15] A. Roy, T. Halevi, and N. Memon. An HMM-based behavior modeling approach for continuous mobile authentication. In 2014 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pages 3789–3793, 2014.
[16] A. Serwadda, V. Phoha, and Z. Wang. Which verifiers work?: A benchmark evaluation of touch-based authentication algorithms. In 2013 IEEE 6th Int. Conf. on Biometrics: Theory, Applications and Systems, pages 1–8, 2013.
[17] N. Smirnov. Approximate distribution laws for random variables, constructed from empirical data. Uspekhi Matem. Nauk, 10:179–206, 1944.
[18] D. Ververidis and C. Kotropoulos. Information loss of the Mahalanobis distance in high dimensions: Application to feature selection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 31(12):2275–2281, 2009.
[19] R. V. Yampolskiy and V. Govindaraju. Behavioural biometrics: a survey and classification. International Journal of Biometrics, 1(1):81–113, 2008.
[20] X. Zhao, T. Feng, and W. Shi. Continuous mobile authentication using a novel graphic touch gesture feature. In 2013 IEEE Int. Conf. on Biometrics: Theory, Applications and Systems, pages 1–6, 2013.
