computers & security 63 (2016) 85–116
Toward the design of adaptive selection strategies for multi-factor authentication

Dipankar Dasgupta a, Arunava Roy b,*, Abhijit Nag a

a Center for Information Assurance, Department of Computer Science, The University of Memphis, TN 38152, USA
b Department of Industrial and Systems Engineering, National University of Singapore
A R T I C L E   I N F O

Article history:
Received 11 December 2015
Received in revised form 5 September 2016
Accepted 9 September 2016
Available online 20 September 2016

Keywords:
Multi-factor authentication
Adaptive selection
Biometrics
Security
Protection

A B S T R A C T

Authentication is the fundamental safeguard against any illegitimate access to a computing device and other sensitive online applications. Because of recent security threats, authentication through a single factor is not reliable to provide adequate protection of these devices and applications. Hence, to facilitate continuous protection of computing devices and other critical online services from unauthorized access, multi-factor authentication can provide a viable option. Many authentication mechanisms with varying degrees of accuracy and portability are available for different types of computing devices. As a consequence, several existing and well-known multi-factor authentication strategies have already been utilized to enhance the security of various applications. Keeping this in mind, we developed a framework for authenticating a user efficiently through a subset of available authentication modalities along with their several features (authentication factors) in a time-varying operating environment (devices, media, and surrounding conditions, like light, noise, motion, etc.) on a regular basis. The present work is divided into two parts, namely, a formulation for calculating trustworthy values of different authentication factors and then the development of a novel adaptive strategy for selecting different available authentication factors based on their calculated trustworthy values, performance, selection of devices, media, and surroundings. Here, the adaptive strategy ensures the incorporation of the existing environmental conditions into the selection of authentication factors and provides significant diversity in the selection process. Simulation results show the proposed selection approach performs better than other existing and widely used selection strategies, mainly random and optimal-cost selections, in different settings of operating environments. The detailed implementation of the proposed multi-factor authentication strategy, along with performance evaluation and a user study, has been accomplished to establish its superiority over the existing frameworks.

© 2016 Elsevier Ltd. All rights reserved.
1. Introduction
With the advancements of modern technology, most user activities rely upon various online services, which need to be trusted and secured to prevent illegitimate access. Authentication is the major defense to address that growing need. Single-factor authentication (user id and password, for example) suffers from significant pitfalls. For instance, if single-factor authentication
* Corresponding author.
E-mail addresses: [email protected] (D. Dasgupta), [email protected] (A. Roy), [email protected] (A. Nag).
http://dx.doi.org/10.1016/j.cose.2016.09.004
0167-4048/© 2016 Elsevier Ltd. All rights reserved.
fails, the user cannot access the service at all. Moreover, the security of the system can hardly be guaranteed once its single authentication factor is breached. As a consequence, authentication through different factors is an ongoing trend to provide secure, resilient, and robust access verification to the legitimate users of a computing system and, at the same time, to make it harder for intruders to gain unauthorized access.

The majority of authentication systems in use today check a user's identity only during login. For example, two-factor authentication systems (used in different e-mail servers, for example) check two different factors when the service is accessed for the first time but do not validate again once that service is in continuous use. This increases the chance of compromising the user's identity, because authentication information is not validated throughout the session, which opens a back door for an intruder to impersonate the actual user and gain access to his or her sensitive data. Rapidly developing mobile technology continues to increase users' access to online services. Hence, checking the authenticity of registered users on a continuous basis is of utmost importance in order to protect sensitive user data. This strengthens the need to move toward multi-factor authentication systems that offer users different choices for verifying their identity. Multi-factor authentication (MFA) also comes with a fail-safe feature: if any authentication factor is compromised, users can still be authenticated through the remaining non-compromised modalities. One of the issues for MFA is how to choose the best set of authentication factors out of all possible choices in any given operating environment. The choice of a good set of authentication factors determines the performance of the MFA as a whole.
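The login-only validation weakness described above can be contrasted with a continuous scheme in a small sketch. The session class below is purely illustrative, not part of the paper's framework: the `verify` callback stands in for any modality check (e.g., keystroke dynamics), and the 60-second re-validation interval is an assumed value.

```python
import time

class ContinuousAuthSession:
    """Re-validate the user at fixed intervals instead of only at login.

    `verify` is any callable returning True/False (a stand-in for a
    modality check such as keystroke dynamics or face recognition);
    `clock` is injectable for testing. The 60-second interval is an
    illustrative assumption, not taken from the paper.
    """
    def __init__(self, verify, interval=60.0, clock=time.monotonic):
        self.verify = verify
        self.interval = interval
        self.clock = clock
        self.last_check = clock()
        self.active = verify()          # initial login-time check

    def access(self):
        """Called on every service request; re-authenticates when due."""
        if not self.active:
            return False
        if self.clock() - self.last_check >= self.interval:
            self.active = self.verify() # periodic re-validation
            self.last_check = self.clock()
        return self.active
```

A session whose re-validation fails is cut off mid-use, closing the back door that a login-only check leaves open.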
Surprisingly, recent MFA products do not address this issue while providing solutions to end users (Parziale and Chen, 2009; Patel et al., 2013; Primo et al., 2014; Serwadda et al., 2013; Stewart et al., 2011; Vielhauer, 2005). Additionally, using either the same factors or a random selection of factors to validate users across all possible operating environments (devices, media, and surrounding factors) is not reasonable: a fixed set is predictable and exploitable, while randomly selected sets are not all equally trustworthy. Hence, there is an increasing need for adaptive selection of authentication factors (to validate users at any given time) that senses the existing operating conditions. The selection process should also address the following issues to provide a more resilient MFA. Firstly, the selection procedure should not follow (or be biased toward) any pattern that could be exploited by attackers. Secondly, the process should take the previous selections of authentication factors into account to avoid repetitive checking of the same factors. These demands guide our current research toward an adaptive factor-selection solution for a multi-factor authentication system.

In different multi-factor authentication systems, several types of authentication modalities are used (Nag et al., 2015). Some modalities are physiological biometrics, like face, fingerprint, voice, etc., while others fall into behavioral biometrics, like keystroke dynamics, mouse movement, gait, etc. In addition, some non-biometric modalities like SMS (for
PIN code), password, and CAPTCHA are also popular for authenticating users. In this work, the aforementioned authentication modalities are considered to build the modality selection framework. The term authentication factor is defined in this paper to cover different features of authentication modalities: any feature of an authentication modality, or any combination of features across authentication modalities, is considered an authentication factor. The details regarding the definition and mathematical representation are given in Section 3. To compare different authentication factors under different operating conditions, a trustworthy measure is developed to quantify which authentication factors perform better for authenticating users within particular environments. For the selection of authentication factors, an approach using a non-linear programming problem with a probabilistic objective, capable of adaptive and non-repetitive selection of different authentication factors, is proposed in this work. The major contributions of the present work are as follows:

1. Developing a strategy for calculating trustworthy values of different authentication factors, which has been divided into two parts. Initially, a non-linear programming problem with probabilistic constraints has been designed for calculating the trustworthy value of every single feature of the available authentication modalities. The error rates of individual features (found in different research works (Deutschmann and Lindholm, 2013; Harb et al., 2014; Hassanien et al., 2009; Kang et al., 2014; Kent et al., 2015; Nag and Dasgupta, 2014; Nag et al., 2014, 2015; Parziale and Chen, 2009; Wu et al., 2015; Xiang et al., 2015)) have been incorporated to make the trustworthy values more realistic.
Subsequently, another strategy has been developed for calculating trustworthy values of different sets of authentication factors, as error rates for combinations of different authentication features cannot be found in the existing literature (Nag and Dasgupta, 2014).

2. Designing an adaptive selection framework for authentication modalities along with their features (authentication factors) using a non-linear multi-objective programming problem with a probabilistic objective (Luenberger and Ye, 2008) that incorporates their computational performance and the previous selection history. The use of trustworthiness and other contextual information to dynamically perform an adaptive selection of authentication factors over a fixed set is novel and can provide much better security than the existing MFA systems. The proposed framework has been evaluated for performance and compared with different available MFAs to show the advantages of the current work. The present framework includes the influences of the surrounding environment (light, motion, noise, etc.; see Section 5.2) along with different devices (fixed, cellular, etc.) and media (wired, wireless, etc.) in the adaptive selection of authentication factors. The selected authentication factors change with the changing devices, media, and surrounding environments, making the framework more reasonable than the existing multi-factor frameworks. For example, if the luminance is less than 100 lux (i.e., outside the range of visibility), then the modalities dependent upon light (see Table 4) cannot be selected. Similar conclusions can be made for other modalities that depend on the elements of S (given in Section 5). This aspect is ignored by most of the existing MFA frameworks (FIDO, Microsoft Azure, etc.), which could be considered one of the major motivations of the present work.

3. Including a number of potential passive biometrics, like keystroke, mouse movements, typing behavior, gait, etc., which can authenticate users without much user intervention. As a consequence, users become less annoyed while using the proposed adaptive-MFA framework. No previous work on multi-factor authentication frameworks has included all these passive biometrics together (Feng et al., 2012; Kang et al., 2014; Klieme et al., 2014; Locklear et al., 2014), which could be considered another motivation for the present work.

4. Any authentication modality consists of several features (see Section 3), and none of the earlier works on multi-factor authentication has tried to use every individual feature as a separate authentication factor. This can save a significant amount of execution time and resources, and it is also a major motivation of the present work.

5. By considering every individual feature as an authentication factor, the present framework significantly increases the number of possible combinations of authentication credentials, which, in turn, increases the level of obfuscation for illegitimate users by significantly reducing the probability of repetitively selecting the same set of authentication factors. For example, in the present framework, Eye (a feature of face recognition) and Palm print (a feature of hand biometrics) could together be selected as an authentication factor (see Section 3). None of the previous works on MFA has touched upon all these important aspects (Feng et al., 2012; Kang et al., 2014; Klieme et al., 2014; Locklear et al., 2014), which could be considered a major motivation of the present work.
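The environment-aware filtering described in contribution 2 can be sketched as a simple eligibility check: a modality is selectable only if every environmental condition it depends on holds. The 100-lux visibility threshold follows the paper's example; the dependency map and the noise threshold below are illustrative assumptions, not the paper's Table 4.

```python
# A modality is eligible only if every environmental condition it
# depends on is satisfied. The 100-lux threshold follows the paper's
# example; the modality/condition pairs and the noise threshold are
# illustrative assumptions.

MODALITY_DEPENDENCIES = {
    "face":      {"light"},   # needs adequate illumination
    "iris":      {"light"},
    "voice":     {"quiet"},   # fails in noisy surroundings
    "keystroke": set(),       # passive, environment-agnostic
    "password":  set(),
}

def satisfied_conditions(luminance_lux, noise_db):
    """Map raw sensor readings to the set of conditions that hold."""
    conds = set()
    if luminance_lux >= 100:  # visibility threshold from the paper
        conds.add("light")
    if noise_db < 70:         # assumed noise threshold
        conds.add("quiet")
    return conds

def eligible_modalities(luminance_lux, noise_db):
    """Modalities whose dependencies are all satisfied right now."""
    ok = satisfied_conditions(luminance_lux, noise_db)
    return sorted(m for m, deps in MODALITY_DEPENDENCIES.items()
                  if deps <= ok)
```

In a dark, quiet room this filter drops face and iris but keeps voice and the passive modalities; the adaptive selector then chooses only among the eligible set.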
Before proceeding toward the development of an adaptive strategy for multi-factor authentication, we outline the organization of the paper. Section 2 presents the literature review; Section 3 describes different authentication factors; Section 4 reviews optimization problems; Section 5 develops the proposed methodologies; Section 6 presents simulation results of the proposed adaptive-MFA; Section 7 presents implementation details along with technicalities; Section 8 evaluates the performance of the adaptive-MFA prototype; Section 9 presents a detailed user study of the proposed adaptive-MFA framework; Section 10 lists the advantages of the proposed work; Section 11 gives an overview of the proposed adaptive-MFA as a cloud application; and Section 12 presents qualitative comparisons of the proposed adaptive-MFA with other existing popular MFA approaches. Finally, the important findings and conclusions arising from the present work are summarized in Section 13. Two appendices explain some frequently used acronyms and detailed mathematical steps for convenience and a better understanding of the presented work.
2. Literature review
Two-factor authentication (Kang et al., 2014) is a widely used approach nowadays for many online services, where the first factor is the traditional password and the second factor is either an access code sent through SMS or a PIN code generated randomly at the time of authentication. Microsoft's Windows Azure Active Directory uses multi-factor authentication for its cloud applications: a one-time password, an automated phone call, and a text message (SMS), for a total of three authentication modalities. This approach uses a fixed set of modalities, and the different authentication factors are selected only by user preference. The Fast Identity Online (FIDO) alliance developed a framework for online authentication that provides an open and scalable solution to reduce user dependency on passwords. It offers biometric authentication modalities and PIN codes to support multi-factor authentication without the use of traditional passwords. However, it does not include passive biometrics like keystroke dynamics, mouse movements, typing behavior, etc. Hence, a continuous way of authenticating users without interrupting them was not part of that framework.

Continuous authentication requires both behavioral and cognitive modalities to be considered while authenticating users. DARPA launched the Active Authentication project (Deutschmann and Lindholm, 2013) to focus on the development of different authentication modalities. Stylometry (Brennan et al., 2012) uses different stylometric methods (for example, writing style as part of author recognition) to validate authentic users while they are typing. With these methods, deceptive writing by malicious users can be identified and used as a passive authentication technique for continuous MFA. Web browsing behavior (Abramson and Aha, 2013; Kwok, 2012) is also utilized to identify the actual users of a system by continuously monitoring their pattern of browsing.
This approach captures the semantic behavior of the users through both semantic and syntactic session features (for example, time to click, etc.) to perform user identification. Screen fingerprints (Feng and Jain, 2011; Harb et al., 2014; Inacu and Constantinescu, 2013; Jain et al., 1997, 2010; Parziale and Chen, 2009; Patel et al., 2013; Serwadda et al., 2013; Stewart et al., 2011; Vielhauer, 2005) are a good candidate biometric modality for continuous authentication. This approach records the computer screen and extracts discriminative visual features from that recording. It works on visual cues (typing, mouse moving, scrolling, etc.) that are always observable on a screen irrespective of the type of application. Behavioral biometrics (Abramson and Aha, 2013; Deutschmann and Lindholm, 2013; Gascon et al., 2014; Jain et al., 1997; Kwok, 2012) can also be used in online courses to assess students' work and to authenticate authors of literary works. The incorporation of keystroke and mouse dynamics has also significantly improved user authentication in a passive way (Guidorizzi, 2013; Gunetti and Picardi, 2005; Jain et al., 2010; Lee et al., 2012; Locklear et al., 2014). For keystroke dynamics, key hold time, key interval, trigraph, digraph, and fusions of them are used as features. For mouse dynamics, screen resolution, pointer speed and acceleration settings, and mouse polling rate are used as features.
Thus, these authentication modalities can easily be applied in continuous authentication systems. A good amount of work has been done to facilitate continuous user authentication. Authors in this body of work (Jain et al., 1997; Locklear et al., 2014; Niinuma and Jain, 2010) use temporal information about the users (such as a user's face and other features) that does not change with the user's posture. This is done by computing histograms for face and body identification, performing facial or body recognition, and finally calculating the similarity. This approach can even identify a user in the absence of biometric observations. Incorporating different behavioral biometrics (keyboard interactions, mouse movements, and application interactions), BehavioSec (Niinuma and Jain, 2010; Revett et al., 2008; Serwadda et al., 2013; Stewart et al., 2011) provides promising results in continuous authentication of users. A trust model is designed to combine the effect of all three biometrics to provide faster detection of incorrect users. Using this model, an incorrect user can be detected within 18 seconds using a keyboard (15–50 keystrokes), or 2.4 minutes using a mouse (66 interactions). Typing behavior and the linguistic style of users (Guidorizzi, 2013; Gunetti and Picardi, 2005; Stewart et al., 2011) are also considered as passive authentication modalities supporting this approach. In this work, features are extracted from properties of word creation, lexical complexity, revision count, and keyboard proficiency, and their sub-properties are used to build a set of fine-grained features. These features ensure greater discriminability between users. These works illustrate that continuous user authentication is possible using a set of passive authentication modalities without any user interruption. Mobile devices are now widely used for online activities such as browsing emails, checking bank accounts, maintaining social networks, etc.
Hence, a good amount of research has been conducted on providing continuous authentication to users of mobile devices. Smartphone accelerometer features mentioned in Feng et al. (2012), Revett et al. (2008) and Vielhauer (2005) provide a way to identify a user's pattern of holding the phone. The position of the phone and context-aware information about its location during authentication provide significant improvements in authentication accuracy. These studies also consider the effect of which hand the user holds the phone in and whether the phone is kept in the pocket or in the hand. This work explores a new way of gait-based authentication. Continuous authentication can also be performed by analyzing the typing motion behavior of users (Feng et al., 2012; Jain et al., 2010; Klieme et al., 2014; Lucas and Kanade, 1981; Obied and Reda Alhajj, 2009; Ou and Ou, 2009). Different machine-learning-based classifiers can be adopted to identify the actual users from the extracted features. As most smartphones have touch screens, typing motion can easily be integrated within the continuous authentication process. Similarly, touch screen gestures (Kwok, 2012) can also be used for continuous authentication on mobile devices. Along with touch gestures (for example, flick, spread, pinch, drag, and tap), virtual typing (typing using a touch-screen-based keyboard or inputting a phone number with touch) and touch-based drawing (drawing shapes using a finger) are also considered for mobile continuous authentication. User-specific features are extracted for each category mentioned above, and user verification can then be conducted. The same touch-based inputs are also used in other works (Parziale and Chen, 2009; Patel et al., 2013). These research studies demonstrate that continuous user authentication is possible on all existing devices and does not depend on specific hardware or platforms.
3. Authentication factors
This section describes different authentication modalities along with their features and usability conditions. Before describing the features of the various authentication modalities, we define the term authentication factor. An authentication factor is any of the following:

(i) a single feature of an authentication modality;
(ii) any combination of features of one authentication modality;
(iii) any combination of features of different authentication modalities.

Let Mk (k ∈ Z+) denote the kth authentication modality and {Mk : fk,i} its ith feature; the collection of all modalities with their features is then written {{Mk} : {fk,i}}. For example, {M1 : f1,1} and {M2 : f2,1}, the first features of M1 and M2 respectively, can each be considered an authentication factor (scenario i). Moreover, {M1 : f1,1, f1,2}, the combination of {M1 : f1,1} and {M1 : f1,2}, can also be considered an authentication factor (scenario ii). Additionally, the combination of {M1 : f1,1} and {M2 : f2,1}, i.e., {M1, M2 : f1,1, f2,1}, can also be considered one authentication factor (scenario iii).

In this work, fifteen well-known authentication modalities (twelve biometric and three non-biometric) have been used. The biometric authentication modalities can be classified as physiological (face recognition, iris, tightly coupled iris and face recognition, fingerprint, hand biometrics, and periocular) and behavioral (keystroke, mouse dynamics, author identification, touch signature pattern, gait recognition, and voice recognition). Detailed descriptions of the features of the authentication modalities are given in Table 1, together with the usability conditions for each modality, which help make the proposed selection procedure adaptive.
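The three factor types above can be enumerated mechanically: every non-empty combination of features, within or across modalities, is a candidate factor. The sketch below uses the paper's Mk : fk,i notation; the specific modalities, feature counts, and the size cap are toy values chosen for illustration.

```python
from itertools import combinations

# Toy modality/feature inventory in the paper's M_k : f_{k,i} notation.
modalities = {
    "M1": ["f1,1", "f1,2"],   # e.g., face: lip, eye
    "M2": ["f2,1"],           # e.g., iris
}

def authentication_factors(modalities, max_size=2):
    """Every non-empty combination of up to `max_size` features, drawn
    within or across modalities, is a candidate authentication factor."""
    features = [(m, f) for m, fs in modalities.items() for f in fs]
    factors = []
    for r in range(1, max_size + 1):
        factors.extend(combinations(features, r))
    return factors

factors = authentication_factors(modalities)
# scenario (i):   single features, e.g. (("M1", "f1,1"),)
# scenario (ii):  {M1 : f1,1, f1,2} — two features of one modality
# scenario (iii): {M1, M2 : f1,1, f2,1} — features of different modalities
```

Even this toy inventory of three features yields six candidate factors, which illustrates how treating individual features as factors multiplies the credential combinations available to the selector.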
4. A review of optimization problems
In applied mathematics, an optimization problem is the problem of finding the best solution from the set of all feasible solutions according to previously designed criteria (a guided search). Optimization problems (Balakrishnan, 1996; Dasgupta, 2006, 2014; Luenberger and Ye, 2008) can be divided into two categories depending on whether the variables are continuous or discrete. The standard mathematical form of a (continuous) optimization problem is given as follows:

    minimize_x f(x)
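A toy unconstrained instance of the standard form above can be solved by gradient descent. This sketch is only a didactic illustration of continuous optimization; the quadratic objective, step size, and iteration count are arbitrary choices, not the paper's probabilistic programming formulation.

```python
def minimize_1d(f, grad, x0, lr=0.1, steps=200):
    """Minimal gradient-descent sketch for min_x f(x).

    A didactic illustration of the continuous optimization problems
    reviewed in this section; lr and steps are arbitrary choices.
    """
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)   # step against the gradient
    return x

# Example: f(x) = (x - 3)^2 attains its minimum at x = 3.
x_star = minimize_1d(lambda x: (x - 3) ** 2,
                     lambda x: 2 * (x - 3),
                     x0=0.0)
```

Each step scales the residual (x − 3) by a factor of 0.8, so the iterate converges geometrically to the minimizer x = 3.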
Table 1 – Descriptions of various features of different biometric and non-biometric modalities, along with their EER, FAR, FRR, FMR, and FNMR.

Face recognition (M1)
Geometrical features: 7 category features ({M1 : {f1,i}}) will be considered, as follows:
Lip ({M1 : f1,1}): lip center position (xc and yc), lip shape (h1, h2 and w), and lip orientation (θ). The EER of this feature lies between 5.2% and 6.8% (Vielhauer, 2005). Detailed descriptions of EER, FAR, and FRR can be found in Appendix A.
Eye ({M1 : f1,2}): a circle with three parameters (x0, y0 and r) for the iris; two parabolic arcs with six parameters (xc, yc, h1, h2, w, θ) to model the boundaries of the eye; the latter differ between closed and open eye positions. The false match probability for iris recognition is on the order of 1 in 10^13, and the FNMR is very low (Vielhauer, 2005). The error rate of the retina scan is 1 out of 10,000,000 (almost 0%); its FMR and FNMR are 0% and 1% respectively, and its FAR, FRR, and crossover rate are 0.31%, 0.04%, and 0.8% respectively (Vielhauer, 2005).
Brow and cheek ({M1 : f1,3}): left and right brow: a triangular template with six parameters (x1, y1), (x2, y2) and (x3, y3). The cheek also has six parameters (both upward and downward triangle templates). Both brow and cheek templates are tracked using the Lucas–Kanade algorithm. Here, the EER is about 15% (Vielhauer, 2005).
Textural features ({M1 : f1,4}): textural features will be elicited using the Local Ternary Pattern (LTP) and the Genetic and Evolutionary Feature Extractor (GEFE). GEFE evolves LTP feature extractors to elicit distinctive features from facial images. Its EER is relatively high, about 10% (Vielhauer, 2005). The overall FAR and FRR for the face recognition modality are 1% and 20% respectively.
Usability: under normal lighting conditions, geometric features will be used.
Under varying lighting conditions, textural features will be utilized. This modality is not reliable if the facial features have been altered through plastic surgery (Lucas and Kanade, 1981; Vielhauer, 2005). It is also dependent on the motion of the subject (Vielhauer, 2005; Xiang et al., 2015).

Iris recognition (M2)
Localization: iris and pupil boundary localization, and pupil center detection. Shape, edge, and region information will be fused.
Feature extraction: texture phase structure information will be elicited. Its different features are denoted by {M2 : {f2,i}}. Its FAR is about 0.01% (Vielhauer, 2005). Here, the feature {M2 : f2,1} is essentially the eye, which is very similar to {M1 : f1,2}; hence, {M1 : f1,2} and {M2 : f2,1} could be used interchangeably.
Usability: a reliable and accurate biometric trait in an ideal environment. However, localizing irises in eye images obtained at a distance in unconstrained environments is difficult. If a user suffers from an eye disease, this modality is not suitable. It is also dependent on light and on the motion of the subject (Vielhauer, 2005).

Tightly coupled iris and face recognition (M3)
Localization: face detection, eye detection, pupil center detection, iris boundary detection, and computation of eyelids and noise mask (Vielhauer, 2005).
Feature extraction: the face image and normalized iris image are divided into several overlapping patches from which feature distributions are elicited and concatenated into an enhanced feature vector used as an iris–face descriptor.
Fusion: both score-level and feature-level fusion will be used. Its different features are denoted by {M3 : {f3,i}}.
Usability: face and iris detection from a single source in an ideal environment where a frontal facial image can be captured reliably. Hence, it is dependent on light.
Fingerprint (M4)
Three levels of fingerprint features will be considered (with a total of 6 category features, {M4 : {f4,i}}):
Level 1 features (global fingerprint features): singular points, ridge orientation map, and ridge frequency map.
Level 2 features (minutiae-based features): ridge endings and ridge bifurcations; minutiae location and orientation. Each of these combines a good number of features, but together they uniquely identify a print.
Level 3 features (sweat-pore-based features): sweat pores are considered highly distinctive in terms of their number, position, and shape.
The best algorithm for fingerprint verification yields an EER of less than 2.07%, and more than 30% of the algorithms yield an EER below 5% (Vielhauer, 2005). The overall FAR, FRR, crossover rate, and failure-to-enroll rate are 2%, 2%, 2%, and 1% respectively.
Usability: fingerprints can be forged and altered by transplant surgery (Vielhauer, 2005).

Hand biometrics (M5)
The features {M5 : {f5,i}} are as follows:
Palm print ({M5 : f5,1}): similar to fingerprint techniques, and also originating from forensics, palm prints rely on individual structures in the palm skin. This feature is highly reliable, with an EER of less than 1%; its FAR and FRR are 4.49% and 2.04% respectively (Vielhauer, 2005).
Hand geometry ({M5 : f5,2}): the goal is to extract geometrical features from images of the entire hand. Its EER is 0.0012%; its FAR and FRR are 5.29% and 8.34% respectively (Vielhauer, 2005).
Vein structure ({M5 : f5,3}): using infrared cameras, it is possible to capture images of the vein structure of the hand. Its EER lies between 2.3% and 3.75% (Vielhauer, 2005).
Usability: hand biometrics can be forged and altered.

Password (M6)
The password (Kang et al., 2014; Vielhauer, 2005) is the most common modality.
It can be stored in hashed form and matched against the input by hashing the given password (string matching). Passwords can be made of alphanumeric characters along with some special characters. The features {M6 : {f6,i}} corresponding to this modality are: master password; key pattern; security question; profile information.
Usability: the security of passwords depends on the users' ability to keep them secret. Passwords are also susceptible to numerous attacks.
Table 1 – (continued)

CAPTCHA (M7)
A CAPTCHA (Vielhauer, 2005) is used to prevent automated software or web robots from performing actions and can discriminate between humans and bots. A CAPTCHA features image files of slightly distorted alphanumeric characters, and it also has a readout feature for users who are visually impaired. In short, the features {M7 : {f7,i}} are: only numbers; only characters; alphanumeric characters; pictorial CAPTCHA.
Usability: CAPTCHAs are widely used in online applications, but they are sometimes genuinely hard for humans to interpret.

SMS (M8)
SMS (Vielhauer, 2005) is used to send a passcode to a registered phone number; the code is valid for a short period of time. The features {M8 : {f8,i}} of this modality are: any number or character; emoticon and special-character sequences forming a message.
Usability: sending and receiving messages incurs expense, and SMS cannot be used in places with limited cellular coverage.

Voice (M9)
Voice recognition (Vielhauer, 2005) uses pitch and different formant features. Recent evaluations based on more than 59,000 trials from 291 speakers in identification mode have reported accuracy in the range of a 1% false recognition rate at a 5% miss rate (i.e., a 1% false identification rate plus a 4% non-identification rate) for the most accurate of the 16 participating systems (Vielhauer, 2005). The FAR, FRR, and crossover rate for the voice modality are 2%, 10%, and 6% respectively. Its different features are denoted by {M9 : {f9,i}}.
Usability: this modality may fail to detect a legitimate user in a noisy environment. If a user suffers from a throat infection, the modality is no longer useful. It is also dependent on motion (M).

Keystroke (M10)
This modality detects the pattern of keystrokes (Vielhauer, 2005).
The features used for this technique are the mean latency and standard deviation of digraphs (a combination of two letters representing one sound), and the mean duration and standard deviation of keystrokes. One possibility for measuring recognition accuracy in keystrokes is a test scenario in which all users type the same password. This text-dependent authentication appears to be more precise than text-independent schemes; see for example Vielhauer (2005), where recognition rates in the range of 83%–92% have been observed. Verification tests, such as those performed in Vielhauer (2005), have shown an EER in the range of 15%–20%. The overall FAR, FRR and crossover rates for the keystroke modality are 7%, 0.1%, and 1.8% respectively. Its different features are denoted by {M10 : {f10,i}i∈ℤ+}. Usability: This trait is particularly useful for verification only. Mouse dynamics (M11) Research on mouse dynamics has mainly concentrated on motor-skill features (e.g., time for a single click, time for a double click, and speed in a particular direction) or on mouse actions such as cursor positions on the screen, the idle time of the mouse and the movement distances (Vielhauer, 2005). Further features of mouse dynamics are the mouse angles and directions, the angle of curvature and the curvature distance (Vielhauer, 2005). This technique is designed to compensate for platform dependencies such as screen resolution and mouse sensitivity. The direction between two consecutive points is defined as the angle between the line connecting the two points and the horizontal line passing through the first point. The angle of curvature is measured using three consecutive points (A, B, C) and is defined as the angle formed between the line passing through A and B and the line passing through B and C.
The curvature distance between three consecutive points A, B, C is the ratio of the length of the line between A and C to the length of the line that is perpendicular to line AC and terminates at B. The features are represented as {M11 : {f11,i}i∈ℤ+}. Usability: This trait is particularly useful for verification only. Text identification (M12) The different features {M12 : {f12,i}i∈ℤ+} of this modality are listed as follows. Uni-gram feature: uni-gram feature extraction counts the frequency of each character and the total number of characters in a sample; each character frequency is then divided by the total number of characters. Stylometric and structural features: style-based features that may include, but are not limited to, the word-length frequency distribution, the total number and frequency distribution of function words, the total number of words, the total number of distinct words, and the total number of characters. A feature vector is then created for each sample in the dataset. Usability: An author can be identified based on a passive comparison of the content he/she publishes to other content found on the web. This modality is particularly useful for revealing identity when careful authors do not register for a service with their real names and use tools to hide their identity at the network level. Touch signature pattern (M13) To capture the touch signature pattern, five features ({M13 : {f13,i}i∈ℤ+}) will be considered: tapping buttons and check boxes (capturing pressure while tapping) and swipe movements from left to right, right to left, top to bottom, and bottom to top. The EER is about 2% (Vielhauer, 2005). Usability: This modality can be used on mobile devices such as cell phones, iPads and tablets, but cannot be used on PCs. Periocular (M14) Level 1 (geometrical) and level 2 (textural) features will be used.
Level 1 features include upper/lower eye folds, upper/lower eyelids, wrinkles, and moles. Level 2 features include skin texture, pores, hair follicles, skin tags and other dermatological features (Kang et al., 2014). The features are represented as {M14 : {f14,i}i∈ℤ+}. Usability: Periocular recognition is complementary to iris recognition in unconstrained situations and will be used when iris information is not available or cannot be acquired reliably. Face images acquired with low-resolution sensors or at large standoff distances offer very little or no information about iris texture. Even under ideal conditions, characterized by favorable lighting and an optimal standoff distance, if the subject blinks or closes his or her eye, the iris boundary cannot be detected accurately (Vielhauer, 2005). Hence, this modality is dependent on light; it is also dependent on the motion of the subject (Vielhauer, 2005).
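The geometric quantities defined above for mouse dynamics (direction of a segment, angle of curvature at three points, and curvature distance) translate directly into code. The following is a minimal illustrative sketch, not taken from the paper; function names and the sample points are ours:

```python
import math

# Sketch of the mouse-dynamics geometry described in Table 1 (M11).
# Points are (x, y) tuples captured from the cursor trajectory.

def direction(p, q):
    # angle (degrees) between segment p->q and the horizontal through p
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def angle_of_curvature(a, b, c):
    # angle at B between the rays B->A and B->C
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def curvature_distance(a, b, c):
    # ratio of |AC| to the perpendicular distance from B to line AC
    ac = math.hypot(c[0] - a[0], c[1] - a[1])
    cross = abs((c[0] - a[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (c[1] - a[1]))
    return ac / (cross / ac)

print(angle_of_curvature((0, 0), (1, 1), (2, 0)))  # 90.0 for this right angle
```

For the points (0,0), (1,1), (2,0), the rays from B are perpendicular, so the angle of curvature is 90° and the curvature distance is |AC|/1 = 2.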
Table 1 – (continued) Descriptions and computational features Gait recognition (M15) Gait refers to how a person walks: a pattern of movement of the limbs of animals, including humans, during locomotion over a solid substrate (Vielhauer, 2005). The extraction of movement features from video first requires detection of the bulk movement of the target body. Correct identification rates between 76% (wearing different shoes) and 99% have been reported for 5 different databases, some of which contain videos of more than 100 subjects. The features are represented as {M15 : {f15,i}i∈ℤ+}. In verification mode, recent work reported an improvement in EER from 8.6% to 7.3% by using Bayesian probability rather than Euclidean distance. These results were obtained on relatively large databases consisting of 1079 video sequences from 115 persons (Vielhauer, 2005). Usability: This modality is dependent on both light (L) and motion (M) (Vielhauer, 2005).
Subject to

φi(x) = 0, (i = 1, 2, …, m),
gi(x) ≤ 0, (i = 1, 2, …, p)

where f(x): ℝ^n → ℝ is the objective function to be minimized over the variable x, gi(x) ≤ 0 are called inequality constraints, and φi(x) = 0 are called equality constraints. By convention, the standard form defines a minimization problem. Similarly, the maximization problem can be defined as follows:

Maximize over x: f(x)
Subject to
φi(x) = 0, (i = 1, 2, …, m),
gi(x) ≤ 0, (i = 1, 2, …, p)

Here, f(x): ℝ^n → ℝ is the objective function to be maximized over x, and gi(x), φi(x) are considered as the constraints. Hence, a maximization problem can be treated by negating the objective function. For a clearer understanding, the following mathematical model can be considered as a maximization problem:

Maximize: A = xy
Subject to: 2x + y = 2400

where A = xy is the objective and 2x + y = 2400 is the constraint of the maximization problem. On the other hand, multi-objective optimization (Dasgupta, 2006, 2014; Dunn and Peucker, 2002; Klieme et al., 2014; Luenberger and Ye, 2008) is an area of multiple-criteria decision making concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics, and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. In mathematical terms, a k-objective optimization problem can be formulated as

Maximize f(x) = {f1(x), f2(x), …, fk(x)}
Subject to x ∈ X

where f(x) is a k-dimensional objective vector (k ≥ 2), fi(x) is the ith objective to be maximized (i = 1, 2, …, k), x is a decision vector, and X is the set of all feasible decision vectors (i.e., X is the feasible region in the decision space), defined by the constraint functions. Moreover, the vector-valued objective function is often defined as f: X → ℝ^k, with f(x) = (f1(x), f2(x), …, fk(x))^T. The space to which the objective vector belongs is called the objective space, and the image of the feasible set under f is called the attained set.
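The toy problem above (maximize A = xy subject to 2x + y = 2400) can be solved by substituting the equality constraint into the objective; a brief illustrative sketch:

```python
# Maximize A = x*y subject to 2x + y = 2400. Eliminating y via the constraint
# gives A(x) = x*(2400 - 2x), a concave quadratic maximized at its vertex
# x = 600, so y = 1200 and A = 720000.

def area(x: float) -> float:
    y = 2400 - 2 * x            # substitute the equality constraint
    return x * y

best_x = max(range(0, 1201), key=area)  # coarse search over feasible integers
print(best_x, 2400 - 2 * best_x, area(best_x))  # 600 1200 720000
```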
5. Proposed methodologies
This section describes a methodology for the adaptive selection of multiple authentication factors that suits the existing operating environment and is more unpredictable to attackers. It is divided into two parts. (i) A mathematical model for calculating the trustworthy values of different authentication factors, presented in Subsection 5.1; this is further divided into the trustworthy-value calculation for every single feature of the different authentication modalities and the calculation of trustworthy values for combinations of the previously calculated features, the latter being necessary because the EER, FRR and FAR are unavailable for combinations of authentication factors. (ii) The proposed adaptive selection mechanism, considering the influences of different devices, media and surrounding conditions, described in Subsection 5.2. Fig. 1 shows a pictorial representation of the proposed work. According to Fig. 1, the proposed trustworthy model calculates the trustworthy values of the different authentication factors based on pair-wise comparative preference information and individual error rates. These values are later used to design the adaptive selection algorithm. In addition to the trustworthy values, the performance of the authentication factors and the previous history of their selection are used to provide a non-repetitive selection of authentication factors, and constraints are designed to consider the influence of surrounding conditions on the selection decision.
Fig. 1 – Pictorial representation of the proposed work.
5.1. Formulation for calculating trustworthy values of different authentication factors

This subsection demonstrates the proposed strategy for calculating the trustworthy values of different authentication factors. Initially, a non-linear programming problem with probabilistic constraints is designed to calculate the trustworthy value of every single feature of the available authentication modalities. Error rates corresponding to each individual feature (taken from different research works (Nag and Dasgupta, 2014; Nag et al., 2014; Vielhauer, 2005)) are incorporated to make the trustworthy values more realistic. Subsequently, another strategy has been developed for calculating the trustworthy values of different sets of authentication factors, as their combined error rates cannot be found in the existing literature (Nag and Dasgupta, 2014; Nag et al., 2014; Vielhauer, 2005).
5.1.1. Calculation of trustworthy values of individual features

Initially, the proposed formulation for calculating the trustworthy values of individual features of the various authentication modalities on different devices, media and operating environments is discussed. This strategy exploits the error rates of these features as reported in the existing literature (Nag and Dasgupta, 2014; Nag et al., 2014; Vielhauer, 2005). In this subsection, the authors formulate a non-linear programming problem with probabilistic constraints for calculating the trustworthy values of individual features of different authentication factors. Let there be n authentication modalities, l features for each modality, d different classes of devices and e different classifications of media available for the present problem.
Now, let us assume that an authentication modality (with a set of features), Mi; (i = 1, 2, …, n), is more (or less or equally) trusted for a user on a device Dj; (j = 1, 2, …, d) than on a device Dk; (k = 1, 2, …, d; k ≠ j) for a particular medium Mel; (l = 1, 2, …, e). Accordingly, these pair-wise comparisons have been made among the devices in a fixed medium, and all the corresponding outcomes are taken as equally likely, since no prior assumptions regarding the trustworthiness of the devices and media can be made. For example, it can never be confirmed whether the face recognition modality is more trusted on fixed devices (desktops, workstations, etc.) than on portable devices (laptops, tablets, etc.) in a wired medium (WI), as there are equal chances of its being equally or less trusted. The corresponding representation of the pair-wise comparative trustworthy preference is Tij′(Ms; fs,l) for the sth modality (n options) with features {Ms; fs,l} (taking one feature at a time), the ith device (d options) and the jth medium (e options). Thus, for a particular pair-wise comparison involving the ith and kth devices for a fixed (jth) medium and a fixed (sth) modality, one of the following conditions will occur: Tij′(Ms; fs,l) > or = or < Tkj′(Ms; fs,l); i ≠ k; and since they are equally likely:
P{Tij′(Ms; fs,l) > Tkj′(Ms; fs,l); i ≠ k} = P{Tij′(Ms; fs,l) < Tkj′(Ms; fs,l); i ≠ k} = P{Tij′(Ms; fs,l) = Tkj′(Ms; fs,l); i ≠ k} = 1/3
These comparisons only involve the trustworthiness of different devices with respect to the various features of a particular authentication modality in a single medium. As a consequence, the random variable { Tij′ (Ms; fs,l )} can be constructed to determine the comparisons of the trustworthiness of a specific feature of a particular authentication modality (with a set
of features) in different devices in a fixed medium. Similarly, the comparison can also be done among the trustworthiness of a particular authentication modality (with a set of features) in different media, keeping the device as fixed. In a similar way, let a particular feature of one authentication modality {Ms; fs,l} is more (or less or equally) trusted for a user in a medium Mej (j = 1, 2…e) rather than in medium Mek (k = 1, 2…e; k ≠ j) for a particular device Dl; (l = 1, 2…d). All the above cases are equally likely as no prior assumptions regarding the trustworthiness of devices and media have been made. Thus, for a particular pair-wise comparison involving ith devices, jth and kth media, the following conditions will occur:
Tij′(Ms; fs,l) > or = or < Tik′(Ms; fs,l); j ≠ k; and

P{Tij′(Ms; fs,l) > or = or < Tik′(Ms; fs,l); j ≠ k} = 1/3,
as all three cases are equally likely. These comparisons involve the trustworthiness of different media, keeping the device selection fixed. Consequently, the random variable {Tij′(Ms; fs,l); j = 1, …, e} has been constructed to determine the pair-wise comparisons of the trustworthiness of an authentication factor in different media, keeping the device selection (i) fixed. Similarly, another random variable {Tij′(Ms; fs,l); i = 1, …, d} has been constructed to determine the pair-wise comparisons of the trustworthiness of an authentication factor on different devices, keeping the medium selection (j) fixed. Due to the unavailability of supporting data for the pair-wise comparative preferences, the distributions of {Tij′(Ms; fs,l); j = 1, …, e} and {Tij′(Ms; fs,l); i = 1, …, d} are taken as standard normal (N(0, 1)) in the following approaches. Based on the above cases, the following non-linear programming problem with probabilistic constraints (NLPPPC) (Luenberger and Ye, 2008) has been formed to find a set of Tij′(Ms; fs,l) values.
Maximize Σj Σi Σk; i≠k [Tij′(Ms; fs,l) − Tkj′(Ms; fs,l)] / ε1 + Σi Σj Σk; j≠k [Tij′(Ms; fs,l) − Tik′(Ms; fs,l)] / ε2    (1)
Subject to

P{Tij′(Ms; fs,l) ≥ Tkj′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; i ≠ k; s, l ∈ ℤ+} ≥ 1 − ε1;    (2)
P{Tkj′(Ms; fs,l) > Tij′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; i ≠ k; s, l ∈ ℤ+} ≥ ε1;    (3)
P{Tij′(Ms; fs,l) ≥ Tik′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; j ≠ k; s, l ∈ ℤ+} ≥ 1 − ε2;    (4)
P{Tik′(Ms; fs,l) > Tij′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; j ≠ k; s, l ∈ ℤ+} ≥ ε2;    (5)
0 ≤ Tij′(Ms; fs,l) ≤ 1; ∀i, j = 1, 2, 3; l ∈ ℤ+; ε1, ε2 ∈ (0, 1)    (6)
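The NLPPPC in Eqs. (1)-(6) can be explored numerically. The sketch below is illustrative only: it reads each summand of Eq. (1) as an absolute gap, uses placeholder ε values, and assumes the probabilistic constraints (2)-(5) have already been converted to deterministic form (Appendix B) and so enforces only the box constraint (6); the random-search routine is a stand-in for a proper nonlinear-programming solver.

```python
import itertools
import random

# Illustrative placeholder epsilons; the paper derives its own values.
EPS1, EPS2 = 0.45, 0.25

def objective(T):
    # T[i][j]: trustworthy value of one feature on device i and medium j (3x3).
    dev_gaps = sum(abs(T[i][j] - T[k][j])        # devices compared, medium fixed
                   for j in range(3)
                   for i, k in itertools.combinations(range(3), 2))
    med_gaps = sum(abs(T[i][j] - T[i][k])        # media compared, device fixed
                   for i in range(3)
                   for j, k in itertools.combinations(range(3), 2))
    return dev_gaps / EPS1 + med_gaps / EPS2

def solve(samples=5000, seed=7):
    # Random search over the box constraint (6): all entries in [0, 1].
    rng = random.Random(seed)
    candidates = ([[rng.random() for _ in range(3)] for _ in range(3)]
                  for _ in range(samples))
    return max(candidates, key=objective)
```

A constant matrix scores zero, confirming that the objective rewards spread between the pair-wise trustworthy values rather than their absolute magnitude.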
In the present work, we consider fifteen authentication modalities reported in the literature, along with several of their important features (as mentioned in Table 1), three types of devices (fixed, handheld and portable) and three different media (wired, wireless and cellular) to calculate the trustworthy values of the authentication features. However, the proposed adaptive MFA framework can be applied to any number of devices, media and authentication modalities. Detailed descriptions of the different authentication modalities are given in Table 1. The primary objective of the proposed NLPPPC is to find the trustworthy value of every single feature of any authentication modality. The proposed formulation is generalized, although it is more applicable to the biometric authentication modalities; the authors are investigating how the proposed system can produce a better visualization of trust values for the non-biometric authentication modalities. In this work, the range of n is (1 ≤ n ≤ 15). Moreover, the FAR, FRR, EER, FMR and FNMR (Vielhauer, 2005) for some of the features of the authentication modalities cannot be found in the literature; consequently, the authors were able to find the trustworthy values only of those features for which the FAR, FRR, EER, FMR and FNMR are reported (Vielhauer, 2005). A detailed description of the FAR, FRR, EER, FMR and FNMR can be found in Appendix A. However, the authors believe that the trustworthy values of all the features can be found by applying the proposed method if the corresponding FAR, FRR, EER, FMR and FNMR are known. In this work, the authors have considered three types of devices, namely, fixed device (FD), portable device (PD) and handheld device (HD), and three types of media, namely, wired (WI), wireless (WL) and cellular (CL).
As a consequence, the random variables {Tij′(Ms; fs,l); j = 1, …, e} and {Tij′(Ms; fs,l); i = 1, …, d} are converted to {Tij′(Ms; fs,l); j = 1, 2, 3} ∼ N(0, 1) (media varied, i.e., {Ti1′(Ms; fs,l), Ti2′(Ms; fs,l), Ti3′(Ms; fs,l)}) and {Tij′(Ms; fs,l); i = 1, 2, 3} ∼ N(0, 1) (device varied, i.e., {T1j′(Ms; fs,l), T2j′(Ms; fs,l), T3j′(Ms; fs,l)}) respectively. ε1 ∈ (0, 1) is the critical region for P{Tij′(Ms; fs,l) ≥ Tkj′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; i ≠ k; l ∈ ℤ+} as well as the acceptance region of P{Tkj′(Ms; fs,l) > Tij′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; i ≠ k; l ∈ ℤ+}. If ε1 → 1, then the probability of accepting the hypothesis given in Equation (2) decreases, increasing the probability of accepting the hypothesis presented in Equation (3). Similarly, ε2 ∈ (0, 1) is the rejection region of P{Tij′(Ms; fs,l) ≥ Tik′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; j ≠ k; l ∈ ℤ+}; however, it can be considered as the acceptance region for P{Tik′(Ms; fs,l) > Tij′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; j ≠ k; l ∈ ℤ+}. The objective of the proposed NLPPPC is to observe the influence of the size of the critical regions on the difference of the trustworthy values of a particular feature of a given modality, i.e., if the difference becomes significant, the rejection regions corresponding to the constraints P{Tij′(Ms; fs,l) ≥ Tkj′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; i ≠ k; l ∈ ℤ+} and P{Tij′(Ms; fs,l) ≥ Tik′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; j ≠ k;
l ∈ ℤ+} reduce. For example, if the difference between T12′(M1; f1,1) and T22′(M1; f1,1) (different instances of the random variables) is very significant, i.e., if T12′(M1; f1,1) ≫ T22′(M1; f1,1) or T22′(M1; f1,1) ≫ T12′(M1; f1,1), the rejection region corresponding to P{Tij′(Ms; fs,l) ≥ Tkj′(Ms; fs,l); for each j = 1, 2, 3 and i, k = 1, 2, 3; i ≠ k; l ∈ ℤ+} decreases. A similar conclusion can be made for the trustworthy values of a particular modality on one device across different media. Now, combining both cases, the objective function of the proposed NLPPPC has been constructed; its nature is convex. To illustrate the solution procedure, an example is presented that uses the pair-wise comparative preference information as a metric to calculate the trustworthy function values. Here, the face recognition modality with feature (M1 : f1,1) (lip center positions xc and yc), three devices (FD, PD, and HD) and three media (WI, WL, and CL) has nine trustworthy values. This example can be expressed with the proposed non-linear optimization problem with probabilistic constraints as follows:

Case 1: For WI, a PD is more trustworthy than an HD, i.e., P{T21′(M1; f1,1) > T31′(M1; f1,1)} ≥ 1 − ε1.
Case 2: For WI, a PD is less trustworthy than an HD, i.e., P{T21′(M1; f1,1) < T31′(M1; f1,1)} ≥ ε1.
Case 3: Both devices are equally trusted, i.e., P{T21′(M1; f1,1) = T31′(M1; f1,1)} ≥ 1 − ε1.
Case 4: For the PD, WI is more trusted than the WL medium, i.e., P{T21′(M1; f1,1) > T22′(M1; f1,1)} ≥ 1 − ε2.
Case 5: For the PD, WI is less trusted than the WL medium, i.e., P{T21′(M1; f1,1) < T22′(M1; f1,1)} ≥ ε2.
Case 6: For the PD, WI and WL are equally trusted, i.e., P{T21′(M1; f1,1) = T22′(M1; f1,1)} ≥ 1 − ε2.
Similar types of cases can be formed to cover all possible combinations of devices and media for the constraints of the optimization problem. In order to solve the problem, the probabilistic constraints need to be converted to non-probabilistic constraints; the details of this conversion are shown in Appendix B. Next, the above optimization problem is solved to obtain the following T-matrix form:
⎡ T11′(M1; f1,1)  T12′(M1; f1,1)  T13′(M1; f1,1) ⎤   ⎡ 0.99   0.761  0.415 ⎤
⎢ T21′(M1; f1,1)  T22′(M1; f1,1)  T23′(M1; f1,1) ⎥ = ⎢ 0.502  0.856  0.473 ⎥
⎣ T31′(M1; f1,1)  T32′(M1; f1,1)  T33′(M1; f1,1) ⎦   ⎣ 0.594  0.434  0.047 ⎦

with ε1 = 0.4429 and ε2 = 0.2392.

The trustworthy values of (M1 : f1,1) on the different devices (FD, PD and HD) as well as in the different media (WI, WL, and CL) can be calculated as follows:

TFD′(M1; f1,1) = T11′(M1; f1,1) + T12′(M1; f1,1) + T13′(M1; f1,1) = 0.99 + 0.761 + 0.415 = 2.17.
TPD′(M1; f1,1) = T21′(M1; f1,1) + T22′(M1; f1,1) + T23′(M1; f1,1) = 0.502 + 0.856 + 0.473 = 1.83.
THD′(M1; f1,1) = T31′(M1; f1,1) + T32′(M1; f1,1) + T33′(M1; f1,1) = 0.594 + 0.434 + 0.047 = 1.07.
TWI′(M1; f1,1) = T11′(M1; f1,1) + T21′(M1; f1,1) + T31′(M1; f1,1) = 0.99 + 0.502 + 0.594 = 2.08.
TWL′(M1; f1,1) = T12′(M1; f1,1) + T22′(M1; f1,1) + T32′(M1; f1,1) = 0.761 + 0.856 + 0.434 = 2.05.
TCL′(M1; f1,1) = T13′(M1; f1,1) + T23′(M1; f1,1) + T33′(M1; f1,1) = 0.415 + 0.473 + 0.047 = 0.935.

Here, the row entries are added to find the trustworthy values of an individual authentication factor on the different devices, whereas the column entries are added to find the trustworthy values in the different media. Next, the EER values are used to obtain the final trustworthy values of the authentication factors on different devices and media. From the literature (Vielhauer, 2005), the EER for M1; f1,1 is 6.8%; hence, in the remaining 93.2% of cases the performance of this feature is quite satisfactory. The trustworthy values above do not yet consider the error rates. Incorporating the EER, the above-mentioned trustworthy values are scaled as follows:

TFD(M1; f1,1) = 2.17 − (6.8/100 ∗ 2.17) ≈ 2.02.
TPD(M1; f1,1) = 1.83 − (6.8/100 ∗ 1.83) ≈ 1.71.
THD(M1; f1,1) = 1.07 − (6.8/100 ∗ 1.07) ≈ 0.997.
TWI(M1; f1,1) = 2.08 − (6.8/100 ∗ 2.08) ≈ 1.94.
TWL(M1; f1,1) = 2.05 − (6.8/100 ∗ 2.05) ≈ 1.92.
TCL(M1; f1,1) = 0.935 − (6.8/100 ∗ 0.935) ≈ 0.87.
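The aggregation just described (row and column sums of the T-matrix, each discounted by the 6.8% EER) can be reproduced in a few lines; the matrix values are from the example above, while the variable names are ours:

```python
# Row sums give per-device values (FD, PD, HD); column sums give per-medium
# values (WI, WL, CL); the EER for (M1 : f1,1) discounts each aggregate.

T = [[0.99, 0.761, 0.415],   # FD row: WI, WL, CL entries
     [0.502, 0.856, 0.473],  # PD row
     [0.594, 0.434, 0.047]]  # HD row
EER = 6.8 / 100

device_vals = [sum(row) for row in T]
media_vals = [sum(T[i][j] for i in range(3)) for j in range(3)]

scaled_devices = [v * (1 - EER) for v in device_vals]
scaled_media = [v * (1 - EER) for v in media_vals]
print([round(v, 2) for v in scaled_devices])  # [2.02, 1.71, 1.0]
```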
From the above-calculated values it is clear that M1; f1,1 is more trustworthy on FD than on HD. However, these values do not signify that M1; f1,1 is twice as trustworthy on FD as on HD. In a similar manner, the trustworthy values for the other modalities, with their various features, on different devices and media can be calculated. Some interesting and realistic results regarding the trustworthy values of various features of different authentication modalities are tabulated in Table 2. For example, from Table 2 it can be found that the eye (M1 : f1,2) (one of the features of the face recognition modality)
Table 2 – Trustworthy values of different features of various authentication modalities.

Modality  Feature                          FD     PD     HD     WI     WL     CL
M1        (M1 : f1,1)                      2.02   1.67   1.00   1.95   1.92   0.87
M1        (M1 : f1,2)                      2.15   1.79   1.07   2.08   2.04   0.93
M1        (M1 : f1,3)                      1.84   1.52   0.92   1.78   1.74   0.80
…         …                                …      …      …      …      …      …
M1        (M1 : f1,1, f1,2, …, f1,7)       1.97   1.59   0.95   1.78   1.87   0.79
M2        (M2 : f2,i; i ∈ ℤ+)              2.17   1.79   1.07   2.09   2.05   0.94
M3        (M3 : f3,i; i ∈ ℤ+)              2.12   1.76   1.05   2.04   2.01   0.92
M4        (M4 : f4,1)                      2.07   1.71   1.03   1.99   1.96   0.89
M4        (M4 : f4,2)                      2.05   1.70   1.02   1.98   1.94   0.89
M5        (M5 : f5,i; i ∈ ℤ+)              1.76   1.26   1.13   1.76   1.43   1.26
M6        (M6 : f6,i; i ∈ ℤ+)              1.44   0.87   0.87   1.00   0.50   0.50
M7        (M7 : f7,i; i ∈ ℤ+)              0.63   0.78   0.63   1.76   1.64   1.48
M8        (M8 : f8,i; i ∈ ℤ+)              1.95   1.61   0.97   1.88   1.85   0.84
M9        (M9 : f9,i; i ∈ ℤ+)              1.84   1.52   0.91   1.77   1.74   0.80
M10       (M10 : f10,i; i ∈ ℤ+)            0.99   0.96   0.92   0.92   0.82   0.86
is more trusted than the lip (M1 : f1,1) (another feature of the face recognition modality), which is supported by the trustworthy values in Table 2. These trustworthy values can be used to select the set of authentication factors, as described in the next section. The same type of argument can be made for the features of the remaining modalities. Again, an authentication modality can be considered as the combination of all of its features; for example, the face recognition modality M1 can be considered as the combination of M1 : f1,1; M1 : f1,2; …; M1 : f1,7, i.e., lip, eye, brow, cheek, etc. However, from Table 2 it can be seen that the trustworthy value of one of the features of M1, such as the eye (M1 : f1,2), is greater than that of (M1 : f1,1, f1,2, …, f1,7), i.e., the full face recognition modality (M1). Hence, in such cases it is better to choose the individual features that have higher trustworthy values than the original modality, due to their shorter execution time, better performance and higher trust.
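As a toy illustration of consulting Table 2 during selection, the sketch below looks up the factor with the highest combined trust for a given device/medium context. The dictionary, its keys, and the additive scoring rule are ours, not the paper's selection algorithm; the values are transcribed from Table 2:

```python
# Three factors transcribed from Table 2; scoring by summing the device and
# medium trustworthy values is a simplification of ours for illustration.
TABLE2 = {
    "M1:f1,1": {"FD": 2.02, "PD": 1.67, "HD": 1.00, "WI": 1.95, "WL": 1.92, "CL": 0.87},
    "M1:f1,2": {"FD": 2.15, "PD": 1.79, "HD": 1.07, "WI": 2.08, "WL": 2.04, "CL": 0.93},
    "M2:f2,i": {"FD": 2.17, "PD": 1.79, "HD": 1.07, "WI": 2.09, "WL": 2.05, "CL": 0.94},
}

def best_factor(device: str, medium: str) -> str:
    # pick the factor with the highest combined trust for the given context
    return max(TABLE2, key=lambda f: TABLE2[f][device] + TABLE2[f][medium])

print(best_factor("FD", "WI"))  # M2:f2,i (2.17 + 2.09 is the largest sum)
```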
5.1.2. Computing trustworthy values of different combinations of authentication factors

The previous subsection discussed the calculation of trustworthy values for individual features (authentication factors), incorporating their respective error rates. In contrast, the present subsection discusses the calculation of trustworthy values of combined authentication factors from their individual trustworthy values. In this approach, it is assumed that the influence of the individual trustworthy value of each authentication factor on the combined trustworthy value follows the standard normal distribution (N(0, 1)). The motivation behind this assumption is as follows. The error rates (EER, FRR, FAR, etc.) (Vielhauer, 2005) for combined authentication factors are not available in the existing literature, while the error rates of the individual features are already incorporated in their trustworthy values, as shown in the previous section. Hence, the influence of the individual error rates of each authentication factor must be propagated into the trustworthy value of the combined authentication factors, which is achieved through the above-mentioned assumption. Details of this approach are illustrated below.
Let us define two continuous random variables, Xdmpj and X̄dmpj, in the following ways:

Xdmpj ∼ N(0, 1): the influence of Tdmp(Mp : {fp,j}j) on Tdmp({Mp} : {fp,j}j); ∀d, m, p, j ∈ ℤ+, in a particular medium irrespective of devices, where Tdmp(Mp : {fp,j}j) and Tdmp({Mp} : {fp,j}j) can be expressed as Tdm(Mp : fp,1, fp,2, …) and Tdm(M1, M2, … : fp,1, fp,2, …) respectively.

X̄dmpj ∼ N(0, 1): the influence of T̄dmp(Mp : {fp,j}j) on T̄dmp({Mp} : {fp,j}j); ∀d, m, p, j ∈ ℤ+, in a particular device irrespective of media.

These notations can be explained as follows:

Tdmp(Mp : {fp,j}j): trustworthy value of Mp : {fp,j}j in a specific medium irrespective of devices.
T̄dmp(Mp : {fp,j}j): trustworthy value of Mp : {fp,j}j on a specific device irrespective of media.
Tdmp({Mp} : {fp,j}j): trustworthy value of {Mp} : {fp,j}j in a specific medium irrespective of devices.
T̄dmp({Mp} : {fp,j}j): trustworthy value of {Mp} : {fp,j}j on a specific device irrespective of media.
These are used to calculate the trustworthy values in Sub-sections 5.1.2.1 and 5.1.2.2.
5.1.2.1. Computing trustworthy values in a specific medium. In order to calculate the trustworthy value of a particular combination of authentication factors in a specific medium, the following two quantities need to be combined:

1. The influences of the trustworthy values of the authentication factors involved in the combination, irrespective of devices.
2. The influence of the individual trustworthy values of those authentication factors in that particular medium.

Detailed descriptions regarding the calculation of the influences of a particular combination of authentication factors in a specific medium are given as follows. Let us assume that the highest and lowest trustworthy values of an authentication factor ((Mpk : fpk,j)k∈ℤ+) on different
devices are bk∈ℤ+ and ak∈ℤ+ respectively. Now, the trustworthy value of {Mpk : fpk,j}k∈ℤ+ in any medium can be calculated as follows:

Tdm({Mpk} : {fpk,j}j)k
= E[a1 ≤ xdmp1j ≤ b1; a2 ≤ xdmp2j ≤ b2; …; an ≤ xdmpnj ≤ bn]
= ∫[a1,b1] ∫[a2,b2] … ∫[an,bn] xdmp1j ∗ xdmp2j ∗ … ∗ xdmpnj ∗ ψ(xdmp1j, xdmp2j, …, xdmpnj) dxdmp1j dxdmp2j … dxdmpnj
= ∫[a1,b1] ∫[a2,b2] … ∫[an,bn] xdmp1j ∗ xdmp2j ∗ … ∗ xdmpnj ∗ (1/√(2π)) e^(−xdmp1j²/2) ∗ (1/√(2π)) e^(−xdmp2j²/2) ∗ … ∗ (1/√(2π)) e^(−xdmpnj²/2) dxdmp1j dxdmp2j … dxdmpnj
Again, the random variables Xdmp1 j, Xdmp2 j, … , Xdmpn j are independent of each other as different authentication modalities along with their several features are independent (Vielhauer, 2005). For example, eye (M1 : f1,1) and lip (M1 : f1,2) (two different features of face recognition authentication modality (M1)) are mutually independent. Now, the contributions of all the features of an authentication modality have been considered for calculating its trustworthy value. Consequently, the above expression can be written as follows:
(
Tdm {M pk }: { f pk, j } j
) =∫ k
b1
a1
xdmp1 j
(
(
(
)
k
(
1 − 12 Xdmpn j2 1 − 12 xdmp2 j2 … e dxdmp1 jdxdmp2 j … dxdmpn j e 2π 2π
A sample scenario is presented in Fig. 2 that shows the influence of the trustworthy values of M1 : f1,1; M2 : f2,1. . .Mn : fn,1 for a device (irrespective of any medium) on the combined trustworthy value of {M1 : f1,1, M2 : f2,1. . .Mn : fn,1}. Now the above methodology for calculating the trustworthy value of combined authentication factors is described with
b2 bn 1 − 12 xdmp1 j2 1 − 12 xdmp2 j2 1 − 12 xdmpn j2 e dxdmp1 j ∫ xdmp2 j e dxdmp2 j … ∫ xdmpn j e dxdmpn j a2 an 2π 2π 2π
)
Next, to calculate the trustworthy value of {M pk }: { f pk, j } j in k a particular medium, the influence of the individual trustworthy values of {M pk : f pk, j }k ∈+ in that medium (which follows the standard normal distribution with the combined trustworthy value) has been added with Tdm {M pk }: { f pk, j } j as follows:
Tdm {M pk }: { f pk, j } j
where Tmi (M pk : f pk, j ) is the trustworthy value of ( M pk : f pk, j ) on the ith medium and “ E ” is the expectation operator.
= Tdm {M pk } : { f pk, j } j
the following example, which calculates the combined trustworthy value of {M1 : f1,1, M1 : f1,2} in WI.
)
k
)
k
+
1 2π
∑∑ k
j
e
−
1 {Tmi (M pk : fpk, j )}2 2
= E [a1 ≤ xdmp1 j ≤ b1; a2 ≤ xdmp2 j ≤ b2; … an ≤ xdmpn j ≤ bn ] +
1 2π
∑∑ k
j
e
−
1 {Tmi (M pk : fpk, j )}2 2
,
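The two-step computation above (a product of first-moment integrals over the feature bounds, followed by the Gaussian-weighted influence sum) can be sketched numerically. This is an illustrative sketch only: it uses the closed form $\int_a^b x\,\varphi(x)\,dx = \varphi(a) - \varphi(b)$ for the standard normal density, and with that density it does not reproduce the magnitudes reported in the paper's worked examples, which appear to use a different scaling; the function names are ours.

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def first_moment(a, b):
    """Closed form of the integral of x * phi(x) over [a, b]: phi(a) - phi(b)."""
    return phi(a) - phi(b)

def combined_trustworthy(bounds, individual_values):
    """Product-of-integrals term for the combined factors, plus the
    Gaussian-weighted influence of the individual trustworthy values."""
    product = 1.0
    for a, b in bounds:
        product *= first_moment(a, b)
    influence = sum(phi(t) for t in individual_values)
    return product + influence

# Bounds and individual values taken from the WI example (M1:f1,1 and M1:f1,2).
print(combined_trustworthy([(1.00, 2.02), (1.07, 2.15)], [1.95, 2.08]))
```

The structure of the computation (truncated first moments, then the $\frac{1}{\sqrt{2\pi}}\sum e^{-T^2/2}$ correction) mirrors the derivation above.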
Fig. 2 – Pictorial representation of calculation strategy of trustworthy values of combined authentication factors from individual trustworthy values in a particular medium.
$$
\begin{aligned}
T_{dm}\{M_1 : f_{1,1}, M_1 : f_{1,2}\}
&= E[1.00 \le x_{dm11} \le 2.02;\; 1.07 \le x_{dm12} \le 2.15] \\
&= \int_{1.00}^{2.02}\!\int_{1.07}^{2.15} x_{dm11}\, x_{dm12}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{dm11}^2}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{dm12}^2}\, dx_{dm11}\, dx_{dm12} \\
&= \left(\int_{1.00}^{2.02} x_{dm11}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{dm11}^2}\, dx_{dm11}\right) \left(\int_{1.07}^{2.15} x_{dm12}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{dm12}^2}\, dx_{dm12}\right) \approx 2.67.
\end{aligned}
$$

Next, to get the trustworthy value of $\{M_1 : f_{1,1}, M_1 : f_{1,2}\}$ in WI, the influence of the individual trustworthy values of $\{M_1 : f_{1,1}\}$ and $\{M_1 : f_{1,2}\}$ in WI (which follows the standard normal distribution with the combined trustworthy value) is incorporated as follows:

$$
\begin{aligned}
T_{dm}\{M_1 : f_{1,1}; M_1 : f_{1,2}\}
&= T_{dm}\{M_1 : f_{1,1}; M_1 : f_{1,2}\} + \frac{1}{\sqrt{2\pi}}\left(e^{-\frac{1}{2}T_{WI}(M_1 : f_{1,1})^2} + e^{-\frac{1}{2}T_{WI}(M_1 : f_{1,2})^2}\right) \\
&= \left(\int_{1.00}^{2.02} x_{dm11}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{dm11}^2}\, dx_{dm11}\right) \left(\int_{1.07}^{2.15} x_{dm12}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{dm12}^2}\, dx_{dm12}\right) + \left\{\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(1.95)^2} + \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(2.08)^2}\right\} \approx 2.78.
\end{aligned}
$$

Similarly, the combined trustworthy values of other authentication factors in a particular medium can be calculated; some of them are listed in Table 3.

5.1.2.2. Computing trustworthy values in a specific device. In order to calculate the trustworthy value of a particular combination of authentication factors in a specific device, the following two quantities need to be combined:

1. The influences of the trustworthy values of the authentication factors involved in the combination, irrespective of media.
2. The influences of the individual trustworthy values of these authentication factors in that particular device.

Detailed descriptions regarding the calculation of the influences of a particular combination of authentication factors in a specific device are as follows. Let us assume that the lowest and highest trustworthy values of an authentication factor $(M_{p_k} : f_{p_k,j})_{k\in\mathbb{Z}^+}$ on different media are $c_{k\in\mathbb{Z}^+}$ and $d_{k\in\mathbb{Z}^+}$ respectively. Now, the trustworthy value of $\{M_{p_k} : f_{p_k,j}\}_{k\in\mathbb{Z}^+}$ in any device can be calculated as follows:

$$
\begin{aligned}
T_{md}\big(\{M_{p_k}\} : \{f_{p_k,j}\}_j\big)_k
&= E\big[c_1 \le x_{mdp_1j} \le d_1;\; c_2 \le x_{mdp_2j} \le d_2;\; \ldots;\; c_n \le x_{mdp_nj} \le d_n\big] \\
&= \int_{c_1}^{d_1}\!\int_{c_2}^{d_2}\!\cdots\!\int_{c_n}^{d_n} x_{mdp_1j}\, x_{mdp_2j} \cdots x_{mdp_nj}\; \psi_{X_{mdp_1j},\ldots,X_{mdp_nj}}\big(x_{mdp_1j},\ldots,x_{mdp_nj}\big)\; dx_{mdp_1j}\, dx_{mdp_2j} \cdots dx_{mdp_nj} \\
&= \int_{c_1}^{d_1}\!\int_{c_2}^{d_2}\!\cdots\!\int_{c_n}^{d_n} x_{mdp_1j}\, x_{mdp_2j} \cdots x_{mdp_nj}\; \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{mdp_1j}^2}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{mdp_2j}^2} \cdots \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{mdp_nj}^2}\; dx_{mdp_1j}\, dx_{mdp_2j} \cdots dx_{mdp_nj}.
\end{aligned}
$$

Again, the random variables $X_{mdp_1j}, X_{mdp_2j}, \ldots, X_{mdp_nj}$ are independent of each other, as different authentication modalities and their features are independent. For example, eye ($M_1 : f_{1,1}$) and lip ($M_1 : f_{1,2}$), two different features of the face recognition authentication modality ($M_1$), are independent of each other. Since the contributions of all the features of an authentication modality are considered when calculating its trustworthy value, the above expression can be written as follows:
Table 3 – The calculation of trustworthy values of combined authentication factors from individual trustworthy values.

Features                       | FD   | PD   | HD   | WI   | WL   | CL
(M1:f1,1, M1:f1,2)             | 2.72 | 2.81 | 3.1  | 2.78 | 2.79 | 3.21
(M1:f1,1,f1,2,f1,3)            | 3.15 | 2.99 | 2.97 | 2.57 | 2.71 | 0.99
…                              | …    | …    | …    | …    | …    | …
(M1:f1,1,f1,2,f1,3,f1,4)       | 3.57 | 3.79 | 3.19 | 2.61 | 2.75 | 0.99
(M5:f5,1,f5,2)                 | 3.95 | 3.94 | 3.97 | 3.73 | 2.78 | 0.97
(M5:f5,1,f5,3)                 | 3.90 | 2.89 | 3.79 | 3.79 | 3.69 | 3.79
(M5:f5,1,f5,2)                 | 3.95 | 3.94 | 3.97 | 3.73 | 2.78 | 0.97
…                              | …    | …    | …    | …    | …    | …
$$
T_{md}\big(\{M_{p_k}\} : \{f_{p_k,j}\}_j\big)_k
= \int_{c_1}^{d_1} x_{mdp_1j}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{mdp_1j}^2}\, dx_{mdp_1j}
\int_{c_2}^{d_2} x_{mdp_2j}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{mdp_2j}^2}\, dx_{mdp_2j}
\cdots
\int_{c_n}^{d_n} x_{mdp_nj}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{mdp_nj}^2}\, dx_{mdp_nj}.
$$

Next, to calculate the trustworthy value of $\{M_{p_k}\} : \{f_{p_k,j}\}_j$ in a particular device, the influence of the individual trustworthy values of $\{M_{p_k} : f_{p_k,j}\}_{k\in\mathbb{Z}^+}$ in that device (which follows the standard normal distribution with the combined trustworthy value) is added to $T_{md}\big(\{M_{p_k}\} : \{f_{p_k,j}\}_j\big)_k$ as follows:

$$
\begin{aligned}
T_{md}\big(\{M_{p_k}\} : \{f_{p_k,j}\}_j\big)_k
&= T_{md}\big(\{M_{p_k}\} : \{f_{p_k,j}\}_j\big)_k + \frac{1}{\sqrt{2\pi}} \sum_k \sum_j e^{-\frac{1}{2}\{T_{d_i}(M_{p_k} : f_{p_k,j})\}^2} \\
&= E\big[c_1 \le x_{mdp_1j} \le d_1;\; c_2 \le x_{mdp_2j} \le d_2;\; \ldots;\; c_n \le x_{mdp_nj} \le d_n\big] + \frac{1}{\sqrt{2\pi}} \sum_k \sum_j e^{-\frac{1}{2}\{T_{d_i}(M_{p_k} : f_{p_k,j})\}^2},
\end{aligned}
$$

where $T_{d_i}(M_{p_k} : f_{p_k,j})$ is the trustworthy value of $(M_{p_k} : f_{p_k,j})$ on the $i$th device. For a better and clearer understanding, a sample scenario is pictorially represented in Fig. 3, which shows the influence of the individual trustworthy values of $M_1 : f_{1,1}; M_2 : f_{2,1}; \ldots; M_n : f_{n,1}$ for a device (irrespective of any medium) on the combined trustworthy value of $\{M_1 : f_{1,1}; M_2 : f_{2,1}; \ldots; M_n : f_{n,1}\}$.

The above methodology for calculating the trustworthy value of a combination of authentication factors in a specific device is described with the following example, which calculates the combined trustworthy value of $\{M_1 : f_{1,1}; M_1 : f_{1,2}\}$ in FD.

$$
\begin{aligned}
T_{md}\{M_1 : f_{1,1}, M_1 : f_{1,2}\}
&= E[0.87 \le x_{md11} \le 1.95;\; 0.93 \le x_{md12} \le 2.08] \\
&= \int_{0.87}^{1.95}\!\int_{0.93}^{2.08} x_{md11}\, x_{md12}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{md11}^2}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{md12}^2}\, dx_{md11}\, dx_{md12} \\
&= \left(\int_{0.87}^{1.95} x_{md11}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{md11}^2}\, dx_{md11}\right) \left(\int_{0.93}^{2.08} x_{md12}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{md12}^2}\, dx_{md12}\right).
\end{aligned}
$$

Next, to get the trustworthy value of $\{M_1 : f_{1,1}, M_1 : f_{1,2}\}$ in FD, the influence of the individual trustworthy values of $\{M_1 : f_{1,1}\}$ and $\{M_1 : f_{1,2}\}$ in FD (which follows the standard normal distribution with the combined trustworthy value) is incorporated as follows:
Fig. 3 – Pictorial representation of calculation strategy of trustworthy values of combined authentication factors from individual trustworthy values in a particular device.
$$
\begin{aligned}
T_{md}\{M_1 : f_{1,1}; M_1 : f_{1,2}\}
&= T_{md}\{M_1 : f_{1,1}; M_1 : f_{1,2}\} + \frac{1}{\sqrt{2\pi}}\left(e^{-\frac{1}{2}T_{FD}(M_1 : f_{1,1})^2} + e^{-\frac{1}{2}T_{FD}(M_1 : f_{1,2})^2}\right) \\
&= \left(\int_{0.87}^{1.95} x_{md11}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{md11}^2}\, dx_{md11}\right) \left(\int_{0.93}^{2.08} x_{md12}\, \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_{md12}^2}\, dx_{md12}\right) + \left\{\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(1.95)^2} + \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(2.08)^2}\right\} \approx 2.72.
\end{aligned}
$$
Similarly, the combined trustworthy values of other authentication factors in a particular device can be calculated; some of them are listed in Table 3. The overall strategy presented in the proposed work is summarized in Fig. 4.
5.1.3. Computing trustworthy values for dependent authentication factors

Some features are common to different authentication factors. For example, the feature "localization" appears in authentication factors such as iris recognition ($M_2$) and tightly coupled iris and face recognition ($M_3$). Moreover, the iris is a common feature of face recognition ($M_1$) and iris recognition ($M_2$) (see Table 1). Other common features can be identified in a similar manner, which leads us to formulate the trustworthy-evaluation procedure for dependent authentication factors separately. Extending the concept presented in Section 5.1.2, in the case of dependent authentication modalities the trustworthy values can be found using the following steps:
Fig. 4 – Overall adaptive-MFA strategies presented in the proposed work.
(i) Computing the trustworthy values of the authentication factor in a specific medium, irrespective of devices.
(ii) Computing the trustworthy values of the authentication factor in a specific device, irrespective of media.
5.1.3.1. Computing trustworthy values in a specific medium. The trustworthy value of an authentication factor ($\{M_{p_1}\} : \{f_{p_1,j}\}_j$), dependent on several features (some of which may be common with other authentication factors), in any medium (extending the concept presented in Subsection 5.1.2.1), i.e., $T_{dm}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big)$, can be calculated as follows:

$$
T_{dm}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big) = E\big[x_{dmp_1j} \mid a_2 \le x_{dmp_2j} \le b_2, \ldots, a_n \le x_{dmp_nj} \le b_n\big],
$$

where $\{X_{dmp_ij}\}_{i\in\mathbb{Z}^+} \sim N(0,1)$ (assumed to follow a standard normal distribution in the present work) are the continuous random variables (see Section 5.1.2). Next, to calculate the final trustworthy value of ($\{M_{p_1}\} : \{f_{p_1,j}\}_j$) in a particular medium, the influence of the individual trustworthy values of $\{M_{p_k} : f_{p_k,j}\}_{k\in\mathbb{Z}^+}$ in that medium (which follows the standard normal distribution with the combined trustworthy value) is added to $T_{dm}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big)$ as follows:

$$
\begin{aligned}
T_{dm}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big)
&= T_{dm}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big) + \frac{1}{\sqrt{2\pi}} \sum_k \sum_j e^{-\frac{1}{2}\{T_{m_i}(M_{p_k} : f_{p_k,j})\}^2} \\
&= E\big[x_{dmp_1j} \mid a_2 \le x_{dmp_2j} \le b_2, \ldots, a_n \le x_{dmp_nj} \le b_n\big] + \frac{1}{\sqrt{2\pi}} \sum_k \sum_j e^{-\frac{1}{2}\{T_{m_i}(M_{p_k} : f_{p_k,j})\}^2},
\end{aligned}
$$

where $T_{m_i}(M_{p_k} : f_{p_k,j})$ is the trustworthy value of $(M_{p_k} : f_{p_k,j})$ on the $i$th medium. The above methodology for calculating the trustworthy value of dependent authentication factors is now described with the following example, which calculates the combined trustworthy value of $\{M_1 : \{f_{p,j}\}_{p,j}\}$ in WI:

$$
\begin{aligned}
T_{dm}\{M_1 : \{f_{p,j}\}_{p,j}\}
&= E\big[x_{dm1j} \mid 1.00 \le x_{dm11} \le 2.02,\; 1.07 \le x_{dm12} \le 2.15,\; \ldots,\; 0.95 \le x_{dm17} \le 1.97\big] \\
&= E\big[x_{dm1j} \mid 1.00 \le x_{dm11} \le 2.02,\; 1.07 \le x_{dm21} \le 2.15,\; \ldots,\; 0.95 \le x_{dm17} \le 1.97\big] \approx 1.49.
\end{aligned}
$$

Here, the features $\{M_1 : f_{1,2}\}$ and $\{M_2 : f_{2,1}\}$ are very similar (see Table 1), and $X_{dm12}$, $X_{dm21}$ are the corresponding random variables in different media with a fixed device; hence $X_{dm12}$ has been replaced by $X_{dm21}$ in the above equation. The value of $T_{dm}\{M_1 : \{f_{p,j}\}_{p,j}\}$ has been calculated using the formula for conditional expectation given in the book of Feller (1971). Next, to get the trustworthy value of $\{M_1 : \{f_{p,j}\}_{p,j}\}$ (i.e., the whole modality) in WI, the influence of the individual trustworthy values of all the features (including the common features) in WI (which follows the standard normal distribution with the combined trustworthy value) is incorporated as follows:

$$
T_{dm}\{M_1 : \{f_{p,j}\}_{p,j}\} = T_{dm}\{M_1 : \{f_{p,j}\}_{p,j}\} + \frac{1}{\sqrt{2\pi}}\left(e^{-\frac{1}{2}T_{WI}(M_1 : f_{1,1})^2} + e^{-\frac{1}{2}T_{WI}(M_1 : f_{1,2})^2} + \cdots + e^{-\frac{1}{2}T_{WI}(M_1 : f_{1,7})^2} + \cdots\right) = 1.7756 \approx 1.78,
$$

which supports the value presented in Table 2.

5.1.3.2. Computing trustworthy values in a specific device. The trustworthy value of an authentication factor ($\{M_{p_1}\} : \{f_{p_1,j}\}_j$), dependent on several features (some of which may be common with other authentication factors), in any device (extending the concept presented in Subsection 5.1.2.2), i.e., $T_{md}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big)$, can be calculated as follows:

$$
T_{md}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big) = E\big[x_{mdp_1j} \mid c_2 \le x_{mdp_2j} \le d_2, \ldots, c_n \le x_{mdp_nj} \le d_n\big],
$$

where $\{X_{mdp_ij}\}_{i\in\mathbb{Z}^+} \sim N(0,1)$ are the continuous random variables (see Section 5.1.2). Next, to calculate the final trustworthy value of ($\{M_{p_1}\} : \{f_{p_1,j}\}_j$) in a particular device, the influence of the individual trustworthy values of $\{M_{p_1} : f_{p_1,j}\}_{k\in\mathbb{Z}^+}$ in that device (which follows the standard normal distribution with the combined trustworthy value) is added to $T_{md}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big)$ as follows:

$$
\begin{aligned}
T_{md}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big)
&= T_{md}\big(\{M_{p_1}\} : \{f_{p_1,j}\}_j\big) + \frac{1}{\sqrt{2\pi}} \sum_k \sum_j e^{-\frac{1}{2}\{T_{d_i}(M_{p_k} : f_{p_k,j})\}^2} \\
&= E\big[x_{mdp_1j} \mid c_2 \le x_{mdp_2j} \le d_2, \ldots, c_n \le x_{mdp_nj} \le d_n\big] + \frac{1}{\sqrt{2\pi}} \sum_k \sum_j e^{-\frac{1}{2}\{T_{d_i}(M_{p_k} : f_{p_k,j})\}^2},
\end{aligned}
$$

where $T_{d_i}(M_{p_k} : f_{p_k,j})$ is the trustworthy value of $(M_{p_k} : f_{p_k,j})$ on the $i$th device. The above methodology for evaluating the trustworthy value of dependent authentication factors is now described with the following example, which calculates the combined trustworthy value of $\{M_1 : \{f_{p,j}\}_{p,j}\}$ in FD:

$$
\begin{aligned}
T_{md}\{M_1 : \{f_{p,j}\}_{p,j}\}
&= E\big[x_{md1j} \mid 0.87 \le x_{md11} \le 1.95,\; 0.93 \le x_{md12} \le 2.08,\; \ldots,\; 0.79 \le x_{md17} \le 1.78\big] \\
&= E\big[x_{md1j} \mid 0.87 \le x_{md11} \le 1.95,\; 0.93 \le x_{md21} \le 2.08,\; \ldots,\; 0.79 \le x_{md17} \le 1.78\big] \approx 1.68.
\end{aligned}
$$

The value of $T_{md}\{M_1 : \{f_{p,j}\}_{p,j}\}$ has been calculated using the conditional-expectation formula presented in the book of Feller (1971); as before, the features $\{M_1 : f_{1,2}\}$ and $\{M_2 : f_{2,1}\}$ are very similar (see Table 1), so $X_{md12}$ has been replaced by $X_{md21}$. Next, to get the trustworthy value of $\{M_1 : \{f_{p,j}\}_{p,j}\}$ in FD, the influence of the individual trustworthy values of all the features (including the common features) in FD (which follows the standard normal distribution with the combined trustworthy value) is incorporated as follows:

$$
T_{md}\{M_1 : \{f_{p,j}\}_{p,j}\} = T_{md}\{M_1 : \{f_{p,j}\}_{p,j}\} + \frac{1}{\sqrt{2\pi}}\left(e^{-\frac{1}{2}T_{FD}(M_1 : f_{1,1})^2} + e^{-\frac{1}{2}T_{FD}(M_1 : f_{1,2})^2} + \cdots + e^{-\frac{1}{2}T_{FD}(M_1 : f_{1,7})^2} + \cdots\right) = 1.9657 \approx 1.97,
$$

which supports the value presented in Table 2.
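When a closed form for the conditional expectation (Feller, 1971) is not at hand, it can be approximated numerically. The sketch below is a hypothetical illustration, not the paper's procedure: it assumes a bivariate standard normal pair with correlation `rho` standing in for a pair of dependent features, and estimates $E[X_1 \mid a_2 \le X_2 \le b_2]$ by rejection sampling; the correlation value, the sample size, and the function name are our assumptions.

```python
import math
import random

def conditional_mean_mc(rho, a2, b2, n=200_000, seed=7):
    """Monte Carlo estimate of E[X1 | a2 <= X2 <= b2] for a bivariate
    standard normal pair with correlation rho, via rejection sampling
    on the conditioning event for the common feature X2."""
    rng = random.Random(seed)
    total, kept = 0.0, 0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x2 = z2
        x1 = rho * z2 + math.sqrt(1.0 - rho * rho) * z1  # correlated with x2
        if a2 <= x2 <= b2:
            total += x1
            kept += 1
    return total / kept

# Illustrative: a positively correlated common feature shifts the
# conditional mean of the dependent feature upward from zero.
print(conditional_mean_mc(0.8, 1.07, 2.15))
```

Note that if the features were fully independent, the conditional mean would collapse to the unconditional mean (zero for a standard normal); the dependence through the common feature is what produces a nonzero value.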
5.2. Selection of different authentication factors
This section presents the procedure for adaptively selecting sets of authentication factors capable of securely authenticating different users across various devices, media, and surrounding conditions. The selected set should have a high total trustworthy value and performance with a small number of authentication factors, ensuring a more secure authentication process with less user involvement. Moreover, the selected set should differ from the previously selected set when the device, medium, and surrounding conditions are the same. This criterion significantly reduces the chance of predictable patterns appearing in the selection of authentication factors and hence reduces the chance of attackers compromising the authentication process. To achieve these goals, a multi-objective nonlinear quadratic optimization problem with probabilistic constraints has been formulated. The effects of surrounding conditions are also considered in the selection, since some authentication factors depend on light (for example, face recognition), sound (for example, voice recognition), etc. Detailed descriptions of the surrounding conditions are given in Table 4. To incorporate this realistic situation, a set of surroundings (S) is constructed as S = {Light (L), Sound and background noise (N), Motion (M), …}. Keeping this in mind, the abovementioned influences have been incorporated as constraints of the proposed multi-objective quadratic optimization model with probabilistic objective, which is given as follows:
Objectives:

$$
\text{Maximize } \frac{\big(W_{D_p} \cdot D_p + \sum_j W_{M_j} \cdot Me_j\big) \cdot T_{D_p}(\{M_k\} : \{f_{k,i}\}) \cdot \sum_j T_{Me_j}(\{M_k\} : \{f_{k,i}\})}{\overline{(\{M_k\} : \{f_{k,i}\})}} \tag{7}
$$

$$
\text{Minimize } P(X_t \mid X_{t-1}, X_{t-2}, X_{t-3}, \ldots, X_1) \tag{8}
$$

Subject to:
Table 4 – Effects of different surrounding conditions on the proposed selection procedure.

Light (L)
  Acceptable range: The luminance of light varies with indoor and outdoor conditions; common light levels fall in the range of 100–1000 lux. Light is generally categorized into the human-visible and human-invisible ranges. The wavelength of the visible range varies from 380 nm to 750 nm, and a typical human eye responds to this range. Violet has a wavelength of 380–450 nm, blue 450–495 nm, green 496–570 nm, and yellow 570–590 nm. Orange (590–620 nm) and red (620–750 nm) have longer wavelengths than the other visible colors. The wavelength nearest to visible light is infrared, which lies in the range 750 nm–1 mm; most thermal radiation emitted by objects near room temperature is infrared. Equation (11) in the adaptive selection approach considers the light condition, and that constraint is selected if the luminance is greater than a1 lux; a possible value of a1 is 100. Similar constraints can be generated for different visible color ranges to make the selection of authentication factors respond more adaptively to the surroundings.
  Affected authentication modalities: M1, M2, M3, M14, and M15

Sound and background noise (N)
  Acceptable range: The reasonable frequency range of audible sound is 20 Hz–20 kHz; sound below 20 Hz or above 20 kHz cannot be heard by humans. In the adaptive selection approach, Equations (11) and (12) incorporate the effect of sound in selecting the appropriate authentication modalities. The minimum and maximum bounds are denoted a2 Hz and a3 Hz respectively; possible values are a2 = 20 Hz and a3 = 20 kHz. Background noise is one of the surrounding factors that determines whether the sound quality is comfortable for the human ear; in general, the threshold level of hearing is 30 dB–90 dB. Noise below 10 dB is considered very faint, faint noise falls between 10 and 35 dB, quiet-to-moderate noise falls in the 36–70 dB range, and any noise level within 70–90 dB is considered loud. In the adaptive selection procedure, Equations (11)–(13) incorporate the effect of background noise; the range is given as a6 ≤ N ≤ a7 and can be set for different humans, with typical values a6 = 30 and a7 = 70.
  Affected authentication modalities: M9

Motion (M)
  Acceptable range: The authentication factors also depend on the motion of humans, but a detailed analysis of the effect of motion on different authentication factors is not found in the literature. This surrounding factor is incorporated in Equations (11)–(13) of the adaptive selection approach; the minimum and maximum allowed values of motion can be adjusted depending on the system requirements and applications.
  Affected authentication modalities: M1, M2, M3, M9, M14, and M15
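The dispatch from measured surroundings to the applicable constraint family can be sketched as below. The mapping shown is one plausible reading of Table 4, not the paper's exact rule set, and the threshold defaults are the table's suggested values (a1 = 100 lux, a2 = 20 Hz, a3 = 20 kHz, a6 = 30 dB, a7 = 70 dB); the function name is ours.

```python
def select_constraint(lux, noise_db, sound_hz, motion_ok,
                      a1=100, a2=20, a3=20_000, a6=30, a7=70):
    """Pick which constraint family applies, based on the
    surrounding-condition ranges of Table 4. Thresholds a1..a7
    follow the table's naming; the dispatch order is illustrative."""
    light_ok = lux >= a1
    sound_ok = (a2 <= sound_hz <= a3) and (a6 <= noise_db <= a7)
    if light_ok and sound_ok and motion_ok:
        return "Eq. (9): all modalities usable"
    if sound_ok and motion_ok:
        return "Eq. (10): exclude light-dependent modalities (k != 1,2,3,14,15)"
    return "Eq. (11): exclude light- and sound-dependent modalities (k != 1,2,3,9,14,15)"

# A bright, quiet, stationary setting keeps every modality available.
print(select_constraint(lux=500, noise_db=50, sound_hz=1_000, motion_ok=True))
```

In a deployment, these readings would come from the light and noise sensors described in Section 7.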
Fig. 5 – Surface plots of the constraints for adaptive selection approach.
$$
\big(D_l - T_{D_l}(\{M_k\} : \{f_{k,i}\}_i)_k\big)^2 + \sum_j \big(Me_j - T_{Me_j}(\{M_k\} : \{f_{k,i}\}_i)_k\big)^2 \le U_{max}; \tag{9}
$$

where (a1 lux ≤ L ≤ a2 lux; a3 Hz ≤ S ≤ a4 Hz; motion within a sustainable range; {a_i}_{i∈N}) (cf. Table 4)

$$
\big(D_l - T_{D_l}(\{M_k\} : \{f_{k,i}\}_i)_{k \ne 1,2,3,14,15}\big)^2 + \sum_j \big(Me_j - T_{Me_j}(\{M_k\} : \{f_{k,i}\}_i)_{k \ne 1,2,3,14,15}\big)^2 \le U_{max}; \tag{10}
$$

where (only a3 Hz ≤ S ≤ a4 Hz; motion within a sustainable range) (cf. Table 4)

…

$$
\big(D_l - T_{D_l}(\{M_k\} : \{f_{k,i}\}_i)_{k \ne 1,2,3,9,14,15}\big)^2 + \sum_j \big(Me_j - T_{Me_j}(\{M_k\} : \{f_{k,i}\}_i)_{k \ne 1,2,3,9,14,15}\big)^2 \le U_{max}; \tag{11}
$$

(where only motion is not within a sustainable range) (cf. Table 4)

…

The multi-objective quadratic programming problem with probabilistic objective function can be used for selecting a set of authentication factors for authenticating users. The proposed problem has two objective functions: the first states that the selected set of authentication factors should have higher trustworthy values with a smaller number of authentication factors, while the second tries to reduce the probability of selecting the same set of authentication factors repeatedly. Here, $T_{D_l}(\{M_k\} : \{f_{k,i}\})$ and $T_{Me_j}(\{M_k\} : \{f_{k,i}\})$ are the trustworthy values of $(\{M_k\} : \{f_{k,i}\})$ in the $l$th ($l = 1, 2, 3$) device and the $j$th ($j = 1, 2, 3$) medium respectively. Moreover, $\overline{(\{M_k\} : \{f_{k,i}\})}$ is the number of authentication factors in the selected set. $W_{D_i}$ and $W_{M_j}$ are the weights of the $i$th device and $j$th medium respectively, defined as follows:

$$
W_{D_i} = \frac{\text{Total trustworthy value of different features of the activated modalities of the } i\text{th device}}{\text{Total trustworthy value of all the features of different modalities in the } i\text{th device}}
$$

$$
W_{M_j} = \frac{\text{Total trustworthy value of different features of the activated modalities of the } j\text{th medium}}{\text{Total trustworthy value of all the features of different modalities in the } j\text{th medium}}
$$

Again, according to the present study, 3.97 is the highest trustworthy value for any authentication factor (see Table 3), so in Equations (9)–(11) the value of $U_{max}$ is 3.97. The second objective (Equation (8)) makes the set of authentication factors unpredictable to an attacker. Here, $\{X_t\}$ is the continuous random variable defined as

$$
X_t = \frac{\big\{\{T_{D_l}(\{M_k\} : \{f_{k,i}\}_i)_k\} * \{T_{Me_j}(\{M_k\} : \{f_{k,i}\}_i)_k\}\big\}_t}{\overline{(\{M_k\} : \{f_{k,i}\})}_t},
$$

and its $(t-1)$th occurrence can be defined as

$$
X_{t-1} = \frac{\big\{\{T_{D_l}(\{M_k\} : \{f_{k,i}\}_i)_k\} * \{T_{Me_j}(\{M_k\} : \{f_{k,i}\}_i)_k\}\big\}_{t-1}}{\overline{(\{M_k\} : \{f_{k,i}\})}_{t-1}}.
$$

Here, $X_t$ is the $t$th observation of the random variable $\{X_t\}$. Hence, to achieve the goal of non-repetitive selection of authentication features (objective 2, i.e., Equation (8)), $P(X_t \mid X_{t-1}, X_{t-2}, X_{t-3}, \ldots, X_1)$ should be minimized. In the present work, the distribution of $\{X_t\}$ has been taken as exponential (Balakrishnan, 1996) for simplicity of calculation and, as a consequence, the second objective, Minimize $P(X_t \mid X_{t-1}, X_{t-2}, X_{t-3}, \ldots, X_1)$, reduces to Minimize $P(X_t \mid X_{t-1})$ due to the memoryless (Markov) property of the exponential distribution (Balakrishnan, 1996). Fig. 5A shows the surface plots of the constraints, considering the device FD, the media WI and WL, and M1, M2, M3 and M5 (see Table 2) as the usable set of authentication modalities along with their various features. It has
been assumed that all the surrounding environmental conditions (L, N, M, etc.) are favorable to the user, and that a single device and multiple media are employed. Consequently, Equation (9) will be selected; the following are some of its instantiated forms: (FD − 1.97)² + (WI − 1.78)² + (WL − 1.87)² ≤ 3.97; (FD − 2.72)² + (WI − 2.78)² + (WL − 2.79)² ≤ 3.97; (FD − 3.95)² + (WI − 3.73)² + (WL − 2.78)² ≤ 3.97; etc. Here, 1.97, 2.72, and 3.95 are the trustworthy values of M1, (M1 : f1,1, f1,2), and (M5 : f5,1, f5,2) in FD respectively; 1.78, 2.78, and 3.73 are the corresponding values in WI; and 1.87, 2.79, and 2.78 are the corresponding values in WL (from Table 3). On the other hand, Fig. 5B shows the surface plot of the abovementioned constraints with all possible combinations of devices, media, and surrounding environments; it is a bird's-eye-view representation of the solution space for the proposed selection algorithm. The constraints of the proposed multi-objective optimization problem impose adaptive selection by considering the influence of the surrounding ambience, such as sound, light, noise, etc. To make the proposed selection procedure adaptive, the environmental influences (i.e., L, M, N, etc.) have been incorporated as follows: (i) if the surrounding light condition lies inside the region of visibility (given in Table 4), the background noise is not too high, and surrounding motion does not affect the selection procedure, then Equation (9) is selected as the constraint for the selection mechanism; (ii) similarly, the other constraints depend on the corresponding surrounding conditions.
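The instantiated sphere constraints above can be checked mechanically for a candidate point (FD, WI, WL). This small sketch uses the three constraint centers and U_max = 3.97 from the example; the test point is our own choosing, and the function name is illustrative.

```python
def feasible(point, centers, u_max=3.97):
    """Check the quadratic constraints of Eq. (9) at a candidate point
    (FD, WI, WL): each factor combination contributes one sphere-type
    inequality centered at its trustworthy values."""
    fd, wi, wl = point
    return all(
        (fd - c_fd) ** 2 + (wi - c_wi) ** 2 + (wl - c_wl) ** 2 <= u_max
        for c_fd, c_wi, c_wl in centers
    )

# Centers from the worked example: M1, (M1:f1,1,f1,2), and (M5:f5,1,f5,2).
centers = [(1.97, 1.78, 1.87), (2.72, 2.78, 2.79), (3.95, 3.73, 2.78)]
print(feasible((2.8, 2.8, 2.5), centers))  # prints True
```

Geometrically, a point is feasible only when it lies inside the intersection of all the spheres, which is the solution region visualized in Fig. 5.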
6. Simulation results
This section demonstrates the simulation results of the proposed adaptive-MFA. Different results are listed in Table 5, whose first four columns show the nature of the devices, media, surroundings, and the usable set of authentication modalities in the device. The next three columns show the sets selected by the proposed method, random selection, and optimal cost selection. For example, the first row of Table 5 shows that for a fixed device (with all 15 modalities activated) in a wired medium, the set of authentication modalities selected by the proposed method is (M4; M5 : f5,1, f5,2), whereas (M1, M4, M13) and (M5, M13) are the sets selected by the random and optimal cost selection methods. The other rows of the table can be interpreted in a similar way. In most cases, the sets selected by the optimal and random selection methods repeat, while the set selected by the proposed method does not repeat, making the selection unpredictable to attackers; this is a crucial advantage of the proposed strategy over its competitors. The grouped bar chart in Fig. 6 compares the trustworthy values of the selected sets of authentication factors under the proposed selection procedure, random selection, and optimal selection. Random selection means selecting the authentication factors randomly; optimal selection considers only the objectives (Equations (7) and (8)) of the multi-objective formulation. Fig. 7 shows the effect of trustworthy values in different device and media combinations for the three selection strategies; it is clear from the figure that adaptive selection outperforms the other two approaches in all cases. In the pictorial representation of the simulation results (see Fig. 7), the first layer shows the different time-triggering events at which the adaptive selection approach runs. Different combinations of devices and media are shown in the next two layers. The fourth layer shows the different surrounding conditions, and the possible set of authentication factors is listed in the last layer. For example, at triggering time t1, FD and WI are selected and surrounding
Table 5 – Sample outcomes of the proposed multi-factor authentication method.

Device | Media | Surrounding conditions | Set of usable authentication factors | Proposed system | Random selection | Optimal selection
FD | WI | L, N, M are not good | M1, M2, …, M15 | M4; M5 : f5,1, f5,2 | M1, M4, M13 | M5, M13
FD | WI | L, M, N good | M1, …, M5, M6, M8, M9, …, M15 | M2; M4; M5 : f5,2 | M1, M4, M13 | M1, M13
FD | WI | L, M, N good | M1, M2, …, M5, M9, …, M15 | M1; M2; M5 : f5,1, f5,2 | M1, M5, M13 | M1, M4,
FD | WI | L, M, N good | M1, M2, …, M5, M6, M7, M9, …, M15 | M1 : f1,2; M5 : f5,1, f5,2 | M1, M2, M13 | M1, M4, M13
FD | WI | L, M, N good | M1, M2, …, M5, M9, …, M15 | M2; M3; M5 : f5,2 | M1, M4, M13 | M1, M4, M13
… | … | … | … | … | … | …
FD | WL | L, M, N good | M1, M2, …, M15 | M1 : f1,2; M5 : f5,1, f5,2 | M1, M6, M13 | M1, M4, M13
FD | WL | L, M, N good | M1, M2, …, M5, M6, M8, M9, …, M15 | M1 : f1,2; M2; M5 : f5,2 | M1, M6, M13 | M1, M4, M13
… | … | … | … | … | … | …
FD | CL | L, M, N not good | M1, M2, …, M15 | M5 : f5,1, f5,2 | M1, M4, M13 | M1, M4, M13
FD | CL | L, M, N not good | M1, M2, …, M5, M6, M8, M9, …, M15 | M5, M13 | M1, M2, M3 | M1, M4, M13
… | … | … | … | … | … | …
HD | WI | L, M, N not good | M1, M2, …, M15 | M4; M5 : f5,1, f5,2 | M1, M4, M13 | M1, M2, M13
HD | WI | L, M, N good | M1, M2, …, M5, M6, M8, M9, …, M15 | M2, M4, f5,2 | M1, M4, M13 | M1, M4, M13
… | … | … | … | … | … | …
HD | WL | L, M, N good | M1, M2, …, M15 | M1 : f1,2; M5 : f5,1, f5,2 | M1, M4, M13 | M1, M2, M5
HD | WL | L, M, N good | M1, M2, …, M5, M6, M8, M9, …, M15 | M1 : f1,2; M2; M5 : f5,2 | M1, M6, M13 | M1, M4, M13
… | … | … | … | … | … | …
Fig. 6 – Comparisons among the proposed adaptive selection, optimal selection, and the random selection approaches.
conditions (L, M, and N) are not good. Then M3; M4 : f4,1, f4,2 is selected by the adaptive selection approach. In other time triggering events with different combinations of device, medium, and surrounding condition, the adaptive selection approach provides a different set of authentication factors as solutions.
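The interplay of the two objectives during selection can be illustrated with a deliberately simplified scalarization (our construction, not the paper's optimization model): trustworthy value per selected factor stands in for objective (7), and a 0/1 penalty on repeating the previous selection stands in for the memoryless form of objective (8). The trust values and the weight `lam` are made up for illustration.

```python
def score(candidate, history, trust, lam=1.0):
    """Scalarized stand-in for the two objectives of one candidate set:
    reward trustworthy value per selected factor, penalize an exact
    repeat of the previous selection (memoryless conditioning on t-1)."""
    trust_per_factor = sum(trust[f] for f in candidate) / len(candidate)
    repeated = bool(history) and set(candidate) == set(history[-1])
    return trust_per_factor - lam * (1.0 if repeated else 0.0)

# Hypothetical trust values; the previous selection was (M4, M5).
trust = {"M1": 1.97, "M2": 2.40, "M4": 3.10, "M5": 3.95}
history = [("M4", "M5")]
print(score(("M4", "M5"), history, trust))  # repeating the last set is penalized
print(score(("M2", "M5"), history, trust))  # a fresh set keeps its full score
```

Even though (M4, M5) has the higher raw trust, the penalty pushes the sketch toward the fresh set, which is the non-repetition behavior the simulation results attribute to the proposed method.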
7. Implementation of the proposed adaptive-MFA framework

The prototype of the proposed adaptive-MFA framework has been implemented as a client-server application in the devices
Fig. 7 – Sample scenarios of adaptive selection of the set of authentication factors in different device, media and surrounding conditions with varying time.
Table 6 – The devices required for implementing the proposed framework.

Authentication server (storing the database, calculating trustworthy scores and feature values, and making the adaptive selection decision):
  Dell PowerEdge R710 — RAM: 64 GB; OS: Windows Server 2012; Processors: Two Intel Xeon X5650 with 12 cores at 2.67 GHz; Database: MySQL 5.7.

Client machine:
  Dell XPS laptop — RAM: 8 GB; OS: Windows 10 Professional 64-bit; Processor: Intel Core i7-3517U with 4 cores at 2.4 GHz.

Fingerprint device:
  DigitalPersona U.are.U 4000 sensor — Cross Match fingerprint driver to access the fingerprint device; OS: Windows 10 Professional and Windows Server 2012.

Web camera:
  Any Windows-supported web camera — both integrated and external web cameras.

Recording and playback device:
  Any Windows-supported microphone and speakers — to record the voice and play back the recorded audio.

Keyboard:
  Any Windows-supported PS/2 or USB keyboard — to record the keystroke information of the user.

Mobile phone:
  Any mobile phone capable of receiving SMS for a temporary passcode.
and software listed in Table 6. In addition to the server and client machines, Table 6 also lists the devices required for capturing biometric data: a fingerprint device, web camera, recording and playback device, keyboard, and a cellular phone to receive a passcode through SMS.

Experimental setup and overview: For this implementation, we used a Dell PowerEdge R710 (64 GB RAM, two Intel Xeon X5650 processors with 12 cores at 2.67 GHz) as the server and two clients: one within the same Dell PowerEdge R710 availability zone and the other hosted at The University of Memphis, TN, USA, with an Intel Core i7-3517U (2.4 GHz) four-core processor (see Table 6). We refer to the first as the LAN (local-area network) setting and the second as the WAN (wide-area network) setting. In the LAN setting, we used the Dell PowerEdge R710 internal IP address. All queries were made over Transport Layer Security (TLS). The client device supports all the required hardware (integrated or external web camera, integrated or external microphone, PS/2 or USB keyboard, fingerprint reader) and the corresponding device drivers. This hardware was able to extract all the required biometric and non-biometric user data using the prototype adaptive-MFA client application (implemented in Visual C#.NET).

The user first provides his email address to check whether he has an existing account in the database. The server-side application (running on the Dell PowerEdge R710, as shown in Table 6) responds with whether the account exists. If the user account does not exist, the client application starts the registration phase by capturing biometric, non-biometric, and surrounding information. Regarding biometric data, the user needs to provide multiple samples of each biometric to better capture the variability of the user data. For face biometrics, the user chooses an available web camera (integrated or external) to capture the face; the client application captures ten face images as part of the registration data, and Emgu CV-3.1.0¹ is used to process the face image and crop the portion required for face recognition. For fingerprint biometrics, the application shows the status of the connected fingerprint device (U.are.U 4000) and prompts the user to place his thumb three times to capture three different fingerprint images. For voice biometrics, the user records his voice three times for a given phrase using the connected microphone; the user can hear the recorded voice sample before saving the voice information. For keystroke biometrics, the user types a given phrase (first and last name) at least five times to capture his typing patterns. The screenshots capturing the biometric inputs are shown in Fig. 8 in Section 7.1. The ambient light (through an Arduino-compatible mini luminance light sensor) and noise (Diymall microphone noise decibel sound sensor) information is captured every 100 milliseconds to reflect real-time surrounding information; details of capturing the surrounding information are discussed in Section 7.1.1. The passwords and pass-phrases are hashed using SHA-512 before transmission to the server. All captured user information is stored as a JSON² object, and the client application creates a secure connection (HTTPS) to the server to send the information (data-in-motion). The client application then deletes all the captured user information from the client machine. The server machine extracts the JSON object upon arrival of the client request. Feature extraction phases of the captured user data are implemented using
1. http://www.emgu.com/wiki/index.php/Main_Page.
2. JSON: JavaScript Object Notation.
computers & security 63 (2016) 85–116
Fig. 8 – A. Tab to capture user face information for the prototype application of the proposed MFA framework. B. Capturing user voice information. C. Capturing user fingerprint information. D. Capturing user keystroke related information.
a multi-threaded architecture for faster processing of the data. This design can handle multiple registration and authentication requests from different users in parallel with low latency. The extracted features are then stored in the database (MySQL 5.7) on the same server machine. User information in the database is hashed using B-Crypt on the server side (data-at-rest) to make the data resistant to brute-force attacks. The response on completion of processing is sent to the client device through the secure (https) connection. If the user account does exist, the client application starts the authentication phase. The detailed workflow of the
3. B-Crypt generator URL: https://www.bcrypt-generator.com/.
authentication phase is shown in Fig. 9. According to Fig. 9, the client application sends the sensor data to reflect the existing surrounding conditions of the client device. These data, along with device and medium information, are used by the server application to provide a better set of authentication factors for the selection algorithm. The selected authentication factors are then sent to the client device to challenge the user to verify his identity. The client application then captures the challenged authentication factors and sends them to the server over a secure connection (https) for verification. The server, upon receiving this information, extracts the required features from the response object and compares those values with the stored feature values (in the database). The outcome of the matching procedure is then sent back to the client device to display the authentication decision.
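As a rough illustration of this challenge–verify round trip on the server side, the following is a minimal sketch. All names, the placeholder similarity metric, and the threshold are assumptions for illustration, not the paper's actual implementation:

```python
import json

def similarity(candidate, stored):
    # Placeholder metric standing in for the real per-modality matchers:
    # an exact feature match scores 1.0, anything else 0.0.
    return 1.0 if candidate == stored else 0.0

def verify_response(response_json, stored_features, threshold=0.8):
    """Compare each challenged factor in the client's JSON response
    against the feature values stored in the database; any factor that
    is missing or scores below the threshold rejects the attempt."""
    response = json.loads(response_json)
    for factor, value in response["factors"].items():
        stored = stored_features.get(factor)
        if stored is None or similarity(value, stored) < threshold:
            return False
    return True
```

A real deployment would replace `similarity` with the modality-specific matchers (face, voice, fingerprint, keystroke) and a B-Crypt check for password factors.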
Fig. 9 – Overall process of user authentication of the proposed MFA system.
7.1. Client-side application
In the present prototype application of the MFA framework, all fifteen different authentication modalities (see Section 5.1.1 and Table 1) are used to create user profiles. Moreover, we have used two different variations of passwords, namely a passphrase and security questions, in the present prototype application. The client application is developed on the Visual C#.NET platform with .NET Framework 4.5. The graphical user interfaces (GUIs) of the client application are shown in Fig. 10A and B. The password-related information and the information regarding SMS are shown in Fig. 10. The window shown in Fig. 10A captures the password, passphrase and security question, like traditional authentication systems. The window shown in Fig. 10B captures the phone number of the user for sending the temporary password code. After entering the required information, the user needs to press the "Confirm" button to save the entered information. The capture of the various necessary biometric data for user registration (see Section 5.1.1) is shown in Fig. 8. In the face capture window (see Fig. 8A), the face is first detected from the web camera feed and then the detected face is extracted from the original image. For the voice tab (Fig. 8B), the user is asked to say a given phrase in a normal tone. The user can later play back
Fig. 10 – A. Implemented client application for adaptive MFA to capture password. B. Tab to capture SMS information.
the recorded sound. A visualization of the captured audio signal is also provided for better feedback to the user. The fingerprint tab (Fig. 8C) captures the image of the user's thumb. It provides appropriate feedback to the user regarding when to put the finger on the device as well as when to remove it. In the keystroke tab (Fig. 8D), the user is asked to type a given phrase at a normal pace, and the required keystroke information is captured as the participant types the characters. After successful capture of all the necessary user information, all data are stored in JSON format on the client side and are sent to the server for feature extraction and feature storage into the MySQL 5.7 database.
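Keystroke timings of this kind are typically reduced to dwell times (how long each key is held) and flight times (the gap between releasing one key and pressing the next). A minimal sketch of such feature extraction, where the event format and field names are assumptions rather than the prototype's actual code:

```python
def keystroke_features(events):
    """events: list of (key, press_ms, release_ms) tuples in typing order.

    Returns dwell times (release - press per key) and flight times
    (next press - previous release between consecutive keys)."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return {"dwell": dwell, "flight": flight}
```

These per-phrase timing vectors are the kind of values that would then be serialized to JSON and stored as feature entries on the server.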
7.1.1. Capturing surrounding conditions
The light, noise and motion related data are captured through different sensors. The proposed adaptive-MFA framework uses the following sensors:
• An Arduino-compatible mini luminance light sensor (such as the TSL2550 or TSL45313) for intelligent ambient light sensing; this family of parts covers digital ambient light, color, proximity detection, light-to-digital (LTD), light-to-voltage (LTV) and light-to-frequency (LTF) sensing for a broad range of applications, including display management for display-based products.
• A Diymall microphone noise decibel sound sensor module for ambient noise measurement.
• PIR motion sensors, i.e., passive infrared (pyroelectric) IR motion sensors, to identify user motion within the sensors' range, as discussed in Section 5.2.
The light sensor detects the surrounding luminance and provides a digital output between 0 and 1023; its sensitivity can be adjusted, and it provides stable performance. The noise sensor detects surrounding noise and also provides output on a 0–1023 scale. The light and noise sensor readings are calibrated to lux and decibel values, respectively, to reflect the existing surrounding conditions. Fig. 11 shows the sensor values embedded in the client application.
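The calibration from raw 10-bit readings (0–1023) to lux and decibel values can be sketched as below. The linear mappings and the full-scale constants are illustrative assumptions; the paper does not publish its calibration curves:

```python
ADC_MAX = 1023  # 10-bit sensor output range, per the sensors above

def raw_to_lux(raw, full_scale_lux=1000.0):
    # Assume the light sensor spans 0..full_scale_lux linearly.
    return (raw / ADC_MAX) * full_scale_lux

def raw_to_decibel(raw, min_db=30.0, max_db=130.0):
    # Assume the noise sensor maps linearly onto a 30-130 dB range.
    return min_db + (raw / ADC_MAX) * (max_db - min_db)
```

In practice the mapping for a real sensor is taken from its datasheet and is often non-linear, but the client application only needs such a conversion step before displaying or thresholding the surrounding conditions.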
Fig. 11 – Captured surrounding conditions in client application.
8. Performance evaluation of the proposed adaptive-MFA framework

For performance and scalability evaluation, we hosted our adaptive-MFA server implementation on a Dell PowerEdge R710 (two Intel® Xeon® X5650 processors with 12 cores at 2.67 GHz) with 64 GB of main memory, 12 MB L3 cache, Windows Server 2012, DIMM thermal sensors, and solid-state storage. The Dell PowerEdge R710 has an Intel® QuickPath Interconnect (QPI) front-side bus with PERC 6/i, SAS 6/iR, PERC H200, and PERC H700 as embedded hard drive controllers. It has a Broadcom® BCM5709C 4×iSCSI TOE NIC and iDRAC6 Express, BMC, IPMI 2.0, and Dell OpenManage™ for server management. The Dell PowerEdge R710 mainly uses PERC H200 (6 Gb/s), PERC H700 (6 Gb/s, non-volatile battery-backed cache: 1 GB), SAS 6/iR, and PERC 6/i (battery-backed cache: 512 MB) as internal RAID controllers. The external RAID controllers used are PERC H800 (6 Gb/s, non-volatile battery-backed cache: 1 GB), PERC 6/E (battery-backed cache: 512 MB), and external HBAs (non-RAID): 6 Gb/s SAS HBA, SAS 5/E HBA, and LSI2032 PCIe SCSI HBA.

In this work, the authors have conducted an extensive user study on the proposed adaptive-MFA framework, FIDO and Microsoft Azure with 100 users; the results are shown in Table 7. Table 7 presents a detailed comparative study of the performance of the proposed adaptive-MFA prototype and two of its existing and widely used counterparts, FIDO and Microsoft Azure, which establishes the better efficiency of the adaptive-MFA approach. The performance measurement criteria are as follows: (i) Latency: In the present work, latency is defined as the time interval between the client request and response. We measured client authentication latency for the proposed adaptive-MFA prototype, FIDO and Microsoft Azure using two clients: one within the same Dell PowerEdge R710 availability zone and the other hosted at The University of Memphis, TN, USA, with an Intel Core i7-3517U (2.4 GHz) four-core processor. We refer to the first as the LAN (local-area network) setting and the second as the WAN (wide-area network) setting. In the LAN setting, we used the Dell PowerEdge R710 internal IP address. All queries were made over TLS, and the
5. https://fidoalliance.org/.
6. https://azure.microsoft.com/en-us/services/multi-factor-authentication/.
measurements include the time required for clients to send a request and be authenticated. The latency has been measured in two phases, namely the time taken to authenticate for the first time and that for the remaining attempts, with 1000 requests made by the 100 users; the results are shown in Table 7. This two-phase latency calculation is due to the delay in instance creation for the first time, which decreases significantly in the remaining attempts. In all cases, computation time dominates in the LAN setting because of the almost negligible network latency, while in the WAN setting, a case with cold connections (no HTTP KeepAlive) pays a performance penalty due to the four round-trips required to set up a new TCP and TLS connection. The HTTP KeepAlive header is a hop-by-hop header that informs hosts about connection management policies (Bellare et al., 2013; Roskind, 2013). The situation becomes slightly better for WAN with hot connections (see Table 7). Table 7 shows that the overall performance of the adaptive-MFA is much better than that of its competitors. However, even 500 ms latencies would not create any problem for the proposed adaptive-MFA application and its competitors, as straightforward engineering improvements would significantly boost WAN timing, for example: using TLS session resumption, using a lower-latency secure protocol like Quick UDP Internet Connections (QUIC) (Roskind, 2013), or even switching to a custom UDP protocol (Bellare et al., 2013). These latencies were captured on a Dell PowerEdge R710 instance using the Python profiling library line_profiler. The authors found that the first-time response for the adaptive-MFA is significantly less than that of its counterparts (see Table 7), and the time taken for the remaining attempts is also significantly less. Hence, the proposed adaptive-MFA framework shows better latency.

Another important observation presented in the present work is that the latency values for all three MFA systems follow a heavy-tailed Pareto distribution, which is hyperbolic over its entire range (Lutkepohl, 2005). Hence, the tail probabilities of the latencies for all of the MFAs decrease very slowly beyond certain threshold values. A Pareto distribution with shape parameter α and location parameter k has the cumulative distribution function F(x) = P[X ≤ x] = 1 − (k/x)^α, with corresponding probability density function f(x) = αk^α x^(−α−1); α, k > 0; x ≥ k. If X is heavy-tailed with parameter α, then its first m < α moments E[X^m] are finite and all of its higher moments are infinite. The resulting threshold values are given in Table 7, and the proposed adaptive-MFA has the least latency threshold values among the three MFAs.

Table 7 – Comparisons with other existing and already used multi-factor authentication (MFA) systems. The measuring criteria include first-time and subsequent average latency (in ms, for the LAN and WAN settings with cold and hot connections) and the corresponding threshold values, feature-level implementation, used factors, supported platforms, and adaptive nature.
- A-MFA: multi-factor; uses different biometric and non-biometric authentication modalities and their different combinations, covering more authentication modalities than its counterparts. The feature level is implemented (see Sections 3 and 5); different combinations of features, i.e., the concept of authentication factors (see Section 3), are not implemented in its counterparts. Platforms: any devices (FD, PD, and HD) and media (WI, WL, and CL). Adaptive nature: it can detect the surrounding conditions and accordingly choose a suitable set of authentication factors.
- FIDO: multi-factor; used factors: SMS and password. The feature level is not implemented. Platforms: any devices (FD, PD, and HD) and media (WI, WL, and CL). Adaptive nature: it is unable to detect the noise, motion, etc.
- Microsoft Azure: multi-factor; used factors: phone call, SMS, and password. The feature level is not implemented. Platforms: any devices (FD, PD, and HD) and media (WI, WL, and CL). Adaptive nature: it is also unable to detect the noise, motion, etc.

(ii) Throughput: We used the distributed load testing tool AutoBench to measure the maximum throughput for different combinations of authentication factors while simultaneously authenticating several users using the proposed adaptive-MFA framework. For checking the throughput of the adaptive-MFA, we used two clients in the same Dell PowerEdge R710 and continuously sent authentication requests (involving 100 users) simultaneously, keeping all connections cold: no TLS session resumption or HTTP KeepAlive. The average throughput for
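The Pareto model used for the latency tails can be made concrete as follows. This is a minimal sketch with illustrative parameters, not values fitted to the measured latencies:

```python
def pareto_cdf(x, alpha, k):
    # F(x) = P[X <= x] = 1 - (k/x)**alpha for x >= k; 0 otherwise.
    return 1.0 - (k / x) ** alpha if x >= k else 0.0

def pareto_pdf(x, alpha, k):
    # f(x) = alpha * k**alpha * x**(-alpha - 1) for x >= k.
    return alpha * (k ** alpha) * x ** (-alpha - 1) if x >= k else 0.0

def pareto_mean(alpha, k):
    # The m-th moment is finite only for m < alpha; for the mean (m = 1)
    # this requires alpha > 1, giving E[X] = alpha * k / (alpha - 1).
    return alpha * k / (alpha - 1) if alpha > 1 else float("inf")
```

The slow hyperbolic decay of the survival function 1 − F(x) = (k/x)^α is what makes latencies beyond the threshold k shrink only very slowly, as noted for all three MFA systems.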
7. http://www.eembc.org/benchmark/automotive_sl.php.
different authentication factors using the proposed adaptive-MFA was found to be 2676 connections per second (cps), and this value could be significantly increased using more powerful servers. Thus the proposed adaptive-MFA prototype implementation can handle a large number of clients on a single server instance.

(iii) Storage space utilization: Our present adaptive-MFA framework implementation stores all user information regarding every single authentication modality (collected during the registration period) in JSON format on the client side and sends it to the server for feature extraction and feature storage into the MySQL 5.7 database. Hence, the corresponding database table entry has two columns: the information of each individual feature and its corresponding trustworthy value. A detailed internal architecture of the client database connection is shown in Fig. 9. In MySQL 5.7, the internal representation of a table has a maximum row size of 65,535 bytes, even if the storage engine is capable of supporting larger rows. This calculation excludes BLOB and TEXT columns, which contribute only 9–12 bytes toward the row size; their data are stored internally in a different area of memory than the row buffer. In the present adaptive-MFA prototype implementation, the entries for the features of the authentication modalities are treated as VARCHAR (255 bytes) and their trustworthy values are treated as float (4 bytes). Hence, on average, the storage space for each entry has been found to be under 200 bytes (measured as the average of 10 M entries), including database overheads and indexes, which is much less than the maximum row size of MySQL 5.7. The proposed adaptive-MFA implementation scales easily to 10 M clients with approximately under 32 GB of storage, which is extremely cost effective from the business point of view. Hence, the proposed adaptive-MFA can serve a huge number of clients using a small amount of storage space.

(iv) False acceptance rate (FAR): In the present work, the FAR is defined as a measure of the likelihood that the authentication system will incorrectly accept an access attempt by an illegitimate user. FAR is stated as the number of false acceptances divided by the number of identification attempts (Vielhauer, 2005). It has been found that the FAR of the adaptive-MFA is less than that of FIDO and Microsoft Azure, where the experiment has been conducted with 100 users (each user conducted 10 separate authentication trials; see Table 9).

(v) False rejection rate (FRR): In the present work, the FRR is defined as a measure of the likelihood that the system fails to recognize an authorized person and incorrectly rejects him as an impostor (Vielhauer, 2005). The FRR of the adaptive-MFA is significantly less than that of FIDO and Microsoft Azure, where the experiment has been conducted with 100 users (each user conducted 10 separate authentication trials; see Table 9).

(vi) Accuracy rate: The accuracy rate of the adaptive-MFA prototype is much better than that of its counterparts, which makes it the most reliable among the three multi-factor authentication systems (see Table 9).
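The FAR and FRR figures reported here reduce to simple ratios over the trial counts. A minimal sketch follows; note that the aggregate accuracy formula is an assumption for illustration, since the paper reports accuracy but does not state how it is computed:

```python
def far(false_accepts, impostor_attempts):
    # FAR: false acceptances divided by identification attempts.
    return false_accepts / impostor_attempts

def frr(false_rejects, genuine_attempts):
    # FRR: false rejections divided by genuine attempts.
    return false_rejects / genuine_attempts

def accuracy(far_value, frr_value):
    # Hypothetical aggregate: one minus the mean of the two error rates.
    return 1.0 - (far_value + frr_value) / 2.0
```

With the adaptive-MFA figures from the user study (FAR 3%, FRR 4%), this hypothetical aggregate gives about 96.5%, consistent with the reported accuracy of around 96%.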
9. A detailed user study
In this paper, the authors have conducted a detailed user study of the proposed adaptive-MFA framework with 100 users (each user conducted 10 separate authentication trials) to analyze its usability. Moreover, FIDO and Microsoft Azure have also been tested by the same users and compared with the proposed adaptive-MFA framework. This user study was conducted in the Ergonomics Lab (Research), Department of Industrial and Systems Engineering, National University of Singapore. For this purpose, the users were asked to respond to the questionnaire given in Table 8, which covers several aspects such as user credibility, the complexity involved in the registration phase (how easy it is to register, the time taken for registration, and how easy it is to follow the instructions of the registration phase), and the difficulty involved in the authentication phase (how easy it is to provide authentication information, the frequency of repetitive authentication factors in successive logins, the time taken to get the authentication decision, the frequency of successful authentication, etc.). The results of the questionnaire are presented in Table 9, which shows that the proposed adaptive-MFA framework works better than its competitors. From the user study, it is quite clear that all the users are qualified enough to be part of the study, as they are quite familiar with different MFA frameworks. In response to Question 3 (see Table 8), 92% of the users chose "3" for both FIDO and Microsoft Azure, whereas 97% of the users selected "1" for the proposed adaptive-MFA framework. Hence, from the user's point of view, the adaptive-MFA framework is easier to navigate. All the users feel that the time taken to fill out the required fields in the registration forms was satisfactory for all three MFA frameworks. The average user registration time for the adaptive-MFA is less than that of its counterparts.

The instructions for the user registration process for all three MFAs are easy to follow. Similarly, it is easy to enter the user information for the several authentication modalities. However, in response to Question 8 (see Table 8), most of the users feel that the selected authentication modalities are not repeated for the adaptive-MFA, whereas for the other two systems the users very often get repetitive authentication modalities. This is due to considering each feature as an authentication factor, which, in turn, increases the choice of selection. Consequently, the proposed adaptive-MFA could significantly increase the level of obfuscation for illegitimate users. However, the time taken for the authentication decision is the same in all three cases. Lastly, the success rate of the adaptive-MFA is higher than that of its counterparts. Hence, the adaptive-MFA shows a significant edge over the other two widely used MFA systems.
10. Advantages of the proposed MFA
This section showcases different advantages of the present work that cannot be found in other well-known and widely used multi-factor authentication strategies.
Table 8 – Questionnaire for the user study for the proposed adaptive-MFA framework.
Please fill out the following questions based on your experience with the adaptive-MFA system.
Question 1: Before today, have you used a multi-factor authentication service? □ Yes □ No
Question 2: If you answered "yes" to the earlier question, how many times have you used it on a weekly basis? □ 0 times □ 1–2 times □ 3–5 times □ more than 5 times
Registration phase
Question 3: How easy is it to navigate through the registration forms in adaptive MFA? (1–10, where 1: most easy, 10: most difficult)
Question 4: Was the time taken to fill out the required fields in the registration form satisfactory? □ completely agree □ agree □ somehow agree □ somehow disagree □ disagree □ completely disagree
Question 5: The time taken to get the registration decision from the server after submission is satisfactory. □ completely agree □ agree □ somehow agree □ somehow disagree □ disagree □ completely disagree
Question 6: The instructions in the registration process are easy to follow. □ completely agree □ agree □ somehow agree □ somehow disagree □ disagree □ completely disagree
Authentication phase
Question 7: How easy is it to enter information for authenticating in adaptive MFA? (1–10, where 1: most easy, 10: most difficult)
Question 8: Did the set of selected authentication modalities remain the same in successive login attempts? □ all are the same □ most are the same □ few are the same □ none are the same
Question 9: The time taken to get the authentication decision from the server is satisfactory. □ completely agree □ agree □ somehow agree □ somehow disagree □ disagree □ completely disagree
Question 10: How many times were you successfully authenticated? ____ times
Question 11: To what degree does this MFA system make you feel more secure to use? (1–10, where 1: most secure, 10: most insecure)
(i) By considering individual features of different authentication modalities, the search space of the problem becomes larger, which reduces the probability of selecting the same set of authentication modalities. (ii) The proposed selection procedure is able to improve trust by selecting those specific features of the authentication modalities that have higher trustworthy values than the original modalities. (iii) If the selected set of authentication factors contains more than one feature of the same modality, then the users need not use other authentication modalities to authenticate themselves. For example, if the selected set contains only the lip and eye (two different features of the face recognition modality M1), then the facial image of the users alone is sufficient for authentication. Consequently, the users are not annoyed. Despite looking like single-factor authentication, the proposed model presents a novel and advanced multi-factor authentication. (iv) The proposed selection procedure is adaptive, as it is able to sense the surrounding ambiance and make selections accordingly. For example, if the luminance is less than 100 lux (i.e., outside the range of visibility), then the modalities dependent upon light (found in Table 4) cannot be selected. Similar conclusions can be drawn for the other modalities that depend on the elements of S (given in Section 5). (v) Contrary to existing authentication strategies, the proposed method employs both biometric (physiological and behavioral) and non-biometric authentication modalities simultaneously. (vi) All existing multi-factor authentication strategies are mainly static and suffer from the repetitive selection of authentication modalities, whereas the proposed selection system is dynamic and sensitive to the environment. Moreover, the sets of authentication factors selected by the proposed system are not repeated, making them unpredictable for hackers. Consequently, the proposed method makes the online application significantly more secure than the other existing multi-factor authentication strategies. The above points clearly establish the novelty of the proposed adaptive multi-factor authentication method.
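The environment-sensitive filtering in advantage (iv) can be sketched as a simple pre-selection pass. The modality dependency sets and the noise cutoff below are illustrative assumptions; the actual dependencies are those given in Table 4:

```python
# Assumed examples of modalities whose capture needs adequate light.
LIGHT_DEPENDENT = {"face", "fingerprint"}

def filter_by_surroundings(candidates, lux, noise_db):
    """candidates: list of (factor, modality) pairs.
    Drop factors whose modality cannot be captured reliably under the
    current surrounding conditions, then return the usable factors."""
    usable = []
    for factor, modality in candidates:
        if modality in LIGHT_DEPENDENT and lux < 100:
            continue  # luminance below the visibility range
        if modality == "voice" and noise_db > 80:
            continue  # hypothetical ambient-noise cutoff
        usable.append(factor)
    return usable
```

The adaptive selection algorithm would then choose among the surviving factors using their trustworthy values and selection history.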
Table 9 – Responses to the questionnaire presented in Table 8: a comparative user study with 100 users on FIDO, Microsoft Azure, and the proposed adaptive-MFA framework.
Question 1 (used MFA before?): all users, for all three systems, have used MFA before.
Question 2 (weekly usage): all users use MFA more than 5 times in a week.
Question 3 (ease of navigating the registration forms): FIDO: 92% chose "3"; Microsoft Azure: 92% chose "3"; Adaptive-MFA: 97% chose "1".
Question 4 (time to fill out the registration form satisfactory): FIDO: 100% chose "completely agree"; Microsoft Azure: 100% chose "completely agree"; Adaptive-MFA: 100% chose "completely agree".
Question 5 (avg. user registration time, in minutes): FIDO: 12 minutes (somehow agree); Microsoft Azure: 10 minutes (somehow agree); Adaptive-MFA: 7 minutes (agree).
Question 6 (registration instructions easy to follow): FIDO: 100% chose "completely agree"; Microsoft Azure: 100% chose "completely agree"; Adaptive-MFA: 100% chose "completely agree".
Question 7 (ease of entering authentication information): FIDO: 100% chose "1"; Microsoft Azure: 100% chose "1"; Adaptive-MFA: 100% chose "1".
Question 8 (same modalities in successive logins): FIDO: 89% selected "most are the same"; Microsoft Azure: 87% selected "most are the same"; Adaptive-MFA: 95% selected "few are the same".
Question 9 (time for the authentication decision satisfactory): FIDO: 100% chose "completely agree"; Microsoft Azure: 100% chose "completely agree"; Adaptive-MFA: 100% chose "completely agree".
Question 10 (successful authentications): FIDO: on average 8.9 of every 10 attempts; Microsoft Azure: on average 9.2 of every 10 attempts; Adaptive-MFA: on average 9.6 of every 10 attempts.
Question 11 (perceived security): FIDO: 95% chose "4" (because of repetitive selection of authentication modalities); Microsoft Azure: 95% chose "4" (because of repetitive selection of authentication modalities); Adaptive-MFA: 98% chose "1" (because of non-repetitive selection of the set of authentication factors).
False acceptance rate (FAR; the system accepts incorrect persons): FIDO: 7%; Microsoft Azure: 6%; Adaptive-MFA: 3%.
False rejection rate (FRR; the system rejects accurate persons): FIDO: 8%; Microsoft Azure: 9%; Adaptive-MFA: 4%.
Accuracy rate: FIDO: around 89%; Microsoft Azure: less than 92%; Adaptive-MFA: around 96%.
11. Adaptive-MFA framework as a cloud application

The proposed adaptive-MFA can also be implemented as a cloud-based application to serve a larger number of users; we consider this our potential future work. For the cloud implementation, the adaptive-MFA can be hosted on Amazon Elastic Compute Cloud (EC2) with a c4.xlarge instance to provide 8 virtual CPUs (to reduce the latency of the application), 64 GB of main memory, and solid-state storage. This future extension of the adaptive-MFA will bring more visibility to the proposed solution and can identify legitimate users in a seamless manner across the web. Detailed descriptions of this cloud implementation will appear in our future publications. However, an overview of the major steps needed for the adaptive-MFA cloud implementation is given as follows: 1. The user database will be stored in a cloud infrastructure. To better support the scalability and robustness of the authentication service, the database can be replicated in
8. https://aws.amazon.com/ec2/.
several servers to reduce the latency of registration and authentication. 2. To support the cloud infrastructure, the client application will need to be deployed as a browser extension (for example, a Chrome extension or a Firefox add-on). (i) Face and voice data will be captured through browsers that support HTML5. (ii) Keystroke (alphanumeric character) data can be captured through JavaScript, which is supported by any web browser; data from all USB and PS/2 keyboards can be recorded using JavaScript in the popular browsers on all types of operating systems. (iii) Fingerprint devices are not supported by browsers as of now. Capturing fingerprint data therefore requires launching a native application on the client's machine. As part of the cloud-based interface, the browser extension will install the required drivers (for example, the DigitalPersona Fingerprint SDK) of well-known fingerprint devices and launch a fingerprint-capturing application as soon as it detects a fingerprint device in the system. (iv) The password-related information can easily be captured from any web browser and integrated with the user's data.
Table 10 – Comparison of the proposed method with different widely used multi-factor authentication approaches. For each approach, the entries give the factors considered; the selection strategy for multiple factors; and the applicability.
- Cognitive-centric text production and revision features (Locklear et al., 2014): multi-factor (behavioral biometrics and keystroke dynamics); fusion of all features; all devices.
- Context-aware/gait based (Primo et al., 2014): single factor (behavioral biometrics); no selection strategy; mobile devices.
- Typing motion behavior (Gascon et al., 2014): single factor (behavioral biometrics using statistical features); no selection strategy; mobile devices.
- Temporal authentication (Niinuma and Jain, 2010): multi-factor (face detection and body localization); both factors used individually; fixed and portable devices.
- Messaging app usage (Klieme et al., 2014): single factor (behavioral biometrics); no selection strategy; mobile devices.
- Touch screen gestures (Feng et al., 2012): single factor (behavioral biometrics with finger gestures); no selection strategy; mobile devices.
- Keystroke dynamics (Gunetti and Picardi, 2005): single factor (behavioral biometrics); no selection strategy; all devices.
- Behavioral biometrics (Deutschmann and Lindholm, 2013): multi-factor (keyboard, mouse, and application interactions); fusion of the three in a trust model; fixed and portable devices.
- Proposed adaptive-MFA approach: multi-factor (behavioral biometrics, physiological biometrics, password, SMS, CAPTCHA); adaptive selection of multiple factors sensing the environment; all devices.
The present work considers not only every individual authentication modality but also their different features, i.e., the selected set may contain some features of different authentication modalities. This unique property is not found in other multi-factor authentication strategies (Parziale and Chen, 2009; Patel et al., 2013; Primo et al., 2014; Serwadda et al., 2013; Stewart et al., 2011; Vielhauer, 2005).
(v) SMS (as part of a one-time password) can also be integrated with the client application of the cloud architecture. The cloud server will send SMS codes per user request. Again, to support time-based one-time passwords (TOTP), SMS push will be implemented for Android, iOS, and Windows phones. This allows a user to authenticate his identity (one factor) with a finger tap. 3. The feature extraction of the captured user data will be done in the server-side cloud application, which provides faster execution of the different extraction algorithms and storage into the database. The registration and authentication decision will be pushed back to the client device. Browser extensions on the client side also delete all captured user information as soon as it leaves the client device. 4. The selection and matching modules will run in the server-side cloud application, and the cloud interface will push the authentication decision back to the client device.
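TOTP generation itself is specified by RFC 6238 (building on the HOTP construction of RFC 4226) and is straightforward to implement server-side. A minimal sketch using only the standard library; this illustrates the standard algorithm, not the paper's implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step              # moving time factor
    digest = hmac.new(secret, struct.pack(">Q", counter),
                      hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and the user's device share the secret and the clock, so the code a user taps to confirm can be verified without any SMS round trip.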
12. Qualitative comparisons with other approaches

There are different MFA approaches that support continuous authentication of users. Table 10 lists a comparison of other existing approaches with our proposed approach. From the table, it is clear that our proposed adaptive approach differs significantly from the other existing approaches. None of the other listed approaches uses an adaptive approach as part of its selection strategy. Many of these approaches choose static selection strategies that consider all the factors at the same time. Again, some approaches are applicable to fixed and portable devices while others are applicable to mobile devices only. Our proposed approach is applicable to all three types of devices and provides the selection decision adaptively, sensing the devices, media, and surrounding conditions (Table 10).
13. Future work

The performance evaluation of a future cloud-based adaptive-MFA application can be done with a group of capable users (as used in the present work) through a detailed user study of authentication accuracy, latency across various environmental settings, throughput, storage space utilization, FAR, and FRR. The same set of users can be enrolled in the cloud-based adaptive-MFA application to compare its overall performance with that of its counterparts. The authors would be glad to assist readers who wish to build the cloud-based counterpart of the proposed adaptive-MFA application and evaluate its performance.
Conclusion
This work focuses on designing a just-in-time authentication framework that uses multiple authentication factors (modalities along with their several features) in order to provide a trustworthy, resilient, and scalable authentication solution. The proposed trustworthiness model computes trust values for the different authentication factors under several probabilistic constraints; in particular, it uses pair-wise comparisons among different devices and media. The adaptive selection scheme makes intelligent decisions, choosing authentication factors at run-time by considering their performance, trustworthy values,
and the history of the previous selections of the given factors. The approach also avoids repeated selections of the same set of authentication factors in successive re-authentications, thereby reducing the chance of establishing any recognizable pattern; consequently, no prior information about the selected set of authentication factors is available to attackers. Moreover, the proposed selection mechanism treats the individual features of a modality as distinct authentication factors, so the search space of the selection procedure becomes relatively large. This ensures the selection of non-repetitive sets of authentication factors at different authentication triggering times. The adaptive selection also runs at different triggering times (time intervals that may vary with different device and media combinations) for re-authentication of legitimate users. This approach opens the door to many applications where security is the topmost concern. Some of them are highlighted here:
1. Massive Open Online Courses (MOOCs), where it is genuinely difficult to distinguish between the registered user and the actual user who takes the exams and completes the homework. With the growth of MOOCs among universities, there is an urgent need to verify students' identities securely. Adaptive-MFA is a good solution here, and different combinations of authentication factors can be generated according to the difficulty of the task (homework, quiz, midterm exam, project report, final exam).
2. Banking applications, such as online fund transfer or online payment, where adaptive-MFA can verify legitimate users so they can complete transactions under more trusted settings.
3. The proposed adaptive-MFA framework can easily be integrated to secure access to all types of electronic medical records.
These medical records are highly confidential and sensitive. Adaptive-MFA can trigger different authentication factors by sensing the device, media, and surroundings of the user, providing more robust and resilient factors with higher accuracy in that particular environment.
4. Adaptive-MFA can further be extended for deployment at different levels of Internet computing, for example:
a. Application level (financial applications, email/business/personal applications, social applications)
b. User level (root users, administrators, guest users)
c. Document level (a PDF containing an application form or resume, a document containing proprietary information, an image/video containing confidential and sensitive footage).
In this paper, three different variations of face recognition are considered for detailed analysis; in a specific implementation, these variations could also be treated as a single modality. A detailed implementation of the proposed adaptive-MFA has been carried out in the present work. It has also been compared with two well-known and widely used MFA systems, FIDO and Microsoft Azure, and the performance of the proposed MFA has been found to be much better than
its counterparts. A detailed user study has been conducted to analyze its usability on a few common devices, and the results are very convincing. This encourages us to plan a full-scale commercial release of the proposed adaptive-MFA framework in the near future.
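The history-aware selection behavior summarized in the conclusion can be sketched as follows; the trust/performance weights and the repetition penalty are illustrative assumptions, not the paper's exact formulation:

```python
import random

def select_factors(factors, history, k=3, penalty=0.5, rng=random):
    """Pick k authentication factors, weighting each by trust * performance
    and down-weighting factors chosen in the last few rounds, so successive
    re-authentications tend not to repeat the same set.

    factors: {name: (trust, performance)}; history: list of past selected sets.
    """
    recent = set().union(*history[-3:]) if history else set()
    pool = {}
    for name, (trust, perf) in factors.items():
        weight = trust * perf
        if name in recent:          # discourage immediate repetition
            weight *= penalty
        pool[name] = weight
    chosen = set()
    while pool and len(chosen) < k:
        names = list(pool)
        pick = rng.choices(names, weights=[pool[n] for n in names])[0]
        chosen.add(pick)
        del pool[pick]              # sample without replacement
    return chosen
```

Because selection is randomized over penalized weights rather than greedy, an observer cannot predict the next factor set from the previous ones, which is the anti-pattern property the conclusion emphasizes.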
Acknowledgements

The authors are thankful to the Ergonomics Lab, Department of Industrial and Systems Engineering, National University of Singapore, for providing comparative experimental results on FIDO and Microsoft Azure. The authors are also thankful to The University of Memphis, TN, USA, for providing an excellent environment for completing this work. Finally, the authors thank the reviewers and the journal editorial board for their valuable suggestions and untiring efforts toward improving the quality of the paper.
Appendix A

A.1. EER (Equal Error Rate)

A security system based on biometric authentication sets predefined threshold values for its false acceptance rate and false rejection rate. When these two values become equal, their common value is known as the equal error rate (EER): the operating point at which the proportions of false acceptances and false rejections coincide. In general, a lower EER is preferred, as it indicates a more accurate security system.
A.2. FRR (False Rejection Rate) or FNMR (False Non-match Rate)

A false rejection (Type I error) occurs when a biometric-based system fails to identify an authorized person. The false rejection rate (FRR) measures the chance of the system failing to authenticate a legitimate user. It can be calculated as the probability that the system fails to find a match between an input biometric and the registered biometric stored in the database, or as the ratio of false rejections to the total number of authentication attempts.
A.3. FAR (False Acceptance Rate) or FMR (False Match Rate)

A false acceptance (Type II error) occurs when a biometric security system incorrectly verifies or identifies an unauthorized person. It is typically considered the most serious biometric security error, as it gives unauthorized users access to systems that are expressly trying to keep them out. The false acceptance rate (FAR) measures the likelihood that the biometric security system will incorrectly accept an access attempt by an unauthorized user.
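FAR, FRR, and the EER can all be estimated from genuine and impostor match-score samples; a minimal sketch, assuming higher scores mean better matches:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FRR: fraction of genuine scores below the threshold (falsely rejected);
    FAR: fraction of impostor scores at/above the threshold (falsely accepted)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

def approximate_eer(genuine_scores, impostor_scores):
    """Sweep candidate thresholds and return (EER estimate, threshold)
    at the point where |FAR - FRR| is smallest."""
    best = None
    for t in sorted(set(genuine_scores) | set(impostor_scores)):
        far, frr = far_frr(genuine_scores, impostor_scores, t)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2, t)
    return best[1], best[2]
```

In practice the two score distributions overlap, so the EER is found where the FAR and FRR curves cross as the threshold sweeps across the score range.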
Table A1 – Different lighting conditions.

Illuminance          Surfaces illuminated by
0.0001 lux           Moonless, overcast night sky
0.002 lux            Moonless clear night sky with airglow
0.27–1.0 lux         Full moon on a clear night
3.4 lux              Dark limit of civil twilight under a clear sky
50 lux               Family living room lights
80 lux               Office building hallway/toilet lighting
100 lux              Very dark overcast day
320–500 lux          Office lighting
400 lux              Sunrise or sunset on a clear day
1,000 lux            Overcast day; typical TV studio lighting
10,000–25,000 lux    Full daylight (not direct sun)
32,000–100,000 lux   Direct sunlight
A.4. Lux

The lux is the SI unit of illuminance and luminous emittance, measuring luminous flux per unit area; one lux equals one lumen per square meter. In photometry, it is used as a measure of the intensity, as perceived by the human eye, of light that hits or passes through a surface. It is analogous to the radiometric unit watts per square meter, but with the power at each wavelength weighted according to the luminosity function, a standardized model of human visual brightness perception. Hence, 1 lx = 1 lm/m² (Table A1).
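For the framework's surrounding-light sensing, a measured lux value could be bucketed into coarse categories drawn from Table A1; the cut-offs below are illustrative assumptions, not values from the paper:

```python
def light_condition(lux: float) -> str:
    """Map a measured illuminance (lux) to a coarse lighting category,
    using cut-offs loosely derived from Table A1 (illustrative only)."""
    if lux < 3.4:        # below civil twilight: night-time conditions
        return "dark"
    if lux < 320:        # living room / hallway / very dark overcast day
        return "dim indoor"
    if lux < 1000:       # office lighting, sunrise/sunset
        return "bright indoor"
    return "daylight"    # overcast day up to direct sunlight
```

A selection strategy could then, for instance, down-weight face recognition in the "dark" category and prefer factors that do not depend on ambient light.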
Appendix B

Constraint (2) can be expressed as follows:

$P\{T_{ij}(M_s; f_{s,l}) \geq T_{kj}(M_s; f_{s,l}),\ \text{for each } j = 1, 2, 3 \text{ and } i, k = 1, 2, 3,\ i \neq k,\ l \in \mathbb{Z}^{+}\} \geq 1 - \varepsilon_1$, where $\varepsilon_1 \in [0, 1]$,

or

$P\{T_{ij}(M_s; f_{s,l}) \geq T_{kj}(M_s; f_{s,l})\} \geq 1 - \varepsilon_1$,

or

$P\{T_{ij}(M_s; f_{s,l}) \geq x \geq T_{kj}(M_s; f_{s,l})\} \geq 1 - \varepsilon_1$, where $x \sim N(0, 1)$,

or

$\int_{T_{kj}(M_s; f_{s,l})}^{T_{ij}(M_s; f_{s,l})} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx \geq 1 - \varepsilon_1$,

or

$\int_{T_{kj}(M_s; f_{s,l})}^{T_{ij}(M_s; f_{s,l})} e^{-x^2/2}\, dx \geq \sqrt{2\pi}\,(1 - \varepsilon_1)$,

or

$\int_{T_{kj}(M_s; f_{s,l})}^{T_{ij}(M_s; f_{s,l})} \left\{1 - \frac{x^2}{2} + \frac{x^4}{8} - \cdots\right\} dx \geq \sqrt{2\pi}\,(1 - \varepsilon_1)$.
Simplifying the above inequality, Equation (2) can be converted into a non-probabilistic constraint. Similarly, the probabilistic constraints listed in Equations (3)–(5) can be converted into non-probabilistic constraints.
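Rather than truncating the series, the chance constraint can also be checked numerically with the standard normal CDF; a sketch, with the trust values standardized to $x \sim N(0,1)$ as in the derivation above:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def chance_constraint_holds(t_ij: float, t_kj: float, eps1: float) -> bool:
    """Check P{t_kj <= x <= t_ij} >= 1 - eps1 for x ~ N(0, 1),
    i.e. the probabilistic form of Equation (2) after standardization."""
    prob = norm_cdf(t_ij) - norm_cdf(t_kj)
    return prob >= 1.0 - eps1
```

For example, the interval [-3, 3] captures about 99.7% of the standard normal mass, so it satisfies the constraint for $\varepsilon_1 = 0.05$, while [-0.5, 0.5] (about 38%) does not.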
REFERENCES
Abramson M, Aha DW. User authentication from web browsing behavior. In: FLAIRS conference. 2013. Balakrishnan K. Exponential distribution: theory, methods and applications. CRC Press, Washington D.C.; 1996. Bellare M, Keelveedhi S, Ristenpart T. DupLESS: server-aided encryption for deduplicated storage. In: USENIX security. USENIX; 2013. Brennan M, Afroz S, Greenstadt R. Adversarial stylometry: circumventing authorship recognition to preserve privacy and anonymity. ACM Trans Inf Syst Secur 2012;15(3):12. Dasgupta D. Advances in artificial immune systems. IEEE Comput Intell Mag 2006;1(4):40–9. Dasgupta D. Artificial immune systems and their applications. Springer Publishing Company, Incorporated; 2014. Deutschmann I, Lindholm J. Behavioral biometrics for DARPA's active authentication program. In: Biometrics Special Interest Group (BIOSIG), 2013 international conference of the. IEEE; 2013. p. 1–8. Dunn S, Peucker S. Genetic algorithm optimization of mathematical models using distributed computing. In: Developments in applied artificial intelligence. Springer; 2002. p. 220–31. Feller W. An introduction to probability theory and its applications, vol. 2. 2nd ed. John Wiley; 1971. Feng J, Jain AK. Fingerprint reconstruction: from minutiae to phase. IEEE Trans Pattern Anal Mach Intell 2011;33(2):209–23. Feng T, Liu Z, Kwon K-A, Shi W, Carbunar B, Jiang Y, et al. Continuous mobile authentication using touchscreen gestures. In: Homeland Security (HST), 2012 IEEE conference on technologies for. IEEE; 2012. p. 451–6. Gascon H, Uellenbeck S, Wolf C, Rieck K. Continuous authentication on mobile devices by analysis of typing motion behavior. Sicherheit 2014;1–12. Guidorizzi RP. Security: active authentication. IT Prof Mag 2013;15(4):4–7. Gunetti D, Picardi C. Keystroke analysis of free text. ACM Trans Inf Syst Secur 2005;8(3):312–47. Harb EB, Debbabi M, Assi C. On fingerprinting probing activities. Comput Secur 2014;43:35–48. Hassanien AE, Abraham A, Grosan C.
Spiking neural network and wavelets for hiding iris data in digital images. Soft Comput 2009;13:401–16. Iancu I, Constantinescu N. Intuitionistic fuzzy system for fingerprints authentication. Appl Soft Comput 2013;13(4):2136–42. Jain AK, Hong L, Pankanti S, Bolle R. An identity authentication system using fingerprints. Proc IEEE 1997;85(9):1365–88. Jain AK, Feng J, Nandakumar K. Fingerprint matching. Computer 2010;43(2):36–44. Kang J, Nyang D, Lee K. Two-factor face authentication using matrix permutation transformation and a user password. Inf Sci (Ny) 2014;269:1–20. Kent A, Liebrock LM, Neil JC. Authentication graphs: analyzing user behavior within an enterprise network. Comput Secur 2015;48:150–66. Klieme E, Engelbrecht K-P, Moller S. Poster: towards continuous authentication based on mobile messaging app usage, 2014.
Kwok K. User identification and characterization from web browsing behavior. DTIC Document, Tech. Rep., 2012. Lee GC, Loo CK, Chockalingam L. An integrated approach for head gesture based interface. Appl Soft Comput 2012;12(3):1101–14. Locklear H, Govindarajan S, Sitova Z, Goodkind A, Brizan DG, Rosenberg A, et al. Continuous authentication with cognition-centric text production and revision features, 2014. Lucas BD, Kanade T. An iterative image registration technique with an application to stereo vision. In: Proceedings of the international joint conference on artificial intelligence (IJCAI), vol. 81. 1981. p. 674–9. Luenberger DG, Ye Y. Linear and nonlinear programming, vol. 116. Springer; 2008. Lutkepohl H. New introduction to multiple time series analysis. Berlin: Springer; 2005. Nag AK, Dasgupta D. An adaptive approach for continuous multi-factor authentication in an identity eco-system. In: Proceedings of the 9th annual cyber and information security research conference. ACM; 2014. p. 65–8. Nag AK, Dasgupta D, Deb K. An adaptive approach for active multi-factor authentication. In: 9th annual symposium on information assurance (ASIA14). 2014. p. 39. Nag AK, Roy A, Dasgupta D. An adaptive approach towards the selection of multi-factor authentication. In: Computational intelligence, 2015 IEEE symposium series on. 2015. p. 463–72. Niinuma K, Jain AK. Continuous user authentication using temporal information. In: SPIE Defense, Security, and Sensing. International Society for Optics and Photonics; 2010. p. 76. Obied A, Alhajj R. Fraudulent and malicious sites on the web. Appl Intell 2009;30:112–20. Ou CM, Ou CR. Adaptation of proxy certificates to non-repudiation protocol of agent-based mobile payment systems. Appl Intell 2009;30:233–43. Parziale G, Chen Y. Advanced technologies for touchless fingerprint recognition. Handbook of remote biometrics. Springer; 2009. p. 83–109. Patel V, Yeh T, Salem M, Zhang Y, Chen Y, Chellappa R, et al.
Screen fingerprints: a novel modality for active authentication, 2013. Primo A, Phoha V, Kumar R, Serwadda A. Context-aware active authentication using smartphone accelerometer measurements. In: Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on. IEEE; 2014. p. 98–105. Revett K, Jahankhani H, de Magalhaes ST, Santos HM. A survey of user authentication based on mouse dynamics. Global E-security. Springer; 2008. p. 210–19. Roskind J. QUIC: multiplexed stream transport over UDP. Google working design document, 2013. Serwadda A, Govindarajan S, Pokala R, Wang Z, Koch P, Balagani K, et al. Scan based evaluation of continuous keystroke authentication systems. IT Prof Mag 2013;1.
Stewart JC, Monaco JV, Cha S-H, Tappert CC. An investigation of keystroke and stylometry traits for authenticating online test takers. In: Biometrics (IJCB), 2011 international joint conference on. IEEE; 2011. p. 1–7. Vielhauer C. Biometric user authentication for IT security: from fundamentals to handwriting, vol. 18. Springer; 2005. Wu Z, Liang B, You L, Jian Z, Li J. High-dimension space projection-based biometric encryption for fingerprint with fuzzy minutia. Soft Comput 2015;doi:10.1007/s00500-015-1778-2. Xiang C, Tang C, Cai Y, Xu Q. Privacy-preserving face recognition with outsourced computation. Soft Comput 2015;doi:10.1007/s00500-015-1759-5. Prof. Dipankar Dasgupta is a professor of Computer Science at the University of Memphis. His research interests are broadly in the areas of scientific computing and the design and development of intelligent cyber security solutions inspired by biological processes. He has contributed remarkably to applying bio-inspired approaches to intrusion detection, spam detection, and building survivable systems. Dr. Dasgupta is one of the founding fathers of the field of artificial immune systems. His latest book, "Immunological Computation", a graduate-level textbook, was published by CRC Press in 2008. He also edited two books: one on evolutionary algorithms in engineering applications and the other entitled "Artificial Immune Systems and Their Applications", published by Springer-Verlag. The first AIS book is widely used as a reference book and has been translated into Russian. He has more than 230 widely cited publications; a search of his name on Google Scholar indicates more than 11,000 citations, with an h-index of 53 and a g-index of 89, and an academic search at Microsoft shows that he has collaborated with 106 co-authors – extraordinary testimony to the broad influence of his contributions within the research community. Dr.
Dasgupta is a Fellow of IEEE and a Life Member of ACM, the Associate Editor-in-Chief of the Immune Computation journal, and a member of the editorial boards of five other journals. Prof. Dasgupta received the 2014 ACM SIGEVO Impact Award. Dr. Arunava Roy obtained his PhD from the Department of Applied Mathematics, Indian School of Mines, Dhanbad, and presently works as a researcher in the Department of Industrial and Systems Engineering, National University of Singapore, Singapore – 117576. Previously, he was a Post-Doctoral Fellow in the Department of Computer Science, The University of Memphis, TN, USA – 38111. His areas of interest are web software reliability, software reliability, cyber security, algorithm design and analysis, data structures, and statistical and mathematical modeling. Abhijit Kumar Nag has been a PhD candidate in the Department of Computer Science at The University of Memphis, TN, USA, since 2012. His research interests include authentication systems, evolutionary algorithms, cyber security, and bio-inspired/nature-inspired computing.