An Autonomous Model to Enforce Security Policies Based on User's Behavior

Kambiz Ghazinour
Department of Computer Science, Kent State University, Kent, Ohio, USA
[email protected]

Mehdi Ghayoumi
Department of Computer Science, Kent State University, Kent, Ohio, USA
[email protected]

Abstract – To protect users' information, computer systems employ access control models. These models are supported by a set of policies defined by security administrators in the environment where the organization operates. Previous studies have shown that building a user interface that dynamically changes with the security policies defined for each user is a cumbersome task. This work is a further expansion of an improved dynamic model that adjusts users' security policies based on the level of trust they hold. We use machine learning alongside the trust manager component, which helps the system adapt itself, learn from the user's behavior, and recognize access patterns from similar access requests, so that it not only limits illegitimate access but also predicts and prevents potentially malicious and questionable accesses.

Keywords: Trust Model; Dynamic Model; Machine Learning; Security Policies; Access Policies; Database.

I. INTRODUCTION

In order to protect the information stored in a computer system, security policies are put in place to limit illegitimate or accidental access to data. This is a crucial part of any access control model. Today we encounter many security and privacy policies set by service providers to assure their customers that the collected data remains safe and will not be accessed by unauthorized individuals or companies. To enforce these promises, there must be an access control mechanism in place that intercepts and evaluates every single access request. In other words, an access control mechanism ensures that malicious and questionable users cannot access sensitive data and that legitimate users cannot accidentally access parts of the data that are not supposed to be revealed. A variety of access control models are available; we chose one of the most widely used, the Role Based Access Control (RBAC) model [1], in which access is granted to users based on their role in the organization. In large organizations, where we may have hundreds of different roles and access control policies, handling RBAC policies is even harder. In [2], the authors discussed that programming user interfaces that conform to the latest dynamic access control policies is generally not an easy job. They introduced a model that creates forms dynamically based on the tables' structures and the access policies in the relational database management system (RDBMS). The approach reduces the extensive amount of work needed to statically rebuild user interfaces whenever the access control policies change, and it gives security officers and designers the opportunity for immediate testing to see whether the roles work as they should. In [3], the notion of trust is addressed when an access control system is expected to behave truly dynamically: when users misuse the access level they have been granted, or try to access data they are not supposed to access, the model should raise a flag and limit or reduce the users' privileges. To this end, [3] introduces a trust management component that maintains the trust levels associated with users; it can modify the security policies assigned to users, limiting or reducing their data access. In this paper we take the model introduced in [3] and add a machine learning component that helps the system adapt itself, learn from the user's behavior, and recognize access patterns from similar access requests, so that it not only limits illegitimate access but also predicts and prevents potentially malicious and questionable accesses. Machine learning can be considered the science of designing computational methods that use experience to improve their performance [4]. Machine learning algorithms scale to large problems, make accurate predictions, and address a variety of learning tasks (e.g., classification, regression, and clustering). A minimal sketch of the dynamic form generation idea from [2] follows.
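To make the idea from [2] concrete, the following Python sketch (our illustration, not the authors' implementation) derives a form's fields from a table's column structure and a role's read permissions. The table name, the policy layout, and the helper names are assumptions made only for this example.

    # Hypothetical sketch of the idea in [2]: build a form description from the
    # table structure and the current access policy, instead of hard-coding UIs.
    # Table/column names and the permission store are illustrative assumptions.

    TABLE_COLUMNS = {
        "customer": ["first_name", "last_name", "phone", "date_of_birth", "ssn"],
    }

    # role -> table -> set of columns the role may read (assumed policy format)
    READ_POLICY = {
        "staff": {"customer": {"first_name", "last_name", "phone"}},
    }

    def build_form(role: str, table: str) -> list[str]:
        """Return the input fields a dynamically generated form should show."""
        allowed = READ_POLICY.get(role, {}).get(table, set())
        # Preserve the table's column order, but keep only permitted columns.
        return [col for col in TABLE_COLUMNS[table] if col in allowed]

    print(build_form("staff", "customer"))  # ['first_name', 'last_name', 'phone']

Because the form is computed from the policy at request time, a policy change made by the security officer is reflected in the interface without rebuilding it.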


This paper is structured as follows: Section 2 describes background and related work. Section 3 briefly describes the trust model [3] on which we base our model. Sections 4 and 5 introduce our approach and how machine learning helps enforce security policies based on the user's behavior. Section 6 presents several examples to show the feasibility of our model. Section 7 concludes the paper and suggests future research directions.

II. RELATED WORK

As human-computer and inter-computer interaction increases, new and more sophisticated algorithms are needed to recognize the trustworthiness of the parties that interact with each other. Trust refers to the relationship between two entities, where one entity is willing to rely on the actions performed by the other. Trust is meaningful in relationships between individuals, between a person and an object or action, and within and between social groups. The growing interaction between users in online applications significantly enhances user experience, but it also raises issues of security and reliability: untruthful users can easily join a system and behave maliciously to achieve their goals. Traditional security solutions such as Authentication, Authorization and Accounting (AAA) and Public Key Infrastructure (PKI) do help reduce the impact of untruthful users' behavior. The trustor lacks control over outcomes because they rely on an individual, an organization, or a technology to perform a task [5]. Relying on another entails the risk that the trustee may not fulfill a task as the trustor expected, either intentionally or unintentionally; accordingly, any trust situation involves risk [6]. Trust is a complex construct and is defined in a multitude of different ways [7]. Mayer et al. [8] define trust as the "willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" [9]. Whether the trustor trusts the other party depends on the perceived trustworthiness of the other party [10]. Our study therefore focuses on initial trust-building.

RBAC [11] has been widely used as a cost-effective access control mechanism [12]. Due to its characteristics (i.e., rich specification, separation of duty, and ease of management), it is employed in a large variety of domains [13]. RBAC aims to provide a model and tools to help manage access control in an enterprise with a very large number of users and data items.

Figure 1. Trusted Dynamic User Interface (TDUI) data flow and architecture

The three main components of RBAC are roles, users, and permissions, where a role represents job functions and permissions are defined on objects and operations. A permission allows or prevents a role from performing a specific action on a specific data object. Assume we have a user u with role r who has accessed the data items d1, d2, and d5. For simplicity we consider only the read action here; this can obviously be extended to cover other actions, including delete and update. For user u in role r there is then a history of access patterns, AP, as follows:

AP(u, r): d1, d2, d5

Section 5 explains how this access pattern is used to help learn the user's behavior and prevent unnecessary accesses; a minimal sketch of such a history appears below.
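As a purely illustrative reading of the AP(u, r) notation, the following sketch keeps a per-(user, role) history of read accesses. The class and method names are our assumptions, not anything prescribed in [3].

    from collections import defaultdict

    class AccessPatternHistory:
        """Records which data items each (user, role) pair has read.

        A minimal sketch of the AP(u, r) notation; names are illustrative.
        """

        def __init__(self):
            # (user, role) -> ordered list of accessed data items
            self._history = defaultdict(list)

        def record(self, user: str, role: str, item: str) -> None:
            self._history[(user, role)].append(item)

        def pattern(self, user: str, role: str) -> list[str]:
            return list(self._history[(user, role)])

    history = AccessPatternHistory()
    for item in ("d1", "d2", "d5"):
        history.record("u", "r", item)

    print(history.pattern("u", "r"))  # ['d1', 'd2', 'd5'], i.e., AP(u, r): d1, d2, d5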

III. TRUSTED DYNAMIC USER INTERFACE (TDUI)

In [3], the authors present a dynamic trust model to enforce security policies; our model is based on their trust model, so this section briefly explains how it works. As illustrated in Figure 1, the model consists of three main engines: the Component Manager, the RBAC Extractor, and the Trust Manager. The data flow is as follows.

Step 1 - When the user attempts to log in, the user name and password must match those entered into the database by the security manager or database administrator.

Step 2 - By reviewing the User Assignment relationship, the RBAC Extractor determines the roles assigned to the user by the Security Administrator. According to the RBAC architecture, each user is associated with at least one role.

Step 3 - The list of tables the user is allowed to see is then displayed, and the user selects the form to use. This form is related to one or more tables in the database.

Step 4 - The list of permissions is sent by the RBAC Extractor to the Trust Manager and the Component Manager to identify what permissions are assigned to the user's roles and whether the user is attempting to access data they are not allowed to access. After considering all the permissions, the RBAC Extractor provides the Component Manager and Trust Manager with a list of permissions for four different actions: Select, Update, Insert, and Delete.

Step 5 - The list of updated permissions is submitted to the Trust Manager.

Step 6 - The Trust Manager checks the business rules associated with the user to see whether the user needed the data that was accessed. If the access was unnecessary, the system loses trust in the user; according to the policies defined by the Security Administrator, the trust level is changed, which limits the user's access, and the modified security policies are written to the system tables. Otherwise, this step is skipped.

Step 7 - The Component Manager identifies the relevant fields of the desired tables and their specifications.

Step 8 - Finally, the dynamic user interface is created and displayed to the user based on the data collected in Steps 4 and 7. If the user selects another table from the list, the data flow restarts from Step 3.

A minimal sketch of the Step 6 trust adjustment follows.
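The following sketch illustrates one way the Step 6 behavior could work. The numeric trust scale, the penalty, the threshold, and the policy layout are all our assumptions; [3] leaves these choices to the Security Administrator's policies.

    # Hedged sketch of Step 6: lower trust on unnecessary access and, below a
    # threshold, restrict the offending permission. The penalty, threshold, and
    # policy layout are assumptions for illustration, not taken from [3].

    TRUST_PENALTY = 0.2
    TRUST_THRESHOLD = 0.5

    def handle_access(trust: dict, policy: dict, user: str, item: str,
                      needed_items: set) -> None:
        """Update trust and policy after an access request by `user` to `item`."""
        if item in needed_items:
            return  # legitimate access per the business rules; nothing changes
        # Unnecessary access: the system loses trust in the user.
        trust[user] = max(0.0, trust.get(user, 1.0) - TRUST_PENALTY)
        if trust[user] < TRUST_THRESHOLD:
            # Restrict the user's read permission on this item in the system tables.
            policy.setdefault(user, set()).discard(item)

    trust = {"alice": 0.6}
    policy = {"alice": {"first_name", "last_name", "ssn"}}
    handle_access(trust, policy, "alice", "ssn",
                  needed_items={"first_name", "last_name"})
    print(trust["alice"], policy["alice"])  # trust drops below the threshold, so 'ssn' is revoked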

IV. MACHINE LEARNING FOR TRUST

Over the past several decades, machine learning has been successfully applied to many applications. In particular, with the advent of Big Data, machine learning has become a suitable tool for solving large-scale, complex, real-world problems. Recently, the increasing amount of data, as well as the rich metadata produced by large-scale Web applications, has led to a new trend of applying formerly unutilized machine learning methodologies, such as supervised learning, to model trust more precisely. Machine learning aims to automatically recognize complex patterns and make intelligent decisions based on existing datasets; there are two main types of learning, supervised and unsupervised. We argue that by identifying features capable of distinguishing legitimate requests from unnecessary or illegitimate ones, sophisticated machine learning algorithms can be applied to analyze past access pattern history.

Figure 2. Assisting Trust Manager Using Machine Learning

If these algorithms can efficiently model what a necessary (or unnecessary) request is, we can use them to predict the trustworthiness of a potential request. For example, MetaTrust [14] is generic in the sense that various machine learning algorithms can be integrated, and it demonstrates that trustworthiness can be learned efficiently. Specifically, the trustor's past interactions are characterized by a set of features. Without loss of generality, two classes are assumed: successful and unsuccessful interactions. Note that feature selection is application dependent, and all interactions share the same feature set. The trustor divides its historical interactions into these two groups and trains a classifier on them, obtaining a linear decision rule that estimates whether a potential interaction is likely to fall into the successful group. In our model, machine learning is applied to the access patterns described in Section 2 to assist the trust management component in predicting and preventing unnecessary or questionable future accesses, based on the behavior of similar users who hold the same role as the user sending the request to the trust model. A hedged sketch of this classification idea follows.
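The example below is a sketch of this kind of classification, inspired by but not reproducing MetaTrust [14]: past interactions are described by feature vectors, labeled successful or unsuccessful, and a linear classifier is fitted. The three features and the toy data are invented for illustration.

    # Hedged sketch of learning trustworthiness from labeled past interactions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Assumed features per interaction: (items requested, off-role items
    # requested, requests in the last hour). Label 1 = successful/legitimate.
    X = np.array([
        [2, 0, 3],   # routine lookup
        [3, 0, 5],   # routine lookup
        [4, 2, 9],   # touched items outside the role's needs
        [5, 3, 12],  # touched items outside the role's needs
    ])
    y = np.array([1, 1, 0, 0])

    clf = LogisticRegression().fit(X, y)  # linear decision boundary over features

    candidate = np.array([[3, 1, 8]])    # a potential future interaction
    print(clf.predict(candidate))        # predicted class: successful or not
    print(clf.predict_proba(candidate))  # estimated trustworthiness as a probability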

V. TRUST-AWARE RECOMMENDER SYSTEM

Currently, most trust-aware recommender systems use trust relations; distrust in recommendation is still in the early stages of development and an active area of exploration. Several works exploit distrust in social recommender systems, but they treat trust and distrust separately and simply use distrust in a way opposite to trust, for instance by filtering out distrusted users or treating distrust relations as negative weights. However, trust and distrust are shaped by different dimensions of trustworthiness. Further, trust affects behavioral intentions differently from distrust, and distrust relations are not independent of trust relations. A deeper understanding of distrust and its relation to trust can help develop efficient trust-aware recommender systems that exploit both. Recommender systems aim to overcome the information overload problem in a way that reflects users' preferences and interests. Trust-aware recommender systems attempt to incorporate trust information into classical recommender systems to improve their recommendations and address some of their challenges, including the cold-start problem and their weakness against attacks.

As shown in Figure 2, the trust manager and machine learning components interact with the system tables, business rules, and access pattern history in the following manner.

• A legitimate access request by the user

A. When the Trust Manager receives an access request from a user, it checks whether the request is in accordance with the security policies and business rules defined by the security officer for that particular user, with the associated role, on the data item the user intends to access. In this case the access is legitimate and enables the user to carry out day-to-day activity according to the business rules.

B. Next, the Trust Manager informs the Machine Learning component about the user's access request.

C. The request is stored alongside other requests in the Access Pattern History part of the database, and the access is granted. Such requests raise no concern for the Trust Manager, and since no change of security policies is required, step D is skipped.

• A questionable access request by the user

A. The Trust Manager receives an access request from a user and, by checking it against the System Tables, realizes that the access was not necessary for the task defined by the business rules.

B. Next, the Trust Manager informs the Machine Learning component about the user's access request and asks what precautionary steps should be taken.

C. The Machine Learning component checks the available access patterns of other users with the same role and, using a machine learning algorithm such as KNN, finds the closest behavioral pattern to the one received. After finding the matching access pattern, the Machine Learning component informs the Trust Manager of the possible threat and suggests in which direction the access policies should be restricted. For future reference, the request is also stored alongside other requests in the Access Pattern History, and the access is granted to the user.

D. With this recommendation in hand, the Trust Manager updates the System Tables for that user and prevents potential illegitimate accesses by that user.

A minimal sketch of the nearest-pattern matching in item C appears below.
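The paper names KNN for item C but fixes neither the features nor the distance, so the sketch below assumes 1-nearest-neighbor with Jaccard similarity over sets of accessed items; all user names and histories are illustrative.

    # Minimal sketch of item C: compare a request's item set against the access
    # patterns of other users holding the same role and report the nearest match.

    def jaccard(a: set, b: set) -> float:
        """Similarity of two access-item sets (1.0 = identical)."""
        return len(a & b) / len(a | b) if a | b else 1.0

    def nearest_pattern(request_items: set, role_histories: dict) -> tuple:
        """Return (user, similarity) of the most similar same-role pattern."""
        return max(
            ((user, jaccard(request_items, items))
             for user, items in role_histories.items()),
            key=lambda pair: pair[1],
        )

    # Illustrative histories of other Staff users (names and items are made up).
    staff_histories = {
        "bob": {"first_name", "last_name", "ssn", "date_of_birth"},  # known breach
        "carol": {"first_name", "last_name", "phone"},               # benign
    }

    user, sim = nearest_pattern({"first_name", "last_name", "ssn"}, staff_histories)
    print(user, round(sim, 2))  # bob 0.75 -> suggests restricting ssn and date_of_birth

Here the flagged request most resembles a pattern that previously led to a breach, so the recommendation to the Trust Manager is to restrict not only the requested item but also the item the similar user escalated to next.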

VI. EXAMPLE

This section presents three scenarios to clarify how the model works. Assume we have a user, Alice, who has the role Staff. There is also a set of data items in our database, including customers' first name, last name, phone number, date of birth, social security number, and home address.

A. Scenario 1

Alice requests to see the first name and last name of a customer. Her request is sent to the Trust Manager, which checks it against the business rules and policies. It turns out to be a legitimate access, as Alice is in the process of contacting a customer to set up an appointment. The Trust Manager raises no concern, and Alice's access is stored in the Access Pattern History. No change in security policy is needed, and the System Tables are untouched.

B. Scenario 2

Alice decides to see the first name, last name, and social security number of a customer. The request is sent to the Trust Manager, which checks it against the Business Rules and realizes that, for what Alice is doing (contacting a customer to set up an appointment), she does not need to see the customer's social security number. This raises a flag, and the Machine Learning component is asked for a recommendation. The Machine Learning component studies the Access Pattern History and finds, for example, three other cases where users with the same role as Alice accessed a social security number they did not require and afterwards accessed the customer's date of birth as well, resulting in further security/privacy breaches. Hence, besides storing Alice's recent request, the Machine Learning component recommends to the Trust Manager that not only should the social security number no longer be accessible to Alice, but, as a precaution, the customer's date of birth should become inaccessible to her as well. After the System Tables are updated, Alice is given only the first name and last name of the customer.

C. Scenario 3

Alice wants to access the date of birth of a customer, and since the updated security policy shows that she no longer has access to that data item, her access is denied. Recall that this is the result of Alice's own behavior: the machine learning component and the Trust Manager together, based on similar behavior by other users, took preventive action to make sure the data would not be compromised. Obviously, had Alice not attempted to access the customer's social security number, she would not have lost her privilege to access the date of birth. The sketch below replays these three scenarios.
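Under the same illustrative assumptions as the earlier sketches, the following code replays the three scenarios. The flagging rule and the coupled revocation of ssn and date_of_birth stand in for the Machine Learning component's recommendation; none of the specifics are prescribed by the paper.

    # Replaying Scenarios 1-3 with an assumed policy layout and flagging rule.

    policy = {"alice": {"first_name", "last_name", "phone",
                        "date_of_birth", "ssn", "home_address"}}
    needed = {"first_name", "last_name"}  # business rule: contact for appointment

    def request(user: str, items: set) -> set:
        """Grant permitted items; on an unnecessary request, restrict the policy."""
        if not items <= needed:
            # Flagged: revoke the unnecessary item plus the item that similar
            # users escalated to next (date_of_birth, per the learned pattern).
            policy[user] -= {"ssn", "date_of_birth"}
        return items & policy[user]

    print(request("alice", {"first_name", "last_name"}))         # Scenario 1: granted
    print(request("alice", {"first_name", "last_name", "ssn"}))  # Scenario 2: ssn withheld
    print(request("alice", {"date_of_birth"}))                   # Scenario 3: set(), denied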

VII. CONCLUSION AND FUTURE WORK

Access control models are supported by sets of policies defined by security administrators in different organizations. Previous studies have shown that building a user interface that dynamically changes with the security policies defined for each user is a cumbersome task. We presented a further expansion of an improved dynamic model that adjusts users' security policies based on the level of trust they hold. We use machine learning alongside the trust manager component, which helps the system adapt itself, learn from the user's behavior, and recognize access patterns from similar access requests, so that it not only limits illegitimate access but also predicts and prevents potentially malicious and questionable accesses. We also provided several scenarios to clarify the data flow and the interaction of the different components in our model. As future work, we plan to apply several well-studied machine learning techniques and experimentally evaluate the usability, scalability, and effectiveness of the proposed model.

REFERENCES

[1] D. Ferraiolo, D. Kuhn, and R. Chandramouli, Role-Based Access Control (Artech House Computer Security Series). Artech House, 2003.
[2] K. Ghazinour and M. Ghayoumi, "Dynamic Modeling for Representing Access Control Policies Effect," in Proceedings of the International Conference on Cyber Security (ICCS), California, USA, 2015.
[3] K. Ghazinour and M. Ghayoumi, "A Dynamic Trust Model Enforcing Security Policies," in Proceedings of the International Conference on Intelligent Information Processing, Security and Advanced Communication (IPAC'2015), 2015.
[4] T. M. Mitchell, Machine Learning. New York, NY: McGraw-Hill, 1997.
[5] W. Riker, "The Nature of Trust," in Perspectives on Social Power, Chicago: Aldine Publishing Company, 1974, pp. 63-81.
[6] K. T. Dirks and D. L. Ferrin, "The Role of Trust in Organizational Settings," Organization Science, 12(4), pp. 450-467, 2001.
[7] L. T. Hosmer, "Trust: The Connecting Link between Organizational Theory and Philosophical Ethics," The Academy of Management Review, 20(2), pp. 379-403, 1995.
[8] R. C. Mayer, J. H. Davis, and F. D. Schoorman, "An Integrative Model of Organizational Trust," The Academy of Management Review, 20(3), pp. 709-734, 1995.
[9] C. L. Corritore, B. Kracher, and S. Wiedenbeck, "On-Line Trust: Concepts, Evolving Themes, a Model," International Journal of Human-Computer Studies, 58(6), pp. 737-758, 2003.
[10] F. Flores and R. C. Solomon, "Creating Trust," Business Ethics Quarterly, 8(2), pp. 205-232, 1998.
[11] R. Sandhu, D. Ferraiolo, and R. Kuhn, "The NIST Model for Role-Based Access Control: Towards a Unified Standard," in Proceedings of the Fifth ACM Workshop on Role-Based Access Control, 2000, pp. 47-63.
[12] R. Ramakrishnan and J. Gehrke, Database Management Systems. McGraw-Hill, 2003.
[13] The Economic Impact of Role-Based Access Control, RTI Project Number 07007.012, National Institute of Standards and Technology (NIST), 2002. [Online]. Available: www.nist.gov/director/prog-ofc/report02-1.pdf
[14] L. Xin, G. Tredan, and A. Datta, "MetaTrust: Discriminant Analysis of Local Information for Global Trust Assessment," in Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2011.