Towards Personal Privacy Control

Susana Alcalde Bagüés 1,2, Andreas Zeidler 1, Carlos Fernandez Valdivielso 2, and Ignacio R. Matias 2

1 Siemens AG, Corporate Research and Technologies, Munich, Germany
2 Public University of Navarra, Department of Electrical and Electronic Engineering, Navarra, Spain
{susana.alcalde.ext,a.zeidler}@siemens.com
{carlos.fernandez,natxo}@unavarra.es
Abstract. In this paper we address the realization of personal privacy control in pervasive computing. We argue that personal privacy demands differ substantially from those assumed in enterprise privacy control. This is demonstrated by introducing seven requirements specific for personal privacy, which are then used for the definition of our privacy policy language, called SenTry. It is designed to take into account the expected level of privacy from the perspective of the individual when interacting with context-aware services. SenTry serves as the base for implementing personal privacy in our User-centric Privacy Framework for pervasive computing.
1 Introduction

Privacy is a prime concern in today's information society, where personal sensitive data has to be revealed in common daily tasks. Thus, laws exist that control the collection and processing of sensitive information and are meant to prevent its misuse by enterprises. Individuals, though, are often not really aware of personal privacy issues and mostly make decisions casually or on the move. Even in open settings, like the Internet, users control privacy mostly manually and are limited to acknowledging some prefabricated privacy statements. We believe that, for the upcoming era of so-called Ambient Intelligence [1], which fosters the deployment of heterogeneous Context-Aware Mobile Services (CAMS), such habitual control of personal privacy will eventually fall short. The large number of services alone will make a manual per-use authorization of access to personal data (as required by law) an impossible task.

Personal privacy is a more "intimate" concern than the enterprise's requirement to meet existing legislation. The challenge is to meet the individual's expected level of privacy when information is revealed to third parties. In this paper, we focus on managing personal privacy "offline", i.e., before actually being in a particular situation. To do so, we have elaborated requirements for personal privacy control and applied them in the design of the SenTry language, which allows users to generate appropriate User Privacy Policies to automatically govern all accesses to their sensitive data. The SenTry language is presented in this paper as a centerpiece of our ongoing work, focused on the development of the User-centric Privacy Framework (UCPF) [2].
The SenTry language allows the specification of fine-grained constraints on the use of personal data to conform to a user's privacy criteria. Thus, the UCPF takes the role of a trusted privacy enforcement point and acts as the sentry of its users' personal privacy by supervising the application of policies in the interaction of the user with CAMSs.

The remainder of this document is structured as follows: First, Section 2 provides some background information on privacy policies. After that, in Section 3, requirements for implementing personal privacy are elaborated. Section 4 then details the central features of the SenTry language, which is followed by Section 5, where details of the implementation and simple use cases are presented. Finally, Section 6 indicates the directions of future work and concludes the paper.
2 Background

While there are many languages for access control, they are rarely adequate for enforcing privacy policies [3]. Privacy control in general needs expressive methods to delimit accesses to and usage of sensitive information. A privacy policy language should cater for the enforcement of a user's privacy criteria as well. Also, requirements differ depending on whether we consider the needs of an enterprise or those of an individual.

In many countries legislation regulates the collection and use of private data. It prevents its misuse and demands that enterprises comply with certain privacy practices (directives 95/46/EC and 2002/58/EC in Europe). Therefore, the main requirement of an enterprise towards a privacy policy system is that it allows for automatic enforcement of enterprise privacy practices. Thus, the enterprise reduces the risk of unauthorized disclosure and of misuse of the collected data. A description of enterprise-level privacy requirements is given in [4].

There exist only a few policy languages that support an automated analysis of privacy criteria. Probably the best known are the Platform for Privacy Preferences (P3P) [5], the Enterprise Privacy Authorization Language (EPAL) [6] and the eXtensible Access Control Markup Language (XACML) [7]. P3P is a standard from the World Wide Web Consortium (W3C) that enables websites to express their privacy policies and to compare them with the user's privacy criteria, which in turn can be specified by using A P3P Preference Exchange Language (APPEL) [8]. While APPEL provides a good starting point for expressing user privacy preferences, it cannot support the richness of expression needed in ambient intelligence scenarios (see Section 3). In [9], user requirements for a privacy policy system are detailed with emphasis on the granularity of constraints users might want to apply to control the distribution of their location information. Here, rules are implemented as system components called validators, though no concrete implementation language is defined. Apart from this lack of expressiveness, P3P does not address the problem of enforcing a website's privacy policy [4]. The use of P3P alone gives no assurance about the actual privacy practices in the backend of the website and whether "obligations" implicitly included in the user privacy preferences, such as the purpose of the data collection, are respected.

EPAL and XACML are two platform-independent languages that support the definition and enforcement of privacy policies and obligations. IBM submitted EPAL 1.2 to the W3C in November 2003, for consideration as a privacy policy language standard
(still pending). XACML 2.0 is an XML-based language designed primarily for access control and extended for privacy. It has been accepted as an OASIS standard and is widely deployed. In [3], a comparison of both languages shows that EPAL offers only a subset of the functionality of XACML. Nevertheless, while XACML has been developed for some time and has reached a high level of standardization, it has only started to take possible privacy constraints on information management into account. It may be enough for enterprise privacy control, but it lacks some important features for enforcing personal privacy. In the next section, we present the requirements that a privacy policy language shall meet from the perspective of the individual.
3 Requirements for Implementing Personal Privacy

A privacy policy system used to define and enforce an individual's privacy criteria should provide a way of describing the elements involved in his/her interaction with a CAMS (Figure 1), together with the environmental conditions under which such an interaction may or may not occur, and operate accordingly. Next we outline seven requirements, which have guided the design of our policy system and particularly the definition of the SenTry language, and describe how they were incorporated into our implemented solutions.

Centralized privacy enforcement. In order to offer a controlled distribution of sensitive personal data, the policy system should centralize the collection and distribution of context-related data of its users. Thus, we provide a trusted privacy enforcement point with our UCPF, which collects context information of its users from appropriate context providers and controls all accesses to privacy-relevant information. Figure 1 represents the role of the UCPF in our model of a protected interaction chain, showing the following entities: i) a Target is the tracked individual and the source of any Resource (location, calendar, situation, medical data, etc.); ii) a CAMS compiles the resource and carries out the Action (the purpose of the interaction); iii) a Subject is the user of the service and the final recipient of the data; iv) a Context Provider acts as an intermediate entity, responsible for collecting and disseminating context; v) the UCPF acts as a sink of its users' context and enforces their privacy criteria, providing a protected interaction with the service.
Fig. 1. Our Model of Interaction Chain with a CAMS
User awareness. A privacy policy system should be aware of its users' needs, avoiding situations in which the target has a passive role limited to accepting or rejecting a CAMS's privacy practices without explicitly specifying his privacy preferences, as is the case in some previous approaches, e.g., PawS [10]. In the UCPF, individuals have an active role in the specification and enforcement of their privacy criteria.
Context awareness. Policies should be context-aware in the sense that their evaluation involves checking individuals' context against the applicable policy. In addition to the typical restrictions on the entities of Figure 1, constraints related to a user's context should be possible. Moreover, the peer's context is a rich source of information that can also be employed to restrict when a rule applies. Therefore, our SenTry language incorporates dynamic constraints that take individuals' context values into account and express context-awareness rules. For realizing the idea of the UCPF being the sink of its users' context information from potentially different context providers, a standard model for context representation is assumed.

Semantic awareness. Obviously, the interaction with pervasive services and user-customized applications demands some awareness of the underlying semantics. Therefore, the policy language used for reasoning on context information has to be expressive and aware of the underlying meaning at the same time. This recommends the use of a semantic representation model for privacy policies and context data, which also provides a common semantic frame for the collaboration of the different entities involved in the interaction chain.

Post-disclosure awareness. Privacy control should not be limited to the pre-disclosure phase typical of access control. To guarantee the expected privacy once the data has been disclosed, it is vital to provide mechanisms that delimit the extent of the granted action, e.g., a target may want to limit the time for which an action is granted, or to restrict any secondary use of the transmitted data. For that, we use Obligations [11], which allow binding an entity (service, subject) to the obligation of performing a predefined action in the future on a specified object. We include the so-called Positive Obligation Permission as a possible result of the Positive Authorization Rule, to trigger a negotiation process between the UCPF and a CAMS before any resource is transmitted. As a result, the CAMS might agree on holding an obligation with the UCPF's user, allowing post-disclosure control.

Transformations. The user should have means to reveal "fuzzy" information when using a certain service. We introduce the concept of Transformations [12] to allow users to better specify their privacy preferences. Basically, we define Transformations as any process that the tracked user may define on a specific piece of context information to limit the maximum accuracy to be revealed, e.g., coordinates with a maximum accuracy of 500 m, or filtering calendar items labeled as private (a sketch of such a transformation is given at the end of this section). We have created the Positive Transformation Permission and the Positive Transformation Obligation Permission. Both include a Transformation instance added to the positive permission of disclosure, which is performed before transmitting the data.

Constraints on active interactions. For practical reasons (like fine-grained control of stateful information disclosure; see the examples in Section 5 for more details), the active and passive roles in the interaction of a user with a CAMS must be distinguishable. We classify user interactions as active or passive. In the active case the user is actively using the service and some action from the service is requested, e.g., where to find the closest Italian restaurant; to compile an appropriate answer the CAMS asks for the current location in return.
The interaction is passive if the user receives a request from the CAMS without having previously requested the service, e.g., if a colleague asks the Friendfinder application where a user is currently located, the disclosure of the user's location
does not necessarily imply that he gets any benefit in return. We will show that introducing constraints that make a disclosure dependent on whether or not a service request is based on an active interaction brings much greater privacy control (cf. Section 5).
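To make the Transformations requirement from above more tangible, the following is a minimal plain-Java sketch, not part of the actual UCPF implementation, of a transformation that limits coordinate accuracy to roughly 500 m by snapping latitude and longitude to a grid. All class, method and value names are purely illustrative assumptions.

```java
/** Illustrative only: reduces location accuracy before disclosure. */
public final class AccuracyTransformation {

    private final double maxAccuracyMetres;

    public AccuracyTransformation(double maxAccuracyMetres) {
        this.maxAccuracyMetres = maxAccuracyMetres;
    }

    /**
     * Snaps WGS84 coordinates to a grid whose cell size approximates the
     * requested accuracy, so the exact position is never revealed.
     */
    public double[] apply(double latitude, double longitude) {
        // ~111,320 m per degree of latitude; longitude degrees shrink with latitude.
        double latStep = maxAccuracyMetres / 111_320.0;
        double lonStep = maxAccuracyMetres
                / (111_320.0 * Math.cos(Math.toRadians(latitude)));
        return new double[] {
                Math.round(latitude / latStep) * latStep,
                Math.round(longitude / lonStep) * lonStep
        };
    }

    public static void main(String[] args) {
        // "Coordinates with a maximum accuracy of 500 m", as in the Transformations requirement.
        double[] blurred = new AccuracyTransformation(500).apply(42.8169, -1.6432);
        System.out.printf("Disclosed location: %.4f, %.4f%n", blurred[0], blurred[1]);
    }
}
```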
4 SenTry Language

The SenTry language (SeT) is built on top of the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL), as a combination of instances of our user-centric privacy ontology (the SeT ontology) and SWRL predicates. The SeT ontology describes the classes and properties associated with the policy domain in OWL DL using a unique XML namespace. OWL provides considerable expressive power for modeling domain knowledge. However, it also has some limitations, which mainly stem from the fact that it is not possible to capture relationships between a certain property and another in the domain [13]. For instance, we cannot define in OWL that if a person hasName "Pablo" and hasGroup "UPNA", the property hasAccessMyLocation should get the value "True". A possible way to overcome this restriction is to extend OWL with a semantic rule language. An important step has been taken with the definition of SWRL, based on a combination of the OWL DL and OWL Lite sublanguages with RuleML. Due to the mentioned limitations of the OWL language we have included SWRL predicates to reason about the individuals provided by the SeT ontology, primarily in terms of classes and properties.

In common with many other rule languages, SWRL rules are written as antecedent-consequent pairs. In SWRL terminology, the antecedent is referred to as the rule body and the consequent as the head. The head and body consist of a conjunction of one or more atoms (SWRL predicates). SWRL does not support logical combinations of predicates more complex than conjunction, represented with the symbol "∧" in the examples shown in Figures 5 and 6.

The SeT ontology has three main constructors, namely: Policy, Service Request and User Request. Together they model the elements needed to provide a protected interaction of a user with a CAMS. The OWL classes and properties of the Policy constructor are shown in Figure 2; an instance of the class Policy represents the privacy criteria of an entity, who is identified with the property onTarget. In our model, the system holds a unique policy instance per target, which contains a collection of rules associated with the property hasRule. A SenTry rule is created as a combination of instances of the class set:Rule and of the class swrl:Imp (SWRL rules).
[Figure 2 (class diagram): set:Policy, with subclasses set:PersonalPolicy and set:OrganizationPolicy, is linked via hasRule to set:Rule, with subclasses set:PAR and set:NAR; the rule properties onTarget, onSubject, onService, onAction and onResource point to set:Entity, set:Action and set:Resource; hasEffect points to set:RuleEffect; the flags MatchedRequest and UpdateResource range over owl:oneOf {"True", "False"}; each set:Rule is linked to an SWRL rule (swrl:Imp).]
Fig. 2. Classes and Properties of a Policy
A policy is divided into two disjoint subclasses, PersonalPolicy and OrganizationPolicy. Organizational policies cover the need of companies to manage specified context of a predefined group of individuals (employees, clients, etc.). A person, as part of an organization, may adhere, usually based on a contractual relationship, to some privacy policy which might be orthogonal to her own. The personal policy is mainly about how individuals control the use of personal information in everyday life. In the following we focus on the personal policy, leaving the organizational policy for future work.

In SenTry, we have included two rule modalities: the Positive Authorization Rule (PAR) and the Negative Authorization Rule (NAR). By using one of these two rule subclasses we explicitly delimit the elements involved in the interaction described in Figure 1 with the properties onSubject, onService, onAction and onResource, and specify the result of such an interaction with the property hasEffect. The only possible effect of a NAR rule is to deny the disclosure of the requested data; this rule includes an instance of the Negative Authorization Permission (NAP) as its result. The PAR rules, on the other hand, allow the disclosure of the data by using one out of four positive permissions: the Positive Authorization Permission (PAP), the Positive Transformation Permission (PTP), the Positive Obligation Permission (POP) and the Positive Transformation Obligation Permission (PTOP). Since a policy might contain multiple rules, and since rules might evaluate to different results for a given request (e.g., a NAR to "Deny" and a PAR to "Grant", "Grant with transformation", "Grant with obligation", or "Grant with transformation and obligation"), the system resolves potential rule conflicts (see Section 5) before compiling the effect of the policy evaluation process. The final result is assigned to the request class through its property hasEvaluation (Figure 3(a)).
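As a plain-Java analogy, not the actual OWL/SWRL encoding, the five permission types named above can be pictured as a simple enumeration; the constant names merely mirror the abbreviations used in the text.

```java
/** Illustrative analogue of the SenTry rule outcomes described in the text. */
public enum RuleEffect {
    NEGATIVE_AUTHORIZATION,                 // NAP: deny disclosure (the only effect of a NAR)
    POSITIVE_AUTHORIZATION,                 // PAP: grant disclosure as requested
    POSITIVE_TRANSFORMATION,                // PTP: grant, but apply a transformation first
    POSITIVE_OBLIGATION,                    // POP: grant, subject to a negotiated obligation
    POSITIVE_TRANSFORMATION_OBLIGATION;     // PTOP: grant with transformation and obligation

    /** A NAR can only deny; every other effect stems from a PAR. */
    public boolean isGrant() {
        return this != NEGATIVE_AUTHORIZATION;
    }
}
```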
Fig. 3. Classes and Properties: (a) Service Request, (b) User Request
Our second constructor is the Service Request; it holds the information needed to check the applicability of a SenTry rule against a third-party request. The policy decision point (PDP) [14] of the UCPF is fired to evaluate an instance of the service request class shown in Figure 3(a). The value given by the OWL property withTarget selects a policy among the policies stored in the system. The remaining properties of the service request class, withSubject, withService, withAction and withResource, are used to check whether the rules included in the selected policy match that request. A rule applies if all the values included in the service request are equal to those given in the rule by onSubject, onService, onAction and onResource, as paraphrased in the sketch below.
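The matching condition can be paraphrased in plain Java as follows. The record types are hypothetical, simplified stand-ins for the OWL individuals of Figures 2 and 3(a), not the SeT ontology itself.

```java
import java.util.List;
import java.util.Objects;

/** Hypothetical stand-ins for the OWL individuals of Figures 2 and 3(a). */
record ServiceRequest(String withTarget, String withSubject, String withService,
                      String withAction, String withResource) {}

record Rule(String onSubject, String onService, String onAction, String onResource) {

    /** A rule applies only if every element of the request equals the rule's element. */
    boolean matches(ServiceRequest r) {
        return Objects.equals(onSubject, r.withSubject())
                && Objects.equals(onService, r.withService())
                && Objects.equals(onAction, r.withAction())
                && Objects.equals(onResource, r.withResource());
    }
}

record Policy(String onTarget, List<Rule> rules) {

    /** withTarget selects the policy; the remaining properties select the applicable rules. */
    List<Rule> applicableRules(ServiceRequest r) {
        if (!onTarget.equals(r.withTarget())) {
            return List.of();   // this policy does not govern the requested target
        }
        return rules.stream().filter(rule -> rule.matches(r)).toList();
    }
}
```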
The last constructor created in SeT is the User Request. This element models active interactions between a user and a CAMS (see Section 3). A user, identified with the property hasUser in Figure 3(b), might actively apply for a particular CAMS (the service instance attached to the property toService) and thereby allow the action defined with allowAction on the resource contained in the property allowResource of a user request instance. The CAMS uses the resource to compile the expected answer. We explicitly single out rules that affect an active interaction with a service by evaluating an instance of the user request class against an instance of the service request, as sketched below.
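A possible way to detect an active interaction, sketched in plain Java under the assumption that the UCPF keeps the currently active user requests on record; the types and field names below are illustrative, not those of the ontology.

```java
import java.util.Collection;

/** Illustrative model of an active (user-initiated) interaction. */
record UserRequest(String hasUser, String toService,
                   String allowAction, String allowResource, boolean isActive) {}

/** Stripped-down service request, as seen by the UCPF. */
record IncomingRequest(String withTarget, String withService,
                       String withAction, String withResource) {}

final class ActiveInteractionConstraint {

    /**
     * True if the target itself applied for the service and thereby allowed this
     * action on this resource, i.e. the interaction is "active" in the sense of Section 3.
     */
    static boolean isActiveInteraction(IncomingRequest sr, Collection<UserRequest> userRequests) {
        return userRequests.stream().anyMatch(ur ->
                ur.isActive()
                        && ur.hasUser().equals(sr.withTarget())
                        && ur.toService().equals(sr.withService())
                        && ur.allowAction().equals(sr.withAction())
                        && ur.allowResource().equals(sr.withResource()));
    }
}
```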
5 Implementing a SenTry Policy

Our PDP component has been developed in Java on top of the Java Expert System Shell (Jess). Once the PDP gets an evaluation request message, it triggers the execution of the Jess rule engine and accesses the policy repository to retrieve the policy applicable to the current situation, which is evaluated based on the provided service request. Along with rules in the Jess language, Jess accepts rules formulated in SWRL [15], which are translated into Jess rules and facts with the SWRL factory Java API. Jess implements the well-known Rete algorithm to efficiently evaluate rule predicates and facts. Each time a user defines a new rule, a new instance of set:Rule is created together with the needed SWRL predicates. In our system, rule predicates are first expressed in SWRL and then automatically translated to Jess. The SWRL predicates are classified into three groups, namely: Filter predicates, Static predicates and Dynamic predicates.
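The actual PDP evaluates SWRL rules translated into Jess, as described above; the following plain-Java sketch only mirrors the control flow that is detailed in the remainder of this section and in Figure 4 (filter, static and dynamic predicates, followed by a grant-overrides combination). All types are hypothetical, and the four variations of the combining algorithm are not modelled.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the PDP control flow; the real PDP runs on Jess. */
final class PolicyDecisionPoint {

    /** Minimal stand-in for a SenTry rule and its three predicate groups. */
    interface SentryRule {
        boolean filterPredicatesHold(Object request);   // step 1: does the rule match the request?
        boolean staticPredicatesHold(Object request);   // step 2: recipients, active interaction
        boolean hasDynamicPredicates();
        boolean dynamicPredicatesHold(Object request);  // step 3: time and context constraints
        String effect();                                // e.g. "NAP", "PAP", "PTP", "POP", "PTOP"
    }

    String evaluate(Object request, List<SentryRule> rulesOfSelectedPolicy) {
        List<String> effects = new ArrayList<>();
        for (SentryRule rule : rulesOfSelectedPolicy) {
            if (!rule.filterPredicatesHold(request)) continue;    // step 1
            if (!rule.staticPredicatesHold(request)) continue;    // step 2
            if (rule.hasDynamicPredicates()
                    // in the UCPF, context values are refreshed from the context
                    // providers before the dynamic predicates are evaluated
                    && !rule.dynamicPredicatesHold(request)) continue;  // step 3
            effects.add(rule.effect());
        }
        return combine(effects);
    }

    /** Simplified grant-overrides: any positive permission wins over the default deny. */
    private String combine(List<String> effects) {
        return effects.stream().filter(e -> !"NAP".equals(e)).findFirst().orElse("NAP");
    }
}
```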
Fig. 4. Policy Evaluation Process
The SWRL predicates included in a SenTry rule are grouped into three SWRL rules and enforced consecutively, as shown in Figure 4. In step 1, the Filter predicates are evaluated. They are a set of common predicates used to filter those instances of the set:Rule class that match a request. There is only a single SWRL "rule-step 1", shared by all the rules. If it evaluates to true, the rule's MatchedRequest property (see Figure 2) is set to "true" and the SWRL "rule-step 2" is triggered. If, on the other hand, it is false for all the set:Rule individuals, the process ends without further evaluation. The evaluation of the Static predicates leads (if true) to two possible results: i) there are no Dynamic predicates associated with the matched set:Rule, and the SWRL "rule-step 2" asserts the appropriate instance of the RuleEffect class; ii) Dynamic predicates are defined, and the evaluation of the Static predicates has the effect
of setting the UpdateResources property of that set:Rule instance from "false" to "activated". The Static predicates exclusively include constraints on multiple recipients and/or the active interaction constraint. Once this second step is finished, an internal process checks whether any instance of a set:Rule has the property UpdateResources equal to "activated" and then updates the list of context values from the respective context providers. Finally, it sets the UpdateResources property to "true".

Step 3 of the evaluation process evaluates the Dynamic predicates. They include dynamic constraints that have to be updated before further evaluation, such as time constraints and constraints on the target's context or on a target peer's context. As a consequence, this step always returns a RuleEffect. The last part of the process consists of applying a combining algorithm to the RuleEffect instances returned so far; it combines the rule effects generated in the second and third step. The system generates NAR rules per service and subject by default; only when a user explicitly creates a PAR is it possible to permit a requested action. We use a grant-overrides combining algorithm to resolve potential conflicts, with four variations that allow or disallow inheriting obligations and transformations from the POP, PTP and PTOP results.

5.1 Examples

We now introduce two simple use cases that show how a SenTry policy is implemented. Use case 1 includes only two SWRL rules (steps 1 and 2), while use case 2 also has Dynamic predicates (step 3). Step 1, Figure 5(b), is, as mentioned, common to both use cases. It filters the policy with target "Pablo" and the rules within that policy that specify the same resource and action as the given request.

Use case 1: Pablo uses his mobile phone to call a taxi in the evening. He wants the taxi company to determine his location automatically to ensure a smooth pickup, but he does not want them to be able to trace him once the journey is over.

In this use case the target, Pablo, allows being tracked during a limited time slot, while the request for the taxi service is still "active". Thus, the "rule-step 2" (Figure 5(c)) includes only one constraint: the active interaction constraint. Step 2 here checks whether an active instance of a user request, with the flag isActive equal to true, exists for the given service request and returns a grant permission, defined by the set:Rule (PabloPAR-UC1) in Figure 5(a).

Use case 2: When Pablo is on a business trip, he likes to meet old colleagues who may also be in the city. He allows being located by the peer group Colleagues, but only if the potential subject is in the same city and with a maximum accuracy of 500 m.
Fig. 5. Pablo's Rule - Use Case 1: (a) set:Policy, (b) SWRL rule (step 1), (c) SWRL rule (step 2)
Fig. 6. Pablo's Rule - Use Case 2: (a) set:Policy, (b) SWRL rule (step 2), (c) SWRL rule (step 3)
Use case 2 requires constraints based on multiple subjects as well as dynamic constraints. The "rule-step 2", Figure 6(b), checks the type of service as well as the group of the requesting subject. Here, Pablo allows the disclosure of his coordinates with a maximum accuracy of 500 m (PTP-500) to the group Colleagues. The "rule-step 3" applies constraints based on the target's situation (business trip) and limits the disclosure to cases in which target and subject are in the same city (Figure 6(c)). There, the rule includes predicates expressed over instances of the context model ontology (CM), namely situation and coordinates. We assume a standard context model defined as an OWL ontology. The CM is related to the SeT ontology through the property hasID-CP, per person and per context provider, as shown in Figure 6(c). Assuming that the system also holds two NAR rules, created by default, to govern the interaction of the taxi service and the Friendfinder application with Pablo, Figures 5 and 6 show all the rules necessary to meet Pablo's demands for granting both services.
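To make the dynamic part of use case 2 concrete, here is a hedged plain-Java sketch of the step-3 check: disclosure is granted with the PTP-500 permission only while Pablo's situation is a business trip and the requesting colleague is in the same city. All names and values (e.g., the city) are illustrative; the real rule is expressed in SWRL over the context model ontology.

```java
/** Illustrative check corresponding to the dynamic predicates of use case 2. */
final class UseCase2Rule {

    record PersonContext(String situation, String city) {}

    /**
     * Grants a PTP-500 disclosure (coordinates blurred to 500 m, e.g. with a
     * transformation like the one sketched in Section 3) only while the target
     * is on a business trip and the subject is in the same city; denies otherwise.
     */
    static String evaluate(PersonContext target, PersonContext subject) {
        boolean onBusinessTrip = "BusinessTrip".equals(target.situation());
        boolean sameCity = target.city().equals(subject.city());
        return (onBusinessTrip && sameCity) ? "PTP-500" : "NAP";
    }

    public static void main(String[] args) {
        PersonContext pablo = new PersonContext("BusinessTrip", "Munich");
        PersonContext colleague = new PersonContext("AtWork", "Munich");
        System.out.println(evaluate(pablo, colleague));   // prints PTP-500
    }
}
```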
6 Conclusions and Outlook

In this paper we introduce seven requirements for personal privacy control and outline how they are incorporated into our implemented solutions: the User-centric Privacy Framework (UCPF) and the SenTry policy language. In particular, we give a detailed description of our SenTry language, built as a combination of OWL individuals from the SeT ontology and SWRL predicates. It allows for the specification of fine-grained constraints on the use of personal and sensitive data to conform to a user's privacy criteria. The UCPF then provides a trusted privacy enforcement point that acts as the sentry of its users' personal privacy by supervising the application of policies in the interaction chain of the user with context-aware services. Additionally, this paper presents two examples that show how a SenTry policy can be specified and what it looks like.

Besides its application in our prototype of the UCPF, as part of the smart home lab infrastructure, the SenTry language is currently used in the IST project CONNECT. The CONNECT project is centered around two main scenarios: the "Home CareWork" and the "Mobile Advertising" scenario. For both, implementation work is ongoing in their respective demonstrator applications, where SenTry policies are used. This allows us to study the applicability of the policy language in real scenarios and in greater detail.

Further development of the SenTry language obviously involves providing an intuitive and easy-to-use user interface, which will include three complementary tools,
namely: a Privacy Management Tool, a Privacy Status Tool and a Deniability Tool. This will open up the management and control of privacy to the average user of context-aware services. Another ongoing line of work is the extension of the UCPF with an obligation negotiation protocol, which allows services to be bound to respective SenTry obligations. Altogether, in this paper we outlined how a combination of our UCPF and the SenTry language can be leveraged to control and manage the dissemination of personal sensitive data to the outside world.
References

1. IST Advisory Group (ISTAG): Ambient Intelligence: From Vision to Reality. For Participation - in Society and Business. Luxembourg: Office for Official Publications of the European Communities (2003)
2. Bagüés, S.A., Zeidler, A., Valdivielso, C.F., Matias, I.R.: Sentry@home - Leveraging the Smart Home for Privacy in Pervasive Computing. International Journal of Smart Home (to appear, 2007)
3. Anderson, A.: A Comparison of Two Privacy Policy Languages: EPAL and XACML. Technical Report TR-2005-147, Sun Microsystems Laboratories (2005)
4. Dekker, M., Etalle, S., den Hartog, J.: Security, Privacy and Trust in Modern Data Management, ch. 25: Privacy Policies. Springer, Heidelberg (2007)
5. Cranor, L., Langheinrich, M., Marchiori, M., Reagle, J.: The Platform for Privacy Preferences 1.0 (P3P1.0) Specification. W3C Recommendation (2002)
6. Ashley, P., Hada, S., Karjoth, G., Powers, C., Schunter, M.: Enterprise Privacy Authorization Language (EPAL 1.2) Specification (2003), http://www.zurich.ibm.com/security/enterprise-privacy/epal/
7. OASIS Standard: eXtensible Access Control Markup Language (XACML), Version 2 (2005)
8. Langheinrich, M., Cranor, L., Marchiori, M.: APPEL: A P3P Preference Exchange Language. W3C Working Draft (2002)
9. Myles, G., Friday, A., Davies, N.: Preserving Privacy in Environments with Location-Based Applications. IEEE Pervasive Computing 2(1), 56–64 (2003)
10. Langheinrich, M.: A Privacy Awareness System for Ubiquitous Computing Environments. In: Borriello, G., Holmquist, L.E. (eds.) UbiComp 2002. LNCS, vol. 2498, pp. 237–245. Springer, Heidelberg (2002)
11. Kagal, L., Finin, T.: Modeling Conversation Policies Using Permissions and Obligations. Journal of Autonomous Agents and Multi-Agent Systems (2006)
12. Bagüés, S.A., Zeidler, A., Valdivielso, C.F., Matias, I.R.: A User-Centric Privacy Framework for Pervasive Environments. In: Meersman, R., Tari, Z., Herrero, P. (eds.) On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops. LNCS, vol. 4278, pp. 1347–1356. Springer, Heidelberg (2006)
13. Horrocks, I., Patel-Schneider, P.F., Bechhofer, S., Tsarkov, D.: OWL Rules: A Proposal and Prototype Implementation. Journal of Web Semantics 3(1), 23–40 (2005)
14. Yavatkar, R., Pendarakis, D., Guerin, R.: RFC 2753 - A Framework for Policy-Based Admission Control (January 2000)
15. O'Connor, M., et al.: Supporting Rule System Interoperability on the Semantic Web with SWRL. In: Gil, Y., Motta, E., Benjamins, V.R., Musen, M.A. (eds.) ISWC 2005. LNCS, vol. 3729. Springer, Heidelberg (2005)