Reconciling Privacy and Security in Pervasive Computing:
The Case for Pseudonymous Group Membership

Ian Wakeman (Dept of Informatics, University of Sussex, Brighton, UK) [email protected]
Dan Chalmers (Dept of Informatics, University of Sussex, Brighton, UK) [email protected]
Michael Fry (School of IT, University of Sydney, Sydney, Australia) [email protected]

ABSTRACT

In this paper, we outline an approach to the identification of entities for access control that is based on membership of groups rather than on individuals. By using group membership as a level of indirection between the individual and the system, we can increase privacy and provide incentives for better behaviour. Privacy comes from the use of pseudonyms that are generated within the group and can be authenticated as belonging to the group. The incentives for better behaviour come from the continuous nature of groups: members may come and go, but the group lives on, and groups are organised so as to ensure group longevity and to prevent actions which may harm the group's reputation. We present a novel pseudonym generation mechanism suitable for use in groups without a centralised administration. Finally, we argue that the use of group membership as the basis for formulating policies on interaction is more efficient for disconnected operation, facilitating proxies and the efficient storage of revoked memberships and distrusted organisations within Bloom filters for small memory footprints.

Keywords Pervasive Computing, Homomorphic Cryptography, Access Control

1. INTRODUCTION

As is described in the first sentence of nearly every pervasive computing research paper, our aim is to enable Mark Weiser's vision of ubiquitous computing. One barrier that has to be overcome is to provide safeguards such that services and infrastructures are used in a responsible manner. We must ensure that devices are not used in a nefarious manner, such as sending spam email over a public wireless network, and that when there is contention for device access, priority users are given access. Meeting such constraints requires the implementation of access control for pervasive devices and environments.


Access control is a well-researched area in operating systems and other security applications. The typical implementation assumes that there is a single administration protecting and controlling the use of the computing infrastructure, and that there are well-defined tasks and processes that can be modelled in the access control techniques. The users and machines are finite in number and generally known a priori. The environment changes in a controlled manner, and can be viewed as at least quasi-static. The goal is to maximise task efficiency and minimise the risk of disruption.

In contrast, many proposed pervasive computing systems are intended to be dynamic and unfocused on a particular task. They are instead intended to facilitate opportunity [1]. The access control goals are less clearly defined, since the population of users may be dynamic and how the equipment is to be used is not known in advance. Instead, the goal of access control is to manage the risk of abuse, and to ensure accountability and audit trails in the event of malfeasance.

Policy-based management and configuration offers a way forward in implementing a flexible and adaptive approach to managing pervasive computing systems [2, 3]. Policies are triples of events, conditions and actions. The management system monitors the system, watching for the specified events. When one occurs, and the conditions in the system also match, the action is triggered. The power of such systems is that the policies can be specified in languages with variables that are bound based on the conditions at run-time, allowing management to be context dependent. The proponents of policy systems argue that this level of indirection in capturing management offers flexibility and is closer to how people specify management goals [4].
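The event-condition-action triple can be sketched as a minimal policy engine. This is an illustrative sketch, not the paper's system; the class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

Context = Dict[str, Any]

@dataclass
class Policy:
    event: str                            # event name the policy is bound to
    condition: Callable[[Context], bool]  # evaluated against run-time context
    action: Callable[[Context], None]     # triggered when event and condition match

class PolicyEngine:
    def __init__(self) -> None:
        self.policies: List[Policy] = []

    def register(self, policy: Policy) -> None:
        self.policies.append(policy)

    def notify(self, event: str, ctx: Context) -> int:
        """Fire the actions of all policies whose event and condition match;
        return how many fired. Context binding makes the decision
        context dependent, as described in the text."""
        fired = 0
        for p in self.policies:
            if p.event == event and p.condition(ctx):
                p.action(ctx)
                fired += 1
        return fired

# Example policy: grant access only to members of a hypothetical
# "maintenance_engineers" group carried in the request context.
engine = PolicyEngine()
engine.register(Policy(
    event="access_request",
    condition=lambda ctx: "maintenance_engineers" in ctx.get("groups", []),
    action=lambda ctx: ctx.update(granted=True),
))
ctx = {"groups": ["maintenance_engineers"]}
engine.notify("access_request", ctx)
```

The condition closures are where run-time variable binding happens: the same registered policy yields different decisions for different contexts.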
Since the users of pervasive systems are unknown beforehand, there has been much work on dynamically computing how much to trust entities, and on using this computed trust as a condition in the management policy. Trust can be calculated in many ways, such as direct knowledge of previous interactions, recommendations from other users, and more indirect ways of calculating a reputation based on previous interaction histories [5]. Rather than limiting risk, the emphasis is on managing the level of risk so that the losses from any bad decision are commensurate with the value of offering the service. One of the major design goals in such systems is setting incentives for good behaviour, and punishments for bad behaviour, so that the system works if people behave rationally. Implicit in such trust-based systems is the need to track individuals, which has wide implications for their privacy. We take it as given that privacy is desirable.

Therefore, in any interaction, we wish to be able to shield the real identity of the user by utilising a pseudonym or alias. A pseudonym is a label used to identify the user in digital interactions. It should be difficult to identify and prove the connection between a pseudonym and the real user, though this obviously cannot always be made impossible. For instance, if a user has to show co-location before gaining access to a device, and is recorded by other devices as being the sole person in the location, then we have a proven link between the pseudonym and the user [6]. We have derived the following desiderata for the properties of pseudonym systems for pervasive computing:

Fast authentication: If a user claims to own a pseudonym, this should be provable simply and efficiently.

Variable longevity: Pseudonyms can be used as one-off identities and then discarded, or may be re-used over long periods of time in multiple different contexts.

Interaction history: For reputation systems, and in cases where abuse is detected, the history of interactions undertaken by the user operating under the pseudonym should be recoverable.

Multiple ownership: In different contexts, a user may wish to use different pseudonyms, for example to manage multiple different identities. For interactions which have recorded consequences, the user may wish to use several identities so that the interaction is associated with all of them. Users should therefore be able to carry multiple valid pseudonyms concurrently. However, although a pseudonym may be usable by many different people, we require a unique individual to be ultimately responsible.

Compact representation: Our aim is to provide identifiers for interaction that scale over bandwidth, power and memory. We should therefore ensure that the most compact representation which meets all the design goals is used, e.g. through using locally derived indices to substitute for keys once they have been exchanged.

Amenable to management by proxy: Pervasive computing systems may sometimes operate disconnected from the wider network, or may have very small amounts of bandwidth and processing power. Security computation should be partitionable between the system and a proxy.

The obvious technique is public key cryptography, where the pseudonym is the public key. There is a secret associated with the public key that is known only to the entity identified by it. The entity demonstrates knowledge of the secret by encrypting something with the private key, as in normal public key authentication. Longevity is currently provided by the strength of the algorithms against attack, whilst interaction history can be generated through the use of digital signatures on interaction outcomes. We shall defer the discussion of the size of the representation to later, since the representation comprises not just the public key but also what is required of the surrounding public key infrastructure. To use a proxy, the device and its proxy share a secret for use in symmetric cryptographic algorithms. The proxy authenticates anyone requesting access to the device using the more complex public key protocols, and then provides a signed one-off authentication cookie, using such algorithms as HMAC [7], to be presented to the device.

Whilst the combination of policies, trust-based access control and pseudonyms would seem a feasible solution to the security of pervasive systems, pseudonyms suffer from a number of disadvantages:

• A pseudonym can be discarded, and pseudonyms are thus by nature transient. From game theory, we know that there is no incentive to behave well in the final round of any sequence of interactions, since there is no longer any threat of punishment. The transient nature of pseudonyms is thus an incentive to misbehaviour.

• The amount of state that must be collected, stored and searched to construct reputation and recommendation systems is very large when building reputations for individuals, and the scalability of such systems is in doubt.

• Finally, any system built from individual pseudonyms is open to abuse from the misreporting of bad behaviour, thus denying service to individuals and reducing the information content of misbehaviour reports.

In the remainder of this paper, we argue that pseudonyms can be effective if they are tied to groups of individuals, and describe a possible architecture for the authentication of pseudonyms based upon both centralised and distributed management of the groups.
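The proxy scheme above, in which the proxy performs the expensive public key authentication and then mints a one-off HMAC cookie that the constrained device verifies with a single symmetric operation, can be sketched as follows. We assume the proxy and device already share a symmetric secret; the cookie layout and function names are illustrative assumptions.

```python
import hashlib
import hmac
import os

shared_secret = os.urandom(32)  # provisioned on both proxy and device

def issue_cookie(pseudonym: str, expiry: int) -> bytes:
    """Proxy side: after authenticating the user with the more expensive
    public key protocols, mint a cookie the device can verify cheaply."""
    nonce = os.urandom(8).hex()
    payload = f"{pseudonym}|{expiry}|{nonce}".encode()
    tag = hmac.new(shared_secret, payload, hashlib.sha256).hexdigest()
    return payload + b"|" + tag.encode()

def verify_cookie(cookie: bytes, now: int) -> bool:
    """Device side: a single HMAC computation, no public key operations."""
    payload, tag = cookie.rsplit(b"|", 1)
    expected = hmac.new(shared_secret, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return False
    expiry = int(payload.split(b"|")[1])
    return now < expiry

cookie = issue_cookie("pseudonym-42", expiry=2_000_000_000)
```

The constant-time comparison (`hmac.compare_digest`) and the embedded expiry keep the device-side check cheap while resisting timing attacks and indefinite cookie reuse.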

2. PSEUDONYMOUS GROUP MEMBERSHIP

We define a group as one or more individuals combined together. Our aim is to model how society uses group membership: to associate attributes with an individual based on entrance criteria (e.g. membership of the Royal College of Surgeons confers the expectation of a certain amount of medical knowledge); to impose expectations of behaving according to the group norms, and of being punished if those norms are exceeded (e.g. a gaming clan would exclude people who cheat); and to sustain reputation despite membership changes (e.g. the Royal College of Surgeons has existed since 1745 and has maintained its reputation over the years).

We argue that for many policy configurations, it is membership of a group that is important for the access control decision, rather than the identity of the individual. For instance, a device may be issued by a service operator. An obvious policy is that maintenance engineers can have control access to the device. Which particular engineer will service the device is not known in advance, only that they must be able to authenticate themselves as members of the group of maintenance engineers for that company. We defer discussion of the relationship to Role Based Access Control to Section 5.

The use of groups obviates the problem of pseudonym transience. A group provides the indefinite lifetime needed to ensure that protection of reputation is an incentive for good behaviour, as described by Ba [8]. If a group is long-lived, then the continued reputation of the group is enough to provide for proper checking and management of the entities that are believed to be part of it. Aggregating individual behaviours into the behaviour of a group also allows more scalable manipulation for reputation

systems. Indeed, if a group can convincingly demonstrate that it imposes and maintains standards of behaviour on its members, then the history need only comprise the bad behaviours, to demonstrate that the group does manage its denizens. If we use group memberships as the standard for judging the likely behaviour of an individual, then we have offloaded the judgment of behaviour onto the entry criteria for joining a group.

Groups are not limited to standards of behaviour. For instance, the customers of an ISP can form a group. When a customer signs a contract with an ISP, they agree to the terms and conditions; if the ISP then grants a pseudonym for use in mail systems, abuse of the pseudonym in sending mail can result in the penalty clauses of the contract being activated, such as loss of bandwidth or loss of service altogether.

The internal structure of the group depends largely on the social form of the group. Some groups have a central management, such as the customers of an ISP. Other groups have elected officers, whilst others are more of a cooperative anarchy.

We therefore propose that groups be the issuing authority for pseudonyms. Inside the group, the user has an established identity, and can be granted pseudonyms for use external to the group. The group will authenticate these pseudonyms as belonging to valid group members, but will not reveal the identity behind a pseudonym. Computing systems can thus compose their deontic policies about who has access to their devices and services in terms of group membership, allowing the design of reputation and recommendation systems built around groups rather than individuals. At the point of use, pseudonyms must be authenticated and checked for validity. Since we wish to preserve privacy, the authentication system must not require a direct query of the group management.
When a user of a pseudonym misbehaves and is reported back to the group, the group administration should be able to discover which user is associated with the pseudonym, and take whatever locally prescribed action is appropriate. The authentication structures should allow for both central and distributed management of the group, and for fast revocation of misbehaving pseudonyms.

3. FLEXIBLE GROUP MEMBERSHIP AUTHENTICATION

In the following, we take it as given that there is a process of group admission, which generates an identity to be used within the group. It is this identity upon which the group can impose sanctions if the group membership is abused. Each group has a primary public/private key pair associated with it. We assume that each group G has its own master public key pair K_G. This key is long-lived, and is maintained by the group using whatever internal protocols are appropriate, such as through a group supervisor, or through a proactive threshold signature [9, 10] for distributed group management.

In the case of a centrally administered group, pseudonym generation and signing is relatively trivial. The group member generates the pseudonym key pair, and presents it to the administration for signing. The administration issues a certificate with an expiry date, signed by a key appropriate for authenticating membership of the group. The administration logs which user asked for this pseudonym certificate. The user will need to regenerate the certificate before the expiry to continue using it. If a pseudonym is reported for abuse, and membership is to be revoked, then the pseudonym is added to the revoked pseudonym list and will no longer have its certificate renewed. The administration can discover which user is associated with the pseudonym, and take the appropriate action.

When there is no centralised administration, our aim is to ensure that the knowledge of which identity is associated with which pseudonym is shared amongst several members of the group, so that no single group member can derive the information. Existing solutions are reported in the literature, such as that from Bellare et al. [11]. Whilst these are provably secure, they suffer from requiring a single manager and interactive zero-knowledge proofs. We are therefore exploring the possibility of using homomorphic digital signatures [12]. Within the group, short-lived public key pairs are generated for use in a homomorphic cryptographic algorithm. A homomorphic algorithm is one that allows a mathematical operation such as addition or multiplication to take place on encrypted numbers, producing the cryptotext that matches performing the operation on the plaintext numbers. We denote these pseudonym-signing keys as K_GP. The number of pseudonym-signing keys depends upon the group structure: if there are a few elected officers, then one signing key shared amongst the officers is likely to be sufficient; if all group members can sign, then more keys are needed, with measures being taken to agree upon key distribution.

The key idea is that a member wishing to gain a signed pseudonym decomposes the pseudonym public key and presents the separate parts to independent group members who share the same signing key. The member can then create the certificate by recomposing the signed parts. If there is sufficient misbehaviour on the part of the pseudonym owner to warrant the group punishing the individual concerned, then the group can discover who asked for the pseudonym: the group members sharing the signing key recombine the parts they have signed until they find a match, and so link the pseudonym to the group member.

For a member of the group to generate a new pseudonym in a distributed group, the following steps are taken:

1. The member generates the pseudonym key pair K_P.

2. The member factors the public key based on the homomorphic operation of the encryption algorithm, such as Paillier encryption [13].

3. The member identifies a set of group members sharing the private key of a valid pseudonym-signing key K_GP. To ensure that the privacy of the member is protected, the signed factors are not shared across the signing members; if all the factors were known, the signed pseudonym could be recreated.

4. The member requests the signing of the various factors with the private pseudonym-signing key.

5. The member creates the signed version of their pseudonym key by combining the signatures on the factors using the homomorphic operation, and creates a certificate providing the pseudonym public key, the pseudonym signature and the public key of the pseudonym-signing key pair (K_P, K'_GP(K_P), K_GP), where K'_GP denotes the private pseudonym-signing key, together with a reference to the certificate of valid signing keys for that group.
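The factor-signing and recombination steps can be illustrated with textbook RSA, which is multiplicatively homomorphic: sign(m1) · sign(m2) mod n = sign(m1 · m2). The paper proposes Paillier-style schemes; this toy sketch with tiny parameters only demonstrates the recombination idea and is in no way a secure signature scheme.

```python
# Tiny textbook-RSA signing key, shared by the co-signing group members.
# All numbers are toy values chosen for readability.
p, q = 61, 53
n = p * q          # 3233
e = 17             # public verification exponent
d = 413            # private signing exponent: e * d == 1 (mod lcm(p-1, q-1))

def sign(m: int) -> int:
    """Signing with the shared private pseudonym-signing key."""
    return pow(m, d, n)

def verify(m: int, sig: int) -> bool:
    """Verification with the public pseudonym-signing key."""
    return pow(sig, e, n) == m % n

# The member encodes the pseudonym public key as an integer and factors it.
pseudonym_value = 6 * 35       # 210, stands in for the pseudonym public key
factor_a, factor_b = 6, 35

# Each factor goes to an independent holder of the same signing key, so no
# single signer sees the whole pseudonym...
sig_a = sign(factor_a)
sig_b = sign(factor_b)

# ...and the member recombines the partial signatures multiplicatively:
# (m1^d * m2^d) mod n == (m1 * m2)^d mod n.
combined = (sig_a * sig_b) % n
assert verify(pseudonym_value, combined)
```

The same multiplicative structure is what lets the signers later recombine their signed parts to link a misbehaving pseudonym back to the requesting member.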

[Figure 1: Group Authentication Protocol. The group issues pseudonym certificates (signed by the group signing key, with an expiry time), the signing keys certificate (signed by the primary key), and the pseudonym revocation list (signed by the primary group key). An abuse monitor reports issues to the group. When a service user presents a pseudonym, the device's pseudonym validation and policy check components check it against these artefacts, and access control grants or denies the request.]

6. The group periodically issues a dated current group keys certificate containing the list of valid signing keys for the group, a list of invalid pseudonyms, and an expiry date, signed by the long-lived group key. The member stores the current group keys certificate asserting that K_GP is a valid pseudonym-signing key of the group.

We thus require each group to issue one or more certificates, signed with the group's primary key and carrying an expiry time, listing the valid pseudonym-signing keys. It is the pair of the signed pseudonym and the group-key-signed list of signing keys which authenticates a pseudonym as belonging to the group. Note that there is no direct expiry time attribute on the signed pseudonym: this is obtained from the valid signing keys certificate.

Misbehaviour reports are collected at a public group address, by email, web services or whatever is appropriate. How these are handled is up to the organisation of the group. Automatic systems may be set up for a centralised management system. Alternatively, the review of received misbehaviour reports may become an item at the next meeting of the group. If the outcome of the decision process is that pseudonyms are to be revoked, then the revocation should have immediate effect. A pseudonym holder may wish to revoke the pseudonym themselves, since they may believe it has been compromised, or it may have been required only for a one-off use. Again, the revocation of a pseudonym needs to be fast. We use a list of revoked pseudonyms, publicly accessible at an address associated with the group, and added to the certificate of group signing keys. To revoke membership of a group for a pseudonym, the following actions are taken:
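The dated group keys certificate of step 6, and the freshness rule that a signed pseudonym inherits its expiry from it, can be sketched as a data structure. The field names here are illustrative assumptions, and `signature` stands for a signature made with the long-lived primary group key.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroupKeysCertificate:
    group_id: str
    valid_signing_keys: List[str]   # public pseudonym-signing keys K_GP
    revoked_pseudonyms: List[str]   # invalid pseudonyms K_P
    expiry: int                     # time after which the whole list lapses
    signature: bytes                # made with the primary group key K_G

def pseudonym_is_acceptable(cert: GroupKeysCertificate,
                            pseudonym: str, signing_key: str, now: int) -> bool:
    """A pseudonym certificate is only as fresh as the key list backing it:
    the signed pseudonym carries no expiry attribute of its own."""
    return (now < cert.expiry
            and signing_key in cert.valid_signing_keys
            and pseudonym not in cert.revoked_pseudonyms)

cert = GroupKeysCertificate("surgeons", ["kgp-1"], ["kp-bad"],
                            expiry=2_000_000_000, signature=b"")
```

In a full implementation the `signature` would be checked against K_G before any of the list contents are trusted; that verification step is elided here.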

1. The pseudonym is added to the revoked pseudonym list, and the list is signed by the group primary key and published, superseding the previous revocation list.

2. The signing key for that pseudonym is removed from the internal list of valid signing keys, and the signing key holders are notified.

3. When the time arrives for publishing a new signing key list, the next signing key list does not include the signing key.

4. When a pseudonym is renewed, the group member will have to find a new key and its key holders to sign the pseudonym.

5. When all signing key lists which hold the signing key have expired, the pseudonym can be safely removed from the revocation list.

The expiry time of the signing key certificate should be relatively short, for two reasons. First, as pseudonyms are revoked as part of their natural life-cycle, the revoked pseudonym list would otherwise grow without bound; the group should therefore age out its signing keys, and require group members to revalidate their pseudonyms if they are to be long-lived. Second, if it is discovered that a signing key has been compromised, then the signing key is removed from the next issue of the signing keys certificate. Thus the frequency of re-issuing is a compromise between limiting the damage possible from any compromised signing key and reducing the amount of revalidation required for pseudonyms. It should be noted that the expiry time of any signing key list is honoured, even if a newer signing key list has been issued.

The authentication process for a given pseudonym is shown in Figure 1. For a member to authenticate themselves as a member of the group outside the group, they go through the following procedure:

1. The member presents to the server their pseudonym public key K_P, the signature of the pseudonym key by K'_GP, the signing public key K_GP, and the group signing key certificate.

2. The server checks that it recognises the group.

3. The server checks that the signature of the pseudonym key is valid.

4. The server checks that the signing key is in the signing key list, that the expiry time has not passed, and that the list is properly signed by the primary group key.

5. The server checks that the pseudonym is not in the revoked pseudonym list issued by the group.

6. The server issues a challenge to the member, encrypted in the pseudonym public key, which must be correctly decrypted by the member to provide authentication.

The authentication process thus requires the transmission of the pseudonym and its certificate, the signing keys certificate and the revoked pseudonym list. To improve efficiency, the signing keys certificate can be partitioned, possibly at the level of one key per certificate. The revocation list can be implemented as a Bloom filter, since the occasional false positive can be checked against the full list. Implementation over a proxy is thus easily accomplished using the techniques described above. We argue that the list of groups that will be used for policies on any given device is likely to be quite small. It would therefore be possible to download the revocation lists at points of synchronisation, and to use these stored lists to check for revoked signatures. Whilst this provides a window for the exploitation of revoked signatures, the risk may be acceptable for disconnected operation, especially if the synchronisation times are random.
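The Bloom-filter revocation list mentioned above can be sketched as follows: compact enough for a small memory footprint on a constrained device, with the occasional false positive resolved against the full list at a synchronisation point. The sizes and hashing scheme are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Compact set membership with false positives but no false negatives."""

    def __init__(self, size_bits: int = 1024, n_hashes: int = 4) -> None:
        self.size = size_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive n_hashes bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        # A hit may be a false positive and must be confirmed against the
        # full revocation list; a miss is definitive.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

revoked = BloomFilter()
revoked.add("pseudonym-xyz")
```

The no-false-negative property is what makes the filter safe for revocation checking on a disconnected device: a revoked pseudonym is never missed, and only rare false positives cost a lookup against the full list.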

4. ATTACKS

Pseudonyms are protected against impersonation by the strength of the underlying cryptographic techniques. Denial of service attacks are possible using reports of misbehaviour. As in any over-reporting attack, we need to ensure that reports can be quenched. To ensure that false reports do not result in inappropriate sanctions, the group must provide mechanisms to validate the reports, such as requiring reports to carry a time-stamped, pseudonym-signed initial agreement of use. Further policies can be applied, such as requiring reports to be demonstrably distinct by origin or context of use. In the case of distributed management and generation of pseudonyms, we have to protect against subversion of the groups. If a key is generated against a set of collaborating group members, then the pseudonym is revealed. There is therefore a trade-off between the number of signers and security. We protect against replay of the signing key certificate through dating and the use of integrity checks. One unsolved attack is the reuse of pseudonyms to compose other valid pseudonyms. In the language of Johnson et al. [12], we need to find a homomorphic scheme that is easily decomposable but has a small span, i.e. the continued application of the group operation (such as addition) does not span a wide range of values. This is part of ongoing design work.

5. RELATED WORK

Our ideas are akin to the certificate framework presented by Camenisch et al. in [14, 15]. In their work, a certificate presents a set of attributes of the user, and the user and service provider engage in zero-knowledge proofs on a predicate over the attributes describing the necessary entrance requirements. A trusted third party holds the commitment to the predicate and reveals the identity if misuse is found to have occurred. Our approach can be seen as converting the attributes into group membership of the certificate-issuing authority.

Our proposal to use group membership as the basis for ascertaining access rights is similar to the conventional work in Role Based Access Control [16]. However, they are orthogonal, since RBAC defines roles in terms of the organisation managing the service, whilst our groups define the identities and memberships that the user wishes to present. Admission into a role can be made on the basis of appropriate memberships.

6. CONCLUSION

We have described how a group membership scheme can be used for access control in pervasive computing systems. By using group membership as a level of indirection between the individual and the system, we can increase privacy and provide incentives for better behaviour. Privacy comes from the use of pseudonyms that are generated within the group and can be authenticated as belonging to the group. The incentives for better behaviour come from the continuous nature of groups: members may come and go, but the group lives on, and groups are organised so as to ensure group longevity and to prevent actions which may harm the group's reputation. We have presented a possible novel pseudonym generation mechanism suitable for use in groups without a centralised administration. Finally, we have argued that the use of group membership as the basis for formulating policies on interaction is more efficient for disconnected operation, facilitating proxies and the efficient storage of revoked memberships and distrusted organisations within Bloom filters for small memory footprints.

7. REFERENCES

[1] Yvonne Rogers. Moving on from Weiser's vision of calm computing: Engaging UbiComp experiences. In Ubicomp, pages 404–421, Orange County, CA, 2006. Springer-Verlag.
[2] Lalana Kagal, Tim Finin, and Anupam Joshi. A policy language for a pervasive computing environment. In Proc. of 4th IEEE Workshop on Policies for Distributed Systems and Networks, Lake Como, Italy, June 2003.
[3] Tim Owen, Ian Wakeman, Bill Keller, Julie Weeds, and David Weir. Managing the policies of non-technical users in a dynamic world. In IEEE 6th International Workshop on Policies for Distributed Systems and Networks, Stockholm, Sweden, June 2005.
[4] Jeffrey O. Kephart and William E. Walsh. An artificial intelligence perspective on autonomic computing policies. In Proc. of 5th IEEE Workshop on Policies for Distributed Systems and Networks, IBM Thomas J. Watson Research Center, Yorktown Heights, New York, June 2004.
[5] Vinny Cahill, Brian Shand, Elizabeth Gray, Ciarán Bryce, Nathan Dimmock, Andrew Twigg, Jean Bacon, Colin English, Waleed Wagealla, Sotirios Terzis, Paddy Nixon, Giovanna di Marzo Serugendo, Jean-Marc Seigneur, Marco Carbone, Karl Krukow, Christian Jensen, Yong Chen, and Mogens Nielsen. Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing, 2(3):52–61, August 2003.
[6] Alastair R. Beresford and Frank Stajano. Location privacy in pervasive computing. IEEE Pervasive Computing, 3(1):46–55, 2003.
[7] Mihir Bellare, Ran Canetti, and Hugo Krawczyk. Keying hash functions for message authentication. In CRYPTO, pages 1–25, 1996.
[8] Sulin Ba. Establishing online trust through a community responsibility system. Decision Support Systems, 31:323–336, 2001.
[9] Sean Rhea, Patrick Eaton, Dennis Geels, Hakim Weatherspoon, Ben Zhao, and John Kubiatowicz. Pond: the OceanStore prototype. In USENIX Conference on File and Storage Technologies (FAST), 2003.
[10] Tal Rabin. A simplified approach to threshold and proactive RSA. In Proceedings of CRYPTO, 1998.
[11] Mihir Bellare, Daniele Micciancio, and Bogdan Warinschi. Foundations of group signatures: formal definitions, simplified requirements and a construction based on trapdoor permutations. In Eli Biham, editor, Advances in Cryptology - EUROCRYPT 2003, volume 2656 of Lecture Notes in Computer Science, pages 614–629, Warsaw, Poland, May 2003. Springer-Verlag.
[12] R. Johnson, D. Molnar, D. Song, and D. Wagner. Homomorphic signature schemes. In Proceedings of the RSA Security Conference Cryptographers' Track, number 2271 in LNCS. Springer-Verlag, February 2002.
[13] Pascal Paillier. Public-key cryptosystems based on composite degree residuosity classes. In EUROCRYPT, pages 223–238, 1999.
[14] Jan Camenisch, Dieter Sommer, and Roger Zimmermann. A general certification framework with application to privacy-enhancing certificate infrastructures. In International Information Security Conference. IFIP, 2006.
[15] Michael Backes, Jan Camenisch, and Dieter Sommer. Anonymous yet accountable access control. In WPES, Alexandria, VA, November 2005.
[16] Ravi S. Sandhu, Edward J. Coyne, Hal L. Feinstein, and Charles E. Youman. Role-based access control models. IEEE Computer, 29(2):38–47, 1996.
