information security technical report 11 (2006) 119–128
What user-controlled identity management should learn from communities5

Katrin Borcea-Pfitzmann a,1, Marit Hansen b,*, Katja Liesebach a,2, Andreas Pfitzmann a,3, Sandra Steinbrecher a,4

a TU Dresden, Department of Computer Science, 01062 Dresden, Germany
b Independent Centre for Privacy Protection Schleswig-Holstein, Holstenstraße 98, 24103 Kiel, Germany
Abstract

To enable trustworthy privacy, identity management has to be user-controlled, i.e. each user administrates his/her partial identities being supported by an identity management system running on his/her machines under his/her control. Past work on user-controlled identity management focused on isolated users administrating their partial identities mainly used towards organizations, e.g., shops, public administrations and the like. But users intensively interact with other users as well. Additionally, these interactions are not only direct, but indirect, too, as, e.g., within communities. A universally usable identity management meta-system (IMMS) will have to be able to handle and combine all interactions possible. For the sake of privacy, users interacting with organizations might minimize the personal information transmitted in the context of AAA (authentication, authorization, and accounting) without losing functionality. But users interacting with other users, in particular within a community, have to share additional supportive information, e.g., awareness information. Otherwise, neither a community nor team spirit will develop. Balancing privacy and functionality in communities is a current research question. Therefore, an IMMS has to be flexible enough to incorporate new knowledge and demands as they develop. © 2006 Elsevier Ltd. All rights reserved.
1. Introduction
For each of us, developing one’s own personality is a lifelong process, which is deeply influenced by others
interacting with us. In former times, interaction only took place when two or more persons met. Then writing and reading books and letters started to bridge both time and space. Nowadays, interaction via communication networks
5 The information in this document is provided as is, and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at his/her sole risk and liability. The work reported in this paper was supported by the IST PRIME project; however, it does not necessarily represent the view of the project. The PRIME project receives research funding from the European Community's Sixth Framework Programme and the Swiss Federal Office for Education and Science.
* Corresponding author. Tel.: +49 431 988 1214; fax: +49 431 988 1223.
E-mail addresses: [email protected] (K. Borcea-Pfitzmann), [email protected] (M. Hansen), [email protected] (K. Liesebach), [email protected] (A. Pfitzmann), [email protected] (S. Steinbrecher).
1 Tel.: +49 351 463 38002.
2 Tel.: +49 351 463 37919.
3 Tel.: +49 351 463 38277.
4 Tel.: +49 351 463 38272.
1363-4127/$ – see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.istr.2006.03.008
Fig. 1 – Partial identities of John.
is added, where the Internet and its various services gain more and more importance. Even in former times, not our complete personality (called identity in the sequel) was involved in each interaction – for good reasons. In modern societies, we would say that we make and take various roles, where usually not all roles we take are conflict-free. So we need means to separate different parts of our lives, forming a partial identity (pID) (Pfitzmann and Hansen, 2006) in each part, cf. Fig. 1. If we are able to behave consistently across all our roles taken, we can define our identity simply as the union of all our pIDs. Otherwise, it gets more complicated.

Interaction with others is not only bilateral: first, some interactions take place within a group context, e.g., group discussions. Second, others' interaction with me may be heavily influenced by still others, so I am indirectly influenced by the opinions, knowledge, aims, dependencies, etc. of these still others, and so on. For the purpose of managing our pIDs (i.e. user-controlled identity management), we have to take into consideration the direct and indirect relationships we have in various communities.

After this short introduction, Section 2 describes how the problem area developed. Section 3 gives an introduction to the scenario of communities, illustrated by a comprehensive example. Section 4 discusses the requirements that members have within communities and mechanisms to fulfill them, resulting in building blocks for community-enhanced identity management. Finally, we conclude and give an outlook in Section 5.
2. The problem as it developed
In the early times of the Internet (called ARPANET then), the Internet was a means of supporting well-isolated communities across distance, primarily in space, but secondarily in time as well. The aim of the Internet users was mainly to exchange and thereby share information, e.g., thoughts and knowledge. Neither security nor privacy was much of an issue: members of the community knew each other (often already from in-person meetings) and trusted each other at least with respect to the aims of the community. Members built up particular reputations within the community. Role conflicts were not a real issue, since with very few exceptions each person was part of one Internet-supported community only. This also kept privacy problems to a minimum: community members wanted to share some knowledge and gain a reputation with respect to a particular part of their life, having a well-defined pID within ''their'' community. The evolution of the aims of the community was a social process, i.e., mainly driven by consensus. This way, communities were self-regulating. Law in general and law enforcement in particular were not an issue.

Ten years ago, with the Internet gaining commercial significance and many more people getting ''into the net'', putting the more idealistic providers and the single-purpose users in a minority, the following properties started changing:

- Since knowing each other by means not mediated by the Internet was no longer the norm, but the exception, defining, showing, proving, and checking identities became an issue in the Internet.
- Since a growing number of transactions within the Internet involve larger amounts of money, security became an issue.
- Since the ordinary user of the Internet takes many roles and quite probably is a member of many communities, privacy is an issue requiring defining, developing, showing, proving, and checking pIDs.
Since linkability through the use of unique pIDs should be limited, transferring reputation from one pID to another (of course, only of the same person) gains importance. The same holds for attributes certified by others, where the validity of the attribute and its certification shall be transferable to other pIDs of the same person for reasons of security while maintaining privacy.

More and more, the Internet is a means for all kinds of collaboration: to achieve your goals, you work together with others who may have conflicting goals, but whom you need to achieve your goals and who may need you to achieve theirs. The most prominent examples might be e-commerce and e-banking. In addition, we are no longer part of just one community serving just one particular aim. We are part of many and quite diverse collaborating networks, living ever more roles and an ever-increasing part of our lives ''in the net''. At the same time, usually, we interact with people or organizations we have never met nor will ever meet in person. This causes both severe privacy problems (e.g., profiling across applications yielding dossiers about ever larger parts of your life) and severe security problems (there is no longer a benevolent system administrator knowing all the members of the community; users are burdened with all kinds of log-in data, which they either forget or choose to be the same on many systems, so a successful compromise of any of the systems undermines the security in other areas as well).

Today, identity management is urgently needed for using the Internet. And, since it is about influence, power, and money, it is no surprise that there is a big struggle over who knows, authenticates and authorizes what. The actors are big application providers, companies running infrastructures, companies selling software trying to consolidate their quasi-monopolies and, last but not least, police and secret services.
Some of them want to sell services; all of them want to know as much about users as possible. According to the EU project FIDIS,5 identity management systems (IMS) can be classified depending on the systems' goals (Bauer et al., 2005):

- Type 1: IMS for account management, especially implementing an AAA (authentication, authorization, and accounting) infrastructure;
- Type 2: IMS for profiling of user data by an organization, e.g., by data warehouses which support personalized services or the analysis of customer behavior;
- Type 3: IMS for user-controlled context-dependent role and pseudonym management.

IMS of the first two types typically store the user-related data on the server side. The respective user transfers control over these data to the provider of the IMS. This follows directly from the goals of AAA and profiling, respectively, which both need reliable identification of persons or reliable assignment of attributes to a person.
The simplest of these classes are stand-alone systems with only one user-account database, usable at this server and for the applications provided. IMS of Type 3 are characterized by their focus on control by the user. The users can decide individually whether, to whom, and for which purpose they want to disclose their data. The IMS should assist them by keeping track of former transactions and data transfers that influence their decisions (i.e., privacy-enhancing or user-controlled identity management) (Chaum, 1985; Hansen et al., 2004).

Type 1 and Type 2 IMS are often implemented in a centralized way: all data are stored at the central IMS provider. Meanwhile, more and more federated approaches are realized with a plurality of IMS providers, where each stores only part of the information, e.g., Liberty Alliance.6 Type 3 IMS, which enable the users to store and manage their own personal data, are a specific form of federated IMS.

We prefer identity management to be user-controlled – otherwise, privacy and separation of power with respect to personal information would disappear. Especially in communities, it would make no sense to have central parties who know and manage the users' personal data. This article explains how user-controlled identity management can be achieved even in complex settings, what problems can be solved that way, which problems remain, and even which additional problems user-controlled identity management may cause.
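The bookkeeping a Type 3 IMS performs on the user's side can be sketched in a few lines. This is an illustrative sketch only – class and method names are our own, not taken from any existing IMS: personal data stay local, every disclosure under a pseudonym is logged, and the log can warn when two partial identities risk becoming linkable through commonly disclosed attributes.

```python
from collections import defaultdict

class PartialIdentityManager:
    """Minimal sketch of a Type 3 (user-controlled) IMS client:
    personal data are kept on the user's side, and every disclosure
    is logged so earlier transfers can inform later decisions."""

    def __init__(self):
        self.attributes = {}                  # attribute name -> value (stored locally)
        self.disclosures = defaultdict(list)  # pseudonym -> [(recipient, attribute)]

    def set_attribute(self, name, value):
        self.attributes[name] = value

    def disclose(self, pseudonym, recipient, attribute):
        """Release one attribute under a chosen pseudonym and record it."""
        value = self.attributes[attribute]
        self.disclosures[pseudonym].append((recipient, attribute))
        return value

    def linkable_attributes(self, pid_a, pid_b):
        """Attributes disclosed under both pseudonyms -- a simple warning
        signal that the two partial identities may become linkable."""
        disclosed = lambda p: {attr for _, attr in self.disclosures[p]}
        return disclosed(pid_a) & disclosed(pid_b)
```

For example, a user who reveals the same e-mail address under a shopping pseudonym and a forum pseudonym would be warned that these two pIDs share a disclosed attribute.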
3. Change of perspective: communities
In the previous section, we have seen that system design in the field of identity management, at first, addressed the communication between individuals and organizations. As communities are formed by interactions between individuals, they work differently and therefore have extended requirements.
3.1. Individuals vs. organizations
The main differences between individuals and organizations which are of interest when designing systems are shown in Table 1. These differences show that an identity management system which fits the needs of users addressing organizations (e.g., customer-to-business or citizen-to-administration), usually in a well-structured way, is not necessarily suitable for user-to-user communication, which may have a variety of flavors.
3.2. Types of relationships in communities
Considering relationships in the offline world between peers, which could be human beings as well as organizations, we see that a multiplicity of relationships is possible and real. Obviously, these relationships are dynamic: they change over time based on direct as well as indirect influences, such as experiences, observations and interpretations, associated with mutual trust and expectations in the particular relationship. Within relationships, peers may interact with each other, i.e., information flows between these peers. Interactions imply
5 FIDIS – Future of identity in the information society (http://www.fidis.net/).
6 http://www.projectliberty.org/.
Table 1 – Properties of individuals vs. properties of organizations

Interests
Individual: Individuals have specific (self-)interests which depend on manifold purposes in their personal life. Interests are highly context-dependent and may change frequently.
Organization: Focusing mainly on one or a few purposes, the organization has superordinate interests which are not necessarily those of its members. The organization's interest usually is less context-dependent and does not change frequently.

Being addressed by law
Individual: Individuals are protected by Human Rights. As far as others are not harmed and only personal matters are concerned, only very few legal obligations apply to individuals.
Organization: Organizations have to fulfill manifold legal obligations, e.g., from private or public law.

Methods enabling others to estimate behavior (comprising trustworthiness and accountability)
Individual: As individuals do not have to comply with specific quality standards (other than acting according to law and social norms), there are alternative methods: if the individual is personally known by the other party, both would rely on their experience with each other in former interactions. For some areas, centralized databases are maintained which may inform authorized parties about estimated creditworthiness (scoring values from credit agencies) or police records (from the register of convictions). In addition, for specific communities and services, reputation systems are used.
Organization: Organizations usually have to comply with several quality standards (e.g., ISO 9000 or sector-specific guidelines), and in many cases they are inspected by supervisory bodies on a regular basis or as occasion demands. In addition, centralized databases inform on the creditworthiness of organizations, and there may be entries in reputation systems. As reputation usually is regarded as important for organizations, they often establish internal checks and audits to prevent misbehavior resulting in bad reputation.

Expected ''service level'' in ensuring confidentiality, integrity, and availability
Individual: Interaction partners do not expect professional service levels from individuals, as individuals cannot be available all the time and usually have no professional expertise to implement and maintain necessary technical and organizational methods as safeguards.
Organization: Guaranteeing appropriate service levels is usually necessary for organizations, so services are maintained in a professional way. Concerning protection goals, this means, e.g., implementing and reliably maintaining safeguards to provide the appropriate level of confidentiality, integrity and availability. Professional organizations provide the necessary environment, employ system administrators, security and privacy officers with clear responsibilities, and establish proxies for absent staff.
scenarios such as communication (in the meaning of conversations), collaboration, as well as cooperation between peers. However, the mentioned relationships are also present in the online world. Therefore, we want to point out these kinds of relationships by considering the interaction environment, which comprises the set of peers that a reference peer directly or indirectly interacts with. We assume that peers are represented by pIDs and give descriptive examples in the following sections. Basically, we can distinguish two kinds of relationships: direct relationships, based on direct interactions between a reference peer and another peer, as well as indirect relationships, resulting from the underlying community.
3.2.1. Direct relationships
Direct relationships between pIDs of a peer and pIDs of other peers (cf. Fig. 2: edge 1) are based on interactions such as transaction processes in e-auctioning systems or discussions in chat systems. Relationships between different pIDs of a peer and different pIDs of another peer (cf. Fig. 2: edge 2) are a special case of direct relationships. This situation occurs especially in long-term collaborative environments, in which repeated interactions between peers are possible and supported, e.g., the joint composition of articles on different topics within one Wiki or forum, in which both peers use topic-dependent pIDs. Here, additional special requirements arise to ensure privacy at a semantic level, e.g., preventing linkability through individual communication styles, which, however, is not in the focus of this article.
3.2.2. Indirect relationships
In contrast to direct relationships, indirect relationships are a characteristic property of collaborative networks, resulting from the multiplicity of possible interactions between peers. This comprises relationships based on interactions between pIDs which the reference peer interacts with (cf. Fig. 2: edge 3). Here, the relationship between the particular peers is influenced by the associated relationships to the reference peer, and it also influences these associated relationships. In order to illustrate this kind of relationship, we refer to the example of an open-source portal, in which peers participate in different developer communities while using various pIDs. Statements given and modules of core functionality provided by the reference peer could influence developments of further modules as well as decisions of peer A or a second peer B in case of module re-use. In case the activities of peers A and B converge on the basis of the modules provided by the reference peer, a relationship between them will be established when
Fig. 2 – Direct and indirect relationships between a reference peer and other peers based on interactions.
they start communicating and sharing knowledge with each other. Considering that the reference peer can use the same pID as well as different pIDs when interacting with other pIDs, further requirements with respect to trustworthiness and prevention of linkability have to be regarded.

Furthermore, indirect relationships between the reference peer and other peers arise from former interactions in which the reference peer is not directly involved, i.e., between pIDs and other pIDs outside the direct interaction environment of the reference peer (cf. Fig. 2: edge 4). In this context, the relationships between the reference peer and other peers basically influence the relationships of these peers to other peers. By this, a relationship between the latter peers and the reference peer is established based on imported as well as on own experiences, observations, and interpretations. To illustrate this, the previously introduced open-source community example can be extended by a further peer C re-using modules which are based on the contribution of the reference peer and further developed by A and B. Depending on the quality of the modules used and the underlying implementation of core functionality, peer C gets a picture – consciously and unconsciously – of the reference peer, by which an indirect relationship between both peers is established that can influence further interactions between them.

Finally, the community a pID belongs to – the same community as the reference peer or another one – substantially influences the characteristics as well as properties of their relationships. In addition to the previously described relationships from the reference peer's point of view, for community aspects it is also important to consider relationships from the point of view of the peer being involved.
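The distinction between direct and indirect relationships can be modeled as a small interaction graph. The following sketch is illustrative (class and peer names are our own, not part of any system described here): pIDs are nodes, direct interactions are edges, and indirect relationships are derived as two-hop connections through a common interaction partner, as in the open-source portal example above.

```python
from collections import defaultdict

class InteractionGraph:
    """Sketch of the interaction environment of Fig. 2: nodes are pIDs,
    edges are direct interactions, indirect relationships are derived."""

    def __init__(self):
        self.edges = defaultdict(set)  # pID -> set of pIDs it interacted with

    def interact(self, pid_a, pid_b):
        """Record a direct interaction (edges 1/2 in Fig. 2)."""
        self.edges[pid_a].add(pid_b)
        self.edges[pid_b].add(pid_a)

    def direct(self, pid):
        return set(self.edges[pid])

    def indirect(self, pid):
        """pIDs related to `pid` only through a common interaction
        partner (cf. edges 3/4 in Fig. 2)."""
        two_hop = set()
        for neighbour in self.edges[pid]:
            two_hop |= self.edges[neighbour]
        return two_hop - self.direct(pid) - {pid}
```

In the example, if the reference peer interacts with A, and B re-uses A's modules without ever contacting the reference peer directly, B shows up as an indirect relationship of the reference peer.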
3.3. Approaches to community implementation
From the technical perspective, most communities and collaborative networks are implemented as a mailing list, a newsgroup, or a web application (e.g., eBay). This means most of these communities are implemented in a centralized way: there is a provider who offers the technical system that (re)distributes messages to the community members or where the community members meet. Most of the web-based systems already offer an identity management system for AAA and potentially also for profiling, but the third type of identity management we focus on, the user-controlled approach, usually is neglected.

Different approaches to community implementation can be identified according to their contextual orientation, level of interactivity, organizational structures, and role allocations. The contextual orientation of the approaches runs from transaction-oriented approaches, e.g., e-shopping, electronic marketplaces or making use of services (e.g., searching for phone numbers), to approaches geared to long-term as well as durable interactions such as the common creation and sharing of knowledge. This also comprises systems for cooperation and collaboration, for example open-source communities, Wikis and fora, as well as collaborative learning systems.

The level of interactivity is also used for differentiating community and collaborative networks. So, we may identify scenarios which require very strong direct communication and cooperation (cf. groupware tools). Others, like newsgroups, also imply a high degree of interactivity, but this kind of interactivity is less direct. When glancing at the electronic
marketplaces, we even have to state that there is very low interactivity between peers compared to the number of community members. Further, the interactions are rather indirectly managed by the platform through supporting functionality for processing the transactions.

Furthermore, current communities and collaborative networks differ considerably in their organizational structures, i.e., with respect to the underlying objectives and conditions as well as to the properties of the network: the systems may be structured loosely or rigidly, as well as flatly, hierarchically, or cross-linked. In this context, we can distinguish the organizational structure elements of communities, groups, and teams. Communities are loosely structured and characterized by a large number of members working on information of a common domain, but not necessarily knowing each other. Comparably, groups are also loosely structured and address a common domain. But in contrast to communities, they are smaller and their members know each other. This is also true for teams, except for the most important distinction: teams have underlying stable and rigid structures. Concerning role allocations, we have to distinguish whether roles are fixed and assigned to a user, whether the user can choose them in a self-determined way, or whether role associations emerge as results of the members' activities.
3.4. Example: collaborative learning system
To explain our approach, we want to demonstrate the requirements and deficiencies of current systems on the basis of an exemplary scenario that is well-known to all of us: a person's life comprises learning, i.e., acquiring and processing information in quite different domains. These domains relate, e.g., to the person's private life as well as to his/her profession. Thus, learning is not restricted to taking courses or reading books, but also comprises acquiring knowledge by implicit learning, e.g., talking to friends, listening to talks, searching and browsing information in various sources, participating in discussion fora, etc. Hence, our aim is to demonstrate identity management behavior well-known from the offline world of learning. We map it to and realize it within an online system that supports common learning scenarios as well as various different learning approaches and methods. Currently, such a system is being developed in the BluES7 project (Borcea et al., 2005). This project aims at developing an environment that integrates the advantages of the non-electronic world of learning and working. It provides all users of the e-learning application in an equal way with means for cooperation, communication, and interaction. Further, it aims at maximally fostering learners according to their individual capabilities and qualifications. To allow the realization of such aims, the design of the system has to consider the rules and requirements of democracy at the application level, i.e., in socio-technological terms. The concept of democratization should enable all users to do – within commonly agreed rules – everything in the environment: each user gets the possibility to read, to generate, and to change
7 BluES – BluES like universal eEducation system (http://blues.inf.tu-dresden.de/).
contents. Furthermore, he/she should be able to perform those actions together with other users of the environment on his/her own initiative. This requires possibilities for dynamic group building as well as for non-restricted collaboration and communication. Shared workspaces, which can be established in order to pursue certain objectives, are the basis. The users of each shared workspace form a community of their own, as they can interact with each other via this workspace.

Another fundamental idea of our sample scenario is that the system's concepts do not follow the strict role approach (e.g., administrator, accountant, author, customer, learner, etc.), which everybody knows from other applications of the online world and which eases the system design there. But in the non-electronic world, people do not in all cases play a single specific role. For instance, a person who is a member of a group of learners taking the course ''Algorithm systems'' might switch to the role of tutor when giving a practical course about ''Algebraic and logical foundations'', where some of the learners participating in the first course also attend the practical course. Furthermore, the same person might be a friend of some of his fellow students and arrange meetings with them where they exchange experiences of their studies. In such a case, the same persons know the individual in very different situations and roles (learner, tutor, and friend). Moreover, the described person might look after school pupils and assist them with their schoolwork. Since he is interested in research questions, he is also a member of a research group where he works on some interesting research questions on a collaborative basis. The latter situations describe the reference person as being part of different non-overlapping communities, playing different roles (consulting tutor, researcher), and being represented by different pIDs.
To map those situations to the online world, it is necessary to refrain from the rigid assignment of single, exclusive roles online.
4. Requirements and mechanisms
Having introduced the scenario, we now look into the requirements the participating peers have and discuss mechanisms that could help to realize them.
4.1. Protection goals and measures with respect to security and privacy

Peers within a community or collaborative network have several requirements when interacting with other peers. Especially, peers typically have an expectation regarding the interactors' behavior, which these might fulfill or not. What fulfillment means might be defined either implicitly (e.g., behavior follows social norms) or explicitly (e.g., on the basis of a contractual agreement). Since interaction in the online world means some flow of information, computer security and cryptography can help to ensure the security requirements the peers might have regarding the information transmitted within an interaction and the circumstances of transmission. Typical protection goals are shown in Table 2.
Table 2 – Overview of protection goals of peers in an interaction

Availability
Information exchanged: All information should be exchanged which the peers have agreed upon beforehand.
Expectation of peers: The peers should be available (and reachable in time) during the whole interaction.

Integrity
Information exchanged: The information should be correct and up-to-date; unauthorized manipulation should be detected.
Expectation of peers: The information should be authentic regarding the peer who sent it. The peers should be accountable for their actions.

Confidentiality
Information exchanged: The information should be kept confidential from everyone who should not be involved.
Expectation of peers: Where possible, the interacting peers should be able to act anonymously or pseudonymously.
Note that protection goals are not in all cases symmetric; e.g., often a peer would like to stay anonymous while requiring accountability of the counterpart (Wolf and Pfitzmann, 2000). The concept of ''multilateral security'' (Rannenberg et al., 1999) takes this into account: it supports the peers in expressing their protection goals. Security conflicts are recognized and compromises negotiated. Each peer can then enforce his/her protection goals within the agreed compromise. This concept implements maximal security with minimal assumptions about others.

As already outlined above, the use of user-controlled identity management can help the peers to protect their participation in interactions by determining the explicit linkability of (pseudonymous) peers to interactions and of interactions to each other (Pfitzmann and Hansen, 2006). Digital information can be kept confidential during transmission by encryption. Its integrity and authenticity can be ensured by adding digital signatures (and, if necessary, time stamps). These measures need appropriate public-key infrastructures. The correctness of information can be tested by third parties. Fair exchange can ensure that both peers are equally satisfied with an interaction. With these measures, legal enforceability can be reached, but disputes between peers have to be resolved outside the community or network in a legal process based on national and international law.
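The integrity and authenticity protection mentioned above can be illustrated with a short sketch. Real deployments would use public-key digital signatures backed by a PKI, as the text notes; for a self-contained example we substitute an HMAC, which provides the same integrity and authenticity guarantees under a pre-shared key (but, unlike a signature, no accountability towards third parties):

```python
import hashlib
import hmac

def protect(key: bytes, message: bytes) -> bytes:
    """Compute an authentication tag over the message.
    Stand-in for a digital signature in this sketch."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check integrity and authenticity; any manipulation of the
    message invalidates the tag. Constant-time comparison avoids
    timing side channels."""
    return hmac.compare_digest(protect(key, message), tag)
```

A receiving peer recomputes the tag and compares; a tampered message fails verification, which realizes the detection of unauthorized manipulation demanded by the integrity goal in Table 2.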
4.2. Reducing misbehavior by reputation systems
Nevertheless, interaction offers peers several possibilities for misbehavior – by technical means alone, misbehavior cannot be prevented. Peers still have to make assumptions about the probable trustworthiness of others: if it was not possible to agree on correct behavior explicitly, or if the involvement of third parties to check the correctness of information is too expensive, peers have to rely on the
level of trustworthiness they experienced in former interactions. Confidentiality of transmitted information means both that during the transmission the information should be kept secret and that the peers who receive it should not redistribute it. This requires trust in the correct behavior of those peers – especially because in many cases it would not be possible to prove that confidential information leaked out at a specific peer. Legal enforceability helps to ensure a peer's correct behavior. This holds especially for professional interaction and interaction between an individual and an organization. But many interactions between individuals might be more informal, or it might be too expensive to enforce liability.

It often occurs that peers interact with each other only once. To help new interactors estimate the others' behavior, reputation systems have been designed and established that collect the experiences former interactors made. The most popular example of a reputation system is implemented by eBay (http://www.ebay.com/). Since 1995, eBay has been a provider of an electronic marketplace where registered users are allowed to sell and buy products. Its reputation system collects the experiences sellers and buyers gained. After each exchange, buyers and sellers may give marks and additional comments to each other, which are added to the members' public reputation. eBay provides a centralized reputation system where all reputation information is stored and managed on its central servers. But other designs are also possible; e.g., reputation might be distributed among the peers and requested in the case of an interaction.

Reputation systems can only give a clue how others might interact in the future because, e.g., peers have different expectations, former interactors may have lied about others' behavior, or peers may suddenly change their behavior.
But despite these uncertainties, a usually large number of reputations and an honest majority of former interactors can decrease the occurrences of misbehavior in future interactions. This means, reputation systems do not make expensive accountability measures (like, e.g., digital signatures under agreements made) obsolete, but aim to reduce the cases where expensive legal enforceability of these measures might become necessary.
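The centralized, eBay-style design described above can be sketched in a few lines. The class and method names are illustrative assumptions, not an actual eBay API: a central server stores the marks and comments peers give each other after an exchange, and a member's public reputation is the aggregate of all marks received.

```python
from collections import defaultdict

class ReputationServer:
    """Central server storing marks (+1, 0, -1) and comments per member."""

    def __init__(self):
        self.marks = defaultdict(list)   # member -> list of (mark, comment)

    def rate(self, rater, ratee, mark, comment=""):
        if mark not in (-1, 0, 1):
            raise ValueError("mark must be -1, 0 or +1")
        self.marks[ratee].append((mark, comment))

    def reputation(self, member):
        """Public reputation: the sum of all marks received."""
        return sum(m for m, _ in self.marks[member])

server = ReputationServer()
server.rate("buyer_a", "seller_x", +1, "fast shipping")
server.rate("buyer_b", "seller_x", +1)
server.rate("buyer_c", "seller_x", -1, "item damaged")
print(server.reputation("seller_x"))   # 1
```

A distributed design, as mentioned above, would replace the central store with reputation records held by the peers themselves and fetched on demand before an interaction.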
4.3. Building blocks for community-enhanced identity management

We have seen that in communities, individual users have specific requirements for identity management, based on the necessity of managing potentially highly dynamic, multilateral interactions between peers. Building blocks covering these needs are introduced in this section, ordered according to their typical usage procedure.
4.3.1. Community selection and forming
When strong privacy mechanisms providing anonymity and/or unlinkability are introduced into online communities or collaborative networks, enhanced means for community selection and forming, i.e., for finding potential collaboration and cooperation partners, are needed.
Looking at real-world processes, mechanisms such as reputation management, advertising, etc. might help. The decision to enter a community is mostly made by getting to know it through advertisements or through recommendations from friends (or from a friend of a friend, and so forth), which corresponds to the reputation concept and may be driven by the social relationships forming social networks. Once more, such decisions depend on the particular context, i.e., factors like trust and dependencies play important roles. From the community's point of view, the question arises how newcomers should be treated. Friedman and Resnick (1998) propose to charge every newcomer an entrance fee or to use cryptographic mechanisms that ensure that peers cannot re-enter a community with a new pseudonym after misbehaving under an old one.
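The problem these proposals address can be made concrete with a toy sketch (all names hypothetical): if pseudonyms are free, a peer can shed a bad reputation simply by re-registering, after which it is indistinguishable from an honest newcomer.

```python
# Toy illustration of the "cheap pseudonyms" problem: free re-registration
# erases negative reputation.

reputation = {}

def register(pseudonym):
    reputation[pseudonym] = 0          # newcomers start neutral

def rate(pseudonym, mark):
    reputation[pseudonym] += mark

register("mallory_1")
rate("mallory_1", -1)                  # misbehavior is recorded...
rate("mallory_1", -1)
register("mallory_2")                  # ...but a fresh pseudonym erases it
assert reputation["mallory_1"] == -2
assert reputation["mallory_2"] == 0   # looks like any honest newcomer
```

An entrance fee makes such re-registration costly; the cryptographic variant prevents it outright.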
4.3.2. Trust management by reputation
Reputation systems (cf. Sections 4.2 and 4.3.1) can be used for trust management in the context of user-controlled identity management. Unfortunately, reputation systems raise new privacy issues, since building up reputation means giving away some privacy in exchange for the benefit of linking reputation to a pseudonymous peer. Pseudonymous reputation in turn increases privacy because the holder is anonymous within the set of all peers having the same reputation. However, if pseudonyms are used for a long time and in many interactions within a reputation system, the set of pseudonyms with the same reputation history will become very small. Privacy-enhancing measures that help to prevent this are (Steinbrecher, 2006):

- Peers use pseudonyms only for a limited time. To maintain the same level of reputation, reputation has to be transferred between pseudonyms, e.g., by anonymous credentials (Chaum, 1985).
- Each peer uses several pseudonyms in parallel and collects reputation for each separately. This reduces his/her overall reputation, but increases his/her privacy.
- The set of possible reputation values is limited. This potentially increases the number of peers having the same reputation.
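The last measure can be sketched as follows. Mapping fine-grained scores onto a few coarse levels enlarges the set of peers showing the same visible reputation, i.e., the anonymity set a holder hides in. The thresholds are illustrative assumptions, not values from the paper.

```python
def coarse_reputation(score, levels=(0, 10, 100)):
    """Return the highest level threshold not exceeding the raw score."""
    return max(l for l in levels if l <= score)

raw_scores = [3, 7, 12, 55, 90, 240]
visible = [coarse_reputation(s) for s in raw_scores]
print(visible)   # [0, 0, 10, 10, 10, 100]

# Six distinct raw scores collapse into three visible values, so each
# peer hides among all peers displaying the same coarse level.
```

The trade-off is obvious: the coarser the levels, the larger the anonymity sets, but the less informative the reputation becomes for trust decisions.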
4.3.3. Awareness
Awareness information is fundamental for motivating community members to participate and for efficient working:

- Privacy and group awareness: own and other users' privacy settings, the recent history of actions within the community, and detailed information about the configuration of the community and its members.
- Context awareness: contextual information describing the environment the users are working in, e.g., place, time, available utilities, etc.
- Informal awareness: many people greatly appreciate additional, implicit information that increases the feeling of being part of a real (and not an artificial) environment (Dourish and Bly, 1992).

Awareness information consists of additional data sets which influence privacy-relevant decisions (e.g., consenting to the disclosure of personal data or configuring privacy settings). However, awareness information may be privacy-sensitive itself, so users may want to restrict its disclosure if it relates to themselves.
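User-controlled restriction of awareness disclosure can be sketched as a simple per-attribute policy filter. The attribute names and policy format are assumptions for illustration only; the point is that each user decides, per community, which awareness attributes may be released.

```python
awareness = {
    "online": True,
    "location": "Dresden",
    "current_activity": "editing section 4.3",
}

# Per-community disclosure policy chosen by the user.
policy = {"online": True, "location": False, "current_activity": True}

def disclose(info, policy):
    """Release only the awareness attributes the user has allowed."""
    return {k: v for k, v in info.items() if policy.get(k, False)}

print(disclose(awareness, policy))
# {'online': True, 'current_activity': 'editing section 4.3'}
```

Attributes without an explicit policy entry default to non-disclosure, which matches the privacy-by-default stance taken throughout this paper.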
4.3.4. Separation of contexts
Communities and collaborative networks do not strive for just one or a few particular objectives – rather, they combine many different sub-scenarios within one environment. Therefore, users have to become more aware of the need to differentiate their privacy requirements. This can be achieved by partitioning the personal data disclosed within different contexts. To give an example, a user participates in an online discussion where she wants to learn about a specific topic. Concurrently, she registers for a course on the same topic, which she intends to complete with a certificate. Since she wishes to start the course within an unbiased environment, i.e., without being recognized, she switches to another pID. Consequently, the system has to be aware of the distinct contexts the users work in, should support managing these contexts, and should assist in selecting the appropriate pIDs (Borcea et al., 2005).
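The core of such assistance can be sketched as a mapping from contexts to pIDs: the system reuses the pID bound to a context, and mints a fresh one when the user enters a context that must stay unlinkable to the others. All names are illustrative; see Borcea et al. (2005) for the actual design.

```python
import secrets

class PIDManager:
    """Tracks which pID a user employs in which context."""

    def __init__(self):
        self.pid_by_context = {}

    def pid_for(self, context):
        """Reuse the pID bound to this context, or mint a fresh one."""
        if context not in self.pid_by_context:
            self.pid_by_context[context] = "pid-" + secrets.token_hex(8)
        return self.pid_by_context[context]

mgr = PIDManager()
discussion = mgr.pid_for("discussion:topic42")
course = mgr.pid_for("course:topic42")
assert discussion != course                             # contexts stay unlinkable
assert discussion == mgr.pid_for("discussion:topic42")  # but stable per context
```

In the example above, the discussion and the course are distinct contexts, so the user appears under two unlinkable pIDs even though both concern the same topic.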
4.3.5. Access
Nearly all online applications require authentication and authorization mechanisms. These mechanisms are mostly derived from the well-known ACL (access control list) or role-based access control approaches. While in the basic ACL approach permissions for operations on objects are listed together with the pseudonym of the authorized person, the role-based mechanism lists permissions for operations on objects together with the according roles; additionally, at least one role has to be assigned to each pseudonym. Since our approach allows for dynamically switching pIDs (and, consequently, the corresponding pseudonyms) depending on the actual context, as well as for diverse role interpretations and kinds of usage, basic ACLs and role-based access control are not suitable here. Instead, we suggest a mechanism inspired by capabilities. In order to avoid linkability of the different pIDs a user employs, anonymous credentials are used instead of capabilities (Franz et al., 2006). This way, access to all kinds of objects in the environment can be controlled independently of the intended organizational structure at the community level within a workspace.
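The capability idea can be illustrated with a toy construction: access is decided by possession of a token granting an operation on an object, not by who the holder is. Note the hedge: real anonymous credentials (Franz et al., 2006; Chaum, 1985) additionally make the showing of a credential unlinkable to its issuing, which this simple HMAC sketch does not provide; all names are assumptions.

```python
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)   # known only to the issuing server

def issue_capability(obj, operation):
    """Issue an unforgeable token for one operation on one object."""
    msg = f"{obj}:{operation}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def check_capability(obj, operation, token):
    """Grant access on possession of a valid token -- identity is irrelevant."""
    return hmac.compare_digest(issue_capability(obj, operation), token)

cap = issue_capability("workspace/doc1", "read")
assert check_capability("workspace/doc1", "read", cap)       # holder may read
assert not check_capability("workspace/doc1", "write", cap)  # but not write
```

Because the check never references a pseudonym, a user may present the same capability under any of her pIDs without creating a link between them at the access-control layer.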
4.3.6. Negotiation of policies
Traditional identity management scenarios are designed for rather bilateral settings, where the indication of strict policies on, e.g., which personal data to disclose to whom, or which security mechanisms to apply for securing a transaction, allows for quite straightforward negotiations between two peers (usually a client–server pair). In multilateral scenarios, in contrast, the users' personal requirements may diverge considerably. Even if the users determine specific policies – if those policies are conditional on the behavior of others, the negotiation processes may become very complex and hinder the actual work. Since current research on policy-based negotiation primarily focuses on traditional IMS scenarios (cf. Section 3), conceptual designs and implementations of such complex negotiations are still open research questions.
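Why the multilateral case is harder can be shown with a deliberately simple sketch: for unconditional policies, negotiation is just intersecting the sets of options each participant accepts, yet already with three peers a common option may not exist at all. The policy sets and option names are illustrative assumptions; conditional policies, as discussed above, are not captured by this model.

```python
def negotiate(policy_sets):
    """Return the options every participant accepts (may be empty)."""
    common = set(policy_sets[0])
    for p in policy_sets[1:]:
        common &= set(p)
    return common

# Bilateral case: a straightforward intersection succeeds.
print(negotiate([{"tls", "plain"}, {"tls"}]))            # {'tls'}

# Multilateral case: no single option satisfies all three peers.
print(negotiate([{"tls", "plain"}, {"tls"}, {"plain"}])) # set()
```

An empty result forces a fallback (renegotiation, exclusion of a peer, or abandoning the interaction), and with policies that are conditional on others' behavior even computing the candidate sets becomes non-trivial.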
4.3.7. Workflows
By adapting the organization of their personal lives to new technical possibilities, individuals increasingly relocate administrative tasks, like time management or arrangements between community members, to the technical level. While comparable workflows between organizations and individuals are quite well specified, the formalization of workflows in personal lives is much more complex and dynamic, which poses particular challenges for the assisting system: it must respect the specific privacy and identity management concerns of the individuals, and it must allow the process flow to be adapted during each phase of the workflow. Generic building blocks of workflows should be offered to the users. With respect to privacy, workflows should not allow a user to be recognized when he/she re-uses the same building block under different pIDs. Therefore, it might be reasonable to allow the export of building blocks to other users in order to increase the anonymity set. Of course, in this case the building blocks have to be sanitized to avoid linkability. To enable users to individually construct their own workflows from the introduced building blocks, users need a framework for composition and modification, and the building blocks should be designed to support these operations. Overall, the framework and the set of building blocks form an identity management meta-system (IMMS).
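The sanitization step mentioned above can be sketched as stripping all user-specific metadata from a building block before export, so the block cannot be linked back to its author or to the pIDs it was used with. The field names are illustrative assumptions, not part of the paper's design.

```python
# Metadata fields assumed to be linkable to the exporting user.
PERSONAL_FIELDS = {"author_pid", "created_at", "last_used_with"}

def sanitize(block):
    """Return a copy of the building block without linkable metadata."""
    return {k: v for k, v in block.items() if k not in PERSONAL_FIELDS}

block = {
    "name": "schedule-meeting",
    "steps": ["propose slots", "collect votes", "confirm"],
    "author_pid": "pid-1a2b3c4d",
    "created_at": "2006-03-01",
}
print(sanitize(block))
# {'name': 'schedule-meeting', 'steps': ['propose slots', 'collect votes', 'confirm']}
```

Only after such sanitization does exporting a block actually enlarge the anonymity set: every importer then runs a byte-identical block, so its use no longer singles out the original author.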
5. Conclusion and outlook
User-controlled identity management is needed to enable trustworthy privacy. Many building blocks for user-controlled identity management are already available. But still, there are major challenges:

- An infrastructure for comprehensive functionality is not in place, although a comprehensive identity management meta-system is urgently needed.
- Current designs of user-controlled identity management focus mainly on the relation individual – organization. With an increasing number of relations between individuals, identity management will be needed for these relations as well.
- For multilateral interactions of various characteristics within communities, further building blocks are needed, such as reputation management, providing awareness information in a user-controlled way, multilateral negotiation, and multilateral AAA (e.g., by voting).
- For various applications, identity management has to be blended into generic, but individualizable workflows to fulfill individual needs without burdening the user with too many decisions regarding identity management. How this can be achieved is a current research question.
- Besides the functionality obviously needed on the user's side, interacting with others requires obligation management for easier and more trustworthy compliance with the policies agreed for particular personal information of the interactor. These policies may be negotiated individually, or they may be given by law.
Identity management, user control, and privacy enhancement are all important as individual aspects, and even more so in their synthesis. However, we must not forget that all these properties are only secondary properties of applications: users use applications to communicate, maintain relationships, do business, and the like. Thus, the main challenge when designing a comprehensive identity management infrastructure for multilateral interactions is to support the users in these primary properties of the system they use.
References
Bauer M, Meints M, Hansen M, editors. D3.1: structured overview on prototypes and concepts of identity management systems. Deliverable 3.1 in the network of excellence FIDIS – future of identity in the information society, V1.1, http://fidis.net/fileadmin/fidis/deliverables/fidis-wp3-del3.1.overview_on_IMS.final.pdf; September 2005 [current May 2006].
Borcea K, Donker H, Franz E, Liesebach K, Pfitzmann A, Wahrig H. Intra-application partitioning of personal data. In: Proceedings of workshop on privacy-enhanced personalization (PEP 2005), Edinburgh, UK; 2005, http://www.isr.uci.edu/pep05/papers/borcea-pep.pdf [current May 2006].
Chaum D. Security without identification: transaction systems to make big brother obsolete. Communications of the ACM October 1985;28(10):1030–44, http://chaum.com/articles/Security_Wthout_Identification.htm [current May 2006].
Dourish P, Bly S. Portholes: supporting awareness in a distributed work group. In: Proceedings of ACM CHI'92 conference on human factors in computing systems; 1992. p. 541–7, http://www.ics.uci.edu/~jpd/publications/1992/chi92-portholes.pdf [current May 2006].
Franz E, Böttcher A, Wahrig H, Borcea-Pfitzmann K. Access control in a privacy-aware eLearning environment. In: Proceedings of AReS 2006, workshop on security in eLearning (SEL), Vienna; April 2006.
Friedman EJ, Resnick P. The social cost of cheap pseudonyms. Journal of Economics and Management Strategy 1998;10(2):173–99 [Version of 1998: Mimeo, Version of 2001], http://www.si.umich.edu/~presnick/papers/identifiers/081199.pdf [Version of Aug 1999, current May 2006].
Hansen M, Berlich P, Camenisch J, Clauß S, Pfitzmann A, Waidner M. Privacy enhancing identity management. Information Security Technical Report 2004;9(1):35–44 [Elsevier].
Pfitzmann A, Hansen M. Anonymity, unlinkability, unobservability, pseudonymity, and identity management – a consolidated proposal for terminology. Working paper v0.28, http://dud.inf.tu-dresden.de/Anon_Terminology.shtml; May 2006 [current May 2006].
Rannenberg K, Pfitzmann A, Müller G. IT security and multilateral security. In: Müller G, Rannenberg K, editors. Multilateral security in communications. Technology, infrastructure, economy, vol. 3. München: Addison-Wesley; 1999. p. 21–9.
Steinbrecher S. Design options for privacy-respecting reputation systems within centralized Internet communities. In: Fischer-Hübner S, Rannenberg K, Yngström L, Lindskog S, editors. Proceedings of the IFIP TC-11 21st international information security conference (SEC 2006), 22–24 May 2006, Karlstad, Sweden. International Federation for Information Processing (IFIP), vol. 201. Springer; 2006.
Wolf G, Pfitzmann A. Properties of protection goals and their integration into a user interface. Computer Networks 2000;32(6):685–700.
Katrin Borcea-Pfitzmann has been a research assistant since 1997, when she received her diploma degree in Computer Science. She has been working as project leader and researcher in different projects in the area of e-learning, where she is also working on her PhD. Her major research interests are new concepts and technologies in the area of collaborative e-learning, privacy-preserving e-learning, and reputation management. Currently, she is working in the projects PRIME, BluES, and privacy-aware e-learning.
Marit Hansen is a computer scientist and is head of the ‘‘Privacy-Enhancing Technologies (PET)’’ Section at the Independent Centre for Privacy Protection Schleswig-Holstein (the state privacy commission), Germany. Since her diploma in 1995 she has been working on security and privacy aspects especially concerning the Internet, anonymity, pseudonymity, identity management, biometrics, multilateral security, and e-privacy from both the technical and the legal perspectives. In several projects she and her team actively participate in technology design in order to support PET and give feedback to policy makers.
Katja Liesebach studied Multimedia and Computer Science. Since 2004, she has been a research assistant working on her PhD at the Department of Computer Science at Dresden University of Technology. She mainly focuses on the educational semantic web as well as on challenges and questions raised by embedding collaborative e-learning into privacy-enhancing environments. In this context, she is also author and co-author of several scientific publications. Currently, she is involved in the projects PRIME and BluES.
Andreas Pfitzmann is a professor of computer science at Dresden University of Technology. His research interests include privacy and multilateral security, mainly in communication networks, mobile computing, and distributed applications. He has authored or co-authored about 120 papers in these fields. He received diploma and doctoral degrees in computer science from the University of Karlsruhe. He is a member of ACM, IEEE, and GI, where he served as chairman of the Special Interest Group on Dependable IT Systems for 10 years.
Sandra Steinbrecher is a scientific assistant in Computer Science at Dresden University of Technology. Since receiving her diploma from the University of Saarland in 2000, she has been working in several projects and areas of privacy, computer security, and cryptography. Her major research interests are the modeling and measurement of anonymity in distributed networks, privacy-enhancing identity management, and the design of privacy-respecting reputation systems.