Making Autonomic Computing Systems Accountable: The Problem of Human-Computer Interaction

Stuart Anderson1, Mark Hartswood1, Rob Procter1, Mark Rouncefield2, Roger Slack1, James Soutter1 and Alex Voss1

1 School of Informatics, University of Edinburgh
2 Computing Department, Lancaster University

1 {mjh|rnp|rslack|jsoutter|av}@inf.ed.ac.uk
2 [email protected]

Abstract

The vision of autonomic computing raises fundamental questions about how we interact with computer systems. In this paper, we attempt to make a start at addressing these questions by outlining their character and proposing some strategies for understanding them better. We take systems management as an activity in which to explore these issues and examine the problem of how we may make autonomic computing systems accountable, in and through interaction, for their behaviour. We conclude that there is no technological solution to this problem. Rather, it calls for designers of autonomic computing systems to engage with users so as to understand at first hand the challenges of being a user.

1. Introduction

The vision of autonomic computing, the automated management of computing resources [4], calls for the development of computing systems whose characteristics include:
• The capacity to configure and re-configure themselves under varying and unpredictable conditions;
• Constantly looking for ways to optimise their workings;
• Possessing knowledge of their environments and the context surrounding their activity, and acting accordingly;
• Anticipating the optimised resources needed to meet a user’s information needs while keeping their complexity hidden.

This vision, we argue, raises fundamental questions about how we interact with computer systems. In this paper, we attempt to make a start at addressing these questions by outlining their character and proposing some strategies for understanding them better. We wish to stress from the outset that we take a very broad view of what ‘interacting with autonomic computing systems’ means in practice. This view assumes, inter alia, an inclusive definition of the notion of user, of interaction styles and of the timescales over which this interaction takes place.

We begin by reviewing the current human-computer interaction paradigm and introduce the notion of system accountability as a primary requirement for making sense of and trusting system behaviour. We then take an example of how accounts are typically provided in interaction and show why they are often inadequate. We consider some approaches to resolving the accountability problem in autonomic computing systems. We conclude by arguing for the importance of designers having a situated understanding of how current and future generations of computer systems are actually used.

2. The Problem of Human-Computer Interaction

Our current ideas of human-computer interaction are premised on the principle that users act and systems react. User actions are performed through the user interface’s input mechanisms and their effects are signalled to the user as system state changes via the user interface’s output mechanisms. In this way, through the ‘on demand’ demonstration of a deterministic relationship between cause and effect, users learn to ‘make sense’ of a system’s behaviour and so to ‘trust’ what it does.

Increasingly, however, we find that this model of human-computer interaction fails to hold. For example, it is difficult for distributed systems, such as the WWW, to sustain observably deterministic behaviour in the face of, for example, unpredictable network delays. Our earlier research shows clearly that users often demonstrate considerable resourcefulness in making sense of, and in coping with, this apparently non-deterministic behaviour [14]. Such coping strategies may often prove adequate for the purposes at hand, but the fact remains that many systems have ceased to be accountable for their behaviour in the ways in which our current ideas of human-computer interaction presume. Being accountable means being able to provide an explanation for ‘why things are this way’ that is adequate for the purposes at hand. Accountability, we find, becomes even more relevant for systems whose role makes it important for users to make sense of their behaviour precisely and accurately [8]. Accordingly, researchers have begun to question how systems might be made more accountable in, and through, interaction, e.g., [3]. The problem is, of course, that what ‘counts’ as an adequate account is a situated matter.

We argue that accountability is a critical issue for autonomic systems. Further, we suggest that there may be different levels of autonomy and that there are attendant levels of accountability that such systems might have. In other words, there are ‘grammars’ of autonomy and attendant levels of accountability. This paper considers these two points as a preface to opening up the issues of sense making, trust and repair in autonomic computing systems. The issues we wish to address are these: what forms might autonomy take and how can it be made accountable (indeed, what character would that accountability take)? These are not trivial issues: they are at the heart of what it means to have autonomic computing systems in our lives and how we can come to trust and repair them.

Drawing on our own studies of pre-autonomic – but still massively holdable-to-account – technologies [8],[14], we will address issues of where we might trust systems, what types of information might be required to have trustable systems, and how we might be able to integrate such systems into the fabric of our daily lives. We want to examine questions around contingencies (how far must, and how far can, the world be ‘tamed’ to be suitable for autonomic computing systems?); the relationships of information, autonomy and context (does more information ‘solve’ the problem of context, and how might this be employed in autonomic computing systems?); and the ways that autonomic computing systems might be designed (can autonomy be a property of a generic system or is there a need to have bespoke systems?). In particular, we will examine the possibilities offered by the work of Dourish and Button [3] and the Gibsonian notion of ‘affordances’ [7]. Dourish and Button build on Gibson and suggest that we can treat accounts of system behaviour given by systems themselves as layered in character and thereby made relevant to different user populations and needs.

3. The Issue of Systems Management

We will take the activity of systems management as the focus for our examination of autonomic computing systems interaction issues. We may define the role of systems management as the fine-tuning of system performance and configuration to meet the specificity of organisational needs and practices, attempting to match these at all times to changes in the organisation’s business context. The point is that systems management as an activity may change, but will not disappear with the advent of autonomic computing systems. Interactionally, systems managers may need to be able to define and manipulate descriptions of organisational goals and configuration policies, and to be able to make sense of system performance data in terms of these descriptions so as to verify that organisational goals are being met, and to identify, diagnose and correct failures in their achievement.

The interactional problems here are, in some senses, quite familiar. They stem from the difficulties of relating high-level, abstract system descriptions, as represented by organisational goals, configuration policies, etc., to low-level, behavioural data, with the added complexity of the likely dissociation of cause and effect because of inertia in system responses to configurational changes. Some of these issues are well understood: the real-time systems control literature, for example, highlights the value of predictive models of system behaviour as a means of dealing with cause-effect dissociation. Our own research into the static configuration of complex software systems and their subsequent deployment suggests there are other issues with which we will have to grapple. For example, we find that there is a tension between strict adherence to principles of ‘good’, standardised organisational practice and the needs that arise in local contexts of use [2],[14]. Resolving this at the system management level may call for sets of configuration rules (often contested between different so-called stakeholders within the organisation) about which actors are allowed to deviate from standard procedures and their supporting system configurations, under what circumstances and in which ways. Our research, however, suggests that software systems architectures, and the additional overheads faced by systems managers, often make this very difficult to achieve in practice [2]. The intriguing question is whether autonomic computing systems offer a way out of this problem by eliminating much (if not all) of the management overheads associated with local configuration management, or make them worse because the system becomes more opaque and complex.

4. An Example of the Problem of Accounts

We now present an illustration of how, even with pre-autonomic technologies, it is difficult for computer systems to provide users with an adequate account of their behaviour. In Figures 1 and 2, we give an example of the use of layered accounts of system behaviour which demonstrates both how these have become commonplace in interaction design and how they still may fall short of what the user actually needs in order to understand or to ‘trust’ the system. Moreover, the example demonstrates the implementation of system management policy, in this case concerning what are considered to be potentially problematic Internet transactions. In what follows, we develop this example to explore the implications of an autonomic approach to security policy implementation.

Figure 1: An example of a ‘level one’ account.

Figure 2: A ‘level two’ account derived from the level one account in Figure 1.

In Figure 1, we see what we may call a ‘level one’ account of a situation in which the user is called upon to make a decision: a firewall has produced an alert concerning an application’s attempt to access the Internet. Clearly, from a system security perspective, access to the Internet for downloading or uploading data to/from the system can be an accountable matter, particularly where such transactions are undertaken by third-party applications in a way that might reconfigure those applications or allow downloaded executables to run.

The firewall’s implementation of policy with respect to these transactions is statically configured and involves calling upon the user to decide if access requests are appropriate and may proceed. Figure 2 shows the ‘level two’ account produced by the system in response to the user’s request for more information. It is important to note that the policy implementation requires user involvement since the firewall software cannot by itself determine if the transaction is to be viewed as a safe one. It is expected that the user has responsibility for deciding whether the transaction should be continued, and that they are in a position to do so – that the user has the requisite knowledge to decide whether the remote site is trustworthy, whether the transaction is legitimate for this application, and so on. Thus, the implementation of the policy can be thought of as partial, requiring the user to ‘fill in the gaps’ on the occasion of its application. A key question is the degree to which it is feasible for automation to close these gaps. We return to this question later in the paper.

What the firewall system dialogues offer are accounts at varying ‘levels’ of description. The ‘level one’ account informs the user that some action is required, and gives some basic details. The ‘level two’ account gives further details about the nature of the transaction at a protocol level. Presumably, the level two account is supposed to help a user reach a decision about whether or not to allow the transaction to proceed, but unless the user has prior knowledge of the trustworthiness or otherwise of upgrade.newdotnet.net, or is able to spot irregularities in the details of the transaction, then such accounts will be of little use to them. The account omits a description of why it was necessary for the application to make this particular transaction at this particular time. To confuse the issue further, applications often comprise a number of modules, and the reports generated by firewalls often refer to modules rather than to the applications to which they belong. An improved basis for decision-making might be afforded by the firewall accounting for the context of the transaction: Has the message appeared in response to the user’s action or due to some background process? Is this a legitimate transaction for this application at this time? What are the potential consequences of the transaction?

One problem is that the designed-for layered account has a finite depth and extent (we give a minimal sketch of such an account at the end of this section); if, when the user has reached the last account, the explanation is still not adequate, the user is still unable to make a decision. Another is that, while it is relatively easy (technically) to supply accounts with increasing depth – where, for example, details of the abstracted levels of a protocol can be ‘unpeeled’ and made available to the user – it is more difficult to increase the extent (or ‘breadth’) of the account, that is, to relate what the application might be trying to achieve, the implications of this in the context of the user’s activity, location, and so on, until eventually (and inevitably) an account is produced that relates organisational concerns, goals and policy. That there are many possible accounts that the firewall may provide (i.e., accounts differing in their depth and extent) raises the question of what sort of account is likely to be most useful.

What our example accounts afford is, as Hutchby points out, “emer(gent) in the context of material encounters between actors and objects” ([9], p. 27). The question is, then, what do these accounts afford? We might start by saying what kinds of accounts they are and, looking at them from the standpoint of a user, we might say that they are technicist accounts. To be sure, an experienced system manager looking at these accounts might know what to do and how the account has been assembled, but what of a more ‘regular’ end-user? How would such a user know what to do and what action to take, as well as what action the system was about to, or able to, take? The firewall application provides a ‘one size fits all’ account, rather than accounts that are ‘recipient designed’ for particular users. One advantage, however, of a firewall application with a statically configured policy is that it is guaranteed to behave deterministically: it will flag all occasions of such transactions unfailingly (unless the user disables the alert for that site, or altogether, in which case it can be depended upon not to provide one). We should also note the role of organisational knowledge here: consider a user who knows what to do when she encounters a situation of this type. She will be able to take action based not on some concerted inquiry into the deep structure of the situation, but on some sui generis knowledge. Yet, there will also be occasions when this is not the case and when an understanding will be required. We will return to this point.
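The sketch referred to above can be given in a few lines of Python. It is entirely hypothetical (the module name, the wording of the accounts and their number are our own invention, not a transcription of the actual firewall dialogues), but it makes the finite depth of a designed-for layered account concrete: accounts are ‘unpeeled’ one level at a time, and once the scripted levels are exhausted, no further explanation is available, however inadequate the last one may be for the user’s purposes.

    # A hypothetical sketch of a designed-for layered account: a fixed sequence
    # of explanations of increasing depth, revealed one level at a time on
    # demand. The wording is modelled loosely on the alerts in Figures 1 and 2;
    # the module name is invented for illustration.

    LAYERED_ACCOUNT = [
        "Level 1: newdotnet_module.dll is attempting to access the Internet. Allow or deny?",
        "Level 2: outbound TCP connection to upgrade.newdotnet.net, port 80 (HTTP).",
        # The designed-for account stops here: nothing relates the transaction
        # to the user's current activity or to organisational policy.
    ]

    def account_at(level: int) -> str:
        """Return the account at the requested depth, or admit that none remains."""
        if level < len(LAYERED_ACCOUNT):
            return LAYERED_ACCOUNT[level]
        return "No further explanation is available."

    print(account_at(0))
    print(account_at(1))
    print(account_at(2))  # the finite depth has been reached

Adding further levels of protocol detail to such a structure is straightforward; adding breadth, that is, relating the transaction to the user’s current activity or to organisational goals, is not something that can be scripted in advance in this way.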

5. Autonomic Computing Systems and Accountability

5.1 The Stranger in the Loop?

Russell et al. observe that: “… automation usually addresses the tasks that can be easily automated, leaving only complex, difficult, and rare tasks. In this way, automation reduces the chance for users to obtain hands-on experience; having been taken out of the loop, they are no longer vigilant or completely unaware of the current operating context. Thus, ironically, autonomic computing can decrease system transparency, increase system complexity and limit opportunities for human-computer interactions, all of which can make a computer system harder for people to use and make it more likely that they will make mistakes” ([10], p. 179).

This is at the heart of the problem: the dichotomy is that autonomic computing systems are designed to work without users’ intervention most of the time – that is their raison d’être – yet there is a sense in which users have to be in ‘the loop’ just enough to allow them not to be strangers to the system and what it is doing. That is to say, one needs to be confident that what the system is in fact doing is what we would hope and/or expect it to be doing. We might think of this as the paradox of interactional satisficing – just enough involvement to know what is going on and what might need to be fixed, without having to tend the system all the time. Yet, as in our example, we see that it is only when things go beyond the rule set by which the system can make decisions itself that the system calls on the user, so to speak. Should the user have been getting on with other things and not monitoring the system – perhaps because they believe that the autonomic system does not need monitoring – then they are confronted with a prompt or system message into which they have to make a concerted inquiry. Harvey Sacks suggested that his interest in interaction was motivated by the need to answer the question ‘why that now?’ – this is the same question that faces the user in our example. However, Sacks was surrounded by persons interacting with each other and thus his phenomenon was not an occult one. Here, however, the user faces the need to put together some idea of what has happened to make things come to this state, and as we can see, there are few resources available (especially if the system does not make these accessible to the user). As a stranger to the system – perhaps faced with the first decision to make in many months – what does the user do and how can she know what her choices will imply? She is, to the extent that the system gets on with things satisfactorily, a stranger to the system and, as Schütz [11] notes, the stranger simply does not have the system of common relevances with the system, so to speak – like a person from another culture, they may well apprehend that the system needs something, but what the right ‘move’ is, is problematic because they do not possess this system of relevances.

5.2 Contextual Application of Policy and Accounts

One called-for characteristic of autonomic computing systems is that they have access to contextual information, i.e., their operational environment, including other systems and the activities – and possibly the intentions – of users. The question is, does such contextual information provide a solution to the problem of making autonomic computing systems accountable? There are two issues here. The first concerns whether contextualising autonomic computing systems makes their actions more reliable. The second is whether this can help shape accounts so that they are seen as more understandable, relevant and timely by their users.

In our previous studies of decision aids in medical work, we find that even where system responses are governed by a relatively simple set of rules, these can be opaque to users of the system, who may begin to view the system as ‘inconsistent’ or adopt mistaken views about its behaviour [8]. Given that an autonomic computing system may be supposed to make operational decisions based on a high-level policy representation, the analogy with a decision aid is a useful one. Key metrics in decision aid evaluation are sensitivity and specificity. In the context of our firewall example, unsafe transactions that are blocked are true positive decisions, unsafe transactions that are allowed are false negative decisions, safe transactions that are blocked are false positive decisions and safe transactions that are allowed are true negative decisions. Clearly, what is needed is a system with a sufficiently high sensitivity (few false negative decisions) and high specificity (few false positive decisions).
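For concreteness, the following Python sketch (a hypothetical illustration with invented counts, not data from any deployed firewall) shows how sensitivity and specificity would be computed from firewall decisions classified in these four ways.

    # Hypothetical illustration: computing the sensitivity and specificity of a
    # firewall's blocking decisions from counts of the four outcome types
    # defined above. All counts are invented for the sake of the example.

    def sensitivity(true_positives: int, false_negatives: int) -> float:
        """Proportion of unsafe transactions that were actually blocked."""
        return true_positives / (true_positives + false_negatives)

    def specificity(true_negatives: int, false_positives: int) -> float:
        """Proportion of safe transactions that were actually allowed."""
        return true_negatives / (true_negatives + false_positives)

    # Invented counts: 90 unsafe transactions blocked, 10 unsafe allowed,
    # 950 safe allowed, 50 safe blocked.
    print(sensitivity(true_positives=90, false_negatives=10))   # 0.9
    print(specificity(true_negatives=950, false_positives=50))  # 0.95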

If sufficiently high sensitivity and specificity are not achieved, then it is difficult to see how the system can be ‘left to its own devices’ in implementing policy – automating operational decisions based on policy may lead to decisions that are less ‘competent’ than if left to human skill and judgement, with hazards for organisational objectives and integrity.

We can modify our firewall example to conform more closely to the vision of an autonomic computing system using contextual information to ‘intelligently’ apply management policies. One can imagine this being done with greater degrees of complexity. To contextualise the security policy in a simple way, a list of ‘trustworthy’ sites might be maintained for various sorts of Internet transactions. Rather than asking the user, the system denies access to non-trustworthy sites. It may become more difficult to make sense of the system’s behaviour, since some transactions of a particular sort may be allowed, and others of the same sort denied. An account would need to be given in terms of the policy’s implementation (i.e., listed and non-listed sites) for this to make sense. If the implementation is too restrictive, then it could quickly become irksome for users who become frustrated in their attempts to carry out their legitimate work. There may also be greater demand on system managers to maintain lists of sites and to field user complaints. A less strict security policy might be to deny access only to a known list of untrustworthy sites. While this is likely to produce fewer rejected transactions, it is possible that users may be lulled into a false sense of security, thinking that whatever transactions are allowed are also safe. Users would still have to be alert to the possibility of false negative decisions.

In order to mitigate some of these problems, one can envisage a more complex agent-based system that actively sought security information (from trusted sources) and maintained an access control list of known trusted and known untrusted sites. The system might also look intelligently for discrepancies in the transaction protocol for signs that a transaction may be untrustworthy. Although this may provide for a more specific policy implementation, it would also result in a system which increasingly behaves in an apparently non-deterministic fashion (for some sites the system may deny access, for others it may allow access, and for a final group it may refer the decision to the user). The availability of contextual information, then, does not make the problem of providing accounts of system behaviour that the user can understand and trust go away; indeed, it becomes more complicated. There are now three requirements these accounts must satisfy if the user is to be able to determine that the system’s behaviour is consistent with the policy: they must provide a representation of the policy, some mechanism for providing evidence that will enable the user to trust that the policy is being conformed to, and some means of accounting for the system’s behaviour in response to user demands.
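By way of illustration only (the policy structure, the list entries other than the site named in Figure 2, and the wording of the accounts are our own invention, not a description of any existing product), such a contextualised policy might be sketched as follows, with each decision carrying an account that refers back to the list that produced it.

    # Hypothetical sketch of a contextualised firewall policy: known trusted
    # sites are allowed, known untrusted sites are denied, and anything else is
    # referred to the user. Each decision carries an account citing the policy
    # list responsible, so that behaviour can be related back to the policy.

    TRUSTED_SITES = {"updates.example-vendor.com"}   # invented entry
    UNTRUSTED_SITES = {"upgrade.newdotnet.net"}      # site named in the Figure 2 account

    def decide(site: str) -> tuple[str, str]:
        """Return a (verdict, account) pair for an attempted transaction."""
        if site in TRUSTED_SITES:
            return "allow", f"{site} is on the organisation's trusted-site list."
        if site in UNTRUSTED_SITES:
            return "deny", f"{site} is on the organisation's untrusted-site list."
        return "ask_user", f"No policy entry covers {site}; the decision is referred to you."

    verdict, account = decide("upgrade.newdotnet.net")
    print(verdict, "-", account)  # deny - upgrade.newdotnet.net is on the ... list.

Even in this simple form, some transactions of a given sort will be allowed and others denied, so the usefulness of the account depends entirely on the user being able to relate the named lists to the policy behind them.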

Perhaps, however, if the contextualisation of autonomic computing system behaviour extends to knowing what the user is doing and what she intends by it, then this can provide a solution to an apparently escalating problem. In fact, there are two issues here. First, our studies show that organisational policies that are implemented without factoring in the user context are likely only to make systems less usable and useful [2],[14]. So, the decision-making context within which an autonomic computing system operates ought to be sensitive to what the user is doing, or intends to do. Role-based security policies, for example, are one attempt to incorporate the user context into system decisions. Here, organisational roles are used as rough and ready ‘models’ of users’ access requirements. Arguably, such static user modelling techniques are better than none at all, but they are brittle in that they seldom capture the full extent of their application. The second issue concerns whether knowing what the user is doing and what she intends by it can be used to shape the accounts the system provides to its users. Here, the prospects are distinctly less promising. This is, as anyone who has had experience of so-called ‘intelligent’ user agents will attest (Microsoft’s ‘paper clip’ is probably the most common example of this user interface technology), a very hard problem with, we argue, no foreseeable solution.

5.3 The Limits of Policy Implementation

Policy cannot be enacted in each and every instance by the system, since there will be occasions when the system would not be able to specify what complying with the policy would be. That is, no rule specifies within itself all instances of its application, as Wittgenstein reminds us ([15], §201). There will, therefore, be times when the user is required to decide what action complies with policy. Just as a no entry sign means no entry on occasions when one is not driving a fire engine to a fire, but entry when one is, so a file might not be downloadable from a site on all occasions save for this one. That is, there are exceptions, and humans are best at coping with these. Of course, there will be times when the exception does, in fact, become the rule and, again, it is up to humans to decide when this will be and to enable systems to make the required changes to realise this. No system, therefore, can be wholly autonomic, as there will at some stage be the need for human input to decide policy and how to realise it – attendant to this is the need to keep the user informed about, inter alia, potential security threats, other changes in the environment and problems with fulfilling the security policy.

This is not a trivial observation, since it turns on the accountability of systems, the ways that they make problems, threats and shortcomings visible to users, and what users do about them. A problematic account might mean that a substantial amount of effort is required to ‘excavate’ the problem and formulate its solution. Therefore, when we talk about accountability we are talking about a pervasive and foundational phenomenon. We are left with the problem of rendering the action (or inaction) of an autonomic computing system accountable as the increasing complexity of the system makes its behaviour more opaque to end users. One solution might be to provide more complex accounts of the system’s behaviour: why a transaction might be available one day but not the next, on one machine but not on another, and so on. It behoves us to suggest what type of account we would add here, and in answer to that question we want to propose not simply one account, but a series of potential accounts linked to what, following [12], we call ‘finite provinces of meaning’. That is to say, in recognising that there is no universal account that would ‘do’ for all practical purposes, we must develop a series of accounts that will do for the situated purposes – the finite provinces of meaning – that users might come up against. (Indeed, we would argue that a substantial part of the problem in the examples given in Figures 1 and 2 is that they are designed to ‘do’ for all practical purposes.)

Here, then, we turn to the notion of glossing. Glosses are “methods for producing observable-reportable understandings ... a multitude of ways for exhibiting-in-speaking and exhibiting-for-the-telling that and how speaking is understood” ([6], cited in [3], p. 16; italics in original). As Dourish and Button [3] point out, unlike machines, humans make their actions available in, and as a part of, their doing them – that is to say, glosses are in vivo phenomena for humans in a way that they are not for machines. We said above that accounts are constituent features of the circumstances they describe and are elaborated by those circumstances – this accountability is in part a gloss – after all, one could not say everything about an activity to a person; there is always something more to say. Yet that does not mean that the gloss is opaque – no, the gloss is for all practical purposes here and now, elaborating and being elaborated by the things that it glosses. It is this that machines cannot do, as Dourish and Button [3] rightly point out.

Looking at the system accounts in Figures 1 and 2, we find that they have been generated by a series of rules, rules decided by the system’s designers when the system was created. These rules are to the effect that “if this or that happens, put up this warning screen and make these choices available” (a minimal sketch of such a rule is given at the end of this section). Leaving aside for the moment the issue of how this could happen in a dynamic system, we might ask what use such an invariant account could be. This becomes especially important if one compares it with the activities of, say, a child on a merry-go-round: the child can tell about their experience as it is going on, and do so in a number of ways that inform and are informed by the activity itself – that is, they can make the situation accountable in myriad ways. Therefore, one might ask the question “how do we get at these situated accounts?” Surely, a computing system cannot be expected to provide such accounts? We agree that it is problematic for a computing system – whatever its purported ‘intelligence’ – to produce such accounts. Our solution is to be found in examining the uses of a system and the potential accounts that might come up, and in engaging with users as to how the accounts might be designed so as to afford the kinds of information required. Users are not uniform, but when we look at policies and organisational arrangements we can see how ‘what usually happens’ and ‘what is policy’ might be resources to afford information, not to some idealised ‘user’, but to a ‘member’ – i.e., someone who knows what is going on around here, what are routine grounds for action and what are exceptions. We must, therefore, examine the constitution of membership by engaging with users in workplaces to develop accounts that afford what Gibson [7] termed ‘information pickup’ – i.e., information for the ‘individual-in-the-social-context’, as Anderson and Sharrock [1] put it. This also suggests that accounts should not be invariant – there will be a need to provide some information about the event within the account – but again, we argue that this can be realised through an engagement with users.
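The sketch referred to above is as follows. It is entirely hypothetical (the event name, warning text and choices are invented) and is intended only to make visible why accounts generated from designer-decided rules of this kind are invariant: the same event always produces the same warning and offers the same choices, whatever the circumstances of its occurrence.

    # Hypothetical sketch of a designer-decided rule of the kind that generates
    # the warning screens in Figures 1 and 2: if a given event occurs, put up a
    # fixed warning and offer a fixed set of choices. The resulting account is
    # invariant; it cannot elaborate, or be elaborated by, the situation in
    # which it is produced.

    RULES = {
        "outbound_connection": {
            "warning": "An application is attempting to access the Internet.",
            "choices": ["Allow", "Deny", "More information"],
        },
    }

    def account_for(event: str) -> dict:
        """Return the pre-scripted warning and choices for an event, if any."""
        return RULES.get(event, {"warning": "Unrecognised event.", "choices": []})

    print(account_for("outbound_connection"))  # same output, whatever the circumstances

It is precisely this invariance that distinguishes such accounts from the glosses that people produce in and as part of their activities.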

6. Conclusions

Autonomic computing systems will not eliminate the need for users to interact with them from a foundation of understanding and trust. In fact, as we have argued, they make this understanding and trust potentially more difficult to achieve. We might ask how far such interaction would be intrusive (would we want to know what the system is doing at all times and, if not, when?). We do not need to know about the workings of the postal service to post a letter. Of course, we might want to inquire into these workings if a letter is late or undelivered. Yet, we would be unhappy if the post office called at midnight to say that our letter had been put into a train or the like. The point is that the system might be inquirable-into but it should not be obtrusive – that is to say, it should afford inquiry while not making its operations obtrusive. In contrast, how far would we accept a more ‘silent’ approach (is the system working and what exactly is it doing?)? This is an issue because it relates to trust: how can we trust a system when we are unaware of what it is doing? Trust is not an either/or category, but depends on situated judgements which themselves turn on accountability; yet if the system does what it does in the background, so to speak, and if it adapts, how can we know what dimensions are trustable?

As we have seen, solutions to this might involve the use of layered accounts that progressively divulge more information to the user on demand. They might also involve the exploitation by autonomic computing systems of contextual information as a means, for example, of guiding their behaviour and of determining what account is likely to be called for by the user at any given moment. We have argued, however, that neither of these approaches can deliver a solution by itself. All such ‘designed-for’ accounts must have finite depth and therefore are limited in their capacity to answer what we take to be the user’s canonical interaction question: ‘why that now?’. Similarly, contextualisation, insofar as it may be applied to the user’s actions, can deliver little real benefit. The overriding questions are: what do accounts generated by autonomic computing systems afford users, how might these accounts be assembled, and for whom? We argue that it is through the consideration of these “seen but unnoticed” [5] issues – a consideration that only comes from engagement with users – that designers will be able to provide the kinds of accountability that users will need in order to make sense of, and to trust, autonomic computing systems.

7. References

[1] R.J. Anderson and W.W. Sharrock. Can Organisations Afford Knowledge? Computer Supported Co-operative Work 1; 143-161, 1993.
[2] S. Anderson, G. Hardstone, R. Procter, and R. Williams. Down in the (Data)base(ment): Supporting Configuration in Organisational Information Systems. 1st DIRC Conference on Dependable Computing Systems, Royal Statistical Society, London, November, 2002.
[3] P. Dourish and G. Button. On “Technomethodology”: Foundational Relationships between Ethnomethodology and System Design. Human-Computer Interaction 13(4); 395-432, 1998.
[4] A.G. Ganek and T.A. Corbi. The dawning of the autonomic computing era. IBM Systems Journal 43(1); 5-18, 2003.
[5] H. Garfinkel. Studies in Ethnomethodology. Englewood Cliffs, New Jersey: Prentice Hall, 1967.
[6] H. Garfinkel and H. Sacks. On Formal Structures of Practical Actions. In J.C. McKinney and E.A. Tiryakian (Eds.) Theoretical Sociology. New York: Appleton Century Crofts, 1970.
[7] J.J. Gibson. The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin, 1966.
[8] M. Hartswood, R. Procter, M. Rouncefield, and R. Slack. Finding Order in the Machine: Computer-Aided Detection Tools in Mammography. In Proceedings of the 1st DIRC Conference on Dependable Computing Systems, Royal Statistical Society, London, November, 2002.
[9] I. Hutchby. Conversation and Technology. Cambridge: Polity, 2001.
[10] D.M. Russell, P.P. Maglio, R. Dordick, and C. Neti. Dealing with Ghosts: Managing the user experience of autonomic computing. IBM Systems Journal 43(1); 177-188, 2003.
[11] A. Schütz. The Stranger. In Collected Papers Vol. III. Dordrecht: Martinus Nijhoff, 1959.
[12] A. Schütz. The Phenomenology of the Social World. Evanston: Northwestern University Press, 1967.
[13] D. Stanyer and R. Procter. Improving Web Usability with the Link Lens. In Mendelzon, A. et al. (Eds.), Journal of Computer Networks and ISDN Systems, Vol. 31, Proceedings of the Eighth International WWW Conference, Toronto, May, 1999. Elsevier, pp. 455-466.
[14] A. Voss, R. Slack, R. Procter, R. Williams, M. Hartswood, and M. Rouncefield. Dependability as Ordinary Action. In S. Anderson, S. Bologna, and M. Felici (Eds.) Proceedings of the International Conference on Computer Safety, Reliability and Security (Safecomp), Catania, September, 2002. Lecture Notes in Computer Science v 2434. Springer-Verlag, pp. 32-43.
[15] L. Wittgenstein. Philosophical Investigations. Oxford: Blackwell, 1953.
