The Implications of Users' Multimedia Privacy Perceptions on Communication and Information Privacy Policies

Anne Adams
Computer Science Department, University College London, Gower Street, London WC1 6BT, UK
[email protected]

Global multimedia communications is a powerful force in advancing the freedom of information. However, with an increase in multimedia data, and in the functionality for accessing and using such data, there are associated risks. The limit of users' acceptability for these risks is of the utmost importance, because perceived infringements of privacy can lead to a rejection of communication technology, thus decreasing its commercial viability. This paper details the findings from three years' research into users' privacy perceptions of three multimedia communication environments: virtual reality, video conferencing and Internet multicasting. People have been interacting with the world and each other for thousands of years, so replicating real world scenarios virtually is only natural. Multimedia environments, however, can distort that world, making natural assumptions inaccurate. The results from these studies show how this has serious implications for present and future approaches to privacy policy making within multimedia environments. Ultimately, to take these findings on board there is a need for a change in the current adversarial approach of many security departments. The major threat to privacy may not necessarily be from malicious intent: all perceived invasions of privacy in these studies occurred unintentionally, but were found to be inadvertently supported by the technology and the policies of implementation.

Privacy has often been suggested as a basic human requirement. The U.S. Supreme Court ruled that privacy is a more fundamental right than any of those stated in the Bill of Rights (Cowan, 1983; Schoeman, 1992). Policy makers for computer systems, however, have the complicated job of weighing the importance of privacy against the call for freedom from censorship (Boyle, 1997). Vitally important in this juggling act is an appraisal of users' perceptions of their privacy within these environments.

This issue is complicated further, in multimedia communications, by the information complexity and the increasing ability to utilise that information in and out of context. In order to devise appropriate and effective privacy policies for these environments it is vital that we accurately identify users' perceptions of privacy within computer-mediated communications. This paper's findings are based on a three-year in-depth analysis of information exchange tasks within various multimedia environments, by users with differing technical expertise.

Social interaction is a complex phenomenon. Although researched for centuries, much of the reasoning behind our social behaviours still eludes us. It isn't surprising, then, that when we sought to devise a virtual world to interact with and through, we should seek to replicate the real world rather than invent a new one (Agre, 1997; Laurel, 1993). As our social sphere is so complex, it also isn't surprising that this replication often develops into a simplification of the world. This paper highlights empirical research identifying that this replication or simplification may be the source of inaccurate assumptions by system designers, security and policy administrators, and ultimately end-users.

(1) Policy Making and the Importance of Users' Perceptions

Escalating access to computerised information, and escalating utilisation of it, are creating an increase in associated privacy risks (Bellotti, 1997; Neumann, 1995; Smith, 1993). Within multimedia communications these risks increase with the information complexity. However, as privacy becomes a prevalent issue within the computer technology of the modern age, the press often exaggerates potential problems as a sensationalistic tactic. Is it any wonder, then, that over the past 30 years there has been an increase in the opinion that computerisation has decreased people's privacy (Kling, 1996; Raab, 1998)?

Computerisation, though, is not the only culprit in people's perception of decreased privacy. Slow-to-react organisations have been noted as playing a key role in its decline, and even if an organisation does act promptly, a lack of standards across organisations may render this pointless (Adams & Sasse, in press; Smith, 1993). There is a need for policy makers to understand how privacy issues arise (Agre, 1997). To develop appropriate and effective privacy policies for computer systems it is vital that they are based on the right information; policy makers must therefore fully understand the risks and capabilities of the technology. Rotenberg (1992) has argued that enlisting the help of computer professionals can greatly help in the production of good computer privacy policies, and various successful instances are cited where members of Computer Professionals for Social Responsibility (CPSR) and other computer professionals have helped in developing effective privacy policies.

However, any expert may have a distorted perspective of the situation and of the potential risks, one that does not reflect the perceptions of those whose privacy needs protecting. As multimedia communication deals with complex information in a new way, the current 'personal information' policy paradigm must be reviewed to assess its appropriateness within this domain. The concept of personal information is a confusing term: the majority of privacy literature uses the expression, but few define what they mean by it. Wacks (1989), however, in his book 'Personal Information: Privacy and the Law', provides a definition: "Personal information consists of those facts, communications, or opinions which relate to the individual and which it would be reasonable to expect him to regard as intimate or sensitive and therefore to want to withhold or at least to restrict their collection, use, or circulation." (p. 26). Two things should be noted from this definition: firstly, that it is the individual's perception of what is intimate or sensitive that is noted as important; secondly, that the information 'relates to the individual'. However, as computers, and thus policy making issues, become more complex, this has often been simplified to 1) others' perceptions of what is sensitive to the individual, and 2) defining information that is personally identifiable as the most sensitive information (Bennett, 1992; Agre, 1997).

The problem with many definitions of 'personal information' is that they concentrate on the information itself rather than on how it is perceived (Davies, 1997). It must also be remembered that pivotal to the privacy concept is a notion of the individual and his or her relationship with society (Wacks, 1989). For us to be private there must be a public environment; privacy, and thus being private, can only be reviewed within that public context (Goffman, 1969; Agre, 1997¹). It has been argued that the purpose of privacy policies is to increase trust in technologies and organisations through procedures that 'take the lid off' personal data-processing media (Bennett, 1992). However, privacy, as with trust, relies on how we perceive it. It is not necessarily important how private or safe we are, but whether we perceive ourselves to be safe and private. Over recent years there have been moves to provide privacy policy feedback to computer users in an aim to increase perceptions of privacy. These 'semantic cueing mechanisms' (e.g. trust badges, opting in vs. opting out), however, rely on accurate appraisals of users' perceptions of privacy. If policies are based on inaccurate users' privacy perceptions they will not address users' current and future fears, and may complicate issues further. Within the fast-changing field of multimedia communications, a need to keep ahead of potential privacy problems has led to an increase in privacy policies based on anecdotal findings (Bellotti & Sellen, 1993; Mackay, 1995). This approach may uncover some important issues, but without a holistic appraisal it may only highlight idiosyncratic problems particular to specific situations and organisational cultures (Dourish, 1993).

¹ As Agre (1997) points out, information is not a commodity but is strongly embedded in the way we live our lives.

(2) Users' Perceptions based on 'Mental Models'

Throughout our lives we develop 'mental models' (internalised mental representations) of the world and our interactions. Mental representations contain both procedures and conceptual structures, based on incoming information being associated with already existing knowledge (Johnson-Laird, 1983). These mental models play a central and unifying role in our conceptions of the world and enable us to predict and interact with it. You may have a mental model of a restaurant: what is expected to be found there (e.g. plates, food, tables, cutlery), what is expected to occur (e.g. ordering food, eating food, paying for that food), and how you should behave (e.g. etiquette, table manners, tipping). These mental models are affected by previous experiences that are often couched within cultural contexts (e.g. US tipping culture, Greek plate throwing). Our mental models cannot be purely cognitive, as they develop within cultures where learning them is part of our assimilation and socialisation (Harrison & Dourish, 1996).

Mental models, therefore, aren't scientifically based but are incomplete, unstable and often superstitiously based (Norman, 1983).

We all assume that in many situations we know the codes of practice of what is acceptable and unacceptable behaviour (e.g. it is acceptable to clap at the end of a theatre performance but not at the end of a funeral service). However, these codes change within different cultures, and these cultures can vary between organisations, cities or countries. I myself found that within New York's Little Italy it was acceptable for people to stare into restaurants, and indeed it has become a pastime for many New Yorkers to own telescopes and view others in their homes. When returning to London I again encountered someone watching me within a restaurant. The restaurant staff, however, vehemently chastised this behaviour and the police were almost called. When in New York my acceptance of the behaviour was governed by viewing others' acceptance of it. However, other community outsiders (non-New Yorkers), seeing that this behaviour was acceptable within this culture, may still have found it invasive and vowed never to return to New York again.

What is of vital importance here is for us to receive feedback on what is acceptable and unacceptable within a culture. This allows non-community members to assess whether or not they can adjust to the cultural changes. Although the divides of what is acceptable or unacceptable can change between societies, most people within those societies know the divides; therefore those who breach those social barriers are usually fully aware they are doing so. Moving between different cultures can be difficult, but the human ability to adapt enables many of us travelling between different cultures to learn what is acceptable or unacceptable from the social cues of others within that culture. Social cues, norms and pressure are therefore exerted to produce informed knowledge of acceptable and unacceptable behaviours.

What is accepted as social or anti-social behaviour, however, can change not only between cultures but between different situations and contexts. This allows us to retain some degree of privacy within certain contexts, where we can have the freedom to express ourselves and our individuality free from social censorship (Schoeman, 1992). Goffman (1969) highlights the importance of the front and back sections of society (the entrance and back entrance to a house, different sections within a house or city), which are often physically presented with visual cues (e.g. a restaurant's front entrance is grand, its back entrance rough). It is noted how some backstreet behaviours are deemed inappropriate on the main street and vice versa. Physical cues for each situation can act as an added cue to appropriate behaviour within that situation. However, mere physical cues are pointless without a shared understanding of the place within that community.

(3) Replicating the Real World Virtually

There have been many accounts of virtual worlds being modelled on the real world. Agre (1997) presents a comprehensive review of the replication of reality by computer designers, as well as the limitations of the 'Mirror World' (Gelernter, 1991) approach when looking to devise privacy enhancing technologies. The mirroring of the real world in a computer format has been a guiding influence, and a source of design flaws, at all levels of computer system design. Neural nets are architectural simulations of the brain, while Virtual Reality has sought to simulate whole environments virtually. Within multimedia communications, real world spatial metaphors are often used to assist and shape interactions (Harrison & Dourish, 1996). Interface design is also renowned for its use, sometimes detrimentally, of real world metaphors, e.g. the desktop metaphor (Lakoff and Johnson, 1980; Laurel, 1993). Even the language used within an interface can be based on metaphors, and has been found to affect the mental model developed by the user (Clarke & Sasse, 1997). Emotive relationships have also been noted as associated with linguistic metaphors². The advantages of this approach in system design are numerous, but two important benefits for the user are:

• An increased sense of familiarity and thus comfort with the environment
• Increased ease in the assimilation and learning of a familiar environment

However, problems with this approach have been highlighted within Human-Computer Interaction research, namely that:

• Confusion and disorientation will occur if there are elements of the virtual world which don't match the real world
• Advantageous elements of the virtual world may be missed, as they don't fit the users' (or designers') mental model of the real world metaphor

² Linguistic metaphors are often used in everyday conversations. An argument, for example, is often referred to as a battle with sides, defenders and attackers, where an argument can be 'shot down'. Emotive associations with this metaphor also exist: to lose an argument is to feel defeated or bad (Lakoff and Johnson, 1980).

The confusion and disorientation noted could be due to a lack of shared social & physical cues within Internet and multimedia communication environments. The Internet, in particular, covers most continents and thus many cultures and yet it can isolate us from the very social cues that allow us to adapt our behaviours accordingly. Within the virtual world there are often no clear communities with cues from others of what is acceptable / unacceptable behaviour. Even within specified communities context cues are often lacking thus not allowing us to accurately judge appropriate behaviours for different situations.

This can lead to confusion regarding where the public/private divide occurs. Laurel (1993) notes that, as with theatre, the interface (i.e. the stage and its actors) is all the user sees or is interested in. This highlights the importance, for the user, of cues that are often missing in interface designs. Harrison and Dourish (1996) argue that it is the sense of place, rather than the physicality of space, that frames our behaviours. Many collaborative and communication environments are designed around replicating real world spaces through spatial metaphors. However, they highlight that these replications do not produce the socially constructed understanding of place we require for mediating our interactions. Ultimately, many cues can be relayed within a virtual format, but much is often missed. Worse than a lack of cues, however, is the presentation of inaccurate cues. If we have an inaccurate mental model of a communication environment we are likely to inaccurately predict its behaviour or act inappropriately. With respect to privacy, my research has highlighted that this will produce the following user responses:

1. The user feels they understand the cues and acts accordingly.
2. Unfortunately, the user is unaware that the system has limits in how far it is actually replicating the real world.
3. The user makes assumptions which are not being fulfilled by the system.
4. When the user realises their assumptions are inaccurate this can result in:
   i) an increase in perceived invasions of privacy,
   ii) an emotive rejection of the technology, and
   iii) decreased trust in the organisation that instigated the technology.

A simple replication of a real world scenario which isn't supported by the system is a frequent cause of the user misconceptions encountered by HCI professionals. There has subsequently been much research within the HCI field into users' mental models (Johnson-Laird, 1983; Norman, 1983), into developing systems around users' mental models (Norman, 1986; Clarke & Sasse, 1997), and into specific metaphors that aid or hinder the development of appropriate mental models (Hutchins et al., 1986; Lakoff and Johnson, 1980; Norman, 1983). Desktop metaphors are often presented as an example of how a user is encouraged in the misconception that they are acting directly upon entities (e.g. documents, files) when they are only acting upon an interface that then interacts with the system. This can lead to an inaccurate mental model of their actions and control within these environments. This paper highlights how many of those misconceptions can lead to privacy invasions.

All of the invasions of privacy (and potential privacy invasions) that occurred within the studies detailed in this paper were unintentional and due to misconceptions of the system or virtual situation. Users retained inaccurate assumptions of what was and wasn't appropriate behaviour, for themselves and for others, within the computer-mediated environments. However, these problems cannot be dealt with using the current policy approach of relating privacy issues and potential solutions only to personally identifiable information. This approach, within multimedia communications, is impractical and over-restrictive, and yet provides ineffective privacy protection. Ultimately it is vital to understand that the virtual world is NOT a simple replication of the real world, and thus assumptions on that basis should not be made. Further research is needed into human perceptions and interactions within this 'New (separate) World' if we are to understand it, let alone manipulate it. I do not argue that we should remake the virtual world removed from the real world, as this would be impossible: every perception we have and every action we take is couched within our experiences and understanding of the social world around us (Goffman, 1969; Giddens, 1984). However, many everyday assumptions we make to help us navigate everyday life are neither supported nor accurate within the virtual world. It is important to understand which assumptions become inaccurate and how this affects further computer-mediated interactions. My research has concentrated on how the system end-user perceives their privacy within technically mediated communications.

(4) Studies

Table 1: Summarised studies from three years' privacy research.

STUDY 1
Title: Users' security perceptions (specifically authentication mechanisms)
Papers: Adams et al. (1997); Adams & Sasse (in press)
Study details: An initial questionnaire of 139 computer users' security perceptions from organisations throughout the world was reviewed, followed by 30 in-depth interviews in a telecommunication organisation and a non-telecommunication-related organisation.

STUDY 2
Title: Privacy perceptions in 'Video Conferencing' and 'Virtual Reality' environments
Papers: Adams (1999)
Study details: 35 undergraduate students at UCL used a videoconferencing system for interactive sessions with each other and a tutor throughout an 8-week network communications course; a series of focus groups throughout the course assessed the system's privacy implications. Participants were initially novice users with high system immersion levels and conscious in-group surveillance knowledge. 9 Ph.D. students at universities in the UK appraised a prototype virtual reality system through a focus group; the system was introduced as a potential information exchange aid to participants whose multimedia communications knowledge varied from novice to experienced. High system immersion levels were ascertained, with conscious in-group surveillance knowledge.

STUDY 3
Title: Privacy perceptions of conference multicasting
Papers: Adams & Sasse (1999b)
Study details: 24 attendees at a conference that was multicast on the Mbone were interviewed in depth. Participants were expert users with varied system immersion levels and conscious in-group surveillance knowledge.

STUDY 4
Title: Perceptions of an incident of privacy invasion
Papers: Adams & Sasse (1999a)
Study details: 46 university departmental members responded to a quantitative/qualitative questionnaire evaluating a video surveillance device positioned in a common room. Participants had low immersion levels in the system and most were without conscious surveillance knowledge or official consent to surveillance.

The findings from these studies are too extensive to detail in this paper. Further information on each study can be found in the published papers (see table 1) whilst a summary of the relevant impact on privacy policies can be found in the following sections. A multimedia communications model will be detailed in a further paper and in the final PhD Thesis.

(5) Limits of 'Personal Information' in Multimedia Communications

Much privacy policy literature centres on concepts of 'personal information'. However, there are many limitations to this concept for the specific nature of multimedia communications. Restricting privacy policy to identifiable data is therefore reviewed within this context, and my research has identified the concept of 'information sensitivity', which is briefly presented.

i) Identifiable but not invasive / anonymous but invasive

All my research highlights how the present privacy paradigm is totally inadequate for multimedia communications. The majority of multimedia communication is directly personally identifiable (e.g. a user's visual image, email address, or name), yet it would be impractical to treat it all as personal information. However, some multimedia environments allow for complete anonymity, which produces the misguided impression that because no 'personal information' is released, users' privacy is secured.

Imagine a situation where a woman goes into a room full of strangers. Everyone seems polite and normal except one person, who keeps standing very close to her, often right in front of her, constantly staring at her.

Every time she moves around, or even out of the room, this person follows her. Whenever she starts up a conversation with this person they ignore her, but they appear to be listening intently when she has a conversation with anyone else. This would appear to be anti-social behaviour, and the woman would be justified in feeling it was invasive. Now imagine that this situation is within a virtual environment where anonymous avatars (graphical representations of the user, e.g. cartoon characters) represent everyone. Would the woman feel less invaded or more so? According to the current privacy policy paradigm the woman has not been personally identified, and yet she noted these behaviours as invasive. This situation actually occurred, and a later investigation proved that these were neither intentional nor malicious anti-social behaviours but entirely interface related (Section 6 part iii). This is an ideal example of how replicating the real world can cause clashes in perceptions, as the virtual world is not accurately replicating reality.

How we are identified relates very strongly to which context we are identified within. Some social psychologists make the distinction between the personal and the social identity (Augoustinos & Walker, 1995; Tajfel, 1981), where the former relates to characteristics that are strictly individual and the latter to an individual's position within a social group. Within the traditional computer privacy paradigm, personal information relates both to information about an individual (our name etc.) and to their social groupings (our ethnic background, political or religious convictions, the area in which we live). This highlights the importance of the individual's place within society. However, it could be argued that the social grouping itself has its own identity, which relates⁴ to the individual. This would mean that although an individual is anonymous, if the social grouping is identified the individual is indirectly identified. Within Britain a recent advertising campaign highlighted this problem when its advertisements detailed a specific street (a social grouping) as containing individuals (not identified) who were breaking the law. People within the street who were not breaking the law reacted very negatively to this portrayal to the world of negative details about their street. Individuals could similarly find it invasive if sensitive information were made public about anonymous individuals from their specific school, church or social group. This brings to the fore the notion of social grouping privacy. It could be argued that as we become larger, more multicultural societies, the smaller social groupings we join, which retain our beliefs, feelings and biases, become more important.

⁴ According to Wacks' (1989) definition of personal information, if the information relates to the individual it could be personal information. The information, however, may not relate directly but indirectly, via a social grouping.

ii) Information Sensitivity

Privacy accounts often make a simple private / not private distinction by devising privacy mechanisms for either all information or just 'personal information', without clearly defining what this term means to the user. My research has identified the concept of information sensitivity.

Users rated certain types of information as having "sensitive" or "private" dimensions, with degrees of sensitivity; in turn, this perception determined the amount of effort expended on reviewing and maintaining that information. Within multimedia communications the information itself can be divided into two broad perceptual categories: primary information, relating to the topic of discussion, and secondary information, which relays other interpretative social/psychological characteristics about the user. Primary information may greatly affect the perceived sensitivity of secondary information, or vice versa. Often, privacy risks associated with primary information are noted and assessed by users and organisations, but the secondary information (a source of many user assumptions and misconceptions) is overlooked. Technical details of a process discussed within an email conversation would be primary information. If, however, a user relayed obviously incorrect technical details, this would portray his lack of knowledge or understanding of the data and thus also be secondary level information. With the increase of multimedia there is also a relative increase in secondary information, e.g.:

• Text (textual cues): the presentation of the information, the type of language used
• Audio (verbal cues): tone of voice, accent/dialect, gaps in conversation
• Video (visual cues): dress and appearance, mannerisms

Users often trade off these types of risks against their assessment of who is viewing the information and what the benefits of the information exchange are. However, if their appraisals of the situation (e.g. who is viewing the information, and in what context) are inaccurate, they may have released information of a higher sensitivity than they wished.
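The notions of graded sensitivity, primary versus secondary content, and receiver-dependent trade-offs described above can be summarised in a small illustrative model. The sketch below is not taken from the studies: the class names, the three-point sensitivity scale and the weighting rules are hypothetical assumptions, intended only to show how a designer might make such a trade-off explicit rather than leaving it implicit in the interface.

    from dataclasses import dataclass
    from enum import IntEnum

    class Sensitivity(IntEnum):
        # Hypothetical ordinal scale; the paper argues sensitivity is perceived
        # in degrees rather than as a personal / not-personal binary.
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass
    class InformationItem:
        description: str
        primary: Sensitivity      # sensitivity of the topic of discussion itself
        secondary: Sensitivity    # sensitivity of interpretative cues (tone, appearance, ...)

    @dataclass
    class Receiver:
        name: str
        personally_known: bool    # known receivers raise the perceived risk of disclosure
        trusted: bool

    def perceived_risk(item: InformationItem, receiver: Receiver) -> int:
        """Toy score combining content sensitivity with who receives it."""
        risk = int(max(item.primary, item.secondary))
        if receiver.personally_known:
            risk += 1             # assumption: disclosure to known receivers feels riskier
        if not receiver.trusted:
            risk += 1
        return risk

    def should_prompt_user(item: InformationItem, receiver: Receiver, benefit: int) -> bool:
        """Ask for explicit consent when perceived risk outweighs perceived benefit."""
        return perceived_risk(item, receiver) > benefit

    if __name__ == "__main__":
        clip = InformationItem("video recording of a tutorial session",
                               primary=Sensitivity.LOW, secondary=Sensitivity.MEDIUM)
        colleague = Receiver("known colleague", personally_known=True, trusted=True)
        print(should_prompt_user(clip, colleague, benefit=2))  # True: prompt before recording

A real system would of course derive such values from user studies rather than hard-coding them; the point is only that the trade-off the paper describes can be represented and surfaced to the user instead of being assumed on their behalf.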

With increased system capabilities and poor interfaces, inaccurate perceptions of situations and potential risks are often portrayed to the user. Once realised, this can make previously established natural interactions inappropriate, making individuals feel awkward, flustered etc. Social perceptions of ourselves may also be disrupted, causing personal, social grouping and even organisational discreditation (Goffman, 1967).

iii) 'Information Sensitivity' and the Information Receiver

Privacy risks associated with the information receiver, such as vulnerability and trust, can restrict self-expression and personal development. This highlights whether it is what is known about a person that is invasive, or who knows it. All of my studies identified that when the user deems information to be potentially sensitive there is a need for greater benefits from disclosure if the Information Receiver (IR) is personally known. It could be argued that because highly sensitive information is more representative of a person, there is a higher risk of embarrassment with a known IR than with a complete stranger. We may not mind how someone living thousands of miles away, whom we are never going to meet, views us, our beliefs and our attitudes, but if he is the son of our next door neighbour the personal risks involved increase ten-fold (or more). However, with a personally known IR, trust stemming from personal experience or from social and organisational norms can encourage a high-risk disclosure, depending on the perceived benefits.

Information sensitivity is not a static factor: it can change depending on who receives the information. I personally wouldn't mind the supermarket knowing what I consume, considering, like many, that it is low sensitivity information. However, if close friends or relatives, who could make value judgements about me, knew how much chocolate or alcohol I consumed, the information would become highly sensitive. Similarly, multimedia information perceived by the user to be of a low or medium sensitivity is often not regarded as potentially invasive. However, if this information is released, without the users' awareness, to 'outsiders', it is often deemed invasive (Goffman, 1981). It is vital, therefore, to assess users' assumptions about the information receiver before distribution decisions are made. In study three, conference organisers decided to broadcast sessions over the conference hotel TV network based on the speakers' acceptance of remote non-attendees viewing the sessions. However, my research identified that, unlike remote viewing, the respondents perceived conference hotel viewing as not important enough to outweigh the privacy risks of broadcasting. It is important to understand privacy trade-offs so that the effects of changing circumstances can be assessed before users lose their trust in the organisation.

iv) 'Information Sensitivity' and its Usage

Traditional data protection systems often define some data to be inherently sensitive⁵, requiring special protection rules, whilst other types of data are deemed to be relatively innocuous⁶.

However, there are valid criticisms of this approach, namely that data sensitivity depends on the context in which it is used (Raab & Bennett, 1998; Dix, 1990). The recording and re-use of information was identified in all studies to be a major factor in potential invasions of privacy. Recording of information increases the likelihood of re-using the information (in its entirety) within a different context, or of editing it. The increase in media, and thus in secondary information, was identified as directly related to an increase in the information's potential invasiveness when re-used in a different context. In study three, broadcast and recorded conference sessions were acceptable to the majority of speakers when viewed within a similar context.

⁵ Information relating to a person's 'racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, or concerning health or sex life'.
⁶ Payroll data, household management data.

However, when respondents were questioned on secondary level information being assessed within a different context, this was found to be unacceptable, e.g.:

i) Sessions used to illustrate mistakes commonly made in presentations.
ii) Sessions evaluated to study the behaviour of "techies".
iii) Sessions reviewed to identify how people from different ethnic backgrounds act and react in an argument.

However, primary level information re-used for a different purpose can also be a source of invasive material.

A popular privacy advocate's web site takes the users’

email address (meant as a means of direct communication) as a form of user ID. The reasons stated behind this information usage are to allow for an ID that is both unique and yet memorable. However, this assumes that a unique identifier (a name) is on a similar sensitivity level as an identification medium for direct communication.

A user

being asked to input his email address as a form of User ID could7 be worried about the distribution of this information and its later usage.

Although these fears can

probably be allayed by further information, it should be asked whether the threat to users’ trust in the organisation procedures and motives are worth risking. This potential privacy problem probably has its roots in the concept of a personal / not personal information distinction.

Organisations often assume that a user providing so

called, ‘personal information’ for accepted organisational practices (e.g providing a service) accepts that this can be used in any way that fits within these parameters. This again makes the mistake of assuming that information remains at the same degree of sensitivity regardless of slight changes in its usage. Editing of information is often a source of users' privacy fears as it is possible to maliciously misrepresent information.
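One way to avoid the sensitivity mismatch described above is to keep the login identifier separate from the contact address, so that the address is only collected for, and only used for, direct communication. The sketch below is a hypothetical illustration of that design choice, not a description of the web site discussed in the paper; the function names and the in-memory store are invented for the example.

    import secrets

    # Hypothetical in-memory store mapping opaque user IDs to contact details.
    _accounts: dict[str, dict[str, str]] = {}

    def register(email: str) -> str:
        """Create an account keyed by an opaque, random ID rather than the email address.

        The email is kept only as a contact channel, so quoting or re-using the
        login identifier elsewhere does not expose a means of direct communication.
        """
        user_id = secrets.token_urlsafe(8)   # unique and short enough to quote back to the user
        _accounts[user_id] = {"contact_email": email}
        return user_id

    def contact_address(user_id: str) -> str:
        """Look up the contact address only when a message actually needs to be sent."""
        return _accounts[user_id]["contact_email"]

    if __name__ == "__main__":
        uid = register("alice@example.org")
        print("log in with:", uid)                     # the ID can safely appear in logs or URLs
        print("mail goes to:", contact_address(uid))   # the address stays internal to the service

Whether such an opaque ID is genuinely 'memorable' is a separate usability question; the point is that the trade-off between uniqueness and sensitivity is made explicit rather than silently re-purposing the email address.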

Editing of information is often a source of users' privacy fears, as it makes it possible to maliciously misrepresent information. Privacy invasions can still occur, without malicious intent, when large sections of low sensitivity multimedia information are used out of context. In study 3 it was identified that a session presenter had been informed by chance, through a colleague, that a session he had presented, which he knew was recorded, had had a section re-used at a seminar as an example of multicasting technology. The respondent (although an advocate of session recording) was concerned because a section of the information might not portray his argument accurately for a non-technical audience, thus potentially making him look ridiculous.

⁷ This potential problem is verified to some extent by the fact that it is addressed as a FAQ on their web page.

The information receivers were obviously not worried about the privacy implications of re-using this information because:

i) The sessions were broadcast and understood to be an open forum.
ii) They were not presenting the session snippet for its primary or secondary source information, but to view the medium itself (quality, reliability etc.).

However, as there is primary and secondary information within the recording, they could not guarantee that this information would not be viewed, and by someone not acceptable to the initial information broadcaster (the system user).

(6) Multimedia Isolation Effects on Perceptions

i) User isolation in virtual environments

As mentioned in Section 3, multimedia communication environments often lack the social, physical and context cues required for users to accurately judge the situation and adapt their behaviour accordingly. This argument is corroborated by my research findings, which highlight how a user can feel isolated and disorientated within these environments. Study one identified that when users feel isolated from social cues they resort to physical cues in the world around them. Respondents not aware of the information's sensitivity or potential security risks responded instead to their physical environment ('well, it's hard to get into this building so we must be relatively safe here'). These findings have serious privacy implications when it is considered that many Internet users communicate from their home, a great source of perceived security from privacy invasion.

ii) User isolation can cause inaccurate trade-offs

Within many multimedia situations users trade off what they perceive to be potential privacy risks against perceived benefits, trust in the information receiver, or low probabilities of risk (Bellotti & Sellen, 1993; Adams, 1999; Adams & Sasse, 1999a & b).

However, these assessments are based on assumptions which the technology can make inaccurate. If a system presents a real world scenario that appears fully realistic, but the cues offered are a distortion of what is actually occurring, inappropriate trade-offs may occur. A system allowing the IR to freeze frames, so that they appear to be avidly viewing but have instead gone to make a cup of tea, could produce an inaccurate appraisal of their attention within the interaction. This, in turn, produces further potential privacy mismatches between the actual IR and what the user perceives as the IR (if, for example, someone else views the meeting from the IR's seat). Social psychological trade-offs have been made in personal interactions for thousands of years. The difference here is the system's distortion of the cues we use to interpret how much risk the IR is taking by engaging in interactions, how much they are contributing to interactions, and so on. This inaccurate feedback can cause the user to feel vulnerable and isolated upon realising that inaccurate trade-offs have been made.

iii) Information Receiver isolation can cause unintentional privacy invasions

The user is not the only one who can feel isolated within multimedia environments due to a lack of feedback.

The woman in the previously mentioned invasive virtual environment scenario assumed the other person's invasive behaviours were intentional. However, the person in question had problems co-ordinating his avatar's movements. The woman had made the assumption that standing close to her and in front of her were intentional actions, since most people don't usually have a problem controlling their movement in the real world. She had also assumed that the person would recognise that the avatar's actions were making her uncomfortable (as would happen in the real world with cues such as vocal sighs, looks and body language), yet there was no facial or body language feedback for the other person to receive this information. Finally, the woman had assumed that the avatar was looking at her, even though there were no faces on the avatars, because this fitted in with her mental model of other anti-social behaviours. The person in question was actually totally unaware of standing in front of her, of being too close to her, or of making her feel uncomfortable. A lack of the facial and body cues which we take for granted in real world situations can produce an even more isolating and inhibiting situation for a user. Some researchers have realised the importance of body cues and gestures within these environments and are seeking to replicate them (Rime & Schiaratura, 1991; Marsh, 1998). Ultimately, feedback to the IR is as important as feedback to the user when developing social norms for acceptable behaviours within these environments.

iv) Isolated from user perceptions

Finally, this research has highlighted how system designers, policy makers and organisations can easily become isolated from users' perceptions of environments and privacy risks.

Within study 4 this resulted in a serious invasion of privacy, and in a retrospective, over-reactive departmental policy devised to calm the situation and those who felt their privacy had been invaded. In this scenario the technology instigators had considered the use of the technology in this situation to be applicable. However, they were later identified as misinterpreting the situation as public, while the majority of users perceived it as a private or semi-private situation. Key to their distorted assumptions was their familiarity with the environmental tools, and thus their sense of control over the technology. In many of the studies, designers or system instigators / organisers made decisions based on their own perceptions of the situation, which is why unintentional invasions of privacy occurred and could continue to occur. When the conference cameramen in study 3 panned the conference, they had decided that, since hundreds of people were already viewing the attendees, more people viewing them remotely would not be particularly more invasive. However, the attendees in the real world situation could see who was watching them and who was not, whereas when this situation was made virtual there was no awareness of who was watching. One attendee, who fell asleep in a multicast session, found this out to his cost when his employer (viewing remotely) later reprimanded him for sleeping while attending a conference they had paid to send him to.

(7) The Importance of User Perceptions

To date, all my research has highlighted the importance of users' perceptions when devising multimedia systems and respective privacy policies. However, obtaining users' perceptions is a complicated matter in itself. Users don't always say exactly what they mean and can often appear to contradict themselves. Policy makers understand that the process of seeing into the future can be complex, and yet it is important to identify how people themselves perceive current and future situations. With the help of a social science methodology, 'Grounded Theory' (Strauss & Corbin, 1990), my research has been able to start to unravel users' perceptions of privacy.

i) Collecting perceptions can be invasive

The gathering of privacy perceptions is in itself a sensitive endeavour, which must be carefully undertaken to collect accurate perceptions without being invasive. Even asking a question could be invasive, regardless of whether the information is later anonymised or not. I recently attended a 'Media and Kids' conference at which an organisation presented their system for obtaining information from school children. Computers are given to a school in exchange for children completing questionnaires, and the company is about to progress to questioning children about their family consumption habits. However, the company delegate added, knowing my privacy interests, that as the children all had pseudonyms the information was anonymous and not a privacy risk. As my research in multimedia communications has highlighted, the anonymity of information does not guarantee privacy protection. If your child came home from school and said 'Oh mummy / daddy, I said in a questionnaire at school today that you drink 5 bottles of wine a day, is that right?', would this be invasive? It is the nature of the questions, who is asked, and in what situation, that is important and potentially invasive, not the anonymity of the responses.

ii) Trust and the privacy invasion cycle

Much of my research has highlighted that without feedback on the potential privacy risks⁸ within a particular situation, users will not be able to make an accurate trade-off against the perceived benefits. However, users don't go into every situation ready to assess the benefits and risks of that information exchange.

The degree of trust in the information receiver, the technology, and the technology instigators determines how much of an evaluation is required. Study 3 highlighted a high degree of trust by the conference attendees in the organisation instigating the technology, as well as in the technology in question. With this trust-bond there is an implicit acknowledgement that the trusted party will retain their best interests and not betray that trust. Subsequent breaches in trust can have serious consequences. In study 4, when users' privacy was invaded, their trust in the technology and the organisation had also been breached. Those who felt the most discomfort subsequently rejected transmission of any audio and video data under any circumstances. Users' emotive responses are produced as a defence mechanism to perceived threats, resulting from a lack of control over (potentially detrimental) representations of the self (Goffman, 1969; Schoeman, 1992). Although in study 3 respondents exhibited a high degree of trust, the research identified several occurrences where this was being breached. Users' trust, and thus their apparent lack of concern for privacy issues, should not allow organisations to become complacent in privacy policy and design programmes. My research has highlighted that if this trust is breached, an emotive rejection of the technology and the technology instigator is likely to occur, along with a decrease in previous trust levels. Trust is an elusive but vital factor in information exchange, and thus its maintenance should be paramount in organisational perspectives.

⁸ This does not mean blanket feedback of every risk within a given situation (e.g. Outlook privacy risk feedback), as this only results in users disregarding feedback as pointless. Feedback, as with information sensitivity, should be detailed in degrees of importance.

(8) Conclusions

Ascertaining users' privacy perceptions within multimedia communications is a vital step towards developing more acceptable systems. When reviewing users' perceptions it is important to understand how they are constructed. Mental models have been identified as important in enabling us to interact effectively in social situations, as well as to adapt to new ones. Our models are based on social and physical cues from our environment, as well as on assumptions based on previous knowledge and experience. When environments replicate the real world virtually they allow users a quicker, less stressful entrance to the virtual world. However, if this world is based on limited or inaccurate social and physical cues, users are likely to have inaccurate mental models and to base their interactions on the wrong assumptions.

A comprehensive three-year research programme into users' perceptions of privacy in multimedia communications has produced a detailed picture of the source of many privacy problems. Most of my research into privacy perceptions has highlighted the limitations of the current privacy policy approach for multimedia communications. The concept of 'personal information' is employed by many as an assessment, by others, of users' potential worries regarding some information that is identifiable. However, within multimedia communications most information is identifiable, making it impracticable to treat it all as personal information. Conversely, some multimedia information is individually anonymous and yet can be either personally invasive or reflect badly on the individual's privacy via social grouping privacy invasion. This paper details the concept of 'Information Sensitivity' within the domain of multimedia communications. Not only are the degrees of an information's sensitivity affected by its primary and secondary content (Section 5 part ii), but also by who receives the information (Section 5 part iii) and what it will be used for (Section 5 part iv). The research has also emphasised links between a lack of social and physical cues (feedback) to the user, information receiver and designers / policy makers, and the privacy misconceptions that arise (Section 6). Ultimately, this again highlights the importance of users' perceptions in privacy policy and design procedures. However, it must not be assumed that users will know what is likely to be invasive, as they often rely on trust in technology or organisations to protect their privacy. It must not be forgotten that even though privacy is not an important factor for some, they will react strongly when they see that it has been invaded. This can result in an emotive rejection of the technology and the technology instigators beyond the confines of the present situation. There is a need to counteract these privacy problems before they arise, thus solving them before people lose their trust and become emotive about the situation. There is a culture specific to multimedia communications which allows for the free exchange of information, reliant to some degree on trust.

Terms such as 'telepresence' and 'awareness technologies' are often used to highlight the benefits of information exchange rather than its potential risks⁹. However, this trusting scenario is reliant upon small, within-community settings. Some technologies (media spaces) that are specific to organisational workgroups have been found to reflect different cultural perspectives within the same organisation (Dourish, 1993). Relying on trust to retain privacy within that medium and community can be dangerous. Privacy invasion may occur unintentionally and be reacted to negatively, not by over-reactionaries but by technology advocates who may just be a little less free with their information next time. This may not mean a solution of restrictive clamping-down on multimedia information, but conducive codes of conduct allowing users to assess the relevant potential risks. Communication between the user and the information receiver with regard to privacy is also required, so that a social norm of acceptable behaviour can be constructed. Virtual worlds can be isolating environments requiring new forms of socially communicated norms.

The technical and military-style bias towards security of many security departments has been commented on as a narrow perspective which has produced security mechanisms that are, in practice, less effective than they are generally assumed to be (Davis & Price, 1987; Hitchings, 1995). Previous research (Adams et al., 1997; Adams & Sasse, in press) has highlighted how this authoritarian approach has led to security departments' reluctance to communicate with users with regard to work practices and user requirements. This approach does not fit with modern distributed and networked organisations, which depend on communication and collaboration. However, because of the 'enemy within' security culture of many organisations, user feedback is hard to administer (Adams et al., 1997; Adams & Sasse, in press).

⁹ Davies (1997) also noted how positive social persuasion has been successfully used within Britain to increase security camera usage. However, as with CCTV usage, this acceptable trade-off is reliant on user assumptions which, if inaccurate, can change the acceptability of the scenario.

The present privacy paradigm, of privacy protection being devised for the individual against a malicious invasion of privacy, highlights the adversarial nature of the security domain. However, most of my research has highlighted that socially unacceptable behaviours can be stumbled into through a lack of cues to the user and the information receiver, isolating them from the social norms of acceptable behaviour for that specific situation. Often this is caused by poor interface design, but also by organisations' and system designers' misconceptions of user perceptions. As privacy perceptions are so complicated, and multimedia communications often defy real world assumptions, there is a vital need, now more than ever, to keep in tune with users' perceptions within these environments.

ACKNOWLEDGMENTS

I gratefully acknowledge the help and support of my PhD supervisors Angela Sasse and Peter Lunt, as well as other UCL colleagues. This project is funded by a British Telecom / ESRC CASE studentship S00429637018.

REFERENCES

Adams, A. (1999) "Users' perception of privacy in multimedia communication" in Proceedings (extended abstracts) of CHI'99 (Pittsburgh PA, May 1999), ACM Press, 53-54.

Adams, A., Sasse, M. A. & Lunt, P. (1997) "Making passwords secure and usable" in H. Thimbleby, B. O'Conaill & P. Thomas (eds.), People & Computers XII (Proceedings of HCI'97), Springer, pp. 1-19.

Adams, A. & Sasse, M. A. (1999a) "Privacy issues in ubiquitous multimedia environments: Wake sleeping dogs, or let them lie?" in Proceedings of INTERACT'99 (Edinburgh UK, Sept 1999).

Adams, A. & Sasse, M. A. (1999b) "Taming the wolf in sheep's clothing: Privacy in multimedia communications" in Proceedings of Multimedia'99 (Orlando FL, Nov 1999).

Adams, A. & Sasse, M. A. (in press) "The user is not the enemy" Communications of the ACM.

Agre, P. E. (1997) "Beyond the Mirror World: Privacy and the representational practices of computing" in Agre, P. E. & Rotenberg, M. (eds.), Technology and Privacy: The New Landscape, pp. 29-62. MIT Press, Mass.

Augoustinos, M. & Walker, I. (1995) Social Cognition: An Integrated Introduction. Sage Publications, London.

Bellotti, V. (1997) "Design for privacy in multimedia computing and communication environments" in Agre, P. E. & Rotenberg, M. (eds.), Technology and Privacy: The New Landscape. MIT Press, Mass.

Bellotti, V. & Sellen, A. (1993) "Designing for privacy in ubiquitous computing environments" in G. de Michelis, C. Simone & K. Schmidt (eds.), Proceedings of ECSCW'93 (Milano, Italy, Sept 1993), Kluwer (Academic Press), 77-92.

Bennett, C. (1992) Regulating Privacy. Cornell University Press, London.

Boyle, J. (1997) "Foucault in Cyberspace: Surveillance, Sovereignty, and Hard-Wired Censors" Telecommunications Policy Research Conference, at http://www.si.umich.edu/~prie/tprc

Clarke, L. & Sasse, M. A. (1997) "Conceptual Design Reconsidered: The Case of the Internet Session Directory Tool" in H. Thimbleby, B. O'Conaill & P. Thomas (eds.), People & Computers XII (Proceedings of HCI'97), Springer, pp. 1-19.

Davies, S. (1997) "Re-engineering the right to privacy" in Agre, P. E. & Rotenberg, M. (eds.), Technology and Privacy: The New Landscape, pp. 143-166. MIT Press, Mass.

Davis, D. & Price, W. (1987) Security for Computer Networks. John Wiley & Sons, Chichester.

Dix, A. (1990) "Information processing, context and privacy" in Proceedings of INTERACT'90 (North-Holland, Sept 1990), Kluwer (Academic Press), 15-20.

Dourish, P. (1993) "Culture and Control in a Media Space" in G. de Michelis, C. Simone & K. Schmidt (eds.), Proceedings of ECSCW'93 (Milano, Italy, Sept 1993), Kluwer (Academic Press), 125-137.

Gelernter, D. (1991) Mirror Worlds, or the Day Software Puts the Universe in a Shoebox. Oxford University Press, Oxford.

Giddens, A. (1984) The Constitution of Society. Polity Press, Cambridge.

Goffman, E. (1969) The Presentation of Self in Everyday Life. Penguin Press, London.

Goffman, E. (1981) Forms of Talk. Basil Blackwell, Oxford.

Harrison, R. & Dourish, P. (1996) "Re-Place-ing Space: The Roles of Place and Space in Collaborative Systems" in Proceedings of the Conference on Computer-Supported Cooperative Work (CSCW'96), ACM Press, 67-76.

Hitchings, J. (1995) "Deficiencies of the Traditional Approach to Information Security and the Requirements for a New Methodology" Computers & Security, 14, 377-383.

Hutchins, E. L., Hollan, J. D. & Norman, D. A. (1986) "Direct manipulation interfaces" in Norman, D. A. & Draper, S. W. (eds.), User Centered System Design, pp. 87-124. Lawrence Erlbaum Associates, NJ.

Johnson-Laird, P. N. (1983) Mental Models. Harvard University Press, Cambridge, Mass.

Kling, R. (1996) "Information Technologies and the shifting balance between privacy and social control" in Kling, R. (ed.), Computers and Controversy: Value Conflicts and Social Choices. Academic Press, London.

Lakoff, G. & Johnson, M. (1980) Metaphors We Live By. University of Chicago Press, Chicago.

Laurel, B. (1993) Computers As Theatre. Addison Wesley, New York.

Mackay, W. E. (1995) "Ethics, lies and videotape..." in Proceedings of CHI'95 (Denver CO, May 1995), ACM Press, 138-145.

Marsh, T. (1998) "An Iconic Gesture is Worth More Than A Thousand Words" IEEE International Conference on Information Visualisation, London.

Neumann, P. G. (1995) Computer Related Risks. Addison-Wesley, New York.

Norman, D. A. (1986) "Cognitive Engineering" in Norman, D. A. & Draper, S. W. (eds.), User Centred System Design: New Perspectives on Human-Computer Interaction. Lawrence Erlbaum Associates, New Jersey.

Norman, D. A. & Draper, S. W. (eds.) (1986) User Centred System Design: New Perspectives on Human-Computer Interaction. Lawrence Erlbaum Associates, New Jersey.

Raab, C. D. (1998) "Electronic Confidence: Trust, Information and Public Administrations" in Snellen, I. Th. M. & Van De Donk, W. B. H. J. (eds.), Public Administrations in an Information Age: A Handbook, pp. 113-133. IOS Press, Amsterdam.

Raab, C. D. & Bennett, C. J. (1998) "The Distribution of Privacy Risks: Who Needs Protection?" The Information Society, 14(4), pp. 253-262.

Rime, B. & Schiaratura, L. (1991) "Gesture and Speech" in Feldman, R. S. & Rime, B. (eds.), Fundamentals of Nonverbal Behavior. Cambridge University Press, Cambridge.

Rotenberg, M. (1992) "Inside Risks" Communications of the ACM, 35(4), p. 164.

Schoeman, F. D. (1992) Privacy and Social Freedom. Cambridge University Press, Cambridge.

Smith, J. (1993) "Privacy policies and practices: inside the organizational maze" Communications of the ACM, 36(12), 105-122.

Strauss, A. & Corbin, J. (1990) Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Sage, Newbury Park.

Tajfel, H. (1981) Human Groups and Social Categories. Cambridge University Press, Cambridge.

Wacks, R. (1989) Personal Information: Privacy and the Law. Clarendon Press, Oxford.
