User Experience Design Concepts and Examples for Business Students

Pedro Antunes May 2018


Contents

Foreword

Chapter 1  Introduction
  The UX in UXD
    Elevator panels
    Which button is ground floor?
    The absence of ground floor
    Where do you place the button?
    How do you arrange the other buttons?
    In summary
  The D in UXD
    Toilet paper dispensers
    In summary
  Some Definitions
    UI (User Interface)
    UID (User Interface Design)
    HCI (Human-Computer Interaction)
    HCID (Human-Computer Interaction Design)
    UX (User Experience)
    UXD (User Experience Design)

Chapter 2  User Research
  Users and Work Practices
    One size fits all
    Just make it work
    User-centred design
    Example - camping tents
    Example - air traffic control
    Work analysis
  Deciding on the Users
  Eliciting Data From Users
    Marketing surveys
    Interviews
    Ethnography
    Usability testing
    Contextual inquiry
    Contextual inquiry in practice
    A contextual inquiry project
  User Requirements
    Extracting requirements
    Formalising requirements
    One more note on requirements
  Personas
    Defining user groups
    Types of personas
    Essential elements of personas
    Main goals of personas
    Primary, secondary and negative personas
  Storyboards
  Conceptual Frameworks
    Conceptual frameworks in user research
    Example - Mobile application for firefighters
    Example - Application for helping elderly people

Chapter 3  Structural Design
  Mental Models
    Expectations
    Mental models and design
    Complexity
    Training and instructions
    Familiarity and intuition
    Culture
    Convention
    Consistency
    When users give up
    When to fight back
  Spatial Structure
    Thinking about applications in terms of space
    Structural issues
  UED
    Structural elements

Chapter 4  Layout Design
  Psychology of Visual Perception
  Figure and Ground
  Visual Constructions
  Grouping
    Proximity
    Similarity
    Closure
    Continuation
    Impact on layout
  Symmetry
  Wireframes
    Sketchy wireframes
    Detailed wireframes
    Sequences of wireframes
    Developing wireframes

Chapter 5  Designing Affordances
  Use of Everyday Things
    Everyday design thinking
    Another example with toilets
  Concept of Affordance
  Types of Affordances
    Perceived affordances
    Hidden affordances
    False affordances
  Good Affordances
    Familiarity
    Learning
    Remembering
    Physical issues
    Visibility
    Mapping

Chapter 6  Interaction Design
  Interaction Models
    Model Human Processor
    Seven-Stages Model
    Joint Cognitive Model
  Gulfs of Evaluation and Execution
    Gulf of evaluation
    Gulf of execution
  Control
  Feedback
    Attention
    Gaze
    Comprehensibility
    Credibility
    Timing
  Confirmation
  Transparency
    System status and transparency
    System logic and transparency
  Recognition
  Attention
    Short term memory
    Focus of attention
    Example - Pilots
  Error Tolerance
    Slips
    Lapses
    Mistakes
    Tolerating errors
    Example

Chapter 7  Design Dilemmas
  Flexibility-Usability Trade-Off
    Progressive disclosure
  Efficiency-Thoroughness Trade-Off
  Performance Load
  Fitts’ Law
  Hick’s Law

Chapter 8  Rules of Thumb
  Consistency
    Functional consistency
    Visual consistency
    Interaction consistency
    Feedback consistency
    User consistency
    Applying consistency
  Minimalism
  Golden Ratio
  Worse is Better

Chapter 9  User Experience
  Threshold of Indignation
  Emotion Versus Utility
    Expectation
    Retention
    Engagement
    Empathy
    Safety
    Anxiety
    Competence
    Sense of control
    Arousal
  Beyond Utility
    Non-instrumental goals
    Aesthetics
    Familiarity
    Hedonism

Chapter 10  Prototyping
  Prototyping Mindset
  Purpose of Prototype
    Vertical prototype
    Horizontal prototype
    “T” prototype
    Evolutionary prototype
  Fidelity of Prototype
    Low-fidelity prototype
    High-fidelity prototype

Chapter 11  Evaluation
  Rigorous versus Rapid
    Rigorous evaluation
    Rapid evaluation
  Summative versus Formative
    Summative evaluation
    Formative evaluation
  Quantitative versus Qualitative
    Quantitative evaluation
    Qualitative evaluation
  All Together Now
  Rigorous Evaluation Methods
    Think-aloud protocols
    Questionnaires
    Cognitive walkthrough
  Rapid Evaluation Methods
    Wizard of Oz
    Design walk-through
    Scenario-based evaluation
    Guerrilla usability testing
    Usability inspection
    Heuristic evaluation
    Card sorting

Chapter 12  Design Processes
  Iterative Design Process
    Analysis
    Design
    Implementation
    Evaluation
    Iterative nature of process
    More detailed activities
    Data gathering
    Modelling
    Visioning
    Conceptual design
    Intermediate design
    Detailed design
  Product Design Process
    Market/business analysis
    Product analysis
    Business plan
  Star Process
  Soft Design Process
    Research-centred activities
    Strategy-centred activities
    Idea-centred activities

Chapter 13  Design Paradigms
  Iterative Design
  User-Centred Design
  Participatory Design
  Meta-Design
  Ecological Design
  Design for All
    Accessibility
    Visual impairments
    Hearing impairments
    Motor impairments
    Cognitive impairments
    Diversity

Chapter 14  Design Thinking
  Design Knowledge
  Wicked Problems
    Problem
    Solution
    Process
  Problem Solving
    Experimental learning view
    Evolutionary view
  Knowledge Funnel
    Mystery
    Heuristic
    Algorithm
  Representation

Chapter 15  Design Theory
  Creation of Artefacts
  Reflection in Action
  Design Cognition
    Problem viewing
    Solution orientation
    Experience
    Problem setting
    Problem framing
    Fixation
    Alternatives
    Creativity
    Sketching
    Opportunism
    Time

Chapter 16  Design in Business
  Design-Oriented Organisations
  Embedding Design in Organisations
    The design studio
    The convergence of design and management
  Managers as Designers

Foreword

The primary goal of this e-book is to support the teaching of user experience design concepts to an audience of business students. The selection of topics emphasises the design of business applications, and the examples are intended to stimulate debate in lectures. The structure of the book is constrained by practical considerations: a 12-week course with a total of 24 hours of lectures and a project assignment that includes the development of personas, storyboards, wireframes and a prototype. This e-book is not intended to support research. References have been kept to a minimum and have not been thoroughly revised.

Chapter 1

Introduction

The UX in UXD

Elevator panels

What is the simplest elevator panel? One like the panel in Figure 1.1, which has a single button. Users will never have a problem with this panel - unless, of course, they do not want to go up. If users want to go down, then the panel will have to be more complex, and problems will start to creep in. With two buttons, users must decide where to go and then map that intention to the options the elevator panel gives them. For instance, as shown in Figure 1.2, the panel must have recognisable symbols.

Figure 1.1 Users will never have problems with this elevator panel (Source: Author)


Still, a panel with two buttons is quite simple, and users can reasonably expect they will not have many problems. But what happens when decisions are more complex? We may start seeing some problems.

Which button is ground floor?

One problem to consider is which button takes you to the ground floor. Seems simple? Maybe not: is it 0, 1, G, L, E, or some other option? Maybe the ground floor is 0. This can sometimes be seen, as in Figure 1.3. However, the relationship between 0 and the ground floor is not very obvious. Perhaps because zero means the absence of something: it would suit going nowhere, not going somewhere. The 0 does not seem very natural.


Figure 1.2 Users must decide which button fulfils their intentions (Source: Author)

Figure 1.3 Elevator using 0 for ground floor (Source: Author)


If not zero, then why not 1? Well, we see 1 being frequently used for the ground floor (Figure 1.4). However, this creates a logical problem. If 1 corresponds to the ground floor, then what number corresponds to the first floor? You see, we usually associate the number 1 with first. Of course, there would be no problem if the ground floor were also the first floor. That is the case in the USA, which does not make a distinction between them. But in other places, such as the UK, the first floor is considered to be above the ground floor. As a consequence, deciding which panel button, the 0 or the 1, corresponds to the ground floor becomes a cultural problem. Pressing the 1 may have different meanings for different people.
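The clash between the two numbering conventions can be made concrete with a small sketch. This is purely illustrative; the `floor_label` function and the two-convention model are my own simplification, not something from the book:

```python
def floor_label(storey: int, convention: str) -> str:
    """Return the label shown on the button for a physical storey.

    Storey 0 is the ground floor. Illustrative simplification:
    - "US" convention: the ground floor is labelled "1".
    - "UK" convention: the ground floor is labelled "G",
      and the storey above it is labelled "1".
    """
    if convention == "US":
        return str(storey + 1)
    if convention == "UK":
        return "G" if storey == 0 else str(storey)
    raise ValueError(f"unknown convention: {convention}")

# The same label "1" points to different physical storeys:
print(floor_label(0, "US"))  # a US visitor expects "1" to be the ground floor
print(floor_label(1, "UK"))  # a UK visitor expects "1" to be one storey above it
```

Both calls print "1", which is exactly the cultural problem: the button looks identical, but two users map it to different floors.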

Figure 1.4 Example using 1 for ground floor (Source: Author)

Figure 1.5 This panel uses G for ground floor (Source: Author)


Maybe that is the reason why in some cases we see the use of G (ground), as in Figure 1.5, E (exit), as in Figure 1.6, or even L (lobby). But then we have other problems. The meaning of G may be obvious for English-speaking people, but not so obvious for everyone else. Furthermore, since so many options can actually be found, users will never really know what to choose. They will have to spend time finding out. Confused? Keep calm, things can get worse.

Figure 1.6 This panel uses E to go to the ground floor (Source: Author)


The absence of ground floor

What happens when the elevator panel does not have a button that obviously corresponds to the ground floor? Well, people will get lost and will spend time trying to figure out what is going on. Check for instance Figure 1.7. There is no 0, 1, G, E, or L. So, which button is ground floor? In this case it is the 4 button. Does it make sense? Not really. The problem is that this is very uncommon. Very few people are used to level 4 giving access to the ground floor. Therefore, people unfamiliar with the particular building will struggle. You can see in Figure 1.7 that the panel provides additional details that try to improve the situation: the 4 button is singled out and there is a star close to it. But do people know that the star means "ground floor"? Probably not. Maybe they will not even see the star, because they are looking for buttons. Will people understand that the 4 button has been singled out because it is the ground floor? Maybe after some thought.

Figure 1.7 Unclear ground floor (Source: Author)

Figure 1.8 Instructions have been added to the ground floor button (Source: Author)


Because people do not know, they may have to be told about it. Check the panel shown in Figure 1.8. Instructions have been added to the 4 button, indicating that it corresponds to the ground floor. This seems reasonable, but what do you think about having instructions for something as simple as taking an elevator to the ground floor? Instructions take time to read. They also require effort to locate, analyse and understand. And they add up. Even if each person only takes a fraction of a second to read the instructions, a lot of people will accumulate a lot of wasted time. Simple actions like taking an elevator to the ground floor should not require an instruction manual.
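The claim that instruction-reading time adds up can be made concrete with a quick back-of-envelope calculation. All the traffic figures below are hypothetical assumptions, chosen only to illustrate the order of magnitude:

```python
# Back-of-envelope estimate (all figures are hypothetical assumptions):
# how much collective time is spent reading an instruction label on an
# elevator button?
riders_per_day = 2000   # assumed daily trips to the ground floor
seconds_to_read = 3     # assumed time to locate and read the label
days_per_year = 365

wasted_seconds_per_day = riders_per_day * seconds_to_read
wasted_minutes_per_day = wasted_seconds_per_day / 60
wasted_hours_per_year = wasted_seconds_per_day * days_per_year / 3600

print(f"{wasted_minutes_per_day:.0f} minutes per day")
print(f"{wasted_hours_per_year:.0f} hours per year")
```

Even with these modest assumptions, a label that takes seconds to read accumulates into hundreds of hours of collective wasted time per year.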

Where do you place the button?

Often you do not think about it explicitly, but where exactly do you find the ground floor button? A particularity of this button is that sooner or later everyone will have to use it, because everyone needs to go to the ground floor. In terms of usage frequency, it seems obvious that the ground floor button has higher usage than any other button; after all, anyone who gets in is supposed to get out of the building.

Figure 1.9 Finding the ground floor button is inefficient, because users will take too much time to find it in the panel (Source: Author)


If people use a button frequently, then the button must be easy to find. Users like to perform simple tasks as fast as possible. Check Figure 1.9. Do you think it is easy to find the button to the ground floor? It seems not. Users will probably look to the bottom of the panel, but they will not find the button there. The ground floor button has been placed in the middle, and it is indistinguishable from the others. Every time users would like to go to the ground floor, they will have to hunt around trying to find the button.

Figure 1.10 The ground floor button is conspicuous (Source: Author)


Now analyse the example in Figure 1.10. In this case a colourful decoration has been added to the ground floor button. Users will spend less time finding it. They will be much more efficient.

How do you arrange the other buttons?

Big buildings need lots of buttons, but how are they arranged in the panel? People tend to develop simple mental models of the things they use. In particular, they develop mental models relating elevator panels to physical buildings. A simple mental model can have a rule like this: the logical organisation of the buttons in the panel reproduces the logic of the physical world. That is, if you want to go up, you expect to find the button at the top of the panel, and so you look up. If you want to go down, you look down. This seems a reasonable model of the world.

However, this model can be violated. Check the example shown in Figure 1.11 and consider that you would like to go to the 20th floor. It seems reasonable that the button would be up there, at the top of the panel, just because in the physical world level 20 is really high. However, in this panel, the level 20 button is at the bottom. The mental model has been violated. So what happens? Users will try to find the button at the top of the panel, and then they will keep looking for it while scanning down. The end result is that users will waste a lot of time, and probably patience. And the curious thing is this may happen all the time, even if you use the elevator panel daily. Unless, of course, you develop another rule: look down if you want to go up. However, that rule does not seem very logical or even useful. This example illustrates an elevator panel that is disconnected from reality. The buttons in Figure 1.11 are arranged up-and-down and left-to-right. An arrangement that is left-to-right and bottom-up (as in Figure 1.5) seems closer to physical reality.

Finally, check the panel in Figure 1.12. Do you find anything wrong? Well, levels 2, 6 and 13 are missing. This creates a surprise, because people are used to continuity, both in numbers and in buildings. Furthermore, the panel changes its structure in the middle: from levels 1 to 7 the arrangement is vertical, but from then on it is organised left-to-right. Users will need time to adjust to the structural change. In the end, finding a particular button in this panel seems to be some kind of lottery.

In summary

The UX (User Experience) in UXD (User Experience Design) expresses our fundamental concern with the users: who they are, what they need, what they do, and also how they think. These fundamental elements of the user experience must be considered in any interactive artefact, from elevator panels to complex software applications.

The D in UXD

Toilet paper dispensers

Everybody must go to a public toilet now and then. One of the biggest fears of going to a public toilet? Being out of paper.

Figure 1.11 Addressing the problem by increasing roll size (Source: Author)

The problem is so prevalent that we see lots of different solutions trying to overcome such fears. Let us look at some of them. We can start with the solution shown in Figure 1.11. In this case, the idea is to use a bigger roll than what we use at home. The solution is simple and reduces the probability of being out of paper, which is important. However, we could argue that maybe it does not address the fear of being out of paper: what happens when the roll is almost empty? And when should the roll be changed, around the middle or close to being empty? The temptation will be high to let it run almost empty, to reduce waste. So the roll will frequently run low and people will still fear being out of paper.

The solution shown in Figure 1.12 addresses the problem in a different way. The idea is that if you run out of one roll, then you have a second one available. The fear problem is addressed through redundancy.

Does it work? Not really. And the reason is human behaviour. In an idealised world, users would take paper from one single roll, so that when it finishes a brand new one is still available. The second roll would only be used in case of emergency.

Figure 1.12 Addressing the problem through redundancy (Source: Author)

But are we living in an idealised world? The problem is, not taking paper from the second roll requires altruism: you do not take it from there because you are willing to be good to the next users. However, in the real world, when going to a toilet, altruism is not a big priority. Users will naturally prioritise themselves. And giving priority to yourself means that you will take paper from whichever roll. Altruism requires effort. Not doing it is more efficient. Efficiency wins.

Figure 1.13 Using redundancy in a different way (Source: Author)


So the end result is that, most probably, the two rolls will be emptied out more or less at the same time. As a consequence, the fear persists: you either have two rolls with paper or none, and having none is a big problem. The idea of redundancy is good but does not work in this case.

On with other solutions. The one shown in Figure 1.13 also uses redundancy. Besides having the same problem as the previous one, it introduces a new one: lack of transparency. People will take paper from any roll (lack of altruism). Besides, because the material used in the dispenser is opaque (lack of transparency), you will never know if there is paper there or not. Maybe the rolls only have those tiny pieces of paper hanging out. Therefore, when compared to the other solutions, this one seems to increase the perceived risk.

Figure 1.14 provides another solution that is similar to the previous ones but with an important new feature. First, we note that it provides redundancy: it has two rolls. But then we note that the second roll is not immediately available, as it is covered by a sliding door. The door addresses the problem of altruism. As a user, you are not tempted to use paper from the emergency roll. You would have to be willing to do it, and it would require deliberate effort: you would have to slide the door. In general, most people would not pay that cost. Only the ones without paper in the first roll would pay it, and they would be very happy to do so.

Figure 1.14 Combining redundancy with control (Source: Author)

Still, the solution is not perfect, because the sliding door is opaque. Therefore, users will not know if there is a roll there or not.

Figure 1.15 shows a different implementation of the same solution that solves both the redundancy and transparency problems. Users cannot take paper from the second roll, and they can clearly see whether it is there or not. The interesting aspect of this solution is that it intentionally addresses the user needs and controls the user behaviour by design. That is, the designers thought that they could not depend on what the users would do, and instead found ways to constrain the user behaviour.

Figure 1.15 Combining redundancy with control (Source: Author)


In summary

Even simple things like toilet paper dispensers have to be designed to accomplish specific purposes. In order to accomplish such purposes, designers must think about human behaviour and must find ways to constrain and encourage it. Interestingly, in the cases we discussed, the constraints are not imposed through instruction manuals. Instead, they are integrated in the design of the artefact.

The D (Design) in the UXD (User Experience Design) expresses the idea that the user experience can be improved by design, and that designers have a fundamental challenge in understanding how to influence user behaviour through design. This idea applies both to the design of simple things, such as the toilet paper dispenser, and to the design of complex things such as software applications.

Some Definitions

UI (User Interface)

The UI is a software component mediating users' interactions with computers. In the very early days of computing, the UI was not relevant and was almost non-existent. Computers did not have displays or keyboards, but printers and buttons (Figure 1.16). More significantly, they were operated by the people who developed them, and therefore there were no significant mismatches between the users' goals and the software applications.

Figure 1.16 The UI of early computers consisted of physical switches and gauges (Source: Pargon / Flickr / CC)

As computers and software applications started to spread out of computing labs, they had to be operated by ordinary people, and consequently the UI component gained much more importance. However, the UI was still perceived as a thin layer, a cosmetic thing, necessary but less important than computers and software applications - icing on the cake. Nowadays, computers are everywhere and in various forms, and the UI is recognised as an indispensable component of the ecology made up of users, computers, software applications, and businesses. Actually, it can be said that nowadays the UI is even more important than the computer itself, given that much of the value proposition is offered by the UI. An obvious example is the mobile phone. If the main feature of a mobile phone were making phone calls, then users should be quite happy using a dumb one. After all, most dumb phones have an efficient UI for making phone calls. However, most users have abandoned dumb phones for smartphones. This happened because the value proposition of smartphones extends beyond making phone calls to encompass many other features that better integrate people in society. In the end, all these features emphasise the UI.

UID (User Interface Design)

UID concerns the art and science of creating a UI. In that sense, it can be considered a multidisciplinary practice building upon knowledge from engineering, computer science, the social and managerial sciences, marketing, and the arts, among other fields.

HCI (Human-Computer Interaction)

HCI is a scientific discipline concerned with the study of the interaction between humans and computers, mediated by the UI. The HCI discipline comprehends both theoretical knowledge about how humans interact with computers, and practical knowledge about how UIs are designed and developed for optimal interaction. Interaction is a broad term comprising many cognitive functions, like:
• Seeing, touching, listening, and speaking
• Thinking about tasks and goals, and how to accomplish them
• Consuming, creating and sharing knowledge
• Extending human capabilities such as memory, search, storage, and decision making
• Integrating human and computer control over the execution of complex and risky tasks

HCID (Human-Computer Interaction Design)

HCID concerns the practice of applying knowledge from the HCI field to the development of optimal interactions with computers. Modern HCID requires the conception of complex solutions involving immersive and pervasive cognitive experiences, using interconnected systems, and mixing information processing with communication, interaction and entertainment.

UX (User Experience)

UX concerns the holistic experience of interacting with computers. When compared to the UI concept, UX brings forward challenges that extend beyond purely functional purposes, like satisfaction, empathy and engagement. Figure 1.17 illustrates how a purely functional design, which may be effective and efficient, may result in a UI that is dull and boring, and which should not exist today. When compared to the HCI point of view, the UX concept raises broader concerns about the societal and cultural contexts of computer usage. The notion of UX is intimately related to how computers became embedded in people's lives, from working to entertaining and socialising, and also to how computers nowadays drive human action.

Figure 1.17 An example of purely functional design. Dull and boring


UXD (User Experience Design)

As noted previously, when software systems were confined to computing labs, experiential concerns were not an issue. Software applications were developed based on business and engineering requirements, and people were forced to comply with the built-in functional behaviour. Designers would therefore give primacy to the business needs rather than to the users. Nowadays, the concern for the UX is often what drives the business. A good example of this evolution is online banking, where any neglect of UXD may result in clients showing up at the physical branches and disrupting services there. It is therefore in the best interest of the banking business to offer a good UX. In such a context, UXD is paramount to accomplishing business strategies.


Chapter 2

User Research

Users and Work Practices

One size fits all

A typical mistake of poor UX design is assuming a "one size fits all" attitude, as if users were all alike and would do things in exactly the same way. Would you like an example of a "one size fits all" solution? Check Figure 2.1. It is taken from a system called HR Kiosk, which manages information related to employees. The picture shows how the system handles annual leave. It says that the worker has 283.97 hours available. What is the meaning of that? People usually do not take hours of annual leave; they take days. Suppose you would like to take three days of leave: how many hours is that?

Figure 2.1 Example of "one size fits all" attitude: Showing the balance of annual leave in hours, not days. Who thinks about annual leave in hours?

So the system is providing useless information to a large category of users: people who take full days or even weeks of annual leave. Some stakeholders - or, even worse, nobody - may have convinced the designers that showing hours was a good idea, but it is not.
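The conversion the system forces users to do in their heads is trivial to automate. A minimal sketch, assuming a standard 8-hour working day (an assumption; the real figure depends on the employment contract):

```python
# Convert an annual leave balance shown in hours into working days.
# HOURS_PER_DAY is an assumption (a standard 8-hour working day); the
# actual value depends on the employment contract.
HOURS_PER_DAY = 8.0

def hours_to_days(hours: float) -> float:
    return hours / HOURS_PER_DAY

def days_to_hours(days: float) -> float:
    return days * HOURS_PER_DAY

balance = 283.97  # the balance shown in the HR Kiosk example
print(f"{balance} hours is about {hours_to_days(balance):.1f} days")
print(f"Three days of leave is {days_to_hours(3):.0f} hours")
```

The system could easily do this arithmetic on behalf of the user; instead, it offloads it onto every employee, every time.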

Just make it work

Another typical mistake of bad UX design is the "just make it work" attitude, as if users were not interested parties. Check for instance Figure 2.2. It shows the UI for a system that reports students' course evaluations. This UI may do the job, but do you think this is the best way to do it? It seems complex, its structure looks arbitrary, the options are not obvious, and all those instructions suggest that users have lots of problems using the UI.

User-centred design

A fundamental element of UX design is understanding the users' work practices. Actually, "work" is a restrictive word, since a system may not be strictly concerned with work. For instance, it may support leisure and playing. But, for simplicity, we will assume that users interact with a system with the purpose of working. Now, people do things in different ways and therefore also interact with systems in different ways. We may have to consider variations in the physical work environment, mindsets, variable goals and subgoals, dependencies on other people, different knowledge and experiences, etc. It is of critical importance for UX designers to understand the scope of work practices and to be able to express the variations in terms of user requirements.

Figure 2.2 Example of "just make it work" attitude: All inputs are there, users just have to figure out how to use them

Even though some of this work can be done in the office in a purely conceptual way, designers often need to go out and contact the users in their working environment. Furthermore, to understand users’ practices, UX designers must be able to reasonably understand the domain language. For instance:

• The healthcare domain uses a lot of "doctorish" terms like intern, resident, DOB, HPR, and ICD-10
• In the gaming domain, we find a lot of unique words like "teh" (not a typo!), kludge and "pwned"

All in all, designing the UX without really knowing the users and their language is a bad idea, but it happens too frequently. Often clients are tempted to say, "I know exactly what has to be done, so let's not spend time and money on what I already know". And designers tend to say, "OK, let's do it". These attitudes should be resisted by UX designers. The concept of user-centred design refers to the practice of designing primarily for the users, not for the clients. The differences between users and clients are not minor. And of course, if we seek to design for the users, then we have to understand their work practices.

Example - camping tents

The Quechua camping tent (Figure 2.3) is an interesting example illustrating how work practice can conflict with the interests of other stakeholders, in this particular case marketing. The tent is famous for being easy to open. It only takes 2 seconds to open, marketing says (watch the video at https://youtu.be/EIhlHCBHf-Y). However, the problem is that it is really difficult to close the tent.

Figure 2.3 The Quechua camping tent: Marketing emphasises the idea that opening the tent is super easy, but the problem is closing it (Source: Tael / Wikimedia / Public domain)


There are many videos on YouTube showing people trying to do it for a long time (like https://youtu.be/EIhlHCBHf-Y). Shouldn't user-centred design focus on the whole experience?

Example - air traffic control

What about designing for air traffic control (ATC)? Would you be able to successfully design an ATC UI without extensive consideration for the ATC practices?

Figure 2.4 What an air traffic controller sees on the radar screen. Note this is just one of several UIs the controller has to work with (Source: Timitrius / Foter / CC)

Controllers have to track a large number of airplanes, often more than 16 at a time (Figure 2.4). That is extremely stressful. Losing track of one of the items on the radar screen may represent the death of hundreds of people. Each item on the radar screen provides critical information about an airplane, such as location, altitude, velocity, angle of descent, destination, and a unique identifier for establishing communications (Figure 2.5). All that information is necessary to maintain situation awareness, and must be related to information about other airplanes to avoid, for instance, assigning colliding routes (Figure 2.6).

Figure 2.5 The information displayed about an airplane, an example of conciseness, is critical to reduce the risk of losing track of airplanes (Source: Orion 8 / Wikimedia / CC)

Figure 2.6 The working environment of an air traffic controller. Note that flight strips are used to organise the work practice (Source: NATS Press Office / Foter / CC NC)

Besides the radar, controllers have to interact with other systems, e.g. to manage communications and to get meteorological information. Often, they also look over the shoulders of their colleagues, as they know that some airplanes will cross into their own airspaces, and therefore looking over the shoulder allows them to anticipate what will happen in the near future. For instance, if someone sees that a colleague has lots of airplanes in the nearby airspace, then there is a good chance that workload will soon increase, and thus it is advisable to prepare for that.

Controllers not only have to use different systems, they also have to manage paper flight strips (Figure 2.7). These work as memory extensions. They allow taking notes about the instructions given to airplanes.

Figure 2.7 A paper flight strip provides useful information about a flight, but also allows the controller to take notes (Source: StC / Wikimedia / CC)

As you can see in Figure 2.7, controllers also use flight strips for traffic management, arranging the flight strips according to priority and taking notes about the instructions given to pilots. All in all, the ATC work context is very rich, complex, dynamic and stressful. UX design in this context requires a deep analysis of the whole work context, in particular how controllers build situation awareness using technology and paper. To build such a level of understanding, UX designers must also grasp the specific language used by the community of practice, such as flight strip, ATC and clearance.

Work analysis

The deep analysis of work must consider various facets:
Goals. What the users seek to achieve. Most often, users have multiple ongoing goals.
Work activities. The activities necessary to achieve goals in a particular domain. Some activities entail technology usage, while others do not.
Work practice. How people organise sets of goals and activities using patterns, rituals, traditions, and protocols.
Work context. The particular constraints and possibilities associated with work, including physical, cognitive, social, procedural, and regulatory constraints.
Work domain. The specialised knowledge and language necessary to accomplish work.
Community of practice. The knowledge and experience shared by a group of workers in a certain work domain. Groups tend to develop and share certain ways of doing things, which may be unique within the community of practice.

Deciding on the Users

"Do I really need to define who the users are? My client tells me the system will do this and that, and thus I do not need to spend time and money trying to understand the users." Well, does the client really know how the system will be used? Or what the users really need? Very often that is not the case. To illustrate the matter, consider the iVotronic electronic voting machine shown in Figure 2.8. Conceptually, this machine works in a simple way. First, the user goes to a registration desk, so that the election authorities can check the right to vote. If the right is granted, the user is given a physical device, known as a PEB (Personalised Electronic Ballot), and is sent to the voting booth.

Figure 2.8 Electronic voting machine developed in the US by Election Systems & Software. The slot on the left receives the PEB (Source: joebeone / Foter / CC)

At the voting booth, the user places the PEB in the electronic voting machine, which only then will display a voting bulletin. The slot where users have to insert the PEB is visible on the left side of the machine in Figure 2.8. After voting, the user removes the PEB from the machine and goes back to the registration desk, where the vote is downloaded into an electronic ballot box.

This functionality looks reasonably simple. Besides, the machine has been extensively used in the US and Europe. But now let us look at what happened in a concrete case. Figure 2.9 shows a voting report generated by an iVotronic machine in a voting trial done during the 2004 European elections. The report shows that 711 people used the machine, casting 681 votes and generating 30 undervotes. An undervote is a failed attempt to vote, i.e. the user introduced the PEB into the machine but either did not select a candidate during the available time slot, or removed the PEB before casting the vote.

Figure 2.9 Voting report showing that, of 711 votes, 30 were undervotes (Source: Reproduced from original by the author)

This means that 4.2% of the voters who used the machine did not cast a vote. Why was that? You can always argue that a user may have decided not to vote, but it seems strange that the user goes all the way through the voting process only to decide not to vote at the last second. Perhaps another answer is that users were unable to vote. Why? Maybe because of usability problems. But then this means we should know better who the users are before starting to use electronic voting machines. For the more interested readers, here is a possible explanation for the undervotes: users do not know how to insert the PEB in the slot, take too much time trying different positions until the system times out, and then give up. They do not repeat the whole process, to avoid being seen as defeated by the machine.
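The undervote rate can be checked directly from the figures in the report:

```python
# Undervote rate computed from the figures in the Figure 2.9 report.
total_voters = 711  # people who used the machine
votes_cast = 681    # votes actually recorded

undervotes = total_voters - votes_cast
rate = undervotes / total_voters * 100
print(f"{undervotes} undervotes out of {total_voters} voters = {rate:.1f}%")
```

In other words, roughly one voter in every twenty-four walked away from the booth without a recorded vote.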


Eliciting Data From Users

Multiple methods have been developed to elicit data from users. In the diagram shown in Figure 2.10, we list some common methods, organised along two dimensions. The first dimension distinguishes whether the elicitation process is more centred on exploring or on explaining work. The second dimension distinguishes whether the elicitation process is more centred on generating design ideas or on generating descriptive and explanatory information about work. The diagram positions five different data elicitation methods according to these two dimensions. Next, we discuss these methods.

Figure 2.10 Various data elicitation methods organised in two dimensions


Marketing surveys

The main purpose of this method is gathering data from many users. It is, possibly, the only feasible way to reach a very large number of users. It therefore seems adequate for understanding how users interact with massive commercial systems like online banking or TV boxes. However, this method requires a very good, precise understanding of what data to collect. As frequently noted, "garbage in, garbage out". If you do not know what data you would like to gather, then you will most probably get garbage.

Furthermore, this method has been criticised and is avoided by many UX designers. Some famous quotes highlight the concerns:
"If I had asked people what they wanted, they would have said faster horses" — Henry Ford
"First Rule of Usability? Don't Listen to Users" — Jakob Nielsen
"Self-reported claims are unreliable, as are user speculations about future behaviour" — Jakob Nielsen

The main issue is that users are not very good at explaining what they want, need or even desire. A famous joke in the UX field tells us that if you ask users, they will say "I want a pony", like kids when asked what they want for Christmas. So it is better not to ask what a user wants. Besides, surveys usually return quantitative data. What is the meaning of a survey saying that 23% of users like the colour red in the UI? None. Any other problems? Well, surveys usually have a low response rate. And people seem to be tired of answering surveys anyway.

Interviews

Interviews provide more feedback to the UX designer than surveys. Interviews give more flexibility in asking and answering questions. It is common to change direction during an interview because of something particularly interesting that a user says. Furthermore, interviews yield more concrete data: it is not just about whether you liked a feature or not, it is about why you liked it or not. Finally, interviews allow gathering non-verbal communication. Often users say yes but through body language indicate they mean no.

Interviews are time consuming. This can be compensated by interviewing several people at the same time (very similar to focus groups), but nevertheless it is a problem. Another problem with this method is that success depends on the experience of the UX designer. Since the process is open to surprises and may take unexpected directions, more experience means more capacity to identify what is relevant or not, and what the value of emergent information is. Usually, more experience also means more capacity to engage users in externalising knowledge and experience. Interviews may be problematic in cases where work is very complex. Examples include:
• Emergency management
• Surgical operations
• Flying commercial aircraft

In all these cases, you will find that interviewees find it very difficult to discuss what they do. They will tend to either abstract too much or refer you to the rules. One reason may be that the interviewees are usually requested to talk about work in distant scenarios (in terms of time and place), which makes it difficult to contextualise the discourse. Another reason may be that skilled practitioners are not necessarily good at explaining their skills. (Try to explain why you are a good tennis player.)

Ethnography

Ethnography is a method for gathering data through immersion in a community of practice. For instance, if you would like to understand how foreign exchange trading systems are used, you would have to follow the daily life of traders. The main idea is that the UX designer becomes an invisible observer of users' work activities. This is an information-rich process, the reason why people talk about thick data when referring to ethnography, which may include events, actions, behaviours, attitudes, messages, etc. However, it takes time to become invisible. It also takes a lot of time to observe different work practices. For instance, air traffic controllers experience different periods of stress and relaxation at different times of the year. This means that ethnography may have to be done over a long time period. It is also a wasteful process, in the sense that much repeated information may be collected. Another issue to consider is the negative effect of observers on subjects, a phenomenon known as the Hawthorne effect. And finally, it may also be difficult to translate the acquired data into useful user requirements. (Try getting some useful recommendations by watching a tennis match.)

Figure 2.11 A small usability testing laboratory using a quad multiplexer to record video from multiple sources, including the computer screen, mouse and keyboard usage, and even the users' face (Source: Author)


Usability testing

Usability testing involves conducting highly focussed experiments with real users, usually in lab settings, to observe how they work. Figure 2.11 shows a small usability lab the author developed a long time ago. Besides the computer, which was instrumented to log mouse and keyboard interactions, it also included two video cameras and a multiplexer. The cameras were pointed towards the keyboard, mouse and the user's face. The multiplexer allowed capturing, at the same time, an image of what was going on the computer screen, the user's physical interactions, and the user's facial expressions, which could then be analysed together.

In general, usability testing generates a large amount of low-granularity data, such as keystrokes, mouse movements and facial expressions. It may be difficult and time consuming to analyse all that data. Furthermore, usability tests tend to lack real-world context, since users are usually required to perform fake exercises in a fake working environment. Another issue to consider is that usability testing is highly dependent on the actual UI. Often, small changes in the UI result in significant changes in users' performance, which makes it difficult to analyse work context. Finally, usability testing is not adequate for conceptual evaluation or prototyping, since it requires a relatively complete and functional system.

Contextual inquiry

Overall, we have seen that the previous methods for eliciting data from users have multiple constraints and problems. Contextual inquiry was developed to overcome some of the more fundamental ones. Contextual inquiry is a hybrid between ethnography and interview. It involves:
• Observing users in the workplace to understand work (what they do). These observations can be considered close to ethnography, although with much less immersion in the working environment and much more visibility of the UX designer in the process. Contextual inquiry also tends to take much less time than ethnography
• Interviewing the users in the workplace (what they say). Unlike ethnography, where the UX designer is supposed not to interfere with the work, contextual inquiry requires establishing a dialogue with users, with the purpose of clarifying the details of the work context and obtaining opinions and suggestions
• Discussing how they work (what it is). Engagement between UX designer and users helps identify and explain breakdowns, i.e. what has failed in a particular interaction, within the concrete context of the physical work environment

• Discussing work possibilities (what could be). Engagement between UX designer and users also helps identify workarounds and new possibilities for doing work

Contextual inquiry is not a process for gathering user requirements. The UX designer should not directly ask what a user needs ("tell me what you want"), since the user will usually provide inadequate answers ("I want a pony!"). Instead, the UX designer should ask the user to provide detailed accounts of the work practice. In this process, the UX designer is expected to develop a deep understanding of the work domain, work context, working activities, goals, and the underlying rationale for the working activities, while gathering immediate feedback from the users about the adequacy of the offered rationalisations. This type of relationship between UX designer and user has been coined master-apprentice: the user acts as a master of a particular work domain and the UX designer acts as an apprentice, who has to learn how something is done.

Contextual inquiry in practice

A very important characteristic of contextual inquiry is that it is done in the user's working environment (Figure 2.12). This allows the UX designer to observe work in context and to ask questions and obtain feedback taking any contextual issues into consideration. If contextual inquiry were done in a meeting room, away from the specific work context, both the questions and answers would become more abstract, less realistic, and more difficult to analyse.


Figure 2.12 Contextual inquiry is done in the user’s workplace (Source: Jisc / CC)

Contextual inquiry allows gathering two types of knowledge:
• Knowledge in the head. What the user knows, gathered by asking questions and taking notes about what the user says
• Knowledge in the world. What the user does, gathered by observing and taking notes about physical action

One issue to consider regarding knowledge in the head is that users often know what they do, but find it difficult to explain how they do it. Therefore a good practice is to avoid asking the user directly how work is done. Instead, ask the user to do something, observe, analyse, synthesise, and confirm with the user that you understood it well. Although it may be tempting to videotape the sessions, that should probably be avoided. The main reason is that you would spend too much time transcribing, coding and analysing the information. Note taking should be sufficient to gather design issues and ideas.

A contextual inquiry project

A social development study was conducted by a student seeking to contribute, through design, to reducing elderly people's social disengagement. For multiple reasons, elderly people tend to stay at home, which leads to feelings of isolation and uselessness. The student's view was that, through design, it would be possible to reduce isolation. However, the student did not know much about elderly people or social disengagement. Therefore, contextual inquiries with three elderly people (aged 65, 75 and 77) were set up. The student decided to go along with the subjects to understand their daily activities.

At home, one subject showed a strong connection with the family through pictures placed in various rooms. Going to the community centre was a major event, since it allowed them to meet other elderly people. However, as noted by one subject, a problem contributing to staying at home was the difficulty of moving around a town where cars are often privileged over people. Two examples identified in these sessions were difficulties crossing streets (because of aggressive pedestrian crossing times) and narrow pathways. For another subject, the highlight of the day was going to the school to pick up a grandson. This simple activity contributed to a strong feeling of usefulness. Another subject had a small plot in a community garden, which contributed to having something to do during the day other than watching TV.

From the notes taken in these contextual inquiries, the student arrived at an idea: primary schools could have small community gardens specifically targeted at the elderly population. This would allow elderly people to set a busy daily agenda, mixing gardening with picking up grandchildren from school.

This idea emerged during the contextual inquiry process, while talking with the users. It did not come up in a meeting room or on a drawing board.

References
Beyer, H. and Holtzblatt, K. (1998). Contextual design: Defining customer-centered systems. San Francisco, CA: Morgan Kaufmann.
Bruce, M. and Cooper, R. (2004). Creative product design: A practical guide to requirements capture management. Wiley.

User Requirements

The observation of work practice is expected to generate design ideas and arguments about how work could and could not be done. Usually, the best way to express these arguments is through user requirements. A user requirement is a statement of what is needed in order to support the work practice. Formalising a list of user requirements helps with:
• Defining clear design goals. Even though design should be motivated by a grand vision, design goals are always necessary to make sure the project is going in the right direction, and also to make sure you know when the project has reached the finish line
• Requirements inspection. Someone has to check that the design fulfils the project objectives. This can be done by the user, the client, or the designer. In order to do the inspection, however, a clear list of requirements is necessary
• Contractual obligations. Often UX design projects are formally contracted with a client, and in that case the list of requirements defines the obligations of both parties. This is especially important if there are contractual conflicts by the end of the project
• Client's approval. The client's approval is similar to requirements inspection, although done at the end of the project. Often this has to be formalised, and the list of requirements helps specify clearly what has to be approved
• Specification documents. Nowadays most project teams avoid spending time on specification (thanks to Agile development). However, more traditional clients may still require a long, formal specification document

Note that we use the term "user requirement" and not "client requirement". This is to emphasise the user-centred aspect of design. Even though the UX designer is expected to deliver a UI to the client, there is also an unavoidable responsibility to the user: the UX designer must deliver something that is usable.

Extracting requirements

User requirements can be extracted after contextual inquiry through a deductive process:
• Deducing what the users will do with the UI. Notes taken from contextual inquiries often identify sets of activities that users must accomplish to reach their goals. Notes may also identify missing or alternative activities
• Deducing how the users will interact with the UI. The focus on how is important to make sure that the UI is usable. In this regard, notes are important because they elucidate the work context, e.g. constraints imposed by the environment on the user, such as cognitive effort, learning, stress, etc.

• Deducing when the UI features should be developed. Often notes identify design ideas that may be interesting but not important. User requirements often have to be prioritised according to cost, effort and desirability

The deduction process is often based on text snippets found in the notes gathered during contextual inquiry. For example:
• "User needs to do X; it is very important for the whole task" - X is critical and needs to be supported
• "User does X but it does not work very well" - There may be a breakdown and probably X can be redesigned
• "User could not find X" - Visualisation can be improved
• "User has done many clicks to execute X" - Interaction can be improved
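As a loose illustration of this deduction step, the snippet patterns above could be turned into a rough tagging aid. The function below is a hypothetical sketch: the keyword rules and category phrases are the author's invention for illustration, not an established extraction method.

```python
# Hypothetical sketch: mapping contextual inquiry note snippets to
# candidate design implications, following the example snippets above.
# The keyword matching is illustrative only; real notes need human analysis.

def tag_snippet(snippet: str) -> str:
    """Return a candidate design implication for a note snippet."""
    s = snippet.lower()
    if "very important" in s or "critical" in s:
        return "critical feature: must be supported"
    if "does not work" in s:
        return "breakdown: candidate for redesign"
    if "could not find" in s:
        return "visualisation can be improved"
    if "many clicks" in s:
        return "interaction can be improved"
    return "unclassified: review manually"

notes = [
    "User needs to do X; it is very important for the whole task",
    "User does X but it does not work very well",
    "User could not find X",
    "User has done many clicks to execute X",
]
for note in notes:
    print(f"{note} -> {tag_snippet(note)}")
```

In practice such heuristics would only pre-sort notes; deciding whether a snippet really signals a breakdown or a priority remains the designer's judgement.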

Formalising requirements

User requirements should then be stated in a formal way, which may take different forms. For instance, the Agile community has been converging towards a format known as user stories, which relate users' roles with needs (see Figure 2.13).

Figure 2.13 Conventional structure of a user story

User story
As a [role]
I want [something]
so that [benefit]

Figure 2.14 Conventional structure of an essential use case

Essential use case
[user identification]
[user goal]
[preconditions]
[postconditions]
Course of action:
[user action] [system response]
[user action] [system response]
...
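For illustration only, the two formats above can be captured as simple data structures. This is a hypothetical sketch; the class names, field names and example content are illustrative, not part of any standard notation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: user stories (Figure 2.13) and essential use
# cases (Figure 2.14) captured as simple records.

@dataclass
class UserStory:
    role: str       # As a [role]
    want: str       # I want [something]
    benefit: str    # so that [benefit]

    def render(self) -> str:
        return f"As a {self.role} I want {self.want} so that {self.benefit}"

@dataclass
class EssentialUseCase:
    user: str
    goal: str
    preconditions: list = field(default_factory=list)
    postconditions: list = field(default_factory=list)
    course: list = field(default_factory=list)  # (user action, system response) pairs

story = UserStory("commuter", "to see real-time bus arrivals",
                  "I can minimise waiting at the stop")

buy = EssentialUseCase(
    user="registered customer",
    goal="buy a product online",
    preconditions=["successful login"],       # e.g. user must be logged in
    postconditions=["order confirmed"],
    course=[("select product", "show product details"),
            ("confirm purchase", "display order confirmation")],
)

print(story.render())
```

Keeping requirements as structured records like these makes it easy to render, count, and prioritise them later; plain text or post-its work just as well for small projects.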


Figure 2.15 Conventional structure of an affinity diagram

Affinity diagram
[voice of the customer (green label)]
    [theme (pink label)]
        [issue (blue label)]
        [issue (blue label)]
        ...
    [theme (pink label)]
        [issue (blue label)]
        [issue (blue label)]
        ...
    ...
[voice of the customer (green label)]
...

Figure 2.16 Affinity diagram developed with post-its (Source: N. Guimarães)


One way to formalise user requirements, developed by the computer science community, uses essential use cases. An essential use case defines a sequence of dialogues between the user and the system, expressed with user actions and system responses (see Figure 2.14). Preconditions and postconditions specify the specific context in which a sequence of pairwise actions occurs, e.g. a user can only buy a product online after a successful login.

Another approach, developed in the HCI field, uses an affinity diagram. An affinity diagram is a hierarchical representation of issues raised by the users during the contextual inquiry sessions, built bottom-up (see Figure 2.15 and Figure 2.16). The issues are grouped by affinity under labels that reveal the users' needs. Blue labels represent individual issues (e.g. users authenticate using login and logout features). Pink labels collect together a set of blue labels under a common theme (e.g. users have difficulties searching products online). Green labels summarise the pink labels under them, expressing what is known as the voice of the customer (e.g. users want to buy goods through the website).

The methods mentioned above document the user requirements in a simple way, basically adopting structured narrative. They do not use complex relationships and formalisms. However, such a simple approach may be more difficult to apply when the target system is technically complex and the project must follow stricter procedures and comply with standards used in fields such as healthcare, aviation, and industrial processes. In these cases, it is advisable to use more structured approaches, such as the use cases defined by the Unified Modelling Language (UML). We will not discuss these heavyweight approaches further.
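The bottom-up grouping of an affinity diagram maps naturally onto a nested structure: green labels group pink labels, which group blue labels. The sketch below is hypothetical; all issue and theme texts are invented for illustration.

```python
# Hypothetical sketch: an affinity diagram as nested dictionaries.
# Green labels (voice of the customer) group pink labels (themes),
# which group blue labels (issues). Content is invented for illustration.

affinity = {
    "users want to buy goods through the website": {     # green label
        "users have difficulties searching products": [   # pink label
            "search ignores plural forms",                # blue labels
            "filters reset after each search",
        ],
        "users authenticate with login/logout": [
            "password reset link is hard to find",
        ],
    },
}

# Bottom-up reading: count the issues gathered under each voice of the customer.
for voice, themes in affinity.items():
    issues = sum(len(v) for v in themes.values())
    print(f"{voice}: {issues} issues in {len(themes)} themes")
```

The hierarchy is built the other way around in practice: designers start from the blue-label issues on post-its and only then discover the pink and green groupings.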

One more note on requirements

Requirements specifications are becoming obsolete. They take too much time to develop, tend to be too formal, and fit poorly with agile practices. Techniques such as personas and storyboards have been substituting requirements specifications.

References
Beyer, H. and Holtzblatt, K. (1998). Contextual design: Defining customer-centered systems. San Francisco, CA: Morgan Kaufmann.
Constantine, L. (1995). Essential modeling: Use cases for user interfaces. Interactions, 2(2), pp. 34-46.

Personas

Defining user groups

Techniques such as contextual inquiry and ethnography allow gathering a significant amount of data about what users do: goals, activities, work context, problems, workarounds, etc. This data provides the initial bag of ideas and constraints/possibilities necessary to design a UI fit for purpose. However, to make this data useful for design, it has to be organised in a purposeful way. Personas provide a simple, popular vehicle for organising knowledge about users.

A persona is not an actual user; it is an archetypical user representation, which brings together in a coherent way several details about users that may be scattered throughout multiple sources of information. However, this imaginary person should not be completely devoid of personality, nor assembled from all the information sources in a way that generates a "jack of all trades, master of none". Instead, UX designers should develop personas that:
• Group users according to representative user groups
• Synthesise user data by aggregating goals and activities into representative roles
• Focus the design on a set of goals and work activities specifically targeted to a group of users
• Illustrate the personas' work activities in concrete contexts
• Enrich the personas with personality traits that bring realism and help the designers gain enthusiasm for the users

Types of personas

Essentially, we can identify two types of personas:
Ad hoc personas. Developed based on the designer's intuitions about the users.
Data-driven personas. Developed from research data such as observations, interviews, and user data sets.

Data-driven personas are better for design, as they can be properly justified. However, they are much more costly to obtain and develop.

Essential elements of personas

Even though personas can be defined in many ways, the following template tends to be generally adopted:
• A name, which helps personalising the main target for design

• A picture, which helps making the persona more real
• Some basic demographic information like age, location, type of education, and occupation
• A quote, which summarises the persona's major behavioural traits
• A brief summary about the persona including, for instance, job, job aspirations, lifestyle, and work relationships
• A brief description of the work context
• A list with key goals and needs
• A list with behaviours performed by the user
• A list with key design ideas necessary to support the persona
• A list with negative ideas, which would not be well received by the user
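The template above can also be kept as a structured record, which is handy when a team maintains several personas. The sketch below is hypothetical; all field values are invented, loosely inspired by the Arjun example discussed next.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the persona template as a record. All values
# below are invented for illustration.

@dataclass
class Persona:
    name: str
    picture: str            # path to an illustrative photo
    demographics: str
    quote: str
    summary: str
    work_context: str
    goals: list = field(default_factory=list)
    behaviours: list = field(default_factory=list)
    design_ideas: list = field(default_factory=list)
    negative_ideas: list = field(default_factory=list)

arjun = Persona(
    name="Arjun",
    picture="arjun.jpg",
    demographics="28, urban, business degree",
    quote="I want to know what is trendy",
    summary="Young professional who cycles to work",
    work_context="Mobile use, short interactions, short attention span",
    goals=["keep up with what is trendy"],
    behaviours=["checks his phone while commuting"],
    design_ideas=["stylish UI", "show CO2 saved by cycling"],
    negative_ideas=["long registration forms"],
)
print(arjun.name, "-", arjun.quote)
```

In practice personas are usually laid out visually (as in Figure 2.17) rather than stored as code; the point here is only that the template has a small, fixed set of elements.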


Figure 2.17 Example of a persona (Source: D. Inkster)

Figure 2.17 shows an example of a persona using this template. The persona was named Arjun, so that UX designers can build a more personal relationship when designing for the represented user group. You are designing for multiple users, but they are represented by Arjun. The picture gives a hint that the person is a young and active business professional (wearing a suit). The bicycle suggests that the UX designer should focus on mobile users (and mobile phones), which in turn will influence the type of functions (e.g. short interactions) and work scenarios (e.g. short attention span) of the whole UI. The quote emphasises that a particular contextual factor influencing this user is lifestyle. The designer should also take that into consideration, e.g. providing a stylish UI. A short list of key goals illustrates what the user wants to achieve. This is completely focussed on the user, not on the UI or the system. That is, Arjun does not want to press a button, select an option from a menu, or search the web. He wants to know what is trendy nowadays. The short list of behaviours identifies a more concrete set of actions that again extend beyond the UI towards daily life.


Figure 2.18 Example of a persona (Source: D. Inkster)

Note that the left part of the persona is mainly concerned with the person, not the system or the design. It gives hints about the user requirements, both from the perspective of goals and the perspective of actions necessary to achieve those goals. The right part of the persona is then focussed on the UI design and what the designer is expected to consider. It also identifies a set of things that the designer should avoid. For instance, since Arjun cares about the environment, the designer could consider adding a UI element showing how many kilograms of carbon dioxide the user has saved by using a bicycle instead of a car.

Note that even though the essential elements of personas are very simple, they can be very useful for UX designers. Designers can relate to the personas, understanding their needs and wants, while avoiding excessive attention to the technicalities of the system and the specific UI. Figure 2.18 gives another example of a persona.

Main goals of personas

A good persona is expected to fulfil certain goals. One such goal is focus:
• The interaction design should be focussed on serving the persona, even if that persona is imaginary. Later on, when using the actual UI, users may have different needs, but nevertheless they will be able to understand what specific user group the UI is trying to serve. A UI that tries to serve every user is necessarily vague
• The persona helps defining priorities for UX design. The needs of the selected personas should be served first, while other needs and wants, for instance suggested by clients, are relegated to a non-priority category

Another goal a good persona should fulfil is engagement:
• Design is a creative activity and therefore designers should be enthusiastic about what they are doing; this is easier if designers can understand and relate to users

One more goal is a global view:
• The consideration of the personas' needs and goals helps perceiving the big picture and designing for that picture in the best way possible

Finally, another goal a good persona should fulfil is communication:
• Designers often have to communicate with clients, users, marketing people and software developers. A persona provides a textual/visual instrument for information sharing, negotiation, and gathering support for the design

Primary, secondary and negative personas

Depending on priorities established by the UX designer, personas can be organised in three categories:
Primary personas. The main focus of design. Their goals and behaviours must be served at all costs.
Secondary personas. May be considered during design, but are not a priority. Often, secondary personas serve to add diversity to the UI, defining different pathways to accomplish the same goals, so that users can do things with more freedom.
Negative personas. Definitely not going to be served. They mainly serve to remind UX designers of what they should avoid doing.

It may seem strange to develop negative personas, but they can be useful. For example, consider the case of a group of designers working for a low-cost mobile phone operator. They wanted to be positioned as an alternative operator, mainly focussed on students. So they developed a negative persona that reflected the typical clients targeted by their competitors: business people, retirees, etc. The negative persona kept reminding them what not to do.

References
Cooper, A. (1999). The inmates are running the asylum: Why high-tech products drive us crazy and how to restore the sanity. Indianapolis, IN: Macmillan.
Marshall, R., Cook, S., Mitchell, V., Summerskill, S., Haines, V., Maguire, M., Sims, R., Gyi, D. and Case, K. (2015). Design and evaluation: End users, user datasets and personas. Applied Ergonomics, 46, pp. 311-317.

Storyboards

A storyboard is a graphical illustration of the interplay between a user and a system, using concepts from visual narrative. The idea was taken from the movie industry, where storyboards have long been used to illustrate what will happen in a movie during production and filming. They serve to sell the movie to investors, to conceptualise how the story will evolve, to explain the events to the actors, and also to plan the shootings.

Figure 2.19 Storyboard illustrating a customer service (Source: Rob Enslin / Flickr / CC)

Storyboards illustrate what users do through graphical "movie clips" which show the users' actions (Figure 2.19, Figure 2.20 and Figure 2.21). The power of storyboards is that they can be substitutes for the real system when communicating with the users. You can sit a user at a table and run through the storyboards while trying to understand what the user would like to do with the UI. A key factor to consider is that storyboards are easy and cheap to produce, and they can be used in the very early stages of design.

Figure 2.20 Storyboard example (Source: Mike Sansone / Foter / CC)


Conceptual Frameworks

Miles and Huberman, in the book "Qualitative Data Analysis", suggested the use of conceptual frameworks to frame a qualitative study. According to the authors, a conceptual framework is a collection of concepts that define and map what the study is about. It highlights which "bins" are likely to be in play in the study, which views/topics are most important, and what will be included and excluded. All in all, a conceptual framework is an invaluable window into the researcher's mind.

To some extent, user research can be regarded as a qualitative study: the users have to be identified; the work context has to be defined; data about the work practices has to be collected; some analytical work has to be done; and some synthesis is necessary. As such, it makes sense to use conceptual frameworks to structure the user research.

Figure 2.21 Storyboard illustrating a software configuration process (Source: D. Simões)

Of course the proposed parallelism has some limitations. For instance, qualitative studies usually operationalise the research problems using a set of variables, while design studies are usually not that explicit. Qualitative studies also follow accepted methodologies, while design projects usually do not. Nevertheless, the value brought by conceptual frameworks in revealing the researcher's mind may also contribute to documenting the designer's mind.

Miles and Huberman identified three types of conceptual frameworks, organised according to the progressive understanding that naturally occurs as a study evolves.

Exploratory framework. Developed at the beginning of the study. Presents a rudimentary view of the study, mainly highlighting what will and will not be studied. It just identifies a set of concepts and informal relationships between them. It focusses and bounds the study to something, but in an exploratory way.

Intermediate framework. Developed in the middle of the study. Refines the exploratory description by eliminating elements that emerge as irrelevant during the study. Furthermore, as better knowledge of the requirements and constraints arises, it provides more detailed concepts and more precise relationships between them.

Confirmatory framework. Developed by the end of the study. Explains the final outcomes of the study. It articulates the concepts with more precise relationships.

Figure 2.22 Conceptual framework developed on a whiteboard (Source: N. Guimarães)


Conceptual frameworks in user research

We argue that conceptual frameworks can be very useful to document user research. In this particular context we define a conceptual framework as a semi-structured diagram using a combination of text and drawings, elucidating how the designer views a given problem and aims to develop a solution. A conceptual framework may identify types of users and user contexts, as well as many other elements such as design issues, concepts, variables, key factors, opportunities, strategies, tasks, questions, options, choices, etc. Furthermore, two other structuring elements may be used (see the example in Figure 2.22):

• Links - Establishing relationships between elements. Links can have directed or undirected arrows. The meaning of a directed arrow between elements A and B is that A influences B. Undirected arrows indicate the relationship is exploratory
• Boxes - Grouping related elements. Boxes help structuring the conceptual framework. They may either group concepts through thematic affinity, or group them under a more abstract concept

Figure 2.23 Exploratory framework for a mobile application supporting firefighters. A characteristic of these frameworks is vagueness in relationships (Source: Reproduction by the author)

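Elements, links and boxes can also be kept in a lightweight machine-readable form alongside the whiteboard diagram. The sketch below is hypothetical: the element names are loosely taken from the firefighting example, but the specific links are invented for illustration.

```python
# Hypothetical sketch: a conceptual framework stored as elements plus
# typed links. A "directed" link from A to B means A influences B; an
# "undirected" link marks an exploratory relationship, as described above.
# Element names loosely follow the firefighting example; links are invented.

elements = {"competencies", "procedures", "organisation",
            "resources", "risks", "efficacy", "quality of service"}

links = [
    ("competencies", "efficacy", "directed"),    # competencies influence efficacy
    ("resources", "risks", "undirected"),        # exploratory relationship
]

# Boxes group related elements under a more abstract concept.
boxes = {"firefighters": {"competencies", "procedures", "organisation"}}

# Simple consistency check: every link endpoint must be a known element.
for a, b, kind in links:
    assert a in elements and b in elements
    assert kind in ("directed", "undirected")
print("framework is consistent")
```

A structure like this makes it easy to evolve an exploratory framework into an intermediate one: undirected links get replaced by directed ones as the study clarifies which influences actually hold.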

Example - Mobile application for firefighters

Figure 2.23 shows a conceptual framework developed for a project aiming to develop a mobile application for firefighters. This is an exploratory framework (as suggested by the vague relationships between elements). First, observe how the framework puts together a set of elements related to the firefighters: competencies, procedures and organisation. Then note how it also integrates looser elements such as resources and risks. Finally, consider how the designer brings in a particular design perspective by adding elements like efficacy and quality of service.

The exploratory framework is not centred on the mobile application per se, but on the complex relationships between the firefighters and the mobile application. Analyse how the designer organised all concepts and links relevant to the user research. In particular, the links do not indicate clear influences but possibilities worth studying. The diagram suggests that the designer is mainly concerned with a set of social forces: for instance, competencies, risks, resources, procedures, and organisational issues such as coordination and communication. It also shows that efficacy and quality are particularly important to firefighting, and therefore also important to the mobile application.

Again, this example highlights that exploratory frameworks document what the designers were thinking about when initially approaching the user research. They document the initial viewpoints about a project, which would otherwise be lost.

Figure 2.24 Intermediate framework for a system supporting drug prescriptions. It establishes more concrete relationships (Source: Reproduction by the author)


Example - Application for helping elderly people

Figure 2.24 shows an intermediate framework for a project aiming to develop an application to help elderly people deal with prescriptions. Observe how the direction of influence in the links suggests this is an intermediate framework, as it provides more purposeful and structured information about the user research. Furthermore, the framework already refers to some aspects of the application's UI design. For instance, it shows that auditory and visual alerts about prescriptions will be used. It also shows that the application will use medical records.

References
Antunes, P., Xiao, L. and Pino, J. (2014). Assessing the impact of educational differences in HCI design practice. International Journal of Technology and Design Education, 24(3), pp. 317-335.
Miles, M. and Huberman, M. (1994). Qualitative data analysis: An expanded sourcebook. London: Sage.

Chapter 3

Structural Design

Mental Models

In a broad perspective, we build mental models about everything we interact with: how to walk, how to open a door, how to participate in a meeting, how to drive a car, etc. In the UX context, a mental model is the model that people construct of the system they interact with through the UI. Mental models allow us to make predictions about how things work, often in a seamless way. For instance, we do not worry much about how to walk. That model was acquired in childhood and reused multiple times in different contexts, e.g. walking on concrete, sand, mud, etc. Curiously, we may have to revise the model in some circumstances, such as temporarily walking with a broken leg. You will have to unlearn a few things and learn some new ones, such as using crutches.

Expectations

Mental models allow us to infer how to use a system through a recognisable UI. Consider for instance the pointing device. What is the mental model? You move a physical mouse and a pointer on the screen moves accordingly. After some initial tuning, you get used to moving the pointer on the screen easily and precisely by moving the physical mouse. That mental model has low complexity, is easy to learn, and becomes pervasive after a while. We then learn from experience how to use other UI objects like buttons, menus, text boxes and drop-down menus. Furthermore, more complex mental models, such as login/logout, searching the web, setting up a user account, submitting a form, and paying a bill through electronic banking, can also be learned from accumulated experience.

Mental models and design

Mental models are developed by users through repeated system use. Again, it is a dual relationship between the user and the system. However, UX designers cannot be completely oblivious to their indirect role in the process. Design decisions have a significant impact on how users develop mental models. To start with, trying to understand what mental models of a system the users may develop gives important insights about the system's complexity, user attrition, and potential failures. When a system's mental model is too complex to build, we should expect that users will have problems interacting with it. The blame should be assigned to the UX designer, for not thinking about the problem at design time.

Designers can proactively influence the users' mental model. Shaping an appropriate mental model for a system through the UI is the highest-level goal of a UX designer, and perhaps the most important one. Check the interesting example illustrated in Figure 3.1 and Figure 3.2. The figures show a water dispenser with an unusual feature: it counts the number of times it has been used and suggests that each refill corresponds to eliminating the waste of one bottle. This design feature certainly makes the user think about waste and conservation while using the water dispenser. It builds a mental model.

Figure 3.1 Water dispenser (Source: Author)

! Figure 3.2 Fostering conservation through UI design (Source: Author)

!

Complexity The design of a UI for a complex system is particularly challenging. Complex systems naturally require complex layouts and complex interactions. They often combine numerous UI objects in multiple ways, using different layers of detail, which users are expected to learn progressively. Therefore, the designer should explicitly consider ways to help users build mental models of complex systems.

Training and instructions Mental models can of course be built through training and instructions. Two examples: driving and flying. You need training and have to pass exams to get a licence. However, training and instructions are much weaker approaches to the problem than familiarity and intuition.

Familiarity and intuition When selecting a layout, combining UI objects, and defining the navigational features, the designer should consider how familiar the users are with what is offered to them. The adoption of unfamiliar interaction mechanisms is problematic because users may have trouble when first approaching the UI. For consumer products, this first approach is critical: if users find a UI too difficult to use, they will give up. Perhaps one of the great achievements of Apple has been the democratisation of technology through intuitive interaction. Apple’s devices are sold with minimalistic to non-existent user manuals, because people intuitively know how to use them out of the box. This is much unlike TVs, which come with long and complex user manuals. Figure 3.3 iPod’s simple mental model: Menu, play, back and forward (Source: Author)

!

A good example of familiarity and intuition can be found in Apple’s iPod (Figure 3.3). You do not need an instruction manual to understand that you can play/pause music, skip back and forward, and go to the menu for any other option. Another good example of intuitive interaction was Apple’s “slide to unlock”: you would see a button saying “slide to unlock”, showing an arrow pointing right; you would slide your finger over the button to the right and, voilà, you would have unlocked the phone.

Culture You should consider that mental models emerge and evolve over time as cultural phenomena. For instance, the phone mental model has changed significantly through time. In the early days, you did not have to dial; you would simply pick up the phone and tell the human operator who you would like to call (Figure 3.4). Later on, the operators were eliminated and users had to learn phone numbers and how to dial the phone (Figure 3.5). Nowadays, we have almost eliminated phone numbers from the process again. Figure 3.4 This phone did not have the concept of dialling. You would pick up the phone and say who you would like to talk to (Source: kpirat / Chairs Hunt / CC)

! Figure 3.5 This phone requires dialling (Source: Internet Archive Book Images / Foter / CC)

!

Convention Figure 3.6 brings back a previous example of mental model breakdown. It shows an elevator’s panel. Now, suppose you would like to go up; where do you look? Your mental model tells you to look up. Does the panel reflect that model? Not in this case. For instance, if you want to go to level 10, you will find the button down, not up. The designer decided that the buttons would go up by columns, but when you reach the top of a column, you have to come down again. This breaks the conventional mental model that tells us that to go up you should look up for the button. If the buttons were organised by rows instead of columns, the mental model would not have been broken (Figure 3.7). Another good example of mental model breakdown is the digital watch (Figure 3.8). Watches usually have four buttons. Which button do you use to change the date/time? There is no convention and each watch uses a different button, so you will never build a mental model of how to use the buttons. Changing the date/time will always require exploring how to do it on a particular watch. Figure 3.6 Press to level 10: A case where to go up you have to look down, which breaks the mental model (Source: Author)

!

Figure 3.7 Press to level 10: A case where to go up you look up, which follows the mental model (Source: Author)

! An interesting point to consider in the digital watch example is that in this case familiarity is not very useful. Even though you may know that you have to press a button that will flash a couple of digits, you never know which button works on a particular watch. This example shows that mental models are built on top of conventions.

Consistency A good example of lack of consistency is shown in Figure 3.9. Check the user instructions shown in the middle. Besides being overly complex and verbose, they require the user to go through five steps that use disparate UI elements placed at the right, at the bottom and elsewhere. How can users build a mental model of what has to be done with such lack of consistency?

Figure 3.8 What is the purpose of each button in the digital watch? (Source: Author)

!

!

Figure 3.9 Check the user instructions of this system. You need to do ALL steps!

When users give up If we go back to the phone example, it is interesting to observe that nowadays phones have become so complicated that they are impossible to use in their entirety. Do you know how to hold a call on a phone, forward a call, or set up a conference call? Most people do not know. Some people do not even care. Figure 3.10 shows the phone that sits on the author’s desk. The purpose of most buttons is unknown. Of course the author has a user manual but refuses to read it. Do you see the post-it in Figure 3.10? It is related to mental models. To call an outside line, the user has to dial 1 first, and the post-it is there as a reminder. Shouldn’t the user know that? Not really. You see, in many places you dial 0 instead of 1. So this phone has actually broken a prior mental model. Figure 3.10 How to dial outside? The post-it is there to remind the user to dial 1 (Source: Author)

! The post-it is a reminder that the prior mental model is broken. You may argue that the user should learn to use 1 instead of 0, which should not be difficult. But that misses the point: the user does not find any value in learning to use 1 or 0. The mental model is broken and there is no way to repair that tragedy. You see, the knowledge that “in some places you dial 1 and in others you dial 0” is not very useful.

When to fight back Over time, mental models may have to evolve, which means fighting the users. For example, many mobile phone users resisted the move to smartphones because they were too complex. But how long will people resist? Simple technologies tend to be replaced by more complex technologies over time. Thus people will have to engage with increasingly complex systems and learn how to use them. Peer pressure is a factor to consider when fighting back. The smartphone war was won by peer pressure: nobody wants to be singled out as the last adopter. With persistence and repeated use, a mental model may become pervasive. However, we can probably agree that this should only happen with good mental models. The bad ones should disappear.

Spatial Structure Users have a spatial relationship with the systems they use. For instance, when using a contacts application, they can go to places like the “contacts list” and the “contact detail”. When using a photo editing tool, they go to places like the “library”, the “albums”, “years”, and the “photo enhancer”. Each of these places has a particular identity and provides a particular set of functions. Furthermore, users can navigate between places. For instance, in the photo editing tool, users can move between the library, albums and individual photos. Figure 3.11 An architectural plan gives a strategic view of a building. You can understand how the building is divided into different spaces (rooms), the functions assigned to them (e.g. bathing and sleeping), and how users flow between spaces (hallways). A UI should be designed the same way (Source: cocoparisienne / CC0)

! Discussing the spatial structure of a UI is very much like discussing an architectural plan for a house (Figure 3.11): it gives a clear idea about the floor plan, so that users know what spaces will be available, whether they are more open or closed, how people flow between them, how the house relates with the landscape, etc. The architectural plan does not give many details about concrete design features, such as lighting, plumbing, materials, etc. That is good because those nitty-gritty details are usually not as critical for the users as the spatial structure. For instance, how the house relates to sunlight is more essential than plumbing. Plumbing can always be changed afterwards, while sunlight is defined once and forever. The same reasoning applies to UX design. Spatial structure is difficult because it is hard to change later if something is wrong. It is easy to change a piece of text, a button or a menu, or even add new functionality to a photo editor. However, it is much more difficult to change the underpinning structure: that a photo tool is organised around a library, albums and individual photos.

Thinking about applications in terms of space A simple example of spatial structure is given in Figure 3.12, Figure 3.13 and Figure 3.14. It shows that the Keynote slide editor is structured as a collection of different spaces, one that deals with theme selection and others that deal with slide editing and printing. Since the tool is very complex, there are other spaces not shown here, such as the one that deals with master slides. Figure 3.12 Keynote: Theme selection space

! Users can build mental models of complex tools like Keynote by realising the distinctive features and arrangements provided by the different spaces, and understanding how to navigate between them. Symmetrically, UX designers contribute to developing mental models by providing adequate structural design.

Figure 3.13 Keynote: Slide edit space

! Figure 3.14 Keynote: Print space

!

Structural issues When doing structural design, the UX designer should avoid some structures that are usually problematic for users. Figure 3.15 illustrates a case where the structure is strictly hierarchical: a space links to a space below, which links to another space below.

The main problem with this structure is that, even though it may be logical, it makes the work flow very rigid, requiring users to go up and down the hierarchy to execute functions in different spaces. For instance, a user who is in space A and would like to execute a function in space B will have to go up and down until reaching B. Figure 3.15 Structural problem: Too much effort to go from A to B

!
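The effort of moving around a strictly hierarchical structure can be made concrete with a small sketch. This is purely illustrative: the function and the space names are hypothetical, not part of any UED notation. It counts how many links a user must traverse to get from A to B when the only links connect a space to its parent or children:

```python
# Hypothetical sketch: navigation effort in a strictly hierarchical
# structure, where the only links connect a space to its parent/child.

def hops_via_hierarchy(path_a, path_b):
    """Count moves from space A to space B. Each path lists the space
    names from the root down to the space itself."""
    # Depth of the deepest common ancestor (length of the shared prefix).
    common = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        common += 1
    # Go up from A to the common ancestor, then down to B.
    return (len(path_a) - common) + (len(path_b) - common)

# Illustrative paths: A and B sit three levels deep in different branches.
path_a = ["root", "reports", "monthly", "A"]
path_b = ["root", "settings", "users", "B"]
print(hops_via_hierarchy(path_a, path_b))  # 6 moves: 3 up, then 3 down
```

A direct link between A and B would cost a single move; the sketch just makes visible how quickly the cost grows with depth in a purely hierarchical structure.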

Figure 3.16 shows an example that is often found in many systems, where work is divided into several consecutive spaces. Again, there may be a logical reason for organising the spaces this way. But again it takes too much effort to go from A to B. Furthermore, if the spaces between A and B do not have relevant functionality, users will not like the experience. Spaces devoid of functionality are not really memorable for users. Figure 3.16 Structural problem: Too much effort to go from A to B

Figure 3.17 illustrates an example where a system is structured around a central space surrounded by many other spaces. The central space can be seen as a gateway to the other spaces. The main problem with this approach is that users may find it difficult to build a mental model of the system, since it does not have any clear pathways.

Figure 3.17 Another structural problem: Focal spaces that are just gateways to other spaces

!

References Beyer, H. and Holtzblatt, K., 1997. Contextual Design: Defining Customer-Centered Systems. Morgan Kaufmann.

UED Structural design cannot exist only in the designer’s mind. It has to be materialised into an artefact. The purpose of the UED (User Environment Design) is to define the overall structure of a UI. By definition, the UED is abstract and strategic, highlighting the “spatial” arrangement of the UI.

Structural elements There are four elements to consider regarding UI structure (Figure 3.18).
• Focal areas. They are like rooms in an architectural plan for a house. They define the different places where work can be done.
• Functions. They are defined inside focal areas. They define what can be done in each area.
• Objects. They are defined inside focal areas. They define what information is manipulated in each area.
• Links. They relate two or more focal areas. They describe how users navigate between focal areas to do their work.
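These four elements can be captured as plain data. The sketch below is our own illustrative encoding, not Holtzblatt’s notation; the class and field names are hypothetical:

```python
# Illustrative encoding of the four UED structural elements as plain data.
from dataclasses import dataclass, field

@dataclass
class FocalArea:
    """One 'room' of the UED: a place where work can be done."""
    name: str
    functions: list = field(default_factory=list)  # what can be done here
    objects: list = field(default_factory=list)    # information manipulated here
    links: list = field(default_factory=list)      # focal areas reachable from here

# A hypothetical focal area for a photo editing tool.
library = FocalArea(
    name="Library",
    functions=["browse photos", "search"],
    objects=["photo", "album"],
    links=["Albums", "Photo Enhancer"],
)
print(library.name, "links to", library.links)
```

Writing the structure down this way forces the designer to be explicit about what belongs in each area and which areas connect, which is precisely the strategic view the UED is meant to give.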

!

Figure 3.18 Focal area

Figure 3.19 UED diagram for a slide editor

!

Two types of links can be considered: one that permanently moves the user to another focal area, and another that temporarily switches work to a focal area but switches back to the original focal area as soon as the user completes some activity. Figure 3.18 shows how a focal area is defined. Figure 3.19 shows a UED diagram for a slide editor like Keynote and PowerPoint. The represented editor has four focal areas: edit slide, edit slide sorter, edit notes (not shown in Figure 3.19), and print. Users can move between the slide and the slide sorter. From both spaces, they can also link to print. However, print is a temporary focal area: users can do some work there but when done they return to the original area.
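The behavioural difference between the two link types can be sketched as a tiny navigator, where a temporary link remembers where the user came from. This is an illustrative model, not an implementation of any particular tool; the class and area names are made up:

```python
class Navigator:
    """Minimal model of UED navigation with two link types."""

    def __init__(self, start):
        self.current = start
        self._origins = []          # where temporary excursions return to

    def go(self, area):
        """Permanent link: the user simply moves to another focal area."""
        self.current = area

    def visit(self, area):
        """Temporary link: remember the origin before switching."""
        self._origins.append(self.current)
        self.current = area

    def done(self):
        """The activity is complete: return to the origin automatically."""
        self.current = self._origins.pop()

# A hypothetical walk through the slide editor of Figure 3.19.
nav = Navigator("Edit Slide")
nav.visit("Print")               # temporary focal area
nav.done()                       # back in "Edit Slide"
nav.go("Edit Slide Sorter")      # permanent move
print(nav.current)               # Edit Slide Sorter
```

The stack in `_origins` is what makes a link temporary: the system, not the user, is responsible for finding the way back.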

References Holtzblatt, K., Wendell, J. and Wood, S., 2004. Rapid Contextual Design: A How-to Guide to Key Techniques for User-Centered Design. Elsevier.

Chapter 4

Layout Design Psychology of Visual Perception Some people think that UX design is about aesthetics. Certainly, an element of design is deciding how a UI should look. The problem with discussing aesthetics, though, is that it is a cultural phenomenon. Each culture will appreciate a design differently, which creates a challenge for designers: which interest group are you going to please? One way out of this trap is to regard aesthetics not as a matter of culture but as a matter of cognitive science. In a cognitive perspective, the appreciation of design is based on the users’ perception, especially visual perception. Figure 4.1 The visual perception of objects can be ambiguous (Source: Public domain)

! For humans, visual perception is an acquired yet almost innate capacity. At very early stages of our intellectual growth we learn how to make sense of the physical world surrounding us, which includes perceiving objects, shapes, colours, contours, and layouts (Figure 4.1). As we start interacting with the physical world we also start perceiving more about our relationship with its objects, using attributes such as distance, movement, speed, depth of field, etc. It is only logical that, to decide how to lay out a UI, designers consider the way users perceive the UI as an object belonging to the physical world. The principles of Gestalt help us understand this relationship. They seek to codify a set of mechanisms we develop to handle visual perception. Understanding these principles can only be useful for UX design, though it should be noted that cognitive science has advanced knowledge significantly beyond what Gestalt can offer us today. The Nobel prize winner Daniel Kahneman, in the book “Thinking, Fast and Slow”, suggests that our brain operates with two decision-making modes: one that relies on intuition, instinct and emotion, and therefore makes fast decisions; and another that is more logical and inquisitive, and therefore operates more slowly. The principles of Gestalt concern fast thinking. They explain cognition from a subconscious perspective, which relies on certain decision-making automatisms. In this chapter we discuss some of these automatisms.

References Kahneman, D., 2011. Thinking, Fast and Slow. Macmillan.

Figure and Ground Think of yourself as a primitive human hunter pursuing an animal in the jungle. How can you visually distinguish the animal from the surrounding environment? Somehow we need a perceptual rule that tells us there is a distinction between the animal (figure) and the jungle (ground). Hunters certainly need this rule, otherwise they would starve to death. Of course this rule is probabilistic. In a particular context, our brain will take a chance and tell us that a certain object stands out from the background. Figure 4.2 See how the lion’s head mixes with the background. A split second of indecision may be enough to either escape the hunter or catch the prey (Source: Pixabay / CC0)

! Actually, in their own fight for survival, some animals like lions, tigers and zebras are very good at subverting this principle, using optical disguises that make it difficult to differentiate figure from ground (see Figure 4.2). The classic example shown in Figure 4.3 illustrates the dilemma of deciding between figure and ground. Sometimes you see a goblet, other times you see two faces. The brain wanders from one option to the other and the problem is that both seem viable. What are the implications of the figure and ground rule for UI layout? An important one is that you should never make it difficult for users to distinguish UI objects from the background. Buttons, text boxes, menus and any other objects should be perfectly distinguishable. As in the hunter’s example, where a split second decides whether the prey is caught, it may be just a split second making the difference between a good and a bad UI. A UI with bad figure and ground perception may require too much effort and thus push users to go somewhere else, while good figure and ground perception may effortlessly attract users. Figure 4.3 What is figure and what is ground? Face or goblet? (Source: Public domain)

! Figure 4.4 Inadequate use of background picture: Buttons and labels are hard to discern from the background

! Figure 4.5 A button that stands out from the background

! Analyse Figure 4.4. Do you think it is easy to distinguish the buttons and labels from the background? The background image makes it very difficult. The implications for UI layout are clear: 1) always consider that users will subconsciously try to distinguish figure from ground; 2) make an explicit decision about which UI objects should stand out as figure and which ones should be perceived as background. Figure 4.5 shows two buttons. One was configured to stand out from the background using a strong shadow effect, while the other was configured to be almost indistinguishable from the background. They cause different perceptual impacts and therefore should be used to achieve different effects. One is shouting at you; the other is quiet. Consider that figure should be used where a UI object requires immediate attention, while ground is best used where a UI object can be ignored. We note, however, that in some cases experienced designers may increase ambiguity to make a UI more stimulating and playful. Shadows, colour, line thickness, placement, and fonts may all be used to define figure and ground.

References Hoffman, D., 2000. Visual intelligence: How we create what we see. WW Norton & Company.

Visual Constructions Consider again our primitive human hunter trying to find prey in the jungle. As the hunter looks around, how is the scene visually constructed? Objects can be extremely noisy and ambiguous, with trees and tangled vegetation between the hunter and the prey. Will objects be interpreted as vegetation, animals or an infinite number of other possibilities? The sole process of spending time analysing the different possibilities would reduce the chances the hunter has of catching the prey. The best approach is to trade precision for speed. The brain makes a probabilistic decision in a split second, and in a subconscious way. Figure 4.6 These shapes are perceived by a bird as “mom”

! Figure 4.7 These shapes are perceived as “predator”

! Consider another example described by Donald Hoffman in the book “Visual Intelligence”. For a bird in the nest trying to identify an approaching object, the shape shown in Figure 4.6 is enough to identify it as “mom”, while the shape shown in Figure 4.7 will be seen as “predator”. Again, this type of fast decision is useful for survival. Our cognitive resources are usually applied to interpret visual constructions with great consideration for time. But to speed up decisions, our brain has to take some shortcuts. Whatever makes more sense is decided, without spending precious time analysing an infinite number of possibilities.

Therefore, we tend to interpret visual objects in “good and useful” ways, a principle known as Pragnanz. The word “good” is somewhat ambiguous but can be interpreted as simple, familiar, stable, and regular. The use of the word “useful” is less ambiguous. It alludes to the fact that in our natural world, we tend to privilege interaction with useful objects. Analyse Figure 4.8. Do you see the magic square? It is not really there, but your brain does not like the odds. It prefers seeing a square instead of a set of sticks. In the natural world, the probability of finding a set of sticks exactly placed like these ones is really low. So the brain gambles on the most probable visual construction: a square. Figure 4.8 The magic square: It’s not there, but is constructed by our mind

! Figure 4.9 The face on Mars? (Source: Courtesy NASA/JPL-Caltech)

! This also explains why our brain insists on seeing the face on Mars (Figure 4.9), which does not make much sense if we think rationally. But you still see the face, don’t you? Well, we are used to interacting with lots of faces, much more than with awkward rock formations that look like faces. What are the implications for UI layout? One is that users, when perceptually processing a UI, will subconsciously identify visual constructions they can recognise as good and useful. You should privilege simplicity, familiarity, stability, and regularity, while avoiding ambiguity and awkwardness. Consider for instance the two menu layouts shown in Figure 4.10 and Figure 4.11. They are very similar but lead to very different visual constructions. One establishes a clear relationship of dependency between menu and submenu, while the other seems a bit awkward. Are the two menus related? Why are they positioned with so much distance between them? In summary, the layout in Figure 4.10 is familiar while the layout in Figure 4.11 is not. Figure 4.10 Visual construction: We see a menu with a submenu

! Figure 4.11 Visual construction: What is the relationship between these menus?

!

In Gestalt theory, Pragnanz is an umbrella for more detailed rules of visual construction. Some of these rules are described in the following sections.

References Hoffman, D., 2000. Visual intelligence: How we create what we see. WW Norton & Company.

Grouping The following principles, which concern the grouping of visual objects, are particularly important for understanding UI layout.

Proximity Visual objects that are close together are perceived to be more related than objects that are farther apart. This principle helps us process complex information using a fast-thinking strategy that first groups visual objects together and only then gives attention to what is inside each group. Figure 4.12 Proximity: You see three columns instead of four

! Figure 4.13 The proximity rule also applies to text. The text at the bottom is harder to read

! In the jungle, our hunter would find it easier to scan groups of animals and trees rather than one animal or tree at a time. The decision to hunt an animal would be made faster. Check Figure 4.12. Do you see eight rectangles or three groups of rectangles? Do you see four columns or two lines? The horizontal spacing determines that we group the first two columns into one group, while the other two columns form two independent groups, since they are farther away. The differences in vertical and horizontal spacing determine that we tend to see one line instead of two lines. As shown in Figure 4.13, the proximity principle also applies to text spacing. With too much spacing, we start reading character by character, instead of word by word, which increases the reading effort significantly. Proximity has a significant impact on UI layout. By deciding the distance between objects, the designer tells the user how to group them when scanning the UI. And if objects are perceived as belonging to the same group, then it is only natural that they are related in some way.
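The fast grouping that proximity provides can be imitated with a simple gap rule: neighbours closer than some threshold fuse into one group. The sketch below is illustrative only; the positions and the threshold are invented to echo the column arrangement of Figure 4.12:

```python
def group_by_proximity(xs, max_gap):
    """Split sorted x-positions into perceived groups: a new group
    starts wherever the gap to the previous item exceeds max_gap."""
    groups = [[xs[0]]]
    for prev, cur in zip(xs, xs[1:]):
        if cur - prev > max_gap:
            groups.append([cur])     # large gap: perceived as a new group
        else:
            groups[-1].append(cur)   # small gap: same group
    return groups

# Four columns of rectangles: the first two are close together,
# the other two stand apart, so we perceive three groups.
columns = [0, 40, 140, 260]
print(group_by_proximity(columns, max_gap=60))  # [[0, 40], [140], [260]]
```

The designer controls the left-hand side of this rule: by choosing the spacing between UI objects, the designer chooses the groups the user will perceive.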

In other words, proximity provides a fast decision-making mechanism for understanding the conceptual structure of a UI. An obvious consequence of this principle is that you should not put together UI objects that have no relationship between them. A concrete application of the proximity principle to UI layout is related to the headers and footers found in most websites. See in Figure 4.14 how the distance between the buttons placed at the top creates some unity, so that the five buttons are perceived as a header. The same applies to the four labels at the bottom forming a footer. Users will intuitively detach the header and footer from the UI and realise they are structurally distinct from the other parts of the UI. Figure 4.14 Using proximity to create headers and footers

! Figure 4.15 Shape similarity: You see four columns instead of two lines

!

!

Figure 4.16 Size similarity: You see two groups of buttons

The use of proximity helps create structural patterns. Patterns can and should be used to suggest content-based and function-based relationships in the UI.

Similarity As with proximity, visual objects that are similar are perceived to be more related than objects that are dissimilar. Once again, this principle helps us process complex information using a fast-thinking strategy that emphasises perceiving groups of objects rather than individual objects. The similarity principle can rely on shape, size and colour. Observe in Figure 4.15 that you tend to group rectangles with rectangles and circles with circles, so that you predominantly see four columns instead of two lines. Observe in Figure 4.16 that you tend to perceive two groups of buttons instead of seven independent buttons. Once again, the implications for UI layout are significant. As with proximity, similarity helps users perceive structural patterns. Figure 4.17 Similarity: Colour is used to categorise buttons in two functional groups

! The decision to design two buttons with the same shape, colour or size is a strong indication that they are functionally related. Symmetrically, the decision to adopt different shapes, colours or sizes is a strong indication that they are not related. The example shown in Figure 4.17 suggests to the user that the four buttons belong to two different functional groups.

Closure As noted when discussing good figure, we tend to visually scan the environment for familiar objects. If a part of an object is missing, we will intuitively complete the object to ensure the regularity of the physical world. Figure 4.18 Closure: You see one line, not two

! Figure 4.19 Closure: You see a rectangle, not 5 segments

!

In our jungle scenario, if the hunter objectively sees only an animal’s head, the picture will be subjectively completed with the animal’s full body. In the physical world, the probability that the unseen parts are really there is very high, and so the mind gambles towards what is more probable. It would be foolish to spend time thinking about the slight possibility that maybe only the head is there and the body is missing. The hunter would lose the prey and starve to death thinking that way. The closure principle says that we perceive visual objects as singular and recognisable shapes, rather than individual elements such as lines, curves and corners. Once again, this fast-thinking strategy helps reduce complexity by seeking simplicity and regularity. Figure 4.18 and Figure 4.19 give two simple examples of closure. In the first one, we tend to see a single line instead of two lines. The missing part in the middle is completed by our brain. The same happens in the second case, where we tend to see a rectangle instead of 5 segments and 4 corners. In both cases, our perception seeks simplicity and regularity. In our physical world, lines tend to be continuous and shapes tend to be regular. The probability of finding a rectangular shape is much higher than the probability of finding an object that looks almost like a rectangle but has one side broken in the middle. Closure is also very important for UI layout. The principle tells the designer that users will look for high-order regularities and are capable of filling in the gaps. Such high-order regularities can then be used to organise the UI. In particular, they can be used to reduce the complexity of a UI. Figure 4.20 Using closure in UX design: The rectangular shapes on the right improve structure

! Consider the example shown in Figure 4.20. The buttons on the left are perceived as dispersed and unrelated. Furthermore, the perceptual effort in scanning the UI is high, since the user has to visually scan each element one by one. The buttons on the right are surrounded by two rectangular shapes that stand out from the background. Perceptually, users will consider that the individual buttons are part of a better organised whole.

The closure principle determines that the rectangular shapes reduce clutter and give regularity to the UI, now perceived as two groups of visual objects. There is less cognitive effort in interpreting the UI layout because of the rectangular shapes. As many UIs can easily become very complex, closure helps in designing more coherent and meaningful interpretations of the UI.

Continuation The principle of continuation (or good continuation) says that we visually scan shapes in a continuous way. Consider Figure 4.21 as an example. When scanning the shapes, you do not do it in piecemeal units, like scanning the four extremes of the line and curve and then the crossing. Instead, you start somewhere and then keep scanning in a continuous way. Suppose you start scanning Figure 4.21 at the top-left corner. You will then follow the line in a continuous way until you arrive at the crossing. Because of the principle of continuation, you will keep scanning the line in a continuous way towards the image’s bottom-right corner, instead of deflecting up or left. That explains why in Figure 4.21 you see one line and one curve instead of four shorter segments. Figure 4.21 Continuation: We tend to scan the shapes in a continuous way, which means we see two lines and not four segments

! Once again, this principle suggests that things in the natural world most likely have a certain continuity, and therefore our fast-thinking strategy is to give primacy to continuity when grouping objects. In the natural world, you would draw the shapes shown in Figure 4.21 in two continuous pen movements, not four discontinuous strokes. In relation to UI layout, the continuation principle suggests that users scan the UI by following continuous lines and curves. If the designer arranges the UI objects according to a linear or curvilinear layout, users will perceive the aligned UI objects as related.

Figure 4.22 Designing for continuation: Which is the best design option?

! This has some subtle consequences. Consider the three design options shown in Figure 4.22. Which one do you think is best? The option shown at the top is the best one, because readers will see the search button and move on to the text box. The natural relationship between the search button and the text box is implicitly emphasised through continuation. After all, search buttons always need text boxes. The example shown in the middle is not as good, as the purpose of the text box is unclear: a text box does not necessarily have a relationship with search. So you would not immediately perceive the purpose of the text box. Figure 4.23 Ancient Chinese was read from top to bottom and right to left (Source: Author)

Finally, the option shown at the bottom is also not that good. According to good continuation, users scan in a continuous way, and since the scan started from left to right, to read the "Search" text, placing the text box below creates a discontinuity.

Another aspect of continuation is related to reading. Written communication is very important in our lives, and every reader necessarily acquires significant practice in scanning text. With very few exceptions, like Ancient Chinese (Figure 4.23), we read from left to right and top to bottom in a continuous way (even though our eyes move around in saccades). Thus there is good reason to assume that when we scan a UI we give some primacy to this scanning pattern.

Figure 4.24 Lack of closure in website design: The relationships between text and pictures are ambiguous




Impact on layout
Understanding the grouping principles is important for good UI layout. By controlling proximity, similarity, closure, and continuation, designers can define layout patterns that simplify visual construction and remove ambiguity from the UI. A couple of examples illustrate the importance of this issue. In Figure 4.24, we show a web page that does not use closure to lay out the UI objects. As a result, the relationships between the text and pictures are ambiguous, and the user will have to spend time establishing them.

Figure 4.25 Inadequate grouping in website design: Boxes were used to group text and pictures (closure); however, the lack of similarity and continuation still makes it difficult to perceive the page's structure

In Figure 4.25, we show another page from the same website. In this case, text and pictures were enclosed with boxes, which removes the ambiguity. However, because the boxes have different sizes (inadequate use of similarity) and because there is no obvious way to scan the pictures (lack of continuation), it is still very difficult to perceive the page's structure.

Figure 4.26 Adequate grouping in Airbnb using closure, similarity and proximity


A good example of grouping is shown in Figure 4.26. The different search topics have been arranged in groups using various visual elements such as rounded rectangles, colour and vertical bars. The groups are separated in a way that also helps clarify the UI layout. The UI elements inside each box also use the proximity principle to reduce ambiguity. Figure 4.27 shows another page from the same website where boxes and colour (background and buttons) were used to clarify the UI layout.

Figure 4.27 Adequate grouping in Airbnb using closure and similarity


Symmetry
We like to perceive visual objects as symmetric shapes. For instance, if you look at Figure 4.28, you will perceive the elements as laid out from the centre rather than from the left or right sides.

Figure 4.28 Symmetry: We see three groups of items aligned from the centre

Figure 4.29 Symmetry: This face was manipulated to be asymmetric and therefore it looks awkward (Source: Brionivich / Foter / CC)


Even when scanning from left to right or right to left, following the continuation principle, you end up grouping the objects by symmetry, perceiving two groups of brackets rather than three groups of disparate visual objects. Interestingly, in this example, symmetry also seems to be stronger than proximity. Somehow symmetry is associated with beauty. Maybe this is because many things in our physical world are symmetric, including people, with their symmetric arms, legs, eyes, ears, etc. Analyse Figure 4.29 and observe how the artificial asymmetries result in an awkward perception of the face. Perhaps our comfortable relationship with symmetry explains why so many science fiction stories use aliens with three legs or three eyes.

The principle of symmetry suggests that people associate certain positive attributes with symmetric visual objects: balance, harmony, stability, consistency, etc. Asymmetric arrangements may give either a negative perception or a perception of disruption, which indeed may be used to obtain an effect of surprise. Observe Figure 4.30. Do you prefer the layout on the left or the one on the right? We seamlessly prefer the one on the left. Note there are three types of symmetry you can use: reflection, which uses a central line to mirror a visual object; rotation, which repeats a visual object rotated by an angle; and translation, which repeats a visual object in a different space.

Figure 4.30 Using symmetry in UX design: The left option is perceived as better
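To make the three symmetry types concrete, here is a small illustrative sketch (not from the book) expressing each as a coordinate transform on 2D points; all function names are hypothetical:

```python
import math

def reflect_x(point, axis_x=0.0):
    """Reflection: mirror a point across a vertical line at axis_x."""
    x, y = point
    return (2 * axis_x - x, y)

def rotate(point, degrees, centre=(0.0, 0.0)):
    """Rotation: repeat a point rotated by an angle around a centre."""
    x, y = point[0] - centre[0], point[1] - centre[1]
    a = math.radians(degrees)
    return (centre[0] + x * math.cos(a) - y * math.sin(a),
            centre[1] + x * math.sin(a) + y * math.cos(a))

def translate(point, dx, dy):
    """Translation: repeat a point displaced to a different space."""
    return (point[0] + dx, point[1] + dy)

# A UI element at (3, 2) mirrored around a layout's central axis at x=5
print(reflect_x((3, 2), axis_x=5))   # (7, 2)
```

A symmetric layout is simply one that maps onto itself under one of these transforms, which is why mirrored panels and evenly repeated items read as balanced.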

Figure 4.31 Using asymmetry in UX design: The buy option is perceived as important

Figure 4.32 Symmetrical organisation of Safari's Favourite links


In Figure 4.31 we show an example where asymmetry is used to obtain a useful outcome. Certainly the bigger button appears as a disruption, something unusual when compared with the other buttons. In this example, asymmetry was used to draw attention to the dissenter, which is there to compel you to buy something.

Symmetry can be found everywhere in UI layout. For instance, Figure 4.32 shows the symmetrical organisation of Safari's "Favourite" links (vertically and horizontally). Note also the symmetrical placement of the search box. Figure 4.33 illustrates the use of vertical and horizontal symmetry in the Airbnb website. This website is recognised for being very successful in attracting users to alternative forms of lodging. Maybe one reason is that it gives them a sense of comfort.


Figure 4.33 Use of symmetry in airbnb.com: Not only is the picture placement symmetric, the pictures themselves have strong symmetry

Wireframes
Ideas about how to lay out a UI have to be crystallised somewhere. Wireframes are the right tool for the job. A wireframe is a schematic layout of a UI. Wireframes emphasise static composition, using layers of boxes and lines to articulate the UI layout. One important property of wireframes is that they consider the UI in an abstract way, avoiding concrete, fine-grained details. This is because the main purpose of wireframes is to elucidate how users are going to perceive the UI rather than to define exactly how the UI is going to be implemented as a software component.

A cautionary note is necessary here about the main purpose of wireframes. On the surface, it may seem the main goal is to reveal what inputs and outputs are necessary to use a system. But on a deeper level, we should realise that the UI is a mediating component built on top of the system. Therefore, wireframes must necessarily reveal something about the inner system. The UX designer is responsible for developing the whole mediating structure between the user and the system. Especially with complex systems, such a mediating structure is also going to be complex. For instance, the interaction with a complex industrial system may require checking many instruments and handling many controls. All these checks and handles will have to be organised in some way. So we can more properly say that the main purpose of wireframes is to help the UX designer strategise about complexity.


Figure 4.34 Sketchy wireframes developed for a mobile device (Source: Author)

Sketchy wireframes
Sketchy wireframes are intended to provide a raw overview of the UI, illustrating the layout and inputs and outputs but using just the core UI objects. They may be drawn with paper and pencil. However, many tools allow you to build wireframes with a sketchy look. Figure 4.34 shows an example of sketchy wireframes done with paper and pencil. They document the UI for a mobile device.

Figure 4.35 Detailed wireframe for a Web site (Source: D. Inkster)


Detailed wireframes
Detailed wireframes serve to document a UI with more detail and precision. Unlike sketchy wireframes, they identify all necessary UI elements and their composition in a realistic way, very close to what the users will end up interacting with. These types of wireframes are important to explain to software developers what they have to build. Another significant difference from sketchy wireframes is that the UX designer must commit to a certain UI framework. For instance, if developing a UI for smartphones, a decision has to be made whether to design for iPhone or Android, since they use different platforms.

Each platform provides a different set of UI objects, also known as a toolkit, which provides a distinctive look & feel. The detailed design is then expected to reflect the mediating structure of the UI and the look & feel of the target platform. Nowadays, many wireframing tools provide UI toolkits with very realistic objects used in platforms such as iPhone, Android, Windows Mobile, HTML pages, etc. An example using HTML is shown in Figure 4.35.

Figure 4.36 Sequences of wireframes developed for a mobile application (Source: WMF / Wikimedia / CC)


Sequences of wireframes
Sequences of wireframes are intended to be used in a step-by-step way, for instance depicting task flows. An example is shown in Figure 4.36. Sequences of wireframes can be used to validate a complete UI design with users. Figure 4.37 shows a sequence of wireframes that has been specifically tailored to validate with a user how a certain function is executed. The wireframe shown on the top left is first shown to the user. Then the user is requested to do a certain interaction and the following wireframe is presented. As you can easily see in the example, the level of detail can be very high. The simulated interaction requires the user to do precise operations on the UI, like pressing a specific button or providing some input using a specific UI object.

Developing wireframes
A key issue to consider when developing wireframes is that you should be able to create multiple designs quickly and cheaply. The best way to do it is to keep the design modular and changeable. For example, headers and footers are usually reused by every webpage belonging to a website. Therefore it makes sense to create them in a separate layer and then reuse this layer in multiple pages. You can create more layers, each one addressing a specific purpose, such as input forms, popup error windows, login windows, printing, etc. You can then reuse and compose these layers (Figure 4.38).
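The layer idea can be sketched in a few lines of code. This is an illustrative sketch, not from the book: each layer is a reusable list of text rows, and pages are composed by stacking layers; all names are hypothetical.

```python
# Reusable layers, each defined once
header = ["[logo]  Home | Products | About"]
footer = ["(c) Example Co. - contact - privacy"]
menu   = ["[Search ______] (Go)"]

def compose_page(*layers):
    """Stack layers top to bottom into one page (a list of text rows)."""
    rows = []
    for layer in layers:
        rows.extend(layer)
    return rows

# Two pages reuse the same header, menu and footer layers
home    = compose_page(header, menu, ["Welcome!"], footer)
product = compose_page(header, menu, ["Product details..."], footer)

print(home[0])   # the shared header row
```

Changing the header layer in one place now updates every page that composes it, which is exactly the cheap-to-change property the text argues for.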


Figure 4.37 Using wireframes (Source: Mark Congiusta / Flickr / CC)

Figure 4.38 Layer system for wireframes. The header, footer and menu can be reused


Chapter 5

Designing Affordances

Use of Everyday Things
Don Norman, in "The Design of Everyday Things", drew attention to how some common objects have properties that tell us how they can be used.

Figure 5.1 There is no ambiguity in how this basin tap can be used (Source: Author)

For instance, consider the basin tap shown in Figure 5.1. When we look at the tap, we immediately recognise how to use it. We can perceive there is a handle. We understand where to place the hand. And we see that the handle can be rotated horizontally. There is even a conventional sign telling how to get hot and cold water (red versus blue). Interestingly, the same object with a different configuration may not have the same capacity. For instance, observe the tap shown in Figure 5.2. Even though you can clearly recognise the object, there is some ambiguity regarding how to use it.

Figure 5.2 This basin tap creates some ambiguity regarding its use (Source: Author)


The handle has an unusual shape and the possible movements are unclear. For instance, there is no indication of the axes of movement. In this case, the ambiguity was created by design, to raise some interest in the object, so that users can play with it.

Figure 5.3 This basin tap is so unusual that users may really find it impossible to use (Source: Author)


Figure 5.4 The price to pay for an unusual tap is that users need instructions (Source: Author)

The case illustrated in Figure 5.3 is even more interesting. The basin tap is not only unusual, it seems taken from a science fiction movie. The end result is that users will not know how to use it and therefore detailed instructions are needed (Figure 5.4). Interestingly, this lack of familiarity is caused by the attempt to bring innovation: the tap integrates the washing and drying functions, which are usually separate.

We should however note that, in most UIs, the goal is to avoid ambiguity and to make it as clear as possible how to interact with a system. For instance, you would not like to have the basin taps shown in Figure 5.2 and Figure 5.4 in a hospital, where basin taps may have to be used in emergency situations. You would also not like airplanes to have cockpits that create ambiguity and require pilots to explore how to use them while flying. And you would not like productivity tools, such as text editors, to decrease your productivity by playing games with your perception. The issue then is that our perception is capable of telling us what works and what doesn't; and objects can be designed to either support or contradict our perception. Understanding this capability is fundamental for UX design, as it allows influencing human behaviour. Don Norman called this "the psychology of everyday things".

Figure 5.5 Unusual chair arrangement with only one arm rest per chair (Source: Author)

Figure 5.6 Typical chair arrangement with two arm rests per chair (Source: Author)


Everyday design thinking
We can find many interesting examples of everyday design thinking at work. Analyse, for instance, Figure 5.5, which shows some chairs at an airport. Did you notice that there is only one arm rest per chair? Why? If you think about the users' comfort, a better choice would be having two arm rests per chair, as shown in Figure 5.6. That makes seating much more comfortable. Or maybe you could even consider having seating like that shown in Figure 5.7. That would be great for the users' comfort.

Figure 5.7 Comfortable seating at the airport (Source: Author)

The problem however is that we cannot consider only comfort. The airport seating shown in Figure 5.7 takes too much space, and airports must handle a lot of people. So why not keep the traditional chairs with two arm rests? Adding that arm rest raises another concern: it increases cost. The cost of this simple decision can be very high in a big airport with lots of seats.

Figure 5.7 Another unusual chair arrangement, with no arm rests (Source: Author)

Now that we have brought in the cost factor, perhaps another choice is to eliminate the arm rests, as in Figure 5.7. This seems the optimal solution from a cost perspective. However, that decision may have interesting consequences. Without the arm rests, users would lie down on the chairs, as shown in Figure 5.8. Airport managers would not like it, with a lot of people sleeping around and making the airport look sloppy.

Figure 5.8 The consequence of not having arm rests (Source: OakleyOriginals / Foter / CC)


So the best compromise is to cut down the number of arm rests to decrease cost, but to leave some so that users cannot easily lie down. What should you take away from this discussion? The arm rests in airport seating are there to constrain the users. And that constraint was a deliberate decision made by the designers.

Another example with toilets
Hopefully you are not yet annoyed with toilet examples. Here is another one, which has a more subtle design angle.

Figure 5.9 This toilet design "suggests" that users leave the toilet without washing their hands (Source: Author)

Figure 5.9 shows a men's public toilet that, looking from the door, is structured in the following way. When entering the toilet, the user first finds two private rooms with flush toilets, then the urinal, and finally the wash basin. Now consider the user entering this toilet and going to the urinal. When finished, what happens? Well, rationality tells us that it is good practice to wash your hands. But the problem is that, in this toilet, to wash your hands you have to go further away from the door. Cognitively, there will be a dilemma between doing what is right (washing your hands) and what is more efficient (getting out of there). Which one wins? Very often efficiency wins. By the way, this dilemma is known as the efficiency-thoroughness tradeoff, which states that to increase efficiency you have to decrease thoroughness and vice versa.

Could we change the toilet's design to increase thoroughness? The answer is a clear yes. A simple solution would be to move the wash basin closer to the door. Another solution would be to constrain the users to behave properly. Check Figure 5.10. The depicted toilet has two different areas, one that is closed, with the flush toilets and urinal, and another that has a clear door to the outside, with the wash basin.

Figure 5.10 This design "suggests" that users wash their hands (Source: Author)

How will people behave in this environment? Well, what happens is that this design makes it ostensibly public whether you wash your hands or not. So in this case efficiency will probably lose against the shameful prospect of being seen leaving a toilet without washing your hands. Once again, users have been constrained by design. The conclusion then is that users can be influenced by constraining the available options. Such options can be manipulated with affordances.

References Norman, D., 2013. The design of everyday things: Revised and expanded edition. Basic books.

Concept of Affordance
An affordance is a property in which the physical characteristics of an object reveal its function. Don Norman illustrated the concept using a very common object: a door handle. Check, for instance, Figure 5.11 and consider that you would like to open the door. The physical characteristics of the door handle suggest that you open the door by swivelling the handle. So the handle has an affordance, whose fundamental value consists in helping people open doors.

Figure 5.11 This door can be opened by swivelling the handle (Source: Author)

Different door handles operate in different ways and therefore have different affordances. For instance, the door handle shown in Figure 5.12 suggests rotating the handle, while the one shown in Figure 5.13 suggests pulling the handle. Interestingly, the handle shown in Figure 5.12 has some cuts suggesting rotating clockwise and not the other way around.


Figure 5.12 This door can be opened by rotating the handle (Source: Author)

Figure 5.13 This door can be opened by pulling the handle (Source: Author)


Regarding UX design, the concept of affordance is critical to understand that:
• UI objects are related to functions (e.g. the door handle is related to opening a door)
• Users use UI objects with intentions (such as opening doors)
• UI objects have affordances that suggest how to interact with them to execute functions (such as a swivelling handle)

When the UI object's affordances correspond with the user's intentions, the design will be easier to use (you will have no problems opening a door). But when the affordances do not correspond with the user's intentions, the design will be more difficult to use (you will have a breakdown if a handle does not open a door).

One aspect of affordances is that they are socially and culturally situated. Some affordances may be recognised by a certain group of users, but not by others. Therefore, designers cannot be oblivious to the cultural context of the users they are targeting with their design. Another issue to consider is that, after users have acquired affordances through experience, they will be able to reuse them in different contexts. For instance, moving from the physical to the computational context, it makes sense to use a door handle icon to open an electronic document. This of course suggests that designers should consider how familiar the users are with certain affordances and how they can reapply them to different contexts.

More formally, some fundamental properties have been associated with affordances in the HCI domain:
• An affordance refers primarily to the fundamental properties of an object
• An affordance refers to perceived properties of an object, which may not be actual properties. However, the users' perception is involved in characterising the existence of an affordance (see Figure 5.14)
• An affordance is tightly coupled with past knowledge and experience
• An affordance can make an action difficult or easy

Figure 5.14 Affordances depend on users’ perception: In this place, the scooter drivers will perceive different affordances than the pedestrians, e.g. where to drive and where to walk. They will only share the affordance given by the pedestrian crossing (Source: Author)


References Norman, D., 2013. The design of everyday things: Revised and expanded edition. Basic books. McGrenere, J. and Ho, W., 2000, May. Affordances: Clarifying and evolving a concept. In Graphics Interface (Vol. 2000, pp. 179-186).

Types of Affordances
Affordances define the properties of objects in two different dimensions: perceptual information and action possibility. Perceptual information refers to elements that allow seeing and understanding the affordances of a certain object.

Figure 5.15 Perceptual information provided by the letter box: Where to put letters (Source: Author)

The letter box shown in Figure 5.15 is unambiguous about where to put a letter for mailing. On the other hand, the object shown in Figure 5.16 does not provide any information that suggests an affordance. Action possibility refers to the capacity to execute (or not) a function by interacting with the object. The combination of perceptual information and action possibility defines a matrix that allows us to distinguish three different types of affordances (Figure 5.17).

Figure 5.16 This object does not suggest any affordance (Source: Eddi van W. / Flickr / CC)


Figure 5.17 Three types of affordances

Perceived affordances
They provide sufficient perceptual information and have correct links to functions. This means, for instance, that a "Buy" button is properly placed on a webpage and allows the user to actually buy a product online.

Figure 5.18 Perceived affordances: All available options have obvious correspondences to known functions


Figure 5.18 gives an example taken from Apple's Keynote. Several UI objects are available, including buttons, check boxes, dropdown menus, etc. The UI provides perceptual information that is sufficient to understand how to interact with the UI objects and to understand what functions will be executed. For example, there is no doubt that "Edit Master Slide" allows the user to make changes to the master slide. Furthermore, through experience, users know what a master slide is.

Hidden affordances
They have correct links to functions but do not provide sufficient perceptual information. This means, for instance, that a "Delete" button is not perceptible on a webpage, maybe because it is too far away from the focus of attention or because you would have to scroll down the page.

Figure 5.19 Hidden affordances: You can click on the day of the week to move to the day view. You can also click on a cell to create an event

Figure 5.20 Hidden affordance: The circle pad on the iPod allows you to scroll up and down but the affordances are not present (Source: Author)


Figure 5.19 gives an example taken from Apple's Calendar. Some affordances are perceptible, for instance the "Today" and arrow buttons on the top right corner. But is there any other function available? Actually there is. For instance, you can click on the table's day of the week to move to a day view. You can also click on a cell to create a new event. However, the perceptual information is not there and therefore we have hidden affordances. Another example: the iPod (Figure 5.20) had a circle pad that allowed you to scroll up and down by circling around with the finger. However, the UI gave no indication that such functionality was available. By the way, if you held the play icon for 5 seconds, the iPod would turn off. But once again, there was no indication in the UI that such a feature was available.

Figure 5.21 False affordances: How to use the handle to open the door, left or right, push or pull? (Source: Author)


False affordances
The design provides perceptual information that leads the user to perceive an affordance, but the affordance is not really there and there is no link to a function. Note that the notion of false affordance is a figure of speech, since the affordance is not really there, only the misleading perceptual information. Figure 5.21 provides a subtle example of a false affordance. How do you open the door? The horizontal bar, which is symmetrically placed on the door, suggests that you can either push or pull using any side of the bar. However, the door will not open if you try to push or pull using the right side of the bar. Look carefully at the top of the door, where it joins the wall. Do you see that the joint is on the right side? This means that you have to push or pull from the left side. So the bar gives you false information on how to open the door. It is interesting to observe how placing the bar vertically removes the false affordance, as shown in Figure 5.22.

Figure 5.22 Using vertical bars removes false affordances (Source: Author)

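The perception-by-action matrix behind these three types can be summarised in a few lines of code. This is an illustrative sketch, not from the book; the function name and return labels are hypothetical.

```python
def classify_affordance(perceptual_info: bool, action_possible: bool) -> str:
    """Classify an affordance by perceptual information and action possibility."""
    if perceptual_info and action_possible:
        return "perceived"   # e.g. a visible "Buy" button that really buys
    if action_possible:
        return "hidden"      # e.g. iPod scrolling with no visual cue
    if perceptual_info:
        return "false"       # e.g. a symmetric door bar that only works on one side
    return "none"            # no information and no action possibility

print(classify_affordance(True, True))    # perceived
print(classify_affordance(False, True))   # hidden
print(classify_affordance(True, False))   # false
```

Reading the matrix this way makes the designer's job explicit: supply perceptual information exactly where an action possibility exists, and nowhere else.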

References McGrenere, J. and Ho, W., 2000, May. Affordances: Clarifying and evolving a concept. In Graphics Interface (Vol. 2000, pp. 179-186).

Good Affordances
Don Norman was mainly interested in how to design good affordances, so that system functions can be easily perceived by users. Several elements have to be considered.

Familiarity
On a first approach, one should immediately consider familiarity. Familiarity allows users to know the expected functions of an object, to align their intentions with the functions, and also to reuse affordances in different contexts.

Figure 5.23 DVD-player controls were reused by iTunes

For example, most of us are very familiar with a set of icons/functions such as the Start/Stop/Pause and Backward/Forward controls. They were originally used in reel-to-reel, cassette and DVD decks, and were then adopted by most modern digital players like iTunes (Figure 5.23). Even though many may not have used the early physical devices, the characteristics of the UI objects and the associated functions have become almost universal and therefore can be used anytime we would like to design this type of control. There are even international standards developed around these symbols (IEC 60417 and ISO 7000).

Another familiar affordance is the computer's trash can. Most of us know the metaphor: if you drag a file to the trash can, it will be marked for deletion; however, it will only be deleted when you empty the trash. The trash can is easily recognisable, even if different operating systems adopt different icons. The trash can was originally developed by Apple for the Lisa computer and later on licensed to Microsoft for use in the Windows operating system.

An interesting story with the trash can affordance is the use that Apple decided to make of it on the early Macintosh computers. On these computers, if you wanted to eject the floppy disk, you would drag its icon to the trash can. The problem, though, is the ambiguity this design created. On the Macintosh, if you drag a file to the trash can, the file is deleted. However, if you drag a floppy disk, the disk is not deleted. Instead, it is ejected. This design decision was considered bizarre by many, and the main reason is that it conflicts with the familiar function of the trash can.

Learning
If familiarity does not work, then the best option is to consider learning. Users should be able to learn the meaning of an affordance. This can be done by either providing information on demand about a UI object (e.g. using tool tips) or providing undo/redo options, which allow users to explore the available action possibilities.

Remembering
Another issue to consider is remembering. Remembering is the cognitive process of storing mental imagery in our long-term memory. This process is a combination of passive storage with interaction. It builds an internal representation of an event or sequence of events with situated action, while combining new with existing knowledge in a cumulative way. Research in cognition indicates that events are easier to remember if they are:
• Memorable
• Perceived as necessary in the future
• Repeated

This suggests that users may only remember new affordances if they are frequently used or perceived as useful. Of course, catastrophically memorable experiences will also be remembered.

Figure 5.24 Physical affordance: The scissors physically constrain the way we interact with them (Source: Author)


Physical issues
The scissors shown in Figure 5.24 are defined by a set of physical properties that constrain the way one physically interacts with them. The two holes provided by the scissors determine how users use their fingers to manipulate the object (which in this case privileges the right-handed). And the blades can only move in a certain way. Even though most UI objects do not have such physicality, there are certain aspects of physical interaction that can be considered when designing them. They are related to the combination of physical movements, such as dragging, scrolling and swiping, with sensory cues.

Figure 5.25 Sensory cues provided by physical buttons: You feel the roundness and concaveness of the button, so you know where to press; you also feel the movement when you press it, so you can be sure that you pressed it (Source: Author)


A simple example can be given with buttons: which one do you consider easier to operate, a UI button or a physical button? The physical button provides more sensory cues (Figure 5.25). For instance, you sense when you touch the button. You feel movement when you press the button. And you also know when the button has travelled its full excursion. On the other hand, the UI button does not provide any physical sensory cues. Therefore, designers have to compensate for the lack of physical cues with visual cues: the button changes colour when the mouse is on top of it, it changes colour again when you press it, etc.

These visual cues have been designed to compensate for the lack of physical cues and are an important part of the design of an affordance. Another good example is given by the scrollbar. Interacting with a scrollbar is complex, since the user has to combine the hand movement with the operation of the bar and the window movement on the screen without any physical cues. Besides, the hand, the bar and the window all have different constraints. For instance, the window is constrained by the window size and the screen size. The bar is also constrained in its movements. And of course the hand is constrained by the arm, the table and the mouse cable. Therefore the scrollbar has been designed to compensate for the lack of physical cues. For instance, visual cues appear when the window reaches its limit. The bar itself often gives an indication of the portion of the window that is visible on the screen relative to the portion that is invisible. As another example, Apple's iOS provides several visual cues to compensate for the lack of physical cues, notably the rubber band (Figure 5.26) and pull to refresh (Figure 5.27). By the way, the rubber band effect is more technically designated as inertial scrolling in a famous US patent, #7469381, assigned to Apple.
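The proportional scrollbar thumb mentioned above is simple arithmetic. Here is an illustrative sketch (not from the book, and the names are hypothetical): the thumb occupies the same fraction of the track as the viewport occupies of the content, with a minimum size so it stays easy to grab.

```python
def thumb_height(track_px: int, content_px: int, viewport_px: int,
                 min_px: int = 20) -> int:
    """Size the scrollbar thumb to show what fraction of the content is visible."""
    if content_px <= viewport_px:
        return track_px                      # everything visible: thumb fills the track
    fraction = viewport_px / content_px      # visible fraction of the content
    return max(min_px, round(track_px * fraction))

print(thumb_height(400, 2000, 500))   # 100: a quarter of the content is visible
```

This is exactly the kind of designed visual cue that stands in for a missing physical one: the thumb's size tells the user, at a glance, how much content lies off-screen.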


Figure 5.26 Rubber band used in iOS: When you drag the page down, beyond the margin, you see the background. The page will bounce back when you finish dragging

Figure 5.27 Pull to refresh used by CNN app: When you pull the page down, you get a refreshing icon and a message saying that the page is being updated


Visibility

UX designers can easily control the visibility of UI objects and use that property in combination with affordances. Three states can be defined:
• The object is visible
• The object is not visible
• The object is greyed out

Simply put, whenever a function is not available, the affordance should be hidden; conversely, whenever a function is available the affordance should be perceptible. However, hiding and unhiding UI objects can be disturbing, since such behaviour may make it more difficult to build a mental model. By greying out a UI object, the designer tells the user that a function exists but is momentarily unavailable. In terms of affordances, a greyed-out affordance is still a perceived affordance, which helps build a mental model. It simply provides additional perceptual information. This avoids having hidden affordances.
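The three states above can be expressed as a small decision rule. This is a minimal sketch; the enum and function names are made up for illustration.

```python
# Sketch: choosing among the three visibility states of a UI object.
# All names here are illustrative.

from enum import Enum

class Visibility(Enum):
    VISIBLE = "visible"        # function available: perceptible affordance
    GREYED_OUT = "greyed_out"  # function exists but is momentarily unavailable
    HIDDEN = "hidden"          # function does not apply in this context

def affordance_state(function_exists, currently_available):
    if not function_exists:
        # Hiding is reserved for functions that make no sense here at all;
        # greying out keeps the mental model stable for temporary cases.
        return Visibility.HIDDEN
    return Visibility.VISIBLE if currently_available else Visibility.GREYED_OUT
```

A "Paste" menu item with an empty clipboard, for instance, would come out greyed out rather than hidden, so the user still learns that pasting is possible in principle.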

Mapping

Mapping is the relationship between an affordance and its function. Affordances should have clear mappings to functions. As with visibility, this issue concerns the development of mental models and is particularly problematic with complex systems.

The canonical example of mapping is the gas stovetop (Figure 5.28). The two burners on the left are aligned vertically. However, the controls are aligned horizontally. So, which button controls the top burner, right or left?

Figure 5.28 Mapping: The relationship between the buttons and the burners is not obvious, since the left button can control either the top or the bottom burner (Source: julochka / Flickr / CC)

This mapping problem happens with most stovetops. A typical solution is to provide instructions: you can see in Figure 5.28 that symbols are printed below the buttons to explain the mapping. However, this solution is not perfect since, as you well know, the symbols tend to wear off after some time; once the instructions are gone, users have to deal with the wrong mapping again and again. Figure 5.29 illustrates a solution to the stovetop mapping problem. On the left, you can see the typical stovetop design with the mapping problem: the mapping between the buttons and the burners is not obvious. On the right, the mapping is clear, since it is natural that the top left button corresponds to the top left burner, and so on. What is the problem with this solution? The buttons take more space on the stovetop. After all, that is why most stovetops are designed like the example on the left: to save precious space. But the lesson here is that the design decision to save space on the stovetop results in countless occasions where you have to spend time figuring out how to operate a simple stove. That trade-off does not seem fair to users.

Figure 5.29 Solving the stove problem through mapping

Figure 5.30 How do you increase or decrease air flow, up or down? (Source: Author)


Figure 5.30 shows another example of problematic mapping in affordances. The wheel allows users to control the airflow into the car. However, what is the mapping to the function? Do you turn it up or down to increase the airflow? The affordance is ambiguous, and users will not know the outcome of their actions. Users should always be able to understand how their actions map to functions through affordances. An excellent example of mapping can be found in Figure 5.31. This UI allows winemakers to control the temperature of wine barrels during the fermentation process. The UI provides a direct mapping to the physical organisation of the winery, which makes it easy for users to know exactly what they are controlling. Another example of clear mapping is given in Figure 5.32, which shows how Wordpress uploads files. The mapping between the affordance, which defines a rectangular area for dropping files, and the function that uploads the files is unambiguous.


Figure 5.31 Direct mapping in a UI that controls the temperature of wine in barrels (Source: Author)

Figure 5.32 In Wordpress, the mapping between the “drop files here” affordance and file uploading is unambiguous


References

Barsalou, L., 2008. Grounded cognition. Annual Review of Psychology, 59, pp.617-645.
Norman, D., 2013. The design of everyday things: Revised and expanded edition. Basic Books.

Chapter 6

Interaction Design

Interaction Models

Model Human Processor

Card, Moran and Newell developed one of the earliest and most influential models describing and explaining user interaction (Figure 6.1). The model, known as the model human processor, establishes a functional parallelism between humans and computers, considering both as information processing machines. The model defines the following five components of human processing.

Figure 6.1 Model human processor


Perceptual processor. Processes incoming sensory information. It deals with sight, hearing, touch, taste, and smell (even though only the first three are explored by current UI technology).

Motor processor. Processes outgoing motor information. It deals with motor activities, which mainly include moving hands and fingers, and speaking.

Cognitive processor. Processes information at higher levels, including recalling, recognising, transforming, making decisions, etc.

Working memory. Stores sensory information coming from the perceptual processor and going to the motor processor for short periods of time. It activates actions in the cognitive processor.

Long term memory. Stores knowledge and experiences.

According to this model, the user's operations follow a recognise-act cycle of cognitive processing: incoming information is received by the perceptual processor, which stores visual and auditory images in working memory. Contents in working memory then activate actions in the cognitive processor. These actions are associated with long term memory. They also change the contents of working memory, which in the end activates the motor processor. A fundamental implication of this model, really relevant to UX design, is that it draws attention to the role of long term memory in user interaction. As users interact with computers, they build up knowledge about the UI, which helps planning and executing goals. Such knowledge exists only in the mind and is often characterised as mental models. We discuss mental models in the next chapter.

Figure 6.2 Simplified seven-stages model
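A classic back-of-the-envelope use of the model human processor is estimating simple reaction times by adding processor cycle times. The sketch below uses the typical cycle-time values published by Card, Moran and Newell (roughly 100 ms perceptual, 70 ms cognitive, 70 ms motor); actual values vary considerably between individuals, and the function name is illustrative.

```python
# Estimating a simple reaction time as one pass through the three
# processors of the model human processor. Typical cycle times from
# Card, Moran and Newell (values in milliseconds; real ranges are wide).

PERCEPTUAL_CYCLE = 100  # recognise the stimulus
COGNITIVE_CYCLE = 70    # decide to act
MOTOR_CYCLE = 70        # execute the movement (e.g. a key press)

def simple_reaction_time_ms(cognitive_cycles=1):
    """One perceptual cycle, n cognitive cycles, one motor cycle."""
    return PERCEPTUAL_CYCLE + cognitive_cycles * COGNITIVE_CYCLE + MOTOR_CYCLE
```

Pressing a key as soon as a light comes on would take one cycle of each processor, about 240 ms; a task that needs an extra decision step adds another cognitive cycle.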


Seven-Stages Model

To appreciate UX design one has to further appreciate the cognitive relationship between the user and the UI. Figure 6.2 shows a model proposed by Don Norman to describe this relationship. The model regards the UI as a layered component mediating between the user and the system. This component has dedicated technology comprising software and hardware, like screens, keyboards and dashboards, which mediates the user's access to the system. The main purpose of the UI layer is to optimise the relationship between the user and the system. Let us now analyse the mediating role of the UI. First, the user needs to operate the system to fulfil an objective. Second, the system supports various functions like information processing, data communications and machine control. Finally, the UI gives access to the functions supported by the system. In this relationship between user and system, we assume that the user, to fulfil the objective, executes some actions on the UI. In turn, the UI translates the actions invoked by the user into function calls, which are executed by the system. Then the system provides some feedback, which must go back through the UI.

Figure 6.3 Seven-stages model of user interaction


In this interaction model, feedback is critical because it provides the means for the user to evaluate whether the objective has been accomplished. If an objective has not been fully accomplished, the user has to adjust or redefine the objective and execute further actions on the UI. A feedback loop can therefore be defined as a constant cycle of information about what the system is doing, which the user needs in order to adjust the initial objective. The interaction model proposed by Norman further decomposes the relationship between user and system into seven stages (Figure 6.3):

Objective. Defining what the user desires to achieve. For instance, get better protection from cold.

Intention. Defining what has to be done, in generic terms. For instance, the user may decide to buy a jacket in an online store.

Action. The user must translate an intention into concrete goals and prepare for physical action. For instance, having decided to buy a jacket online, the user must go to a web store, search for a jacket and pick one.

Action execution. Where the concrete goals are translated into physical action. In the example above, the user will have to click on the jacket's icon, select the “add to basket” option and press the “buy now” button. This can go into further detail. For instance, to click on the icon, the user has to move the hand to the mouse, drag the pointer to the icon and click the mouse button.

Perception. The detection of signals and symbols provided by the UI. Buttons, menus, icons, and dialogue boxes are examples of symbols, while movements, blinks and popups are examples of signals typically used by the UI to support users' perception. For instance, in the online store example, the user will perceive the buy button and the shopping basket.

Interpretation. After receiving signals and symbols, the user has to interpret what happened. For instance, after a dialogue box pops up, the user usually interprets that some important information should be analysed in order to continue the interaction.

Evaluation. Involves a deeper understanding of what is happening with the system and what the impact on the user's objective is.
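The seven stages form a loop: if evaluation shows the objective is unmet, the user starts over. The sketch below makes that loop explicit; the stage names follow the text, while the callable standing in for the user's evaluation is an assumption of the sketch.

```python
# Sketch of the seven stages as a runnable interaction loop.
# The "evaluation" is stubbed by a callable so the loop terminates.

STAGES = ["objective", "intention", "action", "action execution",
          "perception", "interpretation", "evaluation"]

def interaction_cycle(objective_met):
    """Walk the seven stages; repeat while the objective is unmet.

    `objective_met` stands in for the user's evaluation stage.
    Returns the full trace of stages visited.
    """
    trace = []
    while True:
        trace.extend(STAGES)
        if objective_met():
            return trace

# A user who succeeds only on the second attempt visits all seven
# stages twice:
attempts = iter([False, True])
trace = interaction_cycle(lambda: next(attempts))
```

The point of the sketch is simply that every unmet objective costs a full pass through all seven stages, which is why reducing the two gulfs discussed later matters so much.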

Joint Cognitive Model

Essentially, the seven-stages model regards interaction as taking place between two independent entities, the user and the computer. Such interaction is iterative, consisting of cycles of action and feedback. Erik Hollnagel has proposed an alternative interaction model, based on different assumptions about the relationship between users and systems. The model is illustrated in Figure 6.4. To start with, Hollnagel suggests that, even though a clear physical separation between user and system exists (for now!), when we consider the logical separation the situation is not that clear. A simple example is driving a car. You can think of car and driver as separate physical entities. However, when considering the logic of driving, one can think of a single entity that brings together the driver's skills and the car's abilities. They must work together in steering the car. Hollnagel suggests that the separation between user and system is artificial: both steer together towards the objective, operating as a joint cognitive system. This explains why we do not differentiate between user and system in Figure 6.4.

Figure 6.4 Joint cognitive model of interaction


In this view, interaction consists of a continuous function involving three components.

Actions. The actions done by either the user or the system.

Events and feedback. The consequences of action, both in the user's mind and in the system's state.

Constructs. The interpretation of events and feedback, which reflects current understanding. Once again, such understanding may be held both by the user and by the system.

To the above components, one should add two other important elements that enrich the interaction model.

Anticipated feedback. This reflects cases where either users or systems can anticipate what is going to happen and are therefore able to interact proactively, instead of just reacting to events and feedback.

Disturbances. The most interesting relationships between users and systems are not fully predefined but open to surprises, such as unanticipated events and evolving work contexts.

For the UX designer, this model poses some interesting challenges. One is to consider a closer relationship between users and computers, where different capabilities have to be combined and steered jointly. An interesting dilemma that illustrates the point is related to flying an airplane: should the pilot oversee the flying system, or should the flying system oversee the pilot? Both pilot and flying system can be wrong sometimes, so the right answer is that each should oversee the other. Another challenge is to design for disturbances and anticipated feedback, which suggests designing for a more open collection of requirements. Finally, the co-agency between user and system challenges the design of constructs capable of truly reflecting a shared understanding of events (easy and natural for users, but challenging for computers).

References

Card, S., Moran, T. and Newell, A., 1986. The model human processor: An engineering model of human performance. Handbook of Perception and Human Performance, 2, pp.45-1.
Norman, D., 2013. The design of everyday things: Revised and expanded edition. Basic Books.
Hollnagel, E. and Woods, D.D., 2005. Joint cognitive systems: Foundations of cognitive systems engineering. CRC Press.

Gulfs of Evaluation and Execution

Norman's interaction model engages users in a series of goals, actions, feedback, new goals, actions, and so on. This is of course a simplified view of reality. Often users do not have precise goals, select erroneous actions, press the wrong buttons; feedback information may be absent or equivocal, signals may be lost, and users may misinterpret what happened and evaluate the situation wrongly. Even though a UX designer may not be able to address all these problems, there are at least two phenomena that should be accounted for in interaction design: the gulf of evaluation and the gulf of execution (Figure 6.5).

Figure 6.5 Gulf of evaluation and gulf of execution

Gulf of evaluation

The gulf of evaluation is defined as the gap between what the system provides to the user through the UI, in terms of feedback, and what the user perceives, interprets and evaluates. Often systems provide feedback that most users do not understand. For example, do you understand the meaning of all these error messages in MS Excel? If not, what will happen when you see them?

#N/A #DIV/0! #NUM! #REF! #NAME? ##### #NULL!

From a UX design perspective, reducing the gulf of evaluation represents a huge challenge, because it requires designers to think about how users will perceive, interpret and evaluate each piece of information that is fed back to them from the system through the UI. The issue is not only about clarity, i.e. providing messages that are simple and understandable for a wide range of users, but also about context, that is, making sure that each piece of feedback is evaluated in the right setting, which depends on goals, actions, mindsets, the environment, etc.
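One way to narrow the gulf of evaluation is to translate system-centred codes, like the Excel error values above, into user-centred explanations. The sketch below illustrates the idea; the wordings are my own illustrative phrasings, not Excel's official help text.

```python
# Sketch: translating system-centred error codes into user-centred
# messages. The explanations are illustrative, not official Excel text.

FRIENDLY = {
    "#DIV/0!": "This formula divides by zero. Check the cell it divides by.",
    "#N/A":    "A lookup could not find the value it was searching for.",
    "#NAME?":  "The formula uses a name that is not recognised; "
               "check for a mistyped function or range name.",
    "#REF!":   "The formula refers to a cell that no longer exists "
               "(it may have been deleted).",
}

def explain(error_code):
    # Fall back to a generic but still user-centred message, keeping
    # the raw code so support staff can act on it.
    return FRIENDLY.get(
        error_code,
        "Something went wrong with this formula (%s)." % error_code)
```

Note that the fallback keeps the raw code visible: clarity for the common user should not erase the context an expert needs.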

Gulf of execution

The gulf of execution is defined as the gap between the user's goals and the means provided by the system, through the UI, to support them. Very often, users would like to accomplish goals in a way that was not considered by the design. Other times, users have to devise workarounds to accomplish their goals while still using the system. The gulf of execution increases attrition: if a system is too difficult to use, people will not use it. Would you like an example? Many companies implement a security policy in Outlook Mail that forces employees to set a PIN on their mobile phones if they want to read business mail. Furthermore, the policy requires each new PIN to be different from the previous one. Putting the security issue aside, this policy imposes a huge cognitive effort on users. First, a mobile phone is more difficult to use with a PIN than without, because the login procedure takes time and requires memory recall. Second, people do not usually have the capacity to memorise a new random PIN from time to time. They like using the same one all the time. Right? So this policy represents a huge gulf of execution: the mobile phone asks users to do something they resist doing. Is there a workaround? Of course. Users are very good at finding workarounds: change the PIN three or four times in a row until you are allowed to use the same PIN again. The cost of changing the PIN multiple times is still high, but not as high as having to memorise a new PIN from time to time. All in all, consideration for the gulfs of execution and evaluation leads UX designers to think deeply about how users interact with the UI, and especially about how to keep the interaction between users and systems aligned.

References Norman, D.A., 2013. The design of everyday things: Revised and expanded edition. Basic books.

Control

Warning: another toilet example coming up. Movie 6.1 shows a paper dispenser common in public toilets, which perfectly illustrates the issue of control. Observe that the paper dispenser provides a very well-defined space for taking out the paper with your fingers. What happens when the user, with wet fingers, tries to take out a piece of paper?

Movie 6.1 The paper dispenser controls what the user can do (Source: Author)

Well, because the fingers are wet, the paper breaks and the operation does not end well. An obvious solution would be to take out the piece of paper with both hands, because the force would be better distributed and the paper would not break. However, this solution is not possible: the paper dispenser does not allow it, by providing a very small space where you can use just one hand. In summary, control of the interaction can lie either with the UI or with the user. In the former case, the UI constrains what the user can do. In the latter case, the user is allowed to do whatever she decides to do. The paper dispenser mentioned above is an example where excessive UI control negatively impacts the user interaction. The dilemma of control is that users never have it when they need it, and often have it when they should not. Think for instance of an airplane. We like the idea that a human pilot is in control in the cockpit, because computers sometimes make the wrong decisions. For instance, an autopilot may send a plane down when it should go up. On the other hand, there are also cases where we would like the computer to substitute for the human because of human error, e.g. a pilot neglecting an emergency signal. So the dilemma is where to place control: human or computer?

In less catastrophic cases than aviation, concerning UX design, the main issue to consider is how much freedom users are given to reach their goals. As a first approach, we can divide users into novice and advanced:

• Novice users should have less control over the UI, which means the computer will have more control in determining which functions can be executed and when
• Advanced users should have more control over the UI, which means the computer should allow any function to be executed by the users at any time

An example of these two types of user control can be found in Apple's AirPort Utility. The utility offers a UI for novice users (Figure 6.6), which basically shows whether a computer or iPhone is connected to the Internet or not. With this UI, users cannot mess with their network configurations. However, the tool lets advanced users gain control over the network configuration, allowing them to set the router address, DNS server, etc. (Figure 6.7).

Figure 6.6 AirPort Utility for novices: Users are not allowed to mess up the network configuration


However, the control principle goes beyond this strict novice/advanced classification. To start with, not only can novice users progress towards the advanced stage, but the UI should actually promote that progression. In many cases, showing shortcuts in the novice mode helps develop advanced interaction capabilities, since novice users can see what advanced options are available. Another possibility is having several control modes that users may select, or integrating multiple pathways for completing a task. Furthermore, beyond the strict dichotomy between novice and advanced users, the UI should offer two other capabilities: undo/redo and emergency exits.

Figure 6.7 Airport utility in advanced mode: Users are given freedom to change the network configuration

The undo/redo capability, when properly designed, is very powerful, as it allows users to explore the UI without fear of failure. It should be possible to execute any function, and to revert any consequence, using an undo function.
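A common way to implement undo/redo is with two stacks of reversible commands: executing a new command pushes its inverse onto the undo stack and clears the redo stack. This is a minimal sketch with illustrative names, not the mechanism of any particular toolkit.

```python
# Minimal sketch of an undo/redo mechanism: two stacks of reversible
# commands. Class and method names are illustrative.

class UndoManager:
    def __init__(self):
        self._undo, self._redo = [], []

    def execute(self, do, undo):
        """Run `do`, remember its inverse, and invalidate the redo history."""
        do()
        self._undo.append((do, undo))
        self._redo.clear()

    def undo(self):
        if self._undo:
            do, undo = self._undo.pop()
            undo()
            self._redo.append((do, undo))

    def redo(self):
        if self._redo:
            do, undo = self._redo.pop()
            do()
            self._undo.append((do, undo))
```

For example, typing a character would be registered as a pair of append/remove operations, so the user can freely explore and back out of any state.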


Figure 6.8 Amazon buying process: The user is only allowed to move forward

Figure 6.9 Wordpress upload function: There is no way to cancel this operation


Emergency exits provide the capability to terminate an interaction at any time. Figure 6.8 shows an example from Amazon, where the process of buying a book has arrived at the stage where the payment has to be defined. The webpage only provides a way to move forward (and buy the book). There is no way to cancel the process. This may be a case of bad UX design, or it may be intentional, to increase sales. Either way, the design decision violates the principle of user control. Figure 6.9 shows another example, this time from Wordpress. When uploading a file, an operation that may take a long time, Wordpress does not provide a way to cancel. The user is forced to wait until the task is completed.
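Supporting an emergency exit in a long-running operation usually means checking a cancel flag at safe points instead of running to completion. The sketch below simulates a cancellable upload; the function and parameter names are illustrative, and a real Cancel button would simply call `cancelled.set()`.

```python
# Sketch of an emergency exit: a long-running upload that checks a
# cancel flag between chunks instead of forcing the user to wait.
# The upload itself is simulated; names are illustrative.

import threading

def upload(chunks, send, cancelled):
    """Send chunks one by one; stop promptly if `cancelled` is set.

    Returns True if the upload completed, False if it was cancelled.
    """
    for chunk in chunks:
        if cancelled.is_set():
            return False  # the emergency exit: leave cleanly, mid-task
        send(chunk)
    return True
```

The design point is that cancellation is honoured at chunk boundaries, so the operation can stop within a predictable delay rather than only at the end.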


Feedback

Consider the example shown in Figure 6.10, where Apple's AirPort Utility is searching for an AirPort base station. How much time are you willing to wait for the base station to show up?

Figure 6.10 How much time are you willing to wait for something to happen?



Figure 6.11 Can you notice this confirmation in Plunker? Often users are focussed on the centre of the screen and do not notice the confirmation displayed at the top left

Some kind of UI feedback is necessary to sustain the user interaction. That is, when the user performs an action on the UI, some kind of answer is expected. Without feedback, the interaction cycle breaks and the user does not know what to do. The following elements must be considered when designing feedback.

Attention

Feedback must be provided in a way that can be easily noticed by the user. For instance, the use of sound, colours, shapes and dynamic features helps draw attention to feedback messages.

Gaze

Feedback should be located within the user's field of view for action. Check for example the Plunker UI shown in Figure 6.11. Usually, Plunker users are focussed on the centre of the screen, writing code, and therefore do not notice the alert message shown at the top left. Later on, they will have to decide whether to discard the previous version or the current version. Either way, something will be lost.

Comprehensibility

Check the feedback message shown in Figure 6.12. What does it refer to, and what can you do with it? It is vague and insufficient to really understand what is going on. Feedback must always provide information that users can easily understand and act upon, related to the users' actions in meaningful ways.


Figure 6.12 What can you do with this feedback message?

Figure 6.13 In HR Kiosk users get a “row inserted” message after requesting annual leave


A common problem found in bad feedback designs is that feedback messages are often more about the system than about the user, using language that makes sense in the system developers' domain but that cannot be understood by the common user. The HR Kiosk system shown in Figure 6.13 is used by employees to request annual leave. The display shows what happens when an employee requests annual leave. Can you find the message they get after requesting annual leave? Yes, that's the one: “Success! Row inserted”. What does “row inserted” have to do with annual leave? Nothing; it is a database programming thing. Software developers may understand it, but others will not. Other times, there is too much feedback. Figure 6.14 shows an error message provided by an enterprise system when the user fails to log on. Besides being extremely long, the use of all caps makes the message even harder to read.

Figure 6.14 An example of excessive feedback

Figure 6.15 Credibility: Word cannot open Word files?


In summary, in order to be comprehensible, feedback messages should be simple, but not too simple, and centred on the user.

Credibility Feedback should convey information that is consistent with the events. Check the amusing example shown in Figure 6.15. It does not seem credible that Word cannot open Word files.

Timing

Timing concerns the moment when a feedback message is provided. Jakob Nielsen, in his book “Usability Engineering”, notes three different time limits for delivering feedback to users:

• 0.1s is the limit for a user to feel the system provides immediate feedback
• 1s is the limit a user is willing to wait to continue the interaction without interruption
• 10s is the limit a user is willing to wait while persisting on the task

The first time limit is related to giving immediate feedback about the user's inputs. It includes, for instance, showing immediately that a button has been pressed, or showing the elements of a drop-down menu as soon as you press it. Immediate feedback is critical to compensate for the lack of physicality in most interactions with computers. The second limit is related to the system's synchronicity and is important to keep the user centred on the task. That is, if the user does not get any kind of feedback from the system within one second, the system will be seen as sluggish, supporting low interactivity.

Figure 6.16 Animated message displayed by Chromecast when updating the system


The third limit is more related to accountability. Systems sometimes need a long time to respond to users. For instance, installing system software or executing a complex query on a server may take a long time. In these cases, users should be informed as soon as possible that the interaction is delayed and actual progress is going to take a while. Figure 6.16 shows the message displayed by Chromecast when updating the system firmware. This is a case where there is nothing to do other than wait. The feedback message is friendly and animated, seeking to entertain the user while waiting. Ideally, however, feedback messages should provide reliable indications about how long the system will take to proceed. This allows users to strategise about what to do. An excellent example of such behaviour is shown in Figure 6.17. The traffic light not only tells pedestrians that they cannot cross the street, it also advises how much time it will take until they can cross. Such information reduces anxiety and helps users plan ahead.

Figure 6.17 Accountability in feedback: The user is kept informed of task progress (Source: Author)


Figure 6.18 System waiting for Numbers to update. A spinning icon is shown on the top left corner of the window


Figure 6.18 illustrates the Mac OS X software update service, where timing has been carefully considered. First, the system is optimistic and assumes that the update will not take long; therefore, only a spinning wheel is shown at the top of the window. However, if the update takes longer than expected, the feedback strategy changes: the spinning wheel disappears and a completion bar is displayed with an estimate of the time to completion (Figure 6.19). The completion bar reduces anxiety, discourages users from abandoning the task, and lets them plan what else to do.


Figure 6.19 Feedback changed to completion bar
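Nielsen's three time limits, together with the spinner-then-progress-bar switch just described, can be summarised as a simple policy. The thresholds come from the text; the strategy descriptions are illustrative.

```python
# Sketch: choosing a feedback strategy from Nielsen's three time limits.
# Thresholds in seconds; the strategy descriptions are illustrative.

def feedback_strategy(expected_duration_s):
    if expected_duration_s <= 0.1:
        return "none needed beyond the immediate UI response"
    if expected_duration_s <= 1.0:
        return "spinner (keep the user on the task)"
    if expected_duration_s <= 10.0:
        return "spinner, then progress bar if it runs long"
    return "progress bar with a time-to-completion estimate"
```

A pressed button falls in the first band, a short save in the second, and a firmware update in the last, where an estimate lets users plan what else to do.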

References

Nielsen, J., 1993. Usability engineering. Morgan Kaufmann.
Vickers, J.N., 2009. Advances in coupling perception and action: the quiet eye as a bidirectional link between gaze, attention, and action. Progress in Brain Research, 174, pp.279-288.
Zhang, J., Patel, V.L., Johnson, K.A., Malin, J. and Smith, J.W., 2002. Designing human-centered distributed information systems. IEEE Intelligent Systems, 17, pp.42-47.


Confirmation

The idea of using confirmations in UX design is quite simple: whenever the user performs an operation that may be unintended, may cause some harm, or is irreversible, a confirmation by the user is required. Confirmations usually take the form of popup dialogue boxes, also known as alert boxes. One important issue with alert boxes is that they are very costly from a cognitive perspective. They decrease task performance by requiring additional steps to be executed. They also interrupt the primary task the user was performing to execute a secondary task, making it more difficult to return to the primary task. Furthermore, even though users may be happy when a confirmation refers to a real problem, in many cases there will be false positives, i.e. unnecessary confirmations. The problem with false positives is that they annoy users and, because of cognitive bias, leave a stronger impression than true positives. That is, users retain more impressions of false positives than of true positives. This suggests that confirmations should be used infrequently. However, we observe that confirmations tend to be used too often in interaction management.

Figure 6.20 Delete confirmation in Windows 7 is active by default. This seems unnecessary because files are sent to the recycle bin, not really deleted


An obvious case of excessive confirmation is deleting files in some operating systems. For instance, in Windows 7 (and every version before it), the operating system would by default raise an alert box whenever the user decided to delete a file (Figure 6.20). However, this confirmation was unnecessary, since the file would be sent to the recycle bin and could be easily recovered.
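The rule implied by this discussion can be made explicit: confirm only when the outcome cannot simply be undone, or when real harm or an obvious slip is involved. The sketch below encodes that rule; the parameter names are illustrative.

```python
# Sketch: when should a confirmation dialogue appear? Following the
# discussion above, reversible operations (like moving a file to the
# recycle bin) should not trigger an alert box. Names are illustrative.

def needs_confirmation(irreversible, potentially_harmful, likely_unintended):
    # Ask only when the user could not simply undo the outcome,
    # or when real damage or an obvious slip is involved.
    return irreversible or potentially_harmful or likely_unintended
```

Deleting to the recycle bin is reversible, harmless and deliberate, so no alert; emptying the recycle bin is irreversible, so a confirmation is justified.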

Scientific studies show that alert boxes do not work as expected. First, users consider them annoying and frustrating. Second, users try to avoid them, spending on average less than 1.5 seconds processing a confirmation. Finally, users rapidly dismiss confirmations. Besides avoiding confirmations as much as possible, several design solutions can be considered when confirmations are used:

• Be polite when managing the interaction with the user, finding ways to communicate that are not coercive, and in particular avoiding interruptions
• Provide credible information
• Make sure that confirmations are absolutely necessary and cannot be resolved in any other way
• Make a confirmation a useful experience, by providing useful content that explains what happened and suggests how the problem may be avoided
• Make a confirmation a fun experience, using surprise and humour
• Do not blame the user for mistakes and errors that may have been caused by the system

Figure 6.21 Lack of politeness in Facebook: They insist on showing the signup/login popup window. You cannot really get rid of it, which becomes very frustrating


Figure 6.22 Lack of politeness in Facebook: This is what is shown to the user if she decides not to signup/login


A notorious example of lack of politeness in interaction management is given by Facebook (Figure 6.21). It insists on showing a signup/login popup window, which follows you as you try to read the page contents without logging in. As if that were not annoying enough, if the user stays on the page for some time without logging in, an even more annoying popup window is displayed (Figure 6.22).

References

Bahr, G. and Ford, R., 2011. How and why pop-ups don't work: Pop-up prompted eye movements, user affect and decision making. Computers in Human Behavior, 27(2), pp.776-783.

Transparency

System status and transparency

Have a look at Figure 6.23 and Figure 6.24. Notice how the position of the gearshift clearly indicates whether the car is parked or in drive.

Figure 6.23 The position of the gearshift indicates the car is parked (Source: Author)

! Figure 6.24 The position of the gearshift indicates the car is driving (Source: Author)

!

Now, have a look at Figure 6.25. This design leaves the gearshift always in the central position. You can change the transmission to park, reverse or drive, and in all cases the lever returns to the central position. What is the consequence of this apparently small design decision? The lever lacks transparency of system status. Figure 6.25 This shift lever always returns to the same position, independently of the system state (Source: CommunityChrysler / CC). Video available at https://youtu.be/DsTpWvZIi58

!

Is this a minor detail or an important issue? It can actually be a big issue, because it may generate significant conflicts between the system and the users, to the point of causing catastrophic errors. For instance, you may think the transmission is in park mode when in reality it is in drive or reverse. From there, you may produce what is known as a mode error: you may cause the vehicle to start moving without a driver. You may know that the actor Anton Yelchin, who became famous playing Chekov in the Star Trek reboot, died in June 2016. He died in a freak accident, crushed by his own car against the gate of his driveway. Some speculate that he may have thought his car was parked when the transmission was actually in reverse. Transparency of system status helps users know at all times what the system status is.

System logic and transparency Transparency is even more important when a system makes certain decisions on behalf of users, or provides recommendations to the users. In these cases, users need to understand the system logic by perceiving the relationships between antecedents and consequents, in order to trust the system.

References Zhang, J., Patel, V.L., Johnson, K.A., Malin, J. and Smith, J.W., 2002. Designing human-centered distributed information systems. IEEE Intelligent Systems, 17, pp.42-47. Sinha, R. and Swearingen, K., 2002, April. The role of transparency in recommender systems. In CHI'02 extended abstracts on Human factors in computing systems (pp. 830-831). ACM.

Recognition Have you ever used a command-line interface (CLI)? CLIs come from the times of green, character-only displays and keyboard-only interactions, though you can still use them in Apple, Microsoft and Unix operating systems by launching a terminal window (Figure 6.26). Figure 6.26 A terminal window only shows a prompt. All functions have to be recalled from users’ knowledge and experience

!

The main characteristic of a CLI is that, to use it, you must know exactly what to do and how to do it. For instance, most Unix users have stored somewhere in their brains that they should use the command “rm /dir/filename” to delete a file and “ls -la” to list the contents of a directory. On the other hand, the WIMP (Windows, Icons, Menus, Pointer) interface, also known as GUI (Graphical User Interface), tells the user what can be done. In the case of deleting a file, the GUI shows the trash can and a file icon. The trash can highlights that the system has a function for deleting files. (Though the user still must know to drag the file icon to the trash can.) Furthermore, the “File” menu option is visible, which also signals that a function is available to delete a file. These two very different paradigms have often been referred to as “in the world” versus “in the mind”, or as “recognition” versus “recall”. That is, when using the CLI, the knowledge is in the mind and the user has to recall what functions are available. When

using the GUI, the knowledge is in the world and the user can recognise the functions available on the screen through icons, menus, etc. In principle, designers should stimulate recognition. Recalling functions requires a lot of effort. Users have to memorise functions and specific ways to invoke them, which requires practice and frequent use. Furthermore, many systems have thousands of functions and it is almost impossible to recall them all. It is much easier to recognise them. The GUI paradigm provides many different ways to promote recognition. Icons and menus can be recognised when searching for an unusual function. For instance, Figure 6.28 illustrates that when you select the font menu in a text editor, the menu provides cues about the font style. So you do not have to recall the font details, you just have to search for the one that fulfils your goals. Figure 6.27 A GUI window shows a collection of UI elements that allow users to recognise the available functions

!

Figure 6.28 The font menu provides cues about the type of font. Users do not need to recall knowledge about the font style

!

If recall is still needed, consider that it is much easier when contextual cues are available. For instance, it is easier to recall a person’s name if you have cues about how the person looks, how they dress, their voice intonation, where they are, and what they are doing.
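The recall/recognition contrast can be sketched in a few lines of Python (a toy illustration, not a real shell or GUI): the CLI path requires the exact command name from memory, while the menu path only requires picking from a visible list.

```python
# Toy illustration: knowledge "in the mind" (CLI) vs "in the world" (menu).
COMMANDS = {"rm": "delete file", "ls": "list directory"}

def run_cli(command):
    """Recall: the user must already know the exact command name."""
    if command not in COMMANDS:
        raise KeyError(f"unknown command: {command}")
    return COMMANDS[command]

def show_menu():
    """Recognition: the available functions are visible on screen."""
    return [f"{i + 1}. {desc}" for i, desc in enumerate(COMMANDS.values())]

def run_menu(choice):
    """The user only picks a number from the list shown by show_menu()."""
    return list(COMMANDS.values())[choice - 1]
```

With the menu, a user who has never seen the system can still find "delete file" by scanning `show_menu()`; with the CLI, forgetting the string "rm" makes the function unreachable.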

References Norman, D., 2013. The design of everyday things: Revised and expanded edition. Basic books.

Attention Short term memory Have you ever heard about the magic number seven? We find the number seven in many circumstances; for instance, the week has seven days. However, the magic number seven originated in a paper published by George Miller investigating short term memory. The paper has been a hit, with more than 22,000 citations. By the way, the paper actually referred to seven plus or minus two.

George Miller was interested in investigating the span of our short term memory, which we use for a short period of time when processing information. For instance, if you need to add 3 and 5 and subtract 2, you need to store the intermediate result 8 somewhere in memory. We need short term memory for most information processing we do. (There are exceptions, like processing sensory information.) The contents stored in short term memory are lost after a while, or as soon as space is needed for new information. This strategy avoids storing information in our brain forever that is not really needed. (It would be unbearable to store in our brain all the intermediate calculations we do in our daily life.)

How many things can we store in short term memory? If you are given a sequence with the numbers 3, 6 and 8, how many can you recall? And with the numbers 3, 6, 8, 0, 8, 9? If you try this exercise, you will notice that you start to forget numbers. George Miller ran multiple experiments to understand the phenomenon. The experimental results indicated that we can recall seven, plus or minus two, numbers. Now, what happens if instead you are given the numbers 22, 56, 45, 87 and 32? Again, the results indicated that we can store seven, plus or minus two. The same could be said about pictures and other pieces of more complex information. This suggests that short term memory works with chunks of information. A chunk is a collection of bits of information that we can pack and process together. We can pack the numbers 5 and 6 into 56 and process them as a bus line. We can pack letters into words. And we can also pack 5 minutes of music into a song name: you think about the song name and recall the music. Chunking allows processing a lot more information in our limited short term memory. Early studies indicated that besides this dimensional limitation, short term memory is also constrained by time, having a half-life of about 7 seconds (without rehearsal), and by interference (like interruptions).

Recent studies indicate that our actual memory span is limited to four chunks of information. Therefore, the magic number seven became the magic number four.
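Chunking is easy to demonstrate in code. A short sketch (the grouping size is illustrative) that packs six digits into three chunks:

```python
def chunk(digits, size=2):
    """Group a digit string into chunks of the given size, e.g. pairs.
    Six single digits become three two-digit chunks, which fits
    comfortably within the four-chunk span."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

chunks = chunk("368089")   # ['36', '80', '89']
```

The same idea explains why phone numbers are printed in groups: the grouping does the chunking for the reader, instead of leaving it to short term memory.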

Focus of attention More recent research into short term memory differentiates between our focus of attention and our functional context, suggesting that we can focus on one single chunk of information while using three other chunks as functional context. This raises some important considerations for UX design: • Interactions should be designed to maintain focus on a primary task • Secondary tasks, interruptions and distractions should be avoided • If interference cannot be avoided, then the UI should be designed to help the user get back to the task easily • Functional context should be preserved with help from the UI • Any design should consider that users will forget information when moving the focus of attention

Example - Pilots There is research literature reporting situations where pilots forgot to check the fuel levels of airplanes before taking off, even though procedures strictly require those checks. The most plausible explanation is that the pilots, while doing the required checks, were interrupted by other activities, like a call from the tower, and then, when getting back to the procedure, could not recall exactly where they were and skipped a few steps, such as checking the fuel.

References Miller, G., 1956. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological review, 63(2), p.81. Newell, A., 1994. Unified theories of cognition. Harvard University Press. Jonides, J., Lewis, R., Nee, D., Lustig, C., Berman, M. and Moore, K., 2008. The mind and brain of short-term memory. Annual review of psychology, 59, p.193.

Error Tolerance Every user makes errors. There is no way to avoid them, even with highly trained users. Therefore, a challenge for interaction management is how to deal with errors. But first, we have to consider what types of errors may occur. James Reason, in “Human Error”, identifies three types of errors.

Slips If you wanted to press the green button but instead pressed the red button next to it, that is a slip. Slips are related to action execution rather than action planning. They occur through failure of lower level cognitive and physical functions, such as moving an arm. In the case of pressing the wrong button, the user may have correctly planned to press green, but unintentionally failed the cognitive steps and physical movement, which led to pressing red. Usually slips occur because users get used to a repeating sequence of operations and start executing them on “autopilot”, without proper attention. For instance, when driving to a known place, you usually drive your car without much attention. One day you may not notice that the cars in front have stopped, and you crash: your own autopilot failed you. Interestingly, some slips may occur because of over-attention instead of lack of attention. For instance, a learner driver may pay so much attention to the environment that they forget to use the brakes.

Lapses If you had to press the blue, yellow and green buttons in that order, but forgot the yellow one, that is a lapse. Like slips, lapses are more related to action execution than action planning. Lapses are often associated with memory faults: you knew that you had to press yellow, but forgot to do it, maybe because you were interrupted after pressing the blue button.

Mistakes If the situation required you to press the blue button but instead you decided to press the yellow button, that is a mistake. Unlike slips and lapses, which are related to action execution, mistakes are related to action planning. There are two main reasons for making a mistake: you either do not know the rules, or do not have the knowledge required to make the decision. The former are designated rule-based mistakes while the latter are named knowledge-based mistakes.

Tolerating errors How can UX designers increase error tolerance? In two complementary ways: through prevention and through containment. Figure 6.29 identifies some strategies for preventing and containing user errors. Figure 6.29 Strategies for dealing with errors

Types of error | Prevention | Containment
Slips | Avoid repetitive tasks; Detect improper actions | Support undo and cancel functions
Lapses | Make system state visible; Show progress; Reduce distractions; Suggest next action | Confirmation; Support back function
Rule-based mistakes | Display the rules; Provide instructions and help features; Show the consequences of actions; Training | Support undo/redo functions; Increase system control
Knowledge-based mistakes | Provide decision support; Show the consequences of actions; Reduce complexity; Confirmations; Training | Support undo/redo functions; Supervise the user
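Several of the containment strategies in the table rely on undo. A minimal sketch of an undo stack, a common implementation pattern (the class and names are illustrative, not from any specific toolkit):

```python
class UndoStack:
    """Record reversible actions so slips and mistakes can be contained."""

    def __init__(self):
        self._history = []

    def do(self, action, undo_action):
        """Execute an action and remember how to reverse it."""
        action()
        self._history.append(undo_action)

    def undo(self):
        """Reverse the most recent action, if any."""
        if self._history:
            self._history.pop()()

# A slip (deleting the wrong file) is contained by undoing it:
trash = []
stack = UndoStack()
stack.do(lambda: trash.append("report.doc"), lambda: trash.pop())
stack.undo()   # the deletion is reversed
```

The design choice here is that containment does not prevent the error; it makes the error cheap to recover from, which is why undo appears against every error type in the table.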

Example A recent example of user error was the 2018 Hawaii missile alert. On January 13, 2018, an employee at the Hawaii emergency management agency pressed a button that sent a message to every mobile phone in the state (Figure 6.30): “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL”. At first, it was reported that this was a slip, i.e. a physical glitch that led the employee to press the wrong button when receiving the instructions to send the alert. However, it was later reported that the actual problem was a rule-based mistake: the employee missed the initial part of the instructions saying that they were conducting a drill.

An interesting aspect of this example is that, apparently, the alert system did not have any expeditious way to contain user errors: the announcement that there was no threat took 38 painful minutes to be sent. Figure 6.30 How pressing a button can cause panic: On January 13, 2018, an employee at the Hawaii emergency management agency pressed a button that sent a message to every mobile phone in the state to take immediate shelter, because there was an inbound missile (Source: Public domain)

!

References Reason, J., 1990. Human error. Cambridge university press.

Chapter 7

Design Dilemmas Flexibility-Usability Trade-Off Sometimes UX designers have to deal with users that require different levels of task support. Two types of users are especially related to the flexibility-usability trade-off: novices and advanced users. Novice users have a preference for usability. In this context, usability means having detailed explanations, help features, guidance on what functions to select, and progressive disclosure. On the other hand, advanced users have a preference for flexibility. This means that these users prefer having immediate access to every possible function, do not need explanations about what to do and how to do it, and really hate progressive disclosure, because it reduces their performance. Figure 7.1 Google’s advanced search

! We can therefore say that we have a conflict between flexibility and usability. One simple way to address this conflict is to divide the UI into two operational modes: one with access to a limited set of core functions, which are necessary to complete a task in the simplest way possible, and another with core and advanced functions, which allow

completing the task in multiple ways. Initially, the UI can be restricted to the novice mode, offering an option for advanced users to jump to the advanced mode. Of course a UI that discloses every available system function is going to be very noisy and complex to use, but in fact advanced users find ways to optimise their performance in such a maze. In summary, this solution allows users to decide between flexibility and usability. Figure 7.1 shows a page provided by Google Search that is not often seen. This page provides advanced search features, such as searching for a particular file type. Since these search parameters are not frequently used, Google decided to provide them only on demand, thus making them invisible to novice users but still available to advanced users. UX designers should also consider that in specific contexts a UI is overwhelmingly used by experts. For instance, the software used in a contact centre is used by professionals who spend endless hours with the same UI. Furthermore, these professionals get initial training on using the UI. In these cases, the UX designer must consider that the novice mode is not only unnecessary but detrimental: the UI should be fully optimised for advanced usage. These types of UI can be found in many other areas such as air traffic control and flight ticket reservations, just to give two examples.

Progressive disclosure Progressive disclosure is a design strategy to increase usability by breaking the user interaction into small steps, each requiring little cognitive effort. For instance, when buying books in Booktopia, users are guided through four main stages (Figure 7.2): • Add books to the shopping cart • Provide address details • Define the delivery options • Pay and review In Booktopia, progressive disclosure increases the usability of buying a book by focussing the user on an initial set of actions, and then moving on to the next stage as soon as the user completes the current one. This approach also contributes to learning, since the decomposition provides the user with a rationale for buying a book.

Figure 7.2 In Booktopia, buying a book has been divided into four steps

! Of course we can also argue against progressive disclosure. Consider again the Booktopia example, but now in the case where a user has already bought books several times from the website and knows exactly what to do. In this case, progressive disclosure may result in reduced performance. Few things are more annoying than making a user go through an endless sequence of stages, especially when each stage only provides a small contribution towards the goal. So we have a dilemma here: which users to cater for, the ones that require usability or the ones that require flexibility?
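The Booktopia flow discussed above can be sketched as a simple stage sequence (the class is a hypothetical illustration; the stage names follow the text):

```python
class CheckoutWizard:
    """Progressive disclosure: show one stage at a time, in order."""

    STAGES = ["Add books to cart", "Provide address details",
              "Define delivery options", "Pay and review"]

    def __init__(self):
        self.current = 0

    def visible_stage(self):
        """Only the current stage is disclosed to the user."""
        return self.STAGES[self.current]

    def complete_stage(self):
        """Move to the next stage once the current one is done."""
        if self.current < len(self.STAGES) - 1:
            self.current += 1

wizard = CheckoutWizard()
wizard.complete_stage()
print(wizard.visible_stage())   # -> Provide address details
```

An advanced mode would simply expose all stages at once; the dilemma discussed above is deciding which of the two the default should be.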

References Lidwell, W., Holden, K., Butler, J.: Universal principles of design, revised and updated: 125 ways to enhance usability, influence perception, increase appeal, make better design decisions, and teach through design. Rockport Pub. (2010)

Efficiency-Thoroughness Trade-Off The Efficiency-Thoroughness Trade-Off (ETTO) principle says that people make a trade-off between being efficient and being thorough when accomplishing a task. In this context, efficiency means using as few cognitive resources (attention, memory, decision making, etc.) as possible. Thoroughness means paying attention to detail, checking all possible consequences of a decision, and following rules and regulations. Ideally, users should attend to both. For instance, an air traffic controller should pay full attention to every plane displayed on the screen while attending to all events and requests and communicating with pilots. However, real life tells us that air traffic controllers would rapidly burn out if they adopted such an extreme behaviour. What air traffic controllers do is balance their cognitive resources very carefully. If there is no pressure to manage a particular airplane, attention will be placed on others that seem more critical. That is, thoroughness will be reduced (a plane will temporarily not be managed) to increase efficiency (avoid excessive attention). People trade thoroughness for efficiency all the time. Even in highly regulated areas such as aviation, military aviation and the management of nuclear power plants, operators have been found to trade thoroughness for efficiency. Figure 7.3 Requiring thoroughness: The flap is there to remind the user to select the appropriate type of fuel (Source: Author)

!

Analysed from a slightly different angle, ETTO also says that if you decide to be thorough, then you cannot be efficient. Would that perhaps explain why accountants always take a long time to reach a decision? The consequence for UX design is that designers cannot be oblivious to this trade-off. If a system requires users to be thorough, then it seems obvious that efficiency will be low and, for example, the UI should allow more time to complete a task and offer more support for being thorough. If the system requires users to be efficient, then thoroughness will be low and the UI should try to compensate for that, for instance by reducing the amount of noise, confirmations and other factors that demand users’ attention. Figure 7.3 illustrates how this principle can be used to increase thoroughness by design. You know that cars will fail if you fill the tank with the wrong fuel. So you are always careful and check that you are using the correct fuel, right? Or maybe not. If you are in a rush, there is an inclination to trade caution, which takes time and effort, for rushing out as fast as possible. So, to increase thoroughness, the designer decided to add a step to the task: you have to lift the flap before using the handle. The expectation is that while lifting the flap you think about the meaning of the word “Diesel”. Figure 7.4 UI for an industrial process with several pump switches and valve controls. Users have to control 2 pumps and 6 valves

!

Kim Vicente, in the book “Cognitive Work Analysis”, provides a more sophisticated example of how ETTO can influence UX design. The example uses a UI of an industrial process that manages a water reservoir. The UI is shown in Figure 7.4. This UI is demanding from a cognitive perspective, because the user has to control two pumps and six valves at the same time to maintain the containers at a certain level. Can we improve ETTO for this UI? One thing we can do is to consider that in some situations the operator can relax and control just two valves, because the others are redundant. So the UI can actually look like Figure 7.5. This configuration allows controlling the containers, but in a simpler, less demanding way. We could say this UI corresponds to the efficiency mode. In case something goes wrong, e.g. one of the valves has a malfunction, this efficiency mode does not allow the user to safely control the industrial process. In that case, the user would have to fall back to the thorough mode, as shown in Figure 7.4. Figure 7.5 The efficiency mode for the same industrial process. Users only have to control 2 pumps and 2 valves

!

References Hollnagel, E., 2009. The ETTO principle: efficiency-thoroughness trade-off: why things that go right sometimes go wrong. Ashgate Publishing, Ltd. Vicente, K. 1999. Cognitive work analysis: Toward safe, productive, and healthy computerbased work. CRC Press.


Performance Load As the cognitive effort to perform a task increases, the likelihood of user error also increases. Performance load is a combination of cognitive effort (including e.g. perception, attention, retrieving information from memory, and making a decision) and physical effort, such as moving the arm to press a series of buttons. Note that performance load concerns in particular the design of the UI for advanced users, since novice users will always face a significant load when dealing with an unfamiliar UI. For UX design, the challenge here is to define strategies for reducing performance load. This includes: • Eliminating unnecessary information and reducing the number of functions available in the UI • Reducing the number of steps necessary to accomplish a task • Reducing motion, bringing together the UI objects the user has to interact with • Assisting the user with decision aids • Avoiding repetitive tasks, which may cause injury • Automating tasks Figure 7.6 A bad example of performance load: The user inputs a file name in the middle of the screen and then has to move to the top-right corner. The distance is huge on a 27” screen

!

Figure 7.6 shows the file upload feature of the Blackboard tool. This is a good example of performance load caused by physical effort (and a bad example of design). To upload a file, the user has to input the file name in the text box and then press the “Submit” button. The problem is that the button is very far away from the text box. The user has to physically move the pointer from one side of the screen to the other, which is an unnecessary performance load. In the author’s own experience, it often

happens that the user presses the button below the text box instead of the “Submit” button. This happens because pressing the button below requires less effort, and is therefore cognitively seen as preferable. Another (bad) example of performance load is given by Apple’s cover flow. The idea of cover flow is that you can search for a file by browsing a list of pictures representing the files (Figure 7.7). The problem, though, is that you have to physically move the mouse through a potentially very long list of pictures, which represents an excessive performance load.

!

Figure 7.7 Apple’s cover flow has excessive performance load: Users must keep repeatedly dragging with the mouse

Fitts’ Law Paul Fitts was interested in understanding the human capacity for performing motor tasks. He made multiple experiments with people operating different types of apparatus, such as moving a stylus to a target plate and transferring a disk from one washer to another. The results from these experiments showed that the movement error is proportional to the logarithm of the ratio of the tolerance to the possible amplitude range. Even though these experiments did not use computers, they contributed significantly to understanding how humans interact with computers. In particular, we can substitute the stylus and target plate with a computer mouse and screen. Fitts’ law then allows us to predict the accuracy and duration of moving a pointer to a visual object shown on the screen. Subsequent studies have developed ways to measure users’ performance with the mouse, while at the same time confirming the predictions suggested by Paul Fitts. The following equation describes Fitts’ law: T = a + b · log2(2A / W) T is the time to execute the task, a measure of speed using the mouse. A is the amplitude of movement and W is the width of the target. The constants a and b have been determined through empirical experimentation.

!

Figure 7.8 Fitts’ law

Figure 7.8 gives a visual annotation of how the equation is applied. It involves a pointer, controlled by the computer mouse, and a target, which can be a button, menu, text box, etc. W corresponds to the width of the target in the direction of the trajectory of the pointer. (The formula is unidimensional.) And A corresponds to the distance that has to be traversed for the pointer to reach the target. The formula should be used as a metaphor. The factor log2(2A / W) reflects the difficulty of reaching the target with the pointer. The greater the amplitude of movement, the more difficult it is to achieve the task. The larger the target, the easier it is to achieve the task. Regarding UX design, Fitts’ law highlights that users have more difficulty reaching a distant object with the mouse than a proximate one - and the difficulty increases logarithmically with the distance. It also highlights that making an object bigger reduces that difficulty - and that relationship is also logarithmic. This means that interacting with very small objects on the screen is difficult but can be improved with small increases in size; however, making objects too big does not bring additional benefits. The same can be said about distance: bringing a nearby object even closer to the pointer gives significant benefits, but because of the logarithm, changes in distance matter less once objects are already far away. Besides considering the distance to objects and the objects’ sizes, we can also consider the time to execute the task. In that regard, the formula tells us that if we want to move the pointer faster, then we need to reduce distance and increase size. However, when we design a UI, we tend to define fixed distances and sizes. So that option is not available, and what happens is that if users increase speed then accuracy will be lower. The consequences for UX design are: • Objects that users interact with in sequence should be in proximity. Example: In an e-commerce website, put the “Select” and “Buy” options together, since users will probably press these buttons sequentially • Objects that users interact with often should be centred on the screen, to minimise distance (assuming the mouse rests at the centre of the screen) • Avoid selection targets that are too small. Check the example in Figure 7.9: the numbers shown in Google Search are too small and therefore the error ratio in selecting them is high. Figure 7.10 shows that Amazon uses bigger numbers to reduce the error ratio • Make the most likely options bigger. Example: In an e-commerce website, “Buy” should be bigger than “Conditions of Use”
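The predictions above follow directly from the formula. A short sketch (the constants a and b are arbitrary placeholders, since real values must be fitted empirically for a given device and user population):

```python
import math

def fitts_time(amplitude, width, a=0.2, b=0.1):
    """Predicted movement time T = a + b * log2(2A / W), in seconds.
    a and b are illustrative placeholder constants, not fitted values."""
    return a + b * math.log2(2 * amplitude / width)

# Doubling the distance to a target of fixed width adds a constant
# b seconds, reflecting the logarithmic difficulty:
near = fitts_time(amplitude=100, width=20)
far = fitts_time(amplitude=200, width=20)
```

The same computation shows the symmetry with size: doubling the target width cancels out doubling the distance, which is why small targets can be rescued with small increases in size.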

Sometimes Fitts’ law is maliciously used by designers to increase the time users take to perform a motor task. One obvious example is marketing. Check the example in Figure 7.11. Do you see where they placed the “X” button, and how tiny it is? They clearly want you to spend more time on the page, so that you read the message. Figure 7.9 The numbers in Google Search are too small

! Figure 7.10 Amazon uses bigger numbers

!

!

Figure 7.11 Closing this window has been made intentionally difficult

References Fitts, P., 1954. The information capacity of the human motor system in controlling the amplitude of movement. Journal of experimental psychology, 47(6), p.381. MacKenzie, I.S., 1992. Fitts' law as a research and design tool in human-computer interaction. Human-computer interaction, 7(1), pp.91-139. Accot, J. and Zhai, S., 1997, March. Beyond Fitts' law: models for trajectory-based HCI tasks. In Proceedings of the ACM SIGCHI Conference on Human factors in computing systems (pp. 295-302). ACM.

Hick’s Law William Hick was interested in understanding how fast people make decisions when facing multiple choices. He carried out several experiments to understand which formula would fit the choice-reaction time relationship. This was done with experiments that would either increase the number of choices, or demand that the participants choose faster. William Hick found a relation between the reaction time and the number of choices, expressed in the formula: T = 0.518 · log10(n + 1) Where T expresses the time to make a decision in seconds and n expresses the number of choices. The logarithmic relationship indicates that cognitive effort increases with the number of options that have to be considered. This reflects a dilemma of choice. Figure 7.12 Too many options makes it difficult to buy (Source: Ikhlasal Amal / Foter / CC)

! However, we observe that the rate of increase is higher between, say, one and five choices than between five and ten choices (ten being the maximum number investigated by Hick), which suggests the dilemma of choice is more acute with a small number of options. The formula indicates that people, when faced with more options, will spend more time selecting one. Conversely, another way of interpreting this phenomenon is to observe that, when seeking to increase speed, people will necessarily increase the chances of making wrong choices.
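The diminishing effect of extra options can be checked directly with the formula from the text (a direct transcription; 0.518 is the empirical constant given above):

```python
import math

def hick_time(n, b=0.518):
    """Decision time in seconds for n equally likely choices,
    using the constant from the text (base-10 logarithm)."""
    return b * math.log10(n + 1)

# The dilemma of choice is steeper for the first few options:
early = hick_time(5) - hick_time(1)   # cost of adding options 2..5
late = hick_time(10) - hick_time(5)   # cost of adding options 6..10
```

Running the comparison shows that `early` is larger than `late`: going from one to five choices slows the decision more than going from five to ten.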

Hick’s law has been criticised in the cognitive field. A problem that has been pointed out is that it is based on an information processing model of humans that has been superseded by more sophisticated accounts of human behaviour. Still, some useful lessons can be derived for UX design. Have you observed the trend of selling fewer items in a store, rather than many? You find that phenomenon in upmarket fashion stores (check Figure 7.12 and Figure 7.13). The main idea is to avoid the dilemma of choice, which may lead a buyer to delay a decision. You can also find this approach in many website designs. Figure 7.13 Upmarket stores reduce choice to increase sales (Source: Benjamin Page / Foter / CC)

!

Other recommendations to help reduce the dilemma of choice include: • Arrange information items by convenience • Split information into sub-levels • Customise information to users • Give previews of the available options

References Hick, W., 1952. On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1), pp.11-26. Seow, S., 2005. Information theoretic models of HCI: a comparison of the Hick-Hyman law and Fitts' law. Human-Computer Interaction, 20(3), pp.315-352. Rosati, L., 2013. How to design interfaces for choice: Hick-Hyman law and classification for information architecture. In Proceedings of the International UDC Seminar (pp. 121-134).

Chapter 8

Rules of Thumb

Consistency

Consistency is a rule stating that all elements of the user experience should be presented in a consistent way. This includes the following dimensions.

Functional consistency

Similar system functions should be executed using similar user interactions. Example: the “save file” and “delete file” functions should require similar interactions: 1) click [file name] to select the file; 2) press the [Save] or [Delete] button to execute.

Visual consistency

UI objects that are functionally related should be presented using a common theme. Example: the “save file” and “delete file” operations should have buttons with the same shape, size and colour. In addition, they should be placed close to each other.

Interaction consistency

The sequence of actions necessary to achieve a goal should follow a common theme. Example: the whole “login” function, which usually involves multiple interactions, should have a distinctive story line: 1) a “Login” button is shown in the top-left corner, with a label saying “you are not logged in”; 2) the user presses login and a distinctive text box shows up in the top-left corner, together with a label saying “logging in”; 3) the whole login sequence of actions proceeds, always with a distinctive story line highlighting the intended goal.

Feedback consistency

Feedback messages for common functions should follow a common theme. Example: feedback messages for the “login” and “sign up” functions should follow the same template and the same type of wording.
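One common way to enforce this rule is to route all feedback through a single shared template, so wording and structure cannot drift apart between functions. The sketch below is hypothetical (the `feedback` helper is not from the book):

```python
# Hypothetical sketch: one shared template produces the feedback for every
# function, so "login" and "sign up" messages stay consistent in wording.
def feedback(action: str, ok: bool, detail: str = "") -> str:
    status = "succeeded" if ok else "failed"
    message = f"{action.capitalize()} {status}."
    return f"{message} {detail}".strip()

print(feedback("login", True))                                  # Login succeeded.
print(feedback("sign up", False, "Email already registered."))  # Sign up failed. Email already registered.
```

Because every message is generated from the same template, adding a new function (say, “reset password”) automatically inherits the same tone and structure.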

User consistency

Similar types of users should perceive the user experience in the same way, while distinct types of users should see different UIs. Example: the “login” function may concern, for instance, clients and suppliers. The login should be distinctive for each group.

Applying consistency

As noted above, consistency does not only pertain to the adopted visual language, i.e. the selection of UI objects, colours, fonts, etc. Consistency should be applied to all other details involving the user experience. Lack of consistency in website design is common. Check, for instance, the example given in Figure 8.1 and Figure 8.2. The webpages are from the same site, but have very different look and feel.


Figure 8.1 Home page of computerlounge.co.nz



Figure 8.2 The other pages in computerlounge.co.nz lack affinity with the home page. Note in particular the menu on the left (which is missing in the home page), the cart (also missing in the home page), many significant differences in the heading (search box, size, colour scheme), and display of elements. In the end, how can the user build a mental model of this website?


Another example of lack of consistency is given by the BNZ website illustrated in Figure 8.3. The figure shows the main layout displayed after the user logs in. It is a clever design, clearly thought out to be used on mobile devices. However, if the user decides to do an international transaction, an old layout is presented (Figure 8.4). A possible explanation for having both the new and old layouts is that the new layout has not yet been finished. However, for the user, there is a clear inconsistency: visual, functional and interactional.

Figure 8.3 BNZ current layout


Functional inconsistency is particularly troubling in the BNZ example. The new layout has been optimised for mobile devices, allowing users to execute certain operations by dragging UI elements. For instance, a payment can be made by dragging an account to a payee. But the old layout operates in a completely different way, by selecting options using menus. Therefore the user must handle two completely different paradigms at the same time, with additional cognitive effort and potential breakdowns.


Figure 8.4 BNZ old layout


Figure 8.5 shows a more subtle example of interaction inconsistency. This parking meter, which is very common around Wellington, allows users to pay with coins, credit card or text message. Observe that to pay with credit card or text message the user has to press a button on the left or on the right. However, to pay with cash, you do not press a button; you just put some coins in the slot. The interaction inconsistency is that, for the same task, sometimes you press a button and other times you do not. Of course, the reason for this inconsistency is that the parking meter does not have a third button. The missing third button was cleverly replaced by a system function that infers the user wants to pay with coins when coins are inserted into the meter. But if you think carefully, it would be possible to design the system to also avoid using buttons in the cases where you pay with a credit card or a text message. This would make the system work in exactly the same way, regardless of the payment option.


Figure 8.5 Parking meter: Inconsistent interaction between paying with credit card and coins (Source: Author)


Minimalism

The concept of minimalism has its origins in the arts. It emphasises an aesthetic movement towards reducing the message to the essential. In painting, it is expressed, for instance, by geometric forms, flatness and simple colours. In architecture, it adopts simple and functional spaces. In writing, it is expressed through the economy of words.

Figure 8.6 Helvetica: A revolution in minimalism

Figure 8.7 Times: A preference for detail


Minimalism has been very relevant in every area of design. A good example is the predominance of the Helvetica font (Figure 8.6), which brought a new perspective to typography by making the characters very utilitarian, using simple strokes. You can compare it with Times (Figure 8.7). Check, for instance, the character 5. In Times, the strokes have rounded, artistic ends. By contrast, in Helvetica the character 5 shows straight, undecorated strokes. Check in particular the horizontal strokes.

The trend towards minimalism is recent in UX design. An obvious adopter is Apple, which in 2013 developed a flat design for its mobile operating system. Check Figure 8.8 and Figure 8.9. Observe that icons in the early design had three-dimensional effects, using shadows, which made them look more physically realistic.

Figure 8.8 iOS 5 icons were designed with a three-dimensional effect and realistic detail

Figure 8.9 In iOS 9, the icons are flat and more abstract


Figure 8.10 A classic example of extreme noise in website design: The www.arngren.net home page

Furthermore, the icons had much more detail. A good example is the YouTube icon. In the later design, the three-dimensional effects were removed and most icons have a simpler, more abstract presentation. The goal of minimalism is to increase clarity by reducing noise. In principle, clarity improves the user experience, while noise represents an unnecessary cognitive burden. Consider the website shown in Figure 8.10. It is a classic example of extreme noise in website design. It is surprising that such a design still exists today. Perhaps it exists to make fun of UX designers.

Figure 8.11 Minimalism in www.wordpress.com


In obvious contrast, observe the minimalistic approach adopted by Wordpress (Figure 8.11). The actual content provided has been reduced to the bare minimum. The main focus is on a core function: create your new website for free. The picture, which adds interest, has been toned down to reduce noise. However, some may argue that minimalism can increase ambiguity. A good example of excessive minimalism is Apple’s “slide to unlock” feature. In previous, non-flat designs, “slide to unlock” used a sliding button with a text message, which made the interaction very intuitive (Figure 8.12). The new design, shown in Figure 8.13, eliminates all visual information.

The feature is there, as a hidden affordance, but the minimalist philosophy accepts that it can be hidden. This seems to be a regression in UX. Another unfortunate example of minimalism is given by Apple’s Calendar (Figure 8.14). The UI shows small circles on the days when you have a meeting appointment, but no further details are given.

Figure 8.12 “Slide to unlock” button in the non-flat iOS design

Figure 8.13 The new “slide to unlock” in iOS removes the button


Figure 8.14 Excessive minimalism: Information about appointments that is not very informative


Some people nevertheless argue in favour of the minimalistic approach, noting that users are already familiar with the iPhone. But such an argument only holds when a UI is so popular that people already know how to use it. In summary, even though reducing noise is good practice, UX designers should avoid excessive minimalism: hidden affordances, ambiguity in information delivery, and lack of features should be avoided.

References

Zhang, J., Patel, V., Johnson, K., Malin, J. and Smith, J., 2002. Designing human-centered distributed information systems. IEEE Intelligent Systems, 17, pp.42-47.


Golden Ratio

The golden ratio (Figure 8.15) states that the relationship between two segments, a and b, expressed as (a + b) / a, should follow the constant 1.618... (an irrational number). This geometrical relationship has been found to occur in nature, e.g. in trees and plants. The golden ratio has also been the focus of attention of artists.

Figure 8.15 Golden ratio: (a + b) / a = 1.618

In the artistic field, the golden ratio has been advocated as a source of aesthetics. Paintings exhibiting this geometrical relationship are perceived as more balanced and pleasing than those that do not. The same argument has been used in architecture to advocate the adoption of the golden ratio in buildings.

Figure 8.16 Which layout do you prefer? The bottom one uses the golden ratio


In UX design, it has also been suggested that UI layouts following the golden ratio are perceived as more aesthetically pleasing. For instance, consider the two layouts shown in Figure 8.16. Which one do you find more pleasant?

The top layout uses two areas, each 100 points wide. In the bottom layout, one area is 125 points wide and the other 75 points (the ratio is not exactly 1.618, but close enough). There is a very good chance that you prefer the bottom layout, which uses the golden ratio. Even if you prefer the top layout, the best option is nevertheless to choose the bottom one. The main reason is that the golden ratio has become part of our aesthetic cultural background, so it is very difficult to avoid it.
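The split itself is easy to compute. The `golden_split` helper below is a hypothetical sketch (not from the book); for a 200-point layout it yields roughly 124 and 76 points, close to the 125/75 split used in the example:

```python
# Split a total width into two panels whose widths approximate the
# golden ratio: (a + b) / a = phi = 1.618...
PHI = (1 + 5 ** 0.5) / 2  # 1.6180339...

def golden_split(total: float) -> tuple[float, float]:
    a = total / PHI   # the larger panel
    b = total - a     # the smaller panel
    return a, b

a, b = golden_split(200)
print(round(a, 1), round(b, 1))  # -> 123.6 76.4
```

In practice, designers round such values to convenient whole numbers, which is why 125/75 is "close enough" to the ideal ratio.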


Worse is Better

The idea of “worse is better” came from the software design and engineering fields. The idea is that, if you want to develop a system with excellent quality, it will take so much time that you will ultimately fail. Instead, if you just try to deliver something that barely works, you may have a win. This principle should be understood as a recommendation to deliver systems that offer very few features, but do them very well. By focussing on a small number of features, you can develop the system faster and get a strong foothold on the users’ preference. Later on, you can keep adding features to make the system more versatile. If you try to add all features to the system before releasing it, there is a chance that you never win the users’ preference, losing the opportunity to your competitors.

This principle also concerns UX designers. When systems are very complex, the development of a UI may take a lot of time, which puts the project at risk. It is therefore preferable to focus on simplicity and only develop the UI for the most critical functions. One highlight of this approach is that we often find systems that excel at doing one very simple thing. Twitter and Doodle are very good examples. Twitter started by doing one simple thing really well: spreading messages with fewer than 140 characters. You may argue that 140 was too limiting. However, that decision solved many technical problems, which would otherwise have delayed the system’s development. In particular, with more than 140 characters, it would have been difficult to send tweets via SMS on old mobile phones. Doodle is a one-trick pony: inviting people to meetings. However, it works very well because it operates in a simple way and does not depend on any proprietary technology (such as Microsoft Exchange). This approach should not be confused with the now common decision to release beta versions of systems, which are supposed to be complete but usually have many bugs.
Worse is better recommends focussing on a small set of features that work very well.

References Gabriel, R., 1991. The rise of “worse is better”. Lisp: Good News, Bad News, How to Win Big, 2, p.5.

Chapter 9

User Experience

Threshold of Indignation

Different users have different expectations and needs regarding a UI, which means that usability is an elastic concept. For example, if you have an urgent need to use a software tool that has a really bad UI, you may decide to use it anyway, trading usability for need. In other circumstances, where there is no urgency, you may give more value to usability and avoid using the tool.

Figure 9.1 Threshold of indignation


Paul Saffo suggested the concept of “threshold of indignation” as a way of discussing the users’ elasticity regarding the user experience. The threshold of indignation is the point where a user refuses to use a UI because of an unfavourable tradeoff between user experience and capability/needs. On the one hand, this threshold depends on ease of use: users will favour UIs that are easy to use. On the other hand, the threshold also depends on the type of user: common users are unwilling to pay the costs of using a difficult UI, especially when they have other options; professional users will learn difficult systems if they need to (e.g. in business environments, where certain tools are mandatory); and hackers are willing to use a difficult UI just for the challenge. Figure 9.1 illustrates how the threshold of indignation depends on the type of user and ease of use. Designers should be mindful of the threshold of indignation, especially when designing consumer products, where the threshold is very low and products can be easily rejected. An example of this behaviour is the supermarket self-checkout (Figure 9.2). This is a difficult UI. Users have to scan the goods, which does not work all the time. Often goods have to be searched using quirky codes or complex navigational UI elements. And paying for the goods can also be complex. Designers should be mindful that users have another option available: the traditional checkout. At the limit, users may leave the supermarket without the goods. So there are two ways to make this work: either make the self-checkout really easy to use, or increase the waiting time at the traditional checkout.

Figure 9.2 The threshold of indignation when using the supermarket self-checkout is rather low: On the one hand, the UI is complex, while on the other hand users can choose to use the traditional checkout. (Source: Kgbo / CC)


In reality, there is a third option. As noted by Paul Saffo, users tend to learn complex UIs with repeated use, moving from common users to professional users and hackers. We can expect that behaviour with the supermarket self-checkout, where users occasionally facing long lines at the traditional checkout may decide to learn the new system and gain expertise over time.

References

Saffo, P., 1996. The consumer spectrum. In Winograd, T. (ed.), Bringing Design to Software (pp. 87-104). ACM.

Emotion Versus Utility

The tension between emotion and rationality has a long history. Philosophers like Descartes long discussed the nature of being human as a split between mind and body, each an autonomous system. While the mind is concerned with high-level, abstract and rational reasoning, the body deals with the physical world, representing the low-level, organic functions necessary to sustain life and using emotion as a mechanism for interacting with the physical world, e.g. through pain and pleasure. This dualistic view has been challenged by advances in the neurosciences, which have identified closer interactions between the two realms than initially thought. For instance, studies of pain suggest that physical and psychological pain can influence each other. Therefore the two systems are not really independent. By considering that mind and body are intertwined systems, one has to accept that user experience is a holistic combination of mind and body. As a consequence, we cannot conceive of design as solely focused on the rationality of goals and functions - the utilitarian, task-oriented perspective. Rationality competes with emotion, reflecting how we interact with the physical world, in both positive and negative ways - the emotional perspective.

Expectation

One aspect of UX related to emotion is that users can establish a relationship with a UI even before using it. For instance, the entertainment industry is very good at creating anticipation about a new movie or a new game. Marketing also explores expectations in its campaigns. The food industry is very good at, for instance, composing and displaying dishes in a way that compels people to buy a product or go to a restaurant. The fashion industry regularly uses smell to compel people to enter a store and to recognise a brand. The same argument can therefore be applied to UI design: software systems can be designed to attract users even before they have been used.

Retention

The other side of the coin of expectation is the emotional reaction after using a system. A positive experience will be retained in long-term memory, so that users will be compelled to experience the system again. A negative experience will cause the opposite long-term effect.

Engagement

Engagement concerns the immediate emotional reaction to system use. It is related to short-term interaction with a UI. The model shown in Figure 9.3 has been suggested to explain engagement. If we consider the strong lines, we note that direct experience generates either positive or negative emotions, which then influence judgement. Judgement is also influenced by goals and mood. In turn, actions are influenced by goals, memory of past events, and judgement. The weaker links suggest that mood and emotions are somewhat related, as are experience and goals.

Figure 9.3 A model of engagement. Dotted lines illustrate weaker relationships

All in all, the model shows that engagement is a complex concept that connects many other psychological concepts. Designers may find it difficult to design for positive emotions without close contact with users to appreciate how to develop engagement.

Empathy

Look at Figure 9.4 and Figure 9.5. Which design creates more empathy? One seems dull, cold, threatening, and quite distant from the user. The other seems more humanistic, friendly and approachable. One uses friendly text, images and symbols to create empathy. The other fails to use these elements in a positive way.


Figure 9.4 Creation of empathy through humanism

Figure 9.5 Lack of empathy through threat


Safety

Many systems lead users towards a sense of lack of safety, for instance because information may be lost, or actions may have negative impacts on the task goals. An example of this phenomenon can be found in Plunker. Plunker has a confirmation box that notifies users when they may lose file edits (Figure 9.6). However, the confirmation box is not very visible. Often users start working on a new file version without noticing they have not saved the previous one. Therefore, when they notice the problem, the dilemma is deciding what they are going to lose: either the previous version or the current version. One will be lost! A more complex example is the infamous “fat finger” when trading in the stock markets. Cases have been reported where traders lost millions of dollars by pressing the wrong button.

Figure 9.6 What do you think of this confirmation box in Plunker, does it create a feeling of safety or not?

A possible example of a fat finger occurred on May 6, 2010, when a frantic selloff drove the stock market down for no logical reason. The phenomenon may have happened because a trader entered a sell order for shares but accidentally entered billions instead of millions. The stock market responded frantically and the Dow Jones lost almost 1000 points in a few minutes. From a design perspective, this problem raises an interesting challenge: on the one hand, designers need to develop rapid-response systems, so that traders are able to sell fast when the market is going down; on the other hand, designers need to include checks to avoid fat fingers, which necessarily delay the task.

Anxiety

Anxiety emerges when users feel they do not have the skills necessary to complete a task. Designers can reduce anxiety by simplifying the UI, using progressive disclosure, providing rich feedback, supporting undo/redo, and adding learning features to the UI.

Competence

Competence refers to the feeling that one has the skills necessary to complete a task efficiently and effectively, and also the feeling of understanding how a system behaves. Designers can increase competence by designing interactions that exhibit clear mental models. Check the mechanical voting machine shown in Figure 9.7. It has been designed to give confidence in the capacity to operate it. A candidate is selected by rotating a mechanical lever. When the lever is rotated, a cross clearly identifies which candidate has been selected. The operation can easily be reversed. Electronic voting systems usually require much more complex interactions, which reduce the perceived level of confidence.


Figure 9.7 This mechanical voting machine makes every user feel competent to use it (Source: John Morton / Foter / CC)


Sense of control

A sense of control contributes to a positive emotion towards using a system. Users like to know they have an influence over the task completion. Designers can improve the sense of control by allowing users to customise the UI, providing undo/redo features, and avoiding obscure automated features. A great example that illustrates the problem of control comes from aviation. In the early days, the control yoke was physical, and therefore the pilot knew for sure that moving it would have a direct effect on the ailerons. Nowadays, however, the yoke has been replaced by a joystick that is controlled by a computer, which then operates the ailerons. This creates an effective sense of loss of control: the computer is in control, since it can override the pilot’s actions. Most of the time the computer follows the pilot’s instructions, but in many cases it does not. For instance, if the computer considers that the pilot’s actions are unsafe, it overrides the pilot. As a consequence, pilots know they are not in control.

Arousal

Arousal is an interesting concept. It refers to the level of attention that a user applies to the intended task. Arousal influences the user’s efficiency in accomplishing a task. With insufficient arousal, the user will be distracted by other things and therefore lose efficiency. With too much arousal, the user will be overwhelmed and unable to perform adequately. As suggested by Figure 9.8, the best performance is achieved when arousal is sufficiently high to avoid disinterest and sufficiently low to avoid anxiety. Game designers understand this challenge very well: a game where nothing really happens is soon abandoned for being too boring, while a frenetic game may be seen as too difficult and abandoned as well. Therefore game designers strive to have something happening all the time, keeping users in the curiosity zone.

Figure 9.8 Efficiency is maximised when arousal is between coma and frenzy

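The inverted-U relationship sketched in Figure 9.8 can be illustrated with a toy model. Both the quadratic shape and the numbers below are illustrative assumptions, not measurements from the book:

```python
# Toy inverted-U curve: arousal in [0, 1], where 0 ~ coma and 1 ~ frenzy.
# Performance peaks at moderate arousal (0.5) and falls off at both extremes.
def performance(arousal: float) -> float:
    return 4 * arousal * (1 - arousal)

for a in (0.1, 0.5, 0.9):
    print(f"arousal={a} -> performance={performance(a):.2f}")
```

The curve is symmetric around its peak: too little arousal (0.1) and too much (0.9) yield the same poor performance, while moderate arousal (0.5) maximises it, which is the zone game designers aim for.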

References

Mehta, N., 2011. Mind-body dualism: A critique from a health perspective. Mens Sana Monographs, 9(1), p.202.
Kaasinen, E., Roto, V., Hakulinen, J., Heimonen, T., Jokinen, J.P., Karvonen, H., Keskinen, T., Koskinen, H., Lu, Y., Saariluoma, P. and Tokkonen, H., 2015. Defining user experience goals to guide the design of industrial systems. Behaviour & Information Technology, 34(10), pp.976-991.
Sutcliffe, A., 2016. Designing for user experience and engagement. In Why Engagement Matters (pp. 105-126). Springer International Publishing.

Beyond Utility

Non-instrumental goals

People’s goals and needs vary with context. A person facing a life-threatening situation, such as being in a desert without water and food, has one single need: to survive. Another person facing no physical, economic or social threats may start considering other goals, such as having a better education or being well regarded by others. A client in desperate need of a place to sleep will not mind the unusual sign shown in Figure 9.9, while people more concerned with self-esteem will consider sleeping in that hotel a really bad experience.

Figure 9.9 Unusual sign in a $7-per-night hotel room: The experience with this hotel will depend on the hierarchy of needs (Source: Author)


Abraham Maslow developed a theory suggesting that human needs evolve according to five different priorities, which reflect personal circumstances (Figure 9.10). The hierarchy starts with physiological needs and ends in self-actualisation. At first, it may seem unreasonable to take this theory to the UX domain. However, the theory highlights that users may deal with goals that go beyond the instrumental. Instrumental goals concern having to perform a mandatory or unavoidable task, which may be required by a job. Performing tasks is therefore at the bottom of the hierarchy of needs.

Figure 9.10 Maslow’s hierarchy of needs


Moving up the hierarchy, we may then find other immediate needs, such as performing a task safely, effectively and efficiently. But we could then move further up, addressing needs that go beyond the instrumental. Ultimately, users would like to perform a task in a pleasing way. If more than one system can be used to fulfil a task, it seems reasonable that the user will weigh considerations that go beyond instrumental goals, e.g. choosing the tool that feels aesthetically better.

Aesthetics

Beauty is nowadays considered an important quality in technology design. One contributor to this added importance is that technology is more commoditised than it used to be, and therefore users can make beauty a critical factor in their decisions. For instance, early mobile phones were ugly and unfriendly, but people would stick with them for a long time because they were very expensive. Nowadays, cost is a lesser factor, so users may change phones as soon as one with better aesthetics appears on the market.

Familiarity

Musical instruments are quite interesting artefacts from a UX point of view. Most instruments, such as the flute and the violin, are extremely difficult to use. So, why not improve their design? Why do we keep using violins instead of designing simpler, more user-friendly alternatives? The answer is familiarity.

Hedonism

Still regarding musical instruments: if they are so difficult to use, why do people use them? Well, in most cases the answer is not utilitarian. After all, most people do not play musical instruments to achieve utilitarian goals. They do it for pleasure. Often they endure the extreme pains of learning a complex instrument such as the violin for the pleasure of listening to the music.

References

Maslow, A., 1943. A theory of human motivation. Psychological Review, 50(4), pp.370-396.
Hassenzahl, M., 2013. User experience and experience design. The Encyclopedia of Human-Computer Interaction.
Hassenzahl, M. and Tractinsky, N., 2006. User experience - a research agenda. Behaviour & Information Technology, 25(2), pp.91-97.


Chapter 10

Prototyping

Prototyping Mindset

Prototyping is the practice of building mock-ups of a new system. You can find this practice in many fields. For instance, car makers regularly build various prototypes of new car models, ranging from static mock-ups made of wood and plastic, to concept cars showcased in auto shows, to fully operational cars tested on race tracks. Architects also create physical prototypes of their designs, which they show to clients in order to get feedback and approval. Nowadays it is also common for architects to create virtual-reality mock-ups that allow clients to wander around the building design, which gives a better feel for the designed solution.

Figure 10.1 Analysis versus prototyping mindsets

Analysis mindset | Prototyping mindset
Learn from documentation | Learn by doing
Deduce from the existing facts | Consider different problems, views and solutions
Structure existing information using objective criteria | Exploration, serendipity and thematic vagabonding
Analysis is a one-off event | Prototyping is an iterative process
Make a clear recommendation based on the selected criteria | Develop and explore several alternatives
Output is a recommendation | Output is a quasi-functional solution
Take as much time as needed | Finish as soon as possible

There are many reasons to adopt prototyping when developing a new system. An important one is to reduce risk. If you do not know much about the system, do not know whether it is feasible, and do not know whether users are going to accept it, it makes a lot of sense to prototype. As you gain insights about the system, you reduce the risk of developing the wrong system. Of course, a fundamental requirement of prototyping is that it should represent significantly less effort and cost than building the real thing; otherwise it would not make sense to do it. We could perhaps finish the chapter here, saying that prototyping is a risk-reduction technique. However, there is much more to consider. Prototyping can also be a mindset. Figure 10.1 contrasts prototyping with the analysis mindset. It shows that prototyping is anchored in completely different assumptions about how to approach a problem, how to develop it, and how to progress from the problem to the solution. Charles Owen summarises these differences by noting that thinkers can be divided into two categories: finders and makers. Prototyping belongs to the makers category.

References Owen, C., 2007. Design thinking: Notes on its nature and use. Design Research Quarterly, 2(1), pp.16-27.

Purpose of Prototype

Vertical prototype

To discuss the concept of a vertical prototype, we can visualise the system as a round cake with fancy icing. The whole cake corresponds to the set of functions implemented by the system, while the icing refers to the user stories, which describe the system functions from the users’ point of view. If you cut the cake into slices, each slice corresponds to one particular function. When you look at a single slice, it has a bit of the icing, which refers to one single user story, for instance paying for a product online. As you look at the slice from top to bottom, you find layers of functions necessary to implement the user story. For instance, in order to pay for a product online, the user needs to input the payment details (credit card number, expiration date, etc.). In a layer below, the system has to implement a UI object to gather the credit card number. In a lower layer still, the system needs to send the user’s input to a server. And so on. The idea of vertical prototyping is that you only prototype a small number of user stories - a few slices of the cake. However, you prototype all functions necessary to implement the selected user stories. The main purpose of this type of prototype is to explore the system in a very detailed and realistic way. The realism, however, is limited to a small set of features, which have been developed from top to bottom.

Horizontal prototype

In this case you do not slice the cake vertically. Instead, you slice it horizontally, close to the icing. This means that the prototype implements all user stories, but it does not go down into the details. For instance, the prototype of an e-commerce shop would have the search, browse, select and pay user stories, but it would not implement the low-level functions necessary to actually search, browse or even select a product. The main purpose of this type of prototype is to develop a good overview of the system, albeit one that is not very realistic. It is common for horizontal prototypes to only develop the UI, with all inputs and outputs but no associated system functions. The functionality is there, but it is very shallow.

“T” prototype
A “T” prototype is a combination of the vertical and horizontal prototypes. Using our cake illustration again, the idea is that the designer slices the whole cake horizontally, but also takes some complete slices, with all layers of functionality from top to bottom.

The main purpose of this type of prototype is to develop a complete overview of the system, which is not very realistic, except for a small set of functions, which can be explored with greater realism.

Evolutionary prototype
Usually, prototypes are sent to the garbage bin after it has been decided what system to develop. In some cases, however, a strategy can be adopted to evolve the prototype towards the final system. For instance, a website can be developed this way: the designer builds a horizontal prototype using a platform such as WordPress but then, instead of scrapping the prototype, further develops and integrates it with other software to build the final system. Even though this approach is tempting, there is the risk of delivering a poor-quality system, as prototypes are often developed with the explicit purpose of being thrown away.

Fidelity of Prototype

Low-fidelity prototype
A low-fidelity prototype is not expected to be very faithful to the final system. The other side of the coin is that a low-fidelity prototype can be done fast and cheaply. Very often low-fidelity prototypes are done with paper and pencil. Figure 10.2 and Figure 10.3 give two examples of paper prototypes.

Figure 10.2 Paper prototype (Source: Author)


Well, actually, the best way to develop a low-fidelity prototype is not to use paper alone but a combination of thick paperboard, transparencies and post-it notes. Post-it notes are very useful for designing dynamic elements such as buttons, menus and alert messages. And transparencies can be used to develop multiple layers of data displays. Figure 10.4 shows a prototype developed for a mobile emergency management system, which involved locating assets on maps. Transparencies were used to handle map interaction. In the same picture, you can also see a transparency with a collection of post-its of different sizes and colours. These post-its were used to implement the buttons, menus and other objects required by the UI.

Figure 10.3 Paper prototype (Source: Author)

Figure 10.4 Using transparencies and post-its in paper prototypes (Source: Author)


Figure 10.5 Running a paper prototype (Source: Author)


One issue that should be emphasised is that these prototypes are not static mockups. Paper prototypes should be executable. That is, it should be possible to have a user interacting with a paper prototype as closely as possible to how they would interact with the real system. Figure 10.5 shows a paper prototype being used in a trial session involving users and designers. The designers are responsible for “running” the prototype, changing the post-its according to the users' interactions. Running such a prototype is similar to staging a theatre play. Finally, note that low-fidelity prototypes can be developed using different techniques and tools. For instance, they can be developed with sketchy wireframes.

High-fidelity prototype
High-fidelity prototypes provide much more detail about the system, emphasising the look and feel. They can be developed using wireframing tools, which provide several dynamic features that help make the prototype very realistic. High-fidelity prototypes can respond to users' interactions exactly like the real system would. Because of this level of detail, high-fidelity prototypes are much more expensive to develop than low-fidelity prototypes. They are also less flexible. Unlike paper prototypes, which can be redesigned on the spot in trial sessions with the users, changing a high-fidelity prototype has to be done offline.

The other side of the coin is that high-fidelity prototypes allow capturing fine-grained details about the users' interactions. Figure 10.6 shows a trial session in which the tool logged every user interaction while the evaluator gave instructions and took notes.


Figure 10.6 Trial session with a user (Source: C. Sapateiro)
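Capturing such fine-grained interaction data usually amounts to timestamping every UI event. The sketch below is a generic illustration, not tied to any particular wireframing tool; the widget names are invented.

```python
import time

class InteractionLog:
    """Records timestamped UI events during a trial session."""

    def __init__(self):
        self.start = time.monotonic()
        self.events = []

    def record(self, widget, action):
        # Store elapsed time since session start so sessions can be compared.
        elapsed = time.monotonic() - self.start
        self.events.append((round(elapsed, 3), widget, action))

# During the session, the prototype calls record() on every interaction.
log = InteractionLog()
log.record("search_box", "focus")
log.record("search_box", "type:running shoes")
log.record("search_button", "click")

for t, widget, action in log.events:
    print(f"{t:8.3f}s  {widget:14s} {action}")
```

From such a log one can later derive quantities like time on task or number of clicks, which become useful in the evaluation methods discussed in the next chapter.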

Chapter 11

Evaluation

Rigorous versus Rapid

Rigorous evaluation
Rigorous evaluations use data gathering methods that are simultaneously:
• Precise. The method is capable of properly measuring the variables being evaluated, within a reasonable margin of error
• Reliable. The method establishes a consistent relationship between the variables being measured and the obtained measurements, which is stable across evaluations
• Generalisable. The method provides measurements that can be confidently used across different cases and evaluation sessions
These three properties are necessary to be able to make substantive claims about the evaluation results, like “design A is better than design B”. Without a precise method, one can never be sure whether the measured differences between two designs are caused by the design changes or not. Without a reliable method, we may get results that cannot be compared against each other or against a baseline. And without generalisability, the results may not be meaningful beyond the project. The evaluation methods that fulfil the above properties have usually been proved by the research community multiple times, so they are robust and accepted as good practice.

The problem with rigorous evaluations is that precision, reliability and generalisability usually bring a significant cost to the evaluation process. To start with, the UX designer must know well the constraints and requirements of the selected method. Such expertise is costly to obtain. The designer must also know the protocols and rules required by the method, which may be complex to implement and may significantly constrain the conditions under which the evaluation sessions are conducted. The evaluation sessions may take a lot of time, may have to be repeated multiple times, and may require a high number of participants.

Figure 11.1 Evaluation of a decision support tool related to crowdsourcing (Source: T. Nguyen)


For instance, consider the example illustrated in Figure 11.1. This case involved the evaluation of a decision support tool related to crowdsourcing. The purpose of the evaluation was to compare the performance of users using and not using the tool. The evaluation required careful planning of the whole process, including the definition of variables and measurements, the definition of exercises, and the selection of participants. A round of experimental sessions was conducted with 60 participants. However, the results showed that the experiment had been inadequately planned, because the exercises were too simple. Therefore new experiments had to be conducted, involving 200 new participants. Cases requiring a whole set of experiments to be repeated are not uncommon and illustrate the cost of conducting rigorous evaluations.

Considering the potential costs, a key question to analyse is: why do you need a rigorous method? Do you need to make a strong claim about a design? In many cases, you can live with an indication that the design is good enough or is moving in the right direction. Do you need to protect yourself against risks? In some cases, you may have to provide assurances required by law, professional practice, or industry standards. Industries such as aviation, air traffic control and healthcare have specific standards and require specific evidence of quality, which has to be rigorously demonstrated. You would not like to fly on airplanes that use computers and UIs that have not been precisely, reliably and generally validated.

Figure 11.2 Before the experimental session: Training the participants (Source: C. Sapateiro)

Figure 11.3 During the experimental session: Observation and data collection (Source: C. Sapateiro)


Figure 11.2 and Figure 11.3 illustrate an example of rigorous evaluation. The case involved the development of a mobile tool supporting situation awareness in emergency scenarios. Situation awareness is a subtle cognitive phenomenon, which is difficult to measure (one reason is that it is built inside the mind and therefore difficult to observe). The claim that the tool increased situation awareness (versus not using the tool) seems difficult to make without a rigorous evaluation. The situation awareness elements have to be measured carefully, using indirect approaches such as logging what the user does and using pop-up questionnaires to ask what the user is thinking. All in all, a rigorous evaluation was necessary, using well-known practices to identify the variables involved in situation awareness and a well-known protocol defining how measurements should be obtained. In particular, the protocol requires careful training of the participants in the experimental sessions (Figure 11.2). It also requires using instrumentation to log what the user does with the UI (Figure 11.3). And it requires the designer to observe users using specific methods (Figure 11.3).

Figure 11.4 Serendipitous data gathering (Source: L. Carrico)


Rapid evaluation
Rapid evaluation is just the opposite of rigorous evaluation. To start with, there are no specific requirements (other than that the methods must be cheap to use). Precision, reliability and generalisability are not necessary or expected. Protocols and rules are usually simple and flexible. Each session can be tailored to suit a particular need or to follow a particular event. On the other side of the coin, not many claims can be made regarding the results. The evaluation results tend to be used opportunistically and internally to the project. Data is of low quality.

Figure 11.4 illustrates a case of rapid evaluation. The project involved the development of a goal-assignment tool running on mobile devices. During the day, the tool would ask users to execute certain tasks, which involved data collection. Since the goal was to assess the data collection mechanism, and no claims would have to be made about performance or benchmarking against other tools, there was no specific need for a rigorous evaluation. The adopted method consisted of data logging and using a camera to gather contextual information about where the user was moving. Loose instructions were given to the users, and they were changed from user to user to focus on specific events that were considered of interest to the designer.

Figure 11.5 Rigorous versus rapid evaluation

Rigorous evaluation                                  | Rapid evaluation
Formal process                                       | No particular rules
Known method, accepted by community                  | Method may be known or unknown
Focus on precision, reliability and generalisability | Focus on fast feedback
High data quality                                    | Low data quality

Figure 11.5 provides a short comparison between rigorous and rapid evaluation. We note that this comparison can be detached from any considerations about evaluation methodologies and methods.

References
McGrath, J.E., 1995. Methodology matters: Doing research in the behavioral and social sciences. In Readings in Human-Computer Interaction: Toward the Year 2000 (2nd ed).

Summative versus Formative

Summative evaluation
The distinction between formative and summative evaluation relates to where in the design process the evaluation is made. A summative evaluation is done at the end of the project. The purpose is to have a final assessment of the complete design. One may ask why a summative evaluation may be necessary. Well, it seems reasonable that a complete design be assessed. Such results are important to have a strong indication about the final design, but also to reflect on the whole design process in order to improve it in future projects. Another reason may be that the final assessment is done before moving the project towards the development stage. Since development tends to be more expensive than design, it seems reasonable that project managers are confident in the design before committing to significant expenses. For developers, it is much more difficult, and therefore costly, to make changes than it is for designers. In order to increase confidence in the summative evaluation, it makes sense to use external evaluators instead of the designers. This helps to get an independent view, avoiding potential biases introduced by the designers.

Figure 11.6 Heathrow Terminal 5, a failed inauguration because of inadequate evaluation before opening (Source: eGuide Travel / Foter / CC)


Complex projects necessarily need robust summative evaluations. An interesting example that illustrates the value of a robust, external summative evaluation is given by the inauguration of Terminal 5 of Heathrow Airport (Figure 11.6 and Figure 11.7) in 2008. The planning of Terminal 5 was immensely complex and took 19 years. It involved state-of-the-art technology, and 400,000 person-hours were spent on engineering alone. Before the opening, which occurred on 27 March 2008, the terminal systems had to be thoroughly tested - a summative evaluation. Though it seems there were some compromises. In particular, the baggage handling system was not tested using “real world” bags. The consequence was that, as soon as the terminal started operating, there were problems with the bags. After just 12 hours of operation, the system failed and 12,000 bags were lost.

Figure 11.7 Summative evaluation should provide confidence in the design: In the Heathrow Terminal 5 case, the baggage handling system was tested using standard bags, which does not seem very realistic (Source: Katy Warner / Flickr / CC)


Another curious failure was that when the terminal personnel arrived at the new terminal, they did not know where to park - the signs were missing. Therefore staff did not arrive at work on time, a problem that, compounded with the baggage handling problem, resulted in disaster. The estimated cost of the system failures at Terminal 5 was 16 million pounds. Had an appropriate summative evaluation been conducted, the problems might not have occurred.

Formative evaluation
Formative evaluation is done during the project. It is centred on intermediate designs and therefore concerns different types of artefacts, from conceptual frameworks to paper prototypes and quasi-functional prototypes. The main purpose of a formative evaluation is to improve knowledge about the design by understanding what works and what does not. This emphasis on learning and reflection suggests that formative evaluations should be internal rather than external. On the one hand, the assessment does not need to be independent and robust; on the other hand, it makes sense to focus the assessment on the aspects of the problem and solution that the designer considers most in need of attention. Figure 11.8 provides a short comparison between summative and formative evaluation.

Figure 11.8 Summative versus formative evaluation

Summative                                            | Formative
At end of project, final step in development process | During the project, part of development cycle
Done once                                            | Done multiple times
Complete, functional artefact                        | Intermediate artefacts
Ideally, external                                    | Internal

References
Brady, T. and Davies, A., 2010. From hero to hubris: Reconsidering the project management of Heathrow's Terminal 5. International Journal of Project Management, 28(2), pp.151-157.

Quantitative versus Qualitative

Quantitative evaluation
Quantitative evaluation is focussed on describing a phenomenon using either quantities or the magnitude of variations (when the phenomenon is compared with another one, for which measurements can also be obtained). Because of this focus on quantification, the evaluation process will necessarily revolve around the precise definition of variables, scales and the procedures used to obtain the measurements. Quantitative evaluation is associated with a positivist view of phenomena, which asserts that, if a phenomenon exists in the real world, then we can find empirical evidence of it, which can be objectively measured. Examples of quantitative data relevant to UX design include:
• Efficiency - Number of clicks to perform a task, time to complete a task, system responsiveness, etc.
• Effectiveness - Success rate, learning time, number of user errors, etc.
• User feedback - Satisfaction, ease of use, ease of learning, likelihood to return, perceived speed, frequency of use, etc.
Examples of quantitative data relevant to web design:
• Number of visits
• Page views
• Visit duration
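To make the efficiency and effectiveness measures concrete, the sketch below computes a success rate and averages from a hypothetical log of task attempts. The field names and numbers are invented for illustration; real logs would come from the instrumentation discussed earlier.

```python
from statistics import mean

# Hypothetical log of task attempts from an evaluation session:
# each record holds the outcome, completion time and click count.
attempts = [
    {"success": True,  "seconds": 42.0, "clicks": 9},
    {"success": True,  "seconds": 35.5, "clicks": 7},
    {"success": False, "seconds": 90.0, "clicks": 21},
    {"success": True,  "seconds": 50.5, "clicks": 11},
]

# Effectiveness: proportion of attempts that completed the task.
success_rate = sum(a["success"] for a in attempts) / len(attempts)

# Efficiency: averages taken over successful attempts only.
done = [a for a in attempts if a["success"]]
mean_time = mean(a["seconds"] for a in done)
mean_clicks = mean(a["clicks"] for a in done)

print(f"success rate: {success_rate:.0%}")   # 75%
print(f"mean time:    {mean_time:.1f} s")    # 42.7 s
print(f"mean clicks:  {mean_clicks:.1f}")    # 9.0
```

Whether failed attempts should be excluded from the time and click averages is itself a methodological choice, which a rigorous evaluation would state explicitly.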

Qualitative evaluation
Qualitative evaluation is focussed on describing a phenomenon without quantifying it. For instance, we may describe the constituents of a phenomenon, or we may categorise a phenomenon using nominal or ordinal scales, which allow making comparisons but with important limitations. In particular, ordinal scales represent order but not the distance between values. And nominal scales represent categories of values but not the order between them. Examples of qualitative data relevant to UX design include:
• Task execution - Thinking process, learning process, interaction flow, etc.
• User behaviour - Eye movement behaviour, search behaviour, user difficulties, workarounds, etc.
• User perception - Understanding of concepts, perceived structure, safety, utility, satisfaction in achieving objectives, etc.
Examples of qualitative data relevant to web design:
• Click tracking
• Click maps, heat maps, scroll maps
• Searches
Figure 11.9 provides a short comparison between quantitative and qualitative evaluation.

Figure 11.9 Quantitative versus qualitative evaluation

Quantitative evaluation                                          | Qualitative evaluation
Based on measurements                                            | Based on concepts and indicators
Uses ratio, interval and absolute scales                         | Uses nominal or ordinal scales
Focus on quantification                                          | Focus on understanding
Describes a phenomenon using quantities and magnitude of variations | Describes a phenomenon without quantifying it

References
Whitmire, S.A., 1997. Object oriented design measurement. John Wiley & Sons, Inc.
Lewis, J.R., 2014. Usability: lessons learned… and yet to be learned. International Journal of Human-Computer Interaction, 30(9), pp.663-684.

All Together Now
We now discuss the different approaches to evaluation together. The first thing to consider is that, even though the three categorisations (rigorous/rapid, summative/formative and quantitative/qualitative) can be considered independent, there are some strong relationships between them.

Qualitative can be either rigorous or rapid. There is no basis for suggesting that qualitative research cannot be rigorous just because it does not use quantitative variables. It all depends on the methods used and what the community accepts as rigorous. For instance, ethnography, when correctly applied, is considered a rigorous method. The method has been developed to ensure good data quality, for instance by avoiding biases and interference from the observers.

Quantitative can be either rigorous or rapid. There is no basis for stating that quantitative assessments are always rigorous. Once again, it all depends on the method and on the definition of the variables and instruments used to gather the data. Counting the number of clicks on a webpage for a few days will provide a rapid assessment of the webpage's use. Counting page clicks could, however, become a more rigorous method if other aspects related to data gathering were more carefully considered, for instance by having users perform a specific number of tasks in a laboratory under controlled conditions.

Summative usually goes along with quantitative and rigorous. There is perhaps a practical reason for this. Quantitative approaches tend to require careful consideration of what variables to use and the instruments necessary to gather the measurements. This usually results in adopting more rigorous approaches. In turn, since rigour represents more effort, it is natural that the number of evaluations has to be reduced, which means the evaluation will tend towards the end of the design process.

Formative usually goes along with rapid. If the objective is to improve the design, there is no justification for adopting a rigorous method, unless the data can be easily obtained.

Formative usually goes along with qualitative. Qualitative data tends to support richer interpretations of phenomena than quantitative data. Therefore, if the purpose is to gain insights, it seems more adequate to use the added richness of qualitative data.

Rigorous Evaluation Methods

Think-aloud protocols
This method is based on the direct observation of users. The users are instructed to think aloud while they execute a task (Figure 11.10). The evaluator then takes notes about what the users are doing and thinking. The method is particularly adequate for evaluating complex cognitive phenomena that only exist in people's minds, such as situation awareness, decision making and memory use.

Figure 11.10 Using the think-aloud protocol: the user is instructed to explain what she is doing. In this case, the task involved collecting geological information (Source: P. André)


The scientific literature is rich in explanations of how to implement the method, emphasising in particular the selection of participants, the data gathering procedures, and the coding of elements provided by the participants.

Questionnaires
There is a long tradition of using questionnaires in various research areas, especially in the social sciences. The use of questionnaires should be considered a rigorous method when best practices are adopted. In particular, the UX field has developed several standardised questionnaires for usability evaluation.

SUMI. The SUMI (Software Usability Measurement Inventory) questionnaire assesses the user experience using 50 questions arranged around five areas: effectiveness, efficiency, helpfulness, control, and learnability. Each question consists of a statement that the user must answer with agree, do not know, or disagree. The questionnaire can be found online at http://sumi.uxp.ie/en/. The method requires a sample size of at least 10 participants to provide reliable results. However, data quality depends on the selection of typical users, typical task goals, and typical task environments.

USE. The USE (Usefulness, Satisfaction, and Ease of use) questionnaire assesses the user's attitude towards a system according to the three categories suggested by its name: usefulness, satisfaction, and ease of use. Users are requested to answer a set of 5 to 11 questions for each category using a seven-point Likert scale (from 1 - strongly disagree to 7 - strongly agree). The scores are averaged across all questionnaire items, and then averaged across all respondents and across all tasks performed by the users. The questionnaire is available online at http://garyperlman.com/quest/quest.cgi?form=USE.

SUS. The SUS (System Usability Scale) questionnaire provides a simpler approach to usability evaluation. It uses only 10 questions, which provide a global view of a system's usability. The questionnaire uses a 5-point Likert scale. The questionnaire is delivered after the user has had the opportunity to use a system, and the respondents are asked to provide immediate answers instead of spending time thinking about the questions. The scores are considered globally, using all questionnaire items. Scores for individual questions are not meaningful on their own. The 10 questions used by this method are:
1. I think that I would like to use this system frequently
2. I found the system unnecessarily complex
3. I thought the system was easy to use
4. I think that I would need the support of a technical person to be able to use this system
5. I found the various functions in this system were well integrated
6. I thought there was too much inconsistency in this system
7. I would imagine that most people would learn to use this system very quickly
8. I found the system very cumbersome to use
9. I felt very confident using the system
10. I needed to learn a lot of things before I could get going with this system
The overall score is calculated by adding:
• Questions 1, 3, 5, 7, and 9: individual score minus 1
• Questions 2, 4, 6, 8, and 10: 5 minus individual score
Individual contributions thus range from 0 to 4, and the sum is conventionally multiplied by 2.5 to obtain an overall score between 0 and 100 (Brooke, 1996).
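The SUS arithmetic is easy to get wrong by hand, so here is a small Python sketch of the scoring rules, including the conventional multiplication by 2.5 that maps the 0-40 sum onto a 0-100 scale (Brooke, 1996). The example respondent data is invented.

```python
def sus_score(answers):
    """Compute the overall SUS score from ten answers on a 1-5 Likert scale.

    Odd-numbered questions (1, 3, 5, 7, 9) contribute (answer - 1);
    even-numbered questions (2, 4, 6, 8, 10) contribute (5 - answer).
    The 0-40 sum is scaled by 2.5 to give a 0-100 score.
    """
    if len(answers) != 10 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("expected ten answers, each between 1 and 5")
    total = sum(a - 1 if i % 2 == 0 else 5 - a  # i=0 is question 1 (odd)
                for i, a in enumerate(answers))
    return total * 2.5

# A respondent who strongly agrees with every positive statement (5)
# and strongly disagrees with every negative one (1) scores 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that a neutral respondent (all answers 3) scores 50, not 0, which is one reason SUS scores should not be read as percentages.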

Cognitive walkthrough
This method provides a way of checking how a user will interact with a system on first contact. The method assumes the user will take the following steps to accomplish a task:
1. The user sets a goal
2. The user searches the UI for available actions
3. The user selects the action that seems likely to make progress toward the goal
4. The user performs the action and assesses progress toward the goal
This is a rigorous analytical method because it includes detailed instructions on how to analyse the user's steps toward the goal. For each action, the following questions should be answered:
1. Will the user try to achieve the right effect? (correct selection)
2. Will the user notice that the action is available? (visibility)
3. Will the user associate the correct action with the effect to be achieved? (labelling)
4. Will the user notice that progress is being made toward the goal? (feedback)

Figure 11.11 The cognitive walkthrough method can be applied to paper prototypes (Source: Author)


An interesting aspect of this method is that it can be applied at early design stages, for instance after the development of an early paper prototype (Figure 11.11). Furthermore, the method does not require users, since the walkthrough can be done by the designers (Figure 11.12).

Figure 11.12 The cognitive walkthrough method does not need users, since the walkthrough can be done by the designers (Source: Author)

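The walkthrough bookkeeping can be kept very light: for each action in the task, record a yes/no answer to each of the four questions, and report every "no" as a potential problem. The sketch below uses an invented task and invented findings purely for illustration.

```python
QUESTIONS = [
    "Will the user try to achieve the right effect?",        # correct selection
    "Will the user notice that the action is available?",    # visibility
    "Will the user associate the correct action with the effect?",  # labelling
    "Will the user notice that progress is being made?",     # feedback
]

def walkthrough(actions):
    """Collect the four answers for each action; report every 'no' as a problem."""
    problems = []
    for action, answers in actions:
        for question, (ok, note) in zip(QUESTIONS, answers):
            if not ok:
                problems.append((action, question, note))
    return problems

# Invented example: paying for a product in a paper prototype.
actions = [
    ("press 'Checkout'",  [(True, ""), (True, ""), (True, ""), (True, "")]),
    ("enter card number", [(True, ""), (False, "field hidden below the fold"),
                           (True, ""), (True, "")]),
]

for action, question, note in walkthrough(actions):
    print(f"PROBLEM at '{action}': {question} -- {note}")
```

The output is simply the list of walkthrough failures, which the design team can then prioritise.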

References
Seffah, A., Donyaee, M., Kline, R.B. and Padda, H.K., 2006. Usability measurement and metrics: A consolidated model. Software Quality Journal, 14(2), pp.159-178.
Kirakowski, J. and Corbett, M., 1988. Measuring user satisfaction. In Jones, D.M. and Winder, R. (eds), People and Computers, vol. IV. Cambridge University Press, UK.
Lund, A.M., 2001. Measuring usability with the USE questionnaire. Usability Interface, 8(2), pp.3-6.
Brooke, J., 1996. SUS: A quick and dirty usability scale. Usability Evaluation in Industry, 189(194), pp.4-7.
Rieman, J., Franzke, M. and Redmiles, D., 1995. Usability evaluation with the cognitive walkthrough. In Conference Companion on Human Factors in Computing Systems (pp. 387-388). ACM.

Rapid Evaluation Methods

Wizard of Oz
The name of this method derives from the Wizard of Oz movie, in which the powerful wizard is operated by a person behind a curtain using handles and switches (check the video at https://youtu.be/YWyCCJ6B2WE). What this method does is use people to emulate system functionality.

Figure 11.13 Emulating a driving assistant (Source: Author)


The method is used in early to intermediate stages of design, especially when certain functionality is complex and difficult to prototype - for instance, a sophisticated photo classification mechanism. In this case, the classification function can be performed by a person. The typical Wizard of Oz evaluation involves a setting where the user interacts with a system using a computer, which is linked to another computer where the “wizard” performs some emulated functions. The emulator is not visible to the user, who believes the system is fully functional, which helps make the evaluation more naturalistic.

Figure 11.14 Emulating a driving assistant (Source: Author)


This method is adequate for evaluating complex systems at early stages of development with real users. It is classified as rapid because the complex functionality is not developed but emulated instead. The method does, however, require developing a realistic UI for the system, and also a UI for the emulation. Figure 11.13 and Figure 11.14 illustrate a Wizard of Oz scenario. The scenario involved the evaluation of a driving assistant that would signal speed limits to the driver. Since the functionality would be too complex to develop, a person was used to signal the speed limits to the driver (Figure 11.14). In this particular scenario, the wizard is known to the user, which decreases the degree of realism of the evaluation.

Design walk-through
The design walk-through method is an analytical approach to understanding how a system is used. It is centred on the perspective of the user. The method is usually adopted at the very early stages of design, when the system is vaguely documented using sketches, storyboards or paper prototypes. The method can be used either with user representatives or with expert evaluators. The method takes a team approach, where the designers provide a guided walk through the artefact, explaining the goals and what the user does at each step. The user representatives or expert evaluators then identify possible problems and discuss them with the designers.

Scenario-based evaluation
Scenario-based evaluation also takes a team approach. It is based on a set of scenarios describing how users interact with a system. Scenarios are usually text-based stories addressing just the main features of a system. The design team guides the user representatives through the use of a prototype, which could be a paper prototype. While using the prototype, the user representatives check it against the scenarios, identifying potential problems.

Guerrilla usability testing
This method suggests that designers should go out into the street and casually ask people to test their prototype. To be effective, the method must focus on very specific concepts or features, which can be evaluated quickly by the users. The method provides broad feedback about task comprehension, utility and perceived value. It is not adequate for detailed analysis.

Usability inspection
This is one of the earliest evaluation methods developed. Essentially, it consists of asking a UX expert to inspect a prototype. The inspection is analytic and based on the evaluator's expertise in HCI. The obtained feedback is totally dependent on what the inspector values in terms of user experience. Usually the inspectors report non-conformities with design principles. Studies show that using 2 or 3 inspectors is sufficient to identify most problems.

Heuristic evaluation
This analytic method is also based on inspection, but has been modified to avoid the dependency on HCI experts. In principle, designers, software developers and work domain experts can all perform a heuristic evaluation. The method relies on a predefined list of heuristics or design principles, which is usually small, say 10 to 12 heuristics. Then, 6 to 8 people go through the prototype and check whether it complies with the heuristics. The team members take individual notes about the problems and then hold a meeting to compare notes and merge the problem lists. The participatory approach increases confidence in the evaluation.
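The note-merging step can be sketched as counting how many evaluators reported each problem, so the merged list can be ordered by agreement. The problem phrasings below are invented; in practice each note would reference a heuristic from the predefined list.

```python
from collections import Counter

# Hypothetical notes from four evaluators, each a list of observed problems.
notes = [
    ["no feedback after 'Save'", "jargon in error messages"],
    ["no feedback after 'Save'", "no undo for delete"],
    ["jargon in error messages", "no undo for delete"],
    ["no feedback after 'Save'"],
]

# Merge: count how many evaluators reported each problem (set() avoids
# double-counting a problem noted twice by the same evaluator).
merged = Counter(p for evaluator in notes for p in set(evaluator))

# Most widely agreed-upon problems first.
for problem, n in merged.most_common():
    print(f"{n}/{len(notes)} evaluators: {problem}")
```

This assumes the team has already normalised the wording of each problem in the meeting; the counting itself is trivial once that is done.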

Figure 11.15 Card sorting (Source: I. Enwereuzo)


Card sorting
This method is often used for data gathering but can also be used for evaluation. As an evaluation method, it allows the designer to understand how users structure information and identify priorities. The method works this way. Initially, a deck of cards is given to a user (Figure 11.15). Each card has an information item, for instance a goal, function, choice, or preference. Empty cards can be added to give more flexibility to the data collection process. The user is then invited to organise the cards according to categories and priorities. In Figure 11.15, the top cards identify a set of priorities, which have been ordered from left to right. The categories can be defined a priori or left open for the user to define (as in Figure 11.15). The cards placed below the categories then identify which elements belong to each category. The placement order can be used to identify priorities within a category. Realistically, this method can only be used with simple concepts, which can be expressed in few words. The method can be applied individually or in groups.
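One common way to analyse the collected sorts is to count, for every pair of cards, how many users placed them in the same category - a co-occurrence matrix, which reveals groupings shared across users even when their category names differ. A minimal sketch, with invented cards and categories:

```python
from itertools import combinations
from collections import Counter

# Hypothetical sorts: each user's result maps their own category names to cards.
sorts = [
    {"payment": ["card", "invoice"],           "delivery": ["courier", "pickup"]},
    {"money":   ["card", "invoice", "refund"], "shipping": ["courier", "pickup"]},
    {"buying":  ["card", "refund"],            "getting":  ["courier", "pickup", "invoice"]},
]

# Count, for every pair of cards, how many users placed them together.
together = Counter()
for sort in sorts:
    for cards in sort.values():
        for a, b in combinations(sorted(cards), 2):
            together[(a, b)] += 1

for (a, b), n in together.most_common(3):
    print(f"{a} + {b}: grouped together by {n}/{len(sorts)} users")
```

Note that the category labels themselves drop out of this analysis; only the groupings matter, which is what makes open card sorts comparable across users.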

Chapter 12

Design Processes

Iterative Design Process

We already characterised UX design as consisting of three main activities: analysis, design and evaluation. However, we did not discuss how to put these activities together. The design process deals with the way these activities can be instantiated in actual design projects. Any process describes a set of interrelated activities, coordinated to accomplish a goal. When discussing UX design, the goal is to come up with a UI for a certain system. The value of a process is that it provides a template, or blueprint, that can be used in multiple projects. Good templates transmit “how to” knowledge and reduce project risks by removing some uncertainty. The simplest design process we can think of is shown in Figure 12.1. It simply organises the design activities in four consecutive steps.


Figure 12.1 Iterative design process

Analysis Analysis is about understanding the project’s requirements. This is a very broad activity that not only includes user research but also an analysis of the client’s requirements and possible constraints imposed by the underlying system.

Design Design is about conceiving the UI. It involves a lot of thinking and making choices: considering what a system is expected to provide and how it can be done in the best interest of the users. Design also involves creativity, imagination, uniqueness, and experimentation. And finally, design also concerns defining the look & feel of the UI.

Implementation Implementation is about making the design work. Design is not finished with a solution; it is finished when the solution has been deployed to the client. Implementation involves work such as developing user manuals, training the users, and fitting the system into the organisations that will use it.

Evaluation Evaluation is about making sure the UI is usable and satisfies the users. The project cannot finish without verifying that a design is fit for purpose. Even though design experts may help identify problems, the true evaluation must be based on feedback given by real users.

Iterative nature of process This process transmits a notion of determinism and rationality. The four activities are set up according to a rational perspective where, simply put, first things come first and every activity is necessary to accomplish the project goals. However, the link from evaluation back to analysis also conveys a notion of iteration. It conveys the understanding that, to fully accomplish the project's goals, design may have to cycle through the four activities multiple times. The main reason is that the problems uncovered by evaluation often have to be reanalysed, which may lead to a redesign, and so on.

More detailed activities As presented, the iterative process is so simple that it may only be applied to the most basic projects, which means its value as a blueprint is modest. Sometimes more extensive projects require control over more detailed project activities. In Figure 12.2 we show how the iterative process can be further detailed by decomposing the analysis and design activities into more distinctive and fine-grained activities.

Data gathering The purpose of this step is to make the user research more explicit, as it often requires a lot of time and effort.

Figure 12.2 Expanding the iterative process with more detailed analysis and design activities


Modelling The purposes of this step are to reduce the amount of data gathered in the previous activity, to integrate data coming from different sources, and to make the data more useful for design by generating a set of models. Multiple techniques and annotations can be used to generate these models, such as personas, user stories, affinity diagrams, use cases, etc.

Visioning The main goal of this activity is to finalise the whole data analysis with a vision. A vision may consist of a simple scribble, done with paper and pencil, or it may consist of a text-based statement. Note that the vision complements the models developed in the previous step, but it does not substitute them. A vision suggests how to address the user requirements in a creative way. Its value lies in anchoring the next step in a set of creative possibilities emerging from the user research. In other words, data informs design but a vision drives the design.

Conceptual design This is the first step into actual design. It concerns the synthesis of existing knowledge about the project towards the generation of a set of possible solutions. Often at this stage, many design solutions are generated at a very fast pace. As the name suggests, a conceptual design is only focussed on the main concepts of a solution and therefore tends to be very abstract and sketchy. Abstract techniques such as conceptual frameworks, wireframes and storyboards help develop conceptual designs.

Intermediate design This stage moves from the conceptual towards the concrete, although without yet fully committing to a certain solution. Some of the solutions that emerged during the conceptual design stage can be dropped to make the project more manageable and cost effective. So the intermediate design is focussed only on the most promising candidates. This stage uses techniques such as storyboards and wireframes to explore solutions in further detail. In the case of wireframes, the focus is placed on the generic aspects of visual and interactional structure, for instance using sketchy UI objects.

Detailed design This is the final stage of design and therefore it is usually centred on the single best solution. The main goal is to generate a prototype with a significant level of detail and realism. Wireframes are usually used at this stage too, although with some significant differences compared to intermediate design. An emphasis is put on the final aspects of look & feel, which usually require using a more realistic set of UI objects.

References Holtzblatt, K., Wendell, J. and Wood, S., 2004. Rapid contextual design: a how-to guide to key techniques for user-centered design. Elsevier.

Product Design Process

Based on a strict separation of concerns, we could say that UX design and product design are two completely different subjects. However, in many cases the importance of the UI may be so high that splitting the project between two teams, each dealing separately with one of these design problems, may be a mistake. This is the case, for instance, when the project involves designing an electronic service for a business. Yes, we could have one team focussed on business analysis, designing the service steps without considering the UX, while another team would look into the specified service steps and design the forms necessary to implement the service. However, this approach would neglect the fact that the separation between process steps and forms is artificial, and UX design should address both in a holistic way. Usually the consideration for the business dimension of a system can be incorporated in the analysis activity. Figure 12.3 illustrates the set of changes that can be made to the design process to cover the business perspective.


Figure 12.3 Product design process

Market/business analysis Includes the set of activities necessary to identify the purpose and value of the design from a business perspective. Examples include benchmarking, comparative analysis, SWOT (strengths, weaknesses, opportunities, threats), and risk analysis.

Product analysis Includes data gathering, work modelling and specification of user requirements, which have already been discussed in the iterative design process.

Business plan Considers the feasibility of the project, which may result in a “go / no go” decision. It also identifies a set of business requirements and constraints, which have to be considered by the UX designer along with the user requirements.

References Bruce, M. and Cooper, R. 2004. Creative product design: a practical guide to requirements capture management. Wiley.

Star Process

Many real-world design projects do not start from the logical beginning. For instance, when the design is centred on improving an existing system, it makes sense to start with an evaluation stage instead of analysis.

Figure 12.4 Star process


For this reason, it has been suggested to view the design process in a star configuration (Figure 12.4) instead of an iterative flow. Even though the star configuration looks more confusing than the sequential configuration, it is more flexible. Furthermore, at the centre of the star model we find the evaluation activity. This indicates that evaluation is the most important activity in UX design. It also indicates that UX designers always need to evaluate their proposed solutions. And finally, it also suggests that evaluation should be done multiple times throughout a project. In particular, conceptual, intermediate and detailed designs can and should all be evaluated.

References Hix, D. and Hartson, H., 1993. Developing user interfaces: ensuring usability through product & process. John Wiley & Sons, Inc.

Soft Design Process

As with the star process, the soft process appears in opposition to a rigid way of conducting a design project and calls for more flexibility. But the soft process goes even further towards flexibility than the star process.

Figure 12.5 Soft design process


The soft view (Figure 12.5) suggests that any project spins around three intertwined sets of activities.

Research-centred activities These activities are focussed on data gathering, covering both the user and the market requirements.

Strategy-centred activities These concern a set of strategic choices, including the decision to go forward or cancel the project. Many other strategic decisions have to be considered along the project, for instance deciding which users and markets will be targeted by the design team, which user requirements to develop, and which solutions to move forward from conceptual to intermediate design and from intermediate to detailed design.

Idea-centred activities This important dimension of design seems to be neglected or subsumed by other design process approaches. The soft design process emphasises that design encompasses creative activities and therefore ideation and exploration should be explicit components of the whole process. There is no order or central activity in the soft process. The template just emphasises that, whatever design activities have to be done, they should cover the three fundamental pillars: contributing to research, strategy and ideas.

References Bruce, M. and Cooper, R. 2004. Creative product design: a practical guide to requirements capture management. Wiley.

Chapter 13

Design Paradigms

Iterative Design

This is just a placeholder for the bread and butter of design paradigms. There is not much to find here. This paradigm is devoid of philosophical complications and is basically focussed on accomplishing the project's goals. As expected, this is the paradigm of choice for project managers and inexperienced designers. This paradigm is strongly linked to the iterative design and product design processes, which by nature are also iterative and goal-directed. The main concern is to divide the design task into a set of iterative activities, which are self-contained and accomplished in a stepwise way until the project is finished. Each iteration is focussed on a particular goal and contributes to the following goals in a clear way and according to predefined boundaries. The UX designer knows what to do and when to do it. There are no distractions. The project minimises changes and maximises control. Figure 13.1 highlights some advantages and disadvantages of this paradigm.

Figure 13.1 Advantages and disadvantages of iterative design

Advantages:
• Simple, clear, linear, easy to explain
• Easy to control
• Easier to get the project finished on time

Disadvantages:
• Focussed on goals, not on the users or the system
• Low user participation
• User requirements are usually frozen at the beginning of the project
• Lacks holistic view of the system

User-Centred Design

The main idea of user-centred design is to focus the design on the users. This can be accomplished, for instance, by adopting a star process. The approach seeks to avoid excessive control from clients, developers and other stakeholders that may be involved with the UI but do not have to use it and may not care about UX. Often clients impose constraints on projects, in particular cost, resources and time, which may affect the users. Examples include eliminating data elicitation from users, constraining the list of user requirements, and reducing user testing. Developers can also significantly constrain a project, for instance by requiring the adoption of specific technologies that may not be user friendly.

User-centred design shapes two important stages of design: analysis and evaluation. According to this paradigm, the analysis stage must involve data acquisition from users, in order to deeply understand how they work and the work context. And evaluation must necessarily involve user testing, not other cheaper forms of evaluation that eliminate the users.

One argument in favour of user-centred design is the capacity to generate more productive systems. When UIs are designed to be well adapted to the users, organisations are more productive. Conversely, when organisations decide to adopt systems less adapted to the users, the impact may be negative. This concept is often referred to as the productivity paradox (i.e. more technology does not necessarily mean more productivity).

User-centred design requires UX designers to analyse what users really need. Somehow, this process involves balancing the existing with the possible.

Figure 13.2 Understanding the users


Kim Vicente, in the book “Cognitive Work Analysis”, describes the challenge as a conflict between work possibilities and current practices (Figure 13.2). The users' current practices with a system can be divided into two categories: the ones that work properly (functional practices) and the ones that do not work as expected and require workarounds (non-functional practices). Of course a primary goal of UX design is to eliminate non-functional practices, but that means they have to be investigated, e.g. through ethnographic studies. UX designers also have to consider the domain of work possibilities, which comprises the functional practices and the unexplored practices. Finding unexplored practices is also a primary goal of design. Often the unexplored practices have to be sought by the designer in collaboration with the users, e.g. in brainstorming sessions. In this perspective, we may say that design involves the following objectives:
• Eliminate non-functional practices
• Keep functional practices
• Find unexplored practices
In Figure 13.3 we provide a table with some advantages and disadvantages of the user-centred design paradigm.

Figure 13.3 Advantages and disadvantages of user-centred design

Advantages:
• Flexibility dealing with stakeholders
• User requirements can always be revisited
• More contact with users
• Learning process
• Reduced risks through added contact with users
• More productive systems
• Evaluation with real users
• Can be easily integrated with software development

Disadvantages:
• Users seen as information sources and guinea pigs
• Lack of deep understanding of users
• Contact with users is centred on early (data acquisition) and later (evaluation) stages of design
• Lack of engagement from users, especially after the initial stage
• Significant time and effort gathering and processing data from users

References Haklay, M. and Nivala, A., 2010. User-Centred Design. Interacting with geospatial technologies, pp.89-106. Vicente, K., 1999. Cognitive work analysis: Toward safe, productive, and healthy computer-based work. CRC Press.

Participatory Design

The concept of participatory design emerged in Scandinavia. It was initially associated with unions, politics and advanced notions of democracy. As large companies started to adopt complex technology, unions felt that employees were misrepresented during the system development stages. Therefore unions started fighting for employees to be more actively involved in system design. To put it simply, companies often buy systems thinking about costs, capital gains and features, with little consideration for the users. More than that, a genuine consideration for the users means that user representatives should be seated at the tables where buy/make decisions are made.

The participatory design paradigm considers that users must be fully involved in the design process. They are not just informants and evaluators, as in the user-centred design paradigm. In participatory design, users collaborate in all stages of the design process and help make decisions as members of design teams. They share control over the process.

This paradigm has some interesting implications for the design profession. An important one is related to ownership. Who owns the design when it is shared with users? In many areas of design, designers are used to owning the design. They make the decisions and define the solutions. Even if they do not implement them, they can claim intellectual ownership. However, because participatory design tends to be done in teams with multiple types of stakeholders, including users, ownership is more diffuse. The multidisciplinary design team owns the design.

Another implication of participatory design is related to methods. Since design is done in teams, most of the adopted methods must have a collaborative component. Furthermore, since many participants do not have expertise in design, the adopted methods must use common-sense techniques.
Techniques often used in participatory design include: • Stories and storytelling, which serve as triggers for analysing work activities • Visualisations, e.g. using paper prototypes and storyboards • Photographs and photo narratives, which document and give context to work • Brainstorming of design ideas • Design-by-doing, including the development of mock-ups using paper and pencil • Using theatre and drama to gain insights on the user experience • Games, such as card sorting

Some authors refer to participatory design teams as hybrid spaces, which are populated with concepts coming from different origins. Characteristics of hybrid spaces include dialogue, mutual learning, development of mutual knowledge, negotiation of conflicts, and reduced authority (the authority is diluted in the group). Common types of hybrid spaces include:
• Sittings, bringing the design team to the workplace, instead of bringing the workers to the design room
• Workshops, which bring the different parties to the table to build mutual knowledge and negotiate conflicts
• Build workshops, which are dedicated to creating mock-ups and prototypes
In Figure 13.4 we provide a table with advantages and disadvantages of the participatory design paradigm.

Figure 13.4 Advantages and disadvantages of participatory design

Advantages:
• Involving users at every stage of design
• Commitment and buy-in
• Democratic, diverse, inclusive
• Consideration for ethical issues

Disadvantages:
• No process, can be chaotic
• Hard to integrate with typical software development
• Users are not designers
• Cultural differences between the participants
• Unclear ownership

References Muller, M., 2003. Participatory design: the third space in HCI. Human-computer interaction: Development process, 4235, pp.165-185.

Meta-Design

Meta-design extends the notion of design beyond the traditional limits imposed by projects. In meta-design, design continues after the system has been delivered to the client and the project is closed. The subsequent design activities are done by the users themselves. Another way of looking at it is that, with this approach, an unfinished design is delivered to the users. The users themselves are invited to finish the design in a way that suits their goals. Users become designers.

A compelling example of meta-design can be found in the town of Shibam in Yemen, which is a UNESCO World Heritage site and considered the home of the oldest high-rise buildings on the planet (Figure 13.5). Buildings in this town have grown organically through time, sometimes up to 11 storeys high.

Figure 13.5 Shibam, Yemen: Buildings are extended vertically to accommodate family needs, so the design is never really finished (Source: J. Gao / CC)


As families increased in size, additional levels were added to accommodate them, built by the families themselves. So we can say that the design of these buildings has never finished. The users kept shaping the buildings to their own needs.

More related to the UX design field, we often find the meta-design paradigm in games, usually associated with the capacity of players to define their own playing field. For instance, the game Little Big Planet, a platform game, has a “Create” component that allows users to create the game's platforms, characters and other customisations (Figure 13.6). Furthermore, users can share their creations with other users, which gives a completely different UX when compared to more common games.

Figure 13.6 Little Big Planet: Users can create their own game (Source: M. Schmid / Flickr / CC)

Figure 13.7 iBooks Author allows users to insert widgets with HTML code in e-books


Characteristics of meta-design include: • UI is designed to be flexible, configurable and evolvable • UI must have two user modes: design and use • System must have advanced features allowing users to create complex customisations and attach plug-ins • System must support users as designers, usually with sophisticated customisation tools • System must foster ownership and control by the user • System must be integrated in a wider platform that promotes social participation and sharing

An example of a system with these characteristics is Apple's iBooks Author, which was used to develop this book. It provides at least two ways in which users can design their own books. One is the capacity to configure the book's template. Another is the capacity to extend the book's functionality by adding widgets with HTML code (Figure 13.7). Of course the capacity to add HTML to a book is not for everyone. It is targeted at very specialised, and also very demanding, users, who are often designated prosumers. A category of systems supporting meta-design is designated High-Functionality Applications (HFA). Examples include Word and Pages (text editors), PowerPoint and Keynote (presentations), and WordPress and Joomla (web design). These applications, besides offering complex functionality, also open up their internal structures through application interfaces, so that prosumers can integrate external functional components.

References Fischer, G., Giaccardi, E., Ye, Y., Sutcliffe, A. and Mehandjiev, N., 2004. Meta-design: a manifesto for end-user development. Communications of the ACM, 47(9), pp.33-37. Fischer, G. and Giaccardi, E., 2006. Meta-design: A framework for the future of end-user development. In End user development (pp. 427-457). Springer Netherlands. Dick, H., Eden, H., Fischer, G. and Zietz, J., 2012, August. Empowering users to become designers: using meta-design environments to enable and motivate sustainable energy decisions. In Proceedings of the 12th Participatory Design Conference: Exploratory Papers, Workshop Descriptions, Industry Cases-Volume 2 (pp. 49-52). ACM.

Ecological Design

This paradigm is centred on complex systems, for instance nuclear power plants and other industrial processes. Because of such complexity, the design approach also tends to be complex. The main reason is that designers have to attend to a more diverse set of requirements than usual. The idea of using the word “ecological” comes from the need to articulate multiple abstractions of the design problem.

Physical form. The configuration of physical devices necessary to conduct work. This includes, for instance, buttons and levers. They have to be located in certain places and must have a certain appearance.

Physical functions. Address the users' interactions with physical devices. For instance, certain buttons have to be activated, which produces a certain effect in the system and working environment.

Generalised functions. The functions that have to be realised to accomplish a certain goal. Generalised functions are independent of physical functions. For instance, to delete a file (generalised function) you can either use the mouse to copy the file to the trash can (physical function), give a command using the keyboard (another physical function), or give a voice command.

Domain values and priorities. These define the principles, standards and qualities to be maintained when doing work. They constrain the users when operating a system, for instance by defining forbidden actions, priorities and guidelines.

Domain purpose. Defines the overarching intention of the system.

Furthermore, the design problem also has to be considered at a set of different decomposition levels.

System. The system viewed as a whole. This contributes to maintaining a holistic view of the system functionality.

Subsystem. An independent partition of the system. It provides a coherent set of functions in an independent way.

Component. The decomposition of a subsystem into multiple, but not independent, functions.
Figure 13.8 illustrates the abstraction-decomposition space that results from the combination of these two dimensions of the design problem. This shows that UX designers, when working with complex systems, must consider the design problem and the corresponding solution in multiple dimensions. 


Figure 13.8 The abstraction-decomposition space

Abstraction (rows) ↓ / Decomposition (columns) →

                               System    Subsystem    Component
Domain purpose
Domain values and priorities
Generalised function
Physical function
Physical form
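As a small illustration, the file-deletion example from the abstraction levels above (one generalised function realised by several physical functions) can be written down as a data structure. The sketch below is purely illustrative; the entries are placeholders, not a real system model.

```python
# One generalised function mapped to several physical realisations,
# following the file-deletion example in the text (entries are illustrative).
abstraction = {
    "delete file": [
        "drag the file to the trash can with the mouse",
        "issue a delete command from the keyboard",
        "give a voice command",
    ],
}

# A designer can verify that each goal is reachable through more than one
# interaction method, which also helps accessibility.
for goal, realisations in abstraction.items():
    print(f"{goal}: {len(realisations)} physical realisations")
```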

References Lintern, G., 2009. The foundations and pragmatics of cognitive work analysis: A systematic approach to design of large-scale information systems. Miller, C. and Vicente, K., 2001. Comparison of display requirements generated via hierarchical task and abstraction-decomposition space analysis techniques. International Journal of Cognitive Ergonomics, 5(3), pp.335-355.

Design for All

The main principle of the design for all paradigm is to design systems that can be used by the widest range of people possible. Within this scope we can consider both accessibility and diversity.

Accessibility

Accessibility concerns how to accommodate people with impairments, so that they have fair access to systems, tools and online services. Design for accessibility should consider a variety of impairments. Some examples are given next.

Visual impairments The most common problems to consider are blindness and colour blindness.

Colour blindness is very common, affecting about 7% of the population. Solutions for colour blindness are very easy to implement, so there is no reason for neglecting or avoiding them. UX designers just have to be careful when choosing colour schemes. In particular, they should avoid the red-green and blue-yellow combinations, which are the most challenging for the colour blind. Designers can also adopt recommended colour palettes and should test their designs using colour filters (check for instance http://www.color-blindness.com).

Blindness is more complex to address. It requires assistive technology such as screen readers and braille readers. Screen readers became very popular on mobile devices, as they are built into most operating systems. In iOS the screen reader is known as VoiceOver and in Android as TalkBack. In both cases, it scans every UI element displayed on the screen and provides a corresponding voice description. This includes scanning labels, text, pictures (using alternate text descriptions), buttons, menus, etc. UX designers should experience these systems to appreciate how people use them and especially how design decisions affect their performance. The experience significantly changes one's conception of what information should be presented to users, and how.

A fundamental problem to consider is that a screen reader may take a lot of time to describe all the information elements displayed on the screen. Even though it is possible to configure screen readers to operate at a very fast pace, it may take a lot of time to read the whole UI. Therefore UX designers should be fully aware of the consequences of being lavish. To illustrate the point, analyse the webpages from three popular banks shown in Figure 13.9, Figure 13.10 and Figure 13.11. There should be no disagreement that these webpages should be accessible. After all, everyone needs online banking.
However, the first webpage seems quite difficult to use with a screen reader. There is too much information and too many UI elements on the page. Those who are not blind may easily filter out the irrelevant information; the blind, however, may be forced to go through all of it. On the positive side, the webpage has a regular structure that may ease the interaction after the initial learning, provided the structure and contents are kept stable.

Figure 13.9 Too much information on this webpage makes it difficult to use a screen reader


Figure 13.10 This webpage is less busy and therefore easier to use with a screen reader


Figure 13.11 This webpage is excellent for using with a screen reader


The webpage shown in Figure 13.10 seems more restrained, having fewer UI elements. Interestingly, if you just focus on the visuals, you may find the empty space at the bottom somewhat awkward and feel the need to fill it up. However, filling up space with unnecessary contents is not the right strategy when considering screen readers. The webpage shown in Figure 13.11 seems ideal for screen readers. It has few pictures, very few UI elements and very sparse text. This page will be very easy to scan and interact with using a screen reader. Surprisingly, this page may also be the most usable for people not using screen readers. Often the best design solution is the one that works well for everybody.

In spite of the noted differences, we should nevertheless recognise that web design has evolved very significantly over the last few years, making webpages much more accessible in general. One very important change was splitting text from pictures. Early webpage designs would often stamp text into pictures. That would completely defeat the functionality of screen readers by making it impossible to perceive the text contents. Nowadays, webpages developed in HTML5 lay out labels and text boxes over images, which allows screen readers to differentiate and properly scan the text contents.

Figure 13.12 The visual elements at the centre of this webpage suggest a particular reading structure that may be difficult if not impossible to reproduce using screen readers


A simple exercise that UX designers should always do is to check how a webpage is displayed without its CSS (cascading style sheets). That should give an indication of what relevant information will be missing when reading the screen. It also gives a clear indication of the order in which UI elements are presented to users.

Finally, designers should avoid using pictures to guide the user. Observe the webpage shown in Figure 13.12 and consider in particular the visual elements shown in the middle. These elements suggest a particular structure for comprehension: if you say yes to increasing your balance, then you get 2.5%; if not, you get 0.75%. This if-then-else reading is defined by the right-pointing arrows shown in the background. The decision structure, which is visually intuitive, may not work with a screen reader. UX designers should therefore avoid using images to structure comprehension.
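Some of the colour advice above can also be checked programmatically. The sketch below computes the contrast ratio between a foreground and a background colour using the WCAG 2.x relative-luminance formulas; WCAG level AA asks for a ratio of at least 4.5:1 for normal text.

```python
# Text/background contrast check based on the WCAG 2.x formulas.

def srgb_to_linear(c8: int) -> float:
    """Convert one 0-255 sRGB channel to linear light."""
    c = c8 / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    """Relative luminance of an (r, g, b) colour, 0.0 (black) to 1.0 (white)."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # black on white
print(round(contrast_ratio((150, 150, 150), (255, 255, 255)), 1))  # grey on white
```

Note that contrast is necessary but not sufficient: two colours can have good luminance contrast and still be indistinguishable to a colour-blind user, which is why filter-based testing remains important.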

Hearing impairments Within this category we find problems such as deafness and hyperacusis (excessive sensitivity to sound). Hyperacusis may be addressed by providing mechanisms that allow users to configure audio feedback. UX designers should be particularly concerned with systems that use sound to draw attention to critical issues. Sound should be complemented with alert mechanisms using alternative channels, for instance combining an alarm with visual feedback and the colour red to convey a sense of urgency.

Motor impairments With particular relevance to UX design, we should consider in this category the problem of repetitive strain injury. The problem occurs when users have to repeat the same operation very frequently, either with the keyboard or with the mouse. The problem is recurrent in form filling. In general, a key solution to reducing repetitive strain injury is to offer diverse interaction methods: the more diversity the better. Often, form filling can be done using input boxes, drop-down menu options, and the keyboard. Ideally, all options should be available. Having just a drop-down menu with a list of options can be very convenient for many users, because it reduces the need for recall. However, a user with repetitive strain injury may prefer recall over interacting with drop-down menus, which may require too much effort with the mouse. As a general accessibility rule, forms should always include an extensive list of keyboard shortcuts to avoid excessive mouse use. Ideally, it should be possible for users to fill in a form without using the mouse at all. Once again, this feature is not only useful to people with motor impairments; it may be useful to everyone.

Cognitive impairments This category includes various types of disabilities, such as comprehension and learning disabilities. A strategy for dealing with this type of impairment is to make the UI simple and intuitive to use, a strategy that also works for visual impairments, as previously discussed.

Two other aspects to consider are how to make information more perceptible and how to increase tolerance for error. As a general accessibility rule, every UI should provide undo/redo options for every user interaction as a basic strategy for tolerating errors. More sophisticated strategies can also be devised, such as improving legibility and readability, and suggesting preferred options.

Diversity Suppose you were involved in the design of a voting kiosk. What should be the height of the kiosk? Designers may use anthropometric databases with averages and standard deviations for a specific population and target age groups. But would you design for the average person? Would you design for the standard deviation instead? What would happen to the people not falling within the standard deviation? The intriguing aspect of this problem is that voting is an inherent part of democracy, and democracy is about every citizen, not the average or the standard deviation. So you would have to consider very tall and very short persons, people voting from wheelchairs, and people with back problems. That is an example of diversity.

Accessibility accommodates a range of user needs that have been categorised by governments and other organisations as impairments. Often such classification is done with the purpose of legally requiring designers to comply with minimum requirements, or of suggesting best practices that increase inclusion. The notion of diversity goes beyond the disability boundaries by recognising that people have diverse needs and such needs should be accommodated by design. The problem for UX designers is that diversity is an open concept. If accessibility already considers a wide range of problems, diversity widens the range even further. For instance, we already discussed blindness and how it can be addressed using screen readers. People who are not legally blind will probably never use such technology, even if they would benefit from it (often because they do not want to be recognised as having an impairment).
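The kiosk discussion can be made concrete with a small calculation. The sketch below is illustrative only: the mean and standard deviation are invented, not taken from a real anthropometric database. It shows why designing for the average, or even for a common percentile range, still excludes real users:

```python
from statistics import NormalDist

# Invented figures: assume standing eye height in some population is
# roughly normal with mean 163 cm and standard deviation 7 cm.
eye_height = NormalDist(mu=163, sigma=7)

# Designing for the "average person" serves the 50th percentile only.
average = eye_height.inv_cdf(0.50)

# A common design heuristic is to accommodate the 5th to 95th
# percentile range -- but that still leaves out one person in ten,
# plus everyone the distribution does not describe at all
# (wheelchair users, for instance).
p5 = eye_height.inv_cdf(0.05)
p95 = eye_height.inv_cdf(0.95)

print(f"average: {average:.1f} cm")
print(f"5th-95th percentile range: {p5:.1f} to {p95:.1f} cm")
```

The point of the exercise is exactly the one made above: the arithmetic of averages and percentiles, however rigorous, cannot by itself decide who gets excluded from voting.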

Figure 13.13 Zoom feature in Mac OS

Figure 13.14 The option to change font size in iBooks caters for diversity of needs


Figure 13.15 The AssistiveTouch feature of iOS allows users to access menus in more diverse ways


Therefore designers need to consider other, less extreme options for people who are not blind but who also do not see that well. One option is zoom. For instance, Mac OS offers a zoom function that allows users to magnify the portion of the screen around the mouse (Figure 13.13). However, that option is quite conspicuous and may be avoided by many users. A less conspicuous option is to allow users to configure font sizes, as in iBooks (Figure 13.14). This option has the advantage that it is subtle and can be configured by the user to address a wide range of needs. In general, we can say that allowing users to subtly configure the way they interact with the UI contributes to diversity. An interesting feature offered by iOS is AssistiveTouch (Figure 13.15). Originally, this function was developed to support people with physical disabilities by providing an alternative, configurable way to interact with menus. However, many people without disabilities also started using this feature, which became very popular on mobile phones. Initially the reason was avoiding the use of the physical home button, based on the myth that it could mechanically fail. But then users got used to the fast and configurable access to menus that AssistiveTouch provides. This is a good example of how accessibility also contributes to diversity, not only of needs but of wants as well.

References Wickens, C.D., Gordon, S.E., Liu, Y. and Lee, J., 1998. An introduction to human factors engineering.


Chapter 14

Design Thinking Design Knowledge Design is an area of knowledge that is quite distinct from the sciences and the humanities. Both the sciences and the humanities established their particular foundations a long time ago. Design, however, is still building its own distinctiveness. Figure 14.1 A comparison between design and other educational domains

                         Design                          Science                      Humanities
Phenomenon of interest   Artificial objects              Natural world                Human experience
Methods                  Modelling, process, synthesis   Analysis, experimentation    Analogy, evaluation
Values                   Technology, practicality        Objectivity                  Subjectivity

Figure 14.1 provides a very short comparison between the three areas. It highlights that UX design is focussed on a distinct phenomenon, the construction of artificial objects. It has also been developing its own methods, which are mainly centred on modelling, the design process and the synthesis of workable solutions. Every knowledge area also tends to emphasise a distinctive set of values. For instance, science emphasises objectivity, while the humanities emphasise subjectivity. Quite distinctively, design is recognised as a practice-oriented domain centred on technology development. Nigel Cross calls this particular combination of phenomena of interest, methods and values designerly ways of knowing. The phrase suggests that there are specific things to know (about the construction of artificial objects), ways of knowing them (emphasising certain values), and ways of finding out about them (methods necessary to understand what objects to build) that pertain to design but not to other knowledge areas.

References Cross, N., 1982. Designerly ways of knowing. Design studies, 3(4), pp.221-227.


Wicked Problems Problem The way designers face problems is different from other disciplines like the sciences and engineering. Usually, in the sciences and engineering, problems are given and cannot be changed. For instance, a mathematical problem such as calculating the multiplication of 5 by 6 cannot be changed. An engineering problem such as building a bridge between A and B also cannot be changed. Design problems, however, can be changed. This happens because design is centred on wicked problems. As defined by Rittel and Webber, wicked problems are ill-defined problems of social systems: they cannot be clearly formulated, information about them is confusing, they involve many stakeholders, they rely upon elusive political judgement to express them, they are intricately connected with other problems, and they may not even generate consensus about what the problem really is. Unlike the engineering problem of building a bridge between A and B, the wicked problem of building a bridge between A and B would also have to consider whether building a bridge is the real problem at all and, if yes, where A and B should be located. There are multiple alternatives to building a bridge, including not building a bridge. And the choice of A and B can never be tamed because it depends on political criteria such as economic impact, cost, environmental impact, public acceptance, sustainability, etc. Problems such as global warming, pollution, poverty, and traffic in big cities are obvious examples of wicked problems. But developing a UI for a system may also, in many circumstances, reveal itself as a wicked problem, especially if the system is complex, there are too many types of users to consider along with many stakeholders, there is not much time available to study the problem deeply, the clients do not know what they want, the technology is constantly changing, etc.
One characteristic of wicked problems is that they can be seen from different perspectives, which allows designers to expand or reformulate the whole problem, to focus on a particular sub-problem, or even to address a more general problem.

Solution As a practice-oriented domain, design is necessarily focussed on a solution. However, once again, the nature of the design solution is significantly different from scientific and engineering solutions. For instance, in mathematics, the solution to multiplying 5 by 6 is known and has been proved correct, and is therefore not subject to discussion. In physics, the solution to the problem of calculating the escape velocity of an object leaving a planet has a certain level of precision and rigour. In civil engineering, any solution necessarily has to conform to a set of precise and measurable criteria such as cost, safety, security, etc. In UX design, however, the solution does not have to satisfy such a precise set of criteria. It is more a matter of possibility than necessity. Also, the solution has to be evaluated with a good-or-bad judgement, not a right-or-wrong decision. Figure 14.2 A comparison between design and other domains

Other domains                                                 Design domain
The problem is frozen                                         The problem is ongoing and depends on point of view
With enough time and effort, any complex problem can          Problems are wicked and cannot really be solved; we can
be properly analysed and solved                               only suggest artefacts that address the problems
Confusion is unacceptable                                     Confusion is an opportunity
The documents state the facts                                 Any piece of information is an opportunity to find a
                                                              more fundamental problem
Deduce from the facts                                         Explore data in creative ways
Structure existing information using objective criteria       Serendipity and thematic vagabonding
Make a clear recommendation based on the selected criteria    Develop and explore several alternatives
Avoid risks                                                   Take risks

Process Another distinctive characteristic of design is that it does not prescribe any specific process for reaching the solution. This is also unlike the sciences and engineering. For example, if you would like to know the velocity of an object reaching the floor, you will use a known formula. In engineering, building a bridge must necessarily follow well-known and accepted methods. Even for building a simple retaining wall, councils require engineers to submit the calculations before signing off the construction plan. In UX design, however, the process is much more elusive. For instance, even though a significant component of UX design is ideation, designers do not have to follow well-known and accepted ideation methods (even though many exist).

Also, even though it is important to acquire data from users, there is no specific requirement to follow certain methods, and in fact there is no specific requirement to link data acquisition to any subsequent design stage. Finally, UX design is also characterised by intuition, serendipity and thematic vagabonding, where the designer explores different viewpoints, problem frames, and solution possibilities to identify one solution that seems preferable. Therefore the impact of opportunistic behaviour should also be considered crucial to design. All in all, the differences in addressing problems, solutions and processes make design a distinctive domain. Figure 14.2 compares design with other domains.

References Rittel, H. and Webber, M., 1973. Dilemmas in a general theory of planning. Policy sciences, 4(2), pp.155-169. Buchanan, R., 1992. Wicked problems in design thinking. Design issues, 8(2), pp.5-21.

Problem Solving Experimental learning view We have already noted that one component of UX design concerns problem solving. However, what is the nature of the problem solving process in UX design? One approach to understanding this phenomenon regards problem solving as a cyclic learning process involving two main activities: problem framing and problem solving (Figure 14.3). Figure 14.3 Design as experimental learning: Moving between problem framing and problem solving

Figure 14.4 Design as evolution, which includes an aha experience


During problem framing, the designer tries to understand the problem from a particular point of view: the frame. The designer then performs a move towards a solution. The evaluation of that move allows the designer to gather a deeper understanding of the problem, which may lead to a new frame. In turn, the new frame leads to a new move towards a solution. This is an iterative and cumulative process. Considering, as previously noted, that design is centred on wicked problems, the reached solution is only related to one particular framing of the problem and one particular move towards a solution. It does not reflect an optimal solution to the problem. In a way, the actual problem can only be expressed through the adopted solution.

Evolutionary view Design can also be viewed as an evolutionary process (Figure 14.4). The most interesting characteristic of this model of design is the insight/aha experience stage, where a key idea emerges. Often many iterations are necessary to elaborate a key idea into a solution. However, designers can usually trace the final solution back to a key idea that emerged somewhere along the process. This aha experience is what establishes the bridge between the problem and the solution. Design can, abstractly, be seen as the search for the aha experience.

References Curry, T., 2014. A theoretical basis for recommending the use of design methodologies as teaching strategies in the design studio. Design Studies, 35(6), pp.632-646.

Knowledge Funnel The knowledge funnel is an operational framework for creating value, which identifies three basic stages of innovation (Figure 14.5). Figure 14.5 The knowledge funnel


Mystery The funnel starts with mystery (which could also be called inspiration). At this initial stage, knowledge is vague. As previously noted, typical knowledge such as the problem statement, the intended solution and the process required to reach the solution is either missing or a moving target. So the designer does not know much and is actively seeking an opportunity. Often the opportunity emerges with a good question. An example of a good question could be "what to do with a glue that does not work properly but is sticky". That question occurred to a scientist at 3M after an attempt to develop a strong adhesive formula did not work. It resulted in the now ubiquitous Post-it note.

Heuristic Good questions always inspire designers to think and act creatively. So the next stage involves trying to acquire knowledge about the mystery through exploration. As we previously noted when discussing wicked problems, this exploration is elusive. Since problems can be constantly reframed, solutions are just possibilities. Besides, solutions cannot be judged right or wrong. However, the systematic exploration of different possibilities provides a lot of knowledge about the project. This stage is called heuristic because the created knowledge is codified in terms of multiple frame-solution relationships. The created knowledge cannot be rationally justified, for instance using a particular theory or a detailed process model. It can only be explained through the experimentation that produced it.

Algorithm At this final stage, the designer has acquired some certainty about the project and may provide detailed how-to explanations about the proposed solution. This stage is called algorithm to emphasise the empirical nature of the acquired knowledge. It does not take the form of theory (why). It just reflects the method (how). According to this perspective (Figure 14.5), knowledge goes through the funnel from questioning and exploration to implementation. This movement involves reducing the amount of information that has to be processed. Furthermore, as the information moves down, it becomes more useful for design.

References Martin, R., 2010. Design thinking: achieving insights via the “knowledge funnel”. Strategy & Leadership, 38(2), pp.37-41.

Representation So far we have primarily discussed design as a collection of activities (design as a verb). However, we could instead focus on the outputs of these activities (design as a noun). For instance, some core design activities generate outputs like conceptual designs, intermediate designs, detailed designs, and prototypes. Each one of these outputs provides a different representation of the system being designed. In a more abstract way, we can say that designers use representations to understand and explain reality (Figure 14.6). Such representations can exist purely in the mind, as mental representations. However, we will focus here exclusively on external representations, which are stored in physical documents. Figure 14.6 Relationship between representation and reality

The link from reality to the external representation must be established using a particular symbolic system. The symbolic system is necessary to allow moving from reality to representation and back in a systematic way. UX design uses three common symbolic systems. Natural language: for instance, a design brief is a piece of text that uses natural language to express the design project in terms of requirements and constraints. Sketches: for example, wireframes and storyboards fall into this category. Visual models: collections of visual elements that follow a set of formal rules, which describe the types of elements that can be represented and the types of relationships between them.

Models are always formal. If a visual model does not follow a set of formal rules, then it should be classified as a sketch. The advantage of using models is that the set of rules avoids ambiguity in communicating the representation to various types of audiences. Figure 14.7 Relationships between types of representation and steps in the design process

Design phases         Type of representation
Problem structuring   Natural language
Preliminary design    Sketches
Design refinement     Restricted subset of sketches and language
Detailing             Models

For example, you can design a bridge with a set of sketches, but it is better to use a visual model; otherwise the builders will not understand the exact characteristics of the bridge. Vinod Goel, in "Sketches of Thought", observes that the types of representations used in design correlate with the steps in the design process. Figure 14.7 illustrates these relationships. This suggests that, as the design process evolves, the type of representation also evolves from the more open to the more restricted forms of external representation.

References Goel, V., 1995. Sketches of thought. MIT Press.

Chapter 15

Design Theory Creation of Artefacts Herbert Simon, in the book "The Sciences of the Artificial", contrasts design with science, noting that while science is focussed on understanding the complexity of the natural world, design is centred on the artificial world. The artificial world is where we build artefacts such as chairs, airplanes, computers, and UIs. A recurrent problem with the art and science of building artefacts is that there is an infinite number of ways an artefact could be built. For instance, we can create an infinite number of chairs. So, should the mere act of building an artefact be called design? Herbert Simon argues that design is indeed more than just creating an artefact. It is the process of finding a satisfactory set of actions necessary to create an artefact. In more abstract terms, we may view an artefact as an interface between inner and outer environments (Figure 15.1). The inner environment is what constitutes the artefact: the parts, functions, etc. For instance, an airplane has multiple parts like engines, cockpit, wings, etc. A software system also has multiple parts dedicated to information processing, communications, and of course the UI. Figure 15.1 Simon's view of design: The search for a satisfactory interface between inner and outer worlds


Artefacts must necessarily co-exist and interface with the outer environment. For instance, the airplane has to fly in the earth's atmosphere. A UI has to interact with humans. The interface view of the artefact suggests that a main goal of design is adjusting the inner environment to the outer environment. The outer environment creates constraints for design. For instance, you cannot forget gravity when designing an airplane. You cannot forget that people must be safely seated when designing a chair. And of course you cannot forget the cognitive limitations of humans (e.g. memory and attention) when designing a UI.

The circular arrow shown in Figure 15.1 expresses the idea that adjusting the artefact to the outer environment is an adaptation process. Herbert Simon suggests that the adaptation process can be studied using scientific methods, which in a way suggests that doing design is also doing science. For instance, even though you can design any type of airplane, you will have to design it in a certain way if you want it to fly safely. The creation of the airplane's inner environment will have to be adjusted to the multiple constraints imposed by the outer environment. And the process of adjusting the airplane's components depends on decisions, which can be studied in a scientific way (e.g. investigating which methods generate better planes).

One point to consider, though, is that the adaptation process cannot be fully optimised. As there are too many constraints involved in building an artefact, finding the optimal solution would take infinite time. Therefore the logic of design involves finding alternatives that are "satisficing", a concept expressing the notion that we should accept solutions that are "good enough". As Herbert Simon puts it, we can evaluate whether a design solution is "better" or "worse" than another solution, but we cannot conclude that a design solution is the "best". This way of explaining design has been highly influential.
One reason is that it presents design as a methodical and rational process, bounded only by the acceptance of satisficing solutions. Another reason is that it puts design on a par with science, away from being just a professional practice, like drawing and painting. However, this theoretical proposition of design has been criticised as limited. In particular, it neglects the important role of creativity in design. Furthermore, when discussing the concept of satisficing, Herbert Simon adopts criteria drawn from economics, such as cost and resource availability. Others have argued, however, that design goes beyond the merely economic perspective.
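Simon's notion of satisficing can be read as a search strategy: examine alternatives one at a time and stop at the first that meets an aspiration level, rather than scoring all of them to find an optimum. The sketch below is a minimal illustration under invented scores and an invented threshold, not a procedure Simon himself proposed:

```python
def satisfice(candidates, is_good_enough):
    """Return the first candidate meeting the aspiration level,
    examining alternatives one at a time (bounded rationality),
    rather than scoring them all to find the optimum."""
    examined = 0
    for candidate in candidates:
        examined += 1
        if is_good_enough(candidate):
            return candidate, examined
    return None, examined

# Hypothetical design alternatives with an invented quality score in [0, 1].
alternatives = [("A", 0.55), ("B", 0.62), ("C", 0.81), ("D", 0.97)]

# Aspiration level: "good enough" means a score of at least 0.8.
choice, examined = satisfice(alternatives, lambda c: c[1] >= 0.8)
print(choice, examined)  # ('C', 0.81) 3 -- D is never examined
```

Note that the search accepts C without ever looking at D, even though D scores higher: the result is "good enough", not "best", which is precisely the distinction Simon draws.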

References Simon, H., 1978. The sciences of the artificial. The MIT Press. 


Reflection in Action Donald Schön was interested in understanding how professionals decide on the means best suited to implement technical solutions. He noted that, while in certain areas such as systems analysis and policy analysis the technical problems can be well defined, in other areas making decisions does not depend on fixed phenomena and well-known techniques. This discrepancy between well-defined practice and muddling-through practice is somewhat problematic for professionals. Should they stay away from muddling through? Or should they instead embrace trial and error? Perhaps more importantly, Donald Schön asked whether this muddling-through behaviour corresponds to professional sloppiness, or is instead part of a different theory of knowledge. This discussion is of course important to design, since design is one area where technical problems are not well defined. Quite the contrary: design is focussed on wicked problems. Based on a body of research around professional practitioners in multiple areas of knowledge, Donald Schön proposed a theory of professional behaviour known as reflection in action. Reflection in action is the process of detecting anomalies in tasks and solving them through reflection. As an anomaly emerges, the professional asks "what is this?" and "what understanding led to this?" The process then evolves towards restructuring the understanding of the situation and reframing the problem. From the new problem frame, the professional develops, or invents, a new solution. The solution is then tried and the results are interpreted. If a new anomaly is detected, the process starts all over. Even though this process seems similar to improvisation, there is a key difference that should be highlighted: the continuous cycle of reflection. Reflection empowers the professional with new knowledge.
Professionals such as designers, engineers and doctors continuously reflect on the problem frames and on the solutions they are developing, in terms of learning, diagnosis, performance, efficiency, problem solving strategy, etc. Such reflections are part of the practice and an important part of the value of the services they provide. Donald Schön also developed the concept of reflection on action, complementary to reflection in action. Reflection on action is centred on looking back at what happened during the process and making what has been learnt explicit (Figure 15.3).


Figure 15.2 Reflection in action

Figure 15.3 Reflection as a combination of reflection in action and reflection on action


References Schön, D., 1992. The crisis of professional knowledge and the pursuit of an epistemology of practice. Journal of Interprofessional Care, 6(1), pp.49-63.

Design Cognition A lot of research has been focussed on understanding the fundamental traits of design behaviour. In the following, we summarise some of the findings.

Problem viewing Designers always view problems and initial conditions as wicked, ill-defined or changeable. Studies of designers indicate that, even when the given problems and initial conditions are not wicked, designers tend to look at them in that way.

Solution orientation It has been observed that designers do not follow the traditional problem-analysis pattern, which starts with a given problem, proceeds with problem analysis, identifying attributes and relationships, and finishes with a solution maximising certain attributes. Instead, designers always follow a solution-oriented pattern, which moves immediately from the problem to a partial solution. Then, the qualities of the solution are analysed. If the solution is not satisfactory, the problem is reconsidered (problem framing) and another partial solution is developed. Expert designers never get stuck in analysis paralysis. There is some empirical evidence suggesting that successful designers ask for less information, prefer building problem frames in more experimental ways, and define success criteria early in the design process. In contrast, less successful designers seem to spend more time acquiring information, delaying the actual design activities.

Experience More experienced designers tend to use generative reasoning (creativity, problem framing) instead of deductive reasoning (analysis).

Problem setting Problem setting (or problem scoping) is the process of naming the features of a problem that will be attended to by the designer (constraints, requirements). Designers select the features of the problem domain that they choose to attend to.

Problem framing Problem framing is the process of viewing the features of a problem from a certain point of view. Designers select the frames of the problem that they choose to explore. It has been observed that experienced designers are better at problem framing, as they have strong paradigms that help them view problems in certain ways.

Studies of outstanding designers suggest that they excel at exploring a problem from a particular perspective, one which challenges them to innovate. Studies also show that spending more time on problem framing benefits quality. It has also been observed that designers work in bursts of problem framing throughout the whole project, not only at the beginning. Designers tend to generate an initial set of solutions (a solution kernel) at the beginning of the project, though sticking to that set may lead to fixation.

Fixation It has been observed that inexperienced designers have a greater tendency to fixate on early solutions, instead of exploring alternatives. However, it has also been observed that outstanding designers are fixated on certain problem frames, which makes them more tenacious in the pursuit of certain personal goals. Inexperienced designers often fixate on given examples. They also fixate on their educational background.

Alternatives Studies have shown that generating a large number of alternatives and generating a small number of alternatives are equally weak strategies and lead to poor solutions. Expert designers have been observed to explore a smaller number of alternatives than novices. However, the main reason may be that they have rich repertoires of previous cases to work with. It has also been found that precise problems lead to the generation of more alternatives than vague problems. One explanation for this phenomenon is that precise problems give less scope for problem framing, which has to be compensated by generating more alternatives.

Creativity Creativity has been associated with the aha experience. Further studies indicate that aha experiences often emerge from the designer's self-realisation of the fixation phenomenon. By breaking a fixation, the designer can develop a more creative solution.

Sketching It has been reported that sketching helps designers consider multiple aspects of the design.


Opportunism Studies show that designers tend to behave in non-systematic ways, for instance switching between top-down, bottom-up, breadth-first, and depth-first approaches to design. In particular, aha experiences tend to result in opportunistic jumps.

Time It has been found that expert designers spend about the same amount of time doing design as novices. However, experts make more transitions between problem framing and problem solving than novices.

References Cross, N. (2001). Design cognition: results from protocol and other empirical studies of design activity. In: Eastman, C.; Newstatter, W. and McCracken, M. eds. Design knowing and learning: cognition in design education. Oxford, UK: Elsevier, pp. 79–103. Atman, C., Adams, R., Cardella, M., Turns, J., Mosborg, S. and Saleem, J., 2007. Engineering design processes: A comparison of students and expert practitioners. Journal of engineering education, 96(4), p.359. Visser, W., 2009. Design: one, but in different forms. Design studies, 30(3), pp.187-223.

Chapter 16

Design in Business Design-Oriented Organisations Design has often been viewed as belonging to specific niches like the fashion industry, furniture manufacturing and print design. Businesses like banking, insurance and public administration seem completely unrelated. This view has changed, however, and two major instigators were certainly Tom Peters and Tom Kelley.

Tom Peters, in "Re-Imagine", states, plain and simple, that "all innovations come, not from market research or carefully crafted focus groups, but from pissed-off people". Tom Peters then goes on to deconstruct many of the barriers that make some businesses so distant from design. In Figure 16.1 we list some of the structural, behavioural and cultural changes necessary to make organisations more design-oriented. Tom Peters suggests that every business should promote creativity and experimentation, take risks, be agile, and have vision. In short, every business should embrace design and every employee should be a designer.

Tom Kelley, in "The Art of Innovation", explains how his company, IDEO, ended up being an exemplar of innovation and creativity in business, which other businesses approach in search of methods and best practices. IDEO achieved such a status by developing several managerial methods, techniques and practices that turn madness and chaos into business innovation. Some of the key practices adopted by IDEO involve getting inspiration by observing real people in real-life situations, doing a lot of prototyping and brainstorming, and solving problems in teams (Figure 16.2).

All in all, society has been evolving from a product-oriented to a service-oriented model, where business functions are more virtualised and built on top of technology. The frontiers between clients, employees and managers have almost disappeared. And management and control are nowadays more decentralised.
In this context, the distinctions between UX design, system design, product design, service design, process design, and business design are extremely blurred, if not completely artificial. This explains why it makes sense to talk about design and business in the context of UX design. 


Figure 16.1 Traditional versus contemporary types of organisations

Traditional organisation               | Contemporary organisation
Clunky bureaucracy                     | Alliances
Stable, inflexible, procedure centric  | Unstable, flexible, customer centric
Tasks                                  | Projects
Accountants rule                       | Innovators rule
Design department, design outsourced   | Design is integral to the organisation
Technology supports change             | Technology leads change, the network is the organisation
By-the-book management                 | Improvisation
Gigantic, complex                      | Small, simple
Great products and services            | Awesome experiences and solutions
Rank, seniority                        | Meritocracy, ability to contribute
Pessimism                              | Optimism
Rejection of failure                   | Acceptance of failure

Figure 16.2 Traditional versus new managerial methods

Traditional methods              | New methods
Analyse problem                  | Observe people
Deduction from problem analysis  | Inspiration by observation
Static objects and thinking      | Seeing systems in motion
Checking boxes                   | Brainstorming
Lone genius                      | Groups, team problem solving
Build                            | Build to learn

References

Peters, T., 2003. Re-Imagine! London: Dorling Kindersley.
Kelley, T., 2007. The Art of Innovation: Lessons in Creativity from IDEO, America's Leading Design Firm. Crown Business.

Embedding Design in Organisations

The design studio

The highly influential Bauhaus school, created in Germany in 1919 (and closed by the Nazis in 1933), aimed at training people in creative work in combination with technology, science and theory (Figure 16.3). This kind of intellectual integration resulted in the design studio (or design shop).

Figure 16.3 Wassily Kandinsky, the author of this painting, was one of the most famous teachers at the Bauhaus school, where he conducted workshops on colour theory (Source: Public domain)


The design studio should be seen as a method. It is characterised by being practical and process oriented: taking a problem as input and generating action as output. The use of the word "action" is interesting in itself. It has been adopted instead of "making" because it is considered more generic: action includes making, but also signifies that other activities may be accomplished. For instance, "not making" is also a perfectly acceptable action. To illustrate the argument, one of the praised characteristics of Apple is its capacity to reject certain consumer products (in particular cheap ones, and those that do not have a clear purpose) and to refuse incomplete designs. This helps explain why Apple was a late entrant in the mobile phone market.

The design studio utilises a combination of knowledge and technological tools. In particular, scientific knowledge contributes to more informed decisions, technology helps visualise, materialise and experiment with ideas, and art provides inspiration and aspiration. An example of scientific knowledge is the set of Gestalt rules, which were part of the Bauhaus teaching. These rules inform about the relationships between users and systems. However, in the design studio, scientific knowledge does not necessarily take precedence over technology and art. The Bauhaus was truly integrative and free-spirited, and would look at knowledge, technology and art as mutually enriching.

The design studio also relies on a combination of people with different skills, including artists, engineers and artisans, all bringing important capabilities to the design process. The fundamental behavioural structure of the design studio is the workshop. The workshop eliminates hierarchical responsibilities and managerial roles. It involves skilled people in building something: some participants build things, while others contribute to stimulating creative thinking. As a matter of curiosity, workshops at the Bauhaus school would start with breathing and relaxation exercises, and then move on to the problem of the day.

The convergence of design and management

Companies face a chronic conflict between business-as-usual and innovation. The former seeks stability and reliability at all cost and does not like change. The latter seeks to exploit new opportunities, which necessarily affects stability and reliability. Yet companies find it difficult to function without both components: without innovation they decline, and without business-as-usual they collapse.

It has been suggested that a way to overcome this problem is to promote the design studio method in companies. The design studio contributes to generating and exploring ideas, while the business-as-usual part of the organisation contributes to exploiting them. Whenever the organisation finds new problems, challenges and opportunities, a problem-action project is created and the design studio method is engaged. The resulting action can then be assimilated by the other parts of the organisation.

Roger Martin, the main proponent of embedding design in business, identifies three major implications of this approach. The first is that business people have to behave more like designers, working more in the design studio environment than in the traditional office room, and participating more in workshops than in traditional business meetings.

The second implication is that companies will have to develop new structures and reward systems, which reflect not only the importance of problem-action projects but also their ephemeral existence. Finally, the third implication is that companies will have to shift their focus from thinking of design as a confined part of the business towards thinking about the design of the business. That is, thinking about creative, thoughtful and elegant ways of working, producing, organising, managing, coordinating, and controlling.

References

Findeli, A., 2001. Rethinking design education for the 21st century: Theoretical, methodological, and ethical discussion. Design Issues, 17(1), pp.5-17.
Lerner, F., 2005. Foundations for design education: Continuing the Bauhaus Vorkurs vision. Studies in Art Education, 46(3), pp.211-226.
Martin, R., 2006. Tough love. Fast Company, 109, pp.54-57.
Martin, R., 2013. The Design of Business. Rotman on Design. University of Toronto Press.

Managers as Designers

Many managerial decisions are based on data mining and discovery, pursuing the idea that somewhere within the data there is a solution to a problem, or a hint on how to optimise a work process. The design mindset, though, suggests a different approach.

Invention. Invention can hardly be deduced from existing data. It has to come from outside the box, and managers need to stimulate and support out-of-the-box ideas.

Persuasion. The design process is always focussed on developing concrete artefacts, even if they may only consist of conceptual prototypes. People can be persuaded by interacting with these artefacts, and managers should persuade others about the benefits brought by innovative artefacts.

Simplicity. Probably because of their focus on the users, design processes tend to generate clear visions and simple solutions. Quite the contrary, bureaucratic thinking tends to generate rules and complications. Managers should pursue and foster simplicity.

Engagement. The constant focus on ideas, innovation and thinking outside the box engages people in changing the business and the organisation for the better. These changes, though, require managerial support.

Experimentation. Most problems nowadays are wicked and do not have definitive solutions. For managers, this means making decisions without the right information. Experimentation helps progress towards the objective, even when the problem, the solution and the process leading from one to the other are unknown.

Inclusion. Design processes tend to give great consideration to the users, always during the analysis and validation phases, but often also during the design phase, for instance through participatory design, which brings users to the table where decisions are made. For managers, the implication is that power and control become less centralised and more participatory.

References

Liedtka, J., 2013. If Managers Thought like Designers. Rotman on Design. University of Toronto Press.

