Intuition and Cognitive Load Theory

Intuition and Cognitive Load Theory: An investigation of the use of instruction to facilitate intuitiveness of an interface by reducing ineffective cognitive load

Þorgeir Gísli Skúlason - PDP 10 - Master Thesis

Department of Electronic Systems
Engineering Psychology
Fredrik Bajers Vej 7
9220 Aalborg

Title: Intuition and Cognitive Load Theory: An investigation of the use of instruction to facilitate intuitiveness of an interface by reducing ineffective cognitive load
Project Period: 10th semester, spring 2016
Project Group: PDP10-1087
Members:

Thorgeir Gísli Skúlason

Supervisor: Ditte Hvas Mortensen
Number of pages: 126
Appendices: 6
Finished: 16th of June 2016

Abstract

This study presents evidence for the use of instructional methods based on Cognitive Load Theory (CLT) as a means to facilitate the intuitiveness of an interface. A considerable overlap was observed between the dual-processing system from the intuition literature and the automatic and controlled processing system from CLT. An experiment was therefore conducted in which cognitive load was measured during an instruction phase, where participants learned how to use a software product, and the intuitiveness of an interface was measured after participants had used the software to perform two tasks. Participants received instruction in the form of textual instruction, video instruction, or interactive video instruction. A fourth group that received no instruction was included to test whether instruction had any effect on subjective ratings of intuitiveness. Due to the limited sample size and age variation between the participants, it is not possible to draw reliable conclusions from the data. However, the results seem to suggest that instruction leads to more intuitive processing.

Preface

This report was written during the 10th semester at the Department of Electronic Systems at Aalborg University by Thorgeir Gisli Skulason of Engineering Psychology, between February 15th and June 16th, 2016.

Reading guide

Terms in this report are set in italics the first time they occur. Source references follow the Harvard method: references appear with the author's last name and the year of publication. A figure list with citations for all figures can be found at the end of the report.

Acknowledgements

This project was conducted in cooperation with Tempo Software's UX department [1]. I would like to extend my gratitude and appreciation to the following employees at Tempo: Viðar Svansson (VP of Product Management and Design) for the inspiration for this project idea; Arnþór Snær Sævarsson (Lead UX Designer) for his cooperation and guidance throughout this project; Unnur Ösp Ásgrímsdóttir (UX Designer) for her cooperation and support; David O'Donoghue (Technical Writer) for his cooperation and excellent grammar skills; and Þórey Rúnarsdóttir (Data Scientist) for her cooperation and guidance during the statistical analysis. This project would not have been possible without the help of my supervisor, Ditte Hvas Mortensen. Additionally, I would like to thank Fanney Reynisdóttir, Sunna Dröfn Sigfúsdóttir, and Eyrún Diljá Sigfúsdóttir for their help with recruiting test subjects for the experiment. I would like to thank my mother for her support and for proofreading this report. Finally, I would like to thank my girlfriend Arna Dögg Sigfúsdóttir and Bergrós Mjöll, my newborn daughter, for providing me with the love and support that I needed to work on this project, and for tolerating all of the late night study sessions. Thank you all so much for your cooperation and support!

[1] See Tempo's official website: http://www.tempo.io

Contents

1 Introduction  3
    1.1 Initial Problem Statement  3
2 Literature Review  5
    2.1 Intuitive Interaction in Product Design  5
    2.2 Intuition  6
        2.2.1 Dual-Processing System  8
        2.2.2 Intuitive Interaction  10
        2.2.3 Measuring Intuitive Interaction  11
    2.3 Cognitive Load Theory  13
        2.3.1 Active and Passive Learning  13
        2.3.2 Cognitive Schemas  14
        2.3.3 Modality Effect  15
        2.3.4 Cognitive Load Model  16
    2.4 Literature Review Discussion  20
3 Project Concept  23
    3.1 Interactive Instructional Videos  24
4 Problem Statement  27
5 Experiment Design  29
    5.1 Design of the Instructional Material  29
        5.1.1 Textual Instructional Manual  31
        5.1.2 Video Instructions  31
        5.1.3 Interactive Video Instruction  32
    5.2 Measurement Methods  33
        5.2.1 Subjective Ratings  34
        5.2.2 Objective Measurements  35
    5.3 Experiment Methodology  36
        5.3.1 The Main Experiment  38
        5.3.2 Experiment Flow  39
6 Results  41
    6.0.1 Rhythm Measurement Data  42
    6.0.2 Results from the Rhythm Measurement Data  44
    6.0.3 Results from Intuitive Interaction Questionnaire and Subjective Ratings of Cognitive Load  46
    6.0.4 Correlation Between Rhythm Data and Likert Scale Ratings  48
    6.0.5 Frequently Observed Behavior During Task Performance  51
    6.0.6 Summary from the Exit Interview  52
7 Discussion  55
8 Conclusion  61
9 Figure Reference List  63
    9.0.1 Figures Used in the Instructional Material  63
Bibliography  65
A Appendix  69
    A.1 Intuition is a Fuzzy Concept  69
B INTUI Questionnaire  73
    B.1 Effects that Reduce Extraneous Cognitive Load  75
C Email Communication with INTUI Group  77
D Instructional Material  81
    D.1 Textual Instructional Manual  81
    D.2 Video Narration Scripts  88
E Experiment Design  101
    E.1 Design and Construction of the Vocal Booth  101
        E.1.1 Vocal Booth: Version 2  103
    E.2 Programming and Circuit  105
    E.3 Electronics and Foot Pedal  106
    E.4 Arduino Rhythm Measurement Code  107
F Results  111
    F.1 Scatterplots from Participants  111
    F.2 QQ Plots for the Rhythm Measurements  112
    F.3 Observations from the Experiment  114
        F.3.1 Exit Interview  122
        F.3.2 Answers from the Exit Interview  122
Chapter 1: Introduction

Tempo [1] is a company that makes work log management extensions for project management systems such as Atlassian JIRA [2]. Tempo's approach to product design has been to mimic the design principles of the host product as closely as possible. According to Viðar Svansson (Tempo's VP of Product Management and Design), this design approach is assumed to result in a more user-friendly and intuitive product. This assumption is based on the premise that the user is not required to switch between multiple design paradigms (i.e., JIRA and Tempo). Tempo reuses various design aspects of JIRA, such as the location of navigation elements, button design, page margins, typography, iconography, and much more. However, Tempo recently started developing its work log management solution for different and arguably unrelated software domains, such as the chat client HipChat [3]. This presented a challenge to Tempo's designers because no prior solution existed that used a chat client in such a way. Tempo therefore needed a way to design intuitive interfaces without relying on the design guidelines provided by the host product.

1.1 Initial Problem Statement

This leads to an investigation of the psychological factors that play a role when a User Interface (UI) is intuitive. Additionally, this investigation seeks to answer the question of whether it is possible to systematically condition a person to induce the feeling of intuitiveness for a specific software product without the person interacting with the actual product.

[1] Link to Tempo's website: www.tempo.io
[2] Link to Atlassian JIRA website: www.atlassian.com/software/jira
[3] Link to Atlassian HipChat: www.hipchat.com

Chapter 2: Literature Review

2.1 Intuitive Interaction in Product Design

The concept of intuitive interaction is understandably appealing to product-design companies. The idea of unlocking the mysteries of intuition and utilizing it for interaction design has the potential to give designers access to unparalleled knowledge and methods for designing products that users simply know how to use. However, how realistic is this idea? The adjective "intuitive" has become a buzzword in today's technology industry (Blackler et al., 2005; Blackler and Hurtienne, 2007; Ullrich and Diefenbach, 2010a; Ullrich and Diefenbach, 2010b). Here is an example of Apple describing their iOS 9 operating system: "iOS 9 is a big reason you won't find anything else like iPhone. It brings together an elegant and intuitive interface, powerful features, and robust security. It's designed to work as beautifully as it looks. So you can enjoy everything you do — on a device that does everything [1]." Similar uses of the word "intuitive" can be found from companies such as OnePlus [2], Samsung [3], Android [4], and many others. So is it true? Have companies such as Apple actually managed to unlock the mysteries of intuition, or is it just another buzzword used for marketing purposes? According to McEwan et al. (2014), intuition is "the end result of a cognitive process that matches current stimuli with a store of amalgamated experiential knowledge, built up through

[1] A quote from Apple regarding their iOS 9 operating system: http://www.apple.com/iphone-6s/ios-9/ (accessed 11 March 2016)
[2] Link to the OnePlus website regarding the OxygenOS operating system: https://oneplus.net/2/oxygenos
[3] Link to Samsung's news page about the Galaxy S6 Edge smartphone: https://news.samsung.com/global/intuitive-and-streamlined-user-experience-of-the-galaxy-s6-and-s6-edge
[4] Link to the Android Lollipop website: https://www.android.com/intl/en_nz/versions/lollipop-5-0/

time in similar situations. Strictly speaking, a device or interface is not 'intuitive' in and of itself, however, the information processing applied to it can be" (McEwan et al., 2014, p. 2). What this means is that intuition is fundamentally based on previous experiences. Therefore, by definition, using the word "intuitive" in the context Apple provided is rather misleading: partly because it assumes that the reader has previous experience of the entire iOS interface, but also because Apple refers to the interface itself as being intuitive. Setting aside the point that an interface is not intuitive in and of itself, a counter-argument could be made that Apple might have conducted the research necessary to prove that each and every aspect of their operating system interface is in fact "intuitive". However, this claim seems implausible on many levels, mainly because people's prior knowledge of operating systems and technology differs from person to person. While one person might look at the Apple camera icon and see a camera lens, another person might immediately think of the Atlas robot from the Portal games made by Valve Software [5] (see Figure 2.1).

Figure 2.1: Which one is “intuitive” to you?

Perhaps that is not what Apple meant by their use of the term "intuitive"; arguably, the scope of the term encompasses more than just previous experiences. Presumably, the term "intuition" should also cover the subjective experience of using the product and how much effort is needed to perform certain operations within the user interface. If an interaction with a product's UI induces a negative emotional response and requires a great deal of effort, it is difficult to argue that the product in question can be labelled "intuitive", even though the user may have prior experience with it. So what does the term "intuitive" actually imply? In order to answer this question, let us examine the origins of intuition.

2.2 Intuition

Intuition is a popular term that describes various aspects of human cognition and behavior, and it has been adopted by various academic as well as non-academic fields. Gut feeling, sixth sense, psychic, spiritual awakening, and mindfulness are all search results displayed for "intuition". Perhaps unsurprisingly, the definitions and descriptions of intuition

[5] Link to Valve Software website: http://www.valvesoftware.com/

that can be found through a search engine are often shrouded in vagueness and ambiguity. These definitions vary a great deal from one to another and are based on little or no scientific evidence. "Ironically, people's definition of intuitive is, well, intuitive, as they struggle to define the term in a specific, meaningful way" (McKay, 2010). Although there seems to be considerable disagreement concerning the definition of intuition, the underlying theme converges on the point that intuition is an innate ability to understand or know something without knowing how one knows it. Currently, there is no generally accepted, unifying scientific theory of intuition. Seymour Epstein sums up this problem by writing: "Not only do authorities on intuition disagree with each other, they sometimes even disagree with themselves" (Epstein, 2010, p. 295). A more detailed investigation of the problems associated with the definition of intuition can be found in Appendix A.1. Although the definition of intuition is not universally agreed upon, many psychologists do agree that there is something important captured by the construct of intuition (Epstein, 2010). In the article "Exploring intuition and its role in managerial decision making", Dane and Pratt (2007) provide the following list of 17 definitions of intuition from the fields of psychology, philosophy, and management:

Definitions of intuition Definition That psychological function transmitting perceptions in an unconscious way An immediate awareness by the subject, of some particular entity, without such aid from the senses or from reason as would account for that awareness The act of grasping the meaning, significance, or structure of a problem without explicit reliance on the analytic apparatus of one’s craft The process of reaching a conclusion on the basis of little information, normally reached on the basis of significantly more information Immediate apprehension A preliminary perception of coherence (pattern, meaning, structure) that is at first not consciously represented but that nevertheless guides thought and inquiry toward a hunch or hypothesis about the nature of the coherence in question A feeling of knowing with certitude on the basis of inadequate information and without conscious awareness of rational thinking Acts of recognition A nonconscious, holistic processing mode in which judgments are made with no awareness of the rules of knowledge used for inference and which can feel right, despite one’s inability to articulate the reason A cognitive conclusion based on a decision maker’s previous experiences and emotional inputs A tacit form of knowledge that orients decision making in a promising direction The subjective experience of a mostly nonconscious process, fast, alogical, and inaccessible to consciousness—that, depending on exposure to the domain or problem space, is capable of accurately extracting probabilistic contingencies A perceptual process, constructed through a mainly subconscious act of linking disparate elements of information Thoughts that are reached with little apparent effort, and typically without conscious awareness; they involve little or no conscious deliberation The capacity for direct, immediate knowledge prior to rational analysis Thoughts and preferences that come to mind quickly and without much reflection The working of the 
experiential system

Table 2.1: List of definitions of intuition from psychology, philosophy and management (Dane and Pratt, 2007, p. 35)

7

In Table 2.1, a majority of the definitions seem to describe what intuition is not, rather than what it is. Even though some of the authors avoid directly using the term without conscious awareness, they often use similar terms such as nonconscious, subconscious, or with little apparent effort instead. In his or her own way, each author seems to suggest that intuition is some kind of information that is acquired without any conscious, deliberative reasoning involved. However, none of these authors manages to definitively identify what intuition actually is in any substantive way (Epstein, 2010, p. 296). "Where then does this leave us? It leaves us with the view that intuition is a fuzzy construct, and although some of its definitions are of some use descriptively, they are of very limited value scientifically as they indicate nothing about the operation of intuition other than the one definition that states that it operates unconsciously, which several other definitions also imply" (Epstein, 2010, p. 296). The challenge remains to construct a better definition of intuition, or at least to indicate how it operates (Epstein, 2010, p. 296). In Table 2.1, the consensus seems to be that intuition is a rapid, non-conscious, non-verbal, and effortless cognitive process. This begs the question: what is the opposite of intuition? A slow, conscious, verbal, and effortful cognitive process?

2.2.1 Dual-Processing System

In an article titled Demystifying Intuition: What It Is, What It Does, and How It Does It, Epstein (2010) attempts to rectify the definition of intuition by describing it in terms of a dual-processing system. This dual-processing system identifies two information-processing systems in humans. The first, the rational/analytic system, exists uniquely in humans. The second, the experiential/intuitive system, is an associative learning system that humans share with other animals. "Intuition is considered to be a subsystem of the experiential/intuitive system that operates by exactly the same principles and attributes but has narrower boundary conditions" (Epstein, 2010, p. 295). According to Evans (2010), dual-processing theories have sprung up in many fields of psychology, including the study of learning, memory, attention, social cognition, and more. Table 2.2 shows the characteristics of both systems.

Experiential/Intuitive System | Rational/Analytic System
1. Operates by automatically learning from experience | 1. Operates by conscious reasoning
2. Emotional | 2. Affect-free
3. Motivated by the hedonic principle to maximize pleasure and minimize pain | 3. Motivated by the reality principle to construct a realistic, coherent model of the world
4. Associative connections between stimuli, responses, and outcomes | 4. Cause-and-effect relations between stimuli, responses, and outcomes
5. Behavior mediated by automatic appraisal of events and "vibes" from past relevant experience | 5. Behavior mediated by conscious appraisal of events and of potential responses
6. Nonverbal: encodes information in images, metaphors, scenarios, and narratives | 6. Verbal: encodes information in abstract symbols, words, and numbers
7. Holistic | 7. Analytic
8. Effortless and minimally demanding of cognitive resources | 8. Relatively effortful and demanding of cognitive resources
9. More rapid processing: oriented toward immediate action | 9. Slower processing: oriented also toward delayed action
10. Resistant to change: changes with repetitive or intense experience | 10. Changes more readily: changes with speed of thought
11. More crudely differentiated: broad generalization gradient; categorical thinking | 11. More highly differentiated; dimensional and nuanced
12. More crudely integrated: context specific; organized by cognitive-affective networks | 12. More highly integrated; organized by context-general principles
13. Experienced passively and preconsciously: we are seized by our emotions and have uncontrolled spontaneous thoughts | 13. Experienced actively and consciously: we believe we are in control of our reasoning
14. Self-evidently valid: experiencing is believing | 14. Requires justification via logic and evidence

Table 2.2: The difference between the intuitive and reasoning systems (Epstein, 2010, p. 299)

According to Epstein (2010), the two processing systems interact bidirectionally, simultaneously, and sequentially. The experiential/intuitive system is quicker to react, which indicates that people's initial reaction to a situation is usually based on the experiential/intuitive system. If the initial mental response to a situation is identified as unacceptable, the rational/analytic system may be able to modify or suppress its expression. If it is unable to do so, perhaps due to working memory overload, the experiential/intuitive system reacts to the situation outside of conscious awareness. When this happens, the person behaves according to the non-conscious action of the experiential/intuitive system. Unaware of the non-conscious action or behavior expressed by the experiential/intuitive system, the person seeks a rational explanation. Through rationalization based on the hedonic principle of the experiential/intuitive system and the reality principle of the rational/analytic system, the person arrives at the most favorable interpretation that he or she can think of within acceptable reality considerations. The aim of the hedonic principle is to maximize pleasure and minimize pain, while the aim of the reality principle is to construct a realistic and coherent model of the world (Epstein, 2010). In other words, the person seeks to rationalize his or her actions and reactions in a self-enhancing manner. "Thus, rather than just an interaction between single responses in the two systems, the two systems can interact in the manner of a dance, in which a step in one of the systems elicits a step in the other system" (Epstein, 2010, p. 300).

"Why is it that intuition so often dominates reasoning? I think there are two answers to this. The first is that intuitive feelings largely reflect experiential learning. In the real world, dealing with familiar environments, intuition will often serve us well. The second reason is to do with fundamental cognitive architecture of the human mind. Although intuitive processes operate rapidly, in parallel and with no effort, reflective reasoning is quite the opposite. Reflection requires use of central working memory: There is only one such system - it has low capacity, requires high effort, and can be applied only to one task at a time. We are miserly with this cognitive resource because we have to be" (Evans, 2010, p. 323).

The dual-processing theory solves a number of problems associated with the definition of intuition. However, it lacks the ability to explain the underlying relationship between intuition and the rest of the cognitive system, such as how intuition relates to memory and attention, or how to design products that are "intuitive" to use. The problems associated with the definition of intuition are investigated at length in Appendix A.1.

Considering the vagueness and ambiguity associated with the definition of intuition, one might be inclined to avoid the topic and focus instead on specific Human Computer Interaction
Considering the vagueness and ambiguity associated with the definition of intuition, one might be inclined to avoid the topic and focus more on specific Human Computer Interaction 9

(HCI) concepts such as usability, learnability, or familiarity. However, although the scientific community has not agreed upon a precise definition of intuition, the various definitions mentioned in this section do share some generally agreed-upon characteristics. Based on this literature review, intuition is often considered to be rapid, non-conscious, non-verbal, effortless, emotional, and based on previous experiences. These generally agreed-upon characteristics have been noted by HCI and User eXperience (UX) researchers and used towards the goal of formulating a unified view of intuitive interaction. The concept of intuitive interaction is specifically focused on helping designers make user interfaces more intuitive to use (Blackler and Hurtienne, 2007, p. 1). The concept of intuitive interaction is described in the following section.

2.2.2 Intuitive Interaction

In recent decades, work on the theory of intuitive interaction has been steadily gaining pace (Blackler and Popovic, 2015). In the article "Towards a unified view of intuitive interaction: definitions, models and tools across the world", Blackler and Hurtienne compare and contrast two previously independent lines of research on intuitive interaction: one from the German research group Intuitive Use of User Interfaces (IUUI) and one from the Australian research group at Queensland University of Technology (QUT). The combined research of these two groups has laid the foundation for most of the empirical work done in the field of intuitive interaction. Both teams are interdisciplinary, and their approaches are grounded in literature based on experimental evidence. According to Blackler and Hurtienne (2007), the outcomes of the two groups' work have proven complementary: although the groups conducted their research independently of each other, there were many similarities in their work, as well as some differences. Comparing the definitions from the two research groups highlights these similarities and differences:

Australian (QUT) Definition: "Intuitive use of products involves utilizing knowledge gained through other experience(s). Therefore, products that people use intuitively are those with features they have encountered before. Intuitive interaction is fast and generally non-conscious, so people may be unable to explain how they made decisions during intuitive interaction" (Blackler and Hurtienne, 2007, p. 2).

German (IUUI) Definition: “A technical system is intuitively usable if the users’ unconscious application of prior knowledge leads to effective interaction” (Naumann et al., 2007, p. 2).

As can be seen, there are some distinct differences that set these two definitions apart. The QUT group noted an additional fast requirement, while the IUUI group noted an effective requirement (Blackler and Hurtienne, 2007). The effective requirement was also identified in an earlier paper published by the QUT group as one of the criteria used to determine intuitive use when analyzing experimental data (Blackler et al., 2004). "In fact, in many

cases intuitive interaction would likely be both fast and effective" (Blackler and Hurtienne, 2007, p. 13). One of the most important similarities between the two groups' definitions is the non-conscious use of prior knowledge, which has become foundational for both groups (Blackler and Hurtienne, 2007). In fact, several researchers on four different continents, using a variety of products, interfaces, and experiment designs, have all agreed that prior experience is the leading contributor to intuitive use (Mohan et al., 2015, p. 4). Because intuition is non-conscious, it is also non-verbal and non-recallable. When asked why they made a specific decision, people are often unable to explain it. They often reply that they don't know, or they may consciously rationalize their decision based on preconceived assumptions about their personality and general decision-making strategies (Blackler, 2008). Familiarity with features that people have prior experience with allows them to use those features more quickly and intuitively than people with a lower level of familiarity (Blackler, 2008, p. 203), (McEwan et al., 2014). This familiarity can be transferred between different products or systems, which makes it possible for people to use a new product intuitively (Blackler, 2008, p. 229). Empirical studies have concluded that the appearance of a feature has a greater effect on intuitive interaction than its location. However, location does play a critical role in decreasing the search time for individual features: placing features in familiar locations has been shown to decrease response times when looking for a specific feature (Blackler, 2008, p. 214).

Blackler supports the argument of those authors who claim that intuition and intuitive interaction are based on past experience by asserting that empirical research and experiments have all shown that "familiarity with a feature will allow a person to use it more quickly and intuitively" (Blackler, 2008, p. 204). Blackler states that this is the foundational conclusion to come from the research on intuitive interaction. It informs the principles and tools which have been developed for designing for intuitive interaction. Further research in the field of intuitive interaction has also shown that age can affect how quickly and how intuitively people are able to complete tasks (Blackler, 2008).

2.2.3 Measuring Intuitive Interaction

Some of the more recent contributions to the field of intuitive interaction have resulted from the work of Daniel Ullrich and Sarah Diefenbach. In their article "An experience perspective on intuitive interaction: Central components and the special effect of domain transfer distance", the authors indicate that previous research on intuitive interaction has often revolved around the development of a definition and clear-cut criteria. Diefenbach and Ullrich suggested an alternative, more phenomenological approach: their research focused less on the seemingly elusive definition of intuition and more on the experiential phenomenon and the subjective feelings associated with the concept of intuitive interaction. Based on a literature review of various definitions of intuition from the field of Human Computer Interaction and psychological literature on intuitive decision-making, Diefenbach and Ullrich identified four sub-components of intuitive interaction (Diefenbach and Ullrich, 2015): effortlessness, gut feeling, verbalizability, and magical experience. Through personal communication with Ullrich and Diefenbach (see Appendix C), Ullrich specified that the term effortlessness represents mental effort. Therefore, it might be worth investigating whether the sub-components of intuitive interaction have any conceptual correlation to the mental effort assessment factor described in Cognitive Load Theory (CLT). This idea is investigated in Section 2.3.4.

Figure 2.2: The four sub-components of intuitive interaction identified by Ullrich and Diefenbach (2010a)

Ullrich and Diefenbach investigated whether or not the components identified through literature review would also show up as separate scales in a factor analysis. Through several iterations and validity tests, they managed to identify four components which were represented by a set of sixteen items (Ullrich and Diefenbach, 2010a).

“A final main components analysis with the remaining items showed a clear four-factor structure with 79% explained variance and also the internal scale consistency was satisfying (Cronbachs Alpha: Effortlessness: .96; Gut Feeling: .85; Verbalizability: .84; Magical Experience: .81)” (Ullrich and Diefenbach, 2010a, p. 6).
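The internal-consistency values quoted above are Cronbach's alpha coefficients. As a rough illustration of how such a coefficient is computed from raw questionnaire responses, the sketch below uses made-up 7-point ratings, not Ullrich and Diefenbach's actual data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point ratings from five respondents on a four-item scale
ratings = np.array([
    [6, 7, 6, 7],
    [5, 5, 6, 5],
    [2, 3, 2, 2],
    [7, 6, 7, 6],
    [4, 4, 3, 4],
])
print(round(cronbach_alpha(ratings), 2))  # high alpha: items vary together
```

Values close to 1, such as the .96 reported for the effortlessness scale, indicate that respondents answer the items of a sub-scale consistently.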

The outcome of their study was a questionnaire, called the INTUI Questionnaire, that can be utilized to evaluate the intuitiveness of an interface. A sample of the questionnaire can be seen in Figure 2.3, and the entire questionnaire can be seen in Appendix B. The purpose of the INTUI questionnaire is to measure the intuitiveness of a product on a 7-point scale between two bipolar statements. The INTUI Questionnaire assesses the sub-components of intuitive interaction with a set of 17 questions, one of which measures a global rating of intuitiveness (Ullrich and Diefenbach, 2010a). In a recent article, Blackler and Popovic review several of the recent contributions in the field of intuitive interaction. According to Blackler and Popovic, the INTUI model proposed by Ullrich and Diefenbach is composed somewhat differently from previous frameworks proposed in the field. However, Blackler and Popovic assert that “none of these potential properties of intuitive use are incompatible with those already proposed in earlier work. Instead they allow for a more subjective view on the part of users” (Blackler and Popovic, 2015, p. 4).


Figure 2.3: A sample of the INTUI Questionnaire (Ullrich and Diefenbach, 2010a)

2.3 Cognitive Load Theory

Cognitive load refers to the load that performing a particular task imposes on a learner’s cognitive system (Paas and Van Merriënboer, 1994a). Cognitive Load Theory (CLT) provides a framework for investigations into cognitive processes and instructional design by simultaneously considering the structure of information and the cognitive architecture which allows learners to process that information (Paas et al., 2003a). Originating in the 1980s, CLT has become an acknowledged and broadly applied theory within the field of learning and instructional design (Van Merriënboer and Sweller, 2005), (Hollender et al., 2010), (Park and Brünken, 2015). CLT is built on the foundation of well-established cognitive psychological constructs such as schema construction, modality of information, and the distinction between Working Memory (WM) and Long-Term Memory (LTM) (Paas et al., 2003a).

2.3.1 Active and Passive Learning

Fundamentally, CLT is based on the premise that the human cognitive system has a limited capacity to process and retain information. CLT recognizes that WM can only hold approximately five to nine information elements at the same time (Miller, 1956). Most novel information can only be held in WM for a few seconds before it decays, although rehearsal can extend the duration of information in WM indefinitely (Sweller et al., 2011). This limited capacity of WM is further reduced when the information elements interact (Choi et al., 2014). This is an important factor to consider in the design of instructional material. If too much information is presented at once, or too quickly, it is likely that the information will decay, or be replaced by other information, before the person is able to transfer it to LTM. According to Van Merriënboer and Sweller (2005), information retained within LTM can be altered through the process of learning, which can happen in two ways: by passively obtaining information directly from another human through instruction or observation, or through the active generation of new information through a process of problem solving (Van Merriënboer and Sweller, 2005, p. 154). Arguably, the line between what constitutes an active or passive learning experience is arbitrary. Consider the process of reading text: one might argue that reading text is a passive observation of textual information that someone else wrote.

However, a certain amount of active engagement must be involved in order to translate the series of squiggles and lines of text into meaningful phonological information. In fact, one might even argue that all learning must be active on some level. For example, a brain with no active exchange of information between neurons is not able to form thoughts, process information, or build new memories; therefore it cannot learn. On the other hand, one might not be consciously aware of all of the active processes that are going on inside the brain. These considerations align with Merrill’s (2002) first principles of instruction:

“(a) Learning is promoted when learners are engaged in solving real-world problems. (b) Learning is promoted when existing knowledge is activated as a foundation for new knowledge. (c) Learning is promoted when new knowledge is demonstrated to the learner. (d) Learning is promoted when new knowledge is applied by the learner. (e) Learning is promoted when new knowledge is integrated into the learner’s world” (Merrill, 2002).

2.3.2 Cognitive Schemas

Cognitive schemas serve a useful purpose for both WM and LTM because they reduce the number of individual interacting elements that need to be processed simultaneously (Paas et al., 2003a). This is possible because “schemas are used to store and organize knowledge by incorporating or chunking multiple elements of information into a single element with a specific function” (Choi et al., 2014, p. 226). These interacting elements would far exceed working memory capacity if each element had to be processed separately by the cognitive system (Paas et al., 2003a). According to Paas et al. (2003a), people are able to reverse the letters of written words in their mind. This is possible because a schema is available for the written word, along with lower-level schemas for the individual letters and further schemas for the specific shapes that make up the letters. This complex set of interacting elements, the authors argue, can be manipulated in WM because of schemas, which are held in LTM (Paas et al., 2003a). Over time, constructed schemata may become automated. Automation of a schema is a result of repeated application of the schema (i.e., through practice). Schema automation can free up working memory for other activities because an automated schema directly steers behavior without the need to be processed consciously in WM (Van Merriënboer and Sweller, 2005). A comparison of the descriptive characteristics of automatic schemas and the theory of intuition reveals a striking number of similarities, which are further discussed in Section 4. The immense store of schematically organized information in LTM is central to all human cognitive activity. Virtually everything humans see, hear, or even think about is critically dependent on information stored in LTM (Van Merriënboer and Sweller, 2005). LTM is considered to be mostly resistant to large-scale changes in existing knowledge structure.
Van Merriënboer and Sweller (2005) argue that this is an evolutionary advantage of the cognitive system, because large and rapid alterations in LTM could have detrimental effects on a person’s ability to solve problems that previously could be solved readily (Van Merriënboer and Sweller, 2005). For example, if this thesis report were written in a way intended to convince you, the reader, that the only language you know how to read is Icelandic, you would most likely still be able to read this thesis due to the vast amount of schematically stored information that you have acquired which

incorporates the English language in some way. According to Van Merriënboer and Sweller (2005), the human cognitive system has a specific structure which ensures that rapid large-scale alterations to long-term memory do not occur: a limited WM, which is only able to process a small number of combinations of elements simultaneously. This resistance to change seems to correlate with the principles of intuitive interaction developed by Blackler (2008). The principles of intuitive interaction suggest that using familiar features from the same, or a similar, domain is beneficial for the design of intuitive interfaces. The principles suggest that a design should incorporate as much of a user’s existing schemas as possible; otherwise the user is required to allocate additional, and arguably unnecessary, cognitive resources to the formation of new schemas. However, in some circumstances an existing schema may not exist for a particular problem; in that case, clear instructions should be provided that efficiently utilize the limited resources of WM to maximize learning. The construction of automated schemas takes time and motivation and demands cognitive resources (Van Merriënboer and Sweller, 2005). Although it is sometimes convenient to think about WM as a singular process or structure, it is more accurate to think of WM as consisting of multiple processors that correspond to the modality of the information that is being received (Sweller et al., 2011).

2.3.3 Modality Effect

The modality effect implies that the WM processor that deals with auditory information is different from the processor that deals with visual information (Sweller et al., 2011). This means that when the visual processing in WM becomes overloaded, the auditory channel remains somewhat unaffected. However, some authors argue that although humans do seem to have different processors to handle visual and auditory information, the processors are not entirely independent. In some circumstances, effective WM capacity may be increased by using both visual and auditory processors simultaneously (Sweller et al., 2011). The modality effect has obvious implications for the design of learning materials: the modality of information affects WM and can therefore directly affect cognitive load in various ways. In a study performed by Kalyuga et al. (2000), participants were presented with one of the following: a diagram with a visual text explanation, a diagram with an auditory explanation, a diagram with both visual and auditory text explanation, or the diagram only. The purpose of their study was to examine the role of learners’ experience with respect to instructional design, specifically with respect to dual-mode instruction (i.e., visual and auditory). The results from their study, and other similar studies on the split-attention, multimodal, and redundancy effects, suggest that learning is superior when information is presented in visual format with an auditory instruction, compared to the alternatives (Mayer, 2010), (Kalyuga et al., 2000). According to Kalyuga et al. (2000), the limitations of WM can effectively be expanded by using more than one sensory modality, and instructional materials with dual-mode presentation, such as a visual diagram with auditory instruction, may be more efficient than equivalent single-modality formats. Novice learners benefit the most from the use of multi-modal instructional material.
However, as learners become increasingly experienced with a learning task, the difference between single- and dual-modal formats disappears and eventually reverses to the point

where totally eliminating the text, or oral explanation, has been shown to be superior. The text or oral explanation effectively becomes so redundant that it actually imposes extraneous cognitive load on the experienced learner (Van Merriënboer and Sweller, 2005), (Kalyuga et al., 2000). In practice, many standard multimedia instructional presentations use narrated auditory explanations simultaneously with the same visually presented text. Just think of the last time that you attended a presentation where almost the entire transcript was projected while the presenter was talking. Although this approach may in some ways seem to effectively utilize the modality effect, the problem is that these dual-mode presentations are used under redundancy rather than multi-modal conditions. “From the point of view of cognitive load theory, such duplication of the same information using different modes of presentation increases the risk of overloading working memory capacity and might have a negative effect on learning. Unnecessarily relating corresponding elements of visual and auditory content of working memory consumes additional cognitive resources. In such a situation, elimination of a redundant visual source of information might be beneficial to learning. Moreover, the auditory explanations may also become redundant when presented to more experienced learners. If an instructional presentation forces learners to attend to the auditory explanations continuously without the possibility of skipping or ignoring them, learning might be inhibited” (Kalyuga et al., 2000, p. 135). This indicates that learners’ prior experience with the specified knowledge domain is a fundamental aspect that needs to be taken into consideration during the design of effective instructional systems.

2.3.4 Cognitive Load Model

Fundamentally, CLT describes the interaction between three specific groups of factors called causal factors, cognitive load, and assessment factors. The relationship between these three groups of factors can be seen in Figure 2.4. Causal factors correspond to factors that affect cognitive load, while assessment factors correspond to factors affected by cognitive load (Paas and Van Merriënboer, 1994a). The schematic representation shown in Figure 2.4, hereby referred to as the cognitive load model, was originally proposed by Paas and Van Merriënboer (1994a) in order to illustrate the theoretical cause-and-effect relationship that cognitive load has within the cognitive system. The figure has been modified using Adobe Photoshop to include descriptions of the assessment factors, along with their relation to causal factors. Additionally, a visual representation of the relationship between cognitive load and working memory was included to illustrate the model in more detail.

Causal Factors

Causal factors are considered to have a direct influence on cognitive load (Paas and Van Merriënboer, 1994a), (Choi et al., 2014). The physical learning environment (E) is depicted as a factor that embraces the other two causal factors: task and learner characteristics. The physical learning environment refers to the entire range of physical properties of a place where instruction and learning take place. “These include physical characteristics of learning materials or tools (e.g., texture, color, size, shape,


Figure 2.4: This figure is based on the model originally proposed by Paas and Van Merriënboer (1994a), modified by Choi et al. (2014), and has been further modified in Photoshop to include descriptions of the assessment factors along with their relation to causal factors (e.g., E × T), as well as a visual illustration of cognitive load in relation to working memory based on descriptions from Paas and Van Merriënboer (1994a), Choi et al. (2014), and Van Merriënboer and Sweller (2005).

weight, and sound), the physical attributes of the built environment (e.g., volume, density, lighting conditions, arrangement, and thermal conditions), natural spaces, and the physical presence of other people. It covers sensory stimuli from the environment that can be perceived by human senses, that is, vision, hearing, smell, taste, touch, temperature, and balance” (Choi et al., 2014, p. 229).

Task characteristics (T) refers to the intrinsic difficulty of the task, the type of task, or the manner of instructional design, according to the conceptual distinction made between task and environment. It is not always simple to distinguish the physical learning environment from the learning task, considering that in certain circumstances some elements of the environment should be regarded as part of the learning task when they are essential for learning. For example, if a learner uses a calculator to solve a specific mathematical problem, the calculator should be regarded not just as an object in the physical environment but as a learning tool which is an essential part of solving the task (Choi et al., 2014). Learner characteristics (L) refers to the cognitive capabilities of the learner, along with motivation, preferences, and prior experiences. Considering the interaction between learner characteristics and the physical environment, it should be noted that in real-world situations, both factors always interact. There is no learning without a learner (Paas and Van Merriënboer, 1994a).

Cognitive Load

CLT generally recognizes two specific types of cognitive load that can affect the acquisition of new knowledge (i.e., learning): intrinsic and extrinsic cognitive load. Extrinsic cognitive load can be divided further into effective (i.e., germane) or ineffective (i.e., extraneous) load.

Intrinsic Cognitive Load

Intrinsic cognitive load refers to the intrinsic complexity of a task. It is determined by the interaction between the learning material and the expertise of the learner, and it depends on the number of elements that must be processed simultaneously in WM. Learning material with a high amount of element interactivity is intrinsically difficult to process unless a previously constructed cognitive schema already exists for it (Van Merriënboer and Sweller, 2005). Intrinsic cognitive load provides a base load that is irreducible by any means other than the construction of additional schemas or the automation of previously acquired schemas (Choi et al., 2014). This suggests that the only way to foster understanding of materials with high element interactivity is to develop cognitive schemas that incorporate the interacting elements. Cognitive schemas reduce the number of interacting elements by chunking them together into smaller packets, which are easier for WM to process (see Section 2.3.2 for more information about cognitive schemas).

Extrinsic Cognitive Load

Extrinsic cognitive load can be either “ineffective (i.e., extraneous cognitive load) or effective (i.e., germane cognitive load)” (Choi et al., 2014, p. 227). Germane load refers to the WM resources allocated to deal with intrinsic cognitive load (Choi et al., 2014). Extraneous cognitive load, on the other hand, is not necessary for schema construction and automation (i.e., learning). It is often a consequence of improper design of instructional material, such as requiring the learner to mentally integrate information sources distributed in space or time, or to search for information needed to complete a learning task (Van Merriënboer and Sweller, 2005). In an article by Van Merriënboer and Sweller (2005), the authors mention the following six effects to

reduce extraneous cognitive load: the goal-free effect, worked example effect, completion problem effect, split-attention effect, modality effect, and redundancy effect. Descriptions of the various effects that can be used to reduce extraneous load can be found in Appendix B.1. A description of the modality effect is presented in Section 2.3.3. “If we consider the various sources of extraneous cognitive load such as those that lead to the goal-free, worked example, split-attention or redundancy effects, the instructional procedures that facilitate learning all seem to involve a reduction in elements that learners need to simultaneously process” (Sweller et al., 2011, p. 125). What constitutes intrinsic or extraneous cognitive load is fundamentally dependent on what needs to be learned (Sweller, 2010). For example, by adding incomprehensible text such as “!nuf si sdrawkcab gnidaeR” into this report, you are being affected by extraneous cognitive load. However, if the purpose of this report was to teach you the skill of reading backwards, one might argue that the cognitive load imposed by this scrambled text would be intrinsic to the learning material. Extraneous and intrinsic cognitive load are additive: together, the total cognitive load cannot exceed WM capacity if learning is to occur (Choi et al., 2014), (Van Merriënboer and Sweller, 2005). This indicates that extraneous load is only problematic to learning when intrinsic load is high. “If intrinsic load is high, extraneous cognitive load must be lowered; if intrinsic load is low, a high extraneous cognitive load due to an inadequate instructional design may not be harmful because the total cognitive load is within working memory limits” (Van Merriënboer and Sweller, 2005, p. 150).
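The additivity claim above can be illustrated with a deliberately simplified sketch; the capacity constant and the load values below are arbitrary placeholders for illustration, not measured quantities:

```python
WM_CAPACITY = 7  # rough element limit following Miller (1956); a simplification

def learning_possible(intrinsic_load, extraneous_load, capacity=WM_CAPACITY):
    """Toy illustration of the additivity claim: learning can occur only
    while the total load stays within working-memory capacity."""
    return intrinsic_load + extraneous_load <= capacity

# High intrinsic load leaves no room for a poorly designed instruction...
print(learning_possible(intrinsic_load=6, extraneous_load=3))  # False
# ...while the same extraneous load is harmless when intrinsic load is low.
print(learning_possible(intrinsic_load=2, extraneous_load=3))  # True
```

The two calls mirror the quoted rule: the same extraneous load is harmful or harmless depending on the intrinsic load it is added to.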

Assessment Factors

Assessment factors can be described as a product of the interaction between causal factors and cognitive load. Assessment factors allow a researcher to indirectly assess the amount of cognitive load that a participant experienced during learning. This can be done by measuring mental load, mental effort, or performance (Paas and Van Merriënboer, 1994a).

Subjective Assessment

Various techniques have been developed to assess mental load, mental effort, and performance. Perhaps the most sensitive measure currently available to differentiate the cognitive load imposed by different instructional procedures is the simple subjective rating scale, regardless of the wording used (mental effort or difficulty) (Paas et al., 2003b), (Sweller et al., 2011), (Schmeck et al., 2015). In a subjective rating, participants are asked to rate their level of mental effort (or the difficulty of the learning material) during instruction on a Likert-type scale, ranging from one (extremely low mental effort) to seven (extremely high mental effort). It has been demonstrated that subjective ratings of mental effort or difficulty are sensitive to relatively small differences in cognitive load and that they are valid, reliable, and not intrusive (Paas et al., 2003b), (Paas and Van Merriënboer, 1994b). Interestingly, as can be seen in Figure 2.5, the subjective rating questions in CLT seem almost identical to the Likert-type scale questions in the INTUI intuitive interaction questionnaire from Section 2.2.3. This similarity is reinforced by the fact that one

of the sub-components of intuitive interaction is effortlessness (i.e., mental effort). Mental effort is also an assessment factor of CLT. However, while the questions described in CLT focus on measuring mental effort and difficulty during learning, the INTUI questionnaire is intended to be administered after product use.

Figure 2.5: A side-by-side comparison between the subjective rating questions of mental effort and difficulty from CLT and the INTUI questionnaire
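Analysis of such subjective ratings typically reduces to per-condition descriptive statistics. The sketch below uses hypothetical 7-point mental effort ratings; the condition names and values are invented for illustration and do not come from the experiment described in this thesis:

```python
from statistics import mean, stdev

# Hypothetical 7-point mental effort ratings (1 = extremely low,
# 7 = extremely high), grouped by instruction condition.
ratings = {
    "text":        [5, 6, 4, 5, 6],
    "video":       [3, 4, 3, 5, 4],
    "interactive": [2, 3, 3, 4, 2],
}

for condition, scores in ratings.items():
    print(f"{condition}: mean effort {mean(scores):.1f} (SD {stdev(scores):.2f})")
```

With real data, such group means would then be compared inferentially (e.g., with an ANOVA) rather than just descriptively.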

Objective Assessment

In addition to the subjective self-reported assessment methods, researchers have also developed various direct, objective assessment techniques to measure cognitive load. These objective assessment techniques include observations of behavior, physiological measures such as heart rate or brain activity, primary task performance, and secondary task performance (Brünken et al., 2003). All of these measurement techniques have their advantages and disadvantages. Consider a scenario where cognitive load is being measured while a participant runs a 200-meter sprint while listening to an audio instruction. It would be considerably challenging, if not impossible, to assess the runner’s cognitive load using a heart rate monitor due to the significant effect that the primary task (i.e., running) has on the heart rate. In this scenario, an effective measurement technique could be to assess the participant’s performance on a secondary task, such as measuring reaction time to an auditory signal. “Although secondary task performance is a highly sensitive and reliable technique, it has rarely been applied in research on CLT” (Paas et al., 2003b). It is therefore very important to carefully choose an appropriate measurement technique based on the primary task.

2.4 Literature Review Discussion

The initial problem statement questioned whether it was possible to systematically condition a person to induce the feeling of intuitiveness for a specific software product without interacting with the actual product. Based on the literature on CLT and intuitive interaction, it appears that intuitive interaction can be affected by instruction. Additionally, the descriptive similarities between the experiential/intuitive system (from intuition literature) and automatic processing (from CLT) indicate that these two theories overlap. In Figure 2.6 the descriptive similarities between the dual-processing system

and two of the assessment factors in CLT (automatic and controlled processing) can be seen. One might even argue that these two concepts are one and the same.

Figure 2.6: Comparison between some of the defining characteristics of CLT and Dual-Processing Theory from intuition literature

Based on these similarities, and the fact that subjective measurements of cognitive load are seemingly identical to two of the questions in INTUI’s intuitive interaction questionnaire, it was assumed that subjective and objective measurements of cognitive load during learning were directly correlated with subjective ratings of intuitive interaction. Various continuous cognitive load measurement techniques were considered for the instruction phase, including measuring reaction time to a visual stimulus, galvanic skin response, heart rate, and EEG. However, these methods were either considered to be too intrusive, or the modality difference between text and video instructions might cause biased results. For example, a modality bias might be expected between text and video instruction while measuring cognitive load with a visual reaction task. In such a task, participants are asked to press a button every time they observe a change in a particular visual stimulus, such as a circle turning into a square. This type of secondary task has proven effective for measuring cognitive load in visual learning scenarios. However, because the video instructions included narrated audio, while the textual instructions did not, the modality difference between text and video might bias the results in favor of the video instructions. It was therefore decided to utilize a relatively new cognitive load measurement technique developed by Park and Brünken (2015) called “The Rhythm Method”. According to Park and Brünken (2015), the rhythm method has been shown to be effective at measuring the overall cognitive load of participants during learning. Although the modality of this method has not yet been empirically determined, the authors indicate that the “rhythm method is assumed not to be modality specific in comparison with visual or auditory secondary tasks, which were introduced in cognitive load research so far” (Park and Brünken, 2015).
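The core idea of the rhythm method is that participants maintain a practiced rhythm while learning, and deviations from that rhythm index overall cognitive load. Park and Brünken's exact scoring procedure is not reproduced here; the sketch below, with invented tap timestamps and an arbitrary target interval, only illustrates the general principle:

```python
import statistics

def rhythm_load_index(tap_times, target_interval=0.6):
    """Mean absolute deviation (in seconds) of inter-tap intervals from the
    target rhythm; larger values suggest higher overall cognitive load."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return statistics.mean(abs(i - target_interval) for i in intervals)

# Hypothetical tap timestamps (in seconds) recorded during instruction
steady    = [0.0, 0.61, 1.19, 1.80, 2.41]   # rhythm held well: low load
disrupted = [0.0, 0.70, 1.55, 1.95, 2.90]   # rhythm breaks down: high load
print(rhythm_load_index(steady), rhythm_load_index(disrupted))
```

A higher index for the second series would be interpreted as the learning material consuming more of the cognitive resources otherwise used to keep the rhythm constant.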
When deciding what specific task participants would need to perform in the experiment, there were a couple of considerations. According to Sweller et al. (2011), when people are required to process too many chunks of novel information simultaneously, working memory tends to break down. Sweller et al. suggest that no more than two to three novel chunks of

information should be processed by WM at the same time in order to avoid the breakdown of WM. “When processing novel information, the capacity of working memory is extremely limited. We suggest the reason for that limit is the combinatorial explosion that occurs with even small increases in the number of elements with which working memory must deal” (Sweller et al., 2011, p. 43). Considering that participants in the main experiment were required to consciously maintain a specific rhythm during a learning task, it was decided to keep the difficulty of the instruction material to a minimum. Additionally, the learning material and the experiment task needed to be based on a scenario that is comprehensible to most people, meaning that the learning task should not be based on expert knowledge, such as creating quarterly financial reports for administrative purposes. It was therefore decided that the instructional material should teach participants how to create an issue in JIRA and how to log work on that issue using Tempo Timesheets.


Chapter 3. Project Concept

The first part of the initial problem statement was to investigate the cognitive-psychological factors which play a role when user interfaces feel intuitive to use. Through a literature review of intuition and CLT, various similarities between intuition and the automated processing assessment factor described by CLT started to emerge. The first similarity was seen between the descriptive characteristics of automatic processing in CLT and the experiential/intuitive system described by intuition literature. The second similarity was seen when the subjective questions of cognitive load were compared to the subjective questions of intuitive interaction. When compared side by side, one could argue that they essentially ask the same question, although the phrasing is somewhat different. The main difference lies in the context of their utilization: while the CLT questions are intended as a measurement of mental effort and difficulty during instruction, the intuitive interaction (INTUI) questionnaire is intended to measure mental effort and difficulty after product use. This seemed to indicate that both theories were relevant to this project. Therefore, it was decided to measure cognitive load during instruction, while intuitiveness was measured after product use. The following project concept was developed in collaboration with Tempo’s UX department. The project idea was to investigate two of the instruction methods that Tempo utilized on their website (textual and pictorial documentation, and narrated video instructions) and see what effect these instructional methods had on the intuitiveness of a specific Tempo product. The plan was to objectively measure the cognitive load of three groups of participants while they learned (or didn’t learn) how to use a specific Tempo product.
The first group would receive no instructions, the second group would receive instruction in the form of a PDF document, and the third group would receive instruction in the form of a narrated video. After participants had completed the instruction phase, they would use Tempo Timesheets to complete a number of tasks. When all tasks were completed, participants would fill in the intuitive interaction questionnaire explained in Section 2.2.3. The implementation of this project would therefore help Tempo to evaluate the effectiveness of these learning methods in relation to cognitive load and intuitiveness. By also including a group that received no instruction, this project might

reveal what happens when new users choose not to educate themselves prior to product use. Various technological advancements in online multimedia have led to an evolution of the conventional online-video format through the implementation of interactive functionality. This evolution essentially adds an interactive layer on top of the online video, making it possible for the viewer to engage actively with the video content by clicking directly on it. In Figure 3.1 the placement of interactive video instructions can be seen alongside four additional learning methods on a conceptual model, which illustrates the theoretical relationship between different learning methods based on each method’s relation to modality specificity and interactivity. The y-axis indicates the modality-specific nature of the learning method, while the x-axis indicates the active or passive nature of the learning method. This is a conceptual model of the relativistic relationship between various learning methods. The placement of each method is based on the literature review of CLT and intuition. However, the model has not been tested, nor verified for accuracy.

Figure 3.1: Proposed theoretical relationship between different learning methods with relation to modality specificity and passive/active learning.

The idea of using interactive videos to facilitate understanding of software seemed rather intriguing because it opens up various instructional opportunities that are fundamentally impossible with non-interactive videos. For example, an interactive video can simulate the experience of interacting with actual software while embedding passive multi-modal instructions directly into the experience. Considering that intuition is gained through practice and experience, it might be interesting to see if this evolution in multimedia has any significant effect on the construction of automatic schemas (i.e., making the interface more intuitive). It was therefore decided to include interactive videos as the fourth group.

3.1 Interactive Instructional Videos

Interactive videos are a relatively recent technological advancement that extends the capabilities of non-interactive videos by allowing the content creator to implement live interactive elements on top of the video clip. An interactive video is often a combination of multiple short video clips, which the user navigates through using interactive elements presented on the screen. These interactive elements can be used to connect multiple different video clips together. This gives the user the ability to influence the video viewing experience by actively engaging with the video. Interactive videos can, in theory, be made to simulate the usage of an actual user interface, albeit with severely reduced functionality compared to the actual software. However, reduced functionality might actually offer some cognitive benefits to learners due to the reduction of extraneous load caused by presenting irrelevant information to the viewer. Video platforms such as YouTube allow the content creator to add annotations to videos, displayed as transparent squares with white edges anywhere within the video playback screen. Video annotations make it possible for the viewer to interact directly with the displayed video while it is playing by clicking on the annotations with the mouse cursor. A YouTube content creator can place annotations within a video that automatically direct the viewer to another YouTube video, playlist, channel, or other approved YouTube content. Alternative services such as Rapt Media1 have taken this concept one step further by allowing the content creator greater control over the design, customization, and logic of the on-screen annotations. In Rapt Media, users are able to create transparent annotations that automatically direct the viewer to a specific web page or video clip. When the viewer clicks on an annotation that links to another video clip, the currently playing video seamlessly transitions to the next clip without any disruption or indication that a new video file has started playing.

Figure 3.2: A visual illustration of the three different layers that make up a Rapt Media interactive video.

A study conducted by Zhang et al. (2006) examined the use of interactive videos in a learning situation. In their article Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness, the authors concluded that "interactive video achieved significantly better learning performance and a higher level of learner satisfaction than those in other settings" (Zhang et al., 2006, p. 1). The theoretical foundation behind that study differs somewhat from what is presented in this thesis. Regardless, it raises an interesting question of whether the use of interactive videos can help to accelerate automatic schema construction by allowing the viewer to engage actively with the instructional material while simultaneously receiving auditory instructions (i.e., multi-modal instructional material). Interactive videos might plausibly combine the 'best of both worlds' by enriching conventional multi-modal video instructions with active learning functionality.

1 Link to Rapt Media's website: http://raptmedia.com

CHAPTER 4

PROBLEM STATEMENT

Tempo is a software company that makes software extensions for the project management system JIRA. Tempo's design approach has been to mimic the overall design and behavior of its host product (i.e., JIRA). This, according to Tempo, is one of the major contributing factors that makes Tempo products intuitive. Although there is no consensus on the precise definition of intuition, certain characteristics show up frequently in its various definitions. Primarily, intuition is regarded as being fast, effortless, non-verbalizable, non-conscious, and based on prior experience. By reducing the extraneous load associated with switching between two separate software design paradigms, Tempo manages to create software extensions that look and feel like a part of JIRA. However, is that sufficient to state that Tempo's design approach is "intuitive"? One may argue that by utilizing users' previously acquired knowledge about the design and behavior of the host product (i.e., JIRA), the user can rely on previously constructed schemas, compared to a situation where the Tempo extension was designed with a completely different look and feel. On the other hand, what happens when Tempo needs to design software solutions where no prior design guidelines exist? Based on the literature review of intuitive interaction and CLT, one might argue that a plausible solution is to provide users with access to well-constructed instructional materials that effectively utilize the limited resources of the human cognitive system to facilitate the construction of automatic schemas (i.e., intuition). Tempo provides users with access to textual documentation and instructional videos on their official website. However, this instructional material is optional, meaning that some users may choose to use it while others may choose to ignore it.
Considering that intuition is developed through experience and practice, it is expected that participants that receive some type of instruction will have a more intuitive understanding of a software interface compared to participants that do not receive any type of instruction prior to product use. This leads to the main hypothesis:

H1: When participants have no prior experience with a particular software system, their subjective ratings of intuitiveness are lower (i.e., less intuitive) than those of participants that receive some form of instruction.

The second hypothesis is based on the assumption that CLT and intuitive interaction are fundamentally correlative theories. It is therefore hypothesized that an increase in extraneous cognitive load during the instructional phase corresponds to a decrease in subjective ratings of intuitiveness (i.e., less intuitive) following product use. CLT describes various effects shown to increase extraneous load in learners. The multi-modality effect specifies that instructional material that effectively integrates visual and auditory instructions in a cohesive and non-redundant way is superior to single-modality instruction. This leads to the second hypothesis:

H2: When participants learn how to use a software system through a video instruction before they interact with the system, their subjective ratings of intuitiveness are higher (i.e., more intuitive), and their objectively measured cognitive load is lower, than those who received instruction in the form of a PDF document and those who received no instruction.

The third hypothesis is based on the premise that interactive instructional videos can help to facilitate schema construction through a combination of the multi-modality effect and the addition of active engagement functionality. Considering that intuition is developed through experience and practice, it is hypothesized that interactive instructional videos can accelerate the construction of automatic schemas by allowing the user to actively learn from personal experience. This leads to the third and final hypothesis:

H3: When participants learn how to use a software system through an interactive video instruction before they interact with the system, their subjective ratings of intuitiveness are higher (i.e., more intuitive), and their objectively measured cognitive load is lower, than those who received no instruction, the PDF document, or the non-interactive video.


CHAPTER 5

EXPERIMENT DESIGN

This chapter details the experiment methodology and the construction of the experiment equipment, video footage, text documentation, and code.

5.1 Design of the Instructional Material

On Tempo's official website, one can find numerous examples of textual and video instructions designed to teach users how to use various Tempo software products. Tempo suggested that the main experiment could either utilize the existing instructional material available on their website, or that new instructional material could be made based on it. Inspection of the video instructional material revealed that the videos were displayed with non-removable subtitles at the bottom of the video screen (see Figure 5.1).

Figure 5.1: A screen-shot from one of Tempo’s instructional videos.


It is reasonable to assume that these subtitles were added in case some users are not wearing headphones while watching the videos. However, according to CLT, when visual representations are presented with accompanying text, they force the learner to invest significantly more mental effort during learning. This can have a negative effect on learning performance due to the increase in extraneous cognitive load imposed by the split-attention effect (Park and Brünken, 2015). Inspection of the textual instructional material revealed that it included numerous links and cross-references to other pages on Tempo's website. In order to provide a better comparison between the learning methods, it was decided to create the learning material for each instructional method from scratch, based on the existing textual documentation on Tempo's1 and JIRA's2 websites. To gain a better overview of how different learning methods affect rated intuitiveness and cognitive load, it was decided to include both text and video instruction as experiment groups. In addition to text and video instructions, a third, more recent type of interactive multimedia, called interactive videos, was also tested. This means that three distinct versions of the same instruction needed to be created. However, to be effective, these three instructional methods must differ fundamentally from one another. For example, textual instructions are designed to be clearly readable; therefore, they are written in a grammatical style compatible with that format. Video instructions, on the other hand, are designed to be easily audible; therefore, the video script is written in a grammatical style compatible with natural spoken language.
Interactive videos also have their own grammatical differences compared to the other two methods. In an interactive instructional video, the script frequently prompts the user to take action by interacting with the screen. In order to provide an impartial comparison between all three learning methods, it was decided to base all instructions on the same fundamental information, while allowing grammatical adjustments corresponding to the natural tendencies of each instructional method. Additionally, the visual information inherently differs between the textual and video instructions: a PDF document containing all of the static images that make up a 7-minute video would amount to approximately twelve thousand individual pictures (e.g., at 30 frames per second, 7 × 60 × 30 = 12,600 frames). Therefore, the visual style of each instructional method varied somewhat. However, the main intention was to make each instructional method as similar as possible without biasing any particular method. All three instructional materials were designed to teach the learner the following two tasks:
• Task 1 - Create a JIRA issue
• Task 2 - Log work on the issue, using the work log Calendar feature

1 Link to Tempo's documentation page: https://tempoplugin.jira.com/wiki
2 Link to JIRA's documentation page: https://confluence.atlassian.com/jira/

5.1.1 Textual Instructional Manual

The instructional manual was written in collaboration with David O'Donoghue, a technical writer working for Tempo. The information contained in this manual was copied from JIRA's and Tempo's documentation websites. The new learning material was designed as a PDF document with all interactive cross-references removed. An introduction text about JIRA and Tempo was included at the beginning of the document to give users some background regarding these two systems. The introduction briefly summarized their core functionality, which is to create issues in JIRA and log work on those issues using Tempo Timesheets. The textual instructional manual can be found in Appendix D.1.

5.1.2 Video Instructions

The construction of both types of videos (interactive and non-interactive) was similar in many ways. Both videos were based on recordings made using QuickTime Player's3 built-in screen-capturing feature. The screen-recording software recorded JIRA and Tempo being interacted with from a user's point of view. Both videos were trimmed, edited, and synchronized to the vocal track using Sony Vegas Pro4. The vocal track was recorded using a Blue Yeti USB microphone5. A portable vocal recording booth was also created to mitigate some of the unwanted audio effects of recording a vocal track in a moderately reverberant room (the narrator's office). The construction of this portable vocal booth is described and illustrated in Appendix E.1, and the final version can be seen in Figure E.4.

Figure 5.2: Final version of the portable vocal booth.

3 Link to QuickTime Player website: http://www.apple.com/quicktime/what-is/
4 Link to Sony Vegas Pro official website: http://www.sonycreativesoftware.com/vegaspro
5 Link to the Blue Yeti microphone website: http://www.bluemic.com/products/yeti/

Both the interactive and the non-interactive videos were constructed from the same screen-recording video files, as well as the video files used in the introduction. The introduction section was recorded using the video recording setup seen in Figure 5.3, which was used to record the video track during the instruction phase. The video track was recorded using a Sony Alpha 6000 camera, a portable light source, A3 paper, and some printed cut-out graphics, such as text, arrows, and icons. The full-length video clip was separated into the following four parts: 1) Introduction, 2) Create JIRA issue, 3) Log work using the work log calendar, 4) Log work using the user Timesheet page. The script used for both the interactive and non-interactive videos can be seen in Appendix D.2.

Figure 5.3: Illustration of the video recording setup.

5.1.3 Interactive Video Instruction

As explained in Section 5.1.2, the construction of the audio and video tracks for both types of videos was very similar. The distinguishing factor was that the interactive video was composed of 28 separate video clips, while the non-interactive video was composed of only four. The interactive video was designed based on the instructional manual "Best Practices for Interactive Video" (Rapt Media). In order to create an effective and engaging interactive video, Rapt Media recommends the following:
• Plan and prepare the overall structure of the interactive video.
• Put yourself in the shoes of the viewer. Would you want to keep clicking through the video?
• Choose a video branching structure. What type of video are you making?
• Plan the placement of your call to action (CTA) points.
• Limit the number of options for each interaction.
• Test, iterate, and improve.

Figure 5.4: A top-down overview of the interactive video. Arrows indicate connections between video clips that require an interaction to progress the video; a yellow line indicates that the next video clip plays automatically once the current clip finishes.

Additionally, an interactive familiarization section was added to the start of the interactive video. The purpose of this familiarization was to briefly demonstrate how to interact with an interactive video. The familiarization section of the video is illustrated in Figure 5.5. The yellow box in the figure illustrates a functionality that is unique to the interactive video. This functionality, called a Call To Action (CTA), represents the point in the video where the viewer needs to interact with the video in order to continue.

5.2 Measurement Methods

The measurement methods utilized in the main experiment can be divided into two categories: subjective ratings and objective measurements.


Figure 5.5: Illustration of the familiarization for the interactive video.

5.2.1 Subjective Ratings

Because the proposed correlation between intuitive interaction and cognitive load is purely theoretical at this point, it was decided to include separate subjective ratings of cognitive load. Various authors have recommended the use of two specific Likert scale questions to measure cognitive load. These questions assess the user's level of mental effort and difficulty during instruction (Cierniak et al., 2009; Sweller et al., 2011; Park and Brünken, 2015; Schmeck et al., 2015; DeLeeuw and Mayer, 2008). The difficulty rating asked the learner to make a retrospective judgment, after the learning session, concerning the lesson's difficulty. The mental effort rating asked the learner to make a retrospective judgment about the level of mental effort invested during the learning session. The questions used to measure cognitive load after the learning phase can be seen in Figure 5.6. Icelandic translations of each question and answer were also included.

Figure 5.6: 7-point Likert scale questions for mental effort and difficulty, with Icelandic translations.

Some evidence from DeLeeuw and Mayer (2008) suggests that subjective ratings of mental effort are most sensitive to manipulations of intrinsic load, while difficulty ratings are most sensitive to indications of germane load. This seems to suggest that the subjective ratings of mental effort and difficulty might be used to indicate what type of cognitive load participants were under. DeLeeuw and Mayer (2008) also argue that extraneous load can be measured using secondary task measurements. However, not all secondary measurement techniques have been verified to assess extraneous load. Therefore, the secondary task measurement method used in the main experiment was primarily used to assess overall cognitive load during learning. Assessment of intuitive interaction was made possible by using the INTUI questionnaire, developed by Ullrich and Diefenbach (2010a). This questionnaire contained a considerable number of relatively complex English words. It was therefore decided to include Icelandic translations for the complex words, such as: intuition, unconsciously, and deliberately. An interesting conflict became apparent during the translation of the term 'intuition', because the Icelandic language has no uniquely equivalent term for it. Intuition, translated to Icelandic, is 'innsæi'. The term 'innsæi', however, also translates directly to the English word 'insight', which arguably does not encompass the same conceptual meaning as intuition. This point is discussed further in Chapter 7. The INTUI questionnaire can be seen in Appendix B.

5.2.2 Objective Measurements

Various measurement techniques have been developed to objectively measure cognitive load during learning. The main requirements, for the purpose of this project, were that the chosen measurement method be non-intrusive and not modality-specific, and that the measurement equipment be portable and not reliant on a time-consuming calibration process. A relatively new secondary-task method called the rhythm method was found to meet all these requirements. The rhythm method was proposed by Park and Brünken (2015) in an article called "The Rhythm Method: A New Method for Measuring Cognitive Load - An Experimental Dual-Task Study". In this method, participants are instructed to maintain a specific rhythm, seen in Figure 5.7, during a baseline measurement and also during a learning phase. Rhythm precision is defined as the mean rhythm, in milliseconds, from the learning phase measurement minus the individual's rhythm baseline in milliseconds (Park and Brünken, 2015). "Therefore, participants with perfect precision received a score of zero. The higher the absolute value of the deviation from zero, the lower the rhythm precision. The precision can be calculated for both inter-tap intervals, the short rhythm component (digitally played: 500 milliseconds), and the long rhythm component (digitally played: 1500 milliseconds)" (Park and Brünken, 2015, p. 238). Some evidence from DeLeeuw and Mayer (2008) seems to suggest that secondary task methods, such as the rhythm method, are effective at measuring differences in extraneous cognitive load. However, this assertion has not been specifically verified or validated for the rhythm method.
Therefore, the purpose of the objective measurement was to measure the overall cognitive load of participants during learning, while the purpose of the subjective measurements was to identify whether the cognitive load was contributing to effective or ineffective schema construction (i.e., intrinsic or germane load).


Figure 5.7: Illustration of the rhythm. The numbers indicate the time between "taps" in milliseconds.

The rhythm data was recorded using a foot pedal, seen in Figure 5.8, which participants operated with their preferred foot.

Figure 5.8: The foot pedal created for the purpose of this experiment.

A measurement device was constructed using an Arduino and various circuit components. A program was then created and uploaded to the Arduino to calculate and output the time between each 'tap' on the foot pedal. The code for this program is shown in Appendix E.4. The construction of the measurement equipment is shown in Appendix E.1.1. Screen recordings were made of each participant's session, with the intention of measuring time-on-task, navigation errors, and more.

5.3 Experiment Methodology

Prior to the main experiment, two separate small-scale experiments were carried out in order to identify possible problems with the experiment methodology and the equipment setup. All problems found were addressed before the main experiment was conducted.

Figure 5.9: Illustration of the Arduino circuit.

The experiment was conducted on a 15-inch MacBook Pro with a Retina screen, with a standard two-button mouse with a scroll wheel connected. The first small-scale experiment involved testing whether the rhythm method could also be used to measure cognitive load during product use. The experiment facilitator therefore attempted to create a JIRA issue while maintaining the experiment rhythm. However, when the facilitator started to type in the description field, he began having problems maintaining the rhythm. In fact, it soon became clear that maintaining the rhythm while performing any complex simultaneous movements with the hands was seemingly impossible. It was therefore decided to seek validation of this problem by performing a small experiment with two participants, who were asked to maintain the experiment rhythm while typing whatever they wanted on the computer. The observations from this experiment confirmed the problem of performing multiple complex movements with the hands and feet simultaneously. Participants seemed to cope well with operating the mouse while maintaining the rhythm. However, small abnormalities, although much less severe, were also observed when participants clicked the mouse button: it seemed as if the concurrent rhythm component was briefly delayed. This effect was so minuscule, however, that it might have been a result of confirmation bias on the facilitator's part. Because of this experiment, it was decided to use only subjective ratings of cognitive load in the product-use phase. Additionally, it was decided to split the non-interactive video into four parts and have participants in the main experiment switch between each part using the mouse. This was decided because both the textual instructions and the interactive video required some amount of interaction, while the non-interactive video did not.
The second small-scale experiment involved testing the experiment flow and identifying problems with the experiment methodology before the main experiment was conducted. One participant was instructed to watch the interactive video while simultaneously maintaining the rhythm. This experiment indicated that the experiment flow was working according to plan. However, the participant had not managed to learn how to accomplish the tasks that the instructional material taught. When asked about this, the participant responded that she had been so obsessively focused on maintaining the rhythm that she was not able to maintain attention on the learning material itself. This seemed to indicate that the intended secondary task (i.e., maintaining the rhythm) had become this participant's primary task. To avoid this problem in the main experiment, it was decided to add an additional dialog to the welcome interview. This additional dialog instructed participants to focus primarily on the instructional material. All participants that received instructions were told not to worry if they lost the rhythm; they should just try to maintain it as well as they could while focusing primarily on the instructional material. During this experiment, it was also noticed that the JIRA and Tempo sandbox server that Tempo provided was often sluggish and unpredictable. It was therefore decided to install JIRA and Tempo locally on the computer and use that installation in the experiment. This ensured quick loading times and relatively consistent performance.

5.3.1 The Main Experiment

A between-groups experiment was conducted in order to investigate the difference in cognitive load and intuitiveness between participants that received interactive video instruction, non-interactive video instruction, and textual instruction. Additionally, a fourth group was included to test whether or not instruction had any effect on participants' subjectively rated intuitiveness and cognitive load. The experiment involved two phases: a learning phase, and a product-use phase where participants interacted with the actual software. In the learning phase, participants had to learn how to perform two specific tasks in JIRA and Tempo Timesheets: creating an issue and logging work on that issue. In the product-use phase, participants had to interact with JIRA and Tempo Timesheets and perform the tasks taught in the instructional material. Test subjects (n = 40) were split into four groups (10 per group). The first group received an interactive video tutorial. The second group received a non-interactive video tutorial. The third group received written instructions in the form of a PDF document. The fourth group received no instructions, only a brief introduction to the experiment. The requirements for participating in the experiment were the following:
• Participants must not have any experience with JIRA or Tempo Timesheets.
• Participants need to have experience with computers and the Internet.
• Participants need to be able to understand written and spoken English fluently.
The main objective of the participant recruitment phase was to test as many participants of similar age as possible, because the literature review of intuitive interaction showed that age can have an effect on intuitive interaction, especially concerning effortlessness. Initially, the experiment was to be conducted in an Icelandic college or university.
This would have allowed for a large potential sample size with relatively low age variance between test subjects. Unfortunately, however, colleges and universities in Iceland started their examination period approximately three to four weeks earlier than Aalborg University, which meant that the experiment had to be conducted during the examination period. As a result, the available sample size was drastically reduced and the age variance within the sample increased, because the experiment facilitator needed to recruit each participant by planning a meeting on a specific day, at a specific time. The experiment was conducted at three different locations in Iceland. Each location was specifically chosen for its low ambient noise and low risk of distraction. Additionally, participants wore a pair of Bose QuietComfort 25 noise cancelling headphones6 during the learning phase in order to reduce the risk of unpredictable auditory distractions.

5.3.2 Experiment Flow

This subsection describes the experiment flow, which is illustrated in Figure 5.10.

Figure 5.10: Illustration of the experiment flow and the various measurements.

The experiment started with a welcome interview, where the participant's role in the experiment was explained. Participants that received instruction were asked if they had any learning disabilities that might affect their ability to read or watch the instructional material. If a participant reported being dyslexic, they would not receive textual instruction. Participants were also asked about their age and whether they had any prior experience with project management or work-log management systems such as JIRA or Tempo Timesheets. Participants that received video instructions (interactive and non-interactive) were instructed to learn at their own pace. This meant that test subjects were allowed to pause and skip forward and back in the video timeline as needed. Participants that received instruction were not allowed to interact with the product while they learned, to take notes or screenshots, or to in any way utilize the environment to offload information from working memory. During the product-use phase, they were not allowed to review the learning material. This forced participants to rely on what they remembered from the learning phase, or else on their problem-solving skills. When the facilitator had finished asking the questions and confirmed that the test subject fit the participation criteria, the facilitator moved on to familiarizing the subject with the subjective ratings (i.e., the cognitive load and intuitive interaction Likert scale questions). Once the facilitator had finished explaining the questions and received verbal agreement that the subject understood the rating system, the facilitator started to familiarize the subject with the rhythm method. This was done by letting the test subjects listen to an audio recording of the rhythm and imitate it while listening. When the test subject had become familiar with the rhythm, the facilitator stopped the audio, and the test subject was told to continue holding the rhythm on his or her own for a period of one minute to create a baseline. After the baseline measurement had been recorded, the facilitator notified the participant that this was the rhythm they would have to maintain during the entire learning phase, which would take approximately 7 minutes. The participant was asked whether he or she used PC or Mac computers regularly. If the participant used a PC, the facilitator changed the scroll direction of the mouse wheel to match the scrolling behavior on a PC. The facilitator then asked if the participant was ready to start the learning session. If they replied yes, the facilitator re-familiarized the test subject with the rhythm by letting them hear it for a couple of seconds before the learning session started. The facilitator started the screen recording and rhythm measurement software and signaled to the participant that the learning session had started and that they should start holding the rhythm. When participants finished the learning session, they were asked to stop the rhythm and retrospectively rate their level of mental effort and the difficulty of the learning material during the learning session. Next, the facilitator closed the learning material and opened a browser window with JIRA already open and signed in. For participants that received no instruction, this was where the experiment started. Participants received a piece of paper with two tasks written on it. Task 1 was to create a JIRA issue, and Task 2 was to log work on that issue using the work log calendar feature. Both of these tasks were taught during the learning session.

6 Link to information about the Bose QuietComfort 25 noise cancelling headphones: https://www.bose.com/en_us/products/headphones/over_ear_headphones/quietcomfort-25-acoustic-noise-cancelling-headphones-apple-devices.html
If participants seemed completely lost and unable to figure out how to perform a task, the facilitator gave them a small hint to direct them forward. When the test subjects had finished their tasks, they were asked to rate their subjective mental effort while using the product, and then to fill in the Intuitive Interaction questionnaire (INTUI). The final step of the experiment involved asking the participants the following questions:

1) In your opinion, how effectively did you manage to maintain the rhythm?
2) Did the rhythm have any effect on your ability to learn what was presented in the instructional material?
3) Do you have any comments on the instructional material?
4) How effective was the learning material at teaching you what you needed to know in order to create a JIRA issue and log work on that issue?
5) Was there anything that you did not understand, or that you were unsure of?
6) Do you have any comments?


CHAPTER 6

RESULTS

The purpose of this study was to investigate the connection between learning and intuition. CLT is a theory that has been widely used for the creation of learning materials that effectively utilize the limited cognitive processing capacity of the WM. This study further examined whether three specific learning methods have different effects on intuitiveness and cognitive load. The measurement methods used in this experiment were based on the assumption that intuition is conceptually equivalent to the automatic-processing assessment factor explained in CLT. Therefore, the results displayed in this chapter examine the correlation between cognitive load, measured as rhythm precision, and intuitiveness of the interface, measured on a 7-point Likert scale developed by the INTUI group. Significant correlations between cognitive load and intuitiveness of an interface might imply that cognitive load is a predictor variable for intuitive interaction. This might open up the field of intuitive interaction to allow empirically studied methods from CLT to be utilized for the creation of various instructional materials that facilitate intuitive interaction.

Initially, the experiment was to be carried out in a college or university in Iceland. This would have provided access to a relatively large pool of potential participants from a similar age group. Unfortunately, however, the schools in Iceland started their examination season in the same week as this experiment was planned. The experiment facilitator was therefore obliged to recruit participants by planning individual meetings based on each participant's schedule. As a result, fewer participants were able to take part in the experiment and the age variance was bound to increase. The main experiment was conducted over a span of 11 days in three different locations in Reykjavik, Iceland.
The first requirement for participation in this experiment was that individuals should have no prior experience with JIRA or Tempo. The second requirement was that individuals should have adequate experience with computers, meaning that they frequently used computers and the Internet in their day-to-day life. The third requirement was that individuals were able to read and listen to English instructions without much effort. In total, forty people participated in the experiment, with two participants dismissed from the results due to poor technical skills and/or inadequate language skills. The average age of participants was 33.4 years (SD = 15.4). The youngest participant was 14 years old, while the oldest was 66. The data gathered from each participant was processed using RStudio and Google Sheets. The following subsections describe the processing and analysis of the data gathered from the main experiment. All Likert scale questions from the INTUI questionnaire can be seen in Appendix B.

6.0.1 Rhythm Measurement Data

The Arduino program, explained in Section 5.2, was designed to output the rhythm measurement for each participant in a Comma Separated Values (CSV) compatible format. This data was cleaned up and modified to include the experiment phase (baseline or main experiment), test subject number, and instruction method. The data was then checked for errors and modified according to the procedure described by Park and Brünken (2015) in the article "The Rhythm Method: A New Method for Measuring Cognitive Load - An Experimental Dual-Task Study". The error checking and modification included counting and removing all data points that were over 2000 ms, which is the total duration of both rhythm components (i.e., short and long) combined. In Figure 6.1, the difference between the long and short rhythm components can be seen.

Figure 6.1: Illustration of the rhythm components. The numbers are in milliseconds.

Additionally, all data points that were below 250 milliseconds were removed, based on the methodology described by Park and Brünken (2015). The next part of the error checking involved distinguishing the threshold between a short (500 milliseconds) and a long (1500 milliseconds) rhythm component. Park and Brünken suggested grouping all rhythm components below 1000 milliseconds as 'short' and all rhythm components above 1000 milliseconds as 'long'. A scatterplot was made for each participant to analyze how a 1000 millisecond threshold affected the distribution of the rhythm components. The x-axis represents time in milliseconds and the y-axis represents the time between each rhythm component in milliseconds. The scatterplots illustrated that, for some test subjects, a considerable portion of the long rhythm components was being categorized as short components. This was clearly a problem that could have a significant effect on the results. It was therefore decided to test two alternative methods of specifying the threshold. In Figure 6.2, the rhythm data from the first test subject is shown with a 1000 millisecond threshold. In Figure 6.3, the same data is shown with a threshold specified using a k-means cluster analysis. As the k-means cluster plot shows, some long rhythm components are still categorized as short components. It was therefore decided to specify the threshold manually based on visual inspection of each individual scatterplot. The manually specified threshold is shown in Figure 6.4.

Link to RStudio website: https://www.rstudio.com/
Link to Google Sheets about page: https://www.google.com/sheets/about/

Figure 6.2: TS1 with a 1000ms threshold
Figure 6.3: TS1 with a threshold specified using k-means cluster analysis
Figure 6.4: TS1 with manually specified threshold with manual error correction
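The cleaning and threshold steps described above were carried out in R and Google Sheets. Purely as an illustration, the same filtering and a simple one-dimensional two-cluster (k-means-style) threshold could be sketched in Python as follows; the function names and sample intervals are hypothetical:

```python
# Illustrative sketch of the rhythm-data cleaning and k-means threshold
# (the actual analysis was done in R and Google Sheets).

def clean_intervals(intervals_ms):
    """Drop data points above 2000 ms (the total duration of both rhythm
    components combined) and below 250 ms, per Park and Brunken (2015)."""
    return [t for t in intervals_ms if 250 <= t <= 2000]

def kmeans_threshold(intervals_ms, iterations=20):
    """Two-cluster 1-D k-means: returns the midpoint between the two
    cluster centers, used as the short/long threshold. Assumes the data
    actually contains two tempo groups (otherwise a cluster may be empty)."""
    lo, hi = min(intervals_ms), max(intervals_ms)
    for _ in range(iterations):
        short = [t for t in intervals_ms if abs(t - lo) <= abs(t - hi)]
        long_ = [t for t in intervals_ms if abs(t - lo) > abs(t - hi)]
        lo = sum(short) / len(short)   # update the 'short' center
        hi = sum(long_) / len(long_)   # update the 'long' center
    return (lo + hi) / 2

# Hypothetical tap intervals (ms) for one participant
taps = [510, 1480, 530, 1600, 2150, 120, 495, 1550]
cleaned = clean_intervals(taps)        # removes 2150 and 120
threshold = kmeans_threshold(cleaned)  # falls between the two tempo groups
labels = ['short' if t < threshold else 'long' for t in cleaned]
```

A manually chosen threshold, as used for the final analysis, would simply replace the `kmeans_threshold` call with a per-participant constant.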

The data was then checked for errors a final time. The final part of the error checking involved counting and removing all data points where the same rhythm component showed up twice, or more, in a row. During this process, the data would sometimes suggest that using a static threshold was causing some data points to be registered as errors. To give an example of this, consider the following series of data points with a threshold of 800 milliseconds: Long 1700, Long 801, Long 1650, Short 790. Considering that an ideal sequence alternates between short and long, the second long rhythm component (801) is on the verge of being considered a short component. However, because the threshold is static, it is categorized as a long-component error, and is therefore removed. A decision was made to rectify this problem by manually sifting through the data with a spreadsheet formula, looking for any cases of duplicate rhythm data points. If two duplicate rhythm components were identified in a row, the second component was counted as an error and removed. If three or more duplicate rhythm components were identified in a row, the data analyst had to determine whether the error was simply caused by the participant, or by the effect of having a static threshold. This was decided by using the original threshold (1000 milliseconds), derived from the research of Park and Brünken (2015), as a reference. This means that if three long rhythm components were detected in a row, the data analyst would check if the second rhythm component was below 1000 milliseconds. If it was, the analyst would re-specify that data point as a short rhythm component. The thresholds and scatterplots produced for each participant can be seen in Appendix F.1.
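The duplicate-run check described above was performed manually in a spreadsheet. As an illustrative sketch only, the same rules could be automated roughly as follows; the function name is hypothetical, and the handling of runs longer than three generalizes the rule stated for runs of exactly three:

```python
# Illustrative sketch of the duplicate-component error check
# (the actual check used a spreadsheet formula and manual inspection).

def correct_duplicate_runs(components, ref=1000):
    """components: list of (label, duration_ms) tuples in tap order.
    A lone second duplicate is removed as an error; in runs of three or
    more, a 'long' component under the 1000 ms reference threshold is
    re-specified as 'short' instead of being removed."""
    out = []
    i = 0
    while i < len(components):
        label = components[i][0]
        j = i
        while j < len(components) and components[j][0] == label:
            j += 1
        run = components[i:j]
        if len(run) <= 2:
            # single component, or two duplicates: keep only the first
            out.append(run[0])
        else:
            # three or more in a row: re-check middle components against
            # the 1000 ms reference before discarding them
            out.append(run[0])
            for lbl, dur in run[1:-1]:
                if lbl == 'long' and dur < ref:
                    out.append(('short', dur))  # re-specify as short
            out.append(run[-1])
        i = j
    return out

# The example from the text: three 'long' components in a row, where the
# middle one (801 ms) is below the 1000 ms reference threshold.
seq = [('long', 1700), ('long', 801), ('long', 1650), ('short', 790)]
fixed = correct_duplicate_runs(seq)
```

Applied to the example, the 801 ms component is re-specified as short, restoring the alternating short/long sequence.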


6.0.2 Results from the Rhythm Measurement Data

In the original article, Park and Brünken (2015) used rhythm precision as an assessment of cognitive load. The article defined rhythm precision as the mean of each participant's baseline rhythm measurements minus the mean of the instruction-phase rhythm measurements. The article also suggested performing two separate rhythm precision calculations, one for each rhythm component. The resulting rhythm precision data was then checked for normal distribution to evaluate whether parametric or non-parametric tests were applicable. In Figure 6.5 and Figure 6.6, the distribution of the data is compared to a normal distribution. The data was also tested for normality using a Shapiro-Wilk test. The results for the short rhythm component indicate that the data is not normally distributed (W = 0.9213, p-value = 0.03291). On the other hand, the long rhythm component appears to be normally distributed (W = 0.95627, p-value = 0.2652).
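The normality testing was done in R with the Shapiro-Wilk test; the rhythm precision calculation itself is a simple difference of means and can be sketched in Python for illustration (all names and sample values are hypothetical):

```python
# Illustrative sketch of the rhythm precision calculation per
# Park and Brunken (2015): baseline mean minus instruction-phase mean,
# computed separately for each rhythm component.
from statistics import mean

def rhythm_precision(baseline, learning):
    """baseline/learning: dicts mapping component -> list of tap
    intervals (ms). Returns precision per component."""
    return {comp: mean(baseline[comp]) - mean(learning[comp])
            for comp in ('short', 'long')}

# Hypothetical tap intervals (ms) for one participant
baseline = {'short': [500, 520, 480], 'long': [1500, 1520, 1480]}
learning = {'short': [540, 560, 520], 'long': [1400, 1450, 1350]}
precision = rhythm_precision(baseline, learning)
# short: 500 - 540 = -40 (slower during learning)
# long: 1500 - 1400 = 100 (faster during learning)
```

A value near zero indicates the participant held the learned tempo; larger absolute values indicate drift away from the baseline, taken as a sign of load on working memory.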

Figure 6.5: Quantile comparison plot of short rhythm component’s rhythm precision

Figure 6.6: Quantile comparison plot of long rhythm component’s rhythm precision

The data for the long rhythm component was analyzed using a one-way ANOVA test with pairwise comparison of means. The purpose of the ANOVA test was to identify if any significant difference could be seen between learning methods, based on the long rhythm component data. In Figure 6.7 it can be seen that the confidence interval overlaps for all three learning methods, therefore there is no significant difference between the long rhythm component with regards to all three learning methods (F = 0.594, p-value = 0.559). A follow-up test was performed on the short rhythm component using Kruskal-Wallis one-way analysis test, which is a non-parametric equivalent of the one-way ANOVA test. This test also showed no significant differences between the learning methods (X2 = 3.93, p-value = 0.1402). A suggestion was proposed, based on the scatterplots that were produced for the analysis of the rhythm component threshold. In the scatterplot, one could see that some participants would consecutively change their rhythm tempo throughout the learning session. In Figure 6.4, the long rhythm component can be seen slowing down at first, then speeding up towards the end of the session. This raises the question of whether or not using the mean might actually produce misleading results due to the fact that the rhythm generally changes over time? It was therefore decided to include an additional calculation of variance, hereby referred to as rhythm variance, in order to rectify this issue. The rhythm variance was calculated using the Table of 44

Figure 6.7: Pairwise comparison of means between learning methods

Statistics functionality in R Commander3 . ‘Test subject’, ‘Rhythm Type’ and ‘Experiment phase’ (i.e., baseline or learning session) selected as a factors, with ‘Time between taps’ as a response variable. The program was then set to calculate the variance. This was the same method used to calculate the mean, for the rhythm precision calculations, instead of selecting variance; the program was set to calculate the mean. Four separate Paired Wilcoxon Tests were computed to test if rhythm variance might be a reasonable addition to the analysis of the rhythm data. All four tests compared the difference between the baseline and the learning session. Two tests (one for each rhythm component) were computed based on the rhythm precision and the other two tests were computed based on the rhythm variance. The results from all four tests can be seen in Table 6.1.
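The Table of Statistics step can be pictured as a group-by aggregation over the tap data. A hypothetical Python equivalent (the thesis used R Commander; all names and sample rows here are illustrative) might look like:

```python
# Illustrative sketch of R Commander's "Table of Statistics": group the
# 'Time between taps' response by the three factors and apply a statistic.
from statistics import mean, variance
from collections import defaultdict

def table_of_statistics(rows, stat):
    """rows: (test_subject, rhythm_type, phase, interval_ms) tuples.
    Returns {(subject, rhythm_type, phase): stat(intervals)}."""
    groups = defaultdict(list)
    for subject, rhythm_type, phase, interval in rows:
        groups[(subject, rhythm_type, phase)].append(interval)
    return {key: stat(vals) for key, vals in groups.items()}

# Hypothetical tap data for one participant
rows = [
    ('TS1', 'short', 'baseline', 500),
    ('TS1', 'short', 'baseline', 520),
    ('TS1', 'short', 'learning', 540),
    ('TS1', 'short', 'learning', 580),
]
means = table_of_statistics(rows, mean)          # input to rhythm precision
variances = table_of_statistics(rows, variance)  # input to rhythm variance
```

Swapping `mean` for `variance` mirrors the procedure described above: the same grouping is used for both the rhythm precision and the rhythm variance calculations.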

          Comparison                          V     p-value
MEAN      Long_Baseline and Long_Learning     234   0.7332
MEAN      Short_Baseline and Short_Learning   211   0.8983
VARIANCE  Long_Baseline and Long_Learning     435   3.725×10⁻⁹
VARIANCE  Short_Baseline and Short_Learning   424   2.049×10⁻⁷

Table 6.1: Paired samples Wilcoxon signed rank test, comparing the baseline to the learning phase.
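The V statistics in Table 6.1 come from R's paired Wilcoxon signed-rank test. As an illustrative sketch only (ignoring ties in the absolute differences and omitting the p-value computation), the V statistic itself can be reproduced in Python:

```python
# Illustrative sketch of the paired Wilcoxon signed-rank statistic V:
# rank the absolute differences (zeros dropped) and sum the ranks of
# the positive differences. Tie handling is omitted for simplicity;
# the p-values in Table 6.1 came from R's wilcox.test.

def wilcoxon_v(x, y):
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0] * len(diffs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return sum(r for r, d in zip(ranks, diffs) if d > 0)

# Hypothetical paired measurements: differences are 1, -2, 3,
# so the positive differences carry ranks 1 and 3, giving V = 4.
v = wilcoxon_v([5, 1, 9], [4, 3, 6])
```

If baseline and learning-session values were identical, all differences would be dropped and V would be 0.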

Based on the results in the table, the rhythm variance produces a far lower p-value than the rhythm precision (i.e., the mean). This indicates that the difference between the baseline mean and the learning-session mean is not very large, whereas the difference in variance between the baseline and the learning session is highly significant. This might suggest that rhythm variance is a more sensitive measurement of cognitive load than rhythm precision. To investigate this, it was decided to include the rhythm variance in all following analyses. It is theorized that participants' subjective ratings of cognitive load correlate with rhythm variance, as well as with rhythm precision. This correlation is investigated in the following section.

Link to R Commander website: http://www.rcommander.com/

6.0.3 Results from the Intuitive Interaction Questionnaire and Subjective Ratings of Cognitive Load

The Likert scale questions were first analyzed using the Kruskal-Wallis test by rank, a one-way ANOVA equivalent for non-parametric data, to check whether any of the Likert scale questions produced significant results with regard to the learning method. The purpose of this analysis was to identify which Likert questions indicated a difference between the learning methods. The results from the Kruskal-Wallis test are shown in Figure 6.8. Based on an alpha level of p < 0.05 and three degrees of freedom, we get a critical χ² value of 7.815.

H0: The probability distribution of the four instructional methods is identical.
Ha: There is a difference between at least two of the four learning methods.

This means that if the χ² value of a Likert scale question is greater than 7.815, we can reject H0. The Likert scale questions that rejected H0 were investigated further.
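The tests above were run in R. For illustration only, the Kruskal-Wallis H statistic and the decision against the critical χ² value of 7.815 can be sketched as follows; the grouped data are hypothetical and the tie correction is omitted:

```python
# Illustrative sketch of the Kruskal-Wallis test statistic (no tie
# correction): H = 12 / (N(N+1)) * sum(n_i * rbar_i^2) - 3(N+1),
# where rbar_i is the mean rank of group i over the pooled sample.

def kruskal_wallis_h(groups):
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes no ties
    n = len(pooled)
    s = 0.0
    for g in groups:
        rbar = sum(rank[v] for v in g) / len(g)
        s += len(g) * rbar ** 2
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

CRITICAL_CHI2 = 7.815  # chi-square quantile, df = 3, alpha = .05

# Hypothetical ratings from the four instructional conditions
h = kruskal_wallis_h([[1, 2], [3, 4], [5, 6], [7, 8]])
reject_h0 = h > CRITICAL_CHI2  # here H = 6.67 < 7.815, so H0 is kept
```

With four conditions there are three degrees of freedom, which is where the 7.815 cutoff used in the analysis comes from.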

Figure 6.8: Results from the Kruskal-Wallis test by rank, a one-way ANOVA equivalent for non-parametric data.

In Figure 6.8, it can be seen that four questions managed to reject the null hypothesis. A fifth question (E_03) was added to the following analysis for the sake of curiosity. Although Figure 6.8 suggests that there is a difference between the four learning methods, it does not specify which method was considered more, or less, intuitive. In order to investigate this effect, five boxplots were created based on the questions that rejected H0 in the Kruskal-Wallis analysis. The boxplots illustrate the answers from the Likert scale on the x-axis and the four different learning methods on the y-axis. Although there is considerable overlap between the learning methods, the general tendency seems to be that the interactive and non-interactive video instructions are rated slightly more to the right than the other methods, while the textual and no-instruction conditions are rated slightly more to the left. Generally, the left side of the x-axis represents rational thinking, while the right side represents intuitive thinking (see Figure 6.9). The exception to this is Figure 6.14, where the x-axis is reversed (intuitive on the left and rational on the right).

Figure 6.9: An illustration of the Likert questions with the rational/reasoning anchor on the left side and the intuitive anchor on the right.

Figure 6.10: Boxplot for E_02
Figure 6.11: Boxplot for E_03
Figure 6.12: Boxplot for G_01
Figure 6.13: Boxplot for G_03
Figure 6.14: Boxplot for G_04 (reversed scale)

All Likert questions, including the non-significant ones, were also tested using a multiple Kruskal-Wallis pairwise comparison function. The purpose of this test was to provide a more comprehensive overview of all of the Likert questions, so that any patterns or tendencies could be spotted visually. The results from that test can be seen in Figure 6.15. Each cell displays an observed difference value between the instructional methods (on the left) and each particular question (on the top). A significance level is reached when the observed difference surpasses the critical difference value (seen to the right). This means that a higher value indicates greater significance, while lower values indicate little or no significant difference between the two learning methods. The figure uses color coding to indicate significance levels. Dark red indicates a p-value of less than 0.05; dark orange indicates a p-value between 0.05 and 0.1; yellow indicates a p-value of 0.1 to 0.4. The gray color marks the lowest observed difference value, indicating little or no difference between learning methods. The figure seems to indicate that the instructional methods are colored gray (i.e., lowest value) more frequently than no-instruction.

The multiple Kruskal-Wallis pairwise comparison function "kruskalmc" is part of the R package "pgirmess".

Figure 6.15: Kruskal-Wallis multiple comparison test between all intuition-measuring Likert scale questions and the learning methods. The formula that was used to achieve these results in R was: kruskalmc(Likert_Question, Instructional_Methods)
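kruskalmc compares observed differences in mean ranks against a critical difference. A rough Python sketch of that comparison, following the Siegel & Castellan formulation that kruskalmc implements, is shown below; the z quantile is hardcoded as an approximation for four groups at α = .05, and all data and names are hypothetical:

```python
# Illustrative sketch of a kruskalmc-style pairwise comparison after
# Kruskal-Wallis (Siegel & Castellan): compare |mean rank difference|
# against z * sqrt(N(N+1)/12 * (1/n_i + 1/n_j)).
from math import sqrt

# Approximate normal quantile for alpha/(k(k-1)) with k = 4 groups,
# alpha = .05 (i.e., 0.05/12) -- an assumption for this sketch.
Z_ALPHA = 2.638

def rank_means(groups):
    """Mean rank per group over the pooled sample (assumes no ties)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    return [sum(rank[v] for v in g) / len(g) for g in groups]

def critical_difference(n_total, n_i, n_j, z=Z_ALPHA):
    return z * sqrt(n_total * (n_total + 1) / 12 * (1 / n_i + 1 / n_j))

# Hypothetical ratings from the four instructional conditions
groups = [[1, 2], [3, 4], [5, 6], [7, 8]]
means_r = rank_means(groups)        # [1.5, 3.5, 5.5, 7.5]
obs = abs(means_r[0] - means_r[3])  # observed difference: 6.0
crit = critical_difference(8, 2, 2) # critical difference: about 6.46
significant = obs > crit            # not significant at these tiny n
```

This mirrors the reading of Figure 6.15: a cell is colored by how close its observed difference comes to the critical difference on the right.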

6.0.4 Correlation Between Rhythm Data and Likert Scale Ratings

To investigate the correlation between the rhythm data and the subjective ratings of cognitive load, a Spearman correlation matrix analysis was performed. The purpose of this analysis was to investigate whether cognitive load measured via the rhythm method correlates with the subjective ratings of cognitive load and intuitive interaction. The Spearman correlation test is a non-parametric measure of rank correlation, which is appropriate for ordinal variables such as a Likert scale. The analysis was made possible by a correlation matrix formula called "corstars", obtained from sthda.com. The results from the correlation matrices are shown with significance levels ranging from 0.001 to 0.1. The closer a coefficient is to 1, the stronger the correlation. Significance levels are marked with stars: p < .001 "***", p < .01 "**", p < .05 "*", p < .1 ".".

The first correlation matrix investigated the correlation between the Likert scale questions from the previous section and the rhythm precision, along with the rhythm variance. The purpose of this analysis was to investigate whether the subjectively rated intuitiveness of a product is correlated with the objectively measured cognitive load during the learning phase. Table 6.2 shows that none of the questions produces a significant value for either rhythm precision or rhythm variance. However, E_03 is significantly correlated with age. This might indicate that age was an influencing factor in this experiment.

Link to the article that explains the formula that was used to create the correlation matrix: http://www.sthda.com/english/wiki/elegant-correlation-table-using-xtable-r-package

            Age     E_02     E_03    G_01     G_03    G_04   Mean_Short  Mean_Long  VAR_Short
E_02       -0.12
E_03       -0.34*   0.70***
G_01       -0.17   -0.12    -0.03
G_03       -0.02   -0.05    -0.10    0.56***
G_04        0.08    0.39*    0.31.  -0.39*   -0.28.
Mean_Short -0.01    0.00    -0.10   -0.08     0.14    0.25
Mean_Long   0.11    0.17     0.00   -0.24    -0.20    0.26    0.21
VAR_Short   0.27   -0.17    -0.18    0.08    -0.09   -0.21   -0.37*      0.04
VAR_Long    0.13    0.18     0.14   -0.08    -0.07    0.05   -0.30      -0.19       0.15

Table 6.2: Correlation matrix analysis of the five significant questions from Section 6.0.3 and rhythm precision, along with rhythm variance.
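The correlation matrices were produced in R with the corstars helper. As an illustrative sketch (names hypothetical), Spearman's rho can be computed by ranking both variables and taking the Pearson correlation of the ranks; the significance stars require a separate p-value computation and are omitted here:

```python
# Illustrative sketch of Spearman's rank correlation (no tie handling;
# the thesis used the R "corstars" helper, which also attaches
# significance stars -- omitted here).

def spearman_rho(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order, start=1):
            r[i] = pos
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Any monotonically increasing relation yields rho = 1, which is why
# Spearman is appropriate for ordinal Likert data.
rho = spearman_rho([1, 2, 3, 4], [10, 20, 30, 40])
```

Each cell of Tables 6.2-6.4 is one such rho between a pair of variables (e.g., a Likert question and a rhythm measure).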

A follow-up analysis was performed with all of the remaining Likert scale questions that did not produce significant results in the previous analysis. These questions were tested in relation to age, rhythm precision, and rhythm variance. The purpose of this analysis was to investigate whether any patterns or statistical anomalies could be seen. The results from the analysis are shown in Table 6.3. Based on Table 6.3, it is noticeable that age seems to be correlated with DIFF (i.e., learning material difficulty). Additionally, it can be seen that age is somewhat correlated with four other Likert scale questions, which are associated with mental effort and magical experience. This raises the question of what effect this may have had on the overall results of this experiment. To investigate this question, an age distribution plot was created. In Figure 6.16, the age distribution difference between the four experiment groups can be seen. Based on this, there is a considerable age distribution difference between the groups. This is discussed further in Chapter 7. On a different note, Table 6.3 seems to show a significant correlation between some of the Likert questions and three of the rhythm measures. Mean_Short shows a significant correlation to X_04 and minor correlations to X_01 and V_02. Mean_Long shows a significant correlation to X_03, with minor correlations to ME2, G_02 and V_02. VAR_Short shows significant correlations to G_02 and X_02, with a minor correlation to E_01. VAR_Long did not show any correlation to any Likert question. Additionally, it can be seen that age shows no correlation to either rhythm precision or rhythm variance. To investigate whether the correlations shown in Table 6.3 are a result of actual correlation, or whether they may have been caused by statistical chance, a similar correlation matrix was created.
However, instead of testing correlation between factors that are compatible with intuitive interaction and CLT, it was decided to investigate whether any correlations appear where they theoretically should not. Therefore, a correlation matrix was created which compares the results from the subjective ratings to the baseline rhythm measurements. Any correlation shown in this case would indicate a relation between the baseline rhythm data and the subjective measurements of intuitive interaction and cognitive load. Note that the baseline was measured before participants had gone through the learning session or even seen the product interface. In Table 6.4, a correlation can be seen between the baseline rhythm measurements and at least three of the Likert questions.


Figure 6.16: Age distribution between the groups

            Age     ME      DIFF    ME2      E_01     E_04     E_05     G_02    X_01     X_02     X_03    X_04    INT_01  V_01     V_02     V_03
ME          0.23
DIFF        0.40*   0.48**
ME2         0.13    0.28    0.41*
E_01       -0.30.  -0.40*  -0.53** -0.61***
E_04        0.30.   0.32.   0.58*** 0.80*** -0.81***
E_05        0.31.   0.38*   0.44*   0.69*** -0.63***  0.72***
G_02       -0.09    0.04    0.08   -0.19     0.19    -0.23    -0.22
X_01        0.05    0.25   -0.01   -0.17     0.11    -0.12    -0.05    -0.06
X_02       -0.19   -0.07   -0.28   -0.28.    0.27    -0.35*   -0.24     0.17   -0.35*
X_03        0.10    0.13    0.15    0.13    -0.14     0.10     0.03     0.04   -0.08     0.18
X_04        0.28.   0.17    0.11    0.09     0.03     0.07     0.20    -0.20    0.56*** -0.57*** -0.23
INT_01      0.18    0.17    0.49**  0.23    -0.30.    0.35*    0.46**   0.18    0.18    -0.26     0.09    0.03
V_01       -0.12   -0.32.  -0.36.  -0.62***  0.61*** -0.68*** -0.58***  0.11    0.16     0.11    -0.29.  -0.06   -0.12
V_02        0.14    0.02    0.31.   0.39*   -0.26     0.41*    0.47**  -0.22    0.11    -0.12     0.18    0.13    0.47**  -0.30.
V_03       -0.17   -0.14   -0.28   -0.34*    0.35*   -0.42**  -0.45**   0.17    0.28.   -0.02    -0.19    0.05   -0.31.   0.61*** -0.47**
Mean_Short -0.01   -0.29   -0.17   -0.12    -0.06     0.02     0.00    -0.02   -0.32.    0.26     0.01   -0.40*  -0.03    0.02    -0.33.   -0.21
Mean_Long   0.11   -0.10   -0.04   -0.36.   -0.04    -0.11    -0.25     0.34.  -0.18    -0.16    -0.39*  -0.09   -0.01    0.31    -0.34.    0.12
VAR_Short   0.27    0.05    0.07    0.12    -0.34.    0.22     0.09    -0.44*  -0.09    -0.39*    0.00    0.18   -0.05    0.00     0.22     0.16
VAR_Long    0.13    0.21   -0.02   -0.19     0.23    -0.20    -0.20    -0.12    0.13     0.02     0.15    0.10   -0.04    0.06    -0.06    -0.05

Table 6.3: Correlation matrix analysis of all of the remaining Likert questions with age, rhythm precision and rhythm variance included. Red cells indicate some correlation between age and the Likert questions. The green cells indicate some correlation between the Likert questions and rhythm precision and rhythm variance.

This effect was not seen for the five Likert questions in Figure 6.8. This seems to indicate that an alternative factor, such as age distribution, may have had a significant effect on the results.

          Short_Baseline  Long_Baseline  VAR_Short_Baseline  VAR_Long_Baseline
Age        0.16            0.05          -0.30               -0.14
ME        -0.02            0.03          -0.05               -0.17
DIFF       0.07           -0.10          -0.14                0.10
ME2       -0.01           -0.06          -0.05                0.16
E_01      -0.18           -0.06          -0.02               -0.37*
E_04       0.06           -0.15          -0.08                0.30
E_05      -0.11           -0.17           0.08                0.22
G_02       0.13            0.56**        -0.16               -0.12
X_01       0.05           -0.05           0.04               -0.12
X_02      -0.27            0.06           0.12                0.04
X_03      -0.08            0.01           0.02               -0.21
X_04       0.25           -0.15          -0.21               -0.15
INT_01    -0.33.          -0.09          -0.11                0.02
V_01      -0.24            0.01          -0.12               -0.06
V_02      -0.25           -0.43*         -0.15                0.09
V_03       0.27            0.30          -0.02                0.05

Table 6.4: Correlation matrix between the baseline measurements and the remaining Likert questions. Any correlation seen here indicates that there is a relation between the baseline rhythm measurement (before participants saw the UI or learning material) and the subjective ratings of intuitive interaction (after participants had gone through the learning session and used the actual software).

6.0.5 Frequently Observed Behavior During Task Performance

• Most participants re-read the text instructions. However, none of the participants re-watched the video instructions.
• Many participants attempted to synchronize their mouse clicks with the rhythm (tap, tap, and click).
• Sometimes, the rhythm precision was briefly interrupted during interaction with the learning material. This was not observed all of the time and varied greatly between participants. Some participants simply stopped holding the rhythm during interaction, although they were instructed to keep holding the rhythm constantly.
• Some participants had major problems figuring out how to do the tasks. However, these same participants subjectively reported having no problems performing the tasks, and often reported that the tasks were rather easy to perform.
• Most participants who received no instruction had no idea how to perform the tasks. They clicked on practically all of the buttons on the page and interacted with most of the elements. One participant even created a page widget and attempted to log work through the widget.
• Observations of the participants indicate that participants who received instruction had a much easier time performing the tasks. Although some had forgotten various parts of the instructional material, most of them only needed subtle hints like: "Tempo makes this functionality possible".
• In the interactive video instruction, there were participants who attempted to write in the fields shown in the video while the interactive video was playing. When asked about this behavior, they answered, "well, I can click on the video, so I just assumed I could write in it also".

• Although participants in the interactive video condition clicked on the interface in a way similar to using an actual interface, some of them were unable to recall the placement of the create button and the Tempo Timesheets functionality. This seems to indicate that too little emphasis was placed on these two buttons. It may also be that new information was presented too rapidly, so participants did not have the chance to learn the placement of this functionality.
• Many participants were nodding their head while holding the rhythm.

6.0.6 Summary from the Exit Interview

All answers from the exit interview questions can be seen in Appendix F.3.1.

• Participants who received text instructions wanted to use the software while reading the text, or to switch between them freely.
• A few participants noted that musical experience should be beneficial for rhythm precision, especially for drummers. Although the exit interview did not specifically ask about musical experience, participants sometimes reported it. One participant with drumming experience noted that the rhythm did not disturb his learning, but that it was a conscious activity.
• Many participants reported that they synchronized their interactions with the rhythm in some way. Some participants said they interacted in between rhythm components; some said they interacted and tapped the rhythm simultaneously. One participant noted that she read the text in sync with the rhythm.
• Most participants noted that performing any movement-related task (i.e., moving the mouse, scrolling, or clicking) affected the rhythm for a brief moment. The degree to which the rhythm was affected seemed to vary from person to person. Some had major difficulty with simultaneous motor movements, while others did not.
• Some participants noted that while focusing on the instruction, the rhythm precision was affected. On the other hand, while focusing on the rhythm, their comprehension of and attention to the instruction was affected.
• When asked about their preferred method of learning how to use new software, participants showed a tendency to prefer having a person teach them (i.e., a mentor). If a mentor was not an option, participants preferred to try it out for themselves. If they were unable to learn how to use the software through trial and error, most of them would then look for instructions.
• Preference for video or textual instruction was somewhat varied. Most seemed to prefer video instructions. However, this question was not included in the exit interview for all participants, so it is not possible to state anything with certainty. It is also plausible that this preference is correlated with age, because the older generation did not have access to video services such as YouTube when they grew up.
• Most participants reported that the act of maintaining the rhythm was a conscious activity that required attention. One participant even noted that it demanded considerable cognitive resources, stating that almost half of his attention was dedicated to the rhythm.
• Although some participants had a considerably difficult time completing the tasks during product use, most of them reported that things went pretty well and that they did not have any problems understanding the system.


CHAPTER 7

DISCUSSION

The present study addressed the question of whether instruction leads people to rate a software interface as more intuitive than no instruction does. The experiment also investigated whether three different types of instructional material have different effects on subjective ratings of intuitive interaction, as well as on objective measurements of cognitive load. The following discussion reviews the various limitations and decisions related to this study.

Interactive Videos in the Experiment

A major limitation of non-interactive instructional videos is that the user is either required to observe the instruction passively, or to micro-manage activities such as pausing the video while browsing back and forth between the software interface and the video tutorial. Another limitation of non-interactive video tutorials is the inability to skip efficiently through the video in order to find the specific information that the viewer wishes to learn. The viewer is often required to maintain attention while listening to redundant, irrelevant, or already known information. According to CLT, this increases extraneous load through the split-attention and redundancy effects, and may even frustrate the learner. These observations led to the investigation of interactive videos as a means to accelerate automatic schema construction (i.e., intuition). The practical benefit of utilizing interactive videos for instructional purposes is that they enable the viewer/user to "interact with the software interface without using the actual software". Interactive videos should, in theory, allow the user to engage in active problem solving while also receiving guided auditory instructions. Additionally, this might reduce the user's need to switch constantly between the actual software UI and the instructional material, because the instructional material itself is a one-to-one representation of the actual software, thereby exploiting the multi-modality effect and counteracting the split-attention effect to reduce extraneous cognitive load and facilitate schema construction.

The limitation of the interactive video created for this project was that it was never intended to facilitate problem solving. Its purpose was to investigate whether interactive videos can accelerate the construction of automatic schemas (i.e., intuition) by allowing the user to learn actively from personal experience. The main functionality of the interactive video is shown in Figure 7.1, which illustrates the Call To Action (CTA) video clip. A CTA video clip simply prompts the user to take action on some information presented on the screen; this information could theoretically be anything within the domain of graphical user interfaces. Figure 7.1 also illustrates the computation logic that was triggered when participants made an incorrect navigation interaction in the experiment: the video paused and remained paused until the participant noticed. At that point, the participant could either ask the facilitator for help or resolve the issue by clicking on the video to resume playback. Judging from their verbal reactions, none of the participants seemed particularly affected by this interaction behavior.

Figure 7.1: Use of interactive video elements in this study.
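The pause-on-error behaviour described above can be sketched as a tiny state machine. This is an illustrative assumption only; the class, event names, and targets are invented for the example, and the actual study implemented this with interactive video clips rather than code:

```python
# Minimal sketch (hypothetical names) of the pause-on-error behaviour
# used in this study's interactive video (Figure 7.1).

class InteractiveVideo:
    def __init__(self):
        self.playing = True

    def on_click(self, target: str, expected: str) -> None:
        if self.playing:
            if target != expected:
                # Incorrect navigation: pause silently and wait.
                self.playing = False
        else:
            # Any click while paused resumes playback.
            self.playing = True

video = InteractiveVideo()
video.on_click("settings", "worklog_calendar")  # wrong target: video pauses
video.on_click("anywhere", "worklog_calendar")  # click on video: resumes
```

The key design property is that the video gives no explicit feedback; the participant has to notice the pause themselves, which is exactly the weakness the improved CTA logic below is meant to address.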

Based on this arguably ineffective interaction behavior, an improved CTA logic is hereby proposed that should help facilitate schema construction through repetition, practice, and the inclusion of additional information. The proposed CTA improvement, seen in Figure 7.2, can essentially be understood as an 'if' statement in programming logic. It goes as follows: when a user makes a navigation error, the user is told that the navigation was incorrect. The user is then provided with a multi-modal explanation that illustrates what they did incorrectly and how a correct interaction should be performed. This way, the navigation error can possibly be avoided in the future. According to CLT and intuitive interaction research, construction of automatic schemas (i.e., intuition) is achieved through repetition and practice. Test subject 36 reported that he preferred receiving instructions in multiple steps: first an easy example to work through, then a more challenging example, and after that an even more challenging one. Interactive videos could, in theory, adapt to the user's current level of knowledge through a combination of short multi-modal instructional sections interleaved with knowledge checks designed to test whether the user has understood the instruction. If the user fails to perform a task correctly, the instructional material should either elaborate on the task by providing additional information, or direct the user to an alternative form of information that further explains what is required to complete the task.

The use of interactive video logic elements in this project was inherently limited by the experiment design. The purpose of this experiment was to investigate the use of three different

Figure 7.2: Improved use of interactive video elements to promote active learning. This proposed improvement can essentially be described as an 'if' statement in programming logic: if the navigation is correct, play the next clip; else, provide the user with additional information.

learning methods. It was therefore paramount to the validity of the project that the information contained and displayed in all of the instructional methods was as similar as possible. If, however, we consider a similar experiment in which the interactive video utilizes problem-solving logic such as the one illustrated in Figure 7.2, the additional information provided by the problem-solving logic would have to be added to the other instructional methods as well to keep them comparable. This would add a considerable amount of learning material that all participants would be required to learn, except for those who made no navigation errors in the interactive video. This is a fundamental difference between interactive and non-interactive videos: interactive instructional videos are not meant to be watched in their entirety. The fundamental functionality of an interactive video is to provide the viewer/user with the ability to choose. Numerous participants reported that their preferred method of learning how to use a software interface was to ask someone else to teach them. Theoretically, interactive videos are limited only by technological development and human imagination. One might contemplate a fully interactive simulated software experience with an integrated artificial personal assistant that predicts the optimal time to provide the user with additional information to accelerate schema construction. However, creating such an advanced instructional tutorial might eventually become more complex than the actual software system that it is trying to simulate. It is therefore paramount to identify the limitations of this technology in order to maximize its instructional applicability without the production cost becoming prohibitive.
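The branching logic of the proposed CTA improvement (Figure 7.2) can be sketched as follows. This is a hypothetical illustration: the function, its parameters, and the clip names are invented for the example and do not correspond to any actual implementation.

```python
# Sketch of the improved CTA 'if' logic from Figure 7.2 (hypothetical
# names). A correct interaction advances to the next instructional clip;
# an incorrect one branches to a multi-modal explanation clip that shows
# what went wrong and how the correct interaction should be performed.

def handle_navigation(clicked_target: str, expected_target: str) -> str:
    """Decide which video clip to play after a user interaction."""
    if clicked_target == expected_target:
        return "next_instruction_clip"
    else:
        # Branch: explain the error, demonstrate the correct action,
        # then let the user retry. This is the 'additional information'
        # path that non-interactive videos cannot offer.
        return "error_explanation_clip"

# Example: the user clicks the wrong menu item.
print(handle_navigation("user_timesheet", "worklog_calendar"))
# -> error_explanation_clip
```

The design choice here is that only users who actually make an error receive the extra material, which is why adding equivalent content to the linear text and video conditions would have made the conditions incomparable.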

Instructional Material

The facilitator noted an interesting observation during the experiment. The majority of participants that received textual instruction chose to read the instructions a second time after reading them once. What made this behavior interesting was that none of the participants that received interactive or non-interactive video instruction chose to re-watch any section of the videos. This observation was validated through review of the screen recordings, and it raises the question of why readers chose to re-read the text while viewers chose not to re-watch the videos. One might argue that reading does not just involve looking at words to absorb the information contained in the text: reading requires the reader to create a phonological representation based on the visual information of the text. This means that careful reading of important instructional material is not an automatic process; it requires active engagement on the reader's part. This active engagement might therefore help to explain why participants that received video instruction did not re-watch the videos. Considering that watching a video is a more passive process, it seems plausible that viewers simply did not notice that they had stopped noticing what was going on in the video. A video does not pause when the viewer loses attention. If, for example, a person's attention drifts towards some other activity for a brief moment, the person might not be aware of it. As a result, the person continues watching the video without realizing that critical information in the learning material was not fully processed due to the attention drift. The option to re-watch any section of the video (i.e., to skip forward or back using the video timeline) was clearly explained and demonstrated to all participants in both the interactive and non-interactive groups.

Limitations of the Experiment

A major limiting factor for the analysis of the experimental data was that the entire sample comprised 38 participants. This meant that the groups that received non-interactive video instruction and no instruction contained only 9 participants each. Considering that three participants in the non-interactive group were over 50 years old, while only one participant in the textual instruction group was over 50, the results from the experiment may not be valid. Given that a single participant in a good mood who recently had his morning coffee accounts for approximately 10 percent of a group's data, it is abundantly clear that a considerably larger sample size is needed for an experiment of this scale.

Another limiting factor was that some of the questions asked in the INTUI questionnaire had no compatible translation in Icelandic. An example is the term 'intuition', whose Icelandic translation has a dual meaning: it also corresponds to the English 'insight', which arguably does not convey the same concept.

Another point concerns the ecological validity of the learning situation presented in the experiment. Some participants noted that their preferred method of learning how to use software involved reading, or watching, instructional material while continuously switching back and forth between the learning material and the actual software. Other participants noted that their preferred method was to try out the software first, and to resort to instruction only if they could not figure out how it operates. For the purpose of this project, it was decided to separate the learning phase from the software interaction phase in order to eliminate the split-attention effect as a plausible influence on extraneous cognitive load.

The majority of the participants were not asked about their musical experience.
However, the facilitator noticed that some participants seemed to be much less affected by the rhythm compared to others; in such cases, the facilitator asked those participants whether they had any musical experience. It is entirely possible that participants with expert musical experience would have been able to maintain the rhythm more consistently than non-musically trained participants. Multiple participants reported that the rhythm was very challenging and that they had to spend a lot of mental effort focusing specifically on it. A study conducted by Fischinger (2011) concluded that professional drummers show a change in rhythm performance when their attention is drawn to another task. This indicates that even though expert musicians may be more efficient at maintaining a constant rhythm during a primary task, their rhythm performance is still affected by that task. Interestingly, Fischinger (2011) proposes a dual-route model of rhythm perception in his article, providing evidence for the existence of two different cognitive pathways based on fundamental psychological principles of perception and action control, as well as neurobiological findings in rhythm processing and sensorimotor synchronization. It seems rather intriguing that empirical studies in musicology, cognitive load theory, and intuition research all characterize mental processing in terms of a dual processing system.

Time on task and error rate could not be reliably measured because no definite intervention protocol was established regarding what the facilitator should do when participants made navigation mistakes during product interaction. Generally, if a participant navigated to a page that was never mentioned in the instruction, the facilitator instructed the participant to press the back button in the browser.
If a participant used the User Timesheet view instead of the worklog calendar view to log work, the facilitator would say: "This is an alternative way to log work in Tempo, but your task is to log work using the worklog calendar". However, participants would often make unpredictable navigation errors, or even just spend a few minutes looking at the screen without moving the mouse, and this was not accounted for.

Theoretical Foundation

Although the scientific community does not seem to agree on a precise definition of intuition, there does seem to be a consensus regarding what the concept of intuition exemplifies. The four INTUI measurements can tentatively be mapped onto CLT concepts as follows:

• Effortlessness: perhaps a result of automatic processing, which bypasses mental effort (CLT); this should correlate with subjective measures of cognitive load (mental effort).

• Magical Experience: perhaps a result of automatic processing in relation to emotion.

• Non-Verbalizability: perhaps due to the non-conscious nature of automatic processing.

• Gut Feeling: perhaps a result of bypassing mental effort combined with the non-conscious nature of automatic processing.


Rhythm Method

Participants easily understood the rhythm method as described by Park and Brünken (2015). Although some seemed rather perplexed by the idea of maintaining a specific rhythm while learning how to use a software system, most participants were able to maintain the rhythm without problems. However, two participants had a considerably difficult time maintaining the rhythm; neither was musically trained, and one suffered from arthritis.

The data analysis described by Park and Brünken (2015) turned out to be incompatible with the data gathered in this study. Park and Brünken place the threshold between what counts as a short rhythm component and what counts as a long rhythm component at 1000 milliseconds. The main complication in this experiment was that the rhythm was dynamic, which meant that a single, static threshold between short and long components was not sufficient: a static threshold would place part of the rhythm data in the wrong category. This author suggests that future investigations improve upon this by utilizing a dynamic threshold calculated from the succession of sequential rhythm components. For the purpose of this project, a static threshold based on visual inspection with manual error checking was used; however, this was a time-consuming process that should ideally be avoided. According to Park and Brünken (2015), the long rhythm component should not exceed 2000 milliseconds; if it does, it should be removed and counted as an error. In this study, it was noticed that the long rhythm component for some participants was registered as greater than 2000 milliseconds (see Test subject 32 in Appendix F.1), which meant that the majority of the long rhythm components for those participants were being categorized as errors.
This rule might also need adjustment in future investigations. This author also proposes including an additional calculation of variance as a means to rectify the issue of calculating mean values for a dynamic rhythm. However, due to the limitations of the experimental conditions, the validity of this proposal cannot be verified.
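As an illustration of the proposed analysis change, the sketch below contrasts the static 1000 ms / 2000 ms rule from Park and Brünken (2015) with a simple dynamic threshold based on a running mean of the preceding components, plus the proposed variance calculation. The dynamic threshold formula, the window size, and all data values are assumptions made for this example, not the procedure actually used in the study:

```python
# Classifying inter-tap intervals into short/long rhythm components.

def classify_static(intervals_ms, threshold=1000, long_limit=2000):
    """Static rule (Park & Brünken, 2015): below `threshold` is 'short',
    up to `long_limit` is 'long', and anything longer is an 'error'."""
    labels = []
    for iv in intervals_ms:
        if iv < threshold:
            labels.append("short")
        elif iv <= long_limit:
            labels.append("long")
        else:
            labels.append("error")
    return labels

def classify_dynamic(intervals_ms, window=4):
    """Assumed dynamic rule: the threshold is the running mean of the
    preceding `window` intervals, so a drifting (dynamic) rhythm does
    not push its long components into the error category."""
    labels = []
    for i, iv in enumerate(intervals_ms):
        history = intervals_ms[max(0, i - window):i] or [iv]
        threshold = sum(history) / len(history)
        labels.append("short" if iv <= threshold else "long")
    return labels

def interval_variance(intervals_ms):
    """Population variance of the intervals, as proposed alongside the
    mean to characterize a dynamic rhythm."""
    m = sum(intervals_ms) / len(intervals_ms)
    return sum((iv - m) ** 2 for iv in intervals_ms) / len(intervals_ms)

taps = [450, 1400, 480, 2150, 500, 1500]   # fabricated example data
print(classify_static(taps))   # the 2150 ms component becomes an 'error'
print(classify_dynamic(taps))  # the same component stays a 'long'
```

With the fabricated data above, the static rule misclassifies the 2150 ms interval as an error even though it is clearly the long component of a slowed-down rhythm, which is exactly the Test subject 32 problem described in the text.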


CHAPTER 8

CONCLUSION

Based on the results from the experiment, it is not possible to determine conclusively whether instruction had an effect on subjective ratings of intuitive interaction. However, the general tendency seems to suggest that participants that received no instruction relied more on rational processing than participants that received some form of instruction. The same can be said for participants that received textual instruction: the general tendency suggests that they relied more on rational processing than participants that received video instruction. The results from the correlation analysis indicate that age was a significant confounding variable in the experiment. Because the age distribution between the groups was not equal, it is unfortunately not possible to draw any meaningful conclusions from the subjectively rated questions on intuitive interaction and cognitive load. The correlation analysis did, however, show some correlation between the rhythm measurements and the Likert questions, although those same questions did not show a significant difference between the learning methods. Further analysis of the data is overshadowed by the fact that each group had at most 10 participants, meaning that a single participant in each group accounted for approximately 10 percent of that group's data. It is therefore not possible to draw any reliable conclusions from the dataset.

The secondary focus of the present study was to investigate how different learning methods affect the intuitiveness of an interface. The ability to engage the user in active problem solving while receiving auditory instructions is arguably the primary advantage of interactive instructional videos. However, it has to be noted that in order to provide a reasonable comparison between all learning methods used in the experiment, active problem-solving elements had to be excluded from the interactive video.


CHAPTER 9

FIGURE REFERENCE LIST

• Figure 2.1: Atlas robot: http://www.figures.com/forums/attachment.php?attachmentid=224982&d=1409850669
• Figure 2.1: Canon camera: http://shop.usa.canon.com/wcsstore/CanonB2BStoreFrontAssetStore/images/32250_1_l.jpg
• Figure 2.2: Four sub-factors of intuitive interaction: http://intuitiveinteraction.net/model/
• Figure 3.2: Rapt Media video illustration: http://www.raptmedia.com/product
• Figure 5.1: Tempo video tutorials: http://tempo.io/training/

9.0.1 Figures used in the instructional material

• Cashier box: https://www.cosmoshop.de/wordpress/wp-content/uploads/kasse.jpg
• Cash and envelope: https://nebula.wsimg.com/473cb949642d7be28088da1680227d03?AccessKeyId=C011D7C7F71AAA64DB78&disposition=0&alloworigin=1
• Software bug figure: http://lerablog.org/wp-content/uploads/2013/05/software-bug.jpg
• Project management figure: http://tinyurl.com/hx9946m


Bibliography

Alethea Blackler. Intuitive Interaction with Complex Artefacts: Empirically-Based Research. PhD thesis, 2008.

Alethea Blackler and Vesna Popovic. Towards intuitive interaction theory. Interacting with Computers, 27(3):1–7, 2015. doi: 10.1093/iwc/iwv011.

Alethea Blackler, Vesna Popovic, and Doug Mahar. Studies of intuitive interaction employing observation and concurrent protocol. In Proceedings Design 2004, 8th International Design Conference, pages 135–142, 2004. URL http://eprints.qut.edu.au/archive/00003639.

Alethea Blackler, Vesna Popovic, and Doug Mahar. Intuitive interaction applied to interface design. In Proceedings International Design Congress - IASDR 2005, pages 0–10, 2005.

Alethea Liane Blackler and Jorn Hurtienne. Towards a unified view of intuitive interaction: definitions, models and tools across the world. MMI-Interaktiv, 13:36–54, 2007. URL http://eprints.qut.edu.au/19116/.

Roland Brünken, Jan L. Plass, and Detlev Leutner. Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38(1):53–61, 2003. doi: 10.1207/S15326985EP3801_7.

S. Chaiken and Y. Trope. Dual-Process Theories in Social Psychology. Guilford Press, 1999. ISBN 9781572304215.

Hwan-Hee Choi, Jeroen J. G. van Merriënboer, and Fred Paas. Effects of the physical environment on cognitive load and learning: Towards a new model of cognitive load. Educational Psychology Review, 26(2):225–244, 2014. doi: 10.1007/s10648-014-9262-6.

Gabriele Cierniak, Katharina Scheiter, and Peter Gerjets. Explaining the split-attention effect: Is the reduction of extraneous cognitive load accompanied by an increase in germane cognitive load? Computers in Human Behavior, 25(2):315–324, 2009. doi: 10.1016/j.chb.2008.12.020.

Erik Dane and Michael G. Pratt. Exploring intuition and its role in managerial decision making. Academy of Management Review, 32(1):33–54, 2007. doi: 10.5465/amr.2007.23463682.

Krista E. DeLeeuw and Richard E. Mayer. A comparison of three measures of cognitive load: Evidence for separable measures of intrinsic, extraneous, and germane load. Journal of Educational Psychology, 100(1):223–234, 2008. doi: 10.1037/0022-0663.100.1.223.

Sarah Diefenbach and Daniel Ullrich. An experience perspective on intuitive interaction: Central components and the special effect of domain transfer distance. Interacting with Computers, 27(3):210–234, 2015. doi: 10.1093/iwc/iwv001.

Seymour Epstein. Demystifying intuition: What it is, what it does, and how it does it. Psychological Inquiry, 21(4):295–312, 2010. doi: 10.1080/1047840X.2010.523875.

Jonathan St. B. T. Evans. Intuition and reasoning: A dual-process perspective. Psychological Inquiry, 21(4):313–326, 2010. doi: 10.1080/1047840X.2010.521057.

Timo Fischinger. An integrative dual-route model of rhythm perception and production. Musicae Scientiae, 15(1):97–105, 2011. doi: 10.1177/1029864910393330.

Robin M. Hogarth. Intuition: A challenge for psychological research on decision making. Psychological Inquiry, 21(4):338–353, 2010. doi: 10.1080/1047840X.2010.520260.

Nina Hollender, Cristian Hofmann, Michael Deneke, and Bernhard Schmitz. Integrating cognitive load theory and concepts of human-computer interaction. Computers in Human Behavior, 26(6):1278–1288, 2010. doi: 10.1016/j.chb.2010.05.031.

Slava Kalyuga, Paul Chandler, and John Sweller. Incorporating learner experience into the design of multimedia instruction. Journal of Educational Psychology, 92(1):126–136, 2000. doi: 10.1037/0022-0663.92.1.126.

J. Locke. An Essay Concerning Human Understanding: In Four Books, volume 2. H. Woodfall, 1768. URL https://books.google.is/books?id=dQYOAAAAYAAJ.

Richard E. Mayer. Multimedia learning: Are we asking the right questions? Educational Psychologist, 32:37–41, 2010. doi: 10.1207/s15326985ep3201_1.

Mitchell McEwan, Alethea Blackler, Daniel Johnson, and Peta Wyeth. Natural mapping and intuitive interaction in videogames. In CHI PLAY '14: Proceedings of the First ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play, pages 191–200, 2014. doi: 10.1145/2658537.2658541.

Everett McKay. Intuitive UI: What the heck is it?, 2010. URL http://www.uxdesignedge.com/2010/06/intuitive-ui-what-the-heck-is-it/.

M. David Merrill. First principles of instruction. Educational Technology Research and Development, 50(3):43–59, 2002. doi: 10.1007/BF02505024.

George Miller. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2):81–97, 1956. doi: 10.1037/h0043158.

Gowrishankar Mohan, Alethea Blackler, and Vesna Popovic. Using conceptual tool for intuitive interaction to design intuitive website for SME in India: A case study. In IASDR 2015: Interplay Proceedings, pages 1500–1521, 2015.

Anja Naumann, Jörn Hurtienne, Johann Habakuk Israel, Carsten Mohs, Martin Christof Kindsmüller, Herbert A. Meyer, Steffi Hußlein, and IUUI Research Group. Intuitive use of user interfaces: Defining a vague concept. In Engineering Psychology and Cognitive Ergonomics, pages 128–136, 2007. doi: 10.1007/978-3-540-73331-7_14.

Fred Paas and Jeroen J. G. van Merriënboer. Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review, 6(4):351–371, 1994a. doi: 10.1007/BF02213420.

Fred Paas and Jeroen J. G. van Merriënboer. Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology, 86(1):122–133, 1994b. doi: 10.1037/0022-0663.86.1.122.

Fred Paas, Alexander Renkl, and John Sweller. Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1):1–4, 2003a. doi: 10.1207/S15326985EP3801_1.

Fred Paas, Juhani E. Tuovinen, Huib Tabbers, and Pascal W. M. Van Gerven. Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1):63–71, 2003b. doi: 10.1207/S15326985EP3801_8.

Babette Park and Roland Brünken. The rhythm method: A new method for measuring cognitive load. An experimental dual-task study. Applied Cognitive Psychology, 29(2):232–243, 2015. doi: 10.1002/acp.3100.

Rapt Media. Best Practices for Interactive Video: A Guide to Creating Effective Interactive Videos. URL http://info.raptmedia.com/interactive-video-best-practice-guide.

Annett Schmeck, Maria Opfermann, Tamara van Gog, Fred Paas, and Detlev Leutner. Measuring cognitive load with subjective rating scales during problem solving: Differences between immediate and delayed ratings. Instructional Science, 43(1):93–114, 2015. doi: 10.1007/s11251-014-9328-3.

Marta Sinclair. Misconceptions about intuition. Psychological Inquiry, 21(4):378–386, 2010. doi: 10.1080/1047840X.2010.523874.

John Sweller. Element interactivity and intrinsic, extraneous, and germane cognitive load. Educational Psychology Review, 22(2):123–138, 2010. doi: 10.1007/s10648-010-9128-5.

John Sweller, Paul Ayres, and Slava Kalyuga. Cognitive Load Theory. Springer, New York, NY, 2011. ISBN 978-1-4419-8125-7. doi: 10.1007/978-1-4419-8126-4.

Daniel Ullrich and Sarah Diefenbach. INTUI: Exploring the facets of intuitive interaction. In Mensch & Computer 2010, pages 251–260, 2010a.

Daniel Ullrich and Sarah Diefenbach. From magical experience to effortlessness: An exploration of the components of intuitive interaction. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, pages 801–804, 2010b. doi: 10.1145/1868914.1869033.

Jeroen J. G. van Merriënboer and John Sweller. Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17(2):147–177, 2005. doi: 10.1007/s10648-005-3951-0.

Dongsong Zhang, Lina Zhou, Robert O. Briggs, and Jay F. Nunamaker. Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness. Information and Management, 43(1):15–27, 2006. doi: 10.1016/j.im.2005.01.004.


APPENDIX A

A.1 Intuition is a Fuzzy Concept

"Many researchers have postulated so-called dual process theories (Chaiken & Trope, 1999) that distinguish between what can be roughly called "intuitive" and "analytic" decision-making processes, although these go under different names, for example, "System 1" and "System 2" (Stanovich & West, 2000), "experiential" and "rational" (Epstein, 1994), or "tacit" and "deliberate" (Hogarth, 2001)." (Hogarth, 2010, Page 338).

• Sinclair (2010) proposed a comprehensive intuition model based on three types of intuition: intuitive expertise, intuitive creation, and intuitive foresight.

• Dane and Pratt (2007) also focus on dual-process theory; however, they conceptualize intuition both by its process (which they refer to as intuiting) and by its outcome (termed intuitive judgments).

• Hogarth (2010) describes four different levels of intuition.

• Blackler and Hurtienne (2007) describe intuitive interaction as a continuum.

• Other sources describe intuition as nothing more than a word with multiple meanings.

Even though the previous sections have attempted to shed some light on the definition of intuition, the concept still seems ambiguous and fuzzy. Daniel T. Gilbert writes about the problem of trying to define, or count, the processes of the human mind in the article What the Mind's Not. He writes: "So which is it? Two, roughly 30, or something between? It depends, of course, on how one counts. The neuroscientist who says that a particular phenomenon is the

result of two processes usually means to say something unambiguous-for example, that the inferior cortex does one thing, that the limbic system does another, and that together the electrochemical activities of these two anatomical regions produce a feeling of ennui, the aroma of stale cabbage, or the sneaking suspicion that one's spouse has been replaced by a replica. In such instances the phrase "Dual processes" refers to the activities of two different brain regions that may be physically discriminable, and the neuroscientist says there are "two processes" because the neuroscientist is talking about things that can be counted. But few of the psychologists whose chapters appear in this volume would claim that the dual processes in their models necessarily correspond to the activity of two distinct brain structures" (Chaiken and Trope, 1999, Page 7).

In order to gain a better understanding of why intuition seems to be such a fuzzy construct, let us look at the history of the word. Figure A.1 shows a graph illustrating the usage of the word "intuition". Based on this graph, it can be seen that the usage of the word in literature dates back at least 400 years, and it is safe to assume that the word has been used for longer than that. Comparing this to the first known uses of "intuitive interaction" and "intuitive use" reveals a big difference. Figure A.1 illustrates how the terms "intuition", "intuitive use", and "intuitive interaction" have occurred in books digitized by Google from the year 1550 to 2016. This graph was created using Google Books Ngram.

Figure A.1: This figure illustrates the number of references from the year 1550 to 2016 to “intuition”, “intuitive interaction”, and “intuitive use”

The x-axis shows the span of years, and the y-axis shows what percentage of Google’s sample of books written in English and published in the United States contains the phrase “intuition”, “intuitive use”, or “intuitive interaction”. The figure shows that the use of the term “intuition” far exceeds the use of “intuitive use” and “intuitive interaction”, both in terms of the percentage of books containing the phrase and the first known use of the term in Google’s sample of books. By removing the term “intuition” from the figure and reducing the timespan to 1820-2016, the use of the remaining terms becomes more apparent. This can be seen in Figure A.2, where the use of the phrase “intuitive use” started in the mid-1800s, while the use of the phrase “intuitive interaction” started in the mid-1900s and has seen a considerable rise in use during the past 20 years.

¹ Link to Google’s Ngram service: https://books.google.com/ngrams

Figure A.2: This figure illustrates the number of references by year to “intuitive interaction” and “intuitive use”

These graphs may provide a clue to the reason why intuition is considered to be such an ambiguous and fuzzy construct. Considering that the term intuition has been used in literature at least since the 16th century, when people were still being burned alive on suspicion of witchcraft, it is reasonable to suspect that the meaning of the word may have changed somewhat over time. The following quote gives an example of how the idea of intuition was described by John Locke in 1768:

“The different Clearness of our Knowledge seems to me to lie in the different Way of Perception the Mind has of the Agreement or Disagreement of any of its Ideas. For if we will reflect on our own Ways of Thinking, we shall find that sometimes the Mind perceives the Agreement or Disagreement of two Ideas immediately by themselves, without the Intervention of any other: And this, I think, we may call intuitive Knowledge. For in this, the Mind is at no Pains of proving or examining, but perceives the Truth, as the Eye doth Light, only by being directed toward it. Thus the mind perceives, that white is not black, that a Circle is not a Triangle, that Three are more than Two, and equal to One and Two. Such kind of Truths the Mind perceives at the first sight of the Ideas together, by bare Intuition, without the Intervention of any other Idea; and this Kind of Knowledge is the clearest and most certain, that human Frailty is capable of” (Locke, 1768, pp. 103-104).

Despite the fact that John Locke wrote this essay more than 240 years ago, his use of the term seems relatively similar to the definitions listed earlier in this chapter. Although the writing style of that era differs considerably from what we are familiar with today, it is possible to extract some conceptual understanding of what Locke is trying to convey through this text.

Based on this author’s interpretation of the text, Locke seems to describe intuition as an immediate (rapid) agreement or disagreement of two ideas without the intervention of any other ways of thinking (or ideas). He seems to indicate that intuitive knowledge is the mind’s easiest, clearest, and most certain way of knowing. However, without consulting a literature expert on this era, this interpretation is up for debate.

APPENDIX B

INTUI QUESTIONNAIRE

Please recall the use of the product and describe your experience using the following pairs of expressions. The pairs represent extreme opposites, with possible graduations between them. Perhaps some of the expressions are not quite suitable to the product. Nevertheless, please checkmark one box in each row, indicating which term you deem applicable. Please consider that there are no “correct” or “incorrect” answers – only your own personal opinion counts!

Each pair of expressions is rated on a 7-point scale (boxes 1-7). The item codes in parentheses identify the INTUI scales (G = gut feeling, E = effortlessness, X = magical experience, V = verbalizability, INT = intuitiveness rating); items marked P are reverse-polarity items.

While using the product…
1. …I acted deliberately / …I acted on impulse (G_01)
2. …it took me a lot of effort to reach my goal / …I reached my goal effortlessly (E_01)
3. …I performed unconsciously, without reflecting on the individual steps / …I consciously performed one step after another (G_02, P)
4. …I was guided by reason / …I was guided by feelings (G_03)
5. …I felt lost / …I easily knew what to do (E_02)
6. …I acted without thinking / …I was able to explain each individual step (G_04, P)

Using the product…
7. …required my close attention / …ran smoothly (E_03)
8. …was inspiring / …was insignificant (X_01, P)
9. …was easy / …was difficult (E_04, P)
10. …was nothing special / …was a magical experience (X_02)
11. …was very intuitive / …wasn’t intuitive at all (INT_01, P)
12. …was trivial / …carried me away (X_03)
13. …came naturally / …was hard (E_05, P)
14. …was fascinating / …was dull (X_04, P)

In retrospect…
15. …it is hard for me to describe the individual operating steps / …I have no problem describing the individual operating steps (V_01)
16. …I can easily recall the operating steps / …it is difficult for me to remember how the product is operated (V_02, P)
17. …I’m not able to express in which way I used the product / …I can say exactly in which way I used the product (V_03)

© INTUI (English), http://intuitiveinteraction.net/, Ullrich, D., Diefenbach, S. (2010).

B.1 Effects that Reduce Extraneous Cognitive Load

Goal-free effect
Description: Replace conventional problems with goal-free problems that provide learners with an a-specific goal.
Effect on extraneous load: Reduces extraneous cognitive load caused by relating a current problem state to a goal state and attempting to reduce differences between them; focuses the learner’s attention on problem states and available operators.

Worked example effect
Description: Replace conventional problems with worked examples that must be carefully studied.
Effect on extraneous load: Reduces extraneous cognitive load caused by weak-method problem solving; focuses the learner’s attention on problem states and useful solution steps.

Completion problem effect
Description: Replace conventional problems with completion problems, providing a partial solution that must be completed by the learners.
Effect on extraneous load: Reduces extraneous cognitive load because giving part of the solution reduces the size of the problem space; focuses attention on problem states and useful solution steps.

Split-attention effect
Description: Replace multiple sources of information (frequently pictures and accompanying text) with a single, integrated source of information.
Effect on extraneous load: Reduces extraneous cognitive load because there is no need to mentally integrate the information sources.

Modality effect
Description: Replace a written explanatory text and another source of visual information such as a diagram (unimodal) with a spoken explanatory text and a visual source of information (multimodal).
Effect on extraneous load: Reduces extraneous cognitive load because the multimodal presentation uses both the visual and the auditory processor of working memory.

Redundancy effect
Description: Replace multiple sources of information that are self-contained (i.e., they can be understood on their own) with one source of information.
Effect on extraneous load: Reduces extraneous cognitive load caused by unnecessarily processing redundant information.

Table B.1: Effects that reduce extraneous cognitive load. This table was copied from the article “Cognitive Load Theory and Complex Learning: Recent Developments and Future Directions”, by Van Merriënboer and Sweller (2005).

APPENDIX C

EMAIL COMMUNICATION WITH INTUI GROUP

From: Daniel Ullrich [email protected]
Subject: Re: Master thesis question regarding intuitive interaction
Date: 23 March 2016 at 13:11
To: Þorgeir Gisli Skúlason [email protected]
Cc: [email protected]

Hi Thorgeir, you are right with your assumtion. Personally I would assume that effortlessness would be related to both, mental and physical effort. Nonetheless the concept refers primarily to mental effort - keep in mind that it is derived from intuition theories where mental effort is in focus. Therefore, it's quite plausible that there could be a link between mental effort and cog. load. Best regards, Daniel

On 23 March 2016, at 12:53, Þorgeir Gisli Skúlason wrote:

Greetings Daniel, Thank you for a very informative reply! I will definitely take a look at processing fluency. At first glance it seems like the description of effortlessness shares some similarities with Cognitive Load Theory, which I have been looking into. Is it right to assume that effortlessness is referring specifically to mental effort (not the physical effort of clicking on a mouse button)?… If so, do you think there might be a link between mental effort and cognitive load? Best regards, Thorgeir Gisli Skúlason, PDP10 [email protected]

On 21 Mar 2016, at 13:27, Daniel Ullrich wrote: Dear Thorgeir, thank you for your mail. We are happy to hear that you like our approach and appreciate your feedback. Since you send your mail to our intuitiveinteraction.net emails I assume you already know our website. Here are my thoughts about your questions: (1) Intuition is based on unconscious processing. It can only work if you have already learned the relevant things in the past. In our view intuitive interaction works in a similar way: if you have learned certain interaction principles you can (sooner or later) use them with little or no effort. Gigerenzer (scientist in the field of intuition and decision making) gave the example that driving a car would be a good example for intuition since he could drive a car without thinking about it. Thus, his driving experience was effortless. In our view, driving a car is not the best example for intuition/intuitive interaction - just think about novice drivers and how lost and overexerted they feel while trying to keep track of all the information and controls within a car. Now, who is right? You could argue for both positions but the point is that it gives an important hint about the changing nature of intuitive interaction. With ongoing interaction time the interaction requires less and less effort until it reaches the stage of complete effortlessness. Therefore different people with varying experience rate the same product differently regarding its intuitiveness. So, to answer your question (how does a feature become effortless): -interaction time (=experience; the more the better) -low complexity (influences learning curve) -high usability rating (see the next point) -usage of known interaction paradigms, e.g. metaphors (activation of already learned principles) -suitable usage domain (product is suitable for user and task)

This list is a mix of my experiences and study results. (2,3) Until now we have not checked for compatibility with other models because pinning down the phenomenon was our priority (our model went through several stages of development; you would be surprised if I showed you the first version...). Checking for compatibility is on our research agenda but not top priority. Regarding the question of a equivalent theory in the field of cog psychology: we do not think there is a suitable equivalent, maybe the concept of processing fluency comes close to it if you want a starting point for further research. I hope my thoughts were helpful and did not cause confusion :) Best regards, Daniel

On 18 March 2016, at 13:04, Þorgeir Gisli Skúlason wrote:

Greetings Sara and Ullrich, My name is Þorgeir Skúlason and I am a master thesis student from Aalborg University (Denmark), studying Product and Design Psychology. I am currently researching the topic of intuitive interaction for my master thesis report and lately I have been reading alot of your work. I really like the phenomenological approach that you have taken towards this topic and I plan to do something similar for my project. I think this approach makes a lot more sense than to chase the arguably illusive definition of intuition. In your research, you managed to identify four factors which seem to be highly correlated to intuitive interaction: Gut feeling, verbalizability, magical experience and effortlessness. The last point really sparked my interest and I was wondering if you might be able to inform me about it. I’m interested in learning about the underlying mechanics, or possible sub-components of effortlessness. For example:

- How does a feature become effortless to use? (is it a function of time, or how often it is used, or perhaps it involves the intensity of cognitive processing over time)… is there a learning curve?
- What is the cognitive psychological equivalent theory for effortlessness... Something to do with affordances, memory, innate knowledge…?
- Most authors agree that intuition and intuitive interaction is based on previous knowledge. That must mean that intuition involves memory in some way. But have you considered if your take on intuitive interaction is compatible with existing memory models such as Baddeley’s model of working memory?

I hope you have time to answer some of these questions, I really appreciate it. Keep up the great work! Best regards, Thorgeir Gisli Skúlason, Master Thesis Student: Product and Design Psychology [email protected]

APPENDIX D

INSTRUCTIONAL MATERIAL

D.1 Textual Instructional Manual

User’s Guide to Tempo Timesheets for JIRA

In this instructional manual, you will learn how to use two features of a project management tool called Tempo Timesheets for JIRA. This guide explains how to create an issue in JIRA and how to log work on that issue using Tempo Timesheets. Before we dive into that, let’s answer the question: what are JIRA and Tempo Timesheets?

What is JIRA?

JIRA is issue-tracking software that provides companies and organizations with bug-tracking, issue-tracking, and project-management functions. JIRA enables managers and employees to plan projects, manage teams, track the status of projects, share information, and do much more.

What is Tempo Timesheets?

Tempo Timesheets is a software add-on for JIRA. Tempo Timesheets primarily enables you to log work on JIRA issues. The worklogs can be used, for example, to calculate employee salaries or customer billing information. Before you can log work using Tempo Timesheets, you must learn how to create a JIRA issue.

What is a JIRA issue?

Different organizations use JIRA to track different kinds of issues. Depending on how your organization uses JIRA, an issue could, for example, represent a software bug or a project task. An issue describes a work task that an employee must do; for example, fix a software bug, attend a meeting, or develop a new feature.

 

 

Creating a JIRA issue

1. On the top menu bar, click Create (keyboard shortcut: c).

2. In the Create Issue window, fill in the requested information.

Project: Link the issue to a JIRA project.
Issue Type: Choose the type of issue (for example: bug, improvement, new feature, task).
Summary: A brief one-line summary of the issue.
Description: In the description, you can mention other JIRA users. An email message is sent to their email addresses after you update the issue. You can also link the issue to other issues, and insert macros and images.
Team: If you want to assign this issue to a team, select the team here.
Create another: If you want to create a series of similar issues, for the same project and of the same issue type, select the Create another check box.

JIRA allows you to customize the issue in many more ways, but for now, let’s create the issue and move on.

3. Click Create.

Logging work on a JIRA issue

The core function of Tempo Timesheets is to log time that employees work on JIRA issues.

Logging work from the worklog calendar

1. Go to the time view of the worklog calendar by completing the following steps:
   a) On the top menu bar, click Tempo > Timesheets.
   b) On the second menu bar, click Worklog Calendar.
   c) On the second menu bar, click the clock icon.

2. On the worklog calendar, in the Today column, click a time that you want to log work for.

3. On the form that is displayed, fill in the requested information, and click Save.

Tip: Dragging and dropping
Alternatively, you can log work by clicking an existing worklog card or by dragging a suggestion card from the right sidebar to the worklog calendar.

Logging work using the Log Work button

Another way to log work is by using the Log Work button.

1. Go to the user timesheet view by completing the following steps:
   a) On the top menu bar, click Tempo > Timesheets.
   b) On the second menu bar, click User Timesheet.

2. On the right side of the timesheet, click Log Work (keyboard shortcut: w).

3. In the Log Work window, fill in the requested information.

Left sidebar: Choose the type of work that you want to log.
Issue: Search for an issue by starting to write in the Issue field.
Period: If you select the Period checkbox, an End date field for the time period is displayed.
Date: If you are logging work for a date other than today’s date, select the date.
Time: The start time of the work.
Worked: The number of hours you want to log on this issue.
Description: A brief description of the work.
Log another: If you want to log more work after this log, select the Log another checkbox.

4. Click Log Work.

D.2 Video narration scripts

NON-INTERACTIVE VIDEO

Hello and welcome. In this instructional video you will learn how to use two features in a project management tool called Tempo Timesheets for JIRA. You will learn how to create an issue in JIRA and how to log work on that issue, using Tempo Timesheets (illustrate using Keynote).

Before we dive into that, let’s answer the question: what are JIRA and Tempo Timesheets?

JIRA is issue-tracking software that provides companies and organizations with bug-tracking, issue-tracking, and project-management functions. JIRA enables managers and employees to plan projects, manage teams, track the status of projects, share information, and do much more.

Tempo Timesheets is a software extension for JIRA. Tempo Timesheets primarily enables you to log work on JIRA issues. The worklogs can be used, for example, to calculate employee salaries or customer billing information. Before you can log work using Tempo Timesheets, you must learn how to create a JIRA issue.

 

Different organizations use JIRA to track different kinds of issues. Depending on how your  organization uses JIRA, an issue could, for example, represent a software bug or a project task.  An issue describes a work task that an employee must do; for example, fix a software bug,  attend a meeting, or develop a new feature.  

 

 

Creating a JIRA issue

Now, let’s go over how to create a JIRA issue. We start by clicking on the Create button on the top menu bar. You can also use the keyboard shortcut “C”. This opens up the Create Issue window.

In the Create Issue window, fill in the requested information.

First, link the issue to a JIRA project.

Next, choose which type of issue you are creating (for example: bug, improvement, new feature, task).

Write a brief one-line summary of the issue in the summary field.

Describe the issue in more detail. Here, you can mention other JIRA users. An email message is sent to their email address after you update the issue. You can also link the issue to other issues, and insert macros and images.

If you want to assign this issue to a team, select the name of the team here.

If you want to create a series of similar issues, select the Create another check box.

JIRA allows you to customize the issue in many more ways, but for now, let’s create the issue and move on.

Finally, click Create.

 

Logging work on a JIRA issue

The core function of Tempo Timesheets is to log time that employees work on JIRA issues. You can log work in two ways: using the worklog calendar, or the Log Work button.

Logging work from the worklog calendar

Go to the time view of the worklog calendar by completing the following steps:
a) On the top menu bar, click Tempo > Timesheets.
b) On the second menu bar, click Worklog Calendar.
c) On the second menu bar, click the clock icon.

On the worklog calendar, in the Today column, click a time that you want to log work for.

On the form that is displayed, fill in the requested information, and click Save.

Tip: Dragging and dropping
Alternatively, you can log work by clicking an existing worklog card or by dragging a suggestion card from the right sidebar to the worklog calendar.

Logging work using the Log Work button

Another way to log work is by using the Log Work button.

1. To open the User Timesheet view, we start by clicking on the Tempo menu.
   a) And select Timesheets.
   b) Then we click on the User Timesheet button.

2. Here, we can see the Log Work button. Now we can either click on it, or use the keyboard shortcut: w (for work).

3. In the Log Work window, we fill in the requested information.

Left sidebar: First, let’s choose the type of work that we want to log.
Issue: Then we search for the issue by starting to write in the Issue field, or select it from the dropdown menu.
Period: If you select the Period checkbox, an End date field is displayed. This is useful if you want to log work for multiple days in a row.
Date: If you are logging work for a date other than today’s date, select that date.
Time: The start time of the work. When did you start working?
Worked: The number of hours you want to log on this issue.
Description: A brief description of the work.
Log another: If you want to log more work after this log, select the Log another checkbox.

4. Click Log Work.

 

 

 

INTERACTIVE VIDEO

Start

Hello and welcome to this interactive video. Over the next couple of minutes you will learn how to use two features in a project management tool called Tempo Timesheets for JIRA. You will learn how to create an issue in JIRA and how to log work on that issue, using Tempo Timesheets.

Before you dive into that, let me explain how to use an interactive video. An interactive video means that you, the viewer, have the ability to interact directly with the information presented on the screen by clicking on it with your mouse. Try it out right now; click on the picture of a cat, on the screen now:

[Option 1 - Dog]
Uhm.. That’s not a cat.. Try again.

[Option 2 - Cat]
Great! You got that right. Now you know how to interact with information on the screen. You can also use the video timeline to pause, or rewind, if you need to listen to something again.

Now you should be ready to learn what JIRA and Tempo Timesheets are. Click continue to start learning.

[Interaction - Continue]
JIRA is issue-tracking software that provides companies and organizations with bug-tracking, issue-tracking, and project-management functions. JIRA enables managers and employees to plan projects, manage teams, track the status of projects, share information, and do much more.

Tempo Timesheets is a software extension for JIRA. Tempo Timesheets primarily enables you to log work on JIRA issues. The worklogs can be used, for example, to calculate employee salaries or customer billing information. Before you can log work using Tempo Timesheets, you must learn how to create a JIRA issue.

 

Different organizations use JIRA to track different kinds of issues. Depending on how your  organization uses JIRA, an issue could, for example, represent a software bug or a project task.  An issue describes a work task that an employee must do; for example, fix a software bug,  attend a meeting, or develop a new feature.  

 

 

Alright, are you ready to create your first JIRA issue?
[Interaction - Yes]
Great! Start by clicking on the Create button on the top menu bar, right over there.

[Interaction - Create issue button]
Well done. What you did there was open up the Create Issue window. You can also use the keyboard shortcut “C” to open up this window…. In the Create Issue window, fill in the requested information.

First, link the issue to a JIRA project…. Select the project from the dropdown menu right here.
[Interaction - Project]

Next, choose which type of issue you are creating (for example: bug, improvement, new feature, task).... Click on the issue type now….
[Interaction - Issue Type dropdown]

Now, write a brief one-line summary of the issue in the summary field…. Click on the summary field.
[Interaction - Summary field]

Describe the issue in more detail… Click on the Description field to fill in details about the issue.
[Interaction - Description field]
Here, you can mention other JIRA users. An email message will be sent to them after you update the issue. You can also link the issue to other issues, and insert macros, images and much more.

If you want to assign this issue to a team, select the name of the team here. Select which team will work on the issue…. Select the team right here...
[Interaction - Team]

If you want to create a series of similar issues, select the Create another check box right here.

JIRA allows you to customize the issue in many more ways, but for now, save this issue by clicking on the Create button….. Click on the Create button….
Click on the Create button….
[Interaction - Create]

Great! Well done, you have now created your first JIRA issue! If there was anything that you missed, or didn’t understand about the Create Issue window, you can press replay, OR you can press continue to learn how we can log work on this issue. [Option - Replay] [Option - Continue]

The core function of Tempo Timesheets is to log work on JIRA issues. You can log work in two ways: using the worklog calendar, or the Log Work button. Choose which method you want to learn first... Click on the screen to choose which method of logging work you want to learn first.

[Option 1 - Worklog Calendar] → then play Log Work button
[Option 2 - Log Work button] → then play Worklog Calendar

OPTION 1 - Worklog Calendar

Go to the worklog calendar by completing the following steps:

Click the Tempo menu, on the top menu bar…. (POINTER)
[Interaction - Tempo menu click]

Now select Timesheets.
[Interaction - Timesheets]

Click on the Worklog Calendar button, on the top right….. (POINTER)
[Interaction - Timesheets]
Great, now you are in the worklog calendar view… Make sure that the clock icon is selected.

On the worklog calendar, in the Today column, we select a time to log the work…. On the form that is displayed, we fill in the requested information.

Click Save to log this work… Click on the save button.
[Interaction - Save]

Great work! You just logged work using Tempo Timesheets! :)
Alternatively, you can also log work by dragging a suggestion card from the right sidebar to the worklog calendar.

 

 

OPTION 2 - Log Work button

Go to the User Timesheet view by completing the following steps:

Click the Tempo menu, on the top menu bar…. (POINTER)
[Interaction - Tempo menu click]

Now select Timesheets.
[Interaction - Timesheets]

Click on the User Timesheet button, on the top right….. (POINTER)
[Interaction - Timesheets]
Great, now you are in the User Timesheet view… Here you can see the Log Work button.

Click on the Log Work button.
[Interaction - Log work button]
Well done. What you did there was open the Log Work window. You can also use the keyboard shortcut “W” (for work) to open up this window.

When the Log Work window has opened, fill in the requested information.

First, choose the type of work that you want to log by selecting it on the left sidebar. We can see that issue is selected by default, so let’s continue.

Select which issue you want to log work on; search for the issue by starting to write in the Issue text field, or select it from the dropdown menu…. (pointer)... Click, to select an issue.
[Interaction - issue]

If you want to log work over a specific period of time, then you can select the Period checkbox here. This is very useful if you want to log multiple days in a row.

If you are logging work for a date other than today’s date, select that date. By default, today is always selected.

Here you can select at what time you started working.

Next, write the number of hours that you want to log on the issue by clicking on the Worked text field.... Click on the Worked text field right here (pointer)...
[Interaction - Worked]

After that, the remaining estimate should fill out automatically. Then, write a short description of the work… What did you work on specifically?... Click on the description field to fill out the worklog description... click on the description field here (pointer).
[Interaction - Description]

If you want to log more work after this worklog, select the Log another checkbox.

Now, the only thing left is to click Log Work. Click Log Work to save your worklog.
[Interaction - Log work]

Now you should be ready to create a JIRA issue and log work using Tempo Timesheets. This concludes the video. Thank you for watching.

Task 1 - Create an issue

Project: Tango OnDemand
Issue Type: New Feature
Summary: Add Skype support
Description: Make Skype available in the menu.
Team: GreenCloud Tango

Task 2 - Log work on the issue via the Worklog Calendar

Issue: Add Skype support
Date: Today
Time: 8:30 am
Worked: 3h
Description: I had a meeting with a Skype programmer.

APPENDIX E

EXPERIMENT DESIGN

E.1 Design and Construction of the Vocal Booth

A Blue Yeti microphone¹ was used to record the narration for the interactive and non-interactive videos. The microphone gain was set to 50% and the microphone pattern mode was set to cardioid. The frequency response of the Blue Yeti microphone can be seen in Figure E.1.

Figure E.1: Frequency response of the Blue Yeti Microphone in cardioid mode

In order to minimize outside noise and reduce unwanted reverberation resulting from the natural resonating frequencies of the room, it was decided to build a miniature vocal booth. The materials for this booth were gathered and bought from a store which sells a large variety of used objects, such as furniture and electronics. A triangular wooden object was bought, along with a thin foam mattress, some Velcro, and rubber strips. The construction process is illustrated in Figure E.2.

¹ Link to the official website of the Blue Yeti microphone: http://www.bluemic.com/products/yeti/


Figure E.2: Construction of the microphone booth


Initial testing of the microphone booth revealed some resonating frequencies in the 160 to 200 Hz range. This problem was partly resolved by moving the microphone approximately 10 centimeters away from the middle of the vocal booth. An equalizer was also added to the vocal track, with a 3 to 6 dB cut in the frequencies between 160 and 200 Hz. This resolved the problem with the resonating frequencies caused by the vocal booth. A follow-up test indicated that some reverberation from the room could be heard during vocal recording; however, this reverberation was barely audible. In Final Cut Pro X, an EQ was applied to the voice to minimize any leftover resonating frequencies. When listening to initial recordings, some unwanted audio-phase effects were also noticed. The problem was identified as coming from the vocal booth itself. It is likely that the soft mattress-foam panels that were used to line the inside walls of the vocal booth were not dense enough to dampen parts of the frequency spectrum. Through experimentation with various materials, a considerably denser packing foam was found and used as a replacement.
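The EQ cut described above was applied in Final Cut Pro X, but the same kind of correction can be sketched in code. The snippet below is a minimal illustration, not the processing actually used in the study: it builds a standard peaking-EQ biquad (RBJ audio-EQ-cookbook formulas) with an assumed centre frequency of 180 Hz, an assumed Q of 1.0, and a -4.5 dB gain (the midpoint of the 3 to 6 dB cut), and then checks its frequency response with scipy.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(fs, f0, gain_db, q):
    """Biquad peaking-EQ coefficients (RBJ audio-EQ-cookbook formulas)."""
    amp = 10 ** (gain_db / 40.0)            # square root of the linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * amp, -2 * np.cos(w0), 1 - alpha * amp])
    a = np.array([1 + alpha / amp, -2 * np.cos(w0), 1 - alpha / amp])
    return b / a[0], a / a[0]

# Assumed parameters: 180 Hz centre, Q = 1.0, -4.5 dB cut.
fs = 44100
b, a = peaking_eq(fs, f0=180.0, gain_db=-4.5, q=1.0)

# Inspect the response at the centre frequency and well outside the cut band:
# the first value is the requested -4.5 dB dip, the second stays near 0 dB.
w, h = freqz(b, a, worN=[180.0, 2000.0], fs=fs)
gains_db = 20 * np.log10(np.abs(h))
print(np.round(gains_db, 2))
```

The filtered narration itself would then be produced by running the recorded samples through `scipy.signal.lfilter(b, a, samples)`.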

E.1.1 Vocal Booth: Version 2

In order to minimize the problems that were associated with the first version of the vocal booth, the mattress foam was replaced. The new, denser packing foam was cut and reshaped into angles similar to the shapes found in semi-anechoic rooms. The purpose of this booth was not to create anechoic recording conditions; instead, it was to minimize the reflectivity of the narrator's natural speaking vocal range. In Figure E.3, a spectrogram of the narrator's vocal range can be seen. This figure indicates that the narrator's natural vocal range lies somewhere between 0 Hz and 1000 Hz. This may explain why the first version of the vocal booth was not absorbing the narrator's voice sufficiently, since low frequencies are better able to pass through material.

Figure E.3: A spectrogram illustrating the narrator's natural voice range

The final design that was used to record the narration for the instructional videos can be

seen in Figure E.4.

Figure E.4: The final design of the vocal booth.

After the vocals had been recorded, the audio was reviewed using the same headphones that the participants used in the experiment, which were Bose QuietComfort noise-canceling headphones. The equalization that was applied to the narrator's voice can be seen in Figure E.5.



Figure E.5: The equalizer settings that were applied to the narrator's vocal track

In order to measure changes in rhythm precision, the authors of the article "The Rhythm Method: A New Method for Measuring Cognitive Load – An Experimental Dual-Task Study" used an instrument foot pedal which was connected to a computer, which in turn recorded each tap using audio editing software. Park and Brünken (2015) did not specify precisely how the audio data was transformed into numerical data. It is likely that this was performed manually by measuring the time between each rhythm component (i.e., foot tap). However, this is not known.

E.2 Programming and Circuit

For the purpose of this project, I decided to utilize a different data collection and transformation method. Instead of using audio software and an instrument foot pedal, I decided to build a circuit and program an Arduino R3 microcontroller. The circuit, which can be seen in Figure E.6, consists of a simple pulldown-resistor circuit linked to a momentary on-off stomp switch. This type of switch is commonly used in guitar effect pedals because it allows the guitar player to stomp fairly heavily on the switch without breaking it. The Arduino was programmed to log the time in milliseconds between each rhythm component and categorize it as a short or long rhythm component. Everything above 1000 ms was categorized as a long rhythm component, while everything below 1000 ms was categorized as a short rhythm component. The output from the Arduino serial monitor can be seen in Table E.1. The code can be seen in Appendix E.4.


Link to Arduino home page https://www.arduino.cc/



Figure E.6: Illustration of the circuit

Number | Time between rhythm components (ms) | Type of rhythm component | Time since first rhythm component (ms)
------ | ----------------------------------- | ------------------------ | --------------------------------------
1      | 0                                   | Short                    | 0
2      | 540                                 | Short                    | 540
3      | 1500                                | Long                     | 2040
4      | 500                                 | Short                    | 2540

Table E.1: Output from the Arduino serial monitor

E.3 Electronics and Foot Pedal

In Figure E.7, the foot pedal device, breadboard and Arduino can be seen connected. The foot pedal device was made from an old lamp which was disassembled and modified in order to fit the new momentary stomp switch. During disassembly of the lamp, a fairly heavy transformer was found. I decided to keep this transformer in the pedal because it added additional weight to the pedal. The momentary stomp switch was then soldered to a 2-meter-long speaker cable; the other end of the cable was connected to the circuit via the breadboard. Underneath the foot pedal device, some rubber insulating pads were added in order to increase the friction between the foot pedal and the surface of the floor. These rubber pads, along with the weight of the transformer, were enough to keep the pedal device firmly in place during use.



Figure E.7: Foot pedal, Arduino microcontroller and breadboard setup that was used to gather the rhythm data

E.4 Arduino Rhythm Measurement Code

// Prints out taps, time between taps, mean time between taps,
// and total time from program start.
unsigned long time;            // time variable

const int buttonPin = 2;       // the pin that the pushbutton is attached to
const int ledPin = 13;         // the pin that the LED is attached to

// Variables will change:
int buttonPushCounter = 0;     // counter for the number of button presses
int buttonState = 0;           // current state of the button
int lastButtonState = 0;       // previous state of the button

// The following variables are longs because the time, measured in milliseconds,
// will quickly become a bigger number than can be stored in an int.
long lastDebounceTime = 0;     // the last time the output pin was toggled
long debounceDelay = 1000;     // the debounce time; increase if the output flickers
long lasttime = 0;
long betweenTaps = 0;
long total = 0;
long meanBetweenTaps = 0;
long previousBetweenTaps = 0;
long originalTime = 0;
long buttonStartTime = 0;
long shortTotal = 0;
long shortCounter = 0;
long shortMean = 0;
long longTotal = 0;
long longCounter = 0;
long longMean = 0;

void setup() {
  // initialize the button pin as an input:
  pinMode(buttonPin, INPUT);
  // initialize the LED as an output:
  pinMode(ledPin, OUTPUT);
  // initialize serial communication:
  Serial.begin(9600);
}

void loop() {
  // read the pushbutton input pin:
  buttonState = digitalRead(buttonPin);
  // compare the buttonState to its previous state
  if (buttonState != lastButtonState) {
    // if the state has changed, increment the counter
    if (buttonState == HIGH) {
      // if the current state is HIGH then the button

      // went from off to on:
      buttonPushCounter++;
      time = millis();
      if (buttonPushCounter

Timesheets). She remembered the ability to drag and drop the worklog onto the calendar, so she did.

• Test subject 12 (no instruction): Starts off by clicking the issues dropdown menu, then Projects, then Dashboards, then tools, then boards, then Tempo. He then stumbles upon the create button, clicks it, and fills in the information. In the next task, the participant clicks on issues and opens up the issue that he just created, finds a log work button and clicks on it. He has no idea where to go or what to do, so the facilitator has to give the participant several hints in order to get to the required page. He clicks on the issues in the right side-bar (which is drag-and-drop); nothing happens. He looks around and clicks on the calendar, which opens up the log work dialogue, then he fills in the information.

• Test subject 13 (text): Rhythm seemed to be very steady, no matter what she was doing, scrolling or clicking. The facilitator asked about music experience after the experiment; she replies that she has a lot of music experience. During the task, she first clicked on the issues dropdown menu, then saw the create button and clicked on it. In the second task, the mouse pointer stopped on the Tempo menu for a while before she clicked on it. She looked at the contents of the Tempo menu, then clicked out of it. She clicked on Projects, then issues, then boards. Looks around for a while, clicks on dashboards, projects, opens up the create window again and looks around, clicks on boards and opens up a page which doesn't load. She remembered it being under projects. She looks at Tempo for a while again, then clicks out of it. The facilitator says "it is Tempo which makes it possible to log work". She says "oh yes, ok" and clicks on Tempo, then on Timesheets, then on Worklog Calendar. She scrolled back up and read some parts of the manual again.
• Test subject 14 (video): During the video instruction, she says "I don't understand anything about this". She stopped holding the rhythm during video changes. She says that she is not used to listening to instructions in English, although she says her English understanding is adequate. In the first task, she clicks directly on the create button and fills in the information. Seems like the mouse sensitivity is high. In the next task, she clicks directly on Tempo, then Timesheets. When choosing the Worklog Calendar view, she clicked on the User Timesheets view (see Figure F.34). The facilitator indicates to the user that she clicked on the same page that she is currently on. She notices this and clicks on the Worklog Calendar. She clicks around in the worklog calendar, clicking on settings. Looks around some more. Clicks on the issue, which opens up the issue. The facilitator instructs her to go back. She hovers over the issue card and says "this is supposed to be here". She clicks on the worklog calendar and fills in the information, although she seems quite perplexed by the form options.

Figure F.34: A dark button indicates a pressed state; a white button indicates an unpressed state


F.3. Observations from the Experiment

• Test subject 15 (interactive): He got ready to write into the interactive video. Clicks on the Create button right away. In task 2, he clicks on Tempo, Timesheets, and finds the worklog calendar. Clicks on the worklog calendar and fills in everything correctly.

• Test subject 16 (no instruction): Looks around in the middle of the page. Clicks on some of the windows and says "does this do anything?". Looks around, clicks on a lot of things. Then the facilitator repeats that he should create an issue. He then clicks on the issues dropdown menu and looks around. Says "there must be an option to create a new issue here. Why isn't there any way to create a new issue?". He clicks on filters. When it seems apparent that the participant has no idea what to do or how to do it, the facilitator directs him towards the create issue concept. He still can't find the button, so he looks around in other menus. "CREATE PROJECTs... wait... projects... No, I need issues... HERE it is!". He clicks on the create button. In the second task, he has no idea where to find the log work functionality. Looks at the Tempo dropdown menu and clicks away. He is told that Tempo makes this possible. He clicks on teams, he clicks on accounts. He is told to click on Tempo -> Timesheets. He clicks on the "log work" button. He is told to use the Worklog Calendar, so he clicks on that. He drags the issue card and drops it on the calendar.

• Test subject 17 (interactive): Clicks on the issues dropdown menu. Sees the create button and clicks it, fills everything in. In the next task, he clicks on Tempo, then Timesheets, then Worklog Calendar, then drags and drops the issue card on the calendar, fills in the rest and saves the worklog.

• Test subject 18 (no instruction): She has no idea where to find this functionality. Clicks on pretty much every menu and button she could find. The facilitator then directs the participant towards the create button.
She clicks it and fills everything in. In task 2, she opens up the issue that she just created and clicks the log work button. The facilitator then directs her towards the Tempo dropdown menu. She needs to be guided for all remaining steps.

• Test subject 19 (text): Scrolled back up after reading the text and read some of the sections again. In task 1, she clicked on Tempo, hovered over it for a while, then clicked on the create button and filled in the information. In task 2, she clicked on Tempo, Timesheets, Worklog Calendar and did all remaining steps correctly.

• Test subject 20 (video): Had a difficult time maintaining the rhythm while interacting with the computer. He double clicked on the x-button while double tapping on the rhythm. He used the drag-and-drop. In task 1, he clicked directly on the create button. When logging work, he clicked on the issue that he created. The facilitator says "now log your work hours"; he clicks on Timesheets and Worklog Calendar. He uses the drag-and-drop functionality.

• Test subject 21 (interactive): Gets ready to type in the fields displayed on the screen. When a lot of information was being shown on the screen, she pressed multiple times in a row (short). Rhythm became almost entirely short nearing the end. In task 1, she found the create button and filled in the information. Said "wasn't Tempo where you create issues? or what?". Then she clicked on Tempo, Timesheets. Clicked on the worklog calendar and did the rest correctly.

Appendix F. Results

• Test subject 22 (no instruction): Clicks on create issue. In task 2 she has no idea where to find this information. Clicks on mostly every menu and button she can find. Then she's instructed that Tempo makes this functionality possible. She can't find the Tempo menu. The facilitator directs her towards the menu; she looks for the Worklog Calendar and finds it after some time looking. The rest of the steps she figures out on her own.

• Test subject 23 (video): Clicks directly on the create button and fills everything in. In task 2, he clicks on Tempo, Timesheets, clicks on the clock icon, and does the rest correctly.

• Test subject 24 (no instruction): Clicks on Project managers, looks around, clicks on the issues menu, projects menu, dashboard menu... Finally stumbles upon the create button and clicks it. In task 2, he clicks on the issue and logs work from there. He is then asked to go back and use the worklog calendar. He doesn't know how to, so the facilitator gives a hint that Tempo makes it possible. He clicks on Tempo, Timesheets and the log work button. He is asked to close it and use the worklog calendar. He clicks on the worklog calendar and does the rest correctly.

• Test subject 25 (no instruction): Clicks on the issues menu, looks around in the menu for a while. Doesn't see any create functionality. She finds the create button and fills the issue in. When asked to log work, she opens up a team window and edits that. Scrolls down, clicks on the teams again and cancels. She clicks on the dashboard menu, projects menu, issue menu, tempo menu, boards, etc. Doesn't find the functionality. Has to be instructed that Tempo makes it possible to create worklogs. She clicks on Tempo Tracker... then Tempo Accounts... She is instructed that the product that makes this worklog functionality possible is called Tempo Timesheets; she then finally finds Tempo Timesheets. The rest is filled in correctly.

• Test subject 26 (no instruction): Same story here: he clicks on just about everything before being instructed on where to find the functionality. He even added a new gadget to the dashboard.

• Test subject 27 (no instruction): Same story here as well.
He clicks on just about everything before being instructed on where to find the functionality. He even added a new gadget to the dashboard... • Test subject 27 (no instruction): Same story here also... • Test subject 28 (interactive): Clicked directly on the create button. In task 2 he did everything exactly according to the instructions, even using the drag-drop functionality. • Test subject 29 (text): Clicked on issues menu, looks around and sees the create button, clicks on it and fills in. In task 2, he clicks on tempo menu, looks at it for a while. clicks out of it and looks at the project menu, issues menu and more. He finally clicks on Tempo, timesheets, goes to worklog calendar and does the rest correctly. • Test subject 30 (text): Re-read the instructions. In task 1, she clicked directly on the create button. In task 2 she clicked on issues, opened up the issue that she just created. She is instructed to use the worklog calendar, so she clicks on Tempo, Timesheets, worklog calendar and does the rest correctly. • Test subject 31 (text): Re-read some of the instructions. He says that his attention influenced by the rhythm. In task 1, he looks around for a while and then finds the create 120

button and fills it in. In task 2, he looks around for a while and finally decides to search for "worklog" in the search field. The facilitator instructs him that Tempo makes this possible. He clicks on Tempo Timesheets and clicks on the worklog calendar.

• Test subject 32 (video): Stopped tapping between videos. Clicks on the issues menu, then the create button and fills in. In task 2 he clicks on create again, looks around, scrolls down, clicks on projects. Then says "I don't remember". The facilitator says that Tempo makes this possible. He clicks on Tempo, Timesheets, clicks on the log work button, and is asked to use the worklog calendar. He closes the log work dialog and opens the worklog calendar. Does the rest correctly.

• Test subject 33 (video): Says that she has no idea how to do this, but the first thing she does is click "create". In task 2, she looks around in the menus, looks in issues, projects, etc. Ends up clicking on the issue that she created. The facilitator hints that Tempo makes it possible to log work. She clicks on Tempo, Timesheets, Worklog Calendar, and does the rest correctly.

• Test subject 34 (text): She wobbled her head back and forth while reading. She seems to use her head to maintain the rhythm. Uses the scroll wheel in between rhythm components. Her rhythm was very steady, but she occasionally got confused while scrolling. She wanted to read the document again. She clicks on create and fills it in. In task 2, she clicks on project managers, account managers and looks around. The facilitator hints that Tempo makes this possible. She clicks on Tempo, Timesheets and looks around for the Worklog Calendar function. She finds the issue and drags it onto the calendar.

• Test subject 35 (text): This participant was hearing impaired; therefore she received textual instructions. Clicks directly on the create button and fills everything in. In task 2 she scrolls down and clicks on the issue that she just created.
The facilitator hints that Tempo makes this possible. She clicks on other links in Tempo, then finally clicks on Timesheets. Looks for the Worklog Calendar and clicks on it. Clicks on the issue, goes back. Finally clicks on today and fills in the info.

• Test subject 36 (interactive): Clicks directly on create and fills it in. In task 2, he clicks directly on Tempo, Timesheets, and clicks on User Timesheets. He says he didn't mean to do that, so he clicks on the Worklog Calendar and does the rest correctly.

• Test subject 37 (interactive): Said that holding the rhythm was painful due to arthritis. The facilitator told her to stop the rhythm if it was causing discomfort. After a couple of minutes, she stopped. In the task, she clicked on create and filled it in. She had a difficult time with task 2 and had to be instructed a few times.

• Test subject 38 (video): Used the keyboard shortcut "C" and filled in the issue. In task 2 he clicked on Tempo, Timesheets, then used the keyboard shortcut "W". The facilitator told him that this is a very good way to access the log work window, but the instructions are to use the worklog calendar. He closed the window and clicked on the worklog calendar. He did the rest without problems.


F.3.1 Exit Interview

The questions are translated from Icelandic to English. They are as follows:

• 0) To your knowledge, do you suffer from any learning disabilities such as dyslexia?
• 1) In your opinion, how effectively did you manage to maintain the rhythm?
• 2) Did the rhythm have any effect on your ability to learn what was presented in the instructional material?
• 3) Do you have any comments on the instructional material?
• 4) How effective was the learning material at teaching you what you needed to know in order to create a JIRA issue and log work on that issue?
• 5) Was there anything that you didn't understand, or that you were unsure of?
• 6) Is there anything else that you want to comment on?

F.3.2 Answers from the Exit Interview

• TS1 (interactive): 1) It was difficult at first, but as the experiment progressed I almost stopped noticing that I was doing it. 2) No, I don't think so, or maybe a little bit, I'm not sure. 3) The voice fitted very well. It was easy to listen to. 4) It was very good, but I seem to have forgotten some of it, because I didn't remember to click the "create" button. I went straight for the "issue" drop-down menu. 5) No, not really. 6) I don't think so. No, not really.

• TS2 (interactive): 0) I suspect it, but I'm not sure. 1) It was kind of difficult, required a lot of focus. 2) Yes, definitely. 3) Awesome, very helpful. 4) Yes, likely. 5) No. 6) No. 7) I think this is a pretty cool system, I would like to have it.

• TS3 (interactive): 0) No. 1) Pretty well. 2) A little bit. 3) Very good. 4) Yes. 5) No. 6) No. 7) Nope.

• TS4 (text): 1) Alright, but not always; I noticed that sometimes I tapped three times in a row. I noticed that there was a short and a long rhythm. I also noticed that I was too quick sometimes. 2) Yes. The instructional material was easy. It's pretty straightforward: just create an issue and log work. But the rhythm made it more difficult somehow. It was harder to learn. I think I would be able to memorize it better without the rhythm. 3) It was easy and transparent. 4) Yes, I think so. 5) No, I don't think so. 6) Yeah, the scroll wheel was in the opposite direction. I also noticed that I focused mostly on the pictures. I read something, but I didn't understand it until I saw the picture.

• TS5 (text): 0) No. 1) Not very well. 2) No. 3) Well. 4) Apparently not. 5) Don't quite remember. 6) Would like to read the instructions again.

• TS6 (video): 0) No. 1) Pretty well. 2) Just at first. 3) Comfortable. 4) Yes, I think so. 5) I don't think so. 6) No.

• TS7 (interactive): 0) No. 1) No problem, though I made some mistakes. 2) No. 3) Very well constructed. 4) Yes. 5) No. 6) The teacher's voice was calm and comfortable to listen to.

• TS8 (no instruction): I think it was difficult to start with, but when I realized the first step in the task, then everything was fine.

• TS9 (video): 0) I hope not. 1) Pretty much normally, I think. 2) Yes, it was rather distracting. 3) All good. 4) Yes. 5) No, the software was pretty self-explanatory. 6) No.

• TS10 (video): 0) Yes. 1) Pretty well, but it was distracting while switching between videos. 2) No. 3) Pretty good, except that some of the instruction was recorded video while other parts were computer-generated text. Some of the recorded video wasn't aligned perfectly; that was distracting, perhaps because I studied clothing design. 4) Yes. 5) No, not really. 6) No, not really.

• TS11 (text): 0) No. 1) Pretty well, except when I had to move the mouse. Then I found myself struggling, just a little bit. I tried synchronizing the reading with the rhythm (tap-tap, read-read, tap-tap). 2) It was rather distracting at first, but when I focused more on the reading, the rhythm followed. 3) Pretty basic; the English was easy, everything was well put forth. Also, I liked the translations of the difficult English words. 4) Yes, I think I would be able to use it now. But it takes a few times to be able to use it without thinking (practice). 5) No, not really. 6) No.

• TS12 (no instruction): No comments.

• TS13 (text): 0) No. 1) Just fine. 2) No. 3) Very nice and clear instruction. 4) Yes. 5) Got a little bit lost, but it was okay. 6) No.

• TS14 (video): 0) No. 1) Just fine. 2) Just a little bit. 3) Fine. 4) Yes, I think so. 5) No. 6) No.

• TS15 (interactive): 0) No. 1) Alright. 2) No. 3) Interesting. 4) Would like to watch it again. 5) No.
6) The sound from the pedal was too high.

• TS16 (no instruction): I don't know how it went; it was pretty difficult to use without knowing anything about it.

• TS17 (interactive instruction): 0) No. 1) Pretty well. 2) The rhythm got affected when the video told me to do something. But no, it didn't affect me. 3) Pretty straightforward. 4) Yes. 5) No. 6) I would think that the create button should be under issues, because the default action of JIRA seems to be about issues, but there is no create button under issues. Perhaps the instructional video should be clearer about this. Say something like: the default action of JIRA is to create issues, that's why we put the create button straight in front of you, so you can always access it with one click.

• TS18 (no instruction): No comments.

• TS19 (text): 0) No. 1) Pretty well, except when I started looking at pictures, the rhythm got affected. It was no problem reading. Also, when scrolling, the rhythm got troubled. I scrolled in rhythm with the rhythm. 2) Yes, it made it more difficult. 3) Very clear. 4) Yes. 5) No. 6) No.

• TS20 (video): 0) No. 1) Well to begin with, but when I started focusing on the information on the screen, then the rhythm got problematic. 2) If I focused on the rhythm, then I had problems with understanding. But when I focused a lot on the instruction, the rhythm got disrupted. However, I think that if I could hold the rhythm with the mouse, I would be better. 3) Straightforward. 4) Yes. 5) No. 6) I would like to use this system in a real-world scenario.

• TS21 (interactive video): 0) No. 1) Veeeery badly! When I focused on the video, the rhythm was affected many times. When I was thinking about something, the rhythm got confused. 2) I don't know; I would need to be in another experiment without the rhythm to know. But I think so. 3) Pretty good. 4) I think so. 5) No. 6) I went to Tempo up in the top menu bar, but I didn't remember why I needed to go there.

• TS22 (no instruction): Why would you need to create these worklogs? What do you do with this system?

• TS23 (Instruction Type?): 0) No. 1) Pretty well. I lost it a few times, but nothing problematic. I have played the drums often, so I think I was pretty good at it. 2) The rhythm didn't disturb me; it was just an additional thing to do. So it was conscious, but not problematic. 3) Very good. 4) Yes. 5) No. 6) Nope.

• TS24 (no instruction): It went pretty well. I understand it now, but it took some time to figure it out.

• TS25 (no instruction): It went pretty well. Some problems, but I figured it out. When I usually learn how to use a program, I open it and try to figure it out. (The facilitator decided to add this question to the questionnaire.)

• TS26 (no instruction): It went pretty well.
Took some time to understand things. (When you learn how to use a new software, how do you prefer going about doing that?) Two things: the first is a mentor, the second is just trial and error.

• TS27 (no instruction): It went pretty well. When you're doing something for the first time, it just takes time. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error, I think. If the program is too difficult, then I prefer to have someone show me how to do it (mentor).

• TS28 (interactive): 0) No. 1) Pretty well. 2) Yes, I think. I needed to think about the rhythm a little bit. So it did have an effect, but not much. 3) Pretty good. 4) Yes. 5) No. 6) No. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error, but if I can't figure it out then I go to YouTube, or Google it.

• TS29 (text): 0) No. 1) Went okay, but when I was scrolling it was problematic, though very briefly. 2) Very simple material, but I think the rhythm definitely was distracting. I wouldn't want to do this while learning something very difficult. 3) Very good. 4) Yes. 5) No. 6) Nope. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error, just try to see if I can do it. If I can't figure it out, then I look at the manual. I am also not very used to using Mac computers.

• TS30 (text): 0) No. 1) If I had to scroll or switch pages, then it was problematic. 2) Possibly. I was thinking about it sometimes, even though I shouldn't have. 3) Pretty good. Very good to have pictures. 4) I would prefer to try it out first, then look at the manual. 5) Nope. 6) Nope. (When you learn how to use a new software, how do you prefer going about doing that?) Probably just observing others, watching a video or something.

• TS31 (text): 0) No. 1) Not well... I was probably like 35 percent correct. 2) In the beginning, yes. I couldn't decide not to focus on the rhythm. It took like half of my attention, but if I read it again it would be much better. 3) I didn't focus on the pictures enough. But the text was very good. 4) Yes. 5) No, not really. 6) Nope. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error, but if I needed to learn something more about it, then I would watch a YouTube video about it. If there wasn't any video, then I would probably look through the text. The text is just not as good as video. It doesn't show you nearly as much as video. The video can show you the entire screen, with someone explaining it to you. Text obviously is okay, but it requires much, much more concentration.

• TS32 (video): 0) No. 1) Mostly well, some troubles but not much. 2) Very much, yes, it was very distracting. 3) Very easy, if only I could have focused on it.
The rhythm was disrupting. 4) Yes, the basics, yes. 5) No. 6) Nope. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error, then read instructions. I prefer reading instead of videos.

• TS33 (video): 0) Yes. 1) Well, I think. 2) It was distracting, because I needed to focus on it. 3) It was good. 4) Yes. 5) Something about Tempo and JIRA was confusing; I didn't know which was which. 6) Nope. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error, problem solving. If I can't figure it out, then I would Google it. Depending on what I was looking for, I would search for a video or text.

• TS34 (text): 0) No. 1) Pretty well. 2) Yes, I think so. I had to focus on it somewhat. 3) Pretty good, nice to have pictures. 4) Yes. 5) I didn't remember the Tempo Timesheets function. 6) Nope. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error, see what everything does. If I didn't understand, then I would search for a video. I would prefer video rather than text.

• TS35 (text): 0) No, but hearing impaired. 1) Pretty well. 2) Yes, because I needed to focus on the rhythm. 3) Everything was very well explained and well put forth. 4) No, because I think the rhythm had a negative effect. 5) Just where to find how to log work. 6) No. (When you learn how to use a new software, how do you prefer going about doing that?) Having

someone teach me, else I would try to figure it out by myself; otherwise I would search for a video.

• TS36 (interactive): 0) No. 1) Pretty well, but if I lost the rhythm, I noticed, and then I focused on it. 2) A little bit, not much though. 3) Very good, very informative. 4) Yes. 5) No, except that I thought I was in the worklog calendar view, when I was in the user timesheets view. Usually the white indicates that the button is pressed. 6) No. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error; go through it, test it 5 or 6 times. Get one example, do it, then get another example that is more difficult, do it, etc. In this interactive video, I would like to do something more difficult next time.

• TS37 (interactive): 0) No. 1) Very difficult, probably due to arthritis. But I also tapped too fast. 2) Yes, I think so. 3) Very good. Very smart to be able to click. 4) Yes, I think so. 5) These systems are difficult to understand, but it is pretty straightforward though. 6) No. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error. I would prefer to test it myself, learning by experience.

• TS38 (video): 0) No. 1) Sometimes well, sometimes badly, overall well. 2) No, I don't think so. The rhythm was more or less automatic. I have musical experience: piano and guitar for about 10 or 11 years. 3) Very informative. 4) Yes. 5) No. 6) No. (When you learn how to use a new software, how do you prefer going about doing that?) Trial and error, just test it out for myself. If I couldn't figure it out, I would Google it. Search for text instruction.

