Hartson, Andre, Williges, & van Rens
The User Action Framework: A Theory-Based Foundation for Inspection and Classification of Usability Problems

H. Rex Hartson*, Terence S. Andre**, Robert C. Williges**, and Linda van Rens*
Department of Computer Science*
Department of Industrial and Systems Engineering**
Virginia Tech, Blacksburg, VA 24061
[email protected],
[email protected],
[email protected],
[email protected] Motivation
Because of growing awareness of the importance of usability, organizations are expending ever-increasing resources for "doing usability": enviable usability laboratories are built, developers are trained in usability methods, and considerable resources are devoted to conducting usability evaluations. However, most organizations are not maximizing returns on their usability investment. The usability evaluation process yields valuable raw data, but ad hoc methods limit the effectiveness of usability development activities following data gathering. There is a clear need for higher-quality description and classification in usability problem reports. This work provides a unifying framework as a foundation for structured methods and integrated tools that get the most value from usability data through analysis, classification, storage and retrieval, reporting, and redesign. This unifying work is proposed in the form of the User Action Framework (UAF). The UAF is a theory-based structure for organizing usability concepts, issues, design features, usability problems, and design guidelines, and it has evolved as a common core of structure and content for a multiplicity of usability methods and tools for usability practitioners. The UAF is manifested within a usability support tool by mapping its content and structure into an expression specific to the purpose of that tool. We describe the UAF concept and one such tool, the Usability Problem Inspector (UPI).

Integrating Framework
Previous work on the Usability Problem Taxonomy by Keenan, Hartson, Kafura, and Schulman (in press) and the Usability Problem Classifier by van Rens (1997) provided useful methods for classifying usability problems by type. However, these classification schemes needed an improved structure before reliable tools could be based on them. The solution came in the form of an adaptation and extension of Norman's theory of action (Norman, 1986), resulting in what we call the Interaction Cycle, which provides a high-level structure for organizing the contents of the Usability Problem Classifier. The UAF is the Interaction Cycle plus the underlying structure and content of usability concepts and issues formerly in the Usability Problem Classifier, now reorganized under the relevant parts of the Interaction Cycle. The circular form of the Interaction Cycle combined with the hierarchical structure of the usability concepts gives the UAF an overall conical conformation, as shown in Figure 1. Although there are other usability inspection methods based, for example, on Norman's model, this underlying structure of usability concepts and issues distinguishes the UAF and its derivative tools from other such approaches.
[Figure 1 image: the Interaction Cycle (labeled Execution, Evaluation, and Physical Activity) combined with the hierarchical structure of usability issues and concepts from the UPC to form the conical UAF.]
Figure 1. Combining an adaptation of Norman’s seven stages of action (on the left) with the hierarchical structure of the Usability Problem Classifier to form the User Action Framework.
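As a rough sketch of this conical organization (ours, not the paper's), the UAF can be thought of as a tree whose top level is the parts of the Interaction Cycle, with the Usability Problem Classifier's hierarchy of usability concepts hanging beneath each part. All node names below the top level here are hypothetical illustrations, not the actual UAF content:

```python
# Hypothetical sketch of the UAF's conical structure: the top level is the
# Interaction Cycle; beneath each part hangs a hierarchy of usability
# concepts from the Usability Problem Classifier. Node names below the top
# level are illustrative only.
uaf = {
    "Planning": {"goal decomposition": {}, "task structure": {}},
    "Translation": {
        "cognitive affordance": {"consistency": {}, "wording of labels": {}},
    },
    "Physical Action": {"manipulating objects": {}, "typing": {}},
    "System Action": {},  # software bugs only, no usability issues
    "Perception of Outcome": {"noticeability of feedback": {}},
    "Assessment": {"understandability of feedback": {}},
    "Independent of Interaction Cycle": {},
}

def paths(tree, prefix=()):
    """Enumerate every classification path from a cycle part downward."""
    for name, subtree in tree.items():
        yield prefix + (name,)
        yield from paths(subtree, prefix + (name,))

# Each path is a progressively more specific usability problem type.
for p in sorted(paths(uaf)):
    print(" > ".join(p))
```

Deeper paths in the tree correspond to more complete problem descriptions, which is what gives the framework its classification power.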
This is not the first time Norman's model has been used as a basis for usability inspection, classification, and analysis. Several studies (e.g., Cuomo, 1994; Lim, Benbasat, & Todd, 1996; Rizzo, Marchigiani, & Andreadis, 1997; Sutcliffe, Ryan, Springett, & Doubleday, 1996) have used Norman's model and found it helpful for communicating about usability problems, identifying frequently occurring problems within the model, and providing guidance for diagnosis of usability problems.

The Interaction Cycle
Our research makes a significant contribution by operationalizing Norman's seven-stage theory of action into an evaluation tool of practical utility. The Interaction Cycle, shown in full detail in Figure 2 and adapted from Norman (1986), is the core of the UAF and provides high-level organization and entry points to the underlying structure for classifying details. Finding the correct entry point for a usability issue, concept, guideline, or problem is based on determining the part of the Interaction Cycle where the user is affected.
Figure 2. Interaction Cycle with areas representing sites for locating types of usability problems.

The Interaction Cycle gives a picture of how interaction happens, expressed in terms of effects on users doing tasks and with a focus on user actions (cognitive and physical). As in Norman's model, the top three-fourths of the cycle is for cognitive actions, with the remaining sector for physical actions, shown as three subsectors (two for physical user actions and one, the shaded one, for physical system actions). In addition, a new Independent category is introduced to accommodate usability issues that are independent of user actions, as indicated by its location outside the goal/task/intention path. Table 1 illustrates how usability issues are related to specific parts of the Interaction Cycle.
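The mapping in Table 1 from issue type to Interaction Cycle entry point can be sketched as a simple lookup. This is our own illustration; the dictionary keys paraphrase the table's left-hand column, and nothing here is taken from the UPI's actual implementation:

```python
# Hypothetical sketch of Table 1: which kind of usability issue leads to
# which entry point in the Interaction Cycle. Keys paraphrase the table's
# left-hand column; this is an illustration, not the UPI's actual code.
ENTRY_POINTS = {
    "establishing goals, tasks, or intentions": "Planning",
    "translating intentions into action descriptions": "Translation",
    "making physical input actions": "Physical Action",
    "software bugs only (no usability issues)": "System Action",
    "physically perceiving the outcome": "Perception of Outcome",
    "understanding and evaluating outcome": "Assessment",
    "independent of a specific task or action": "Independent of Interaction Cycle",
}

def entry_point(issue: str) -> str:
    """Return the Interaction Cycle part where classification begins."""
    return ENTRY_POINTS[issue]

print(entry_point("understanding and evaluating outcome"))  # Assessment
```

The entry point only begins classification; the evaluator then descends the hierarchy of usability concepts under that part of the cycle to reach a specific problem type.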
Table 1. Parts of the Interaction Cycle.

If the issue is about:                                Then the entry point is this part of the Interaction Cycle:
Establishing goals, tasks, or intentions              Planning
Translating intentions into action descriptions       Translation
Making physical input actions                         Physical Action
Software bugs only (no usability issues)              System Action
Physically perceiving (e.g., seeing, hearing,         Perception of Outcome
  sensing, noticing) the outcome
Understanding and evaluating outcome                  Assessment
Something independent of a specific task or action    Independent of Interaction Cycle (a separate category)

Where other methods simply identify problems, the UAF classifies and contextualizes design problems in terms of users' interaction-based behavior. This allows us to understand the significance of design problems in the totality of the cognitive and physical aspects of task performance. For example, using a heuristic or guideline, one might decide a given usability problem is a "consistency problem." In contrast, because the deeper classification structure of the UAF aids description, we can formulate a much more complete identification of the problem as "a problem where lack of consistency in a data format led to a failure in cognitive affordance for avoiding errors in a data field during a form-filling interaction." By placing the effect of the usability problem more precisely within the user's Interaction Cycle, a more complete usability problem report supports a more specific focus for redesign solutions.

Mapping to an Inspection Tool
The UAF has become a common conceptual framework for our usability work. We apply the framework by mapping it into specific usability support tools such as the Usability Problem Classifier and the UPI. The meaning and structure are retained, but the concepts are expressed in a way that is tailored to the purpose of each tool. For the UPI, the expression is in terms of the types of problems to look for in a usability inspection. The UPI offers several benefits to practitioners. First, the UPI is built on the supporting infrastructure of detailed usability concepts in the UAF. Therefore, a capability to document problems found (i.e., complete description in terms of problem type and subtype, including the effect on the user within the interaction process) is built into the inspection tool, allowing the evaluator to focus on finding problems. Second, because of entry through the Interaction Cycle, evaluators and developers have a greater chance of understanding the descriptions of the multiple task-thread problems usually encountered in user interaction design. The UPI does this by proactively directing the inspection process to follow up on potentially related problems. For example, on finding an Assessment problem with the content of a feedback message, the UPI reminds the evaluator of a possible Planning problem if the feedback message also does not easily direct the user to the next, correct intention. Finally, the UPI allows the evaluator to consider significant physical usability issues (e.g., Fitts' law, object design, disability accommodation) in addition to the traditional cognitive problems.
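The proactive follow-up behavior just described can be sketched as a small rule table. This is a hypothetical illustration of the idea; the single rule shown is the paper's feedback-message example, and a real UPI would hold many more rules than are represented here:

```python
# Hypothetical sketch of the UPI's proactive follow-up: finding a problem
# in one part of the Interaction Cycle prompts the evaluator to inspect
# related parts. The one rule below is the paper's example; rule names and
# wording are ours, not the UPI's.
FOLLOW_UP_RULES = {
    # (part where a problem was found, aspect of the problem)
    #     -> list of (related part, question to put to the evaluator)
    ("Assessment", "content of feedback message"): [
        ("Planning", "does the feedback message also direct the user "
                     "to the next, correct intention?"),
    ],
}

def follow_up_prompts(part: str, aspect: str) -> list[str]:
    """Remind the evaluator of possibly related problems elsewhere in the cycle."""
    return [f"Also check {related}: {question}"
            for related, question in FOLLOW_UP_RULES.get((part, aspect), [])]

# Logging an Assessment problem about feedback content triggers a Planning check.
for prompt in follow_up_prompts("Assessment", "content of feedback message"):
    print(prompt)
```

The point of the sketch is that the cross-links between cycle parts are data the tool carries with it, so the inspection process itself, not the evaluator's memory, surfaces the related problem types.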
An Initial Usability Evaluation Study Using the UPI Tool
To test the concepts of the UPI, and the Interaction Cycle and UAF on which it is based, we responded to an opportunity to evaluate a commercial message management service developed by the CrossMedia Networks Corporation. Because the goal of the evaluation was rapid turnaround of results, we focused on comparing the UPI to the traditional heuristic method developed by Nielsen and Molich (1990). Results showed that evaluators can be trained on the UPI in a short amount of time, just as easily as evaluators in the heuristic method. In addition, because the UPI is a task-based approach, it found more severe problems than the heuristic method did. Finally, relatively consistent classification behavior on the part of UPI evaluators leads us to think that high reliability of classification, for at least some problem types, is possible when evaluator judgments are combined. This is important because it could mean acceptable results from fewer evaluators than required by, say, heuristic evaluation.

Conclusion
The UAF provides a model-based framework for examining usability issues in the context of user action. From the UAF, we are able to map to various usability tools, including the UPI for prediction of usability problems. We expect that the UAF and its associated mappings will provide usability professionals with comprehensive tools to conduct more efficient and effective usability evaluations through an easily understood framework, a complete way to understand problems, and built-in links to possible design solutions.

References
Cuomo, D. L. (1994). A method for assessing the usability of graphical, direct-manipulation style interfaces. International Journal of Human-Computer Interaction, 6(3), 275-297.

Keenan, S. L., Hartson, H. R., Kafura, D. G., & Schulman, R. S. (in press). The Usability Problem Taxonomy: A framework for classification and analysis. Empirical Software Engineering.

Lim, K. H., Benbasat, I., & Todd, P. (1996). An experimental investigation of the interactive effects of interface style, instructions, and task familiarity on user performance. ACM Transactions on Computer-Human Interaction, 3(1), 1-37.

Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. In CHI '90 Conference Proceedings (pp. 249-256). New York: ACM Press.

Norman, D. A. (1986). Cognitive engineering. In D. A. Norman and S. W. Draper (Eds.), User centered system design: New perspectives on human-computer interaction (pp. 31-61). Hillsdale, NJ: Lawrence Erlbaum Associates.

Rizzo, A., Marchigiani, E., & Andreadis, A. (1997). The AVANTI project: Prototyping and evaluation with a cognitive walkthrough based on the Norman's model of action. In Designing Interactive Systems (DIS '97) Conference Proceedings (pp. 305-309). New York: ACM Press.

Sutcliffe, A., Ryan, M., Springett, M., & Doubleday, A. (1996). Model mismatch analysis: Towards a deeper evaluation of users' usability problems (School of Informatics Report). London: City University.

van Rens, L. S. (1997). Usability problem classifier. Unpublished master's thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA.