2014 14th International Conference on Quality Software

In-Process Usability Problem Classification, Analysis and Improvement

Ruili Geng∗, Mingrui Chen∗ and Jeff Tian∗†

∗ Department of Computer Science and Engineering, Southern Methodist University, Dallas, Texas
Email: {rgeng,mingruic,tian}@lyle.smu.edu
† School of Computer Science, Northwestern Polytechnical University, Xi'an, China

Abstract—In recent years, more organizations have begun to address usability problems to improve their competitive position and foster customer loyalty. Although there are classification schemes that address usability problems, there is no cause-effect framework to enable in-process usability feedback and improvement. Orthogonal defect classification (ODC) has been widely used in the software development process, but with insufficient attention to usability problems. In this paper, we present a classification and analysis framework that integrates usability practices with the ODC framework. With our framework, cause-effect relationships can be identified. By addressing usability problems during the whole development cycle, the human-computer interface (HCI) and the software process can be improved together. A case study was conducted to demonstrate the viability and effectiveness of our framework.


Keywords—Usability, Defect classification and analysis, Software process, Human Computer Interaction, Usability inspection and testing

I. INTRODUCTION

As one of the critical quality attributes of software systems, usability has become a core competency that is necessary for customer loyalty and business survival [10]. Usability is defined as the effectiveness, efficiency and satisfaction with which specific users can complete specific tasks in a particular environment. Many research and empirical studies show that most usability techniques can be grouped and mapped to relevant software engineering (SE) activities, and that by improving the development process, the usability of software products can be substantially improved. However, there is no effective cause-effect framework that measures usability, addresses the related problems from different perspectives across the whole development process, and provides comprehensive feedback to designers and developers.

It is possible to integrate SE practices, especially those that deal with quality issues, into the usability engineering life cycle. Orthogonal Defect Classification (ODC), developed initially at IBM, is the most influential among general frameworks for software defect classification and analysis [3]. It can lead to fundamental improvement of the whole software development process via quantitative measurement and qualitative root cause analysis.

However, current SE practices do not address usability issues throughout the development process. Developers sometimes wrongly consider usability to be related only to the design of the visual elements of the graphical user interface (GUI), and therefore assume it can be dealt with late in development [4]. Also, ODC treats usability only as a defect impact attribute value, without a systematic consideration of usability problems from different perspectives and available data. Therefore, before applying these empirical SE practices to usability engineering process improvement, it is necessary to adapt and tailor them.

Based on the analysis above, we developed our usability-ODC problem classification and analysis framework. We adapted the original ODC framework to integrate most traditional usability practices, to classify and analyze usability problems from different perspectives, and to address usability problems during the whole development process.

The remainder of this paper is organized as follows: the next section presents related work. Section III describes our usability problem classification framework and the related usability problem analysis. A case study is discussed in Section IV. Section V presents our conclusions and perspectives.

II. RELATED WORK

In this section, we first explore some usability problem classification schemes commonly used by usability engineers. Then we discuss defect classification taxonomies used effectively in the field of software engineering, especially ODC, and the related defect analysis.

A. Problem Classification in Usability Engineering

In the field of usability engineering, usability problems are often categorized according to various classification schemes. Many researchers have explored usability problem classification models from various perspectives.

The usability problem taxonomy (UPT) [7] is considered a complete and reliable framework for usability problem type classification. It can be taken as a taxonomic model in which usability problems are classified from two perspectives: artifact and task. UPT is made up of four hierarchical levels. The artifact and task components are located at the top level. The artifact component comprises three categories: visualness, language, and manipulation. The task component has two categories: task mapping and task facilitation. Each primary category is again



composed of multiple subcategories. The classification of usability problems (CUP) scheme was developed to classify usability problems on the basis of a collective review of previous defect classification schemes [14]. CUP specifies 10 attributes to characterize a usability problem, such as Identifier (ID), Description, Defect Removal Activity and others. Most of these attributes have a set of values of their own. A classification scheme based on a cyclic interaction model was developed to examine low-level interaction problems [11]. The cyclic interaction model strives to model recognition-based interaction by considering three paths: the action-effect path, the effect-goal path, and the goal-action path. These three paths result in three kinds of usability problems.

[Fig. 1: Cause-effect usability problem classification. A usability problem is classified across multiple dimensions: the causal attributes (artifact, task) provide feedback on the development process; the trigger provides feedback on the verification processes (analysis, prototyping, ...); the effect attributes (learning, performing, perception) capture the resultant effect on the usability components; severity uses a 1-4 scale, where 1 is cosmetic and 4 is significant.]

Although these usability problem classification frameworks can capture some important usability problems and provide some feedback for usability improvement, they are not cause-effect classification models and cannot be used to analyze the effect on users and task performance. In addition, they do not provide a systematic framework linking usability problems to the development context and activities [6].

A new framework for categorizing usability problems from the development perspective was proposed in [6]. This framework comprises three viewpoints: context of use, design knowledge, and design activities. It aims to connect usability problems to the activities of designing user interfaces and tasks. However, it is only a conceptual model serving as a thinking tool for dealing with usability problems.

B. Defect Classification Schemes in Software Engineering

Among the existing defect classification schemes in SE, Orthogonal Defect Classification (ODC) has been considered a complete, effective and flexible framework for defect classification and analysis throughout the product life cycle [3], [8]. Rather than a mere taxonomy of defects for descriptive purposes, it provides an in-process measurement paradigm that extracts key properties from defects and enables measurement of cause-effect relationships between software defects and their effects on development activities. The ODC framework has multiple dimensions or attributes to characterize defects: defect type, defect trigger, defect impact, severity, source, target, phase found, etc. Each attribute has its own values. Typically, once defects are classified into different categories, they can be analyzed using one-way, two-way, and multi-way analysis methods [3], [13].

Although ODC has been successfully applied across the software life cycle, it treats usability only as a defect impact attribute value, without sufficient attention to usability problems.

III. A NEW FRAMEWORK

The traditional usability practices and empirical software engineering techniques surveyed in the previous section provide the theoretical foundation for our usability-ODC framework. In this section, we describe how we identify, tailor and adapt them to classify and analyze usability problems for in-process usability improvement.

A. Overall Framework and Usability-ODC Attributes

As discussed above, the traditional ODC scheme needs to be adapted to integrate usability practices so that usability problems can be classified, measured and linked to the development context and activities. In our research, we call the adapted ODC attributes usability-ODC attributes:



• Considering that the usability problem type should be examined from two dimensions, the artifact and the task [2], [12], we divided the problem type attribute of the original ODC into two usability-ODC attributes: artifact related problem and task related problem.



• The impact attribute of the original ODC is divided into three attributes in our framework: problem in learning, problem in performing given tasks by users, and user perception.



• The severity and trigger attributes are also two important attributes in our usability-ODC framework, as they are in the original ODC.
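To make the attribute structure concrete, here is a minimal Python sketch of a problem record carrying the seven usability-ODC attributes. This is our illustration, not code from the paper; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UsabilityProblem:
    """One usability problem classified across the seven usability-ODC attributes."""
    problem_id: str
    description: str
    artifact: Optional[str] = None    # e.g. "representation", "language", "manipulation"
    task: Optional[str] = None        # e.g. "task-mapping", "task-facilitation"
    trigger: Optional[str] = None     # e.g. "heuristic evaluation", "lab usability testing"
    learning: Optional[str] = None    # e.g. "learnability", "memorability", "retention"
    performing: Optional[str] = None  # e.g. "effectiveness", "efficiency"
    perception: Optional[str] = None  # questionnaire item the problem maps to
    severity: int = 1                 # 1 = cosmetic ... 4 = usability catastrophe
```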

These seven attributes more directly reflect the correlation between the defects and the development. They are also commonly considered cause-effect attributes for diagnosing problems. Figure 1 shows these seven attributes and their purposes in the general cause-effect framework, with details given below.

B. Artifact / Task Related Problem Attributes

As discussed in the related work, the usability problem taxonomy (UPT) is the only taxonomic model in which usability problems are classified from both perspectives: the artifact component and the task component [7]. Its summative evaluation shows that UPT can be used on various products to help improve the user interface process. Each usability problem is classified in the artifact component as well as the task component. The two components are not intended to be mutually exclusive but are used together to describe two different aspects of an individual problem.


We extracted the two components and used them in our usability-ODC framework as two individual attributes: artifact and task. We call them the artifact related problem and task related problem attributes. However, user interfaces are no longer confined to GUIs; they also include gestural interfaces and voice interaction [12]. To accommodate these diverse usability requirements, we adapted and expanded the attribute values under the two attributes. For example, for the artifact attribute, the original category “visualness” is replaced by “representation” as the primary category, under which two subcategories “visualness” and



“audibleness” are refined. For the task attribute, we use “user error tolerance” as a subcategory to replace the original subcategory “user action reversal”, to cover more kinds of user error tolerance facilitation.

Figure 2 shows the hierarchical structure of the artifact related problem attribute. The artifact related problem attribute captures the problems users experience when they directly view (feel), read and manipulate user interface objects. It is divided hierarchically into three primary categories: representation, language, and manipulation. Representation usability problems concern the user's ability to see, hear or feel objects in the user interface. Language problems focus on the user's ability to understand the text objects used in the interface. Manipulation concerns the user's ability to understand visual cues and directly manipulate user interface objects.

[Fig. 2: Artifact related problems. Hierarchy from starting points through three primary categories to subcategories: representation (visualness: screen layout, object movement, object appearance, presentation of information/results, no-message feedback; audibleness: voice sounds, prompts, text/feedback in speech); language (naming/labeling, wording in message/text/feedback); manipulation (physical aspects: keyboard press, mouse click, finger touch, voice control; cognitive aspects: visual cues, audible cues, direct manipulation).]

Figure 3 shows the hierarchical structure of the task related problem attribute. Task related problems occur because the mapping of the user task onto the system was inadequate or because the system was not programmed to help the user complete the task. The attribute includes two primary categories: task-mapping and task-facilitation. Task-mapping problems focus on the structure (mapping) of the user task on the system. Task-facilitation refers to the system's ability to help the user follow the task structure and complete the task.

[Fig. 3: Task related problems. Hierarchy from starting points through two primary categories to subcategories: task-mapping (interaction, navigation, functionality); task-facilitation (alternatives, task/function automation, user error tolerance, keeping the user task on track).]
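The two hierarchies in Figures 2 and 3 can be encoded directly as data. The nested dictionaries below are a sketch of such an encoding (our names, not from the paper), which also makes it easy to validate a classification against the taxonomy:

```python
# Artifact taxonomy (Figure 2): primary category -> subcategory -> leaf values.
ARTIFACT_TAXONOMY = {
    "representation": {
        "visualness": ["screen layout", "object movement", "object appearance",
                       "presentation of information/results", "no-message feedback"],
        "audibleness": ["voice sounds", "prompts", "text/feedback in speech"],
    },
    "language": {
        "naming/labeling": [],
        "wording in message/text/feedback": [],
    },
    "manipulation": {
        "physical aspects": ["keyboard press", "mouse click", "finger touch", "voice control"],
        "cognitive aspects": ["visual cues", "audible cues", "direct manipulation"],
    },
}

# Task taxonomy (Figure 3): primary category -> subcategories.
TASK_TAXONOMY = {
    "task-mapping": ["interaction", "navigation", "functionality"],
    "task-facilitation": ["alternatives", "task/function automation",
                          "user error tolerance", "keeping the user task on track"],
}
```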

C. Trigger Attribute

As a framework for usability improvement, our framework should offer usability techniques that can address individual usability problems.

A framework was proposed in [4] to integrate 35 selected HCI techniques, characterized from an SE viewpoint, into the software development process to help developers understand the possible relationship of usability with their development practices. Among them, we identified 12 techniques that focus on triggering usability problems during the development process. We integrated them into our usability-ODC framework as the categories of the trigger attribute. These techniques can be categorized into three big families: expert evaluation, usability testing and follow-up studies, and mapped to development activities [4].

[Fig. 4: Usability problem triggers. Usability engineering activities (pluralistic walkthrough, heuristic evaluation, cognitive walkthrough, cognitive analysis (GOMS), thinking aloud, performance measurement, laboratory usability testing, field usability testing, questionnaires and surveys, focus group, logging actual use, online user feedback facilities) mapped to software engineering activities (requirement analysis & validation, design & implementation, test & evaluation, iteration/release), with context of use, task analysis and user analysis as starting points.]

Detailed descriptions of these techniques can be found in [4], [10], [12]. Figure 4 shows the usability problem triggers defined in our usability-ODC framework and their correlation with the important SE and UE activities. In order to employ these triggers effectively for continuous usability improvement, an iterative software development process should be used to match iterative user-centered development [10].

D. Learning, Performing and Perception Attributes

Usability is a cumulative property of a product. By decomposing usability into components and performing the related measurements, designers can understand the potential factors from the viewpoint of the users and work on eliminating or reducing the impact of design-induced mistakes.

Since abundant research has already focused on finding and defining the optimal set of components that compose usability [5], in our research we identified the set of components most commonly cited by other researchers and adapted them into three attributes: problem in learning, problem in performing given tasks by users, and user perception. Below are the definitions of each attribute and the related measurement methods [1]:


• The problem in learning attribute means the system should be easy to learn and easy to remember. It includes the values learnability, memorability and retention over time. The three categories can be measured by examining a specific metric (such as time or effort to learn, or retention rate) per trial for each task, or aggregated across all tasks.




• The problem in performing given tasks by users attribute is defined as the effectiveness and efficiency of the system; accordingly, the attribute has two values: effectiveness and efficiency. Effectiveness can be measured by calculating the task success rate for each task and each user. Efficiency can be measured by calculating the time or the amount of effort required to complete the task.


• The user perception attribute means the system should be pleasant to use. User perception attribute values can be flexibly specified based on the satisfaction of users with the application system. It can be measured by calculating the degree to which users were happy with their experience while performing the task.

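As a sketch of how the three effect attributes above could be quantified, the helper functions below follow the stated measurement definitions. They are our illustration under those definitions, not code from the paper:

```python
def effectiveness(successes: int, attempts: int) -> float:
    """Task success rate: completed trials over total trials for one task."""
    return successes / attempts

def efficiency(completion_times: list[float]) -> float:
    """Mean time (or effort) spent by participants who completed the task."""
    return sum(completion_times) / len(completion_times)

def satisfaction(likert_ratings: list[int]) -> float:
    """Mean rating on a 1-5 Likert scale for one questionnaire item."""
    return sum(likert_ratings) / len(likert_ratings)
```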

E. Severity Attribute

The severity attribute in the original ODC framework is used to measure the degree of impact of a defect. Similarly, it is meaningful to determine the severity of usability problems to assess their negative impact on the system. There are many severity evaluation methods; each includes an extreme at one end, indicating a problem that prevents completion of the task, and an opposite extreme, indicating an enhancement that would merely be nice to have. In our research, we measure severity on a four-point scale adapted from [10], as follows: 1 = cosmetic problem only, 2 = minor usability problem, 3 = major usability problem and 4 = usability catastrophe.
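The four-point scale adapted from [10] maps naturally to an ordered enumeration; a small sketch (our names), whose ordering supports prioritization:

```python
from enum import IntEnum

class Severity(IntEnum):
    COSMETIC = 1      # cosmetic problem only
    MINOR = 2         # minor usability problem
    MAJOR = 3         # major usability problem
    CATASTROPHE = 4   # usability catastrophe

# Ordered comparison lets us, e.g., select MAJOR problems and above first.
needs_priority = [s for s in Severity if s >= Severity.MAJOR]
```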

F. Problem Analysis Based on the Usability-ODC Framework

For usability problem analysis, we can follow the common procedure of ODC defect analysis in SE and incorporate the traditional usability measurement methods of UE.

First, one-way analysis can be performed for all attributes to identify what kinds of problems were encountered most often.

We can perform two-way analysis on the combination of any two attributes. Previous research has identified the association between usability problem triggers and usability problem types [7], [9], [12], shown in Table I. Combining the triggers with the artifact and task related problem attributes can help explain why such problems escape into the field.

[TABLE I: Correlation of trigger attribute with artifact attribute and task attribute. Rows: pluralistic walkthrough, cognitive walkthrough, heuristic evaluation, GOMS, thinking aloud, lab usability testing, field usability testing, performance analysis, questionnaires and surveys, focus group, logging actual use, online user feedback facilities. Columns: R-Representation, L-Language, M-Manipulation, TM-Task mapping, TF-Task facilitation. A “+” indicates the trigger can address usability problems in a given category, a blank indicates it can moderately address them, and a “-” indicates it cannot.]
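Given a table of classified problems (one row per problem, one column per usability-ODC attribute), the one-way and two-way analyses reduce to frequency counts and cross-tabulations. A sketch with pandas follows; the column names and sample rows are hypothetical:

```python
import pandas as pd

# One row per classified usability problem (illustrative data only).
problems = pd.DataFrame({
    "task":     ["interaction", "navigation", "interaction", "functionality"],
    "artifact": ["visual cues", "screen layout", "visual cues", "naming/labeling"],
    "severity": [3, 2, 3, 1],
})

# One-way analysis: distribution of problems over a single attribute.
one_way = problems["task"].value_counts(normalize=True) * 100

# Two-way analysis: cluster problems over a pair of attributes (cf. Figure 5).
two_way = pd.crosstab(problems["task"], problems["artifact"])
print(one_way, two_way, sep="\n")
```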

Finally, the results from the one-way, two-way or multi-way analyses and measurements can be consolidated. The root causes identified in the previous stages are summarized, and remedial actions for improving usability are recommended and prioritized.

IV. CASE STUDY

We performed a case study to provide an initial validation of the applicability and effectiveness of our usability-ODC framework. The case study is based on 30 usability problems collected from a heuristic evaluation and 40 problems from a lab-based usability test performed by the Mediatech Research company (www.mediatechresearch.com) for a Web-based contact management application. This system allows people and businesses to record and track friend, customer, and client interactions, including emails, faxes, calendars, project- and sales-related documents, and more.

A. Usability Measurement for Problem Classification

Since measurement methods are often used to assist experts in analyzing usability problems, we carried out a basic usability measurement using three metrics: task completion rate, task completion time and user satisfaction. Twenty students were recruited to participate in the measurement study. All participants were asked to perform six specific tasks in the contact management system within an expected period of time.

All the usability issues apparent from the participants' performance and verbal protocols for each task were recorded. The problems among the 70 usability problems previously found were identified and associated with the specific tasks, and the frequency of their occurrence among the 20 participants was recorded.

If a participant was able to complete a task without any help during the session and within the time limit, we recorded the task as successfully completed for that participant. The task completion rate was calculated as the ratio of the number of participants who completed the task to the total of 20 participants for each task. It can be used to analyze the effectiveness related to the performing attribute. The task completion rates for the six individual tasks are 50%, 100%, 80%, 25%, 95% and 20%.

The completion time for each task and each participant who completed the task was also recorded. These time data were used to assist the efficiency analysis related to the performing attribute.


When the participants had completed the six tasks, they were required to fill out a questionnaire providing their ratings for 10 specific questions. All questions used a Likert-type scale from 1 (Strongly Disagree) to 5 (Strongly Agree), with 3 as the neutral mid-point. The minimum, maximum and mean rating for each question were calculated. The 10 questions were taken as the attribute values under the perception attribute. A questionnaire is an effective method to assess users' perception and identify the related problems.
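The per-question summary statistics described above amount to a simple aggregation. A sketch follows, with hypothetical question wordings and illustrative ratings for two of the ten questions:

```python
# Ratings on a 1 (Strongly Disagree) to 5 (Strongly Agree) Likert scale,
# one list of 20 participant ratings per question (illustrative data only).
ratings = {
    "Q1: The system was easy to use": [4, 2, 3, 5, 4, 2, 3, 4, 1, 3,
                                       4, 2, 5, 3, 2, 4, 3, 2, 4, 3],
    "Q2: I could find what I needed": [2, 1, 3, 2, 4, 2, 1, 3, 2, 2,
                                       3, 1, 2, 4, 2, 3, 2, 1, 3, 2],
}
for question, scores in ratings.items():
    # Minimum, maximum and mean rating for each perception category.
    print(question, min(scores), max(scores), sum(scores) / len(scores))
```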



B. Usability Problem Classification

Three specialists with substantial experience in user interface design and usability engineering were given the materials and trained on our usability-ODC framework. They were then required to individually classify the usability problems into the categories defined under our seven usability problem attributes, using their domain knowledge and experience and the data collected from our measurement study.

Basically, artifact and task related problems can be identified and classified based on the problem descriptions. For the classification under the trigger attribute, the specialists had to employ their rich knowledge and experience in UE: although the 70 usability problems were found by heuristic evaluation and lab-based usability testing in the test and evaluation phase, it is possible that these problems could have been found by other trigger techniques in earlier phases. The experts were encouraged to choose the best potential trigger techniques that could catch each problem.

For the classification related to the learning, performing, perception and severity attributes, each specialist was required to use the data from our measurement study to assist their decisions. The measurement results show that task completion rates are very low for Tasks 1, 4 and 6 (≤ 50%). Task completion rate is closely related to the effectiveness of the system. The specialists examined all problems that occurred in Tasks 1, 4 and 6, identified the problems most likely to cause the failure of these three tasks, and then classified those problems into the effectiveness category under the performing attribute. Additionally, the problems that impact task completion also received higher severity.

The problems that impact usability efficiency can be identified by examining task completion time. The experts checked all the problems encountered by participants who completed the tasks but spent longer than the expected time, and then classified these problems into the efficiency category under the performing attribute and into severity categories. Only problems that had not already received the effectiveness category were classified into the efficiency category.

Similarly, using the results from the questionnaire, the three experts checked the problems encountered by participants who gave a low rating (≤ mean) for each question and classified those problems into the related category. As mentioned above, each question is taken as a category under the perception attribute.

Finally, the 70 usability problems were independently classified into the related categories under the seven attributes by our three experts, with an agreement of 90%; cross-validation was then carried out to resolve their differences. In this case study, we did not perform measurement for the learning attribute due to the time limit. Typically, collecting and measuring the problem in learning attribute requires multiple trials with the same participants, with a period of time between trials. Therefore, the problem classification under the learning attribute had to be carried out based on the domain knowledge and experience of the specialists.
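The reported 90% agreement corresponds to simple percent agreement across raters and problems. A sketch of how it could be computed for one attribute follows; the labels and data are hypothetical:

```python
def percent_agreement(labels_by_rater: list[list[str]]) -> float:
    """Fraction of problems on which all raters chose the same category."""
    per_problem = zip(*labels_by_rater)  # one tuple of rater labels per problem
    votes = [len(set(labels)) == 1 for labels in per_problem]
    return sum(votes) / len(votes)

# Three experts classifying four problems under the task attribute.
raters = [
    ["interaction", "navigation", "functionality", "interaction"],
    ["interaction", "navigation", "functionality", "navigation"],
    ["interaction", "navigation", "functionality", "interaction"],
]
print(percent_agreement(raters))  # 0.75: all three agree on 3 of 4 problems
```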

C. Problem Analysis

Usability problem analysis can be performed based on our usability-ODC classification for the improvement of product usability and process. One-way analysis was performed for all seven attributes. Typically, the task related problem attribute is the most interesting attribute for designers and developers. The problem distribution among the categories under the task attribute is presented in Table II. We found that task-mapping problems such as interaction, navigation and functionality problems dominate. Interaction problems are typically due to the lack of user-centered or cognitively direct task mapping or task structure.

TABLE II: Distribution of the task related problem attribute

Type:   I     F     N     T    A    K    E    Null
Count:  24    15    13    4    4    1    1    8
%:      34.3  21.4  18.6  5.7  5.7  1.4  1.4  11.4

Note: I-Interaction, F-Function, N-Navigation, T-Task automation, A-Alternative, K-Keep on track, E-Error tolerance

Two-way usability problem analysis is often used for the open issues left from the one-way analyses. We further associated the task related problem attribute with the artifact related problem attribute to perform two-way analysis and identify problem clusters. Figure 5 shows the problem distribution over the association between the two attributes. From this figure, we noticed that interaction problems were most often also classified under the visual cues subcategory of the cognitive aspects category. Checking the descriptions of these problems, we found that the interaction problems were mainly caused by missing or inconsistent visual cues, such as the same icon/graphic representing different objects, as well as difficulty in recognizing and understanding visual cues. Therefore, designers should check whether the interface objects are metaphoric representations of the real task objects, and whether they provide clear cues to help users understand the task structure.

[Fig. 5: Problem type clusters among artifact and task attributes]

In order to identify which aspects of the system the usability problems were seriously affecting, we carried out three-way analysis. Table III shows the result of the three-way analysis. We noticed that 9 interaction, 4 navigation and 2 functionality problems were seriously impacting the effectiveness of the system. Checking these problems, we found that many users were confused about the task structure and sequence presented in the user interface of the system. Therefore we suspect that


in the requirement phase, designers and developers did not perform effective task analysis. They failed to capture the essential task structure and to map the tasks to interface objects and actions in the interaction design. We recommend that designers consider redoing the task analysis for the tasks with low completion rates.


TABLE III: 3-way analysis of the performing attribute by the severity (Major) and task attributes

                 Performing Attribute
Task Attribute   Effectiveness   Efficiency
Interaction      9               10
Functionality    2               2
Navigation       4               2
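A three-way analysis like Table III is a two-way cross-tabulation computed on a severity-filtered slice. A pandas sketch follows, with a hypothetical `problems` frame holding the three attribute columns:

```python
import pandas as pd

# Classified records with task, performing and severity columns (illustrative).
problems = pd.DataFrame({
    "task":       ["interaction", "interaction", "navigation", "functionality"],
    "performing": ["effectiveness", "efficiency", "effectiveness", "efficiency"],
    "severity":   [3, 3, 3, 2],
})

# Restrict to major problems (severity 3), then cross-tabulate the other two
# attributes, mirroring the structure of Table III.
major = problems[problems["severity"] == 3]
three_way = pd.crosstab(major["task"], major["performing"])
print(three_way)
```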

D. Process Improvement

With our usability-ODC classification and analysis framework, we can improve the whole development process. All 70 usability problems were found by heuristic evaluation and lab-based usability testing in the test and evaluation phase, but we found that many of these problems should have been detected in earlier phases.

Among the 30 problems found by heuristic evaluation, 9 problems could have been detected by cognitive walkthrough. Of the 40 problems found by lab usability testing, 10 problems could have been found by cognitive walkthrough. Typically, cognitive walkthrough can be performed in the requirement phase to validate prototypes and usability specifications, as depicted in Figure 4. In addition, 15 usability problems found in lab usability testing were expected to be detected earlier by heuristic evaluation. Therefore, we concluded that some techniques, such as cognitive and pluralistic walkthroughs and heuristic evaluation, were not effectively performed in the earlier requirement analysis or design phases for problem removal. To improve the development process, project managers should examine whether appropriate techniques were effectively carried out in the corresponding phases, and whether the related teams have the appropriate skills for usability assurance.
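This process-improvement argument rests on comparing where each problem was actually found with the earliest phase in which some trigger could have found it. A small sketch of that escape count follows; the records and the phase ordering are our illustrative assumptions:

```python
# Ordered development phases, earliest first (assumed ordering for illustration).
PHASES = ["requirement", "design", "test & evaluation"]

# (actual trigger phase, earliest potential trigger phase) for each problem.
records = [
    ("test & evaluation", "requirement"),        # catchable by cognitive walkthrough
    ("test & evaluation", "design"),             # catchable by heuristic evaluation
    ("test & evaluation", "test & evaluation"),  # found as early as possible
]

# A problem "escaped" if it could have been triggered in an earlier phase.
escaped = sum(PHASES.index(earliest) < PHASES.index(actual)
              for actual, earliest in records)
print(f"{escaped} of {len(records)} problems escaped from earlier phases")
```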

V. CONCLUSIONS AND PERSPECTIVES

This paper presented a usability-ODC framework to classify and analyze usability problems. The original ODC (Orthogonal Defect Classification) scheme has proved effective for addressing quality defects to improve the software development process. By adapting the original ODC, we integrated the traditional usability engineering practices that address usability issues in different human-computer interaction (HCI) activities. In the framework, seven usability problem attributes were identified as the classification foundation, namely (1) artifact related problem, (2) task related problem, (3) problem trigger, (4) problem in learning, (5) problem in performing given tasks by users, (6) user perception, and (7) problem severity. With our framework, cause-effect classification and analysis can be performed. Meanwhile, by addressing usability problems during the whole development cycle, HCI activities and the software process can be improved together.

The proposed framework was evaluated through a case study, which showed that our usability-ODC classification and analysis framework can give constructive feedback on usability problems and the techniques used to address them, improving usability engineering, the software process and the product. In the future, we plan to adapt and integrate other original ODC attributes, such as source and target, into our usability-ODC framework to identify additional usability-specific problem characteristics. We also plan to incorporate additional measurement techniques to assist usability problem classification. This enhanced classification and analysis framework will be applied to additional and more diverse applications to further validate its applicability and effectiveness.

ACKNOWLEDGMENT

This research is supported in part by NSF Grant #1126747, Raytheon and the NSF Net-Centric I/UCRC.

REFERENCES

[1] Manuel F. Bertoa, Jose M. Troya, and Antonio Vallecillo. Measuring the usability of software components. Journal of Systems and Software, 79:427–439, 2006.
[2] John M. Carroll and Mary Beth Rosson. Getting around the task-artifact cycle: how to make claims and design by scenario. ACM Transactions on Information Systems, 10(2):181–212, 1992.
[3] Ram Chillarege, I. Bhandari, J. Chaar, M. Halliday, D. Moebus, B. Ray, and M.-Y. Wong. Orthogonal defect classification — a concept for in-process measurements. IEEE Transactions on Software Engineering, 18(11):943–956, November 1992.
[4] Xavier Ferre, Natalia Juristo, and Ana M. Moreno. Which, when and how usability techniques and activities should be integrated. In Human-Centered Software Engineering: Integrating Usability in the Software Development Lifecycle, pages 173–200. Springer Netherlands, 2005.
[5] Eelke Folmer, Jilles van Gurp, and Jan Bosch. A framework for capturing the relationship between usability and software architecture. Software Process: Improvement and Practice, 8(2):67–87, 2003.
[6] Dong-Han Ham. A new framework for characterizing and categorizing usability problems. In EKC2008 Proceedings of the EU-Korea Conference on Science and Technology, pages 345–353. Springer, 2008.
[7] Susan L. Keenan. Product usability and process improvement based on usability problem classification. PhD thesis, Virginia Polytechnic Institute and State University, 1996.
[8] Li Ma and Jeff Tian. Web error classification and analysis for reliability improvement. Journal of Systems and Software, 80(6):795–804, June 2007.
[9] Deborah J. Mayhew. The usability engineering lifecycle. In CHI'99 Extended Abstracts on Human Factors in Computing Systems, pages 147–148. ACM, 1999.
[10] Jakob Nielsen. Usability Engineering. Morgan Kaufmann, San Francisco, 1993.
[11] Hokyoung Ryu and Andrew Monk. Analysing interaction problems with cyclic interaction theory: low-level interaction walkthrough. PsychNology Journal, 2(3):304–330, 2004.
[12] Ben Shneiderman and Catherine Plaisant. Designing the User Interface: Strategies for Effective Human-Computer Interaction, 5th edition. Prentice Hall, 2009.
[13] Jeff Tian. Software Quality Engineering: Testing, Quality Assurance, and Quantifiable Improvement. John Wiley & Sons and IEEE CS Press, Hoboken, New Jersey, 2005.
[14] Sigurbjörg Gróa Vilbergsdóttir, Ebba Thora Hvannberg, and Effie Lai-Chong Law. Classification of usability problems (CUP) scheme: augmentation and exploitation. In Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles, pages 281–290. ACM, 2006.
