
USABILITY EVALUATION METHODS: A LITERATURE REVIEW

ANKITA MADAN
CSE Department, Amity University, Noida, U.P. 201303, India
[email protected]
http://www.amity.edu

SANJAY KUMAR DUBEY
CSE Department, Amity University, Noida, U.P. 201303, India
[email protected]
http://www.amity.edu

Abstract: Usability is an important factor in all software quality models and a key factor in the development of successful interactive software applications. It is one of the most widely used concepts in the software engineering field and shapes a software system's demand and use. Because of the wide importance of this quality factor, various usability evaluation methods have been proposed by usability experts and researchers. This paper presents a comprehensive study of different usability evaluation methods. The objective of this paper is to lay down an intensive and conceptual study of usability concepts.

Keywords: Usability; software; system; approach.

1. Introduction

Demand for quality software systems is increasing rapidly, yet a wide range of software systems are rejected in spite of the great expense incurred in their development. This is due to poor interaction with the system and the failure of the software to fulfil its tasks. Usability is a product attribute that influences the quality of a software system. It is a transient and elusive concept, with various sub-attributes that help explain its abstractness. Several quality models given by researchers and experts, such as McCall (1977), FURPS (1987), the Capability Maturity Model (1989), IEEE (1992), Dromey (1995), and ISO (1991, 1998, 2001), count usability as an indispensable quality attribute for the development of a quality software system. Usability is defined as "the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component" (IEEE Std. 1061, 1992). Usability correlates with the functionality of the system and helps in its evaluation. A lack of usability causes failure of a software system, leading to substantial monetary loss, user dissatisfaction, lost staff productivity, and wasted time. Usability evaluation is therefore very important in the process of designing usable software systems, but there are still no apt criteria or models for usability evaluation because of its fuzzy characteristics. This paper accordingly presents an extensive survey of usability concepts and evaluation, covering the evolution of the usability concept and its evaluation methods over the past three decades.

2. Usability Models

Usability models are conceptual views that lay down the focus areas needed to demonstrate the usability of existing software. These criteria are helpful in the usability evaluation of a software system. Eason's model (1984) characterized usability in terms of three independent sets of characteristics of the setting in which a task is performed, namely Task Characteristics, User Characteristics, and System Characteristics, together with User Reaction as the dependent variable. Later, Shackel (1991) emphasized the importance of usability engineering and the relative nature of the concept, giving four important characteristics of usability: effectiveness, learnability, flexibility, and attitude. Nielsen's model (1993) recognized usability as an important attribute influencing the acceptance of a product; he divided acceptability into practical and social acceptability and gave five sub-attributes of usability, namely learnability, efficiency, memorability, errors, and satisfaction. The International Organization for Standardization gave a model consisting of three basic sub-attributes, namely effectiveness, efficiency, and satisfaction (ISO 9241-11, 1998). Moving ahead, ISO 9126 (2001) laid down the following sub-attributes of usability: understandability, learnability, operability, attractiveness, and usability compliance. The usability models and their definitions are given in Table 1.

Table 1. Taxonomy of Usability Models

| Model | Sub-attributes | Definitions |
|---|---|---|
| Eason Model (1984) | Task: Frequency | Number of times a task is performed by a user. |
| | Task: Openness | Extent to which a task is modifiable. |
| | User: Knowledge | The knowledge that the user applies to the task; it may be appropriate or inappropriate. |
| | User: Motivation | How determined the user is to complete the task. |
| | User: Discretion | The user's ability to choose not to use some part of a system. |
| | System: Ease of learning | The effort required to understand and operate an unfamiliar system. |
| | System: Ease of use | The effort required to operate a system once it has been understood and mastered by the user. |
| | System: Task match | The extent to which the information and functions that a system provides match the needs of the user. |
| Shackel Model (1991) | Effectiveness | The system's performance is better than some required level, by some required percentage of the specified target range of users, within some required portion of the range of usage environments. |
| | Learnability | The training of users within some specified time from installation of the system; it also includes the user's re-learning time and the training and support systems. |
| | Flexibility | The positive changes or variations in the system relative to the existing ones. |
| | Attitude | The acceptance of users within their levels of discomfort, tiredness, frustration, and personal effort. |
| Nielsen Model (1993) | Learnability | The system should be easy to learn and understand, so that it is easy for the user to get their job or task executed using the software system. |
| | Efficiency | Efficiency of the system is directly related to its productivity: the more efficient a system is, the higher its throughput. |
| | Memorability | Best suited for intermittent users: the user can return to the system's previous state without starting again from the beginning. |
| | Errors | The error rate in any system should be low, and if an error occurs, the system should be able to recover from it. |
| | Satisfaction | The pleasant feeling that the user gets while or after using the system; it can be observed as likeability for the system and fulfilment of the specified task. |
| ISO 9241-11 (1998) | Effectiveness | The successful completion of a task by a system; it relates to the accuracy and completeness of the specified goal. |
| | Efficiency | The performance measure of a system in completing a specified task or goal successfully within time. |
| | Satisfaction | The acceptability of a system to its users in a specified context of use. |
| ISO 9126 (2001) | Understandability | The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. |
| | Learnability | The capability of the software product to enable the user to learn its application. |
| | Operability | The capability of the software product to enable the user to operate and control it. |
| | Attractiveness | The capability of the software product to be attractive to the user. |
| | Usability compliance | The capability of the software product to adhere to standards, conventions, style guides, or regulations related to usability. |
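A taxonomy such as Table 1 can also be encoded directly as data, which is convenient when building evaluation checklists or scoring sheets from a chosen model. The following is a minimal illustrative sketch, not part of any surveyed method; the dictionary contents simply restate Table 1, and all function and variable names are hypothetical.

```python
# Illustrative encoding of Table 1 as a data structure (hypothetical names).
# Each model maps to its sub-attributes; a checklist generator walks the taxonomy.

USABILITY_MODELS = {
    "Shackel (1991)": ["effectiveness", "learnability", "flexibility", "attitude"],
    "Nielsen (1993)": ["learnability", "efficiency", "memorability", "errors", "satisfaction"],
    "ISO 9241-11 (1998)": ["effectiveness", "efficiency", "satisfaction"],
    "ISO 9126 (2001)": ["understandability", "learnability", "operability",
                        "attractiveness", "usability compliance"],
}

def make_checklist(model_name: str) -> list[str]:
    """Turn a model's sub-attributes into blank checklist items for an evaluation."""
    attributes = USABILITY_MODELS[model_name]
    return [f"Rate the system's {attr} (1-5):" for attr in attributes]

if __name__ == "__main__":
    for item in make_checklist("ISO 9241-11 (1998)"):
        print(item)
```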


3. Literature Survey

Over time, usability has been studied and explored by many researchers and scholars. It is a multidimensional concept that opens up areas for research; it has evolved over time and has found relevance in many aspects. Foley and Van Dam (1982) described it, with respect to user interface guidelines, as a property of the syntactic and semantic analysis of a user interface. Smith and Mosier (1984) made the next attempt by describing it as a product attribute, defining the concept by naming product or system attributes or qualities that influence usability. In the same year, Eason supported the view that usability is the question of how well users can use the system's functionality. Gould (1985) held that any system designed for people to use should be built keeping in mind that it should be easy to learn and remember, it should be useful, it should contain the functions that people really need in their work, and it should be easy and enjoyable to use. B. Shackel (1986) gave an outstanding definition of usability that could be used throughout the system development life cycle: as per this description, the system must satisfy the following criteria: effectiveness, learnability, flexibility, and attitude.

Doll and Torkzadeh (1988) presented a model for satisfaction measurement called the End User Computing Satisfaction Instrument (EUCSI), used for specific applications. Ravden and Johnson (1989) presented a usability evaluation mechanism, software inspection, and gave a detailed checklist of 122 items divided into 9 dimensions. Igbaria and Parasuraman (1989) considered fun to be very influential in the acceptance of any software system. Booth (1989) thought it difficult to specify and measure the flexibility of a system and believed that being useful is fundamental to usability; he thus modified Shackel's criteria into usefulness, effectiveness, learnability (or ease of use), and attitude (or likeability). Polson and Lewis (1990) suggested user interface design solutions, contributing problem-solving strategies for novice users interacting with complex interfaces. Holcomb and Tharp (1991) presented a software usability model that lets system designers decide which usability sub-attributes should be included; it provides a consistent metric for usability. ISO 9126 (1991) defined usability as "a set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users". Brian Shackel (1991) elaborated the usability concept: "usability of a system or equipment is the capability in human functional terms to be used easily and effectively by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the specified range of environmental scenarios". Mayhew (1992) reviewed the guidelines and instructions for user interface development, including the general usability principles that describe the desirable properties of an interface. Grudin (1992) proceeded towards the practical acceptability of a system within various categories such as cost, support, and system usefulness; system usefulness was linked to the "system in use", and the study was later characterized in terms of usability, which referred to how successfully a user can use the system's functionality. The reaction of the user to the interface can be related to its usability on the basis of the efficiency and efficacy of the interface (Hix and Hartson, 1993); their classification of usability depends upon learnability, retainability, initial performance, long-term performance, advanced feature usage, first impression, and prolonged user satisfaction.

Nielsen (1993) presented usability heuristics for the inspection method of usability evaluation, used to check the usability principles of a software system; the usability principles of the heuristics are utilized by evaluators to examine the interface. According to his classification, usability has five sub-attributes: learnability (easy to learn), efficiency (efficient to use), memorability (easy to remember), errors (the relevance of catastrophic errors for applications), and satisfaction (pleasant to use). Dumas and Redish (1993) explained their definition of usability on the basis of a focus on users: usability means use of the product by users for productivity, users are busy people trying to accomplish tasks, and users decide when a product is easy to use. Preece et al. (1993) categorized usability into the sub-attributes safety, effectiveness, efficiency, and enjoyableness, noting that usability can vary with a user's prior experience of similar software systems; later they proposed a new classification composed of learnability, throughput, flexibility, and attitude (Preece et al., 1994). Bevan and Macleod (1994), discussing the ISO 9241 approach, regarded usability as "a property of the overall system: it is the quality of use in a context". Nielsen and Levy (1994) both worked on user satisfaction assessment of products with the aim of usability evaluation. Logan (1994) offered perhaps the most distinctive view of usability, dividing it into a social and an emotional dimension; in the emotional dimension, a usable product attracts the user's attention, enables learning, and relieves computer anxiety. Caplan (1994) was another expert with a new approach to usability: he defined apparent usability as an important consideration in the design of a software system, "the ease of use that is perceived by a customer upon first looking at a product, but not using it", and actual usability as "the ease of use experienced during operation of the product". Lamb (1995) claimed that usability issues are not restricted to interface usability; it is a wider concept that includes content usability, organizational usability, and inter-organizational usability. Guillemette (1995) refers usability to "the degree to which an information system can be effectively used by target users in the performance of tasks." Usability was later divided into two notions:


(i) Inherent usability (Kurosu and Kashimura, 1995): the functional or dynamic part of interface usability, whose sub-attributes focus on how to make the product easy to understand, easy to learn, efficient to use, less error-prone, and pleasurable. (ii) Apparent usability (Kurosu and Kashimura, 1995; Tractinsky, 1997): more related to the visual impression of the interface.

Nielsen (1995) presented "discount usability engineering", and Botman (1996) "do-it-yourself usability evaluation". Butler (1996) dealt with usability engineering, which includes software system models, user models, interface models, the links between these, the development of standards, and prototyping activities. Harrison and Rainer (1996) reviewed a model used for measuring computing satisfaction, the EUCSI, defined as "an affective attitude towards a specific computer application by someone who interacts with the application directly"; it includes the sub-attributes content, accuracy, format, timeliness, and ease of use. Kanis (1997) and Hollnagel (1997) held that products are tools and that a high degree of usability is obtained when the usability error rate is at a minimum. Gluck (1997) correlated usability with usefulness and usableness: usableness answers the question "Can I invoke this function?", while usefulness answers "Did it really help me?" or "Was it worth the effort?". ISO 9241-11 (1998), "Guidance on usability", discusses usability for the purposes of system requirement specifications and evaluation. Lecerof and Paterno (1998) provided a definition addressing a system's importance to users, the efficiency of the software system, the user's subjective feelings, learnability, and the system's safety features. Thomas (1998) categorized usability sub-attributes into three categories: outcome, which includes effectiveness, efficiency, and satisfaction; process, which covers ease of use, interface, learnability, memorability, and error recovery; and task, which covers functionality and compatibility.

Microsoft also regarded usability as a set of strategies for attracting users' reactions to a system and feeding them into the various development stages (Veldof, Prasse, and Mills, 1999). Head (1999), however, pointed out that a simple, easy-to-use interface is the main value point of usability: the core value of "usability is rooted in cognitive science - the study of how people perceive and process information through learning, the use of memory, and attention". Design guidelines contain the instructions and principles required to build an effective, user-friendly interface; Vanderdonckt (1999) conceptualized these methods in five categories: design rules, ergonomic algorithms, style guides, standards, and collections of guidelines. The usability concept was also regarded as important for the pedagogical value of e-learning systems: improvement based on usability evaluation results makes such systems more usable, yet a system may still make no pedagogical sense (Squires and Preece, 1999). "In technical writing, clear and accurate definitions are critical" (Alred, Brusaw, and Oliu, 2000), and a concise definition of usability is needed because it is potentially affected by both technical/system and human factors. Arms (2000) stated that usability comprises aspects including interface design, functional design, data and metadata, and the computer systems and networking. Whitney Quesenbery (2001, 2003, 2004) proposed "the five E's of usability": effectiveness, efficiency, engagement, error tolerance, and ease of learning.

This explains the requirement for an interface design that is easy to learn, remember, and use, with few errors, for its intended users and the tasks it is meant to support (Battleson, Booth, and Weintrop, 2001). Web usability was described as the user's experience in reading or interacting with a Web site (Brophy and Craven, 2007; Hudson, 2001); the notion of human-computer interaction thus extends to Web technology. Turner (2002) illustrated a checklist for the evaluation of usability, characterizing usability in terms of navigation, page design, consistency, content, context of use, accessibility, and interactivity. Blandford and Buchanan (2002a) framed the usability concept as technical, cognitive, and social design based. In the context of Web usability, Palmer (2002) defined usability in terms of ease of navigation for task performance, clarity of interaction, ease of reading, information organization, speed, and layout. A combination of analytical and empirical evaluation methods called "systematic usability evaluation" was devised for usability measurement (Matera et al., 2002). Oulanov and Pajarillo (2002) stated that for successful communication, interface effectiveness is one of the most important aspects, because the interface is the medium of interaction.

Guenther (2003) marked out the difficulties, stating that "defining usability is complicated". Pack (2003) added to this by expressing that "the term has been used so often in so many different contexts, it is in danger of losing its precise meaning". Campbell and Aucoin (2003) explicitly stated that "usability refers to the relationships between tools and their user and it is the quality of a system that makes it easy to learn, easy to use, easy to remember, error tolerant and subjectively pleasing". Abran et al. (2003) referred to usability as a set of multiple concepts: performance of the system, execution time of a specified task, user satisfaction, and ease of learning. De Villers (2004) and Dringus and Cohen (2005) shared a common position, namely that usability evaluation methods should also consider pedagogical factors; evaluators should therefore take into account learning theory, the learning cycle, and educational test research, and then apply them to e-learning evaluation. Krug (2006) studied usability from the user's perspective, based on their experience. Similarly, Dee and Allen (2006) noted that when an end-user interface is easy to use and intuitive, it conforms to usability principles. Ten usability factors, namely efficiency, effectiveness, productivity, satisfaction, learnability, safety, trustfulness, accessibility, universality, and usefulness, associated with twenty-six usability measurement criteria, were classified by Seffah, Donyaee, Kline, and Padda (2006). Tom Tullis and Bill Albert (2008) presented "Tips and Tricks for Measuring the User Experience", which includes the following points: know your data; show your confidence (intervals); deal with binary success data (appropriately); compare means; consider using expectation measures; use the System Usability Scale (SUS); show frequency distributions; combine different metrics; use appropriate tools; and present data appropriately.


Thomas S. Tullis (2009) explained some of the myths surrounding usability, presenting them as the "Top Ten Myths about Usability". Gardner-Bonneau (2010) discussed the software system's capability to sustain changes in its technical prospects without hampering usability effectiveness. Jennifer C. Romano Bergstrom et al. (2011) carried out a demonstration and explained the benefits and challenges faced by designers during usability testing of website designs. Table 2 gives a comprehensive overview of the usability concepts.

Table 2. Quick review of the development of the usability concept from 1982-2011

| Researchers | Usability Concepts |
|---|---|
| Foley and Van Dam (1982) | User interface guidelines. |
| Smith and Mosier (1984) | Described usability as a product attribute. |
| Eason (1984) | Interrelated usability and functionality. |
| Gould (1985) | Defined usability in terms of learnability, usefulness, and ease of use. |
| Shneiderman (1986) | Guidelines for error prevention; discussed system response time and data entry within HCI. |
| Shackel (1986) | Defined usability with the factors effectiveness, learnability, flexibility, and attitude. |
| Tyldesley (1988) | Mentioned 22 factors that could be used to build metrics and specifications. |
| Doll & Torkzadeh (1988) | End User Computing Satisfaction Instrument (EUCSI). |
| Ravden & Johnson (1989) | Presented software inspection as a usability evaluation mechanism. |
| Igbaria & Parasuraman (1989) | Enjoyability is directly proportional to the acceptance of a system. |
| Booth (1989) | Modified Shackel's criteria into usefulness, effectiveness, learnability, and attitude. |
| Polson & Lewis (1990) | Gave problem-solving strategies for novice users interacting with complex interfaces. |
| Holcomb & Tharp (1990) | Presented a software usability model for system designers to decide which usability sub-attributes should be included. |
| Brian Shackel (1991) | Elaborated the usability concept. |
| Mayhew (1992) | Reviewed usability principles that describe the desirable properties of the interface. |
| Grudin (1992) | Practical acceptability of the system within various categories such as cost, support, and system usefulness. |
| Nielsen (1993) | Presented usability heuristics for the inspection method of usability evaluation; classified usability into learnability, efficiency, memorability, errors, and satisfaction. |
| Dumas & Redish (1993) | Defined usability through a focus on users: use of the product for productivity, users as busy people trying to accomplish tasks, and the user's decision about when a product is easy to use. |
| Preece et al. (1993) | Categorized usability into the sub-attributes safety, effectiveness, efficiency, and enjoyableness. |
| Beimal et al. (1994) | Principles of acceptance for usability. |
| Nielsen & Levy (1994) | Worked on user satisfaction assessment of products. |
| Logan (1994) | Divided usability into a social and an emotional dimension. |
| Caplan (1994) | Defined apparent usability as an important consideration in the design of a software system. |
| Preece et al. (1995) | Related usability to the overall performance of the system and user satisfaction. |
| Lamb (1995) | Claimed usability as a wider concept that includes content usability, organizational usability, and inter-organizational usability. |
| Guillemette (1995) | Reviewed and defined usability with respect to the effective use of information systems. |
| Kurosu & Kashimura (1995) | Divided usability into inherent usability and apparent usability. |
| Nielsen (1995) | Presented "discount usability engineering". |
| Botman (1996) | Presented "do-it-yourself usability evaluation". |
| Butler (1996) | Dealt with usability engineering. |
| Harrison & Rainer (1996) | Reviewed a model used for computing satisfaction, the EUCSI. |
| Kanis & Hollnagel (1997) | A high degree of usability is obtained when the usability error rate is at a minimum. |
| Gluck (1997) | Correlated usability with usefulness and usableness. |
| Tractinsky (1997) | Contributed to explaining the concept of apparent usability. |
| Lecerof & Paterno (1998) | Declared functionality essential to usability. |
| Thomas (1998) | Categorized usability sub-attributes into three categories: outcome, process, and task. |
| ISO 9241-11 (1998) | "Guidance on usability", which discusses usability for the purposes of system requirement specifications and evaluation. |
| Veldof, Prasse, & Mills (1999) | Related usability, users' reactions, and system development. |
| Vanderdonckt (1999) | Design guidelines and principles to build an effective, user-friendly interface. |
| Kengeri et al. (1999) | Explained usability using effectiveness, likability, learnability, and usefulness. |
| Squires & Preece (1999) | Regarded the usability concept for its pedagogical value in e-learning systems. |
| Arms (2000) | Aspects of usability: interface design, functional design, data and metadata, and the computer systems and networking. |
| Alred et al. (2000) | Related usability to technical/system and human factors. |
| Battleson et al. (2001) | Explained interface design that is easy to learn, remember, and use, with few errors. |
| Hudson (2001) | Described the concept of Web usability. |
| Turner (2002) | Illustrated a checklist for the evaluation of usability. |
| Blandford & Buchanan (2002) | Explained usability in terms of technical, cognitive, and social design; also looked into future work on methods for analyzing usability. |
| Palmer (2002) | Explained usability in the context of Web usability. |
| Oulanov & Pajarillo (2002) | Interface effectiveness as one of the most important aspects of interaction. |
| Matera et al. (2002) | Gave "systematic usability evaluation". |
| Guenther (2003) | Illustrated the difficulties in defining usability. |
| Pack (2003) | Noted that the term has been used in so many contexts that it risks losing its precise meaning. |
| Campbell & Aucoin (2003) | Explained usability as a relationship between tools and their users. |
| Abran et al. (2003) | Referred to usability as a set of multiple concepts: system performance, execution time of a specified task, user satisfaction, and ease of learning. |
| Whitney Quesenbery (2001, 2002, 2003) | Presented "the five E's of usability": effectiveness, efficiency, engagement, error tolerance, and ease of learning. |
| De Villers (2004), Dringus & Cohen (2005), Miller (2005) | Expressed that usability evaluation methods should consider pedagogical factors. |
| Krug (2006) | Studied usability from the user's perspective, based on their experience. |
| Dee & Allen (2006) | An end-user interface that is easy to use and intuitive conforms to usability principles. |
| Seffah, Donyaee, Kline & Padda (2006) | Gave 10 usability factors (efficiency, effectiveness, productivity, satisfaction, learnability, safety, trustfulness, accessibility, universality, and usefulness) associated with twenty-six usability measurement criteria. |
| Brophy & Craven (2007) | Explained Web usability. |
| Tom Tullis & Bill Albert (2008) | Presented "Tips and Tricks for Measuring the User Experience". |
| Thomas S. Tullis (2009) | Explained the "Top Ten Myths about Usability". |
| Gardner-Bonneau (2010) | Explained the effectiveness sustained by a software system when technical changes are made to it. |
| Jennifer C. Romano Bergstrom et al. (2011) | Conducted iterative usability testing. |


4. Usability Evaluation

Various methods are available in the literature for usability evaluation, such as inspection, DRUM, QUIS, SUMI, MUSiC, and empirical testing.

4.1 Inspection

This method was proposed by Boehm et al. (1976). The users are observers, while the testing and evaluation of the design layout of the software system is done by experts. It provides the experts' views and opinions, which are essential for the development of various aspects of the software system. The two most widely used inspection methods are:

4.1.1 Heuristic Evaluation

A fast, cheap, and easy method to uncover the shortcomings and problems in a user interface design. The evaluators apply usability principles, the heuristics, for its implementation.

4.1.2 Cognitive Walkthrough

This method is based on an assessment of the user interface by experts who consider the opinions and experience of the users. It is useful in identifying user interface problems.

4.2 Empirical Testing

This method was described by Marciniak (2002). It is a lab-oriented methodology that takes user experience as a requirement for the design and development of the software system. It also examines the performance and attitude of the users involved in testing the system (Lund, 1997). Naive users are allowed to interact with the system, and the behaviour of the user and the system's response are recorded. The system on which the testing is performed can be either a prototype or the final product. In the case of a prototype, amendments can be made towards the successful design of the product, whereas in the case of a final product, user acceptance can be measured; accordingly, a system can be discarded if required.

4.3 Metrics for Usability Standards in Computing (MUSiC)

MUSiC (Bevan, 1995; Macleod et al., 1997) was developed at the National Physical Laboratory, UK, to provide the quantitative and qualitative data required to support usability engineering. This method evaluates measures of effectiveness and efficiency as follows:

Effectiveness is the capability of a software system to carry out a specified task successfully. It is defined as a function of two components, the quantity of the task attempted by the users and the quality of the goals they achieve (Macleod et al., 1998):

    Effectiveness = f(Quantity, Quality)
    Task Effectiveness (TES) = (Quantity × Quality) / 100 %

Efficiency relates effectiveness to the cost of task performance, which is formulated by calculating the amount of effort put in, that is, the input. The following definitions can then be generated:

    User Efficiency = Effectiveness / Task Time
    where Task Time is the time spent by the user to complete the task;

    Human Efficiency = Effectiveness / Effort

    Corporate Efficiency = Effectiveness / Total Cost
    where Total Cost = cost of labour + cost of resources + cost of training.
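As a concrete illustration, these MUSiC-style measures are straightforward to compute once quantity, quality, task time, effort, and cost figures have been collected. The sketch below simply restates the formulas above in Python; it is not the MUSiC tooling itself, and the function names and sample figures are hypothetical.

```python
# Minimal sketch of the MUSiC-style effectiveness/efficiency formulas (hypothetical names).

def task_effectiveness(quantity: float, quality: float) -> float:
    """TES = (Quantity x Quality) / 100, with both inputs expressed as percentages."""
    return (quantity * quality) / 100.0

def user_efficiency(effectiveness: float, task_time_min: float) -> float:
    """Effectiveness achieved per minute of task time."""
    return effectiveness / task_time_min

def corporate_efficiency(effectiveness: float, labour: float,
                         resources: float, training: float) -> float:
    """Effectiveness per unit of total cost (labour + resources + training)."""
    return effectiveness / (labour + resources + training)

if __name__ == "__main__":
    # Hypothetical session: 90% of the task attempted, at 80% quality, in 12 minutes.
    tes = task_effectiveness(quantity=90, quality=80)   # 72.0 %
    print(f"TES: {tes:.1f} %")
    print(f"User efficiency: {user_efficiency(tes, task_time_min=12):.2f} %/min")
    print(f"Corporate efficiency: {corporate_efficiency(tes, 300, 50, 100):.3f} %/unit cost")
```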


4.4 Software Usability Measurement Inventory (SUMI)

SUMI was developed at University College Cork as part of the MUSiC project (Kirakowski, Porteous and Corbett, 1992). It measures the quality of a software system from the end user's point of view. It consists of industry-standardized questionnaire statements that the user answers with Agree, Don't Know, or Disagree. This internationally standardized 50-item questionnaire is available in a variety of languages for the convenience of users, including English, German, Dutch, Spanish, and Italian. It is not time consuming, taking around 10 minutes, and can be administered either on paper or online.

4.5 Diagnostic Recorder for Usability Measurement (DRUM)

DRUM (Macleod and Rengger, 1993) is a software tool for usability evaluation developed at NPL within the MUSiC project. DRUM has a graphical user interface, online context-sensitive help, and a comprehensive user manual (Macleod et al., 1992). It analyses tests of a product, derives performance-based usability metrics from the results, and delivers them to the usability engineer. The recorded video session is analysed in real time during the first pass of recording. DRUM greatly increases the pace of analysis and automates the activity wherever possible. DRUM's Log Processor component calculates, and stores in its database, performance measures and performance-based usability metrics, which include Task Time; Snag, Search, and Help Times; Efficiency; Relative Efficiency; and Productive Period (Macleod, Bowden and Bevan, 1998). The measures and metrics are presented in tabular and graphical form, and the report is given to the product's designers, who are concerned with the usability defects.

4.6 The Questionnaire for User Interaction Satisfaction (QUIS)

QUIS (Chin et al., 1988; Harper and Norman, 1993) evaluation is based on the factors mentioned in the 'User Evaluation of Interactive Computer Systems' given by Ben Shneiderman (1986). QUIS is designed in a modular format to allow section-wise access as well as access to specific aspects. The questionnaire is effective in providing guidance for the design or redesign of systems, and it helps evaluators find areas of potential improvement in the software system. It thereby serves as a testing instrument in usability laboratories, operating on concrete product features and the user experience.
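SUMI and QUIS are standardized instruments with their own licensed scoring procedures, but the general mechanics of questionnaire-based measurement are easy to illustrate. The sketch below aggregates Agree / Don't Know / Disagree responses into a simple 0-100 score per item group; it is an illustrative stand-in, not the actual SUMI or QUIS scoring algorithm, and every name in it is hypothetical.

```python
# Illustrative questionnaire scoring (NOT the licensed SUMI/QUIS algorithm).
# Responses use a 3-point scale like SUMI's: Agree / Don't Know / Disagree.
# Real instruments also reverse-score negatively worded items, omitted here.

from statistics import mean

RESPONSE_SCORE = {"agree": 1.0, "dont_know": 0.5, "disagree": 0.0}

def scale_score(responses: list[str]) -> float:
    """Map one user's responses for a group of items onto a 0-100 scale."""
    return 100.0 * mean(RESPONSE_SCORE[r] for r in responses)

if __name__ == "__main__":
    # Hypothetical answers from one respondent for a five-item "efficiency" group.
    efficiency_items = ["agree", "agree", "dont_know", "disagree", "agree"]
    print(f"Efficiency scale: {scale_score(efficiency_items):.0f}/100")
```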

5. Conclusion

The usability concept has been a focus of attention over the years and has evolved through different definitions by researchers. Different attributes have been built up to give a clear view of usability and its aspects, and usability has been decomposed into several sub-attributes, which are hypothetical constructs used to define the success of a system. User involvement plays a key role in determining software usability once the system has been developed. This paper has surveyed research papers, published articles, and the views of usability experts to describe usability models and usability evaluation methods, and has identified the sub-attributes of usability that form the basis for the usability evaluation of a software system. The paper will be beneficial both for students and for researchers working in the field of software engineering. There remains a dilemma about the appropriate selection of a measurement technique for the usability evaluation of a software system; finding such a technique is the future scope of this paper.

References:
[1] Alred, G. J.; Brusaw, C. T.; Oliu, W. E. (2000). Handbook of Technical Writing, 6th edn. New York: Macmillan.
[2] Abran, A.; Khelifi, A.; Suryn, W.; Seffah, A. (2003): Consolidating the ISO Usability Models. Proceedings of the International Software Quality Management Conference (Springer), Glasgow, Scotland, UK.
[3] Alonso-Ríos, D.; Vázquez-García, A.; Mosqueira-Rey, E.; Moret-Bonillo, V. (2010): Usability: A Critical Analysis and a Taxonomy. International Journal of Human-Computer Interaction, 26(1), pp. 53-74.
[4] Arms, W. Y. (2000). Digital Libraries. Cambridge, MA: MIT Press.
[5] Battleson, B.; Booth, A.; Weintrop, J. (2001): Usability testing of an academic library Web site: A case study. The Journal of Academic Librarianship, 27(3), pp. 188-198.
[6] Bevan, N. (1995): Measuring usability as quality of use. Software Quality Journal, 4, pp. 115-130.
[7] Bevan, N.; Macleod, M. (1994): Usability measurement in context. Behaviour and Information Technology, 13, pp. 132-145.
[8] Blandford, A.; Buchanan, G. (2002a): Usability for digital libraries. Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries. New York: ACM Press, pp. 424.
[9] Brophy, P.; Craven, J. (2007): Web accessibility. Library Trends, 55(4), pp. 950-972.


[10] Boehm, B. W.; Brown, J. R.; Lipow, M. (1976): Quantitative evaluation of software quality. International Conference on Software Engineering Proceedings.
[11] Booth, P. (1989). An Introduction to Human-Computer Interaction. Hillsdale, USA: Lawrence Erlbaum Associates.
[12] Botman, H. (1996): Do-it-yourself usability evaluation: Guiding software developers to usability. Taylor & Francis, London, pp. 59-66.
[13] Butler, K. A. (1996): Usability engineering turns 10. Interactions, 3, pp. 59-75.
[14] Campbell, K.; Aucoin, R. (2003). Value-based design of learning portals as new academic spaces. In: Jafari, A.; Sheehan, M. (Eds.), Designing Portals: Opportunities and Challenges. Hershey, PA: IRM Press, pp. 162-185.
[15] Caplan, S. H. (1994). Making usability a Kodak product differentiator. In: Wiklund, M. E. (Ed.), Usability in Practice. AP Professional, NY, pp. 21-58.
[16] Chin, J. P.; Diehl, V. A.; Norman, K. L. (1988): Development of an instrument measuring user satisfaction of the human-computer interface. Proc. ACM CHI'88 Conf. (Washington, DC, 15-19 May), pp. 213-218.
[17] De Villers, R. (2004): Usability evaluation of an e-learning tutorial: Criteria, questions and case study. Proceedings of the Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries, pp. 284-291.
[18] Doll, W. J.; Torkzadeh, G. (1988): The measurement of end-user computing satisfaction. MIS Quarterly, pp. 259-274.
[19] Dringus, L. P.; Cohen, M. S. (2005): An adaptable usability heuristic checklist for online courses. Proceedings 35th Annual Conference Frontiers in Education, FIE'05, pp. T2H-6.
[20] Dee, C.; Allen, M. (2006): A survey of the usability of digital reference services on academic health science library Web sites. The Journal of Academic Librarianship, 32(1), pp. 69-78.
[21] Dumas, J. S.; Redish, J. (1993). A Practical Guide to Usability Testing. Ablex Publishing, Norwood, NJ.
[22] Eason, K. D. (1984): Towards the experimental study of usability. Behaviour and Information Technology, 3(2), pp. 133-143.
[23] Foley, J.; van Dam, A. (1982): Fundamentals of Interactive Computer Graphics. Addison-Wesley, USA.
[24] Gardner-Bonneau, D. (2010): Is technology becoming more usable or less, and with what consequences? Journal of Usability Studies, 5(2), pp. 46-49.
[25] Gluck, M. (1997): A descriptive study of the usability of geospatial metadata. Annual Review of OCLC Research. Accessed: October 2011, www.oclc.org/research/publications/arr/1997/gluck/gluck_frameset.htm.
[26] Gould, J. D.; Lewis, C. (1985): Designing for usability: Key principles and what designers think. Communications of the ACM, 28(3).
[27] Grudin, J. (1992): Utility and usability: Research issues and development contexts. Interacting with Computers, 4(2), pp. 209-2 .
[28] Guenther, K. (2003): Assessing Web site usability. Online, 27(2), pp. 65-68.
[29] Guillemette, R. A. (1995). The evaluation of usability in interactive information systems. In: Carey, J. M. (Ed.), Human Factors in Information Systems: Emerging Theoretical Bases. Norwood, NJ: Ablex.
[30] Harrison, A. W.; Rainer Jr, R. K. (1996). A general measure of user computing satisfaction. Computers in Human Behavior, 12(1), pp. 79-92.
[31] Head, A. (1999): Web redemption and the promise of usability. Online, 23(6), pp. 20-32.
[32] Holcomb, R.; Tharp, A. (1991): Users, a software usability model and product evaluation. Interacting with Computers, Butterworth-Heinemann, Oxford, UK, 3(2), pp. 155-166.
[33] Hix, D.; Hartson, H. R. (1993). Developing User Interfaces: Ensuring Usability Through Product and Process, chap. 2. Wiley and Sons, NY.
[34] Hollnagel, E. (1997): Cognitive ergonomics or the mind at work. Proceedings of the 13th Triennial Congress of the International Ergonomics Association, Tampere, Finland, 1997. Finnish Institute for Occupational Health, Helsinki, 3, pp. 3-5.
[35] Hudson, L. (2001): From theory to (virtual) reality. Library Journal, 126(11), pp. 12-15.
[36] IEEE Std. 1061 (1992): IEEE standard for a software quality metrics methodology. New York: IEEE Computer Society Press.
[37] Igbaria, M.; Parasuraman, S. (1989): A path analytic study of individual characteristics, computer anxiety, and attitudes toward microcomputers. Journal of Management, 15, pp. 373-388.
[38] ISO 9126 (1991): Software Product Evaluation - Quality characteristics and guidelines for their use, ISO DIS 9126.
[39] ISO 9241 (1998): Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability.
[40] ISO/IEC 9126-1 (2001): Software engineering - Product quality - Part 1: Quality model.
[41] Bergstrom, J. C. R.; Olmsted-Hawala, E. L.; Chen, J. M.; Murphy, E. D. (2011): Conducting iterative usability testing on a Web site: Challenges and benefits. Journal of Usability Studies, 7(1), pp. 9-30.
[42] Kanis, H. (1997): Usability centred research for everyday product design. Proceedings of the 13th Triennial Congress of the International Ergonomics Association, Tampere, Finland. Finnish Institute for Occupational Health, Helsinki, 2, pp. 153-155.
[43] Krug, S. (2006). Don't Make Me Think: A Common Sense Approach to Web Usability. Berkeley, CA: New Riders Publishing.
[44] Kurosu, M.; Kashimura, K. (1995): Apparent usability vs. inherent usability: Experimental analysis on the determinants of the apparent usability. Conference on Human Factors and Computing Systems. New York: ACM Press, pp. 292-293.
[45] Lamb, R. (1995). Using online resources: Reaching for the *.*s. In: Shipman, F. M.; Furuta, R.; Levy, D. M. (Eds.), Digital Libraries '95, pp. 137-146. Austin, TX: Department of Computer Science, Texas A&M University.
[46] Leventhal, L.; Barnes, J. (2009). Defining usability and models of usability engineering. Usability Engineering: Process, Products and Examples, chapter 3.
[47] Lecerof, A.; Paterno, F. (1998): Automatic support for usability evaluation. IEEE Transactions on Software Engineering, 24(10), pp. 863-888.
[48] Logan, R. J. (1994): Behavioral and emotional usability: Thomson Consumer Electronics. In: Wiklund, M. E. (Ed.), Usability in Practice. AP Professional, NY, pp. 59-82.
[49] Lund, A. M. (1997): Expert ratings of usability maxims: A study of the heuristics design experts consider important for good design. Ergonomics in Design, 5(3), pp. 15-20.
[50] Macleod, M.; Drynan, A.; Blayney, M. (1992): DRUM User Guide. National Physical Laboratory, DITC, Teddington, UK.
[51] Macleod, M.; Bowden, R.; Bevan, N.; Curson, I. (1997): The MUSiC performance method. Behaviour and Information Technology, 16, pp. 279-293.
[52] Marciniak, J. J. (2002): Encyclopedia of Software Engineering, 2, 2nd edn. Chichester: Wiley.
[53] Matera, M.; Costabile, M. F.; Garzotto, F.; Paolini, P. (2002): SUE inspection: An effective method for systematic usability evaluation of hypermedia. IEEE Transactions on Systems, Man and Cybernetics, Part A, 32(1), pp. 93-103.
[54] Mayhew, D. J. (1992): Principles and Guidelines in Software User Interface Design. Prentice Hall, Englewood Cliffs, NJ.
[55] Macleod, M.; Rengger, R. (1993): The development of DRUM: A software tool for video-assisted usability evaluation. National Physical Laboratory, DITC HCI Group, Teddington, Middlesex, TW11 0LW, UK.
[56] Macleod, M.; Bowden, R.; Bevan, N. (1998): The MUSiC Performance Measurement Method. NPL, Draft 0.8.


[57] Nielsen, J. (1993): Usability Engineering. Academic Press, San Diego, CA.
[58] Nielsen, J.; Levy, J. (1994): Measuring usability: Preference vs. performance. Communications of the ACM, 37(4), pp. 66-76.
[59] Nielsen, J. (1995): Scenarios in discount usability engineering. In: Carroll, J. M. (Ed.), Scenario-Based Design: Envisioning Work and Technology in System Development. John Wiley and Sons.
[60] Oulanov, A.; Pajarillo, E. F. Y. (2002): CUNY+ Web: Usability study of the Web-based GUI version of the bibliographic database of the City University of New York (CUNY). The Electronic Library, 20(6), pp. 481-487.
[61] Pack, T. (2003): Fiddling with the Internet dials: Understanding usability. Online, 27(2), pp. 36-38.
[62] Palmer, J. W. (2002): Web site usability, design, and performance metrics. Information Systems Research, 13(2), pp. 151-167.
[63] Polson, P. G.; Lewis, C. H. (1990): Theory-based design for easily learned interfaces. Human-Computer Interaction, 5, pp. 191-220.
[64] Porteous, M.; Kirakowski, J.; Corbett, M. (1993). SUMI User Handbook. Human Factors Research Group, University College Cork, Ireland.
[65] Preece, J.; Benyon, D.; Davies, G.; Keller, L.; Rogers, Y. (1993). A Guide to Usability: Human Factors in Computing. Reading, MA: Addison-Wesley.
[66] Preece, J.; Rogers, Y.; Sharp, H.; Benyon, D.; Holland, S.; Carey, T. (1994). Human-Computer Interaction. Addison-Wesley.
[67] Quesenbery, W. (2003): Dimensions of usability. In: Albers, M.; Mazur, B. (Eds.), Content and Complexity: Information Design in Technical Communication. Mahwah, NJ: Lawrence Erlbaum Associates.
[68] Ravden, S.; Johnson, G. (1989). Evaluating Usability of Human-Computer Interfaces: A Practical Method. Ellis Horwood Limited, New York.
[69] Seffah, A.; Donyaee, M.; Kline, R. B.; Padda, H. K. (2006): Usability measurement and metrics: A consolidated model. Software Quality Control, 14(2).
[70] Shneiderman, B. (1986): Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, Reading, MA.
[71] Shackel, B. (1986): Ergonomics in design for usability. In: Harrison, M. D.; Monk, A. F. (Eds.), Proceedings of the Second Conference of the British Computer Society Human Computer Interaction Specialist Group: People and Computers - Design for Usability, Cambridge. New York: Cambridge University Press, pp. 44-64.
[72] Shackel, B. (1991). Usability - context, framework, definition, design and evaluation. In: Shackel, B.; Richardson, S. (Eds.), Human Factors for Informatics Usability. Cambridge: Cambridge University Press, pp. 21-38.
[73] Squires, D.; Preece, J. (1999): Predicting quality in educational software: Evaluating for learning, usability and the synergy between them. Interacting with Computers, 11(5), pp. 467-483.
[74] Smith, S.; Mosier, J. (1984): Design Guidelines for the User Interface for Computer-Based Information Systems. Bedford, MA: The MITRE Corporation.
[75] Thomas, R. L. (1998). Elements of performance and satisfaction as indicators of the usability of digital spatial interfaces for information-seeking: Implications for ISLA. PhD diss., University of Southern California.
[76] Tullis, T. S. (2009): Top Ten Myths about Usability. Simmons College.
[77] Tractinsky, N. (1997): Aesthetics and apparent usability: Empirically assessing cultural and methodological issues. CHI '97 Conference Proceedings, pp. 115-122.
[78] Tullis, T.; Albert, B. (2008): Tips and Tricks for Measuring the User Experience. UPA Boston's Seventh Annual Mini UPA Conference.
[79] Turner, S. (2002): The HEP test for grading Web site usability. Computers in Libraries, 22(10), pp. 37-39.
[80] Tyldesley, D. A. (1988): Employing usability engineering in the development of office products. Computer Journal, 31(5), pp. 431-436.
[81] Vanderdonckt, J. (1999): Development milestones towards a tool for working with guidelines. Interacting with Computers, 11(4).
[82] Veldof, J. R.; Prasse, M. J.; Mills, V. A. (1999): Chauffeured by the user: Usability in the electronic library. Journal of Library Administration, 26(3-4), pp. 115-140.
