DEVELOPMENT OF A DECISION TOOL FOR COST JUSTIFICATION OF USABILITY

by

Burchan Aydin, M.A.

A Dissertation
In
Industrial Engineering

Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for the Degree of

Doctor of Philosophy

Approved

Dr. Mario G. Beruvides, Ph.D., P.E.
Chair of Committee

Dr. Patrick Patterson, Ph.D., P.E., C.P.E.

Dr. Milton Smith, Ph.D., P.E.

Dr. Jay Conover

Mark Sheridan
Dean of the Graduate School

August, 2014

Copyright 2014, Burchan Aydin

Texas Tech University, Burchan Aydin, August 2014

ACKNOWLEDGMENTS

I would like to thank my parents and my sister for their unconditional mental, spiritual, and financial support, and for their patience. Feeling their love, support, and encouragement from thousands of miles away, every single day of this six-year journey, has been a crucial driving force.

I would like to thank and dedicate this dissertation to my grandmother, who prayed for my success every single day but passed away two weeks before my final defense. I hope she sees this from somewhere above the clouds. I also thank my grandfather for all the support in our short but happy phone calls.

I would like to thank Dr. Mario G. Beruvides for being my mentor and advisor. His advice in all aspects of life has helped me stay strong and build the confidence to finish my Ph.D. I have learned a great deal from him with respect to research, teaching, leadership, and management.

I would like to thank my dissertation committee of Dr. Milton Smith, Dr. William J. Conover, and Dr. Patrick Patterson for their support and statistical advice. Their teaching and counsel certainly enhanced this research experience.

I would like to thank all my colleagues in the Systems Lab, with special thanks to Himlona, Javier, Ean, Charlie, Steve, Rodrigo, Gana, Chin, and Pepe. I would like to thank Nicole, Jolinda, and Greg Cruz for their continuous support and motivation. At the times I was losing hope, Nicole's smiles, jokes, and her endless love and trust in me helped me rebuild my self-confidence. When you have someone in your life who loves you and trusts you that much, success is inevitable.

I would like to thank Dr. Mukaddes Darwish for being no different than family to me, and for teaching me how to stand strong and how to handle problems. Last but not least, I would like to thank the Construction Engineering Department for giving me the opportunity to work there for the last four years.


TABLE OF CONTENTS

ACKNOWLEDGMENTS
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
I. INTRODUCTION
    History and Background
    Problem Statement
    Research Questions
        Research Question for Research 1
        Research Question for Research 2
        Research Question for Research 3
    General Hypotheses
        Research 1
        Research 2
        Research 3
    Research Format
    Research Purpose
    Research Objective
    Limitations & Assumptions
        Limitations
        Assumptions
    Relevance of this Study
    Need for this Research
    Benefits of this Research
        Theoretical Benefits
        Practical Benefits
    Research Outputs and Outcomes
II. LITERATURE REVIEW
    Introduction
    Usability, User Centered Design, and Usability Engineering
        Usability
        User Centered Design
        Usability Engineering
    Usability Problems
    Usability Methods
    Perspectives on Usability
    Cost Justification of Usability
    Usability Cost Justification Models
        Mayhew and Mantei's Analysis and Later Updates
        Karat's Analysis and Later Updates
        Ehrlich and Rohn's Analysis and Later Updates
        Bohmann's Analysis
        Bevan's Model
    Potential Benefits of Usability Investments
    Potential Costs of Usability
    Theoretical Model
III. RESEARCH 1: A METHODOLOGY FOR DATA COLLECTION AND ANALYSIS TO ASSIST COST JUSTIFICATION OF USABILITY
    Abstract
    Introduction
    Methods
        Data Collection
        Sample Distributions
        Two-way Interactions
        Bootstrap
    Results and Discussion
        Distribution Results
        Two-way Interactions
        Bootstrap Results
    Conclusion
    References
IV. RESEARCH 2: GOAL PROGRAMMING MODEL FOR USABILITY BENEFIT EXPECTATIONS
    Abstract
    Introduction
    Methodology
        Increased Sales and Increased Traffic Rate Model
        Decreased Task Time and Decreased Error Rate Model
    Sensitivity Analysis
        Sensitivity Analysis for Sales and Traffic Model
        Sensitivity Analysis for Task Time and Error Rate Model
    Conclusion
    References
V. RESEARCH 3: DETERMINING CONSTRAINTS OF GOAL PROGRAMMING MODEL FOR USABILITY BENEFITS
    Abstract
    Introduction
    Method
    Results
        Sales and Traffic Quantile Regression Results
        Task Time and Error Rate Quantile Regression Results
    Conclusion
    References
VI. GENERAL CONCLUSIONS AND FUTURE WORK
    Research Features and Outcomes
    Findings
    Future Research
    References
BIBLIOGRAPHY


ABSTRACT

This dissertation follows a three-paper format. The abstract for each paper is presented below.

First Research Study

A Methodology for Data Collection and Analysis to Assist Cost Justification of Usability

Usability investments help companies increase benefits and reduce costs. Many categories of increased benefits and reduced costs are mentioned in the cost justification literature, and it is hard for usability practitioners to justify usability through all of them. This paper examines the significant gains from usability that practitioners should focus on based on their company type: (1) increased sales, (2) reduced error rates, (3) reduced task times, and (4) increased website traffic. The study involves a literature review, Pareto analysis, distribution fitting, and bootstrap sampling.
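The bootstrap step described above can be sketched as follows: resample the observed benefit data with replacement, compute the lower quantile of each resample, and summarize the resulting distribution. The sales figures and all settings below are hypothetical illustrations, not the dissertation's data set.

```python
import random
import statistics

def percentile(xs, q):
    """Simple empirical percentile: value at rank q of the sorted sample."""
    s = sorted(xs)
    return s[min(len(s) - 1, int(q * len(s)))]

def bootstrap_lower_quantile(data, q=0.25, n_boot=2000, seed=42):
    """Resample with replacement; collect the lower-quantile statistic of each resample."""
    rng = random.Random(seed)
    return [percentile([rng.choice(data) for _ in data], q)
            for _ in range(n_boot)]

# Hypothetical percentage increases in sales reported across case studies
sales_gain = [5, 8, 12, 15, 20, 25, 30, 40, 55, 75, 100, 150]

boot = bootstrap_lower_quantile(sales_gain)
point = statistics.mean(boot)                            # bootstrap point estimate
ci = (percentile(boot, 0.025), percentile(boot, 0.975))  # 95% percentile interval
print(f"lower-quantile estimate: {point:.1f}%, 95% CI: {ci}")
```

The lower quantile, rather than the mean, gives a conservative "at least this much" figure a practitioner could defend when justifying an investment.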



Second Research Study

Goal Programming Model for Usability Benefit Expectations

The key objective of this study is to optimize usability benefit expectations, taking into account the fact that different types of companies expect different benefits. The paper consists of two parts: (1) an optimization model for the assumptions of percentage increase in sales and percentage increase in website traffic, and (2) an optimization model for the assumptions of percentage change in error rate and percentage change in task time. The optimization technique used is goal programming. Finally, a sensitivity analysis is performed to test how sensitive the results are to small changes in the constraints and the variables.
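The weighted goal-programming form can be illustrated with a deliberately tiny sketch: minimize the weighted deviations of two benefits from their targets, subject to a linear constraint linking them. The targets, weights, and interaction constraint below are all invented for the example; the dissertation's models are more elaborate, and a real solver would handle the deviation variables explicitly instead of this brute-force search.

```python
# Weighted goal program, solved by searching along the feasible line:
#   minimize  w_s*(d_s_minus + d_s_plus) + w_t*(d_t_minus + d_t_plus)
#   s.t.      sales   + d_s_minus - d_s_plus = T_SALES
#             traffic + d_t_minus - d_t_plus = T_TRAFFIC
#             sales = 5.0 + 0.5 * traffic      (hypothetical interaction constraint)
T_SALES, T_TRAFFIC = 30.0, 20.0   # hypothetical target % gains
W_SALES, W_TRAFFIC = 1.0, 1.0     # equal weights

best = None
for t in range(0, 1001):                      # traffic 0.0 .. 100.0 in 0.1 steps
    traffic = t / 10.0
    sales = 5.0 + 0.5 * traffic               # stay on the interaction constraint
    # |x - T| equals d_minus + d_plus at the optimum of the deviation variables
    dev = W_SALES * abs(sales - T_SALES) + W_TRAFFIC * abs(traffic - T_TRAFFIC)
    if best is None or dev < best[0]:
        best = (dev, sales, traffic)

dev, sales, traffic = best
print(f"sales={sales:.1f}%, traffic={traffic:.1f}%, total deviation={dev:.1f}")
```

Changing the weights shifts which target the model sacrifices first, which is exactly what the sensitivity analysis in the paper probes.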


Third Research Study

Determining Constraints of Goal Programming Model for Usability Benefits

The objective of this study is to analyze the interaction constraints of the goal programming models developed by Aydin and Beruvides (2014b), in which two goal programming models were developed for the increased sales and traffic rate benefits and the decreased error rate and task time benefits. Ordinary least squares regression was used to determine the two-way interactions, and an equal slope was assumed for all quantiles of the dependent variables. The results of this study indicated that there were no significant differences in the slopes of the regression lines across quantiles. Thus, this research supports the accuracy of the approach taken in Aydin and Beruvides (2014b) with respect to the interaction constraints used in the models.
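The equal-slope check can be sketched as follows: fit ordinary least squares, then fit quantile regressions at several quantiles via the pinball (check) loss, and compare the slopes. The data, the grid resolution, and the grid bounds are hypothetical; a production analysis would use a proper quantile-regression solver rather than this grid search.

```python
def ols_slope(xs, ys):
    """Closed-form ordinary least squares slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def pinball_loss(residuals, tau):
    """Check loss minimized by the tau-th conditional quantile."""
    return sum(tau * r if r >= 0 else (tau - 1) * r for r in residuals)

def quantile_slope(xs, ys, tau):
    """Grid search over (intercept, slope) minimizing the pinball loss."""
    best = None
    for b0 in [i / 10 for i in range(-10, 31)]:      # intercepts -1.0 .. 3.0
        for b1 in [i / 10 for i in range(10, 31)]:   # slopes 1.0 .. 3.0
            loss = pinball_loss([y - (b0 + b1 * x) for x, y in zip(xs, ys)], tau)
            if best is None or loss < best[0]:
                best = (loss, b1)
    return best[1]

# Hypothetical benefit pairs: y ≈ 1 + 2x plus small symmetric noise
xs = list(range(10))
noise = [0.3, -0.3, 0.2, -0.2, 0.1, -0.1, 0.0, 0.3, -0.3, 0.0]
ys = [1 + 2 * x + e for x, e in zip(xs, noise)]

slopes = {tau: quantile_slope(xs, ys, tau) for tau in (0.25, 0.5, 0.75)}
print("OLS slope:", round(ols_slope(xs, ys), 3), "quantile slopes:", slopes)
```

With homoskedastic noise, as here, the quantile slopes cluster around the OLS slope, which is the pattern that would justify the equal-slope assumption.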


LIST OF TABLES

Table 2.1 - Trend in Share of Project Budget (Nielsen et al., 2013)
Table 2.2 - The Characteristics of Usability (Nielsen, 1993)
Table 2.3 - Characteristics of Usability (Quesenbery, 2001)
Table 2.4 - Combined List of Usability Characteristics
Table 2.5 - User Interface Evaluation Methods (Nielsen, 1994)
Table 2.6 - Usability Inspection Methods (Nielsen, 1994)
Table 2.7 - Latest Usability Methods (www.usability.gov/methods)
Table 2.8 - Usability Evaluation Techniques (Holzinger, 2005)
Table 2.9 - Potential Benefit Categories for Web Sites (Mayhew & Tremaine, 2005)
Table 2.10 - The Usability Maturity Model
Table 2.11 - Benefits for Organizations Developing IT Systems for Internal Use
Table 2.12 - Benefits for Organizations Developing their Websites, Intranets, or Extranets
Table 2.13 - Organizations Developing E-commerce, Intranet or Extranet for External Parties
Table 2.14 - Benefits for Organizations Developing Commercial Software
Table 2.15 - Benefits for Vendor Companies
Table 2.16 - Costs of Usability
Table 3.1 - Number of Hypothetical and Real Data
Table 3.2 - Increased Sales Distribution Fitting, Outliers Excluded
Table 3.3 - Increased Sales Distribution Fitting with Outliers
Table 3.4 - Decreased Task Time Distribution Fitting
Table 3.5 - Decreased Error Rate Distribution Fitting
Table 3.6 - Increased Traffic Rate Distribution Fitting
Table 3.7 - Decreased Training Time Distribution Fitting
Table 3.8 - Increased Sales x Increased Traffic Rate Regression Summary Output
Table 3.9 - Decreased Task Time x Decreased Error Rate Regression Summary Output
Table 3.10 - Bootstrap Sample Sizes and the Number of Samples
Table 3.11 - Descriptive Statistics of the Lower Quantile Statistic of the Bootstrap Samples of Sales
Table 3.12 - Descriptive Statistics of the Lower Quantile Statistic of the Bootstrap Samples of Web Site Traffic
Table 3.13 - Descriptive Statistics of the Lower Quantile Statistic of the Bootstrap Samples of Error Rate
Table 3.14 - Descriptive Statistics of the Lower Quantile Statistic of the Bootstrap Samples of Task Time
Table 4.1 - Variables of the Sales x Traffic Rate Model
Table 4.2 - Variables of the Error Rate and Task Time Model
Table 4.3 - Sales Traffic Model Sensitivity Report with Equal Weights
Table 4.5 - Task Time Error Rate Model Sensitivity Report with Equal Weights
Table 5.1 - The Number of Interaction Data Points in the Data Table (Aydin and Beruvides, 2014a)
Table 5.2 - Random Samples
Table 5.3 - Quantile Regression Coefficients and Confidence Intervals for Sales and Traffic with Original Data Points
Table 5.4 - Comparison of Ordinary Least Squares Regression and Quantile Regression
Table 5.5 - Quantile Regression Coefficients and Confidence Intervals for Sales and Traffic with Random Sampled Ranked Data Set
Table 5.6 - Comparison of Ordinary Least Squares Regression and Quantile Regression
Table 5.7 - Quantile Regression Coefficients and Confidence Intervals for Task Time and Error Rate with Original Data Points
Table 5.8 - Comparison of Ordinary Least Squares Regression and Quantile Regression
Table 5.9 - Quantile Regression Coefficients and Confidence Intervals for Task Time and Error Rate with Random Sampled Ranked Data Set
Table 5.10 - Comparison of Ordinary Least Squares Regression and Quantile Regression
Table 6.1 - Number of Hypothetical and Real Data
Table 6.2 - Benefits of Software Usability Investments
Table 6.3 - Summary of the Theoretical Distributions
Table 6.4 - Summary List of the Population Lower Quantiles for Usability Benefits


LIST OF FIGURES

Figure 2.1 - The Impact of Usability on System's Acceptability (Nielsen, 1993, p. 25)
Figure 2.2 - The Ease-of-Use Pyramid (Sherman, 2006, p. 22)
Figure 2.3 - Iterative Process for Usability Goals (Sherman, 2006, p. 23)
Figure 3.1 - Pareto Analysis for Usability Benefits Mentioned in the Literature
Figure 3.2 - Pareto Chart for Numeric Data Availability
Figure 3.3 - Pareto Chart for Percentages Data
Figure 3.4 - Increased Sales Lower Quantile Bootstrap Distribution
Figure 3.5 - Increased Traffic Lower Quantile Bootstrap Distribution
Figure 3.6 - Decreased Error Rate Lower Quantile Bootstrap Distribution
Figure 3.7 - Decreased Task Time Lower Quantile Bootstrap Distribution
Figure 4.1 - Sales Traffic Model Sensitivity Analysis
Figure 4.2 - Sensitivity Analysis for Higher Weights for Traffic
Figure 4.3 - Sensitivity Analysis for Higher Weights for Sales
Figure 4.4 - Task Time and Error Rate Model Sensitivity Analysis
Figure 4.5 - Sensitivity Analysis for Higher Weights for Error Rate
Figure 4.6 - Sensitivity Analysis for Higher Weights for Task Time
Figure 5.1 - Comparison of the Quantile Regression and Ordinary Least Squares Regression Coefficients for Sales and Traffic Rate
Figure 5.2 - Comparison of the Quantile Regression and Ordinary Least Squares Regression Coefficients for Task Time and Error Rate


CHAPTER I
INTRODUCTION

History and Background

Usability is the ease of use and acceptability of a system for a specific class of users carrying out particular tasks in a specific environment (Holzinger, 2005, p. 1). The rise in the popularity of the Web over the last decade has increased the significance of usability: a wider and less technical user population has come to dominate the web, demanding ease of use (Black, 2002). To remain competitive in the software market, organizations have increasingly treated usability as an important investment alternative. Investing in usability reduces various types of costs, such as support, training, and development costs; increases sales for organizations; and improves effectiveness, efficiency, and satisfaction for end users. However, as with all investments, there are costs to be dealt with. Costs can be calculated by determining the resources needed to perform the required usability tasks and techniques. Benefits, on the other hand, are estimated by usability practitioners mainly by drawing on their expertise to make assumptions about possible improvements in the performance measures. There are very few case studies in the literature that are empirically based and supported by substantiated data. This causes a problem, since other investment opportunities do have validated data; usability competes against these alternatives, but generally with unverified data. To convince management to invest in usability, it must be demonstrated that usability engineering is better than other potential investments and that its main benefit is increasing return on investment over time (Lund, 1997). Several problems have been detected in the usability cost justification research: the possible impact of other factors on increased sales, the applicability of the models to differing product types, ignored sustaining costs, a lack of model validation, and the reliability of practitioners' assumptions.


While evaluating return on investment, usability researchers mostly assume that the improvement in a benefit category is due to usability alone. For instance, the Staples.com web site was improved by Human Factors International using usability principles in 1999, and sales increased by 491% compared to the previous year's figures. Yet it is hard to determine whether the whole increase in sales was due to usability or whether other factors also contributed. Thus, calculating the return on investment by attributing the entire 491% increase to usability may be a biased approach to justifying usability. Many other case studies likewise ignored the impact of other factors that might have affected sales. Rosenberg (2004) claimed that a change in price, a doubling in the size of the sales force, successful marketing, or bad press for a competitor could also affect the revenue increase. Hanson and Castleman (2006) suggested that the addition of new features, an improvement in the overall economy, a new advertising campaign, or the exit of a competitor may be possible reasons. Therefore, the real impact of usability on increased sales could actually be less than what was shown in those case studies. This indicates that justifying increased sales has most likely been biased in cases where the whole sales increase was attributed to the usability improvement.

Current models are mostly developed for usability applications in software products, not hardware products. It is not clear whether the models can be applied to hardware products that do not have user interfaces. Moreover, it is not yet settled whether these models fit both mass-produced and tailored products (Rajanen, 2003). The current usability cost-justification models therefore have limited applicability.

The majority of the cost-justification models consider only the initial costs of a usability investment and ignore possible sustaining costs throughout the lifetime of the specific usability project. In these models the benefits are added each year, but the costs are merely initial costs. A few studies acknowledged the possibility of sustaining costs but claimed that they could be ignored in the cost justification, since they would be too low to affect the results. On the contrary, there is a possibility that sustaining costs could have a dramatic effect on

the results. Thus, it may be wrong to ignore sustaining costs in the usability cost justification process.

There is a lack of validation of usability cost-justification results. Many cases show estimations of the return on investment of usability, yet comparison of these estimations against the actual results after the investment has been ignored. Only a few studies mention comparing estimated return on investment values against actual values (Aydin, Millet, and Beruvides, 2011). Wilson and Rosenbaum (2005) discussed cases where the actual return came out less than the estimated return; however, these results were not published in the literature but only reported in workshops and private conversations. According to Nielsen (1993), 63% of large software projects significantly overrun their estimated budgets, and software managers rated the four reasons bearing the most responsibility as related to usability engineering (Nielsen, 1993). Thus, discrepancy between model estimations and actual results is a problem in current cost justification models.

The benefit-cost ratio has been used to express return on investment in most of the literature. However, the benefit-cost ratio is an analysis technique used by governmental organizations for public investment analysis (Newnan, Lavelle, and Eschenbach, 2009); it is not suitable for private organizations. No researcher has discussed this issue yet.

The major problem is that the usability cost-justification process requires many assumptions about the expected level of usability benefit, and these assumptions are key factors affecting the return on investment results. There is no method for finding reliable estimates for these assumptions; usability practitioners use their experience in specific fields to make the assumptions needed to justify usability. Inaccurate estimates of benefits may mislead decision makers. Therefore, it is important to develop a tool to aid usability practitioners in making assumptions about the benefits of usability.
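The benefit-cost ratio concern discussed above can be made concrete with a small worked example (all figures are hypothetical): discount a stream of usability benefits, then compute both the benefit-cost ratio and the net present worth a private firm would more typically act on.

```python
# Hypothetical one-time usability investment with a three-year benefit stream
cost = 50_000.0            # initial usability investment
annual_benefit = 25_000.0  # estimated yearly benefit (e.g., support-cost savings)
years = 3
rate = 0.10                # discount rate

# Present value of the benefit stream, discounted year by year
pv_benefits = sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))
bcr = pv_benefits / cost            # benefit-cost ratio
net_present_worth = pv_benefits - cost

print(f"PV of benefits: {pv_benefits:,.0f}")
print(f"BCR: {bcr:.2f}  (>1 suggests acceptance)")
print(f"Net present worth: {net_present_worth:,.0f}")
```

Both measures are built from the same discounted cash flows; the argument in the text is about which summary a private organization should rely on, not about the arithmetic.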


Problem Statement

Usability cost justification models require parameters and assumptions to measure the costs and benefits of a usability investment. The parameters can be determined by usability practitioners and are based on actual data. The assumptions about expected usability benefits, however, are based mainly on the experience of the usability practitioners and are not comparable between different industries. Any small variation in the assumptions can cause dramatic changes in the justification results. Since there is no consistent method in the literature for arriving at these assumptions, justification of usability currently depends on the accuracy of the usability practitioners' estimates as well as on assumptions that may not be measurable. Therefore, it is important to develop a decision tool for the standardization of assumptions that usability practitioners can rely on to perform a proper economic usability evaluation.

Research Questions

What effects do software usability investments have on various types of companies? Could an optimization model be developed that provides estimates of software usability effects? What are the relationships between these effects? How do these effects change based on the quantiles of the usability benefits?

Research Question for Research 1

Although many usability benefits are mentioned in the usability literature, it is hard to find quantitative data for most of them, and it is neither feasible nor practical to focus on all benefit types for justification purposes. Usability practitioners need to know which benefits to focus on based upon the company type and how much return to expect. Considering this, the first research study tackles the following research questions: 'What are the significant benefits of software usability investments? Which of these benefits can be quantified? Are there any dependencies between these benefits, or are they independent? How are these benefits distributed? What are the population means and lower quantiles for these benefits?'



Research Question for Research 2
No previous study has attempted to model the expectations of usability benefits, and there is a strong need to optimize these benefit expectations. Thus, the second research study tackles the following research questions: 'How can a goal programming model be developed that estimates changes in the significant, dependent usability benefits? How should the target values for the goal programming model be decided upon? How can usability practitioners retrieve different values from this model to justify usability investments, depending on the company type and the key expectations from the investment?'

Research Question for Research 3
In the goal programming models of the second research study, the slope of the interaction constraint was assumed to be constant across all quantiles. How does the slope of the regression line behave in different quantile regions of the dependent variable? What is the impact of a one-percentage-point increase in one benefit on other benefits at different levels of percentage change? In other words, what is the relationship between usability benefits at different quantiles of a given benefit type? This sensitivity analysis therefore tackles the following research question: 'What are the relationships between usability benefits at low, medium and high levels of the dependent variable?'

General Hypotheses
The general hypothesis of this research is that companies' expectations from usability investments differ depending on the company type. The research questions are addressed through three hypotheses, one per study.
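Quantile regression, the method applied in this third study, can be framed as a linear program that minimizes the asymmetric 'pinball' loss at a chosen quantile τ, so the fitted slope may legitimately differ across quantiles. The sketch below is purely illustrative: the data and variable names are hypothetical and are not the dissertation's dataset.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_slope(x, y, tau):
    """Fit y = b0 + b1*x at quantile tau via linear programming.

    Minimize tau*sum(u_plus) + (1-tau)*sum(u_minus)
    subject to b0 + b1*x_i + u_plus_i - u_minus_i = y_i, with u >= 0.
    """
    n = len(x)
    # decision variables: [b0, b1, u_plus (n of them), u_minus (n of them)]
    c = np.concatenate([[0.0, 0.0], tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([np.ones((n, 1)), np.asarray(x).reshape(-1, 1),
                      np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * 2 + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=np.asarray(y, dtype=float),
                  bounds=bounds, method="highs")
    return res.x[0], res.x[1]  # intercept, slope

# Hypothetical data: % change in traffic against % change in sales
sales = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
traffic = np.array([3.1, 5.8, 8.2, 11.5, 13.9, 17.0, 19.4, 23.1])
for tau in (0.25, 0.50, 0.75):
    b0, b1 = quantile_slope(sales, traffic, tau)
    print(f"tau={tau}: slope={b1:.3f}")
```

If the printed slopes differ noticeably across τ, the constant-slope assumption of the goal programming model would be questionable, which is exactly what this study tests.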

Research 1
The literature lists many benefits to be gained by investing in usability. However, the justification process should focus on the significant benefits that can be quantified. This research explores those benefits and the dependencies between them. The hypothesis for this research is as follows:
H1: Some benefits of usability are dependent on each other.


Research 2
Using the variables and dependencies determined in the first research study, a goal programming model is developed. The model provides optimal solutions for usability benefit assumptions based on the weights assigned to deviations from target values. These weights can be assigned depending on company type and expectations.
H2: Company type affects the focus of usability investments and thus yields different benefit assumption outcomes.
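As a concrete sketch of this kind of model, consider two dependent benefits, say a sales increase s and a traffic increase t, tied together by a regression-derived interaction constraint t = k·s, with a target for each and weighted penalties on the deviation variables. All numbers and the slope k below are hypothetical placeholders, not the dissertation's estimates; the point is only that changing the weights changes which target the model sacrifices.

```python
import numpy as np
from scipy.optimize import linprog

def goal_program(t_sales, t_traffic, k, w_sales, w_traffic):
    """Weighted goal program: minimize penalties on deviations from targets.

    Variables: [s, t, d1m, d1p, d2m, d2p], all >= 0, where
      s + d1m - d1p = t_sales     (sales goal with under/over deviations)
      t + d2m - d2p = t_traffic   (traffic goal with under/over deviations)
      t - k*s       = 0           (interaction constraint from regression)
    """
    c = np.array([0, 0, w_sales, w_sales, w_traffic, w_traffic], dtype=float)
    A_eq = np.array([[1.0, 0.0, 1.0, -1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0, 1.0, -1.0],
                     [-k,  1.0, 0.0, 0.0, 0.0, 0.0]])
    b_eq = np.array([t_sales, t_traffic, 0.0])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 6, method="highs")
    return res.x[0], res.x[1]  # optimal (sales, traffic) benefit estimates

# Targets: +20% sales, +30% traffic; interaction slope k = 1.2 (hypothetical).
# Equal weights favor hitting the traffic target; weighting sales more flips it.
print(goal_program(20, 30, 1.2, 1, 1))  # sales pushed to 25 so traffic reaches 30
print(goal_program(20, 30, 1.2, 3, 1))  # sales held at 20, traffic settles at 24
```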

Research 3
This research explores the tradeoffs between correlated usability benefits. The relationship between sales and traffic is analyzed across varying levels of percentage change in sales; similarly, the connection between error rate and task time is examined.
H3: A percentage-point change in a usability benefit affects a dependent usability benefit differently according to the level of percentage change in the dependent benefit.

Research Format
This research consists of three sequential studies that investigate the key benefits gained by investing in usability improvements for any type of software. The first study involves data collection and analysis, fitting theoretical distributions to estimate the population distribution, and bootstrapping the data table samples to estimate population lower quantiles for these benefits. It also includes regression analysis to determine the relationships between the benefit assumptions. The second study develops a goal programming model that provides numerical values for usability benefit assumptions. The third study examines the slope of the regression lines across differing quantiles using the quantile regression method.
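The bootstrapping step of the first study can be sketched as follows. The numbers are hypothetical percentage-improvement observations, not the dissertation's data table; the routine resamples with replacement and reports a point estimate and a percentile interval for a lower (25th) quantile of the population.

```python
import numpy as np

def bootstrap_lower_quantile(data, q=0.25, n_boot=10000, seed=42):
    """Bootstrap estimate of a population quantile with a 95% percentile CI."""
    rng = np.random.default_rng(seed)
    n = len(data)
    boot_stats = np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.choice(data, size=n, replace=True)  # sample with replacement
        boot_stats[i] = np.quantile(resample, q)
    estimate = boot_stats.mean()
    ci_low, ci_high = np.percentile(boot_stats, [2.5, 97.5])
    return estimate, (ci_low, ci_high)

# Hypothetical observed benefit changes (e.g., % increases reported in case studies)
benefits = np.array([5.0, 8.0, 12.0, 15.0, 20.0, 25.0, 30.0, 40.0, 55.0, 80.0])
est, (lo, hi) = bootstrap_lower_quantile(benefits, q=0.25)
print(f"25th-quantile estimate: {est:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```

Using a lower quantile rather than the mean yields deliberately conservative benefit assumptions for the justification model.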


Research Purpose
The primary purpose of this research is to build a decision tool that estimates reliable usability benefit expectations for usability practitioners and engineering managers, based on each company's focus of benefit expectations, by examining the existing cost-justification models and case studies. The major benefits of software usability investments will be estimated via a goal programming model to help usability practitioners cost justify software usability.

Research Objective
The primary objectives of this research are:
1. To determine the major benefits of software usability investments.
2. To analyze the distributions of those major benefits.
3. To develop a goal programming model to estimate values of the major benefits.
4. To analyze the relationships between the major benefits.
5. To estimate population statistics for the major benefits.
6. To analyze quantiles of the major benefits.
Several secondary objectives must also be met:
1. To analyze the existing usability cost-justification models.
2. To analyze the real and hypothetical case studies demonstrating usability cost justification.

Limitations & Assumptions
As with all research, there are limitations and assumptions. The following apply to this specific research effort:

Limitations
This research has the following limitations:
1. The product scope is limited to software products and services. Usability cost justification for hardware products is not taken into consideration.
2. Due to the limited number of studies providing financial data, some important benefits of usability are not analyzed in this research: reduced maintenance and support costs, reduced development cost and time, increased user satisfaction and reduced employee turnover, reduced learning times, and reduced drop-off rates from websites.
3. Very few empirical studies in the research domain provide quantitative data, which constrains the statistical analysis.
4. The distinction between mass-produced and tailored products, in terms of cost justification, is not of interest in this research.
5. The changes in usability benefits are assumed to result solely from the usability investment. External factors that could have affected the changes in benefits, such as global economic trends and competitors' strategies, are not incorporated in this study due to data unavailability.
6. This dissertation mainly focuses on products that were already released and needed some usability improvement in the post-release stage; very few cases are reported in the open literature in which a product was designed for high usability within the software development lifecycle.

Assumptions
As with the limitations, there are assumptions in this research:
1. A reduction in a cost type is assumed to be a benefit.
2. All data are converted to annual terms using the effective rate formula.
3. The ceteris paribus assumption is used to hold all factors other than usability constant: "Ceteris paribus assumption is holding all other factors constant so that only the factor/factors being studied is/are allowed to change" (Nicholson and Snyder, 2007).
4. The terms usability, user centered design, and usability engineering are all cost justified in the literature. In this research, the term 'usability cost justification' covers both 'user centered design cost justification' and 'usability engineering cost justification'.
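Assumption 2 refers to the standard effective-rate conversion. Assuming the usual compounding form (the formula itself is not reproduced in the text above), a rate observed over a shorter period is annualized as r_annual = (1 + r_period)^n - 1, where n is the number of periods per year:

```python
def effective_annual_rate(periodic_rate, periods_per_year):
    """Annualize a periodic rate of change by compounding:
    r_annual = (1 + r_period) ** n - 1."""
    return (1.0 + periodic_rate) ** periods_per_year - 1.0

# e.g., a 2% monthly change compounds to roughly a 26.8% annual change
print(round(effective_annual_rate(0.02, 12), 4))  # 0.2682
```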


Relevance of this Study
This research is relevant to industry because it provides usability practitioners in the private and public sectors with a tool to cost justify the usability of software products using more reliable benefit assumptions. It supports cost justification of usability in (1) organizations developing IT systems, commercial software, websites, intranets, and extranets for internal or external use, and (2) vendor organizations (which sell software and also use it internally). The research is relevant to academia because it collects all studies in the usability cost justification research area: a complete list of usability cost and benefit components, pioneering models, secondary theories, and case studies of usability cost justification. It serves as a reference on usability, usability methods, and the return on investment of usability, and it presents the first optimization model developed in usability cost justification research.

Need for this Research
This research is essential because existing usability cost justification models require many assumptions, and these assumptions are drawn from usability practitioners' personal experience or estimates. Cost-to-benefit ratios of up to 1:1000 have been presented in the literature (Nielsen & Gilutz, 2003). Economically infeasible and unreliable financials misdirect some decision makers and discourage those with a strong economic background; rather than promoting investment in usability, such figures depress management. This research will benefit organizations by presenting an optimization model for usability cost justification. There has been no previous effort to model the expectations of usability benefits; this is thus a pioneering endeavor to optimize them.

Benefits of this Research
This research has theoretical and practical benefits.

Theoretical Benefits
The theoretical benefits of this research can be listed as follows:
1. The first modeling attempt for usability benefit assumptions.
2. Improved reliability of assumptions in the cost justification of usability.
3. Optimization of expectations from usability by considering the relationships between major usability benefits.
4. An enhanced economic and theoretical basis for usability cost justification.

Practical Benefits
The practical benefits of this research can be listed as follows:
1. A goal programming model for usability.
2. Easier cost justification, since reliable assumptions reduce the time spent estimating usability benefit levels for all cost justification models in the literature.
3. An up-to-date literature review on usability and usability cost justification.

Research Outputs and Outcomes
The results of this research will be:
1. A goal programming model for software usability.
2. A data table for the 'cost justification of usability' research area showing the benefits of usability mentioned in the studies.
3. A data table for the 'cost justification of usability' research area showing the quantitative data provided in the studies.



CHAPTER II
LITERATURE REVIEW

Introduction
Usability is an investment alternative that increases revenue and decreases costs for development and vendor organizations, and offers ease of use and ease of learning for users. This dissertation focuses specifically on the usability of software, including web applications, intranet applications, and the like. Hardware usability is not taken into consideration except for the user interfaces of hardware products. "User interface includes all aspects of the system design that influence the interaction between the user and the system" (Dray, 1995, p. 18). These aspects can be listed as (Dray, 1995, p. 18):
• The match with the tasks of the user,
• The metaphor that is used,
• The controls and their behaviors,
• Navigation within and flow between screens,
• Integration among different applications,
• The visual design of the screens.

This section explains usability and illustrates present methods for the cost justification of usability.

Usability, User Centered Design, and Usability Engineering
The concepts of usability, user centered design, and usability engineering are explained in this section in order to clarify the differences between these related concepts.

Usability
Usability is frequently described as 'ease of use'. However, it extends well beyond this short definition (Qusenbery, 2001). It is defined in the ISO 9241-11 guidelines as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (1998, p. 2). Thus, according to the ISO standard, the three main factors to consider when designing for usability are effectiveness, efficiency, and user satisfaction. Effectiveness is "the accuracy and completeness with which users achieve specified goals", whereas efficiency is "the use of resources in relation to the accuracy and completeness with which users achieve goals" (ISO 9241-11, 1998, p. 2). Common effectiveness metrics are task completion rates and error rates, while efficiency is usually measured by time on task and task flow deviation. Task flow deviation is "the ratio of the optimal number of steps to complete a task to the average number of steps to complete the task" (Sherman, 2006, p. 22). User satisfaction, in turn, is "the freedom from discomfort, and positive attitudes towards the use of the product" (ISO 9241-11, 1998, p. 2). Usability is very important for software systems: "software usability is accounted as a measure of how well a computer system (a software program, an e-commerce site etc…) helps learning, aids users remember what is learned, reduces errors, enables them to be efficient, and makes them satisfied with the system" (Donahue, Weinschenk, Consulting, & Nowicki, 1999, p. 33). Usable products add value to the company because the customer's needs and values are included in the design (Turner, 2011). Usability investments increased rapidly during the dot-com bubble era but have stabilized since 2001. See Table 2.1 for the share of project budgets devoted to usability (Nielsen et al., 2013).

Table 2.1- Trend in Share of Project Budget (Nielsen et al., 2013)
Year    Share of Project Budget
1971    3%
1989    5%
1994    6%
2001    13%
2002    10%
2006    10%
2007    12%
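Sherman's task flow deviation metric quoted above is a simple ratio; a short illustration with made-up step counts:

```python
def task_flow_deviation(optimal_steps, average_steps):
    """Ratio of the optimal number of steps to the average number of steps
    users actually take; 1.0 means users follow the optimal path exactly."""
    return optimal_steps / average_steps

# Hypothetical task: the optimal path is 5 steps, observed users average 8 steps
print(task_flow_deviation(5, 8))  # 0.625
```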


Some researchers have introduced different sets of characteristics for usability. For instance, Nielsen (1993) described the characteristics of usability as learnability, efficiency, memorability, low error rate, and satisfaction (Nielsen, 1993, p. 26). Table 2.2 shows Nielsen's explanations of these characteristics.

Table 2.2- The Characteristics of Usability (Nielsen, 1993)
Learnability: The system should be easy to learn so that the user can start getting work done rapidly.
Efficiency: The system should be efficient to use, so that once the user learns the system, a high level of productivity is possible.
Memorability: The system should be easy to remember, so that the casual user is able to return to the system after not having used it for a while, without having to learn everything all over again.
Errors: The system should not have a high error rate, so that users have fewer errors, and if they do commit errors, they can recover from them easily. Moreover, catastrophic errors must not occur.
Satisfaction: The system should be pleasant to use, so users like it. They are subjectively satisfied when using it.

Moreover, Qusenbery (2001) introduced the characteristics of usability shown in Table 2.3.

Table 2.3- Characteristics of Usability (Qusenbery, 2001)
Effective
  Definition: Completeness and accuracy with which users achieve specified goals.
  Explanation: Were users' goals met successfully? Is all work correct?
Efficient
  Definition: Speed with which users can complete the tasks for which they use the product.
  Explanation: How many clicks or keystrokes are required to execute a task? What is the total time on task?
Engaging
  Definition: Pleasant and satisfying design.
  Explanation: Is the product or design meeting the expectations and needs of the user audience?
Error tolerant
  Definition: Designing to prevent errors, and to help the user recover from any errors that occur.
  Explanation: Is it difficult to take incorrect or invalid actions? Is it difficult to take irreversible actions? Is the design or product planned for the unexpected?
Easy to learn
  Definition: An interface or product allowing users to build on their knowledge without deliberate effort.
  Explanation: Can users build not only on their prior knowledge, but also on any interaction patterns they have learned through use, in a predictable way?

There are many similarities among these sets of usability characteristics (ISO, Nielsen, and Qusenbery). However, Nielsen (1993) did not consider effectiveness, while Qusenbery omitted memorability. Combining the three sets of usability characteristics, a new list can be developed.

Table 2.4- Combined List of Usability Characteristics
Effectiveness: Qusenbery (2001) & ISO
Efficiency: Nielsen (1993), Qusenbery (2001) & ISO
Memorability: Nielsen (1993)
Error/Error tolerance: Nielsen (1993) & Qusenbery (2001)
Satisfaction/Engaging: Nielsen (1993), Qusenbery (2001) & ISO
Learnability/Ease of learning: Nielsen (1993) & Qusenbery (2001)


Given the definitions and characteristics of usability, it is also important to understand the position of usability from a systems perspective. The impact of usability on the acceptability of a system was introduced by Nielsen (1993).

[Figure 2.1 (diagram): System acceptability divides into social acceptability and practical acceptability. Practical acceptability covers cost, compatibility, reliability, and usefulness; usefulness divides into utility and usability; and usability comprises easy to learn, efficient to use, easy to remember, few errors, and subjectively pleasing.]

Figure 2.1- The Impact of Usability on System's Acceptability (Nielsen, 1993, p. 25)

Nielsen (1993) explained system acceptability as the sufficiency of the system to satisfy all needs and requirements of the users and other potential stakeholders, such as the users' clients and managers (p. 24). To achieve system acceptability, the system has to be both socially and practically acceptable. As an example of social acceptance, imagine a website that asks for ethnic information: for some users this might be acceptable, but others could perceive it as racial discrimination. If the system is socially acceptable, one can then consider its practical acceptability, analyzing the system in terms of its cost, reliability, compatibility with existing systems, and usefulness. Usefulness is the system's capability to achieve some predefined goal, and it is separated into utility and usability. Utility is the functionality of the system to do what is needed, and usability is how well users can use that functionality (p. 25). 'Utility' in Nielsen's categorization may be considered similar to effectiveness in the ISO standard and in Qusenbery's list. Usability and quality are different investment types. They are similar in improving system quality and cost-effectiveness and in reducing development time, but there is a key difference: "Usability focuses on the elements of a product or service that end users directly interact with, whereas quality assurance focuses on elements with which users do not interact directly. For instance, screens, windows, menus, error messages, consistency, and navigation are usability issues. On the other hand, code integrity is a quality assurance issue." (Weinschenk, 2005). The next subsection discusses user centered design.

User Centered Design
User Centered Design (UCD) is "a broad term to describe design processes in which the end-users influence how a design takes shape" (Abras, Maloney-Krichmar, and Preece, 2004, p. 1). It is also defined as "a philosophy based on the needs and interests of the user, with an emphasis on making products usable and understandable" (Norman, 1988, p. 188). "The UCD discipline includes a set of techniques to help development teams accurately collect and assess data on users' characteristics, goals, and motivations, as well as their workflow, pain points, and critical behaviors" (Sherman, 2006, p. 4). Typical UCD activities can be listed as (Sherman, 2006, p. 4):
• Watching people perform tasks that product or service providers would like to assist, expedite, or automate
• Discovering people's skills, goals, motivations, frustrations, and successes as they work
• Creating designs that assist, expedite, or automate, with attention to the intended users' skills and goals
• Testing these designs by letting the intended user group attempt to perform the tasks and activities supported by the product
• Revising the design as needed depending on the results of the test sessions.


However, it should be noted that UCD is not a process that accommodates everything users want. There are occasions in which user wishes will not be accepted due to business requirements; wishes such as free shipping or putting products on sale are not counted as user needs in the UCD philosophy (Francis, 2006). Usability engineering is discussed next.

Usability Engineering
Usability engineering (UE) is "a process, which amounts to specifying, quantitatively and in advance, what characteristics and in what amounts the end product to be engineered is to have" (Good, Spine, Whiteside, and George, 1986, p. 241). It can also be defined as "a method for designing usability into products and it involves the specification of usability metrics and goals" (Wixon and Jones, 1995, p. 7). Usability engineering is objective and laboratory oriented; however, it differs from traditional experimental approaches in its lack of hypothesis testing and statistical decision making. It is a method for assessing whether a product meets preselected usability goals and for determining usability problems (Wixon and Jones, 1995). Some strengths of usability engineering are (Wixon and Jones, 1995, p. 7):
• Determining an agreed-upon definition for usability
• Setting this definition in terms of metrics and goals for usability
• Treating usability as an equal with other engineering goals
• Presenting a context for an agreed-upon assessment of the usability of a product
• Providing a method for prioritizing usability problems.

No existing study in the literature describes the relationship between usability, user centered design, and usability engineering. However, the terms 'usability engineering' and 'usability' are used interchangeably in cost justification; that is, cost justification of usability and cost justification of UE were assumed identical by many researchers. Moreover, some studies use the term 'cost justifying UCD', which can likewise be considered equivalent to 'cost justification of usability', because UCD activities are performed precisely to improve the usability of the product or system.

Henceforth, only the term cost justification of usability (CJU) will be used in this report, as it covers justification of UCD and of usability engineering simultaneously. Moreover, usability engineer, usability practitioner, and usability professional all refer to individuals who provide usability services (Henneman, 2005); the term 'usability practitioner' will be used in this research. Usability practitioners may be involved in usability activities for an organization as an internal usability group (a usability team within the company), an external group (a usability consulting firm), or an individual consultant (Henneman, 2005). The main goal of usability practitioners is either finding and eliminating the usability problems in a current product or service, or creating a new product or service without usability problems. The next section discusses usability problems.

Usability Problems
Usability problems are defined as "aspects of a user interface that may cause the resulting system to have reduced usability for the end user" (Nielsen & Mack, 1994, p. 3). To improve the usability of an existing user interface, problems must first be detected; to create a new design with high usability, possible usability problems should be avoided. Several usability methods in the literature help detect existing or potential usability problems. The next section presents these methods.

Usability Methods
User interfaces can be evaluated in the four basic ways given in Table 2.5.


Table 2.5- User Interface Evaluation Methods (Nielsen, 1994)
Automatically: By running a user interface specification through evaluation
Empirically: By using real users
Formally: By using exact models and formulas to calculate usability measures
Informally: Based on rules of thumb and the general skill, knowledge, and experience of the evaluator

Formal tests and automatic methods are difficult to apply, which makes empirical and informal methods the most used user interface evaluation methods. Usability inspection is an informal method, defined as "the generic name for a set of methods based on having evaluators inspect or examine usability-related aspects of a user interface" (Nielsen, 1994, p. 1). Nielsen (1994) introduced the usability inspection methods provided in Table 2.6.

Table 2.6- Usability Inspection Methods (Nielsen, 1994)
Heuristic evaluations: Judging whether interface elements conform to established usability heuristics
Guideline reviews: Checking for conformance with usability guidelines
Pluralistic walkthroughs: Meetings where users, developers, and human factors specialists explore a scenario, discussing usability issues
Consistency inspections: Designers from different projects inspect an interface to see whether it is consistent with their own designs
Standards inspections: Checking the compatibility of the interface with market standards
Cognitive walkthroughs: Simulating the user's problem-solving process at each step in the human-computer dialogue
Formal usability inspections: Team participation including a moderator, a design owner, and inspectors
Feature inspections: Checking the functions' conformance to the needs of end users


Recently, various usability methods have evolved. Up-to-date methods are demonstrated in the methods section of the website usability.gov (www.usability.gov). Table 2.7 lists these usability methods with an explanation of each.

Table 2.7- Latest Usability Methods (www.usability.gov/methods)
Card Sorting: "Involving users in grouping information for a Web site. Participants review your Web site and then group the items in the website into categories."
Contextual Interviews: "Interviews conducted in participant's real work environment to watch and listen to them. Being performed in user's work environment, they are more natural and realistic."
Focus Group: "Discussion among eight to 12 users or potential users of your site covering a range of topics determined before. It helps learning about user's attitudes, beliefs, desires, and their reactions to ideas or to prototypes."
Heuristic Evaluation: "Evaluators examine the interface and judge its compliance with heuristics (usability principles)."
Individual Interviews: "Talking to one user at a time for 30 minutes to an hour face to face or by call or by video conferencing to understand the user's attitudes, beliefs, and experiences."
Parallel Design: "Several designers create an initial design from the same set of requirements working independently. At the end, each designer shares his/her design. The team considers each solution, and each designer uses the best ideas to further improve their own solution to generate many diverse ideas and to ensure that the best ideas are integrated into the final concept."
Personas: "A persona is an imaginary person who represents a key user group for your website. Creating personas helps the team focus on the users' goals and needs."
Prototyping: "A prototype is a draft version of a Web site. Prototypes are used to explore ideas early in the development process. Prototypes can be low-fidelity (paper drawings) or high-fidelity (click-through of a few images or pages, or a fully functioning Web site)."



Table 2.7- Latest Usability Methods (Continued)
Online Surveys: "Structured interviews with users, displaying a list of questions online and recording the responses to find information about:
• Who are the users, what do they want to accomplish and what information are they looking for?
• Did users find the information they were looking for?
• How satisfied are users with your site?
• What experiences have users had with your site or similar sites?
• What do users like best and like least about your site?
• What frustrations or issues have users had with your site?
• Do users have any ideas or suggestions for improvements?"
Task Analysis: "It involves learning about your users' goals, the specific tasks users must do to meet those goals, and what steps they take to accomplish those tasks. It complements a user analysis. User and task analysis focuses on understanding:
• what users' goals are and what they actually do to achieve the goals
• what personal, social, and cultural characteristics the users bring to the tasks
• how users are influenced by their physical environment
• how users' previous knowledge and experience influence how they think about their work and the workflow they follow to perform their tasks"
Usability Testing: "It is a method used to evaluate a product by testing it with representative users. Users will try to complete representative tasks while observers watch, listen and take notes. The goal is to identify any usability problems, collect quantitative data on the performance (time on task, error rates) of participants, and determine participants' satisfaction with the product."
Creating Use Cases: "A use case is a description of how users will perform tasks. Two main parts are:
• the steps users will take to complete a particular task on a Web site
• the way the Web site should respond to user's actions
Each use case captures:
• The actor (who is using the Web site?)
• The interaction (what does the user want to do?)
• The goal (what is the user's goal?)"


On the other hand, a recent study by Holzinger (2005) divided usability evaluation techniques into two categories: usability inspection methods (users are not involved) and usability test methods (end users are involved). Table 2.8 presents these usability evaluation techniques.

Table 2.8- Usability Evaluation Techniques (Holzinger, 2005)
Inspection methods:
Heuristic Evaluation: applicable in phase: all; required time: low; users needed: none; evaluators required: 3+; equipment required: low; expertise required: medium; intrusive: no.
Cognitive Walkthrough: applicable in phase: all; required time: medium; users needed: none; evaluators required: 3+; equipment required: low; expertise required: high; intrusive: no.
Action Analysis: applicable in phase: design; required time: high; users needed: none; evaluators required: 1-2; equipment required: low; expertise required: high; intrusive: no.
Test methods:
Thinking Aloud: applicable in phase: design; required time: high; users needed: 3+; evaluators required: 1; equipment required: high; expertise required: medium; intrusive: yes.
Field Observation: applicable in phase: final testing; required time: medium; users needed: 20+; evaluators required: 1+; equipment required: medium; expertise required: high; intrusive: yes.
Questionnaire: applicable in phase: all; required time: low; users needed: 30+; evaluators required: 1; equipment required: low; expertise required: low; intrusive: no.

Before discussing how usability has been cost justified, one should be aware of different perspectives on usability. Perspective of management towards or against usability is an important factor in the success or failure of cost justification. Next section provides possible standpoints with respect to usability investments in organizations. Perspectives on Usability Hanson and Castleman (2006) evaluated three different perspectives that resist against investing on usability: “Engineer-centric culture, design-centric culture, and the customer-centric culture.” First, Hanson and Castleman (2006) discussed that an incompetent usability practitioner might cause the engineering team to consider user 22

centered design as irrelevant. "Engineer cultures tend to focus on technology-centered, feature-driven products. The problem with this culture starts once the technology matures and the user population expands. The key to success shifts to the user experience rather than features alone (Hanson and Castleman, 2006, p. 17)." "The challenges that an engineering-centered culture causes to user centered design are:

- Engineers define a product as 'usable' if it is possible to complete a task in a user interface;
- Engineers rely deeply on their own experience;
- Engineers believe they should own the interface;
- Engineers feel they are doing the right thing for the customer (Hanson and Castleman, 2006, p. 19)."

It should be stated that Hanson and Castleman (2006) do not regard the engineering discipline itself as a block against usability investments; they refer to engineers who fail to represent users in the development process. "Design-centric cultures, on the other hand, are common among internet companies and consulting firms (Hanson and Castleman, 2006, p. 18)." The problem with a design-centric culture is treating design as a purely creative endeavor while ignoring the user experience, which produces aesthetic but difficult-to-use products. "The challenges that a design-centered culture causes to user centered design are:

- Aesthetically-driven designers define the user experience more on the basis of aesthetic experience rather than ease-of-use;
- Aesthetically-driven designers regularly assume print visual design principles transfer directly to computer user interfaces;
- Aesthetically-driven designers usually value the user experience in terms of the visual art of a single user interface, rather than in the elegance of larger dynamic workflows;
- Aesthetically-driven designers rely mostly on their own instincts about the users;
- Aesthetically-driven designers attend selectively to user research, focusing only on the results that confirm their viewpoints (Hanson and Castleman, 2006, p. 19)."

A customer-centric culture may sound favorable for user centered design, yet it actually causes challenges to it. This is because employees in marketing, sales,

and product management have regular contact with customers, believe that they understand user needs, and become overconfident and ignore usability practitioners. "The challenges that a customer-centered culture causes to user centered design are:

- Customer advocates rely heavily on customers' self-reports and customer suggestions to assess the usability of the interface;
- Customer advocates disproportionately focus on their biggest customers' needs;
- Customer advocates are overconfident in their ability to know what the customers need;
- Customer advocates believe they own the user interface of the product (Hanson and Castleman, 2006, p. 20)."

In order to deal with these three types of cultures that prevent usability activities, Hanson and Castleman (2006) suggested focusing on the Ease-of-Use pyramid.

[Figure omitted: a pyramid with Effectiveness at the base, Efficiency in the middle, and Satisfaction at the top.]

Figure 2.2- The Ease-of-Use Pyramid (Sherman, 2006, p. 22)


According to the pyramid, effectiveness goals should be met first, then efficiency goals, and finally satisfaction goals. "These goals can be formed based on usability guidelines or by a desired improvement on the current performance (Hanson and Castleman, 2006, p. 26)." In a study performed at Western Electric in 1927, it was found that the performance of workers improved mainly because they were being observed rather than because of changes in working conditions such as break schedules or lighting, a phenomenon known as the Hawthorne effect (Franke and Kaul, 1978). Therefore, setting usability goals and continuously tracking status against those goals might itself improve performance. Moreover, Hanson and Castleman (2006) discussed the impact of memory on the conclusions people reach. Analysis can be biased because usability practitioners remember more of the first and last trials than the trials in the middle of a study. Furthermore, the sessions a practitioner observed have more influence on him or her than the sessions he or she did not observe. To counteract these biases, setting usability goals and tracking achievement is suggested. For example, at Peregrine Systems, a usability group consistently increased success rates on critical tasks from 0 to 25 percent up to 70 to 100 percent. Without calculating any return on investment, the team convinced management to invest in usability through the consistent production of measurable results, in other words by meeting usability goals. Peregrine Systems had to lay off over 50 percent of its employees over a nine-month period, yet the usability team size increased by 20 percent (Hanson and Castleman, 2006, p. 28-29). Hanson and Castleman (2006) also suggested an iterative process to meet usability goals.
This process should replace the engineering-centric, design-centric, and customer-centric cultures so that usability becomes a priority in the development lifecycle of projects. Figure 2.3 demonstrates this process. The process and the ease-of-use pyramid may be adjusted by adding learnability, memorability, and error tolerance to cover all of the usability characteristics listed in Table 2.4.


[Figure omitted: a flowchart in which critical tasks are defined, relevant usability metrics are determined, usability goals are set, and designs are tested against those goals. Effectiveness, efficiency, and satisfaction goals are checked in sequence; if any goal is not met, the design is reworked to fix the problems and retested. Once the UI design meets the usability requirements, engineering implements the UI, and results feed back into the next release.]

Figure 2.3- Iterative Process for Usability Goals (Sherman, 2006, p. 23)
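The gating logic of this iterative loop can be sketched in a few lines of Python. The goal categories, their ordering, the scoring scale, and the `redesign` behavior below are illustrative assumptions, not part of Sherman's (2006) description.

```python
# Minimal sketch of the iterative usability-goal loop (illustrative only).
# Goals are checked in pyramid order: effectiveness, then efficiency,
# then satisfaction; a failed gate sends the design back for rework.

def run_usability_iterations(measure, redesign, goals, max_iterations=10):
    """measure(goal_name) -> observed score; redesign(goal_name) reworks the UI.
    goals: ordered list of (goal_name, target) pairs; higher scores are better."""
    for _ in range(max_iterations):
        failed = None
        for name, target in goals:
            if measure(name) < target:   # this gate is not passed
                failed = name
                break
        if failed is None:
            return True                  # UI design meets usability requirements
        redesign(failed)                 # redesign to fix problems, then retest
    return False

# Hypothetical example: each redesign improves that goal's score by 0.1.
scores = {"effectiveness": 0.7, "efficiency": 0.8, "satisfaction": 0.9}
goals = [("effectiveness", 0.9), ("efficiency", 0.9), ("satisfaction", 0.9)]

def redesign(name):
    scores[name] = min(1.0, scores[name] + 0.1)

met = run_usability_iterations(scores.get, redesign, goals)
```

The sequential gates mirror the pyramid: efficiency is only evaluated once effectiveness passes, and satisfaction only once efficiency passes.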


Usability practitioners who suffer from an organizational culture working against usability should:

- reach across discipline boundaries, identify concurring change agents, and develop relationships to foster collaboration;
- be determined in their encouragement and advocacy of a data-driven approach to design;
- work patiently under resistance and organizational obstacles;
- passionately defend the belief that technology should serve users, not the other way around;
- have a strong grounding in business and technology;
- have empathy towards those who do not yet share the vision of user centered design, and not show a superior or demeaning attitude toward colleagues who do not share their views (Sherman, 2006).

Rohn (2005) examined company cultures and their effect on usability investments:

- 'Customer-focused' companies treat knowing and satisfying the needs of their customers as a core business strategy. Usability practitioners have easy access to users.
- 'Product-focused' companies have the strategy of focusing on product requirements. Functionality has higher priority than usability, which is a disadvantage for usability.
- 'Technology-focused' companies are ruled by technological advances and focus on applications for the technology. The disadvantage is that users are not a focus of attention, meaning that usability is usually neglected in organizations with a technology-focused culture.
- 'Executive-focused' companies are dominated by the founder, CEO, or other key managers. In executive-focused companies whose executives do not favor usability, it is hard for usability practitioners to warrant a usability investment.
- 'Optimal data-driven' companies focus on data-driven decision making. Product requirements are determined by field studies, usability studies, market research, surveys, Web metrics, and similar data gathering methods. According to Rohn (2005), "data-driven companies would be an ideal environment for usability practitioners" (p. 186).


Usability has to be cost justified on an economically valid and reliable basis to cope with the negative perspectives on usability investments found in different work environments. The next section explains how usability has been cost justified.

Cost Justification of Usability

According to survey results (Harrison, Henneman, and Blatt, 1994), 62% of people are convinced of the value of user centered design by testimonials, case stories, and demonstrated results; 23% by support from customers, management, and outside companies; and only 15% by cost-benefit analysis. Even though the survey showed that cost-benefit analysis was not a primary method of convincing people of the value of user centered design, 54% of the participants in the same survey believed that cost-benefit analysis would be convincing if in-depth information about how the model worked and about the assumptions and data used were provided (Harrison et al., 1994, p. 233). Despite not being the primary convincing factor, cost-benefit analysis is the main tool used in the literature to cost-justify usability. Management looks for quantified cost and benefit figures for all investment alternatives. "The cost-benefit analysis is a method of analyzing projects for investment justification (Karat, 1994, p. 52)". The method has the following steps (Nas, 1996, p. 60):

- Identification of relevant costs and benefits,
- Measurement of costs and benefits,
- Comparison of cost and benefit streams accruing during the lifetime of the project,
- Project selection.

Donahue (2001) listed the steps of cost-benefit analysis for usability as (p. 1):

- Selecting a usability technique,
- Determining the proper unit of measurement,
- Making a rational assumption about the benefit's magnitude,
- Translating the estimated benefit into a monetary figure.


Considering these steps, cost-benefit analysis cost-justifies usability by estimating the costs and benefits of usability activities and comparing them against the costs of not performing any usability activities. The comparison of a usability investment against other investment alternatives is done with either simple or sophisticated techniques. Karat (1994) recommended the benefit-cost ratio and the payback period as simple techniques. The benefit-cost ratio is found by dividing the total benefits over the project lifetime by the initial costs of usability (Karat, 1994). The payback period shows the time it takes to generate net cash flows that recover the initial costs of usability; the shorter the payback period, the better the investment alternative. Aydin, Millet, and Beruvides (2011) showed that the benefit-cost ratio is the most commonly used financial metric in the cost justification of usability research domain. Simple techniques are easy to perform, but their major issue is that they ignore the time value of money. Karat (1994) also recommended several sophisticated techniques for usability cost justification. The two major techniques are net present value analysis and internal rate of return. Net present value is the present worth of the benefits minus the present value of the costs, as formulated below:

NPV = F1/(1+i)^1 + F2/(1+i)^2 + F3/(1+i)^3 + ... + Fn/(1+i)^n - C    Eq. 1

Where:
F = future cash inflows, years 1 to n
i = discount rate
n = project lifetime
C = present value of costs

On the other hand, the internal rate of return is "the actual rate of return that an investment in a project brings if cash inflows and outflows are as projected" (Karat, 1994, p. 63). It is calculated by finding the discount rate that makes NPV equal zero in Eq. 1. Trial and error, or an equivalent numerical search, is used until an approximate value for the discount rate is found. Organizations have different rate of return expectations; thus, if the internal rate of return of the usability investment is greater than their

expected rate, it can be stated that usability is a good investment for their case. Even though different researchers have presented different methods for the justification of usability, the main concept followed is cost-benefit analysis. The next section presents these different methods, which are called usability cost justification models in the literature.
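As an illustration of these selection techniques, the following sketch computes NPV (Eq. 1), a benefit-cost ratio, a payback period, and an IRR found by bisection rather than manual trial and error. The cash-flow figures are hypothetical, not drawn from any of the cited studies.

```python
# Hedged sketch of the selection techniques discussed above.
# The cash flows and the bisection-based IRR search are illustrative
# choices, not prescribed by Karat (1994).

def npv(inflows, i, initial_cost):
    """Eq. 1: NPV = sum(F_t / (1+i)^t) - C."""
    return sum(f / (1 + i) ** t for t, f in enumerate(inflows, start=1)) - initial_cost

def benefit_cost_ratio(inflows, initial_cost):
    """Simple technique: total lifetime benefits over initial costs."""
    return sum(inflows) / initial_cost

def payback_period(inflows, initial_cost):
    """First year in which cumulative inflows recover the initial cost."""
    cumulative = 0.0
    for year, f in enumerate(inflows, start=1):
        cumulative += f
        if cumulative >= initial_cost:
            return year
    return None  # never recovered within the project lifetime

def irr(inflows, initial_cost, lo=0.0, hi=10.0, tol=1e-9):
    """Discount rate that makes NPV zero, found by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(inflows, mid, initial_cost) > 0:
            lo = mid  # NPV still positive: the rate can go higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical usability project: $50,000 up front, $20,000/year for 5 years.
flows = [20000] * 5
project_npv = npv(flows, 0.10, 50000)
bcr = benefit_cost_ratio(flows, 50000)      # simple technique: 2.0
payback = payback_period(flows, 50000)      # simple technique: year 3
rate = irr(flows, 50000)                    # sophisticated technique
```

Note how the simple metrics (ratio 2.0, three-year payback) ignore discounting, while NPV and IRR incorporate the time value of money that the text identifies as the simple techniques' major weakness.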

Usability Cost Justification Models

The majority of the models in the cost justification of usability domain were gathered in a book by Bias and Mayhew in 1994. After that, there were a few primary theories, yet most later contributions were modifications of those primary models for different business cases, such as organization type. Each model is explained below by showing its cost calculation and benefit estimation steps and its investment selection technique. The weak and strong points of each model are listed as well.

Mayhew and Mantei's Analysis and Later Updates

Mayhew and Mantei (1994) introduced a basic framework for cost justifying usability engineering. The first part of the analysis is calculating the costs associated with usability. The model has the following stages for cost calculations:

- Plan the usability engineering program to determine the 'tasks'.
- Identify the 'usability techniques' required to perform the tasks.
- Further break the techniques down into required 'steps'.
- Break down the steps into 'personnel hours and equipment costs'.

The second part of their model is estimating benefits. The steps for estimating the benefits are as follows:

- Select the best set of potential benefits depending on the audience. Table 2.10 to Table 2.14 show the benefits related to vendor companies and internal development organizations discussed by Mayhew and Mantei.

- Estimate benefits as the difference between the case in which the usability application is performed and the case in which it is not.
- For each potential benefit category, select a unit of measurement and estimate the magnitude of the benefit per unit of measurement.
- Multiply the estimated benefit per unit of measurement by the number of units.
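The estimation arithmetic in these steps can be sketched as follows. The benefit categories, per-unit magnitudes, unit counts, and lifecycle length are entirely hypothetical numbers, not figures from Mayhew and Mantei (1994).

```python
# Hypothetical sketch of Mayhew and Mantei's benefit-estimation arithmetic:
# benefit = (estimated magnitude per unit) x (number of units),
# accumulated either once or for every year of the project lifecycle.

def estimate_benefits(categories, lifecycle_years):
    """categories: list of dicts with a per-unit magnitude, a unit count,
    and a flag for whether the benefit recurs annually."""
    total = 0.0
    for c in categories:
        benefit = c["per_unit"] * c["units"]
        total += benefit * (lifecycle_years if c["annual"] else 1)
    return total

# Illustrative categories (invented numbers):
categories = [
    # increased productivity: $3 saved per transaction, 10,000 transactions/year
    {"name": "increased productivity", "per_unit": 3.0, "units": 10000, "annual": True},
    # decreased training: $200 saved per employee, 150 employees, one-time
    {"name": "decreased training", "per_unit": 200.0, "units": 150, "annual": False},
]

total = estimate_benefits(categories, lifecycle_years=3)
# productivity: 3 * 10000 * 3 years = 90,000; training: 200 * 150 = 30,000
```

The `annual` flag encodes the one-time versus recurring distinction that the model treats as a critical decision.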

A critical point is deciding whether a benefit category is a one-time benefit or a benefit gained each year throughout the lifecycle of the project. This decision is made according to the benefit category. For instance, 'increased productivity' is an annual benefit throughout the lifecycle of the project, while 'decreased training' benefits are received only initially. The last stage is determining the return on investment: the model justifies usability by subtracting costs from benefits. There are some weak points in the model:

- To justify usability, benefits for each year are summed over the lifetime of the project and the initial costs are subtracted; the time value of money is ignored.
- Only initial costs are considered; there is no consideration of other costs that could be incurred over the lifetime of the project due to the usability investment.

On the other hand, the strong points of the model are:

- A detailed cost analysis is provided, which makes costs easier to estimate since usability tasks are divided further into techniques and steps.
- Decreased user errors are considered as a benefit category.
- The authors addressed the additivity of usability improvements. They claimed that the total impact of several usability improvements might be conservatively estimated to be somewhere between the largest

benefit of the individual improvements and the direct sum of the benefits of all improvements. Assume three different usability improvements are made, A, B, and C, with benefits X, Y, and Z, respectively. Mayhew and Mantei argued that the overall benefit from investing in A, B, and C will be larger than the greatest of X, Y, and Z, but less than the sum of X, Y, and Z. Mayhew and Tremaine (2005) updated Mayhew and Mantei's framework by focusing on analysis parameters. They added a step in which usability practitioners determine parameters for each specific project and case. Moreover, potential benefit categories for different types of web sites were introduced in this updated model. Table 2.9 shows potential benefit categories for e-commerce, advertising-funded, product information, customer service, and intranet web sites.


Table 2.9- Potential Benefit Categories for Web Sites (Mayhew & Tremaine, 2005)

Web Site Type: Benefits

E-commerce: increased buy-to-look ratios; decreased abandoned shopping carts; increased number of visits; increased return visits; increased length of visits; decreased failed searches; decreased costs of other sales channels; decreased use of the "call back" button (customer service); savings resulting from making changes earlier in the development lifecycle

Funded by advertising: increased number, return rate, and length of visits; increased "click through" on ads

Product information: increased sales leads

Customer service: decreased costs of traditional customer service channels

Intranets: decreased training costs; increased user productivity; decreased user errors

Donahue (2001) used the model Mayhew and Mantei (1994) suggested and discussed additional benefit categories such as reduced documentation costs, litigation deterrence, increased e-commerce potential, competitive edge, advertising advantages, and more favorable notice in the media. Donahue (2001) stressed the significance of starting the analysis by identifying the benefits relevant to the audience, just as Mayhew and

Mantei (1994) did. For instance, if the audience is development managers, development cost savings should be emphasized rather than increased job satisfaction or higher productivity. Mayhew (2005) further extended the cost justification theory to international web sites. Costs are considered for a base country and an international country, with the costs in the international country expressed as differences relative to the base country. Analysis parameters vary by country; for instance, fully loaded hourly wages for customer support may differ for each country. Benefits are estimated as in the primary theory, and a payback period analysis is suggested (Mayhew, 2005). Rajper, Shaikh, Shaikh, and Amin (2012) built a mathematical model of Mayhew and Tremaine's analysis. It is not a novel addition to cost justification of usability research, as it is a direct modeling of Mayhew and Tremaine's work. The cost part was modeled as:

TotalUsabilityCost = sum of all phase costs    Eq. 2
Each phase cost = sum of its task costs    Eq. 3
Each task cost = sum of its user costs    Eq. 4
User cost = n x hrs x fully loaded wages    Eq. 5

Where n is the number of each user type and hrs is the number of working hours. The benefit section was modeled as:

TotalPotentialBenefit = PB1 + PB2 + ... + PBz    Eq. 6

Where PB denotes a potential benefit, and

PB = v1 x v2 x ... x vp    Eq. 7

Where v denotes the variables involved in a potential benefit. The cost justification part was modeled as:

UCJ = TPB - TUC    Eq. 8

Where UCJ denotes usability cost justification, TPB the total potential benefit, and TUC the total usability cost. The benefit section of the model is vague, as in certain cases variables might need to be added rather than solely multiplied. Another primary theory by Mayhew is the utilization of keystroke level modeling (KLM) to cost-justify usability. KLM is a cognitive modeling tool "involving identifying and counting all discrete human operations (physical, cognitive,

and perceptual) that users must execute to perform a specific task on a user interface" (Mayhew, 2005, p. 465). Productivity gain is the measure used to justify usability with KLM. It is carried out as follows. First, high priority tasks are modeled on the existing and the proposed application. Next, model-predicted task times are compared to actual task times gathered by carrying out productivity tests with expert users. The model is refined according to these comparisons to decrease the error rate, and the refined modeling technique is then re-applied. Afterwards, the predicted productivity changes of the proposed application relative to the existing application are compared with goals. If the goals are not achieved for some tasks, the models for those tasks are analyzed in detail to suggest redesign directions. Finally, productivity tests are performed to measure actual productivity on the proposed application as a final validation of the productivity gain goals as well as of the model predictions (Mayhew, 2005, p. 472).

Karat's Analysis and Later Updates

Karat (1994) proposed a model that begins with determining human factors cost and benefit variables. To identify and quantify these variables, the following guidelines were suggested: "(1) Relevant variables vary based on the focus of the product and the context of its use, (2) There exists differences in cost and benefit variables for internal and external products, (3) A draft cost and benefit analysis should be submitted to project team members with different backgrounds to get feedback" (p. 53). Once the variables are identified, the second step is separating tangible and intangible costs and benefits; only tangible costs and benefits are carried into further calculations. Karat also suggested identifying costs and savings as initial or ongoing by year, and proposed that usability laboratory costs may be allocated to each usability application depending on the number of usability tests to be conducted in a given period of time. Alternatively, equipment and laboratory costs may be included in fully burdened wages as overhead expenses. Weak points in this model are:


- The cost calculation part of the model is not introduced in detail; only the costs of usability tasks are discussed. It does not show a novice usability practitioner how to calculate the costs of usability.
- Possible benefits for different audiences are not listed.

The strong points of Karat's model are:

- Sophisticated selection techniques are introduced: NPV and IRR.
- The model recommends that usability practitioners communicate with various departments in the company to make reliable estimates of cost and benefit variables.
- Karat was the pioneering researcher who related cost-benefit analyses to business cases.
- Sustaining costs and projected future increases in variables are recommended to be considered.

Karat (2005) introduced an updated cost-benefit analysis for cost justification, published in Bias and Mayhew's (2005) second book. The secondary model differs in terms of benefit categories: Karat (2005) added increased sales or revenues resulting from increased completion rates on a website and from repeat visits to a website. To find the financial benefit of completion, the increase in completion rate is multiplied by the value of a one-percentage-point increase in completion rate. To calculate the financial benefit from repeat customers, the average amount of a current transaction on the site is found first. Next, field tests establish the percentage of target customers who would make additional purchases if the improved user experience were launched, and the number of additional purchases those customers stated they would make. The final step is determining the number of target customers for the user experience and multiplying all these measures to determine the value of the improvement in user experience. The other benefit categories, increased user productivity and decreased personnel costs, are the same in Karat's primary and secondary theories. Dray and Karat (1994) discussed cost justifying usability for internal development projects. The benefits of usability suggested for internal

development projects were listed as reduced training, reduced information organization time, reduced supervisory assistance and inspection, reduced service call lengths, and reduced transferred call costs. Dray and Karat used internal rate of return analysis to justify usability for internal development projects. In a case study illustrated in their work (a usability improvement for a helpdesk system), an internal rate of return of 32% was found. They concluded that an IRR of 32% was a high return on investment and should warrant a decision favoring the usability application.
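Karat's (2005) completion-rate and repeat-customer benefits amount to multiplying a chain of measures. The sketch below shows the shape of that calculation with entirely invented figures; none of the numbers come from Karat's case studies.

```python
# Hypothetical sketch of Karat's (2005) web benefit arithmetic.
# All input figures are invented for illustration.

def completion_rate_benefit(rate_increase_points, value_per_point):
    """Benefit of an increased task-completion rate on a website:
    percentage-point increase times the value of one point."""
    return rate_increase_points * value_per_point

def repeat_customer_benefit(avg_transaction, pct_making_additional,
                            additional_purchases, target_customers):
    """Value of repeat visits: average transaction x share of target
    customers buying again x purchases each x number of target customers."""
    return (avg_transaction * pct_making_additional
            * additional_purchases * target_customers)

# Invented example figures:
completion = completion_rate_benefit(rate_increase_points=5,
                                     value_per_point=12000.0)
repeats = repeat_customer_benefit(avg_transaction=80.0,
                                  pct_making_additional=0.25,
                                  additional_purchases=2,
                                  target_customers=10000)
# completion: 5 * 12000 = 60,000; repeats: 80 * 0.25 * 2 * 10000 = 400,000
```

In practice the share of customers making additional purchases and their purchase counts would come from the field tests Karat describes, not from assumptions.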

Ehrlich and Rohn's Analysis and Later Updates

Ehrlich and Rohn (1994) analyzed the cost justification of usability for 'vendor companies'. A vendor company is a for-profit company selling products or services to obtain revenue (Ehrlich and Rohn, 1994, p. 74). In terms of the costs of usability, Ehrlich and Rohn discussed the initial and sustaining costs for the vendor company, the corporate customer, and the end user. Differing from Karat (1994), Ehrlich and Rohn counted lab construction and equipment costs as initial costs. The sustaining costs are listed as employees, contractors, videotapes, recruitment of participants, and upgrades to equipment. The benefits of usability to the vendor company are given as increased sales, reduced support costs, prioritized product requirements, and reduced development costs. The strong point of Ehrlich and Rohn's analysis was the introduction of the sustaining costs of usability investments. The weak point is that, although they listed possible cost and benefit categories, they did not explain or formulate how to calculate costs and estimate benefits. Rohn (2005) extended this analysis for vendor companies by differentiating business-to-business (B-to-B) and business-to-consumer (B-to-C) products. B-to-B products can be cost justified in terms of productivity gains, whereas B-to-C products are typically cost justified in terms of ease of learning. In addition to the benefits listed in the pioneering analysis, Rohn (2005) included the following benefits of usability for vendor companies: increased customer satisfaction and loyalty, increased customer references, and increased favorable reviews. Moreover, increased productivity,

decreased training, decreased minor and major errors, decreased support, decreased development costs, decreased installation, configuration, and deployment costs, and decreased maintenance were listed as benefits for both the vendor company and the customers. The selection techniques suggested for the usability investment were the benefit-cost ratio and the payback period.

Bohmann's Analysis

Bohmann (2000), influenced by a comment made by Jakob Nielsen, suggested a model to cost justify usability programs for websites. In this model, the first step is estimating saved user hours:

Saved user hours per year = Users per year x Tasks per user x Estimated time savings per task    Eq. 9

The next step is predicting the economic effect of the time savings:

Economic effect of time savings = Saved user hours x User value per hour    Eq. 10

Finally, the economic impact of the usability evaluation is found by:

Economic impact of usability evaluation = Total savings per year x Probability of improved results due to usability evaluation    Eq. 11

Where the probability of improved results is measured by:

Probability of better results due to usability evaluation = P(successful design | usability evaluation) - P(successful design | NO usability evaluation)    Eq. 12
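Bohmann's four equations chain together directly. The sketch below implements them; the user counts, hourly value, and both probabilities are invented inputs, not values from Bohmann (2000).

```python
# Hedged sketch of Bohmann's (2000) model (Eqs. 9-12); all inputs invented.

def saved_user_hours(users_per_year, tasks_per_user, hours_saved_per_task):
    """Eq. 9: annual user hours saved by the usability improvement."""
    return users_per_year * tasks_per_user * hours_saved_per_task

def economic_effect(saved_hours, user_value_per_hour):
    """Eq. 10: monetary value of the saved hours."""
    return saved_hours * user_value_per_hour

def economic_impact(total_savings_per_year,
                    p_success_with_eval, p_success_without_eval):
    """Eqs. 11-12: savings weighted by the added probability of a
    successful design due to the usability evaluation."""
    p_improvement = p_success_with_eval - p_success_without_eval
    return total_savings_per_year * p_improvement

# Hypothetical website: 50,000 users/year, 4 tasks each, 0.05 hours saved/task,
# users valued at $30/hour, and evaluation raising P(success) from 0.5 to 0.8.
hours = saved_user_hours(50000, 4, 0.05)
savings = economic_effect(hours, 30.0)
impact = economic_impact(savings, 0.8, 0.5)
```

Note that the impact scales linearly with the probability difference in Eq. 12, so estimating the two conditional probabilities well matters as much as estimating the time savings.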


Bevan's Model

Bevan (2000) proposed a model based on the Usability Maturity Model (UMM). The UMM is presented in 'ISO TR 18529 (2000)' (as cited in Bevan, 2005, p. 578) as consisting of seven processes, each containing base practices for involving users. Table 2.10 shows the UMM.

Table 2.10- The Usability Maturity Model (User-Centered Design Processes and Base Practices)

1. Ensure UCD content in system strategy
   1.1 Represent stakeholders
   1.2 Collect market intelligence
   1.3 Define and plan system strategy
   1.4 Collect market feedback
   1.5 Analyze trends in users

2. Plan and manage the UCD process
   2.1 Consult stakeholders
   2.2 Identify and plan user involvement
   2.3 Select human-centered methods and techniques
   2.4 Ensure a human-centered approach within the team
   2.5 Plan human-centered design activities
   2.6 Manage human-centered activities
   2.7 Champion human-centered approach
   2.8 Provide support for human-centered design

3. Specify the stakeholder and organizational requirements
   3.1 Clarify and document system goals
   3.2 Analyze stakeholders
   3.3 Assess risk to stakeholders
   3.4 Define the use of the system
   3.5 Generate the stakeholder and organizational requirements
   3.6 Set quality in use objectives

4. Understand and specify the context of use
   4.1 Identify and document user's tasks
   4.2 Identify and document significant user attributes
   4.3 Identify and document organizational environment
   4.4 Identify and document technical environment
   4.5 Identify and document physical environment

5. Produce design solutions
   5.1 Allocate functions
   5.2 Produce composite task model
   5.3 Explore system design
   5.4 Use existing knowledge to develop design solutions
   5.5 Specify system and use
   5.6 Develop prototypes
   5.7 Develop user training
   5.8 Develop user support

6. Evaluate designs against requirements
   6.1 Specify and validate context of evaluation
   6.2 Evaluate early prototypes in order to define the requirements for the system
   6.3 Evaluate prototypes in order to improve the design
   6.4 Evaluate the system to check that the stakeholder and organizational requirements have been met
   6.5 Evaluate the system in order to check that the required practice has been followed
   6.6 Evaluate the system in use in order to ensure that it continues to meet organizational and user needs

7. Introduce and operate the system
   7.1 Management of change
   7.2 Determine impact on organization and stakeholders
   7.3 Customization and local design
   7.4 Deliver user training
   7.5 Support users in planned activities
   7.6 Ensure conformance to workplace ergonomic legislation

Bevan (2000) explained that usability work should be customized depending on the project. In other words, the UMM should be customized by selecting the required UCD processes and the base practices needed to perform those processes. Afterwards, the person-days required to perform the base practices are found as follows:


Person-days = (Preparation time x people) + (Application time x people) + (Reporting time x people)    Eq. 13
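A minimal sketch of this person-days calculation follows; the practice selection, times, and team sizes are invented, since Bevan's model prescribes only the formula.

```python
# Hedged sketch of Eq. 13; the practices, times, and team sizes are invented.

def person_days(practices):
    """practices: list of (prep_days, application_days, reporting_days, people)
    tuples, one per selected base practice. Eq. 13 sums time x people."""
    return sum((prep + apply_ + report) * people
               for prep, apply_, report, people in practices)

# Hypothetical selection of base practices for a small project:
selected = [
    (0.5, 2.0, 0.5, 2),   # e.g. a context-of-use analysis: 3 days x 2 people
    (1.0, 3.0, 1.0, 1),   # e.g. a prototype evaluation: 5 days x 1 person
]
days = person_days(selected)   # 6 + 5 = 11 person-days
```

Multiplying the result by a daily rate and adding other costs then yields the total cost of the selected practices.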

After the person-days are calculated, they are multiplied by the daily rate to find the total labor cost, and other labor-related costs are added, such as laboratory hire or the cost of participant recruitment:

Total cost = (Person-days x Daily rate) + Other costs    Eq. 14

The next step is estimating the benefits: the usability practitioner determines the benefits relevant to the specific project and estimates them. As the final step of the model, Bevan (2000) suggested calculating a benefit-cost ratio to justify usability:

Benefit-cost ratio = Total benefits / Total cost    Eq. 15

Bevan (2000) suggested a very detailed cost analysis, yet the benefits are based solely on estimation. No explanation or tool is provided to help the usability practitioner estimate the potential benefits; thus, practitioners following Bevan's model have to make assumptions about possible benefits based on their own judgment and experience. In all these cost justification of usability models, researchers provided potential benefits. The following section provides a collection of the potential benefits of usability.

Potential Benefits of Usability Investments

Usability investments offer various types of benefits, as demonstrated by real cases in many industries. This section presents benefit providers, benefit receivers, and benefit types for several organizational cases. Table 2.11 shows these measures for organizations that develop IT systems for internal use by implementing usability applications.


Table 2.12 presents benefit measures for organizations that develop their own websites, intranets, or extranets. Table 2.13 shows benefit measures for organizations that develop e-commerce sites, intranets, and extranets for external client organizations by investing in usability.


Table 2.11- Benefits for Organizations Developing IT Systems for Internal Use

Organization: Organizations developing usable IT systems for internal use (Donahue et al., 1999; Mayhew and Tremaine, 2005; Dray and Karat, 1994; Mayhew and Mantei, 1994)

Benefit Provider: Internal usability team; Benefit Receiver: Organization as a whole
Benefit Types:
  1. Faster learning
  2. Reduced task time
  3. Increased productivity
  4. Reduced employee errors
  5. Reduced staff turnover
  6. Reduced time spent by other staff providing assistance for users
  7. Reduced support and help line costs
  8. Reduced cost of training
  9. Reduced maintenance costs
  10. Reduced development time and cost
  11. Reduced call length
  12. Reduced transfer call costs

Benefit Provider: Internal usability team; Benefit Receiver: Development team
Benefit Types:
  1. Increasing productivity (less rework; early detection and prevention of usability problems; reducing development time; fewer maintenance requirements)
  2. Increasing job satisfaction

Benefit Provider: Internal usability team; Benefit Receiver: Support staff
Benefit Types:
  1. Increasing productivity (less or no need for training; fewer support calls received; time to focus on productive tasks)
  2. Increasing job satisfaction

Benefit Provider: Internal usability team; Benefit Receiver: Users of IT systems in these organizations
Benefit Types:
  1. Increasing productivity (less or no need for training; need for fewer support calls; decreasing error rates; decreasing task times)
  2. Increasing job satisfaction

Table 2.12- Benefits for Organizations Developing their Websites, Intranets, or Extranets

Organization: Organizations developing their websites, intranets, and extranets with usability (Donahue et al., 1999; Karat, 1994)

Benefit Provider: Usability team in the development organization; Benefit Receiver: Development organization as a whole
Benefit Types:
  1. Reduced development costs
  2. Reduced support costs
  3. Reduced training costs
  4. Reduced maintenance costs
  5. Reduced development time
  6. Reduced employee turnover
  7. Decreased litigation risk
  8. Improved reputation
  9. Increased sales and business with client organizations

Benefit Provider: Usability team in the development organization; Benefit Receiver: Development team
Benefit Types:
  1. Increased productivity (early detection and prevention of usability problems; reduced development time; less rework)
  2. Increased job satisfaction

Benefit Provider: Usability team in the development organization; Benefit Receiver: Internal users of the websites and intranets in these organizations
Benefit Types:
  1. Increased productivity (decreased time to find information; decreased task time; less need for calling for support; decreased error; less need for training)
  2. Increased job satisfaction

Benefit Provider: Development organization; Benefit Receiver: External client organizations accessing extranets of these organizations
Benefit Types:
  1. Satisfaction with extranet
  2. Improved efficiency (decreased task times; decreased error rates)
  3. Reduced turnover
  4. Decreased support costs
  5. Decreased training costs

Benefit Provider: Development organization; Benefit Receiver: External users of their website (public)
Benefit Types:
  1. Increased productivity (decreased time to find information; less need to make support calls)
  2. Increased job satisfaction

Table 2.13- Organizations Developing E-commerce, Intranet or Extranet for External Parties

Organization: Organizations developing e-commerce sites, intranets, and extranets for external client organizations (Donahue et al., 1999; Ehrlich and Rohn, 1994; Karat, 1994; Rohn, 2005; Bevan, 2000)

Benefit Provider: Usability team in the development organization; Benefit Receiver: Development organization as a whole
Benefit Types:
  1. Reduced development costs
  2. Reduced support costs
  3. Reduced training costs
  4. Reduced maintenance costs
  5. Reduced development time
  6. Reduced employee turnover
  7. Decreased litigation risk
  8. Improved reputation
  9. Increased sales and further business with client organizations

Benefit Provider: Usability team in the development organization; Benefit Receiver: Development team
Benefit Types:
  1. Increased productivity (early detection and prevention of usability problems; reduced development time; less rework)
  2. Increased job satisfaction

Benefit Provider: Development organization; Benefit Receiver: Client organization for which the website is developed
Benefit Types:
  1. Improved marketing and advertising value
  2. Increased chance of new business
  3. Improved reputation
  4. Decreased litigation risk

Benefit Provider: Development organization; Benefit Receiver: Client organization for which the intranet is developed
Benefit Types:
  1. Increased productivity (decreased task times; decreased error rates)
  2. Less training costs
  3. Less support costs
  4. Increased employee job satisfaction (reduced employee turnover)

Benefit Provider: Development organization; Benefit Receiver: Client organization for which e-commerce sites or extranets are developed
Benefit Types:
  1. Increased sales (increased new and returning customers; increased referrals from customers)
  2. Better customer relations
  3. Advertising value
  4. Decreased litigation risk
  5. Increased buy-to-look ratios
  6. Decreased abandoned shopping carts
  7. Decreased failed searches

Table 2.13- Benefits for Organizations Developing E-commerce, Intranet or Extranet for External Parties Continued

Benefit Provider: Development organization; Benefit Receiver: Users within these client organizations
Benefit Types:
  1. Increased productivity (decreased time to find information; decreased task time; less need for calling for support; decreased error rate; less need for training)
  2. Increased job satisfaction

Benefit Provider: Client organization; Benefit Receiver: Public users of the sites of these client organizations
Benefit Types:
  1. Satisfaction with the site
  2. Ease of learning
  3. Decreased task times
  4. Increased completion rates
  5. Decreased failure rates

Table 2.14 demonstrates the benefit measures for organizations developing commercial software by investing in usability.

Table 2.14- Benefits for Organizations Developing Commercial Software

Organization: Organizations developing commercial software with usability (Donahue et al., 1994; Karat, 1994; Ehrlich and Rohn, 1994)

Benefit Provider: Usability teams in the development organization; Benefit Receiver: Organization as a whole
Benefit Types:
  1. Reduced development costs
  2. Reduced support costs
  3. Reduced training costs
  4. Reduced maintenance costs
  5. Reduced development time
  6. Reduced employee turnover
  7. Decreased litigation risk
  8. Improved reputation
  9. Increased sales and further business with client organizations
  10. Advertising value

Benefit Provider: Development organization; Benefit Receiver: Organizations purchasing commercial software for internal use
Benefit Types:
  1. Reduced support and training costs
  2. Reduced training
  3. Reduced support costs
  4. Increased employee satisfaction (reduced employee turnover)

Benefit Provider: Development organization; Benefit Receiver: Users in the purchasing organizations and public users of the software
Benefit Types:
  1. Reduced learning time
  2. Decreased task times
  3. Decreased error rates
  4. Increased job satisfaction
  5. Less need for training
  6. Less need for calling for support

Table 2.15 shows the benefit measures for vendor companies that purchase usable products to sell.

Table 2.15- Benefits for Vendor Companies

Organization: Organizations purchasing usable software to sell (vendor companies) (Ehrlich and Rohn, 1994; Rohn, 2005; Mayhew and Mantei, 1994)

Benefit Provider: Organizations that produce the usable software; Benefit Receiver: Vendor company
Benefit Types:
  1. Increased sales
  2. Increased customer satisfaction and loyalty
  3. Increased customer references
  4. Increased favorable reviews
  5. Increased productivity of own employees
  6. Decreased internal training
  7. Decreased own employees' errors
  8. Reduced support costs
  9. Reduced development costs
  10. Decreased installation, configuration, and deployment costs
  11. Decreased maintenance costs

Having listed the potential benefits for different organizational cases, the reasons these benefits can be achieved are explained next. Usability investment may increase revenues by providing product or service satisfaction. Revenues can be maximized by attracting new customers and retaining current customers through a usable product or service (Bias and Mayhew, 2005). "Satisfied customers influence four other people to buy the same brand or to use the same service, while dissatisfied customers influence ten other people to avoid the product or service" (Bias and Mayhew, 2005, p. 203). Moreover, the U.S. Better Business Bureau (as cited in Bias and Mayhew, 2005, p. 203) estimates that for every 100 customers who are dissatisfied with a site, 50 will tell 8 to 16 other

people about their bad experiences. Suggestions from previous users of a product or site are important purchase decision factors for first-time users. Usability decreases errors made by users in any user interface and can eliminate catastrophic errors. A user interface lacking usability was a contributing factor in the Three Mile Island accident, in which engineers lacked information about an open valve because of a poorly designed interface (Rhodes, 2001). The key reason users tend to make fewer errors is that the interface is designed to prevent errors, as required by usability principles. Evidently, users will make fewer errors on an interface designed to prevent errors than on an interface designed without any usability effort. Usability decreases task completion times, because the interface is designed to minimize the steps to completion and to optimize the menu structures for ease of use. Since the interface is easier to understand, less training will be needed, or in some cases none at all. Usability decreases maintenance and development costs. In the development stage of a product life cycle, it is easy to detect potential usability problems and to eliminate them. If a product is released without any usability effort, customers will in effect usability-test the system, and a dissatisfied customer population will force the company to make changes to the product in the post-release stage of the product life cycle. These are called late design changes in the usability research domain. Boehm (1981) estimated that fixing usability problems in the maintenance phase is 100 times more expensive than fixing the same problems in the requirements phase. Moreover, Pressman (1992) estimated that if a change costs 1 unit in the definition phase, it costs 1.5 to 6 units in the development phase, and 60 to 100 units after product release.
Thus, implementing usability applications in the development stage decreases maintenance and development costs. Usability decreases development time as well, because a product with only relevant functionality (minimalist design) is built, which eliminates time wasted designing irrelevant functions (Bevan, 2005).

Usability decreases support costs, as users need less help to figure out how to use the product. Users can easily learn how to perform relevant tasks without calling customer service, which in turn helps companies save significant amounts in support costs. Usability practitioners, depending on their organization, can refer to the tables above, which represent the potential benefits to use in cost justification of usability. The costs of usability, on the other hand, are explained in the next section.

Potential Costs of Usability

This dissertation focuses on the benefit side of usability cost justification, because there are already various sources that help calculate the costs associated with usability investments. The difficult part of the justification is certainly the benefit estimation, which is the key reason the benefit side was selected as the focus of this dissertation. However, a brief review of the costs of usability is provided in this section. The total cost of usability changes depending on the usability tasks, techniques, and steps selected for each specific project. Usability practitioners can readily calculate costs based on the labor and material needed to accomplish those steps. Table 2.16 shows the main costs of usability.


Table 2.16- Costs of Usability

Initial Costs of Usability: Laboratory construction, video and audio equipment, tools, furniture, products, labor (Rohn, 2005; Mayhew and Mantei, 1994)

Sustaining Costs: Employees, contractors, recruitment of and compensation for participants, travel for field studies, videotapes or digital storage, equipment maintenance, upgrades to video and audio equipment, computer equipment, and software (Rohn, 2005; Bias and Mayhew, 2005)

Theoretical Model

Usability benefits were collected via a thorough literature review. Pareto analysis helped determine the significant benefits of usability for further analysis: increased sales, decreased error rate, decreased task time, decreased training time, and increased website traffic. Distributions were fitted to the sample data for these five variables. By creating a sufficient number of bootstrap samples, population descriptive statistics were estimated. The sample means of the lower quantiles of the bootstrap samples were picked as target values for a goal programming model in the second research. This goal programming model helps usability practitioners assign preferred weights to these variables depending on their company's expectations from usability investments. The third research focuses on the interactions among the variables of the goal programming model developed in the second research. Quantile regression was utilized on related variables to determine the sensitivity of the relationships.
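The bootstrap-to-target step just described can be sketched as follows; the sample values, the number of resamples, and the quantile estimator are illustrative stand-ins, not the dissertation's data or exact procedure.

```python
# Sketch: bootstrap the lower quartile of a benefit sample and use the mean
# of the bootstrap quartiles as a conservative target value. Illustrative data.
import random

random.seed(7)

def lower_quartile(xs):
    """Simple order-statistic estimate of the 25th percentile."""
    return sorted(xs)[len(xs) // 4]

sample = [4, 7, 9, 12, 15, 18, 22, 26, 31, 40]  # stand-in benefit percentages

# Resample with replacement and collect the lower quartile of each resample
boot_q1 = [lower_quartile(random.choices(sample, k=len(sample)))
           for _ in range(2000)]

# The mean of the bootstrap lower quartiles serves as a target value
target = sum(boot_q1) / len(boot_q1)
print(round(target, 1))
```

Using a lower quantile rather than the mean keeps the target deliberately modest, which matches the conservative spirit of the model described above.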



CHAPTER III
RESEARCH I: A METHODOLOGY FOR DATA COLLECTION AND ANALYSIS TO ASSIST COST JUSTIFICATION OF USABILITY

This research is a data collection and analysis effort for the cost justification of usability research domain. It is a complete literature review with data tables that show the entire list of increased benefits and reduced costs achieved by usability investments in the software industry. Not only does it show the entire list of the types of gains from better software usability, it also provides numerical values for most of them. The most significant usability gains were selected via Pareto chart analysis, and the distributions of these significant benefits were determined. Moreover, the dependencies between these significant benefits were modeled. Finally, the samples in the data tables for the significant gains were resampled by utilizing the bootstrap sampling method, and the lower quantiles of the bootstrap samples were gathered.

This paper will be submitted to the Journal of Human-Computer Interaction.


A Methodology for Data Collection and Analysis to Assist Cost Justification of Usability

Abstract

Usability investments help companies increase benefits and reduce costs. Many categories of increased benefits and reduced costs are mentioned in the cost justification literature, and it is hard for usability practitioners to justify usability across all of these categories. This paper examines the significant gains from usability that practitioners should focus on, depending on their company: (1) increased sales, (2) reduced error rates, (3) reduced task times, and (4) increased traffic on the website. The study involves a literature review, Pareto analysis, distribution fitting, and bootstrap sampling.

Introduction

Usability might be perceived as 'ease of use', yet it is well beyond this short definition (Quesenbery, 2001). According to the ISO 9241-11 guidelines, it is "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (1998, p. 2). Thus, according to the ISO standards, the three main factors that should be considered in order to design for usability are effectiveness, efficiency, and user satisfaction. Effectiveness is "the accuracy and completeness with which users achieve specified goals", whereas efficiency is "the use of resources in relation to the accuracy and completeness with which users achieve goals" (ISO 9241-11, 1998, p. 2). Common effectiveness metrics are task completion rates and error rates. Efficiency, on the other hand, is usually measured by time on task and task flow deviation. Task flow deviation is "the ratio of the optimal number of steps to complete a task to the average number of steps to complete the task" (Sherman, 2006, p. 22). A definition in terms of the efficiency factor is provided by Porat and Tractinsky (2012): "Usability work is a focus on the design and evaluation of activities in order to aid users accomplish the


intended tasks efficiently". User satisfaction is "the freedom from discomfort, and positive attitudes towards the use of the product" (ISO 9241-11, 1998, p. 2). In the software industry, usability is a measure of how well a software program facilitates user learning, helps users remember what they have learned, reduces error rates, increases efficiency and effectiveness, and satisfies its users (Donahue, Weinschenk, & Nowicki, 1999). Usability benefits not only the end users of the software, but also the developers, the companies they work for, and the vendors (Donahue, 2001). Marcus (2005) listed the returns on investment of usability for internally used software as: (1) increased user productivity, (2) decreased user errors, (3) decreased training costs, (4) decreased development costs, and (5) decreased user support costs. The returns on investment of usability for externally sold software are listed as: (1) increased sales, (2) decreased customer support costs, (3) savings gained from making changes earlier in the development life cycle, and (4) reduced cost of training offered through the vendor company (Marcus, 2005). Investment costs are inevitable; the key point is to validate the investment with the profits gained. Harrison, Henneman, and Blatt (1994) conducted a survey whose results indicated that 62% of people are convinced of the value of user-centered design by testimonials, case stories, and demonstrated results; 23% by support from customers, management, and outside companies; and only 15% by cost-benefit analysis. Even though the survey results showed that cost-benefit analysis was not a primary method of convincing people of the value of user-centered design, 54% of the participants in the same survey believed that cost-benefit analysis would be convincing if in-depth information about how the model worked and the assumptions and data used were provided (Harrison et al., 1994, p. 233). Even though it is not a primary convincing factor, cost-benefit analysis is the main tool used by usability researchers to cost-justify usability in the literature. Management looks for quantified cost and benefit figures for all investment alternatives. "The cost-benefit analysis is a method of analyzing projects for investment justification" (Karat, 1994, p. 52). The method has the following steps (Nas, 1996, p. 60):
• Identification of relevant costs and benefits,
• Measurement of costs and benefits,
• Comparison of cost and benefit streams accruing during the lifetime of the project,
• Project selection.
Donahue (2001, p. 1) listed the steps of cost-benefit analysis for usability as: selecting a usability technique, determining the proper unit of measurement, making a rational assumption about the benefit's magnitude, and translating the estimated benefit into a monetary figure. Considering these steps, it can be stated that cost-benefit analysis is performed to cost-justify usability by estimating the costs and benefits of usability activities and comparing them against the costs of not performing any usability activities. Justification of usability investment was a major part of the usability research domain during the dot-com era, yet there has been a declining trend in usability justification model development lately (Aydin, Millet, & Beruvides, 2011). The majority of the models in the cost justification of usability domain were gathered in a book by Bias and Mayhew in 1994. After that there were a few primary theories, yet most subsequent work consisted of modifications of those primary models to different business cases, such as organization type. This research study is a much-needed re-focus on the cost justification of usability. Existing cost justification models are based on many assumptions, and currently there is no study that provides reliable numerical ranges for these assumptions. However, it is crucial that usability practitioners show reliable numbers to decision makers to promote usability investments. How much increase or decrease may occur in any performance measure after a usability investment is not apparent.
Therefore, it falls to the usability practitioner to make assumptions about the magnitude of the usability benefits. The accuracy of these assumptions depends mainly on the consistency of the usability practitioner or researcher. Consistency in usability testing endeavors is a major problem, as reported in Vermeeren et al. (2008).
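Returning to the cost-benefit steps listed earlier, the comparison of cost and benefit streams (Nas, 1996) can be sketched as discounting both streams to present value; the cash flows and the 8% discount rate below are hypothetical.

```python
# Minimal sketch of comparing discounted cost and benefit streams.
# All cash flows and the discount rate are hypothetical placeholders.

def present_value(stream, rate):
    """Discount a stream of end-of-year cash flows to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream, start=1))

costs    = [10000, 2000, 2000]   # usability costs per year
benefits = [0, 9000, 12000]      # estimated benefits per year
rate = 0.08

pv_costs = present_value(costs, rate)
pv_benefits = present_value(benefits, rate)
print(round(pv_benefits / pv_costs, 2))  # select the project if the ratio > 1
```

The project-selection step then reduces to comparing this ratio (or the net present value) across investment alternatives.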


For instance, a within-team consistency variation of 37% to 72% has been detected, which indicates considerable subjectivity (Vermeeren et al., 2008). This paper provides a tool for usability practitioners to better evaluate possible changes in sales, traffic rate, user error rates, and task times, and the relationships among those variables. The main objective of this study is to build foundations for modeling usability benefits. The results of the study shall be used for developing optimization models for those benefits.

Methods

The methods used to accomplish this study are a thorough literature review for data collection, State-of-the-Art Matrix analysis, Pareto analysis, distribution analysis, bootstrap resampling, and regression modeling. These methods are illustrated in this section.

Data Collection

In this research, a quantitative research methodology was used, and data collection was conducted through a complete literature review. The data are percentages and thus fall into the ratio scale of measurement. The steps of the data collection methodology can be listed as follows.
1. A fairly extensive literature search for relevant studies was conducted through professional journals, using online databases to identify studies addressing the topic.
2. Each study's results were converted to a common statistical index in order to ensure data comparability between studies.
The first step was accomplished through a literature review via online databases and electronic journals. A State-of-the-Art Matrix Analysis technique for literature review was conducted to assist in the data collection of the research articles surveyed (Beruvides and Omachonu, 2001). See also Tolk, Cantu, and Beruvides (2013); Ng et al. (2012); and Sumanth, Omachonu, and Beruvides (1990) for further examples of this literature review technique. The second step was fulfilled by selecting the studies that include numeric data as a result of any usability


improvement, and a common index was picked: 'percentage change in usability benefits' as a result of usability improvement. A total of 135 studies, including hypothetical and real-world cases, were explored, yielding a total of 385 data points, of which 156 were percentage data. Any data point more than three standard deviations below or above the mean was counted as an outlier; two outliers were detected for the increased sales category. The results with and without outliers are both provided in the results section. Table 3.1 shows the number of hypothetical versus real data points gathered for each usability benefit.

Table 3.1- Number of Hypothetical and Real Data

Benefit                      Hypothetical   Real   Total
Increased sales                    4         44      48
Decreased task time                2         39      41
Decreased error rate               4         30      34
Increased website traffic          0         19      19
Decreased training time            7          7      14
First, the usability benefit types mentioned in the literature were collected. The most frequently mentioned benefit was increased sales. See Figure 3.1 for the Pareto chart of usability benefits mentioned in the literature. The results indicated that increased sales, decreased task times, reduced maintenance and support costs, increased user satisfaction and loyalty, decreased errors, reduced training costs, reduced development costs, increased website traffic, and increased completion rates were the usability benefits most frequently mentioned in the literature.


Figure 3.1- Pareto Analysis for Usability Benefits Mentioned in the Literature [chart titled "Pareto Chart for Mentioned Data": bars of mention counts per benefit category with a cumulative-percentage line; the 80% cumulative cutoff falls at the leading categories]

The next step was gathering the numerical data in these studies. The increased sales category was found to be the benefit with the most numeric data available. See Figure 3.2 for the Pareto analysis of numerical data availability of usability benefits.

Figure 3.2- Pareto Chart for Numeric Data Availability [chart titled "Pareto Chart for Numerical Data": bars of numeric data counts per benefit category with a cumulative-percentage line; the 82% cumulative cutoff falls at the leading categories]

According to the numeric data availability Pareto chart, increased sales, decreased task time, completion rate, increase in traffic, reduced maintenance and support costs, decreased error rates, and reduced training are the significant benefits. The problem at this point was that the unit of measurement differed between studies for the same benefit types. For instance, in some cases the values of a benefit before and after usability improvements were not provided; the only data available were how many more units were sold after a usability improvement, or how many fewer errors were made (for example, one error per day eliminated by a usability improvement; Mayhew and Tremaine, 2005). On the other hand, the majority of the studies provided percentage changes in a benefit before and after a usability improvement (for example, error rates decreased by 2.5%; Mantei and Teorey, 1988). In order to have comparable data, the studies providing percentage change data were selected, and another Pareto analysis was made to determine the benefit types with sufficient data for statistical purposes. Another key reason for selecting percentages as the common index was that doing so helped address the issue of different cost bases in different studies: the costs in each study were most likely different, but by taking benefit percentages, costs were treated as relatively equal. Furthermore, many benefit categories could be grouped together due to similarity. First, decreased error rate, decreased failure rate, and increased completion rate were grouped; the values for increased completion rates were reversed to count as decreased error rates. Next, user satisfaction and loyalty, reduced employee turnover, and purchasing experience were grouped, since user satisfaction is the key driver of employee turnover and purchasing experience.
Considering these grouped components, a final Pareto analysis was conducted to determine the variables to be evaluated. Figure 3.3 shows the Pareto analysis with the final usability benefits list.


Figure 3.3- Pareto Chart for Percentages Data [bars of percentage-data counts per grouped benefit category with a cumulative-percentage line; the 79% cumulative cutoff falls at the leading categories]
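The Pareto screening applied in Figures 3.1 through 3.3 can be sketched as follows; the category counts below are illustrative, not the dissertation's actual tallies.

```python
# Sketch of Pareto screening: rank categories by count and keep those needed
# to reach a cumulative-percentage cutoff. Counts are illustrative.

def pareto_keep(counts, cutoff=0.80):
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(counts.values())
    kept, running = [], 0
    for name, c in ranked:
        if running / total >= cutoff:
            break
        kept.append(name)
        running += c
    return kept

mentions = {"increased sales": 40, "decreased task time": 25,
            "decreased error rate": 15, "increased traffic": 10,
            "decreased training time": 6, "reduced learning time": 4}
print(pareto_keep(mentions))
```

With these illustrative counts, the first three categories already account for 80% of the mentions, so the remaining categories would be set aside, mirroring the screening logic used above.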

According to the final Pareto chart, the usability benefits examined in this research are increased sales, decreased task time, decreased error rate, increase in traffic, and decrease in training time. The remaining usability benefits are reserved for future research due to current data unavailability. There are very important benefit categories, such as reduced maintenance and support cost and reduced development time, that will be investigated in future studies. It should be noted here that the training time reduction data do not include reduced learning time data. Training produces a motivated user who has the essential skills needed to apply what has been learned and then to continue to learn on the job (Compeau, Olfman, Sein, and Webster, 1995); it refers to the training that companies offer their employees for ease of use of company-wide software. On the other hand, a general definition of learning is: "the process by which an activity originates or is changed through reacting to an encountered situation, provided that the characteristics of the change in activity cannot be explained on the basis of native response tendencies, maturation, or temporary states of the organism (e.g., fatigue, drugs, etc.)" (Hilgard and Bower, 1966, p. 2). Reduced learning time is another usability benefit that is out of the scope of this dissertation as a result of insufficient data, as can be seen in Figure 3.3.

Sample Distributions

Hypothetical data were eliminated from the statistical analysis. Distributions of the usability benefit data samples (real data only) were found using the ARENA input analyzer module. The best-fitting distribution is picked as the sample distribution, which is an estimate of the population distribution function. Arena performs the Chi-squared and Kolmogorov-Smirnov goodness-of-fit tests for distribution fitting. The null hypothesis of each test is that the data follow the chosen distribution; therefore, failing to reject the null hypothesis means the fitted distribution is consistent with the data. The Chi-squared test is not recommended for ratios and percentages, and the Kolmogorov-Smirnov test is preferred over the Chi-squared test when the sample size is small. Thus, only the result of the Kolmogorov-Smirnov test was taken into consideration.


Arena also provides p-values, which correspond to the type-I error levels. A large p-value indicates a lack of evidence to reject the null hypothesis.
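The Kolmogorov-Smirnov statistic behind these goodness-of-fit decisions can be sketched in a few lines of Python. This is an illustration only, not the ARENA implementation; the sample below is a hypothetical stand-in for the increased sales data, tested against an exponential distribution with mean 1.38 as in the fitted expression "-0.001 + EXPO(1.38)" (the tiny offset is ignored).

```python
import math
import random

def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest vertical
    distance between the empirical CDF and the hypothesized CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The empirical CDF jumps from i/n to (i+1)/n at each ordered point,
        # so the largest gap occurs at one of these two step levels.
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

# Hypothetical sample of n = 44 draws, mirroring the fit in Table 3.2.
random.seed(1)
sample = [random.expovariate(1 / 1.38) for _ in range(44)]
expo_cdf = lambda x: 1.0 - math.exp(-x / 1.38)
D = ks_statistic(sample, expo_cdf)
```

The null hypothesis is rejected when D exceeds the critical value for the chosen significance level (approximately 1.36/sqrt(n) at the 0.05 level for large n).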

Two-way Interactions

The selected usability benefits were checked for correlation with Spearman's rho non-parametric test. The correlated variables were further analyzed using the MS Excel Data Analysis module and the R statistical software. Two-way interaction models were fitted for the variable pairs with significant interaction. Three-way and four-way interactions were omitted due to data limitations.

Bootstrap

The bootstrap method was introduced by Efron (1979). It is a method for generalizing a sample statistic to the parent population in a scientific manner (Singh and Xie, 2008), and it allows gathering information cheaply and in a timely fashion. The sample data are used as a surrogate population, and a large number of samples of the same size are resampled with replacement from the sample data. These are called bootstrap samples. The sample statistic is then determined for each bootstrap sample, and the histogram of this set of computed statistics is the bootstrap distribution of the statistic (Singh et al., 2008). In this study, the bootstrap distributions of the lower quartiles of the selected usability benefits were found. The details of the method are illustrated in the results section. The next section presents the results of the distribution, interaction, and bootstrap analyses.

Results and Discussion

Distribution Results

The Weibull distribution fits the 'percentage increase in sales' sample with the outliers removed. However, the exponential distribution also fits the data very well, which is expected because the exponential distribution is a special case of the Weibull. A decision was made to assume the exponential distribution. The square error is 0.00156 for the Weibull fit and 0.00225 for the exponential fit. See Table 3.2 for the ARENA Input Analyzer solution for the increased sales sample with outliers excluded.

Table 3.2 - Increased Sales Distribution Fitting, Outliers Excluded

Distribution Summary
  Distribution: Exponential
  Expression: -0.001 + EXPO(1.38)
  Square Error: 0.002253
Kolmogorov-Smirnov Test
  Test Statistic: 0.12
  Corresponding p-value: > 0.15
Data Summary
  Number of Data Points: 44
  Min Data Value: 0
  Max Data Value: 9
  Sample Mean: 1.38
  Sample Std Dev: 1.86
Histogram Summary
  Histogram Range: -0.001 to 9
  Number of Intervals: 6

See Table 3.3 for the ARENA Input Analyzer solution for the increased sales sample with outliers included.

Table 3.3 - Increased Sales Distribution Fitting, Outliers Included

Distribution Summary
  Distribution: Weibull
  Expression: -0.001 + WEIB(1.71, 0.596)
  Square Error: 0.002920
Kolmogorov-Smirnov Test
  Test Statistic: 0.144
  Corresponding p-value: > 0.15
Data Summary
  Number of Data Points: 46
  Min Data Value: 0
  Max Data Value: 46.2
  Sample Mean: 3.04
  Sample Std Dev: 8.2
Histogram Summary
  Histogram Range: -0.001 to 47
  Number of Intervals: 6

The next best fitting distribution is the exponential; however, its Kolmogorov-Smirnov test p-value is less than 0.01, so the null hypothesis that the exponential model fits the data is rejected. Thus, when the outliers remain in the data, increased sales follows the Weibull distribution given in Table 3.3. Next, the beta distribution fits the 'percentage decrease of task time' sample data. See Table 3.4 for the ARENA Input Analyzer results.

Table 3.4 - Decreased Task Time Distribution Fitting

Distribution Summary
  Distribution: Beta
  Expression: BETA(1.17, 1.4094)
  Square Error: 0.015179
Kolmogorov-Smirnov Test
  Test Statistic: 0.077
  Corresponding p-value: > 0.15
Data Summary
  Number of Data Points: 39
  Min Data Value: 0.04
  Max Data Value: 0.98
  Sample Mean: 0.453
  Sample Std Dev: 0.263
Histogram Summary
  Histogram Range: 0 to 1
  Number of Intervals: 6

The beta distribution also fits the 'percentage decrease of error rate' sample data. See Table 3.5 for the ARENA Input Analyzer results.


Table 3.5 - Decreased Error Rate Distribution Fitting

Distribution Summary
  Distribution: Beta
  Expression: 1 * BETA(1.58, 0.913)
  Square Error: 0.013359
Kolmogorov-Smirnov Test
  Test Statistic: 0.171
  Corresponding p-value: > 0.15
Data Summary
  Number of Data Points: 30
  Min Data Value: 0.09
  Max Data Value: 1
  Sample Mean: 0.645
  Sample Std Dev: 0.266
Histogram Summary
  Histogram Range: 0 to 1
  Number of Intervals: 5

On the other hand, the gamma distribution fits the 'increased traffic rate' sample data. See Table 3.6 for the distribution summary retrieved from the ARENA Input Analyzer.

Table 3.6 - Increased Traffic Rate Distribution Fitting

Distribution Summary
  Distribution: Gamma
  Expression: GAMM(1.18, 1.04)
  Square Error: 0.009704
Kolmogorov-Smirnov Test
  Test Statistic: 0.104
  Corresponding p-value: > 0.15
Data Summary
  Number of Data Points: 19
  Min Data Value: 0.06
  Max Data Value: 4.54
  Sample Mean: 1.23
  Sample Std Dev: 1.27
Histogram Summary
  Histogram Range: 0 to 4.99
  Number of Intervals: 5

Finally, the triangular distribution fits the 'decreased training time' sample data, as summarized in Table 3.7.


Table 3.7 - Decreased Training Time Distribution Fitting

Distribution Summary
  Distribution: Triangular
  Expression: TRIA(0.17, 0.918, 1)
  Square Error: 0.167297
Kolmogorov-Smirnov Test
  Test Statistic: 0.558
  Corresponding p-value: 0.0164
Data Summary
  Number of Data Points: 7
  Min Data Value: 0.25
  Max Data Value: 1
  Sample Mean: 0.787
  Sample Std Dev: 0.302
Histogram Summary
  Histogram Range: 0.17 to 1
  Number of Intervals: 5

It should be noted that even though the best fit is the triangular distribution, the Kolmogorov-Smirnov test result (p = 0.0164) indicates that it is not a good fit: the null hypothesis is rejected.

Two-way Interactions

Spearman's rho test was used to determine the existence of two-way interactions. Increased sales and increased traffic were found to be dependent, and decreased error rate and decreased task time were likewise found to be dependent. The remaining two-way interactions were not significant. Training time is not correlated with any other variable and was therefore removed from further analysis.

o Test statistics for sales and traffic: Spearman's rho = 1 (positive correlation)
Hypotheses:
H01: S and Tf are mutually independent.
H11: There is either a tendency for larger values of Tf to be paired with larger values of S, or a tendency for smaller values of Tf to be paired with larger values of S.


Two-tailed test. Decision criterion: reject H0 at alpha = 0.05 if the absolute value of Spearman's rho is greater than its 1 - alpha/2 quantile, which is 0.8 according to the quantiles of Spearman's rho table in Conover (2007). Since Spearman's rho = 1 > 0.8, the null hypothesis is rejected at the 0.05 significance level. There is a significant positive correlation between increased sales and increased traffic rate.

o Test statistics for error rate and task time: Spearman's rho = 0.95
Hypotheses:
H01: ER and TaT are mutually independent.
H11: There is either a tendency for larger values of TaT to be paired with larger values of ER, or a tendency for smaller values of TaT to be paired with larger values of ER.
Two-tailed test. Decision criterion: reject H0 at alpha = 0.05 if the absolute value of Spearman's rho is greater than its 1 - alpha/2 quantile, again 0.8 (Conover, 2007). Since Spearman's rho = 0.95 > 0.8, the null hypothesis is rejected at the 0.05 significance level. There is a positive correlation between decreased error rate and decreased task time.

For these significant two-way interactions, parametric regression was used to determine the relationship models. First, the increased sales and increased traffic rate interaction was analyzed. See Table 3.8 for the sales versus traffic rate regression output.
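The rank correlation underlying this decision rule can be sketched in plain Python (an illustration, not the software used in the study). The paired values below are hypothetical, chosen only to show that a perfectly monotone pairing yields rho = 1, which exceeds the 0.8 quantile from Conover (2007).

```python
def ranks(values):
    """Rank values from 1..n, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical paired observations (% sales increase, % traffic increase).
sales = [0.5, 1.2, 2.0, 3.5, 4.1]
traffic = [0.1, 0.3, 0.5, 0.8, 0.9]
rho = spearman_rho(sales, traffic)  # monotone pairing -> rho = 1
```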



Table 3.8 - Increased Sales x Increased Traffic Rate Regression Summary Output

Regression Statistics
  R Square: 0.880279813
  Adjusted R Square: 0.840373084
  Standard Error: 1.194773806
  Observations: 5

ANOVA
  Regression: df = 1, SS = 31.48807, MS = 31.48807, F = 22.05843, Significance F = 0.018256
  Residual: df = 3, SS = 4.282453, MS = 1.427484
  Total: df = 4, SS = 35.77052

Coefficients
  Intercept: coefficient = -0.259746244, standard error = 0.881994, t Stat = -0.2945, p-value = 0.787579
  Increased Traffic Rate: coefficient = 5.070378837, standard error = 1.079576, t Stat = 4.69664, p-value = 0.018256

The R Square is 0.88 and the Significance F is 0.018; thus, the increased sales versus increased traffic rate regression model is considered reliable.

Increased sales = -0.259746244 + 5.070378837 × Increased Traffic Rate (Eq-3.1)

The next two-way interaction is between task time and error rate. See Table 3.9 for the regression output summary.
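The fitted line in Eq-3.1 comes from ordinary least squares, which has a simple closed form. The `fit_simple_ols` helper and the toy data points below are illustrative only (the dissertation's five data pairs are not listed); the second function simply applies the fitted coefficients from Table 3.8.

```python
def fit_simple_ols(x, y):
    """Closed-form ordinary least squares for y = b0 + b1 * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
        / sum((a - mx) ** 2 for a in x)
    b0 = my - b1 * mx
    return b0, b1

def expected_sales_increase(traffic_increase):
    """Apply Eq-3.1: predicted % increase in sales for a given % increase
    in web site traffic (coefficients from Table 3.8)."""
    return -0.259746244 + 5.070378837 * traffic_increase

# Sanity check on a perfectly linear toy data set: y = 1 + 2x.
b0, b1 = fit_simple_ols([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```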



Table 3.9 - Decreased Task Time x Decreased Error Rate Regression Summary Output

Regression Statistics
  R Square: 0.970907039
  Adjusted R Square: 0.956360559
  Standard Error: 0.030535707
  Observations: 4

ANOVA
  Regression: df = 1, SS = 0.062235141, MS = 0.062235141, F = 66.74515202, Significance F = 0.014653848
  Residual: df = 2, SS = 0.001864859, MS = 0.000932429
  Total: df = 3, SS = 0.0641

Coefficients
  Intercept: coefficient = 0.222958075, standard error = 0.038870687, t Stat = 5.735892378, p-value = 0.029075612
  Decreased Error Rate: coefficient = 0.447573831, standard error = 0.054784137, t Stat = 8.169770622, p-value = 0.014653848

The R Square is 0.97 and the Significance F is 0.015; thus, the regression equation is considered reliable.

Decreased task time = 0.222958075 + 0.447573831 × Decreased error rate (Eq-3.2)

Bootstrap Results

The bootstrap method was applied to the usability benefits with significant correlations: increased sales, increased traffic, decreased error rate, and decreased task time. The bootstrap sample size is denoted n (equal to the original sample size), whereas the number of bootstrap samples is denoted N. The literature offers several recommendations for choosing the number of bootstrap samples. Efron and Tibshirani (1986) suggested using at least 250 bootstrap samples for finding confidence intervals. Conover (2007) stated that 100 or 200 replications suffice for estimating the mean and the standard error of an estimator unless confidence intervals are needed. In this study, the point estimate for the lower quantile of the population is the key interest, so confidence intervals are not computed. The number of bootstrap samples was determined by squaring the sample size (N = n^2), as recommended in Singh and Xie (2008), except that after finding n^2, the value was rounded up to the nearest thousand. Table 3.10 shows the selected N for each benefit type.

Table 3.10 - Bootstrap Sample Sizes and the Number of Samples

Benefit                  n     n^2     N
Increased Sales          44    1936    2000
Increased Traffic        19    361     1000
Decreased Error Rate     30    900     1000
Decreased Task Time      39    1521    2000
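The N column in Table 3.10 follows from squaring n and rounding up to the next thousand. A one-line helper reproduces it; the behavior when n^2 is already an exact multiple of one thousand is an assumption, since the dissertation does not cover that case.

```python
import math

def bootstrap_sample_count(n):
    """N = n^2 rounded up to the next thousand (the Singh and Xie n^2 rule
    with the dissertation's round-up adjustment)."""
    return math.ceil(n * n / 1000) * 1000
```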

The bootstrap samples were created with the INDEX function in Microsoft Excel. The formula is provided below.

=INDEX($A$2:$A$45, ROWS($A$2:$A$45)*RAND()+1) (Eq-3.3)

The ROWS function returns the number of rows in the array, and the INDEX function returns the value at the referenced position of the single column; thus, Equation 3.3 randomly selects one value from the sample. The sample size (n) for increased sales is 44; therefore, the size of each bootstrap sample is 44 as well. The statistic investigated is the lower quantile of increased sales. Table 3.11 shows the descriptive results.



Table 3.11 - Descriptive Statistics of the Lower Quantile Statistic of the Bootstrap Samples of Sales

Mean: 0.28844
Standard Error: 0.001588663
Median: 0.25
Mode: 0.24
Standard Deviation: 0.071047187
Sample Variance: 0.005047703
Kurtosis: 3.015876451
Skewness: 1.576551555
Range: 0.54
Minimum: 0.15
Maximum: 0.69
Sum: 576.88
Count: 2000
Confidence Level (95.0%): 0.003115609

The overall population parameters are unknown: the collected data lack the usability investment figures that companies keep confidential, and they constitute only a representative random sample from the whole population of companies that have invested in usability. Since the population parameters are unknown, the standard error rather than the standard deviation is used to gauge the variation of the statistic. The standard error indicates how precisely the statistic of the population is known (Streiner, 1996). The standard error in Table 3.11 is calculated by dividing the standard deviation by the square root of the number of bootstrap samples. The very small standard error indicates that 0.28844 is a very precise estimate of the lower quantile of the increased sales population. Figure 3.4 shows the bootstrap distribution of the increased sales lower quantile.
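This standard-error arithmetic can be checked directly: dividing the bootstrap standard deviation in Table 3.11 by the square root of N reproduces the reported standard error.

```python
import math

std_dev = 0.071047187   # standard deviation from Table 3.11
n_boot = 2000           # number of bootstrap samples (N)
std_error = std_dev / math.sqrt(n_boot)
# std_error is approximately 0.001588663, matching Table 3.11.
```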



Figure 3.4 - Increased Sales Lower Quantile Bootstrap Distribution (histogram of the 2,000 bootstrap lower quantile values, frequency by bin, with a cumulative percentage overlay)

The next usability benefit sample investigated is web site traffic. The traffic bootstrap resampling yielded the following descriptive statistics.



Table 3.12 - Descriptive Statistics of the Lower Quantile Statistic of the Bootstrap Samples of Web Site Traffic

Mean: 0.41168
Standard Error: 0.00540605
Median: 0.425
Mode: 0.2
Standard Deviation: 0.170954315
Sample Variance: 0.029225378
Kurtosis: 0.943345886
Skewness: 0.598495433
Range: 1.1025
Minimum: 0.0675
Maximum: 1.17
Sum: 411.68
Count: 1000
Confidence Level (95.0%): 0.010608516

The standard error is less than 0.01, which indicates that 0.41168 is a precise estimate of the lower quantile of the traffic population. Figure 3.5 shows the histogram of the bootstrap distribution for the traffic rate.


Figure 3.5 - Increased Traffic Lower Quantile Bootstrap Distribution (histogram of the 1,000 bootstrap lower quantile values, frequency by bin, with a cumulative percentage overlay)

The bootstrap statistics for the error rate are provided in Table 3.13.

Table 3.13 - Descriptive Statistics of the Lower Quantile Statistic of the Bootstrap Samples of Error Rate

Mean: 0.48438
Standard Error: 0.003991428
Median: 0.4575
Mode: 0.45
Standard Deviation: 0.126220034
Sample Variance: 0.015931497
Kurtosis: -0.46639272
Skewness: -0.052572036
Range: 0.6825
Minimum: 0.1175
Maximum: 0.8
Sum: 484.38
Count: 1000
Confidence Level (95.0%): 0.007832544


The standard error is very low, which indicates a very narrow confidence interval; the estimate of 0.48438 for the lower quantile of the error rate is precise. Figure 3.6 shows the histogram.

Figure 3.6 - Decreased Error Rate Lower Quantile Bootstrap Distribution (histogram of the 1,000 bootstrap lower quantile values, frequency by bin, with a cumulative percentage overlay)

Finally, the results of the bootstrap analysis for the decreased task time are provided below.



Table 3.14 - Descriptive Statistics of the Lower Quantile Statistic of the Bootstrap Samples of Task Time

Mean: 0.25781
Standard Error: 0.000960638
Median: 0.265
Mode: 0.25
Standard Deviation: 0.042961049
Sample Variance: 0.001845652
Kurtosis: 1.386176074
Skewness: -0.769764818
Range: 0.37
Minimum: 0.1
Maximum: 0.47
Sum: 515.62
Count: 2000
Confidence Level (95.0%): 0.001883957

The standard error is less than 0.001, indicating that the estimate of 0.25781 for the lower quantile of the decreased task time population is very precise. The histogram is shown in Figure 3.7.


Figure 3.7 - Decreased Task Time Lower Quantile Bootstrap Distribution (histogram of the 2,000 bootstrap lower quantile values, frequency by bin, with a cumulative percentage overlay)

Conclusion

No previous study in the cost justification of usability domain has modeled dependencies between usability benefit types. This paper showed that the percentage increase in sales gained from usability investments is positively correlated with the percentage increase in web site traffic rate. Traffic on usable web sites is higher because users spend more time there and are more likely to return to the site, which in turn increases the chance of sales. A model was developed that quantifies the relationship between sales and web site traffic; it can aid usability practitioners in estimating how much increase in sales to expect when a certain level of traffic is achieved. Moreover, a model was developed for the correlation between the percentage change in error rate and the percentage change in task time, which were found to be positively correlated. A usability practitioner can thus estimate how much the error rate will change when task times change as a result of usability investments.

Usability practitioners currently estimate possible increases or decreases in usability benefits or costs, so usability cost justification depends on those practitioners' experience and estimation skills. Practitioners can instead use the descriptive statistics for usability benefits determined in this paper to justify usability investments. The numbers found in this research study are based on comprehensive data collection and statistical analysis, and are therefore more reliable than an individual practitioner's estimates. Practitioners might integrate these numbers into any usability cost-benefit model for their organization to obtain a general idea of the potential benefits of a software usability investment.


References

Abras, C., Maloney-Krichmar, D., & Preece, J. (2004). User-centered design. In W. Bainbridge (Ed.), Encyclopedia of human-computer interaction. Thousand Oaks, CA: Sage Publications.

Aydin, B., Millet, B., & Beruvides, M. G. (2011). The state-of-the-art matrix analysis for cost-justification of usability research. In Proceedings of the American Society for Engineering Management: Winds of Change, Staking Paths to Explore New Horizons, Lubbock, Texas (pp. 221-229). Red Hook, NY: Curran Associates, Inc.

Beruvides, M. G., & Omachonu, V. (2001). A systemic-statistical approach for managing research information: The state-of-the-art matrix analysis. In Proceedings of the 2001 Industrial Engineering Research Conference.

Bevan, N. (2000). Cost benefit analysis. TRUMP report, September 2000. Retrieved from http://www.usability.serco.com/trump/methods/integration/costbenefits.htm.

Bevan, N. (2005). Cost benefits evidence and case studies. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 575-600). San Francisco, CA: Morgan Kaufmann.

Bias, R. G., & Mayhew, D. J. (1994). Cost-justifying usability. San Diego, CA: Academic Press.

Bias, R. G., & Mayhew, D. J. (2005). Cost-justifying usability: An update for the Internet age. San Francisco, CA: Morgan Kaufmann.

Black, J. (2002). Usability is next to profitability. Business Week Online. Retrieved from http://www.businessweek.com/technology/content/dec2002/tc2002124_2181.htm.

Boehm, B. W. (1981). Software engineering economics. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Bohmann, K. (2000). Valuing usability programs. Retrieved from http://www.bohmann.dk/articles/valuing_usability_programs.html.

Compeau, D., Olfman, L., Sei, M., & Webster, J. (1995). End-user training and learning. Communications of the ACM, 38(7), 24-26.

Conover, W. J. (2007). Practical nonparametric statistics (3rd ed.). John Wiley & Sons, Inc.

Donahue, G. M. (2001). Usability and the bottom line. IEEE Software, 18(1), 31-37.

Donahue, G. M., Weinschenk, S., & Nowicki, J. (1999). Usability is good business. IEEE Software, 18(1), 31-37.


Dray, S., & Karat, C. (1994). Human factors cost justification for an internal development project. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 111-122). San Diego, CA: Academic Press.

Dray, S. (1995). The importance of designing usable systems. Interactions, 2(1), 17-20.

Efron, B. (1979). Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7(1), 1-26.

Efron, B., & Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science, 1(1), 54-77.

Ehrlich, K., & Rohn, J. (1994). Cost-justification of usability engineering: A vendor's perspective. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 73-110). San Diego, CA: Academic Press.

Engaging Networks (n.d.). Effectiveness and ROI. Retrieved from http://www.engagingnetworks.net/uk/campaigning/effectiveness-and-roi.

Francis, H. (2006). Tales from the trenches: Getting usability through corporate. In P. Sherman (Ed.), Usability success stories: How organizations improve by making easier-to-use software and web sites (pp. 41-61). Burlington, VT: Ashgate Publishing Company.

Franke, R. H., & Kaul, J. D. (1978). The Hawthorne experiments. American Sociological Review, 43, 623-643.

Good, M., Spine, T. M., Whiteside, J., & George, P. (1986). User-derived impact analysis as a tool for usability engineering. In Proceedings of Human Factors in Computing Systems, CHI '86 (pp. 241-246). New York, NY: ACM.

Hadley, B. (2004). Clean, cutting-edge UI design cuts McAfee's support calls by 90%. Retrieved from http://www.pragmaticmarketing.com/resources/clean-cutting-edge-ui-design-cuts-mcafees-support-calls-by-90.

Hanson, K., & Castleman, W. (2006). Tracking ease-of-use metrics: A tried and true method for driving adoption of UCD in different corporate cultures. In P. Sherman (Ed.), Usability success stories: How organizations improve by making easier-to-use software and web sites (pp. 15-39). Burlington, VT: Ashgate Publishing Company.

Harrison, M., Henneman, R., & Blatt, L. (1994). Design of a human factors cost-justification tool. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 203-239). San Diego, CA: Academic Press.

Henneman, R. L. (2005). Marketing usability. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 143-164). San Francisco, CA: Morgan Kaufmann.

Hilgard, E. R., & Bower, G. H. (1966). Theories of learning. New York, NY: Appleton-Century-Crofts.


Hirsch, S., Fraser, J., & Beckman, S. (2004). Leveraging business value: How ROI changes user experience. Report for Adaptive Path. Retrieved from http://www.adaptivepath.com/uploads/documents/apr-005_businessvalue.pdf.

Holzinger, A. (2005). Usability engineering methods for software developers. Communications of the ACM, 48(1), 71-74.

Hynes, C. (1999). HFI helps Staples.com boost repeat customers by 67%. Retrieved from http://www.humanfactors.com/downloads/documents/staples.pdf.

Human Factors International. (2005). Usability: A business case [White paper]. Retrieved from http://www.humanfactors.com/BusinessCase.asp.

ISO/IEC. (1998). Ergonomic requirements for office work with visual display terminals (VDTs) - Part 14: Menu dialogues (ISO/IEC 9241-14:1998(E)).

Johnson, R. A. (1994). Miller and Freund's probability and statistics for engineers. Englewood Cliffs, NJ: Prentice-Hall.

Karat, C. M. (1994). A business case approach to usability cost justification. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 45-70). San Diego, CA: Academic Press.

Karat, C. M. (2005). A business case approach to usability cost justification for the web. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 103-142). San Francisco, CA: Morgan Kaufmann.

Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design (8th ed.). Upper Saddle River, NJ: Pearson Education, Inc.

Lund, A. (1997). Another approach to justifying the cost of usability. Interactions, 4(3), 48-56.

Mantei, M. M., & Teorey, T. J. (1988). Cost/benefit analysis for incorporating human factors in the software lifecycle. Communications of the ACM, 31(4), 428-439.

Mayhew, D., & Mantei, M. (1994). A basic framework for cost-justifying usability engineering. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 9-43). San Diego, CA: Academic Press.

Marcus, A. (2005). User interface design's return on investment: Examples and statistics. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 17-40). San Francisco, CA: Morgan Kaufmann.

Mayhew, D. J. (2005). Keystroke level modeling as a cost justification tool. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 465-488). San Francisco, CA: Morgan Kaufmann.

Mayhew, D. J. (2005). Cost justification of usability engineering for international web sites. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 359-384). San Francisco, CA: Morgan Kaufmann.


Mayhew, D. J., & Tremaine, M. M. (2005). A basic framework. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 41-102). San Francisco, CA: Morgan Kaufmann.

McCurdie, T., Taneva, S., Casselman, M., Yeung, M., McDaniel, C., Ho, W., & Cafazzo, J. (2012). mHealth consumer apps: The case for user-centered design. Biomedical Instrumentation & Technology, 46(2).

Nas, T. (1996). Cost-benefit analysis: Theory and application. Thousand Oaks, CA: Sage Publications, Inc.

Newnan, D. G., Eschenbach, T. G., & Lavelle, J. P. (2009). Engineering economic analysis. New York, NY: Oxford University Press.

Ng, E., Beruvides, M. G., Simonton, J. L., Chiu-Wei, C., Peimbert-Garcia, R. E., Winder, C. F., & Guadalupe, L. J. (2012). Public transportation vehicle maintenance and regional maintenance center: An analysis of existing literature. Engineering Management Journal, 24, 43-51.

Nielsen, J. (1993). Usability engineering. San Diego, CA: Academic Press, Inc.

Nielsen, J. (1994). Usability inspection methods. In Proceedings of CHI '94 Conference Companion on Human Factors in Computing Systems (pp. 413-414). doi:10.1145/259963.260531

Nielsen, J., & Mack, R. L. (1994). Usability inspection methods. New York, NY: John Wiley and Sons.

Nielsen, J., & Gilutz, S. (2003). Usability return on investment. Fremont, CA: Nielsen Norman Group.

Nielsen, J. (2008, January 22). Usability ROI declining, but still strong. Retrieved from http://www.nngroup.com/articles/usability-roi-declining-but-still-strong/.

Norman, D. A. (1988). The design of everyday things. New York, NY: Currency Doubleday.

Oreskovic, A. (2001, March 5). Testing 1-2-3. Internet/Web/Online Service Information. Standard Media International.

Ortiz, J. L. (2003). A case study in the ROI of online virtual tours. Retrieved from http://www.panorama.nu/ROI.pdf.

Phase 5 UX (n.d.). Assessment of web site ROI for a Canadian FI. Retrieved from http://phase5ux.com/who-we-are/case-studies/case-study-5/.

Plesko, G. (2009, November 29). Designing superior shopping experiences. UX Magazine, Article No. 432. Retrieved from http://uxmag.com/articles/designing-superior-shopping-experiences.

Porat, T., & Tractinsky, N. (2012). It's a pleasure buying here: The effects of web-store design on consumers' emotions and attitudes. Human-Computer Interaction, 27(3), 235-276.

Pressman, R. (1992). Software engineering: A practitioner's approach. New York, NY: McGraw-Hill Companies Inc.

Quesenbery, W. (2001). What does usability mean: Looking beyond 'ease of use'. Retrieved from http://wqusability.com/articles/more-than-ease-of-use.html. Accessed April 2012.

Rajanen, M. (2003). Usability cost-benefit models - different approaches to usability benefit analysis. In Proceedings of the 26th Information Systems Research Seminar in Scandinavia (IRIS26), Haikko, Finland. Retrieved from http://www.tol.oulu.fi/users/mikko.rajanen/iris26_rajanen.pdf.

Rajper, S., Shaikh, A. W., Shaikh, Z. A., & Amin, I. (2012). Usability cost benefit analysis using a mathematical equation. Emerging Trends and Applications in Information Communication Technologies, Communications in Computer and Information Science, 281, 349-360.

Rigsbee, S., & Fitzpatrick, W. B. (2012). User-centered design: A case study on its application to the Tactical Tomahawk Weapons Control System. Johns Hopkins APL Technical Digest, 31(1), 76.

Rhodes, J. (2001). A business case for usability. Retrieved from http://www.webword.com/moving/businesscase.html.

Rohn, J. A. (2005). Cost-justifying usability in vendor companies. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 185-214). San Francisco, CA: Morgan Kaufmann.

ROI improvement by redesigning your website. (2012, May 31). Retrieved from http://www.webmasterstudio.com/blog/web-design/roi-improvement-by-redesigning-your-website.aspx.

Rosenberg, D. (2004). The myths of usability ROI. Interactions, 11(5), 22-29.

Sherman, P. (2006). Usability success stories: How organizations improve by making easier-to-use software and web sites. Burlington, VT: Ashgate Publishing Company.

Singh, K., & Xie, M. (2008). Bootstrap: A statistical method. Unpublished working paper, Rutgers University. Retrieved from http://www.stat.rutgers.edu/home/mxie/stat586/handout/Bootstrap1.pdf.

Streiner, D. L. (1996). Maintaining standards: Differences between the standard deviation and standard error, and when to use each. Canadian Journal of Psychiatry, 41, 498-502.


Sumanth, D. J., Omachonu, V. K., & Beruvides, M. G. (1990). A review of the state-of-the-art research on white-collar/knowledge worker productivity. International Journal of Technology Management, 5, 337-355.

Tahir, M., & Rozells, G. (2007). Growing a business by meeting (real) customer needs. In R. Carol & J. James (Eds.), User-centered design stories (pp. 99-110). San Francisco, CA: Morgan Kaufmann.

Theusabilityblog (2011, August 12). Can you quantify your site redesign's ROI? Retrieved from http://usability.com/2011/08/12/this-is-post-2-for-testing/.

Think Brownstone Web Site (2013). The importance (and ROI) of great user experience design. Retrieved from http://www.thinkbrownstone.com/the-importance-and-roi-of-great-user-experience/.

Tomlin, W. C. (2009, July). eCommerce ROI: Why usability always beats advertising. Retrieved from http://www.usefulusability.com/ecommerce-roi-why-usability-always-beats-advertising/.

Tolk, J., Cantu, J., & Beruvides, M. G. (2013). High reliability organization research: Transforming theory into practice, a literature review for healthcare. In ASEM Annual Conference on Engineering Management.

Uldall-Espersen, T. (2005, September). Benefits of usability work - does it pay off? In International COST294 Workshop on User Interface Quality Models (p. 7).

Usability Sciences: Lee.com e-commerce case study. (2009, June 24). Retrieved from http://www.usabilitysciences.com/sites/default/files/pdf/Lee_Case_Study_609_0.pdf.

Usability Sciences: Search success jumps to 60% and improves re-purchase intent. (2008, February 20). Retrieved from http://www.usabilitysciences.com/print/enterprise-search-success-jumps-to-60-from-12-when-attitudinal-analytics-recommendations-based-on-visit-intent-implemented.

User Vision Web Site: Britannia Web Site Case Study. (2004). Retrieved from http://www.uservision.co.uk/resources/case-studies/britannia-building-society/.

User Vision Web Site: Careers Scotland Case Study. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/careers-scotland/.

User Vision Web Site: Emirates Airline Case Study. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/ema/.

User Vision Web Site: Honeywell Case Study. (2007). Retrieved from http://www.uservision.co.uk/resources/case-studies/honeywell/.

User Vision Web Site: KeyPoint Case Study. (2013). Retrieved from http://www.uservision.co.uk/resources/case-studies/keypoint-technologies/.


User Vision Web Site: Sainsbury's Bank. (2009). Retrieved from http://www.uservision.co.uk/resources/case-studies/sainsburys-bank/.
User Vision Web Site: Student Loans Company. (2006). Retrieved from http://www.uservision.co.uk/resources/case-studies/student-loans-company/.
User Vision Web Site: Think Property. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/think-property/.
Vermeeren, A. P., Attema, J., Akar, E., de Ridder, H., Von Doorn, A. J., Erbuğ, Ç., Berkman, A. E., & Maguire, M. C. (2008). Usability problem reports for comparative studies: Consistency and inspectability. Human–Computer Interaction, 23(4), 329-380.
Wilson, C., & Rosenbaum, S. (2005). Categories on return on investment and their practical implications. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 215-263). San Diego, CA: Morgan Kaufmann.
Wixon, D., & Jones, S. (1995). Usability for fun and profit: A case study of the design of DEC RALLY version 2. In C. Lewis & P. Polson (Eds.), Human-Computer Interface Design: Success Cases, Emerging Methods, and Real World Context (pp. 3-35). New York, NY: Springer-Verlag.
Wong, C. Y. (2003). Applying cost-benefit analysis for usability evaluation studies on an online shopping website. In 19th International Symposium on Human Factors in Telecommunication.



CHAPTER IV
RESEARCH 2: GOAL PROGRAMMING MODEL FOR USABILITY BENEFIT EXPECTATIONS

This study is an optimization effort in the cost justification of usability research domain. The optimization approach utilized is goal programming, which uses soft constraints rather than the traditional hard constraints of other optimization models: possible deviations from predetermined target values are evaluated. For this research, the target values are set to the lower-quantile values for increased sales and website traffic, and decreased error rate and task time, as found in Aydin and Beruvides (2014a). The study includes two separate optimization models. The first model provides the expected percentage increases in sales and traffic rate on a company website following a usability investment to improve the ease of use of the site; it is primarily helpful for e-commerce companies. The second model provides the expected percentage decreases in user error rates and task times for any type of software; it is mainly developed for companies using software internally, e.g., over an intranet.

This paper is to be submitted to the journal Human-Computer Interaction.


Goal Programming Model for Usability Assumptions

Abstract

The key objective of this study is to optimize usability benefit expectations, taking into consideration the fact that different types of companies expect different benefits. The paper consists of two parts: (1) an optimization model for the expected percentage increases in sales and in website traffic, and (2) an optimization model for the expected percentage changes in error rate and in task time. The optimization approach used is goal programming. Finally, a sensitivity analysis is performed to test how sensitive the results are to small changes in the constraints and the variables.

Introduction

Usability is defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-11 guidelines, 1998, p. 2). Usability investments offer increased benefits and decreased costs in various categories for many different types of companies. Moreover, end users benefit from more usable software through improved efficiency, effectiveness, and satisfaction. The benefits of usability investment have been justified by real case studies, mainly for vendor companies that sell user-friendly software, companies that develop user-friendly software for internal use, and IT companies that develop and sell user-friendly software. Some success stories with usability investments are listed below:

- American Airlines decreased maintenance and support costs by 60% to 90% after investing in usability (Harrison, et al., 1994).
- La Quinta Hotel Chain achieved a revenue growth of 83% in a year by improving website usability (Peterson, 2007).
- The redesign of DEC RALLY software resulted in an 80% increase in revenues for the developing organization (Wixon and Jones, 1995).
- Online purchases from the DELL Computer Inc. website rose from $1 million per day in September 1998 to $34 million per day in March 2000 after usability improvements (Black, 2002).
- A 90% reduction in customer errors was achieved on an e-commerce website (Karat, 2005).
- Mean error rates per screen decreased by 67% after usability enhancements at a large wireless communication carrier (Cope and Uliano, 1995).
- The New York Stock Exchange (NYSE) upgraded its computer system with more usable equipment, and error rates dropped by 10% (Gibbs, 1997).
- Task times at American Express dropped from 17 minutes to 4 minutes (a reduction of over 75%) after the redesign of its computer interface for bank authorizations (Gibbs, 1997).
- Time to checkout was reduced by 40% on the Performance Bikes website after the redesign (Nielsen and Gilutz, 2003).
- Time to register decreased by 53%, from an average of 15 minutes to 7 minutes, on the Breastcancer.org website (Usability First, 2012).
- Training times for the redesigned American Express bank-authorization interface fell from 12 hours to two hours (Gibbs, 1997).

Apart from these real cases, there are usability cost justification models developed by usability researchers that help practitioners estimate the possible rate of return of usability. The main points of these models are determining possible benefit categories, estimating the benefit amounts, and comparing them against the costs of the investment. Estimating the benefits requires crucial assumptions. Aydin and Beruvides (2014a) explored the numerical ranges and distributions for some of the usability benefit types. Beyond that, there are currently no studies that help usability practitioners estimate usability benefit expectations, and very small variations in these expectations impact the justification results enormously. This research study is built upon the findings of Aydin and Beruvides (2014a) for the increased sales, increased website traffic, decreased error rate, decreased task time, and decreased training time categories of usability benefits. In Aydin and Beruvides (2014a), percentage-change data for these usability expectations were collected from the literature, their distributions were determined, and bootstrap analysis was used to estimate the lower quantiles of the benefits. Moreover, the interactions were analyzed. This paper is a step towards optimizing these benefit expectations: a tool is developed that helps practitioners justify usability by allowing different weight assignments to different benefit categories. No prior study has tried to model the benefit expectations used in usability cost justification, so there is a strong need for optimization of these expectations. Thus, this research study tackles the following research questions: 'How can the usability benefit expectations used for usability cost justification be optimized? How should the target values for the goal programming model be decided upon? How can usability practitioners retrieve different values from this model to justify usability investments depending on the company type and the key expectations from usability investments?' Considering these research problems, this study hypothesizes that:

H04: The optimal usability benefit expectation values given by the model do not change according to the weights assigned to positive and negative deviations from target values.
H14: The optimal usability benefit expectation values differ based on the weights assigned to positive and negative deviations from target values.

Methodology

"Several managerial decision-making problems can be modeled by using goals rather than hard constraints more accurately" (Ragsdale, 1998, p. 265). Soft constraints represent target values that are desirable to achieve. In this research, the target values for the usability benefit variables are set to the bootstrap statistics for the lower quantiles of the bootstrap samples resampled from the data table.

The reason for using the lower quantiles is to achieve conservative results for the cost justification of usability efforts. All past studies on the cost justification of usability have proposed assuming conservative usability benefit expectations, and in this study the models create conservative results through the lower-quantile selection. There are two models: (1) sales and traffic rate, and (2) error rate and task time. Training time was eliminated because no significant correlation was detected with any other variable. The two-way interactions sales versus traffic rate and task time versus error rate were found significant in Aydin and Beruvides (2014a); therefore, the focus of this goal programming study is on these two correlations.

Increased Sales and Increased Traffic Rate Model

The objective is to minimize the maximum deviation from the target values for the means of increased sales and increased traffic rate. The variables of the model are shown in Table 4.1. As a reminder, the target values for the main decision variables were found as follows (Aydin and Beruvides, 2014a). Data collection yielded a table of percentage changes in usability benefits. Bootstrap samples were created by resampling the observations in this data table, and the lower quantile of each bootstrap sample was found. The mean of these lower quantiles was picked as the target value for each goal in this study (population means MS, MTf). Moreover, the interaction constraints of the two goal programming models were found in Aydin and Beruvides (2014a) by utilizing Spearman's rho test and ordinary least-squares regression.
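The target-setting procedure described above can be sketched in a few lines. This is an illustrative sketch only: the data list below is a synthetic placeholder (the actual percentage-change table appears in Aydin and Beruvides, 2014a), and the quantile level and bootstrap count are assumed values.

```python
import random

def bootstrap_lower_quantile_mean(data, q=0.25, n_boot=2000, seed=42):
    """Mean of the q-quantiles of n_boot bootstrap resamples of `data`."""
    rng = random.Random(seed)
    quantiles = []
    for _ in range(n_boot):
        # resample with replacement, same size as the original data table
        sample = sorted(rng.choice(data) for _ in range(len(data)))
        # simple empirical quantile: value at index floor(q * (n - 1))
        quantiles.append(sample[int(q * (len(sample) - 1))])
    return sum(quantiles) / n_boot

# synthetic stand-in for observed percentage increases in sales
pct_sales_increase = [0.12, 0.25, 0.40, 0.55, 0.83, 1.10]
target_M_S = bootstrap_lower_quantile_mean(pct_sales_increase)
```

Because the lower quantile of each resample is averaged, the resulting target sits toward the conservative end of the observed range, which matches the conservative-justification rationale above.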


Table 4.1- Variables of the Sales x Traffic Rate Model

Main decision variables:
  S: percentage increase in sales due to usability improvement
  Tf: percentage increase in traffic rate due to usability improvement
Deviational variables (i = S, Tf):
  di-: underachievement relative to target value Mi
  di+: overachievement relative to target value Mi
Weights:
  wS-: weight of the negative deviation from MS
  wS+: weight of the positive deviation from MS
  wTf-: weight of the negative deviation from MTf
  wTf+: weight of the positive deviation from MTf
Maximum weighted deviation from target means:
  Q

The weights listed in Table 4.1 shall be assigned by usability practitioners depending on the significance of the benefit types, based on company type and company goals. Variable Q is the maximum of the weighted percentage deviations from the target means. The weighted percentage deviations are calculated as follows:

  wi- * di- / Mi   for underachievements    (Eq. 4.1)
  wi+ * di+ / Mi   for overachievements     (Eq. 4.2)
  where i = S, Tf.

The objective of the model is to minimize the maximum of the weighted percentage deviations from the target means, that is, to minimize the predefined variable Q:

  MIN Q                                                        (Eq. 4.3)
  subject to
  wS- * dS- / MS ≤ Q      (sales goal)                         (Eq. 4.4)
  wTf- * dTf- / MTf ≤ Q   (traffic goal)                       (Eq. 4.5)
  S + dS- - dS+ = MS      (actual versus target sales)         (Eq. 4.6)
  Tf + dTf- - dTf+ = MTf  (actual versus target traffic rate)  (Eq. 4.7)
  S = -0.259746244 + 5.070378837 * Tf                          (Eq. 4.8)
  di-, di+ ≥ 0, where i = S, Tf                                (Eq. 4.9)
  wS-, wTf- ≥ 0                                                (Eq. 4.10)

The first set of constraints sets the model up as a MINMAX optimization: because the objective function minimizes Q, these constraints ensure that Q is the maximum weighted deviation from the target means. Note that the objective function ignores positive deviations from the targets, because larger percentage increases in sales or traffic are favorable; however, a smaller increase in sales or traffic than the target value should be penalized. This helps usability practitioners perform a conservative justification of usability. The values of these penalty weights are to be decided by the users of the model. Any biased manipulation of the weights will not make a significant overall impact, since an enormous weight assignment for any under- or over-achievement of one decision variable will cause serious underachievement in the other decision variable. Thus, users of the model are encouraged to assign realistic weights depending on their company type. The second set of constraints sets the actual-versus-target functions. The third constraint is the two-way interaction between sales and traffic rate, with sales as the Y variable and traffic rate as the X variable. The last set of constraints sets all decision variables as nonnegative.
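The model can be sketched in pure Python. The sketch below is illustrative, not the author's Excel implementation: it exploits the fact that Eq. 4.8 ties S to Tf, reducing the problem to a one-dimensional search, and it bounds all four weighted deviations by Q (consistent with Q's definition in Table 4.1 and with the equal-weight solution later reported in Table 4.3). The target values MS = 0.288 and MTf = 0.412 are those implied by Table 4.3.

```python
M_S, M_TF = 0.288, 0.412            # target lower-quantile means (per Table 4.3)
A, B = -0.259746244, 5.070378837    # Eq. 4.8: S = A + B * Tf

def max_weighted_deviation(tf, w_s_m, w_s_p, w_tf_m, w_tf_p):
    """Q: largest weighted percentage deviation from the targets (Eq. 4.1-4.2)."""
    s = A + B * tf                                             # Eq. 4.8
    d_s_m, d_s_p = max(M_S - s, 0.0), max(s - M_S, 0.0)        # Eq. 4.6
    d_tf_m, d_tf_p = max(M_TF - tf, 0.0), max(tf - M_TF, 0.0)  # Eq. 4.7
    return max(w_s_m * d_s_m / M_S, w_s_p * d_s_p / M_S,
               w_tf_m * d_tf_m / M_TF, w_tf_p * d_tf_p / M_TF)

def solve(weights=(1.0, 1.0, 1.0, 1.0), lo=0.0, hi=2.0):
    """Minimize Q over Tf by ternary search (Q is a max of convex terms)."""
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if max_weighted_deviation(m1, *weights) < max_weighted_deviation(m2, *weights):
            hi = m2
        else:
            lo = m1
    tf = (lo + hi) / 2.0
    return A + B * tf, tf

s, tf = solve()   # equal weights recover S ~ 0.475, Tf ~ 0.145, as in Table 4.3
```

Changing the weight tuple reproduces the trade-off between the two goals explored in the sensitivity analysis below.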

93


Decreased Task Time and Decreased Error Rate Model

The objective is to minimize the maximum deviation from the target values for the means of decreased task time and decreased error rate. The variables of the model are shown in Table 4.2.

Table 4.2- Variables of the Error Rate and Task Time Model

Main decision variables:
  TaT: absolute percentage decrease in task time due to usability improvement
  ER: absolute percentage decrease in error rate due to usability improvement
Deviational variables (i = TaT, ER):
  di-: underachievement relative to target value Mi
  di+: overachievement relative to target value Mi
Weights:
  wER-: weight of the negative deviation from MER
  wER+: weight of the positive deviation from MER
  wTaT-: weight of the negative deviation from MTaT
  wTaT+: weight of the positive deviation from MTaT
Maximum weighted deviation from target means:
  Z

The weights listed in Table 4.2 should be assigned by usability practitioners depending on the significance of the benefit types, based on company type and company key expectations. Variable Z is the maximum of the weighted percentage deviations from the target means. The weighted percentage deviations are calculated as follows:

  wi- * di- / Mi   for underachievements    (Eq. 4.11)
  wi+ * di+ / Mi   for overachievements     (Eq. 4.12)
  where i = ER, TaT.

The objective of the model is to minimize the maximum of the weighted percentage deviations from the target means, that is, to minimize the predefined variable Z.


  MIN Z                                                         (Eq. 4.13)
  subject to
  wTaT- * dTaT- / MTaT ≤ Z   (task time goal)                   (Eq. 4.14)
  wER- * dER- / MER ≤ Z      (error rate goal)                  (Eq. 4.15)
  TaT + dTaT- - dTaT+ = MTaT (actual versus target task time)   (Eq. 4.16)
  ER + dER- - dER+ = MER     (actual versus target error rate)  (Eq. 4.17)
  TaT = 0.222958075 + 0.44757381 * ER                           (Eq. 4.18)
  di-, di+ ≥ 0, where i = ER, TaT                               (Eq. 4.19)
  wER-, wTaT- ≥ 0                                               (Eq. 4.20)

The first set of constraints sets the model up as a MINMAX optimization: because the objective function minimizes Z, these constraints ensure that Z is the maximum weighted deviation from the target means. Note that the objective function ignores positive deviations from the targets, because larger percentage decreases in error rate and task time are favorable; however, a smaller decrease in error rate or task time than the target value should be penalized. This helps usability practitioners perform a conservative justification of usability. The second set of constraints sets the actual-versus-target functions. The third constraint is the two-way interaction between task time and error rate, with task time as the Y variable and error rate as the X variable. The last set of constraints sets all decision variables as nonnegative. The next section presents the sensitivity analysis performed to see how possible changes in the target values and relationship constraints impact the actual values, overachievements, and underachievements. The MS Excel spreadsheet Solver package was used to execute the sensitivity analysis.
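The same one-dimensional reduction works for this model, since Eq. 4.18 ties TaT to ER. The sketch below is illustrative rather than the author's Excel implementation; the targets MTaT = 0.258 and MER = 0.484 are the values implied by the equal-weight sensitivity report in Table 4.5, and all four weighted deviations are bounded by Z, consistent with Z's definition in Table 4.2.

```python
M_TAT, M_ER = 0.258, 0.484          # target lower-quantile means (per Table 4.5)
A, B = 0.222958075, 0.44757381      # Eq. 4.18: TaT = A + B * ER

def max_weighted_deviation(er, w=(1.0, 1.0, 1.0, 1.0)):
    """Z: largest weighted percentage deviation from the targets."""
    tat = A + B * er                                  # interaction (Eq. 4.18)
    devs = (max(M_TAT - tat, 0.0) / M_TAT, max(tat - M_TAT, 0.0) / M_TAT,
            max(M_ER - er, 0.0) / M_ER, max(er - M_ER, 0.0) / M_ER)
    return max(wi * di for wi, di in zip(w, devs))

lo, hi = 0.0, 1.0                    # ternary search on the convex Z(ER)
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
    if max_weighted_deviation(m1) < max_weighted_deviation(m2):
        hi = m2
    else:
        lo = m1
er = (lo + hi) / 2.0
tat = A + B * er    # equal weights recover TaT ~ 0.357, ER ~ 0.298 (Table 4.5)
```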

95

Sensitivity Analysis

The two goal programming models developed are non-linear, as multiplications of the variables are involved. The significant points to consider in the sensitivity analysis of non-linear optimization in the MS Excel Solver module are the reduced gradients for the variables and the Lagrange multipliers for the constraints (Ragsdale, 1998).

Sensitivity Analysis for the Sales and Traffic Model

Assuming all the weights are assigned a value of 1, meaning that the company weighs under- and over-deviations for increased sales and increased traffic rate equally, the sensitivity analysis results are provided in Table 4.3.

Table 4.3- Sales Traffic Model Sensitivity Report with Equal Weights

Adjustable cells:
  Cell   Name                           Final Value   Reduced Gradient
  $C$4   Actual Percent Change Sales    0.475         0
  $D$4   Actual Percent Change Traffic  0.145         0
  $C$5   Under Sales                    0             0.479
  $D$5   Under Traffic                  0.267         0
  $C$6   Over Sales                     0.187         0
  $D$6   Over Traffic                   0             2.429

Constraints:
  Cell   Name            Final Value   Lagrange Multiplier
  $C$11  SalesxTraffic   -0.259        0.479
  $C$7   Goal Sales      0.288         -0.479
  $D$7   Goal Traffic    0.412         2.429

The same results are attained if the assigned weights are 10,000 or 100,000 for the under- and over-achievements for sales and traffic. This indicates that the model is very robust: as long as the four weights are assigned equal values by the usability researcher and/or engineering manager using the model, the above results will be achieved.

96

In regular non-linear models, the reduced gradient shows the change in the objective function if the corresponding decision variable changes by one unit. For instance, if the underachievement from the sales target increases by 1 unit, the objective function increases by 0.479. Similarly, if the overachievement from the traffic target increases by 1 unit, the objective function increases by 2.429. The Lagrange multiplier, on the other hand, shows that if the right-hand side of the interaction constraint of sales and traffic increases by 1 unit, the objective function increases by 0.479. It should be clearly noted that the objective function is not the point of interest in this goal programming model; the key interest is the actual sales and traffic changes depending on the weights assigned to under- and over-achievements. The possible outcomes when the penalty weights differ can be seen in Figure 4.1.

[Figure: line plot of actual Sales and Traffic values against the penalty weight ratio, from 1 to 100 through 100 to 1]

Figure 4.1 Sales Traffic Model Sensitivity Analysis


The x-axis in Figure 4.1 shows the ratio of the penalty weight assigned to deviations from the sales target value to the penalty weight assigned to deviations from the traffic target value. The plot starts with a 1 to 100 ratio; an iteration of the model with this weight ratio results in a traffic rate very close to its target value, while sales are very high. A first glance at the plot might suggest using these weights, since both sales and traffic rate are high; however, there is a cost issue that needs to be addressed: achieving these levels of increase in sales and traffic requires more usability investment, which causes higher initial costs. The usability practitioner or engineering manager using the model must conduct a cost-benefit analysis. In the region where the weight ratios are between 1 to 30 and 1 to 100 (the far left of the x-axis), there is an indication that the benefit expectation results for sales and traffic rate start to become insensitive to further changes along the x-axis. Figure 4.2 shows this insensitivity more clearly.

[Figure: line plot of actual Sales and Traffic values against the penalty weight ratio, from 1 to 600 through 1 to 100]

Figure 4.2 Sensitivity Analysis for Higher Weights for Traffic

In Figure 4.2, the ratio of 1 to 200 is the cutoff point after which the insensitivity starts. Users of the model need not explore the region below 1 to 200, as it is practically useless. Moreover, the far right end of the x-axis in Figure 4.1 also shows an indication of insensitivity; Figure 4.3 confirms it. The usability benefit expectations are insensitive to changes in ratios higher than 200 to 1.

[Figure: line plot of actual Sales and Traffic values against the penalty weight ratio, from 100 to 1 through 600 to 1]

Figure 4.3 Sensitivity Analysis for Higher Weights for Sales

To sum up the sensitivity findings for this model, users should focus on the region between the ratios of 1 to 200 and 200 to 1.
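The weight-ratio sweep summarized above can be reproduced with a short script. This is an illustrative reconstruction, not the author's Excel workbook: the targets are those implied by Table 4.3, Eq. 4.8 supplies the interaction, and a single weight per variable penalizes both its under- and over-deviation, matching the one-ratio-per-point x-axis of Figures 4.1-4.3.

```python
M_S, M_TF = 0.288, 0.412            # targets implied by Table 4.3
A, B = -0.259746244, 5.070378837    # Eq. 4.8: S = A + B * Tf

def solve(w_s, w_tf, lo=0.0, hi=2.0):
    """Minimize the larger weighted percentage deviation over Tf."""
    def q(t):
        s = A + B * t
        return max(w_s * abs(s - M_S) / M_S, w_tf * abs(t - M_TF) / M_TF)
    for _ in range(200):             # ternary search; q is convex in t
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if q(m1) < q(m2):
            hi = m2
        else:
            lo = m1
    t = (lo + hi) / 2.0
    return A + B * t, t

# sweep the weight ratios plotted on the x-axis of Figure 4.1
for w_s, w_tf in [(1, 100), (1, 30), (1, 5), (1, 1), (5, 1), (30, 1), (100, 1)]:
    s, tf = solve(w_s, w_tf)
    print(f"weights {w_s} to {w_tf}: S = {s:.3f}, Tf = {tf:.3f}")
```

As the ratio moves left (heavier traffic penalty), Tf is pulled toward its 0.412 target and S rises along the regression line; as it moves right, S is pulled toward 0.288 and Tf falls, mirroring the shape of Figure 4.1.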



Sensitivity Analysis for the Task Time and Error Rate Model

The non-linear optimization sensitivity report from the MS Excel Solver module is given in Table 4.5.

Table 4.5- Task Time Error Rate Model Sensitivity Report with Equal Weights

Adjustable cells:
  Cell   Name                              Final Value   Reduced Gradient
  $C$4   Actual Percent Change Task Time   0.357         0
  $D$4   Actual Percent Change Error Rate  0.298         0
  $C$5   Under Task Time                   0             3.879
  $D$5   Under Error Rate                  0.186         0
  $C$6   Over Task Time                    0.098         0
  $D$6   Over Error Rate                   0             1.736

Constraints:
  Cell   Name             Final Value   Lagrange Multiplier
  $C$11  ERxTaT           0.223         3.879
  $C$7   Goal Task Time   0.258         -3.879
  $D$7   Goal Error Rate  0.484         1.736

As in the discussion of the sales and traffic model, the interest is not in the objective function; therefore, there is no need to explore the reduced gradients and the Lagrange multipliers. See Figure 4.4 for the possible outcomes of the model with different weights assigned.


[Figure: line plot of actual Task Time and Error Rate values against the ratio of task time weight to error rate weight, from 1 to 100 through 100 to 1]

Figure 4.4 Task Time and Error Rate Model Sensitivity Analysis

The region at the far left of the plot appears to be the optimal region, since both the task time and error rate reductions are high. However, the costs associated with achieving those usability benefit levels should be considered; a cost-benefit analysis is needed. Penalty weight ratios below 1 to 100 and above 100 to 1 were analyzed in order to explore sensitivity. See Figure 4.5 for small weight ratios.


[Figure: line plot of actual Task Time and Error Rate values against the ratio of task time weight to error rate weight, from 1 to 600 through 1 to 100]

Figure 4.5 Sensitivity Analysis for Higher Weights for Error Rate

The cutoff point is the 1 to 200 penalty weight ratio; usability benefit expectations are insensitive to further decreases in the ratio. See Figure 4.6 for high weight ratios.


[Figure: line plot of actual Task Time and Error Rate values against the ratio of task time weight to error rate weight, from 100 to 1 through 600 to 1]

Figure 4.6 Sensitivity Analysis for Higher Weights for Task Time

The cutoff point is 200 to 1 penalty weight ratio. Usability benefit expectations for task time and error rate are insensitive to weight ratio changes after this point.

Conclusion

The models developed in this study take into consideration (1) the population lower quantiles of increased sales and traffic and of decreased task time and error rate, (2) the correlations between sales and traffic and between task time and error rate, and (3) the companies' preferences among these usability benefits. The models give engineering managers and usability practitioners suggested expectations for increased sales and traffic. These numbers are more reliable than subjective estimations


since they are based on statistical methods. The next step in this optimization effort is investigating the impact of different quantiles on the interaction of the variables.

References

Aydin, B., & Beruvides, M. G. (2014a). A methodology for data collection and analysis to assist cost justification of usability. Working paper, Industrial Engineering Department, Texas Tech University, Lubbock, Texas, 28 p.
Black, J. (2002). Usability is next to profitability. BusinessWeek Online. Retrieved from http://www.businessweek.com/technology/content/dec2002/tc2002124_2181.htm.
Cope, M. E., & Uliano, K. C. (1995). Cost-justifying usability engineering: A real world example. In Proceedings of the Human Factors and Ergonomics Society 39th Annual Meeting: Designing for the Global Village (pp. 263-267). Santa Monica, CA: Human Factors and Ergonomics Society.
Gibbs, W. W. (1997, July). Taking computers to task. Scientific American, 7/97, 1-9. Retrieved from http://www.strassmann.com/pubs/sciam/.
Harrison, M., Henneman, R., & Blatt, L. (1994). Design of a human factors cost-justification tool. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 203-239). San Diego, CA: Academic Press.
ISO/IEC. (1998). ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability.
Karat, C. M. (2005). A business case approach to usability cost justification for the web. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 103-142). San Francisco, CA: Morgan Kaufmann.
Nielsen, J., & Gilutz, S. (2003). Usability return on investment. Fremont, CA: Nielsen Norman Group.
Peterson, M. R. (2007). UX increases revenue: Two case studies. UX User Experience Magazine, 6(2), 4-6.
Ragsdale, C. T. (1998). Spreadsheet modeling and decision analysis (2nd ed.). Cincinnati, OH: South-Western College Publishing.
Usability First (2012). Case studies. Foraker Labs. Retrieved from http://www.usabilityfirst.com/about-usability/usability-roi/case-studies.
Wixon, D., & Jones, S. (1995). Usability for fun and profit: A case study of the design of DEC RALLY version 2. In C. Lewis & P. Polson (Eds.), Human-Computer Interface Design: Success Cases, Emerging Methods, and Real World Context (pp. 3-35). New York, NY: Springer-Verlag.



CHAPTER V
RESEARCH 3: DETERMINING CONSTRAINTS OF GOAL PROGRAMMING MODEL FOR USABILITY BENEFITS

In Aydin and Beruvides (2014b), goal programming models were developed for the increased sales and traffic rate, and decreased error rate and task time, benefits that result from usability investments. The interaction constraints in these models were developed using the ordinary least-squares regression method; thus, an equal slope was assumed for all levels of the variables. This research study, however, focuses on determining possible slope changes across different quantiles of the variables. In order to detect quantile-based interaction changes, the quantile regression method is used.

This paper will be submitted to the Journal of Human-Computer Interaction.


Determining Constraints of Goal Programming Model for Usability Benefits

Abstract

The objective of this study is to analyze the interaction constraints of the goal programming models developed by Aydin and Beruvides (2014b). In that work, two goal programming models were developed for the increased sales and traffic rate, and decreased error rate and task time, benefits. The ordinary least-squares regression method was used to determine the two-way interactions, and an equal slope was assumed for all quantiles of the dependent variables. The results of this study indicated no significant differences in the slopes of the regression lines across quantiles. Thus, this research supports the accuracy of the interaction constraints used in the Aydin and Beruvides (2014b) models.

Introduction

Developing software that is easy to use is not easy to do. It requires focusing on the user needs that are feasible to implement. Understanding user needs and reflecting them in the design stage of software development is the duty of usability practitioners. However, usability practitioners have one more responsibility to handle: convincing decision makers to invest in usability. Software product managers have to consider various investment alternatives when allocating development resources (Bias and Mayhew, 1994). The majority of these alternatives are easy to quantify for cost-benefit analysis; usability investments' benefits, on the other hand, are hard to quantify. That makes cost-benefit analysis for usability investments a serious challenge for usability practitioners. There are many existing cost justification models for usability investments. Using these models, estimating the costs of usability investments is fairly straightforward, yet estimating the benefits is tricky (Mayhew and Mantei, 1994). For instance, in Mayhew and Mantei's (1994) model, costs are calculated as follows:


- Plan the usability engineering program to determine the 'tasks'.
- Identify the 'usability techniques' required to perform the tasks.
- Further break the techniques down into required 'steps'.
- Break down the steps into 'personnel hours and equipment costs'.

As can be seen, the cost calculations are very precise, and there are not many assumptions to deal with; the only assumption is estimating the personnel hours needed to perform a step. The rest of the process is finding parameters such as equipment costs and labor costs. Estimating the benefits, on the other hand, requires the following steps, which involve several crucial assumptions:

- Select the best set of potential benefits depending on the audience.
- Estimate each benefit as the difference between the case in which the usability application is performed and the case in which it is not.
- For each potential benefit category, select a unit of measurement and estimate the magnitude of the benefit for this unit of measurement.
- Multiply the estimated benefit per unit of measurement by the number of units (Mayhew and Mantei, 1994).

The crucial assumption here is estimating the magnitude of the benefits. In all

cost justification models this is the most important assumption as any small variation impacts the cost-benefit analysis results enormously. Aydin and Beruvides (2014a) demonstrated possible magnitude values for percentage of change in sales, website traffic, error rate, and task time. Moreover, Aydin and Beruvides (2014b) developed goal programming models that showed the magnitude of percentage changes in sales, website traffic, error rate and task time by taking into consideration the interactions and weights. This research study is an enhancement on Aydin and Beruvides (2014a) and Aydin and Beruvides (2014b). In Aydin and Beruvides (2014a), it was determined that increased traffic rate and increased sales were positively correlated. Moreover, it was found that decreased error rate and decreased task time were detected to be positively correlated. However, it is not known if the slope of the regression line for traffic rate and sales are equal for all quantiles of increasing sales.


Texas Tech University, Burchan Aydin, August 2014

For example, does a 1% increase in traffic rate increase sales by the same percentage at the 0.25 quantile of sales percentage change as at the 0.9 quantile, or is the relationship between increased traffic rate and increased sales nonconstant? Similarly, is a 1% decrease in error rate associated with the same decrease in task time at the 0.25 quantile of task time as at the 0.9 quantile, or is the relationship between decreased error rate and decreased task time nonconstant?

Method

To test for correlation between sales and traffic, and between error rate and task time, the nonparametric Spearman rho test was used in Aydin and Beruvides (2014a). The correlated variables were then modeled by ordinary least squares regression. In this research study, quantile regression modeling is utilized to examine:

• The effect of website traffic on sales for usability investments with low, medium, and high percentage changes in sales,

• The effect of error rate on task time for usability investments with low, medium, and high percentage changes in task time.

Thus, the main difference is exploring the impacts of the independent variables on the dependent variables for different quantiles of the dependent variable. The first quantile regression analysis was conducted for sales and traffic rate: sales is treated as the dependent variable and traffic rate as the independent variable. The second was conducted for error rate and task time: decreased task time is treated as the dependent variable and decreased error rate as the independent variable. To justify the use of quantile regression modeling, the following assumptions should be met (Katchova, 2013; Koenker, 2005; Buchinsky, 1998):

• The coefficients determined by the quantile regression must be significantly different from zero; that is, zero should lie outside the confidence interval for the quantile regression coefficient. Confidence intervals are found from the output of the quantile regression in the R software, with the significance level set at 5%.

• The coefficients determined by the quantile regression should be significantly different from the coefficients estimated by ordinary least squares (linear) regression, as reported in Aydin and Beruvides (2014a). In other words, the confidence interval found via the R output for each quantile regression coefficient should exclude the linear regression coefficient.

• The coefficients determined by quantile regression for different quantiles should be significantly different from each other. To check this, quantile regression plots are analyzed in R.

• Heteroscedasticity should be checked: the Breusch-Pagan test statistic must be significantly different from zero. In R, the command ‘bptest’ applies the Breusch-Pagan test. The null hypothesis of the Breusch-Pagan test is that the error variances are equal; the alternative hypothesis is that the error variances are a multiplicative function of one or more variables. A large chi-square statistic indicates that heteroscedasticity is present.
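As a concrete illustration of the heteroscedasticity check, the Breusch-Pagan statistic can be computed by hand for a single regressor. The dissertation uses R's `bptest`; the pure-Python sketch and the toy data below are illustrative assumptions, not the dissertation's data or its software.

```python
# Breusch-Pagan test sketch for one regressor (pure Python, illustrative).
# LM = n * R^2 of the auxiliary regression of squared residuals on x,
# compared against the chi-square(1) critical value (3.84 at the 5% level).

def ols(x, y):
    """Simple one-regressor OLS: returns (intercept, slope)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar, slope

def breusch_pagan(x, y):
    a, b = ols(x, y)
    resid_sq = [(yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)]
    # Auxiliary regression: squared residuals on x.
    a2, b2 = ols(x, resid_sq)
    mean_r = sum(resid_sq) / len(resid_sq)
    sst = sum((r - mean_r) ** 2 for r in resid_sq)
    sse = sum((r - (a2 + b2 * xi)) ** 2 for xi, r in zip(x, resid_sq))
    r2 = 1 - sse / sst
    return len(x) * r2  # LM statistic; large values suggest heteroscedasticity

# Made-up data whose scatter widens as x grows.
x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
y = [0.11, 0.18, 0.35, 0.33, 0.70, 0.45, 1.10, 0.50]
lm = breusch_pagan(x, y)
print(round(lm, 3))  # compare against 3.84 (chi-square, 1 df, 5% level)
```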

If all the assumptions are met, the quantile regression results will reliably indicate the marginal relationships between the usability impact factors. This will help software vendors and developers determine how much to invest in usability, since they will know the level of website traffic increase or error rate decrease beyond which no further value accrues to the company. The data table formed in Aydin and Beruvides (2014a) provided the number of data points shown in Table 5.1.
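For readers unfamiliar with how a quantile regression fit differs from OLS, the sketch below fits a quantile line by minimizing the pinball (check) loss with a crude grid search. The grid-search fit and the toy traffic/sales figures are assumptions for illustration only; the actual analysis in this chapter used R's quantile regression routines.

```python
# Quantile regression sketch: minimize the pinball (check) loss over a grid
# of candidate intercepts and slopes. Data and grid are illustrative only.

def pinball_loss(tau, a, b, xs, ys):
    """Sum of check-loss terms for the line y = a + b*x at quantile tau."""
    loss = 0.0
    for x, y in zip(xs, ys):
        u = y - (a + b * x)
        loss += tau * u if u >= 0 else (tau - 1) * u
    return loss

def fit_quantile(tau, xs, ys, a_grid, b_grid):
    best = None
    for a in a_grid:
        for b in b_grid:
            l = pinball_loss(tau, a, b, xs, ys)
            if best is None or l < best[0]:
                best = (l, a, b)
    return best[1], best[2]

# Hypothetical traffic (x) vs. sales (y) percentage changes.
xs = [0.1, 0.2, 0.3, 0.4, 0.5]
ys = [0.15, 0.35, 0.40, 0.65, 0.70]
grid = [i / 100 for i in range(-50, 251)]  # -0.50 .. 2.50 in steps of 0.01
a25, b25 = fit_quantile(0.25, xs, ys, grid, grid)
a75, b75 = fit_quantile(0.75, xs, ys, grid, grid)
print(b25, b75)  # slope of the lower- and upper-quantile lines
```

Comparing the fitted slopes across quantiles is exactly the question this chapter asks of the sales–traffic and task-time–error-rate data.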


Table 5.1 - The Number of Interaction Data Points in the Data Table (Aydin and Beruvides, 2014a)

Usability Benefit                  Sales   Traffic   Error Rate   Task Time
Sales                                –        5          3            3
Traffic                              5        –          2            0
Error Rate                           3        2          –            4
Task Time                            3        0          4            –
Total Sample Size in Data Table     44       19         30           39

The first step of the quantile regression analysis was to use only these interaction data points: sales and traffic (5 data points) and task time and error rate (4 data points). These interaction data were analyzed in the R-Studio software. The second step was to create random samples from the original samples to increase the number of interaction data points. For sales, 1,000 random samples were created with a sample size of 19, in order to match the sample size of the traffic sample in the data table. Each of these 1,000 samples was ranked from smallest to largest (see Table 5.2).


Table 5.2 - Random Samples

Ranks (1 to 19)   Sample 1   Sample 2   Sample 3   ...   Sample 1000
1                 X1,1       X1,2       X1,3       ...   X1,1000
2                 X2,1       X2,2       X2,3       ...   X2,1000
...               ...        ...        ...        ...   ...
19                X19,1      X19,2      X19,3      ...   X19,1000

The quantity of interest is the row averages, which give the expected value of each rank 1 through 19:

E[Rank_i] = (1/1000) Σ_{j=1}^{1000} X_{i,j},   for i = 1, ..., 19.   (Eq. 5.1)

Once the expected values of the ranks were found, they were matched with the traffic data ranked from smallest to largest. The same random sampling procedure was used for the task time data in order to match the 30 error rate data points:

E[Rank_i] = (1/1000) Σ_{j=1}^{1000} X_{i,j},   for i = 1, ..., 30.   (Eq. 5.2)

The last step was conducting the quantile regression in R-Studio and checking the quantile regression assumptions.
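The resampling-and-ranking procedure above can be sketched as follows. The original sales values and the use of resampling with replacement are assumptions for illustration; the dissertation's data and exact sampling scheme are not reproduced here.

```python
# Expected-value-of-ranks sketch: draw 1,000 resamples of size 19 (with
# replacement, an assumption) from a small "original" sample, sort each,
# and average across resamples at each rank position (Eq. 5.1).

import random

random.seed(42)

sales = [0.05, 0.10, 0.22, 0.35, 0.80]  # hypothetical original sample
n_resamples, size = 1000, 19

ranked = []
for _ in range(n_resamples):
    s = sorted(random.choice(sales) for _ in range(size))
    ranked.append(s)

# E[Rank_i] = row average of the i-th order statistic over all resamples.
expected_ranks = [sum(col) / n_resamples for col in zip(*ranked)]
print(len(expected_ranks))                       # 19
print(expected_ranks[0] <= expected_ranks[-1])   # True: ranks stay ordered
```

The resulting 19 expected ranks can then be paired, rank for rank, with the sorted traffic data, as the text describes.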


Results

The quantiles investigated are the lower quantile (25th percentile), the median (50th percentile), and the upper quantile (75th percentile).

Sales and Traffic Quantile Regression Results

Table 5.3 shows the coefficients and the confidence intervals for the sales and traffic quantile regression.

Table 5.3 - Quantile Regression Coefficients and Confidence Intervals for Sales and Traffic with Original Data Points

25th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.05345000    -inf          0.57343
Increased Traffic Rate      5.043170      -inf          inf

50th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.05345000    -inf          inf
Increased Traffic Rate      5.043170      -inf          inf

75th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.15667       -0.32186      inf
Increased Traffic Rate      6.33333       -inf          inf

The infinitely wide confidence intervals are caused by the insufficient number of data points. The coefficients are not significantly different from zero, as the confidence intervals include zero. Moreover, the quantile regression coefficients are not significantly different from the ordinary least squares regression coefficients; see Table 5.4 for a comparison.


Table 5.4 - Comparison of Ordinary Least Squares Regression and Quantile Regression

Increase in sales           OLS Regression   QR at 25th quantile   QR at 50th quantile   QR at 75th quantile
Intercept                   -0.2597          -0.05345000           -0.05345000           -0.15667
Increased website traffic    5.0704           5.043170              5.043170              6.33333

*: Quantile regression coefficient significantly different from zero at 5%
+: Quantile regression coefficient significantly different from the OLS coefficient at 5%

As can be seen, the assumptions of the quantile regression are not satisfied with these 5 original interaction data points. Next, the results for the expected value of ranks data set are shown in Table 5.5.


Table 5.5 - Quantile Regression Coefficients and Confidence Intervals for Sales and Traffic with Random Sampled Ranked Data Set

25th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.10845       -0.25244      -0.08848
Increased Traffic Rate      1.08102        0.88635       1.27745

50th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.07960       -0.24485      -0.00343
Increased Traffic Rate      1.10562        1.05796       1.47636

75th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.05281       -0.29013       0.02167
Increased Traffic Rate      1.24361        1.08203       1.46424

As can be seen, the infinite confidence intervals were eliminated by increasing the sample size. Table 5.6 shows the comparisons with the ordinary least squares regression.


Table 5.6 - Comparison of Ordinary Least Squares Regression and Quantile Regression

Increase in sales           OLS Regression   QR at 25th quantile   QR at 50th quantile   QR at 75th quantile
Intercept                   -0.19047         -0.10845*             -0.07960*             -0.05281
Increased website traffic    1.27180          1.08102*              1.10562*              1.24361*

*: Quantile regression coefficient significantly different from zero at 5%
+: Quantile regression coefficient significantly different from the OLS coefficient at 5%

The coefficients are mostly significantly different from zero, but, as indicated by the confidence intervals, they are not significantly different from the ordinary least squares coefficients. Thus, the assumptions of the quantile regression were not satisfied. The results did show an increase in the impact of website traffic on increased sales for the upper quantile of sales, with the coefficient rising from 1.08 to 1.24; however, this finding cannot be taken as conclusive because the assumptions were not satisfied. Figure 5.1 also shows that the quantile regression results are not significantly different from the ordinary least squares results: the dotted lines, which show the ordinary least squares limits, intersect the confidence limits of the quantile regression.
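The two interval checks applied throughout this section — whether zero, and whether the OLS coefficient, fall outside a quantile regression confidence interval — can be expressed as a small helper. The numbers below are the 25th-quantile traffic coefficient interval and the OLS slope as reported in Tables 5.5 and 5.6; the helper itself is an illustrative sketch, not the dissertation's code.

```python
# Interval-based checks behind the '*' and '+' markers in the comparison
# tables: significance vs. zero, and difference from the OLS coefficient.

def in_interval(value, lower, upper):
    return lower <= value <= upper

def check_coefficient(lower, upper, ols_coef):
    """Return (significant_vs_zero, different_from_ols) for one CI."""
    return (not in_interval(0.0, lower, upper),
            not in_interval(ols_coef, lower, upper))

# 25th-quantile traffic coefficient CI and OLS slope from Tables 5.5-5.6.
sig, diff = check_coefficient(0.88635, 1.27745, 1.27180)
print(sig)   # True  -> coefficient significantly different from zero
print(diff)  # False -> not significantly different from the OLS slope
```

This reproduces the pattern in Table 5.6: the 25th-quantile slope earns a `*` but no `+`, which is why the second quantile regression assumption fails.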


Figure 5.1 - Comparison of the Quantile Regression and Ordinary Least Squares Regression Coefficients for Sales and Traffic Rate

Task Time and Error Rate Quantile Regression Results

Table 5.7 shows the coefficients and the confidence intervals for the task time and error rate quantile regression.


Table 5.7 - Quantile Regression Coefficients and Confidence Intervals for Task Time and Error Rate with Original Data Points

25th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.23254       -inf          inf
Decreased Error Rate        0.3968300     -inf          inf

50th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                   0.22415       -inf          inf
Decreased Error Rate        0.44615       -inf          inf

75th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                   0.21631       -inf          inf
Decreased Error Rate        0.49231       -inf          inf

The confidence intervals are infinitely wide as a result of the insufficient number of data points. The coefficients are not significantly different from zero, as the confidence intervals include zero. Moreover, the quantile regression coefficients are not significantly different from the ordinary least squares regression coefficients; see Table 5.8 for a comparison.


Table 5.8 - Comparison of Ordinary Least Squares Regression and Quantile Regression

Decrease in Task Time       OLS Regression   QR at 25th quantile   QR at 50th quantile   QR at 75th quantile
Intercept                   -0.2597          0.2325397             0.2241538             0.2163077
Decreased Error Rate         5.0704          0.3968254             0.4461538             0.4923077

*: Quantile regression coefficient significantly different from zero at 5%
+: Quantile regression coefficient significantly different from the OLS coefficient at 5%

As can be seen, the assumptions of the quantile regression are not satisfied with these 4 original interaction data points. Next, the results for the expected value of ranks data set are shown in Table 5.9.


Table 5.9 - Quantile Regression Coefficients and Confidence Intervals for Task Time and Error Rate with Random Sampled Ranked Data Set

25th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.01322       -1.08961       0.02667
Decreased Error Rate        0.59428        0.49440       1.68603

50th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.01306       -0.20349      -0.00453
Decreased Error Rate        0.70995        0.62035       1.04829

75th Percentile            Coefficients   Lower Bound   Upper Bound
Intercept                  -0.05303       -0.07802      -0.01486
Decreased Error Rate        0.97791        0.85146       1.01547

The infinite confidence intervals were eliminated by increasing the sample size to 30. Table 5.10 shows the comparisons with the ordinary least squares regression.


Table 5.10 - Comparison of Ordinary Least Squares Regression and Quantile Regression

Decrease in Task Time       OLS Regression   QR at 25th quantile   QR at 50th quantile   QR at 75th quantile
Intercept                   -0.19047         -0.01322              -0.01306*             -0.05303*
Decreased Error Rate         1.27180          0.59428*              0.70995*              0.97791*

*: Quantile regression coefficient significantly different from zero at 5%
+: Quantile regression coefficient significantly different from the OLS coefficient at 5%

The coefficients are significantly different from zero, except for the intercept at the 25th quantile. However, they are not significantly different from the ordinary least squares coefficients, since the confidence interval of each quantile regression coefficient includes the corresponding ordinary least squares coefficient. Thus, the assumptions of the quantile regression were not satisfied. Figure 5.2 also demonstrates that there is no significant difference, as the limits of the ordinary least squares regression intersect the limits of the quantile regression.


Figure 5.2- Comparison of the Quantile Regression and Ordinary Least Square Regression Coefficients for Task Time and Error Rate

Conclusion

The objective of this study was to determine whether the interaction model of sales and traffic rate had different slopes in different quantile regions of increased sales (and similarly for the task time and error rate interaction model). The results showed slight changes in the slopes of the sales-traffic and task-time-error-rate interactions across quantiles, but these changes are not statistically significant. Therefore, the interaction models determined in Aydin and Beruvides (2014a) and utilized in the goal programming models in Aydin and Beruvides (2014b) are valid. There is no basis for using different interaction models for different quantiles of sales or different quantiles of task time, as the quantile regression assumptions were not satisfied.

References

Aydin, B., & Beruvides, M. G. (2014a). A methodology for data collection and analysis to assist cost justification of usability. Working paper, Industrial Engineering Department, Texas Tech University, Lubbock, Texas, 28p.

Aydin, B., & Beruvides, M. G. (2014b). Goal programming model for usability assumptions. Working paper, Industrial Engineering Department, Texas Tech University, Lubbock, Texas, 17p.

Bias, R. G., & Mayhew, D. J. (1994). Cost-justifying usability. San Diego, CA: Academic Press.

Buchinsky, M. (1998). Recent advances in quantile regression models: A practical guideline for empirical research. Journal of Human Resources, 33(1).

Katchova, A. [econometricsacademy]. (2013, February 24). Quantile regression [Video file]. Retrieved from http://www.youtube.com/watch?v=P9lMmEkXuBw

Koenker, R. (2005). Quantile regression. Cambridge: Cambridge University Press.

Mayhew, D., & Mantei, M. (1994). A basic framework for cost-justifying usability engineering. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 9-44). San Diego, CA: Academic Press.


CHAPTER 6 GENERAL CONCLUSIONS AND FUTURE WORK Research Features and Outcomes Usability can be described as the incorporation of effectiveness, efficiency and user satisfaction into a product. It is defined in ISO 9241-11 guidelines as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” (1998, p. 2). Effectiveness is the completeness and accuracy in achieving specific goals (Qusenbery, 2001). Efficiency is the speed in which users can complete the tasks (Qusenbery, 2001). In ISO guidelines, not only time but other resources such as cognitive resources are also included in the efficiency definition. On the other hand, satisfaction is how pleasant the product is to use (Nielsen, 1993). These characteristics make usability investments vital for better software development and sale. If software is released without significant usability efforts, it is way more challenging and costly to implement usability improvements in the post-release stage (Bias and Mayhew 1994, Bias and Mayhew, 2005). The literature review in this research shows the significant impacts usability offers for various types of companies in terms of the benefit receivers and benefit providers (See Table 2.11 through Table 2.15). Moreover, different cost justification models were thoroughly demonstrated in the literature review section. Each cost justification model recommends techniques of calculating investment costs, and how to estimate potential benefits. After costs are calculated and benefits are estimated, they are compared with financial ratios. Models differ in the financial ratios recommended for use. 
The one most frequently used in these models is the cost-benefit ratio, which should actually not be used by private organizations (Aydin, Millet, and Beruvides, 2011), because the cost-benefit ratio is an economic technique for non-profit/governmental organizations, not for profit-generating organizations (Newnan, Lavelle, and Eschenbach, 2009). These ratios are further used to compare against other investment alternatives for the organization. The problem lies in the accuracy of the benefit part of the financial ratio. The expected levels of increase or decrease in those benefits as a result of a usability investment are unknown, and researchers have been estimating these levels subjectively in cost justifications of usability efforts. However, even small changes in these levels affect the cost justification results significantly. Thus, the main objective of this research was to create a decision tool that helps usability practitioners and/or engineering managers determine the levels of increase or decrease in usability benefits. Two goal programming optimization models were developed as this decision tool. Developing these goal programming models required the incorporation of data collection, Pareto analysis, theoretical distribution fitting, regression analysis, Bootstrap resampling, and finally quantile regression. A State-of-the-Art Matrix analysis study (Aydin et al., 2011) was the foundation of this research, as it revealed insights into the problems faced in this research domain. In the data collection stage, various types of articles were found, including but not limited to case studies, empirical studies, review-based papers, and opinion-based papers. The studies to include in and exclude from further review were identified, the deciding factor being the availability of numerical data: studies providing numerical data on the levels of increase or decrease in any usability benefit type were retained. The final step of data collection was converting each study’s results into a common statistical index. The common point was selected to be the ‘percentage change before and after the usability investment’; using percentages dealt with the problem that each study had a different cost basis.
A total of 156 percentage change data points were collected. The hypothetical data were removed, leaving 139 real percentage data points for the usability benefits shown in Table 6.1.


Table 6.1 - Number of Hypothetical and Real Data

Benefit                     Hypothetical   Real   Total
Increased sales                  4          44     48
Decreased task time              2          39     41
Decreased error rate             4          30     34
Increased website traffic        0          19     19
Decreased training time          7           7     14

The benefits listed in Table 6.1 were selected via Pareto analysis. Over twenty different usability benefit types were found during data collection, so the goals of the research needed to be prioritized. The Pareto analysis indicated that increased sales, decreased task time, decreased error rate, increased website traffic, and decreased training time were the benefits to focus upon. Decreased training time was subsequently eliminated from further analysis, since it had no significant correlation with the other benefits; the goal programming model required interaction constraints, so a benefit with no significant correlation with other benefits was not needed at this point. The next step was finding theoretical distributions for the data table samples. The sample data were analyzed with the ARENA input analyzer module, and the distributions fitting the sample data for each benefit listed in Table 6.1 are provided in this research. Kolmogorov-Smirnov tests were used to conclude the significance of these distributions. This was followed by determining possible correlations between the benefits: nonparametric Spearman’s rho tests revealed positive correlations between increased sales and increased website traffic, and between decreased task time and decreased error rate. The regression models for each correlated pair were then found by the ordinary least squares regression method, for use in the goal programming models.
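Spearman's rho for untied data can be sketched in a few lines: rank both samples and apply the classical formula. The paired percentage changes below are hypothetical, chosen only to exercise the computation.

```python
# Spearman's rho (no ties) as used to screen for correlated usability
# benefits. The paired traffic/sales percentage changes are made up.

def ranks(values):
    """Rank positions 1..n of each value (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

traffic = [0.10, 0.25, 0.40, 0.55, 0.90]
sales   = [0.12, 0.20, 0.55, 0.60, 1.10]
print(spearman_rho(traffic, sales))  # 1.0 -> perfectly concordant ranks
```

A rho near +1 is the pattern reported for sales vs. traffic and for decreased error rate vs. decreased task time; a real analysis would also compute a p-value and handle ties.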


A key parameter needed for the goal programming models is the set of target values. Target values are the right-hand sides of the soft constraints in the model, and there is tolerance for positive and negative deviations from them. To find these target values, the Bootstrap sampling method was utilized: each usability benefit sample was resampled to create thousands of Bootstrap samples, the lower quantiles of these Bootstrap samples were collected, and their descriptive statistics were analyzed. With significantly low standard errors achieved, the lower quantiles found were assumed to be good estimators of the population values. Two goal programming models were developed using these target values and interaction constraints: (1) the increased sales and increased website traffic model, and (2) the decreased error rate and decreased task time model. The details of the models are provided in Chapter IV. Since the ordinary least squares regression method was used, a constant slope was assumed for the regression line; to observe the behavior of the regression lines in different quantiles, the quantile regression method was utilized. The main limitation of this research was that most companies prefer to keep the numerical benefits achieved by usability investments confidential. Data availability was a serious issue, which was addressed by utilizing Bootstrap sampling. However, there are still very important usability benefits that could not be analyzed due to the lack of numerical data; reduced maintenance need, reduced support calls, increased user satisfaction, and reduced employee turnover are only some of these. This research also required some assumptions. Reduction in a negative variable was assumed to be a benefit; for instance, reduction in error rate was assumed to be a benefit. Studies provided data with differing time bases.
Therefore, effective rate formulas were used to convert all data to annual terms. The next section explains the findings and what they may indicate for usability practitioners and/or engineering managers.
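The effective-rate conversion mentioned above compounds a periodic percentage change into an annual one. The 2% monthly figure below is a hypothetical example, not a value from the data set.

```python
# Converting a periodic percentage change to an effective annual rate,
# the kind of time-basis normalization described in the text.

def effective_annual(rate_per_period, periods_per_year):
    """Effective annual rate: (1 + r)^m - 1."""
    return (1 + rate_per_period) ** periods_per_year - 1

# e.g. a hypothetical 2% monthly sales increase compounds to ~26.8% per year
annual = effective_annual(0.02, 12)
print(round(annual, 4))  # 0.2682
```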


Findings

This dissertation is a collection of three research studies. The research questions for the first study, stated in the first chapter, were addressed as follows:

• The benefits of software usability are shown in Table 6.2.

Table 6.2 - Benefits of Software Usability Investments

Usability benefits mentioned in the usability literature: increased sales, decreased error rate, increased completion rate, improved ease of placing orders, decreased task time, reduced training time, reduced learning time, reduced maintenance and support cost, reduced development cost, reduced development time, reduced employee turnover, increased user satisfaction and loyalty, enhanced purchasing experience, increased returning customers, reduced drop-off rates, increased traffic (number of visitors), liability and litigation deterrence, increased buy-to-look ratio, decreased abandoned shopping carts, decreased use of the call-back button; decreased installation, configuration, and deployment costs; increased lead generation, reduced supervisory assistance and inspection, decreased orientation time, decreased number of interruptions, and more time spent on site.


• From the benefits listed in Table 6.2, increased sales, decreased error rate, decreased task time, increased website traffic, and decreased training time were analyzed further, as the amount of numerical data available prevented focusing on the rest of the list.

• Dependencies were detected between (1) increased sales and increased website traffic, and (2) decreased error rate and decreased task time.

• The theoretical distributions are summarized in Table 6.3.

Table 6.3 - Summary of the Theoretical Distributions

Benefit                  Distribution   Kolmogorov-Smirnov p-value   Square Error
Increased sales          Exponential    > 0.15                       0.002253
Decreased task time      Beta           > 0.15                       0.015179
Decreased error rate     Beta           > 0.15                       0.013359
Increased traffic rate   Gamma          > 0.15                       0.009704

The lower quantiles of these benefits are summarized in Table 6.4.

Table 6.4 - Summary List of the Population Lower Quantiles for Usability Benefits

Benefit                     Percentage Change
Increased sales             29%
Increased website traffic   41%
Decreased error rate        48%
Decreased task time         26%

The research questions for the second study, stated in the first chapter, were addressed as follows:


• Two goal programming optimization models were developed by determining target values and constraints.

• The target values were found by estimating the lower quantiles of Bootstrap samples with significantly small standard errors, indicating that these lower quantile values were good estimators of the population values.

• The goal programming models minimize the maximum deviations from the target values and allow users to assign different weights to under- and over-achievement of the target values depending on the company type. For instance, a vendor company selling usable software can assign high penalty weights to underachievement of target values.
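The minimax goal-programming idea described above can be sketched as follows. The sales and traffic targets echo the lower quantiles in Table 6.4, but the candidate outcomes and penalty weights are hypothetical; the dissertation's actual models (Chapter IV) have more variables and interaction constraints.

```python
# Sketch of minimax goal programming: pick the plan whose largest weighted
# deviation from the benefit targets is smallest. Targets echo Table 6.4;
# candidates and weights are illustrative assumptions.

targets = {"sales": 0.29, "traffic": 0.41}

# Vendor-style weighting: underachievement is penalized more heavily.
w_under, w_over = 2.0, 1.0

def max_weighted_deviation(outcome):
    devs = []
    for goal, target in targets.items():
        d = outcome[goal] - target
        devs.append(w_under * -d if d < 0 else w_over * d)
    return max(devs)

candidates = [
    {"sales": 0.25, "traffic": 0.45},
    {"sales": 0.30, "traffic": 0.40},
    {"sales": 0.35, "traffic": 0.30},
]
best = min(candidates, key=max_weighted_deviation)
print(best)  # the plan whose worst weighted deviation is smallest
```

Changing `w_under` and `w_over` changes which candidate wins, which is the sensitivity to company type reported in the second study's findings.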

The research questions for the third study, stated in the first chapter, were addressed as follows:

• Quantile regression analysis was utilized to examine differences in the relationship models for different quantiles of the dependent variable. The impact of an increase in website traffic for companies that had low increases in sales after a usability investment versus companies that had high increases in sales was found not to be significantly different.

The following conclusions were reached based on the hypotheses listed in the first chapter of this dissertation:

• The null hypothesis for the first study was that all usability benefits are independent of each other. Since significant positive correlations were found, the null hypothesis was rejected.

• The null hypothesis for the second study was that company type does not affect the focus of usability investments, thus yielding the same usability benefit assumption results. However, the sensitivity analysis showed that assigning equal or different weights to under- and over-achievement of the sales, traffic, error rate, and task time goals leads to different outcomes. Thus, the null hypothesis was rejected.

• The null hypothesis of the third study was that a percentage point change in a usability benefit affects a dependent usability benefit equally regardless of the quantile of that dependent variable. The quantile regression results showed no significant differences based on quantiles; thus, we failed to reject the null hypothesis.

These findings indicate:

• The interaction between sales and website traffic is the same in large-scale and small-scale companies that invest in usability.

• The interaction between error rate and task time is the same in large-scale and small-scale companies that invest in usability.

• No matter how much focus is placed on usability to increase sales, as long as the traffic rate cannot be increased beyond about 40%, the highest increase in sales will be around 180%.

• No matter how much focus is placed on usability to decrease task time, as long as error rates cannot be decreased by more than about 49%, the highest decrease in task time will be around 44%.
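As a rough consistency check, plugging the stated caps into the OLS lines as printed in Tables 5.4 and 5.8 approximately reproduces the ceilings above. Which fitted line the dissertation actually used for these figures is an assumption here; treat this as a back-of-the-envelope check, not new analysis.

```python
# Back-of-the-envelope check of the stated ceilings, using coefficients as
# printed in Tables 5.4 and 5.8 (assumed to be the lines behind the figures).

def ols_predict(intercept, slope, x):
    return intercept + slope * x

# Sales vs. traffic: OLS column of Table 5.4, traffic capped near 41%.
sales_cap = ols_predict(-0.2597, 5.0704, 0.41)
print(round(sales_cap, 2))  # 1.82, i.e. roughly a 180% sales increase

# Task time vs. error rate: 50th-quantile line of Table 5.8, error cap ~49%.
task_cap = ols_predict(0.2241538, 0.4461538, 0.49)
print(round(task_cap, 2))   # 0.44, i.e. roughly a 44% task-time decrease
```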

It should be noted that these values are to be used in cost justification of usability models to find the overall benefit. The benefits are then compared against investment costs using the financial ratio recommended by a specific cost justification model; these cost justification models were illustrated in the second chapter of this dissertation. Thus, the key contribution of the decision tool developed in this dissertation is that it provides the usability benefit expectations needed to estimate the benefits part of the cost justification of usability models. The subjectivity in the estimation of these assumptions in existing models was addressed by using scientific methods in this dissertation.

Future Research

This research is the first optimization modeling effort in the usability cost justification research area. Due to data limitations, it was restricted to the sales, traffic, error rate, and task time related usability benefits. Further studies should focus on the remaining usability benefits mentioned in Table 6.2; for instance, reduced maintenance need and reduced support need are very important benefits for software development organizations. Moreover, the goal programming models should be extended with additional usability benefits, which requires finding the regression models for these benefits. Another future research idea is to design an empirical study that includes the level of usability as a factor: keeping the usability improvement level (which indicates the budget allocated for usability) as an independent variable, the changes in usability benefits could be analyzed via ANOVA. The empirical results should then be compared against the optimization model results.

References

Aydin, B., Millet, B., & Beruvides, M. G. (2011). The state-of-the-art matrix analysis for cost-justification of usability research. In Proceedings of the American Society for Engineering Management: Winds of Change, Staking Paths to Explore New Horizons, Lubbock, Texas (pp. 221-229). Red Hook, NY: Curran Associates, Inc.

Bias, R. G., & Mayhew, D. J. (1994). Cost-justifying usability. San Diego, CA: Academic Press.


Bias, R. G., & Mayhew, D. J. (2005). Cost-justifying usability: An update for the Internet age. San Francisco, CA: Morgan Kaufmann.

Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design (8th ed.). Upper Saddle River, NJ: Pearson Education, Inc.

Newnan, D., Lavelle, J., & Eschenbach, T. (2009). Engineering economic analysis (10th ed.). Austin, TX: Engineering Press.

Nielsen, J. (1993). Usability engineering. San Diego, CA: Academic Press, Inc.

Qusenbery, W. (2001). What does usability mean: Looking beyond ‘ease of use’. Retrieved from http://wqusability.com/articles/more-than-ease-of-use.html. Accessed April 2012.



BIBLIOGRAPHY
Aerts, E., & Gilis, K. (n.d.). Return on investment for usability - ROI. Retrieved from http://www.agconsult.be/en/publications/articles/roi.asp.
Åborg, C., Sandblad, B., Gulliksen, J., & Lif, M. (2003). Integrating work environment considerations into usability evaluation methods: the ADA approach. Interacting with Computers, 15(3), 453-471.
Abras, C., Maloney-Krichmar, D., & Preece, J. (2004). User-centered design. In W. Bainbridge (Ed.), Encyclopedia of Human-Computer Interaction. Thousand Oaks, CA: Sage Publications.
ACM Special Interest Group on Computer-Human Interaction Curriculum Development Group. (1992). ACM SIGCHI curricula for human-computer interaction. New York: Hewett, T. T., Baecker, R., Card, S., Carey, T., Gasen, J., Mantei, M., Perlman, G., Strong, G., & Verplank, W.
Andre, T. S., Hartson, H. R., Belz, S. M., & McCreary, F. A. (2001). The user action framework: a reliable foundation for usability engineering support tools. International Journal of Human-Computer Studies, 54, 107-136.
Aydin, B., Millet, B., & Beruvides, M. G. (2011). The state-of-the-art matrix analysis for cost-justification of usability research. Paper presented at the Proceedings of the American Society for Engineering Management: Winds of Change, Staking Paths to Explore New Horizons, Lubbock, Texas (pp. 221-229). Red Hook, NY: Curran Associates, Inc.
Aydin, B., & Beruvides, M. G. (2014a). A methodology for data collection and analysis to assist cost justification of usability. Working paper, Industrial Engineering Department, Texas Tech University, Lubbock, Texas, 28p.
Aydin, B., & Beruvides, M. G. (2013b). Goal programming model for usability benefit expectations. Working paper, Industrial Engineering Department, Texas Tech University, Lubbock, Texas, 19p.


Aykin, N. (1994). Software reuse: a case study on cost-benefit of adopting a common software development tool. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 177-202). San Diego, CA: Academic Press.
Bak, J., Nguyen, K., Risgaard, P., & Stage, J. (2008, October 18-22). Obstacles to usability evaluation in practice: a survey of software development organizations. Paper presented at the Proceedings of the Nordic Conference on Human-Computer Interaction (NordiCHI): Building Bridges, New York. doi: 10.1145/1463160.1463164.
Bevan, N. (1995, October 23). User-centred design of man-machine interfaces. In Proceedings of the Man-Machine Interfaces for Instrumentation: IEE Colloquium (pp. 7/1-7/6). doi: 10.1049/ic:19951086.
Bevan, N. (2001). International standards for HCI and usability. International Journal of Human-Computer Studies, 55, 533-552.
Bevan, N. (2005). Cost benefits evidence and case studies. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 575-600). San Francisco, CA: Morgan Kaufmann.
Bias, R. G., & Mayhew, D. J. (1994). Cost-justifying usability. San Diego, CA: Academic Press.
Bias, R. G., & Karat, C-M. (2005). Justifying cost-justifying usability. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 1-16). San Francisco, CA: Morgan Kaufmann.
Bias, R. G., & Mayhew, D. J. (2005). Cost-justifying usability: An update for the Internet age. San Francisco, CA: Morgan Kaufmann.
Bias, R. G., & Reitmeyer, P. B. (1995). Usability support inside and out. Interactions, 2(2), 29-32.
Black, J. (2002). Usability is next to profitability. Business Week Online. Retrieved from http://www.businessweek.com/technology/content/dec2002/tc2002124_2181.htm.


Bloomer, S., & Croft, R. (1997). Pitching usability to your organization. Interactions, 4(6), 18-26.
Bloomer, S., Croft, R., & Kieboom, H. (1997). Strategic usability: introducing usability into organisations. Tutorial presented at the Proceedings of the Conference on Human Factors in Computing Systems: CHI 97, Atlanta, GA.
Bloomer, S., Croft, R., & Wolfe, S. (1998). Selling usability to organisations: strategies for convincing people of the value of usability. Tutorial presented at the Proceedings of the Conference on Human Factors in Computing Systems CHI 98: Making the Impossible Possible, Los Angeles, CA (pp. 153-154). New York, NY: Association for Computing Machinery, ACM.
Bloomer, S., & Wolfe, S. (1999). Successful strategies for selling usability into organisations. Tutorial presented at the Proceedings of CHI '99 Extended Abstracts on Human Factors in Computing Systems: The CHI is the Limit, Pittsburgh, PA (pp. 114-115). New York, NY: Association for Computing Machinery, ACM.
Boehm, B. W. (1981). Software engineering economics. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Boehm, B. W. (1984). Software engineering economics. IEEE Transactions on Software Engineering, SE-10(1), 4-21.
Bohmann, K. (2000). Valuing usability programs. Retrieved from http://www.bohmann.dk/articles/valuing_usability_programs.html.
Boivie, I. (2005). A fine balance: Addressing usability and users' needs in the development of IT systems for the workplace (Doctoral dissertation). Retrieved from http://uu.diva-portal.org/smash/record.jsf?pid=diva2:167026.
Boivie, I., Åborg, C., Persson, J., & Löfberg, M. (2003). Why usability gets lost or usability in in-house software development. Interacting with Computers, 15(4), 623-639.
Bond, C. (2007). Time is money: impacts of usability at a call center. Ergonomics in Design: The Quarterly of Human Factors Applications, 15(17), 17-22.


Bossert, J. L. (1991). Quality function deployment: A practitioner's approach. Milwaukee, WI: ASQC Quality Press.
Braun, K. (2002, July). Measuring usability ROI at eBay. Presented at UPA Annual Conference panel presentation: Measuring Return on Investment for Usability. Orlando, FL.
Brinck, T. (2005). Return on goodwill: Return on investment for accessibility. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 385-414). San Francisco, CA: Morgan Kaufmann.
Brown, J. S. (1991). Research that reinvents the corporation. Harvard Business Review, 69(1), 102-110.
Brownlie, D. (1991). A case analysis of the cost and value of marketing information. Marketing Intelligence & Planning, 9(1), 11-18.
Buchinsky, M. (1998). Recent advances in quantile regression models: A practical guideline for empirical research. Journal of Human Resources, 33(1).
Bulkeley, W. (1992, November 2). Study finds hidden costs of computing. The Wall Street Journal, p. B4.
Cajander, Å., Gulliksen, J., & Boivie, I. (2006). Management perspectives on usability in a public authority: a case study. In Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles (pp. 38-47). doi: 10.1145/1182475.1182480.
Carter, J. (1999). Incorporating standards and guidelines in an approach that balances usability concerns for developers and end users. Interacting with Computers, 12(2), 179-206.
Catarci, T., Matarazzo, G., & Raiss, G. (2002). Driving usability into the public administration: The Italian experience. International Journal of Human-Computer Studies, 57(2), 121-138.
Compeau, D., Olfman, L., Sei, M., & Webster, J. (1995). End-user training and learning. Communications of the ACM, 38(7), 24-26.


Conover, W. J. (2007). Practical nonparametric statistics (3rd ed.). John Wiley & Sons, Inc.
Cope, M. E., & Uliano, K. C. (1995). Cost-justifying usability engineering: A real world example. In Proceedings of the Human Factors and Ergonomics Society 39th Annual Meeting: Designing for the Global Village (pp. 263-267). Santa Monica, CA: Human Factors and Ergonomics Society.
Corry, M. D., Frick, T. W., & Hansen, L. (1997). User-centered design and usability testing of a website: An illustrative case study. ETR&D, 45(4), 65-76.
Cox, M. E., O'Neal, P., & Pendley, W. L. (1994). UPAR analysis: Dollar measurement of a usability indicator for software products. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 145-158). San Diego, CA: Academic Press.
Crow, D. (2005). Valuing usability for startups. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 165-184). San Francisco, CA: Morgan Kaufmann.
Donahue, G. M. (2001). Usability and the bottom line. IEEE Software, 18(1), 31-37.
Donahue, G. M., Weinschenk, S., & Nowicki, J. (1999). Usability is good business. IEEE Software, 18(1), 31-37.
Dray, S., & Karat, C. (1994). Human factors cost justification for an internal development project. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 111-122). San Diego, CA: Academic Press.
Dray, S. (1995). The importance of designing usable systems. Interactions, 2(1), 17-20.
Dray, S., Karat, C., Rosenberg, D., Siegel, D., & Wixon, D. (2005). Is ROI an effective approach for persuading decision-makers of the value of user-centered design? In Proceedings of the CHI EA '05 Extended Abstracts on Human Factors in Computing Systems (pp. 1168-1169). doi: 10.1145/1056808.1056865.


Efron, B. (1979). Bootstrap methods: another look at the jackknife. The Annals of Statistics, 1-26.
Efron, B., & Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science, 1(1), 54-77.
Ehrlich, K., & Rohn, J. (1994). Cost-justification of usability engineering: A vendor's perspective. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 73-110). San Diego, CA: Academic Press.
Engaging Networks (n.d.). Effectiveness and ROI. Retrieved from http://www.engagingnetworks.net/uk/campaigning/effectiveness-and-roi.
Fabris, P. (1999, April 20). You think tomaytoes, I think tomahtoes. Retrieved from http://www.cio.com.au/article/95314/think_tomaytoes_think_tomahtoes/.
Fellenz, C. B. (1997). Introducing usability into smaller organizations. Interactions, 4(5), 29-33.
Ferracioli, F., & Camargo-Brunetto, M. D. O. (2011). Using and integrating discount usability engineering in the life cycle of a healthcare web application. In Proceedings of the Health Ambiant Information Systems Workshop, HamIS (Vol. 2011).
Fiotakis, G., Raptis, D., & Avouris, N. (2009). Considering cost in usability evaluation of mobile applications: Who, where and when. In Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part I, INTERACT '09 (pp. 231-234). doi: 10.1007/978-3-642-03655-2_27.
Francis, H. (2006). Tales from the trenches: Getting usability through corporate. In P. Sherman (Ed.), Usability success stories: How organizations improve by making easier-to-use software and web sites (pp. 41-61). Burlington, VT: Ashgate Publishing Company.
Franke, R. H., & Kaul, J. D. (1978). The Hawthorne experiments. American Sociological Review, 43, 623-643.
Gallaway, G. (1981). Response times to user activities in interactive man/machine computer systems. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 754-758). Rochester, NY: Human Factors and Ergonomics Society.
Gibbs, W. W. (1997, July). Taking computers to task. Scientific American Magazine, 7/97, 1-9. Retrieved from http://www.strassmann.com/pubs/sciam/.
Goggins, R. W., Spielholz, P., & Nothstein, G. L. (2008). Estimating the effectiveness of ergonomics interventions through case studies: Implications for predictive cost-benefit analysis. Journal of Safety Research, 39, 339-344.
Good, M., Spine, T. M., Whiteside, J., & George, P. (1986). User-derived impact analysis as a tool for usability engineering. In Proceedings of Human Factors in Computing Systems, CHI '86 (pp. 241-246). New York, NY: ACM.
Gulliksen, J., Boivie, I., & Göransson, B. (2006). Usability professionals--current practices and future development. Interacting with Computers, 18(4), 568-600.
Gulliksen, J., Boivie, I., Persson, J., Hektor, A., & Herulf, L. (2004). Making a difference: a survey of the usability profession in Sweden. In Proceedings of the 3rd Nordic Conference on Human-Computer Interaction (pp. 207-215). doi: 10.1145/1028014.1028046.
Hadley, B. (2004). Clean, cutting-edge UI design cuts McAfee's support calls by 90%. Retrieved from http://www.pragmaticmarketing.com/resources/clean-cutting-edge-ui-design-cuts-mcafees-support-calls-by-90.
Hahn, J., Kauffman, R. J., & Park, J. (2002, January). Designing for ROI: toward a value-driven discipline for e-commerce systems design. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences, HICSS (pp. 2663-2672). IEEE.
Hanson, K., & Castleman, W. (2006). Tracking ease-of-use metrics: A tried and true method for driving adoption of UCD in different corporate cultures. In P. Sherman (Ed.), Usability success stories: How organizations improve by making easier-to-use software and web sites (pp. 15-39). Burlington, VT: Ashgate Publishing Company.


Harrison, M., Henneman, R., & Blatt, L. (1994). Design of a human factors cost-justification tool. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 203-239). San Diego, CA: Academic Press.
Hendrick, H. W. (1996). The ergonomics of economics is the economics of ergonomics. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 1-10). Philadelphia, PA: Human Factors and Ergonomics Society.
Hendrick, H. W. (2003). Determining the cost-benefits of ergonomics projects and factors that lead to their success. Applied Ergonomics, 34, 419-427.
Henneman, R. L. (2005). Marketing usability. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 143-164). San Francisco, CA: Morgan Kaufmann.
Hilgard, E. R., & Bower, G. H. (1966). Theories of learning. New York: Appleton-Century-Crofts.
Hirsch, S., Fraser, J., & Beckman, S. (2004). Leveraging business value: How ROI changes user experience. Report for Adaptive Path. Retrieved from http://www.adaptivepath.com/uploads/documents/apr-005_businessvalue.pdf.
Hocko, J. (2011). User-centered design in procured software implementations. Journal of Usability Studies, 6(2), 60-74.
Holzinger, A. (2005). Usability engineering for software developers. Communications of the ACM, 48(1), 71-74.
Hoque, A. Y., & Lohse, G. L. (1999). An information search cost perspective for designing interfaces for electronic commerce. Journal of Marketing Research, 36, 387-394.
House, C. H., & Price, R. L. (1991). The return map: tracking product teams. Harvard Business Review, 69(1), 92.
Human Factors International (n.d.). Measuring ROI for usability. Retrieved from http://www.humanfactors.com/downloads/roi.pdf.


Human Factors International. (2005). Usability: A business case [White paper]. Retrieved from http://www.humanfactors.com/BusinessCase.asp.
Hynes, C. (1999). HFI helps Staples.com boost repeat customers by 67%. Retrieved from http://www.humanfactors.com/downloads/documents/staples.pdf.
Johnson, R. A. (1994). Miller and Freund's probability and statistics for engineers. Englewood Cliffs, NJ: Prentice-Hall.
Karat, C. (1989). Iterative usability testing of a security application. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 273-277). Denver, CO: Human Factors Society.
Karat, C. (1990a). Cost-benefit analysis of usability engineering techniques. In Proceedings of the Human Factors Society 34th Annual Meeting (pp. 839-843). Orlando, FL: Human Factors Society.
Karat, C. (1990b). Cost-benefit analysis of iterative usability testing. In Proceedings of the IFIP TC13 3rd International Conference on Human-Computer Interaction (pp. 839-843). Amsterdam, The Netherlands: North-Holland Publishing Co.
Karat, C. (1992). Cost-justifying human factors support on development projects. Human Factors Society Bulletin, 35(11), 1-8.
Karat, C. (1993a). Usability engineering in dollars and cents. IEEE Software, 10(3), 88-89.
Karat, C. M. (1994). A business case approach to usability cost justification. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 45-70). San Diego, CA: Academic Press.
Karat, C. M. (2005). A business case approach to usability cost justification for the web. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 103-142). San Francisco, CA: Morgan Kaufmann.
Karat, C. M., & Lund, A. (2005). The return on investment in usability of web applications. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 297-316). San Francisco, CA: Morgan Kaufmann.


Keinonen, T. (2010). Protect and appreciate - Notes on the justification of user-centered design. International Journal of Design, 4(1), 17-27.
Kieboom, H., & Howard, S. (1995). Communicating the value of usability engineering with cost-benefit analysis techniques. In H. Hasan & C. Nicastry (Eds.), HCI: A light into the future. Proceedings of OZCHI '95, Wollongong, Australia, November 27-30, 1995 (pp. 307-309). Downer, ACT, Australia: CHISIG Australia.
Koenker, R. (2005). Quantile regression. Cambridge University Press.
Lamont, S. (2003). Case study: Successful adoption of a user-centered design approach during the development of an interactive television application. In Proceedings of the 1st European Conference on Interactive Television (EuroITV). Retrieved from http://www.euroitv.org/index.php.
Landauer, T. K. (1995). The trouble with computers. Cambridge, MA: MIT Press.
Lazar, J., Ratner, J., Jacko, J. A., & Sears, A. (2004). User involvement in the web development process: Methods and cost-justification. Retrieved from http://scholar.googleusercontent.com/scholar?q=cache:3KC7RbqWiqEJ:scholar.google.com/+User+Involvement+in+the+Web+Development+Process:+Methods+and+Cost-Justification&hl=en&as_sdt=0,44.
Lee, D. S., & Pan, Y. H. (2007). Lessons from applying usability engineering to fast-paced product development organizations. In Usability and Internationalization: HCI and Culture (pp. 346-354). Springer Berlin Heidelberg.
Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design (8th ed.). Upper Saddle River, NJ: Pearson Education, Inc.
Lund, A. (1997). Another approach to justifying the cost of usability. Interactions, 4(3), 48-56.
Lund, A. (2007). Usability ROI as a strategic tool. UX User Experience Magazine, 6(2), 8-10.


Mack, Z., & Sharples, S. (2009). The importance of usability in product choice: A mobile phone case study. Ergonomics, 52(12), 1514-1528.
MacIntyre, F., Estep, K. W., & Sieburth, J. M. (1990). Cost of user-friendly programming. Journal of Forth Application and Research, 6(2), 103-115.
Mantei, M. M., & Teorey, T. J. (1988). Cost/benefit analysis for incorporating human factors in the software lifecycle. Communications of the ACM, 31(4), 428-439.
Marcus, A. (2005). User interface design's return on investment: Examples and statistics. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 17-40). San Francisco, CA: Morgan Kaufmann.
Mattila, H. (2009). Increasing websites' effectiveness by improving usability. Retrieved from https://aaltodoc.aalto.fi/handle/123456789/3132.
Mauro, C. (1994). Cost-justifying usability in a contractor company. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 123-142). San Diego, CA: Academic Press.
Mauro, C. (2002). Professional usability testing and return on investment as it applies to user interface design for web-based products and services. Retrieved from http://www.ergonomia.ca/doc/UsabTesting&ROI.pdf.
Mauro, C. L. (2005). Usability science: Tactical and strategic cost justifications in large corporate applications. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 265-296). San Francisco, CA: Morgan Kaufmann.
Mayhew, D. (1994). Cost-benefit analysis of upgrading computer hardware. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 159-176). San Diego, CA: Academic Press.
Mayhew, D., & Mantei, M. (1994). A basic framework for cost-justifying usability engineering. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability (pp. 9-44). San Diego, CA: Academic Press.


Mayhew, D. J. (2005). Keystroke level modeling as a cost justification tool. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 465-488). San Francisco, CA: Morgan Kaufmann.
Mayhew, D. J. (2005). Cost justification of usability engineering for international web sites. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 359-384). San Francisco, CA: Morgan Kaufmann.
Mayhew, D. J. (2008). Make a stronger business case for usability. Retrieved from http://download.techsmith.com/morae/docs/casestudies/making-case-for-usability.pdf.
Mayhew, D. J., & Tremaine, M. M. (2005). A basic framework. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 41-102). San Francisco, CA: Morgan Kaufmann.
McCurdie, T., Taneva, S., Casselman, M., Yeung, M., McDaniel, C., Ho, W., & Cafazzo, J. (2012). mHealth consumer apps: The case for user-centered design. Biomedical Instrumentation & Technology, 46(2).
Mrazek, D., & Rafeld, M. (1992). Integrating human factors on a large scale: Product usability champions. In P. Bauersfeld, B. Penny, & G. Lynch (Eds.), Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference, Monterey, California, June 3-7, 1992 (pp. 565-570). New York, NY: Association for Computing Machinery, ACM.
Mysková, R. (2010). Evaluation of information system contributions to company and valuation of its return rate from managers' point of view. In M. E. Mastorakis, V. Mladenov, & Z. Bojkovic (Eds.), ICCOM'10 Proceedings of the 14th WSEAS International Conference on Communications, Corfu Island, Greece, July 23-25, 2010 (pp. 229-233). Stevens Point, WI: World Scientific and Engineering Academy and Society (WSEAS).
Nas, T. (1996). Cost-benefit analysis: Theory and application. Thousand Oaks, CA: Sage Publications, Inc.


Newnan, D. G., Eschenbach, T. G., & Lavelle, J. P. (2009). Engineering economic analysis. New York, NY: Oxford University Press.
Nielsen, J. (1990). Big paybacks from discount usability engineering. IEEE Software, 7(3), 107-108.
Nielsen, J. (1993). Usability engineering. San Diego, CA: Academic Press, Inc.
Nielsen, J. (1994). Usability inspection methods. In Proceedings of CHI '94 Conference Companion on Human Factors in Computing Systems (pp. 413-414). doi: 10.1145/259963.260531.
Nielsen, J. (1998, October 18). Failure of corporate websites. Alertbox. Retrieved from http://www.useit.com/alertbox/981018.html.
Nielsen, J., & Gilutz, S. (2003). Usability return on investment. Fremont, CA: Nielsen Norman Group.
Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. In Proceedings of INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems (pp. 206-213). doi: 10.1145/169059.169166.
Nielsen, J., & Mack, R. L. (1994). Usability inspection methods. New York, NY: John Wiley and Sons.
Nielsen, J. (2008, January 22). Usability ROI declining, but still strong. Retrieved from http://www.nngroup.com/articles/usability-roi-declining-but-still-strong/.
Nielsen, J., Berger, J. M., Gilutz, S., & Whitenton, K. (2013). Return on investment (ROI) for usability (4th ed.). Nielsen Norman Group.
Norman, D. A. (1988). The design of everyday things. New York, NY: Currency Doubleday.
ISO/IEC. (1998). Ergonomic requirements for office work with visual display terminals (VDTs) - Part 14: Menu dialogues, ISO/IEC 9241-14: 1998 (E).
Olsen Jr., D., Arnowitz, J., Braiterman, J., Skidgel, J., Dykstra-Erickson, E., & Evenson, S. (2002). Design Expo 2. In Proceedings of CHI EA '02: CHI '02 Extended Abstracts on Human Factors in Computing Systems (pp. 700-701). doi: 10.1145/506443.506553.
Oreskovic, A. (2001, March 5). Testing 1-2-3. Internet/Web/Online Service Information. Standard Media International.
Ortiz, J. L. (2003). A case study in the ROI of online virtual tours. Retrieved from http://www.panorama.nu/ROI.pdf.
Ostrander, E. (2000, October). Usability testing of documentation has many benefits of unknown value. STC Usability SIG Newsletter: Usability Interface, 7(2). Retrieved from http://www.stcsig.org/usability/newsletter/0010pilotstudy.html.
Palmer, J. W. (2002). Web site usability, design, and performance metrics. Information Systems Research, 13(2), 151-167.
Patrocinio, J., Concejero, P., Prieto, S., Rodríguez, J., & Merino, D. (2008). A methodology for the calculation of usability economics applied to voice activated services. Retrieved from http://www.icin.biz/files/2008papers/Session7B-4.pdf. Accessed December 2011.
Peterson, M. R. (2007). UX increases revenue: Two case studies. UX User Experience Magazine, 6(2), 4-6.
Phase 5 UX (n.d.). Assessment of web site ROI for a Canadian FI. Retrieved from http://phase5ux.com/who-we-are/case-studies/case-study-5/.
Pick, T. (2007, August 21). The ROI of website redesigns per Forrester. Retrieved from http://webmarketcentral.blogspot.com/2007/08/roi-of-website-redesigns-per-forrester.html.
Plesko, G. (2009, November 29). Designing superior shopping experiences. UX Magazine, Article No. 432. Retrieved from http://uxmag.com/articles/designing-superior-shopping-experiences.
Pressman, R. (1992). Software engineering: a practitioner's approach. New York, NY: McGraw-Hill Companies Inc.


Quesenbery, W. (2001). What does usability mean: Looking beyond 'ease of use'. Retrieved from http://wqusability.com/articles/more-than-ease-of-use.html. Accessed April 2012.
Ragsdale, C. T. (1998). Spreadsheet modeling and decision analysis (2nd ed.). Cincinnati, OH: South-Western College Publishing.
Rajanen, M. (2003). Usability cost-benefit models - different approaches to usability benefit analysis. In Proceedings of the 26th Information Systems Research Seminar in Scandinavia (IRIS26), Haikko, Finland. Retrieved from http://www.tol.oulu.fi/users/mikko.rajanen/iris26_rajanen.pdf.
Rajanen, M., & Iivari, N. (2007). Usability cost-benefit analysis: how usability became a curse word? In Proceedings of Human-Computer Interaction - INTERACT 2007 (pp. 511-524). doi: 10.1007/978-3-540-74800-7_47.
Rajanen, M., & Iivari, N. (2010). Traditional usability costs and benefits: fitting them into open source software development. In Proceedings of ECIS 2010. Retrieved from http://is2.lse.ac.uk/asp/aspecis/20100073.pdf.
Rajanen, M., & Jokela, T. (2004). Analysis of usability cost-benefit models. In Proceedings of ECIS 2004. Retrieved from http://is2.lse.ac.uk/asp/aspecis/20040136.pdf.
Rajper, S., Shaikh, A. W., Shaikh, Z. A., & Amin, I. (2012). Usability cost benefit analysis using a mathematical equation. Emerging Trends and Applications in Information Communication Technologies, Communications in Computer and Information Science, 281, 349-360.
Rauch, T., & Wilson, T. (1995). UPA and CHI surveys on usability processes. ACM SIGCHI Bulletin, 27(3), 23-25.
Rauterberg, G. (2004). Cost justifying usability (CoU): state of the art overview 2003. Retrieved from http://www.idemployee.id.tue.nl/g.w.m.rauterberg/lecturenotes/0H420/Cost%20justifying%20Usability.pdf.


Reamplify (n.d.). Why do you need a website redesign? What return on investment does a redesign bring? Retrieved from http://reamplify.com/why-redesign/.
Rhodes, J. (2000). Usability can save your company. Retrieved from http://www.webword.com/moving/savecompany.html.
Rhodes, J. (2001). A business case for usability. Retrieved from http://www.webword.com/moving/businesscase.html.
Rigsbee, S., & Fitzpatrick, W. B. (2012). User-centered design: A case study on its application to the Tactical Tomahawk Weapons Control System. Johns Hopkins APL Technical Digest, 31(1), 76.
Rittenberg, L., & Tregarthen, T. (2009). Markets, maximizers, and efficiency. In Principles of Microeconomics (pp. 139-166). Nyack, NY: Flat World Knowledge, Inc.
Rohn, J. A. (1995). Incorporating and justifying usability engineering in constantly changing organizations. In Annual Proceedings of the Silicon Valley Ergonomics Conference & Exposition, ErgoCon '95, San Jose, California, May 22-24, 1995 (pp. 52-56).
Rohn, J. A. (2005). Cost-justifying usability in vendor companies. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 185-214). San Francisco, CA: Morgan Kaufmann.
ROI improvement by redesigning your website. (2012, May 31). Retrieved from http://www.webmasterstudio.com/blog/web-design/roi-improvement-by-redesigning-your-website.aspx.
Rosenberg, D. (1989). A cost benefit analysis for corporate user interface standards: what price to pay for a consistent "look and feel"? In J. Nielsen (Ed.), Coordinating user interfaces for consistency (pp. 21-34). Boston, MA: Academic Press.
Rosenberg, D. (2004). The myths of usability ROI. Interactions, 11(5), 22-29.
Sauro, J. (2004). Premium usability: getting the discount without paying the price. Interactions, 11(4), 30-37.


Sauro, J. (2006). Quantifying usability. Interactions, 13(6), 20-21.
Sauro, J., & Kindlund, E. (2005). A method to standardize usability metrics into a single score. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 401-409). doi: 10.1145/1054972.1055028.
Schaffer, E. (n.d.). How to measure ROI for usability. Retrieved from http://www.humanfactors.com/downloads/roi.pdf.
Schneider, M. F. (1985). Why ergonomics can no longer be ignored. Office Administration and Automation, 46(7), 26-29.
Sherman, P. (2006). Usability success stories: How organizations improve by making easier-to-use software and web sites. Burlington, VT: Ashgate Publishing Company.
Sherman, P., & Hura, S. L. (2006). Collaborating with change agents to make a better user interface. In P. Sherman (Ed.), Usability success stories: How organizations improve by making easier-to-use software and web sites (pp. 177-190). Burlington, VT: Ashgate Publishing Company.
Singh, K., & Xie, M. (2008). Bootstrap: a statistical method. Unpublished working paper, Rutgers University. Retrieved from http://www.stat.rutgers.edu/home/mxie/stat586/handout/Bootstrap1.pdf.
Streiner, D. L. (1996). Maintaining standards: differences between the standard deviation and standard error, and when to use each. Canadian Journal of Psychiatry, 41, 498-502.
Ssemugabi, S., & De Villiers, R. (2007). A comparative study of two usability evaluation methods using a web-based e-learning application. In Proceedings of SAICSIT '07 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries (pp. 132-142). doi: 10.1145/1292491.1292507.
Strøm, G. (2004). How usability practitioners can get their suggestions implemented in industrial software design. Retrieved from http://www.georg.dk/Billeder/usab-imple.pdf.


Tahir, M., & Rozells, G. (2007). Growing a business by meeting (real) customer needs. In R. Carol & J. James (Eds.), User-centered design stories (pp. 99-110). San Francisco, CA: Morgan Kaufmann.
Theusabilityblog (2011, August 12). Can you quantify your site redesign's ROI? Retrieved from http://usability.com/2011/08/12/this-is-post-2-for-testing/.
Theofanos, M. F., & Mulligan, C. (2006). Redesigning the United States Department of Health and Human Services web site. In P. Sherman (Ed.), Usability success stories: How organizations improve by making easier-to-use software and web sites (pp. 63-92). Burlington, VT: Ashgate Publishing Company.
Think Brownstone Web Site (2013). The importance (and ROI) of great user experience design. Retrieved from http://www.thinkbrownstone.com/the-importance-and-roi-of-great-user-experience/.
Travis, D. (2006, November 2). Selling usability to your manager. Retrieved from http://www.userfocus.co.uk/articles/sellingusability.html.
Travis, D. (2009). Usability ROI: 2 measures that will justify any design change. Retrieved from http://www.userfocus.co.uk/articles/usability_roi.html.
Tudor, L. (1998). Human factors: does your management hear you? Interactions, 5(1), 16-24.
Turner, W. C. (2011). A strategic approach to metrics for user experience designers. Journal of Usability Studies, 6(2), 52-59.
Turner, C. W., Lewis, J. R., & Nielsen, J. (2006). Determining usability test sample size. International Encyclopedia of Ergonomics and Human Factors, 3, 3084-3088.
Uldall-Espersen, T. (2005, September). Benefits of usability work - does it pay off? In International COST294 Workshop on User Interface Quality Models (p. 7).
Ungar, J., & White, J. (2008, April). Agile user centered design: enter the design studio - a case study. In CHI '08 Extended Abstracts on Human Factors in Computing Systems (pp. 2167-2178). ACM.


Usability First. (2012). Case studies. Forekar Lab. Retrieved from http://www.usabilityfirst.com/about-usability/usability-roi/case-studies.

Usability Sciences: Lee.com e-commerce case study. (2009, June 24). Retrieved from http://www.usabilitysciences.com/sites/default/files/pdf/Lee_Case_Study_609_0.pdf.

Usability Sciences: Search success jumps to 60% and improves re-purchase intent. (2008, February 20). Retrieved from http://www.usabilitysciences.com/print/enterprise-search-success-jumps-to-60from-12-when-attitudinal-analytics-recommendations-based-on-visit-intentimplemented.

User Interface Engineering. (2001). Are the product lists on your site losing sales? Retrieved from www.uie.com/publications/whitepapers/PogoSticking.pdf.

User Vision Web Site: Aberdeenshire Council Case Study. (2004). Retrieved from http://www.uservision.co.uk/resources/case-studies/aberdeenshire-council/.

User Vision Web Site: Britannia Web Site Case Study. (2004). Retrieved from http://www.uservision.co.uk/resources/case-studies/britannia-building-society/.

User Vision Web Site: British Council Case Study. (2008). Retrieved from http://www.uservision.co.uk/resources/case-studies/british-council/.

User Vision Web Site: Careers Scotland Case Study. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/careers-scotland/.

User Vision Web Site: Emirates Airline Case Study. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/ema/.

User Vision Web Site: KeyPoint Case Study. (2013). Retrieved from http://www.uservision.co.uk/resources/case-studies/keypoint-technologies/.

User Vision Web Site: Monster.co.uk Case Study. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/monstercouk/.

User Vision Web Site: National Australia Bank Case Study. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/national-australia-bank/.


User Vision Web Site: Sainsbury's Bank. (2009). Retrieved from http://www.uservision.co.uk/resources/case-studies/sainsburys-bank/.

User Vision Web Site: Scottish Legal Aid Board. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/scottish-legal-aid-board/.

User Vision Web Site: Scottish Widows. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/scottish-widows/.

User Vision Web Site: SQA Case Study. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/scottish-qualificationsauthority-sqa/.

User Vision Web Site: Student Loans Company. (2006). Retrieved from http://www.uservision.co.uk/resources/case-studies/student-loans-company/.

User Vision Web Site: Think Property. (n.d.). Retrieved from http://www.uservision.co.uk/resources/case-studies/think-property/.

Vessey, I. (1994). The effect of information presentation on decision making: A cost-benefit analysis. Information & Management, 27(2), 103-119.

Weinschenk, S. (2005). Usability: A business case. White paper.

Wilson, C., & Rosenbaum, S. (2005). Categories on return on investment and their practical implications. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 215-263). San Diego, CA: Morgan Kaufmann.

Wixon, D., & Jones, S. (1995). Usability for fun and profit: A case study of the design of DEC RALLY version 2. In C. Lewis & P. Polson (Eds.), Human-Computer Interface Design: Success Cases, Emerging Methods, and Real World Context (pp. 3-35). New York, NY: Springer-Verlag.

Wobben, S. (2010). Return on investment of usability. Retrieved from http://www.stefanwobben.com/usability/return-on-investment-of-usability/.

Wong, C. Y. (2000). Justifying a usability engineering method for web advertising. In Proceedings of the joint conference on APCHI 2000 (4th Asia Pacific Conference on Human Computer Interaction), ASEAN Ergonomics, (6th S. E.


Asian Ergonomics Society Conference), Singapore, November 27-December 1, 2000 (pp. 386-391).

Wong, C. Y. (2003). Applying cost-benefit analysis for usability evaluation studies on an online shopping website. In 19th International Symposium on Human Factors in Telecommunication.

Yousefi, P., & Yousefi, P. (2011). Cost justifying usability: A case study at Ericsson (Master's thesis, School of Computing, Blekinge Institute of Technology, Sweden). Retrieved from http://www.bth.se/fou/cuppsats.nsf/all/913e0ab5e235dafdc1257863002cf756/$file/MCS-2011-01.pdf.
