Information System success evaluation: What, How and When? A consolidated framework for practitioners

Paper Genre: Theory Development

Abstract

Previous information systems (IS) research has established that there is no single success factor that determines the success of Information and Communication Technology (ICT); rather, many factors interact with each other at organizational and individual levels. In addition, ICT has a lifecycle whose different stages add complexity to IS success. Identifying which factors weigh more at different stages is necessary to better understand IS success and to provide guidelines for practitioners at each lifecycle phase. This work aims to contribute to theory development by consolidating different frameworks into a single framework to assess IS success by establishing what to measure, how to measure, and when to measure. Some frameworks have adopted a socio-technical approach, recognizing the interdependence of technology, work tasks, people and organizational structure in IS success. Therefore, these frameworks are worth examining and consolidating periodically to ensure this approach is included, and to evaluate their applicability to real-life situations. Moreover, the resulting single framework serves as a guideline for practitioners in considering important factors for ICT tools. With a good understanding of such factors, practitioners can detect user dissatisfaction in time, and interventions can be made accordingly.

Keywords: IS success, IS lifecycle, ICT tools, Acceptance models, User acceptance

For submission to European Journal of Information Systems


Introduction

Organizations invest millions of dollars in Information and Communication Technologies (ICT), and this tendency is bound to continue; such technologies are permeating our work environments more and more. Practitioners (i.e. software designers and developers, IT departments and managers) need ways to predict and ensure the success of companies’ ICT investments, yet they sometimes struggle with the design, development and implementation of ICT tools. Even though past efforts in the literature have succeeded in specifying what Information Systems (IS) variables affect IS success, the question of how to measure these variables is still an object of discussion and study, as shown in e.g. Petter et al (2008) and Gable et al (2003).

One generally established way to explore the “how” of measuring IS success at the individual user level is to evaluate user acceptance; there are several acceptance models in the IS literature for this purpose (e.g. Davis, 1989; Goodhue, 1998; Lewis, 1995; Venkatesh et al, 2003). Such models provide operative constructs determining “how” to measure IS success in one dimension. Most models are spread out among many different studies and each has a high degree of specificity regarding acceptance characteristics (e.g. perceived ease of use, user satisfaction, usability); we believe that a combination of acceptance characteristics from the different acceptance models could provide a suitable multidimensional instrument for measuring IS success variables at the individual level. On top of that, ICT has a lifecycle, whose different stages (pre-implementation, implementation and post-implementation) have their own particular challenges, and some factors cannot actually be measured until some time has passed and the users have been exposed to the implemented technology. This means that an additional challenge of measuring IS success is to measure the right factor at the right time – when to measure.

Consequently, understanding and evaluating IS success at the individual level requires three things: first, that we identify the IS success variables, which has already been done in previous literature (e.g. Petter et al, 2013). Second, that we measure the IS success variables through a combination of acceptance models found in the IS literature; with them we are able to measure several IS success variables, enabling a multidimensional approach. And finally, that we consider the IS lifecycle and classify which IS success variables are relevant to evaluate at which stage. Identifying which IS success variables have greater relative importance at the different lifecycle stages is necessary to better understand IS success and provide guidelines for practitioners at each lifecycle stage. This is particularly valuable for the early stages, when it is easier and less costly to fix issues. With a good understanding of the IS success variables, practitioners can detect user dissatisfaction in time and intervene accordingly.

The presented work aims to contribute to theory development by consolidating current literature frameworks to define what IS success variables are relevant to measure, determine how to measure these variables at the individual user level using different user acceptance models, and identify when to measure these variables across the IS lifecycle stages, in the case of a Web application development scenario. The outcome is a consolidated framework, which practitioners can use as a guideline in working to ensure IS success. This framework is applied to two industrial case studies where a Web application prototype was evaluated at the pre-implementation stage.

Research methodology

To achieve IS success there are different bodies of research that practitioners can take advantage of. Unfortunately, these research findings are scattered across different studies, which differ in time and are proposed by different authors, making it difficult for practitioners to be sure that they have covered the necessary aspects when evaluating ICT tools during design, development and implementation in practice. Thus, there is a need to solve this jigsaw puzzle found in the literature in order to converge on a broader framework that unites and relates the theory required to achieve IS success. In order to know what theory is required from the IS literature, we need to understand what practitioners face when dealing with ICT as they strive for IS success. Consequently, the methodological premise of this paper stems from a real-life case scenario where a practitioner (one of the authors) wanted to ensure the IS success of an ICT tool (a Web application), and searched the literature for suitable instruments to do so.

Abductive approach

Practitioners rarely have the luxury of starting off a process with a purely theoretical preparation phase. Quite frequently, the fact of an ICT implementation is in itself the trigger for the practitioner to consult available best practices and literature, in order to devise instruments with which to do the job. In this case, the need to ensure IS success was triggered by two case projects where an ICT tool (Web-based) was designed and implemented into a production system, and the developer – one of the researchers – was tasked with finding a suitable theoretical construct to bring into the empirical world. When research is the result of moving between theory and empirics, it is characterized as abductive. According to Ong (2012), an abductive research strategy aims to derive technical concepts and theories from lay concepts and interpretations of social life, on the basis of social actors’ motives and accounts. The purpose of this is to “Develop a theory and test it iteratively” or to “develop an interpretation or construct and test a theory” (Ibid, p. 424; the quoted explanation is in turn attributed to Blaikie (2000), p. 101).

An extension of the abductive approach is systematic combining (Dubois and Gadde, 2002): “Systematic combining is a process where the theoretical framework, empirical fieldwork, and case analysis evolve simultaneously, and it is particularly useful for development of new theories. Systematic combining is discussed in terms of two processes; the first process is matching theory and reality, and the second process is directing and redirecting the study. These processes affect and are affected by four factors: what is going on in reality (empirical world), available theories, the case that gradually evolves, and the analytical framework.” (Dubois and Gadde, 2002, p. 554) Thus, the systematic combining approach gives us an ideal grounded “framework” with which to solve the postulated practitioners’ problem.

Just as in Dubois and Gadde’s work (2002), our research has been through different phases. We first started by looking for answers in the literature to the question of how to measure the IS success of a Web application prototype. A similar case study was found in the literature, where an ICT was developed to support operators on the shop floor. In this case, Tjahjono (2009) used two acceptance models as instruments of measurement of IS success. However, acceptance models study acceptance characteristics (e.g. usability, user satisfaction, perceived usefulness, intention to use, etc.), and using one model would not be enough. Rather, a combination of different acceptance models would be more comprehensive in letting us measure “IS success”. Hence, four acceptance models were combined to elaborate our instrument of measurement, a 20-item questionnaire.

Then, we went to the empirical world and performed two industrial case studies (studies A and B), where we used the 20-item questionnaire to measure the “IS success” of the Web applications. Besides the questionnaire evaluation, we performed observations and interacted with the users. While we were on-site testing the ICT tools, we discovered issues with the IT infrastructure of the companies. We concluded from trying the theory in practice that something more than acceptance is needed to achieve IS success. “(…) the search for complementary theories continued. It was guided by the findings of the empirical world” (Dubois and Gadde, 2002, p. 553). We extended our search in the literature to broaden the analytical framework, asking: what else do we need to evaluate to ensure IS success? The most well-accepted solution we found was the DeLone and McLean model (1992, 2003), from here on referred to in this paper as the D&M model. This IS success model has evolved over time, and has been extended by its authors by integrating more IS success variables. The current model (2013) specifies both independent and dependent IS success variables.

Just like any other product, IS has a lifecycle, and IS success may change over time. Some IS success variables may only be present at later lifecycle stages, and users may change their behavior and role while using the technology. A drawback of evaluating IS success at one lifecycle stage only (e.g. implementation) and not the rest is that we may be missing valuable information. Paraphrasing what Dubois and Gadde (2002) said to suit our case: what happens in the second lifecycle stage can change our interpretation of the first or third lifecycle stage. The conclusions regarding the characteristics of a development process are time-dependent. Hence, some guidance on when to measure IS success variables becomes crucial for practitioners.

In a systematic combining approach, there is a back-and-forth process of searching the literature and performing empirical studies for answers. The present work (Figure 1) comprises studies of three theories (model-frameworks) to answer different questions found at different times of our research process. The first theory addressed the identification of the IS success variables. Knowing what leads to IS success, the second logical step would be to know how to measure IS success; therefore the second search for theory focused on finding a way or ways to measure IS success. And the third theory concentrated on the IS lifecycle phases and IS success. The outcome of this work is a consolidated framework that combines three different models found in the literature to ensure a lifecycle-sensitive and holistic instrument for IS success evaluation. This framework can be seen as an expansion of the previous ones found in the literature.


Figure 1 The research process and outcomes

Results

Theoretical theme 1: what leads to IS success?

The IS field is undergoing rapid development with a tendency towards increased complexity, especially in IS development (ISD) (e.g. Xia and Lee, 2003; Benbya and McKelvey, 2006). This complexity is partly caused by a new appreciation for aspects beyond the technological characteristics (the dependent variables). Some frameworks presented below have evolved from a purely technological approach to a socio-technical one, acknowledging the interplay and influence of work tasks, people and organizations (the determinants, or independent variables) on IS success. As this socio-technical systems perspective becomes recognized, it needs to be taken into account in evaluations. Therefore, the theories and models that emerge are worth examining and consolidating periodically to ensure that this perspective is included, and to evaluate their applicability to real-life evaluation situations. What to measure regarding IS success variables has already been addressed extensively in the literature, and the authors of this paper find that consistency and convergence can be found between certain seminal frameworks.


Evolution of IS success according to the D&M model

The first search for theory performed during this research process aimed to identify variables related to IS success. Since the D&M model and its updates proved to be a substantial foundation for many contributions on IS success variables, its evolution is summarized below, in Figure 2:

[Figure: timeline showing Leavitt’s organizational change model (Leavitt, 1965), the D&M IS success model with 6 variables (DeLone & McLean, 1992), the updated D&M model with 7 variables (DeLone & McLean, 2003), and the updated D&M model combined with Leavitt’s organizational model (Petter et al., 2013) – the current state of the art for IS success.]

Figure 2 IS success (D&M) model evolution

In 1992 DeLone and McLean stated that “there is no single determinant for IS success, but rather it should be treated as a multidimensional construct” and proposed an IS success model where six interrelated variables for IS success were specified: system quality, information quality, use, user satisfaction, individual impact and organizational impact. This model has gained traction as a useful framework to understand IS success (Petter et al, 2013). Later on, DeLone and McLean (2003) presented an updated version of their model, now consisting of seven variables: system quality, information quality, service quality, use, intention to use, user satisfaction, and net benefits (individual and organizational impact). The main differences between their first model and the updated one are that they 1) added service quality; 2) split use into use and intention to use; and 3) merged individual impact and organizational impact into net benefits. In both models, IS success was seen as a dependent variable from the technology perspective only, i.e. without considering the effects that people, structure, and tasks may have on the success of the technology (Petter et al, 2013). In 2013, Petter et al addressed this one-sidedness by combining the updated D&M model with a model from the Organization Theory field, Leavitt’s diamond of organizational change model (Leavitt, 1965).

According to this model, every organization consists of four main interdependent sub-systems: structure, technology, task and people, and because of this interdependence a change in one of them will have an impact on the others. Thus, tasks, structure and people are considered determinants of technology success. Besides technology (already covered by the updated D&M model), Petter et al identified a total of 43 variables, which fall into five proposed categories of determinant (independent) variables of IS success: task, user, social, project, and organizational characteristics. It is important to note that when mapped onto Leavitt’s model (Figure 3), the user and social categories fall into Leavitt’s people sub-system, while the project and organizational characteristics categories belong to the structure sub-system. In their study, 15 of the 43 variables were considered “important success factors that consistently have demonstrated to influence IS success across many studies” (Petter et al, 2013), as also shown in Figure 3. In summary, Petter et al (2013) propose that there are two types of IS success variables: independent variables, i.e. determinants of IS success, and dependent IS success variables.

Figure 3 Determinants or variables of overall IS success (Petter et al, 2013, p. 45)


Selecting IS success variables to evaluate a Web app (prototype) at the pre-implementation stage

As argued by Matera et al (2006), Web-based applications (Web apps) have influenced several domains, granting access to information and services to a variety of users with different characteristics and backgrounds. Web apps have several advantages: they are easy to develop and update, and they provide ubiquity, as they can reach more users and devices compared to on-site dedicated applications. If Web apps provide useful information that is easy to find, and are organized in a way that is accessible and easy to navigate, users of these Web apps will continue to use them (Matera et al, 2006). A fundamental characteristic of Web app success is usability, and for that reason this aspect has increasingly received attention (Matera et al, 2006). Nielsen (1994, p. 26) describes usability as: “(…) not a single, one-dimensional property of a user interface. Usability has multiple components and is traditionally associated with five characteristics: learnability, efficiency, memorability, few errors and user satisfaction”. All these usability attributes relate to the dependent IS success variable system quality.

Because the Web app will be developed as a high-fidelity prototype, it will be both possible and appropriate to evaluate usability (i.e. system quality), which is broken down into learnability, few errors, efficiency, and user satisfaction. However, evaluating just usability would constitute an incomplete analysis, as it is merely one of the earlier defined dependent IS success variables. Therefore, other variables need to be added to create a better understanding of what causes IS success at the early stages of Web app design.

To identify which IS success variables are relevant to evaluate at the pre-implementation phase, it helps to know the purpose of the Web app and its context. In the two practitioner cases underlying this research, the Web app aimed to help operators (novices and experts) in the case companies with their learning process and with standardized work. Thus, task compatibility, information quality and net benefits were important to evaluate. To evaluate the net benefits at the individual level, we measured the cycle time, in order to establish the impact of the Web app on the operators’ performance. Since the operators were to be exposed to a new learning tool for the first time, it was also appropriate to know their attitude towards technology and intention to use.

Theoretical theme 2: how to measure IS success?

In an attempt to measure IS success at the individual level (i.e. from the user’s perspective), researchers have developed instruments of measurement (Petter et al, 2008) known as acceptance models. These models have tended to focus on a single IS success variable at a time and/or on specific characteristics of user acceptance. Thus, a combination of acceptance models seems necessary to support the measurement of IS success at the individual level and with consideration of socio-technical interdependencies. Gable et al (2003) and Petter et al (2008) expressed a vision of a multidimensional approach for measuring the dependent IS success variables, some of which can be measured with acceptance characteristics. Still, which acceptance models to use is not determined by Gable et al (2003) or by Petter et al (2008). With this second theoretical study we aimed to find which acceptance models, and which of their specific characteristics, could help us measure IS success in the development of a Web application.

Acceptance models

According to Adell (2009): “A number of different models is used in the information technology area which today includes one of the most comprehensive research bodies on acceptance and use of new technology”. The selection of which acceptance models to use depends on the object of study and its function. In this case, it was an ICT tool – a Web application prototype developed to support operators on the shop floor. Four main acceptance models were singled out to evaluate this object of study. The rationale behind the selection of the acceptance models used is based on a study performed by Tjahjono (2009), where an ICT tool to support operators was tested. Tjahjono used two acceptance models, the Technology acceptance model (TAM) (Davis, 1989) and the Task technology fit model (TTF) (Goodhue, 1998), to evaluate IS success. An empirical setting and approach similar to ours had been described in Tjahjono’s (2009) study, supporting the suitability of these two particular acceptance models for our purposes. However, the acceptance characteristics measured with these models only covered some of the IS success variables in Petter et al’s (2013) extension of the D&M model, and an important factor in terms of Web development highlighted by Matera et al (2006) was overlooked: usability. To fill the gaps of the previous acceptance models, two additional ones were selected from the literature: the IBM computer usability satisfaction questionnaires (IBM-US) (Lewis, 1995) and the forthcoming Strömberg-Karlsson acceptance scale (SKAS) (Strömberg and Karlsson, personal communication, January 2014). The four models were later consolidated into a 20-item evaluation questionnaire featuring Likert scales. Table 1 presents the four acceptance models and their characteristics.

Table 1 The TAM, TTF, IBM-US, and SKAS acceptance models

| Acceptance Model | Description | Acceptance Characteristic | Acronym |
|---|---|---|---|
| Technology Acceptance Model (TAM) | TAM concentrates on predicting and explaining use. Two factors, PEOU and PU, are theorized to be fundamental determinants of system use (Davis, 1989). | Perceived ease of use | PEOU |
| | | Perceived usefulness | PU |
| Task technology fit (TTF) | TTF seeks to measure the effectiveness of an IS. “(…) the strongest link between information systems and performance impacts will be due to the correspondence between task needs and system functionality” (Goodhue, 1998, p. 107). | The right data | RDAT |
| | | The right level of detail | RDET |
| | | Accuracy | ACC |
| | | Compatibility | COMP |
| | | Locatability | LOCA |
| | | Accessibility | ACESS |
| | | Flexibility | FLEX |
| | | Meaning | MEAN |
| | | Assistance | ASSIS |
| | | Ease of use of hardware and software | EUH&S |
| | | Systems reliability | SYREL |
| | | Currency | CURR |
| | | Training | TRAIN |
| | | Authorization | AUTH |
| | | Presentation | PRES |
| | | Confusion | CONF |
| IBM computer usability satisfaction (IBM-US) | IBM-US focuses on user satisfaction and usability towards delivering usable products (Lewis, 1995). | System usefulness | SYSUSE |
| | | Information quality | INFOQUAL |
| | | Interface quality | INTERQUAL |
| | | Overall | OVERALL |
| Strömberg-Karlsson acceptance scale (SKAS) | “SKAS aims to test the user’s acceptance of a product in order to gain a deeper understanding of the user’s willingness and satisfaction with using, and buying, the product” (Strömberg and Karlsson, personal communication, January 2014). | Trust and Control | T&C |
| | | Perceived Benefit | PB |
| | | Compliance | C |

Thanks to the scale congruence between these four acceptance models, it was possible to elaborate a 20-item questionnaire with questions selected from each model to evaluate the Web application tool.
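To make the consolidation concrete, the sketch below shows one way such a combined questionnaire could be encoded and scored per IS success variable. It is a minimal illustration in Python, not part of the original instrument: the item-to-variable mapping is abridged from Table 2, the function and variable names are our own, and the reverse-coding of the negatively worded item 18 is an assumption.

```python
from statistics import mean

# Abridged extract of the Table 2 mapping: item -> (acceptance model,
# IS success variable). The full instrument has 20 items.
ITEM_MAP = {
    1: ("TAM", "System quality"),
    2: ("TAM", "Task compatibility"),
    10: ("IBM-US", "Information quality"),
    18: ("TTF", "Information quality"),
    19: ("Tjahjono", "Intention to use"),
    20: ("IBM-US", "User satisfaction"),
}

# Item 18 ("misses important information") is negatively worded, so we
# assume it is reverse-coded before aggregation.
REVERSED = {18}

def score_by_variable(responses: dict[int, int]) -> dict[str, float]:
    """Aggregate 7-point Likert answers (1 = strongly agree, 7 = strongly
    disagree) per IS success variable by averaging the mapped items."""
    by_var: dict[str, list[int]] = {}
    for item, answer in responses.items():
        if item not in ITEM_MAP:
            continue
        value = 8 - answer if item in REVERSED else answer
        by_var.setdefault(ITEM_MAP[item][1], []).append(value)
    return {var: mean(vals) for var, vals in by_var.items()}

# Fabricated answers from one respondent, for demonstration only.
print(score_by_variable({1: 2, 2: 1, 10: 3, 18: 6, 19: 2, 20: 2}))
```

With the 1 = strongly agree convention, lower aggregated scores indicate a more positive evaluation of the corresponding variable.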


Evaluating IS success of a Web application prototype at the pre-implementation phase

At this stage we had enough theory to build an instrument of measurement to evaluate the Web app prototype at its pre-implementation stage. We determined that seven IS success variables (Figure 4) were possible to measure at this stage.

Figure 4 The seven variables appropriate for IS success evaluation at the pre-implementation stage (adapted figure from Petter et al (2013))

The instrument of measurement: the 20-item questionnaire

Six of the seven identified IS success variables – task compatibility, attitude towards technology, system quality, information quality, intention to use and user satisfaction – were evaluated through the use of the four selected acceptance models (TAM, TTF, IBM-US and SKAS), whose acceptance characteristics were used to build an evaluation questionnaire (while the seventh variable, net benefits, was measured in the form of cycle time). A 20-item questionnaire was elaborated using a seven-level Likert scale (where 1 means strongly agree and 7 strongly disagree). SKAS uses a different scale convention (with seven steps between adjective pairs), but it was adapted to the same agree–disagree convention as the other items. Table 2 shows the mapping of the 20 questionnaire items against the six consolidated D&M IS success variables, the four acceptance models and their acceptance characteristics, and the usability characteristics. It is important to note that item 4 had no acceptance model attached to it; it was included to evaluate the Web app’s case-specific purpose as a tool for learning support. Also, item 19 was taken from the questionnaire elaborated by Tjahjono (2009), as it was deemed suitable for our case-specific purposes, and also has no association to a specific acceptance model.

Table 2: The 20-item questionnaire used to evaluate the Web applications

| Item | Description | IS success variable | Acceptance model | Acceptance characteristic | Usability characteristic |
|---|---|---|---|---|---|
| 1 | It is simple to use the Web application | System quality | TAM | Perceived ease of use | |
| 2 | I can effectively complete this task using this Web application | Task compatibility | TAM | Perceived usefulness | |
| 3 | Using the Web application would improve my job performance | Task compatibility | TAM | Perceived usefulness | |
| 4 | Using the Web application helps me learn the task easily | System quality | NONE | | |
| 5 | The task I am doing becomes easier when I use the Web application | Task compatibility | TAM | Perceived usefulness | |
| 6 | I feel comfortable using the Web application | Attitude toward technology | IBM-US | System usefulness | |
| 7 | It was easy to learn to use the Web application | System quality | TAM | Perceived ease of use | Learnability |
| 8 | I would find the Web application useful in my job | Task compatibility | TAM | Perceived usefulness | |
| 9 | Whenever I make a mistake using the Web application, I recover easily and quickly | System quality | IBM-US | Interface quality | Few errors |
| 10 | It is easy to find the information in the Web application | Information quality | IBM-US | Information quality | Efficiency |
| 11 | The information provided with the Web application is easy to understand | Information quality | IBM-US | Information quality | |
| 12 | The information is effective in helping me complete this task | Information quality | IBM-US | Information quality | |
| 13 | The organization of information on the Web application is clear | Information quality | IBM-US | Information quality | |
| 14 | The Web application is useful for this task | Task compatibility | SKAS | Perceived benefit | |
| 15 | Navigating in the Web application is easy | System quality | SKAS | Perceived effort | Efficiency |
| 16 | The level of detail of the information is enough to carry out this task | Information quality | TTF | Right level of detail | |
| 17 | Understanding how I should act based on the information is easy | Information quality | SKAS | Perceived effort | |
| 18 | The Web application misses important information that would be useful to carry out this task | Information quality | TTF | Right data | |
| 19 | I intend to use the Web application if I have a problem | Intention to use | Tjahjono (2009) | Intention of use | |
| 20 | Overall I am satisfied with the Web application | User satisfaction | IBM-US | Overall | User satisfaction |


Two industrial case studies at the pre-implementation stage

Two industrial exploratory case studies, study A and study B, are presented to illustrate the evaluation of the seven identified IS success variables for a Web app prototype at the pre-implementation stage. The net benefits at this early stage were measured through the cycle time, as a way to quantify the users’ learning process in carrying out the task. However, the measurement of the net benefits requires longitudinal studies to collect and guarantee substantial evidence; this study should therefore be considered the first measurement in a longitudinal study. The participating companies in both studies belonged to Swedish industry. In these studies two company-specific Web app prototypes were developed, both of which aimed to teach novice operators assembly or process steps, and to standardize work.

Exploratory Study A: Metal Cutting Industry

In study A, the Web app prototype was developed to support a final assembly process at a company in the metal cutting industry. This Web app contained assembly instructions to teach novice operators how to assemble products. The assembly instructions were developed with the help of expert operators who revised the content and gave feedback, thereby improving the quality of the information. Thirteen novice users tested the Web application prototype in this study. The testing procedure consisted of the users assembling the product with the aid of the Web app while the cycle time was measured. All novice users were able to assemble the product successfully. Afterwards, they answered the 20-item questionnaire, in which they were able to give comments for each item, which in turn provided feedback to improve the Web app. For instance, users mentioned wanting bigger pictures, and that the pace of the video was too fast. The user evaluation gave evidence that it is possible to assess the effectiveness of the work instructions (information quality) and the Web application (system quality/usability). The questionnaire and the study are reported in greater detail in Authors (2014).

Exploratory Study B: Process Industry

In study B, the Web app prototype supported a process that consists of preparing a chemical solution at a company that manufactures urological products. The developed prototype contained standardized work procedures in written and video form. Experienced technicians at the company created the video instructions. Five operators tested the Web app – two were experienced technicians and three were novice users. All the users who tested the application performed the chemical preparation task successfully. The evaluation of the Web application was carried out in the same way as in study A: first, the users prepared the chemical solution with the aid of the Web app while the cycle time was measured, and later they answered the 20-item questionnaire. They also gave comments on how to improve the application. Examples of feedback include that the experienced technicians observed differences between the written instructions and the video instructions – the latter did not follow the same sequence as the written instructions. One user mentioned that the quality of the videos was poor. More extensive information regarding study B can be found in Authors (2015).
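Because cycle time serves here as the proxy for net benefits and for the operators’ learning, one conventional way to summarize repeated measurements is to fit a log-linear learning curve, T_n = T_1 · n^b. The sketch below is our own illustration of that idea and is not part of the reported studies; the cycle times in it are fabricated for demonstration.

```python
import math

def fit_learning_curve(cycle_times: list[float]) -> tuple[float, float]:
    """Least-squares fit of T_n = T_1 * n**b in log-log space.

    Returns (T_1, b); b < 0 indicates the operator is getting faster.
    """
    xs = [math.log(n + 1) for n in range(len(cycle_times))]
    ys = [math.log(t) for t in cycle_times]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    t1 = math.exp(mean_y - b * mean_x)
    return t1, b

# Fabricated cycle times (seconds) for one operator's successive trials.
t1, b = fit_learning_curve([420.0, 360.0, 330.0, 310.0, 300.0])
print(f"first-cycle estimate: {t1:.0f} s, learning exponent: {b:.2f}")
```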

Theoretical theme 3: when to evaluate the IS success variables? Integrating the IS lifecycle

This empirical research (to date) – understood as part of a longitudinal study with follow-ups – only allowed us to evaluate IS success at an early stage (pre-implementation). Thus, we returned to the literature to establish guidance on when to measure IS success; or rather, when to measure the specific aspects covered regarding the what (according to Gable et al, 2003; and Petter et al, 2013) and the how (Davis, 1989; Lewis, 1995; Goodhue, 1998; and Strömberg and Karlsson, personal communication, January 2014). Matera et al (2006) have pinpointed the importance of considering the IS lifecycle in Web design because of the iterative nature of Web design. A dynamic fact to consider is that users’ behavior and attitudes with regard to the technology may change over time. A helpful model for understanding IS life cycles is presented by Díez and McIntosh (2009), who identified several factors at individual and organizational levels that influence the use and usefulness of IS during different stages of the IS life cycle, with the aim of supporting IT systems used for environmental management. Their work consisted of a literature review in which they identified over 250 factors influencing IS success at the organizational and individual levels, and each factor was categorized into best, potential and worst predictors based on the number of times the factor had been studied and a weighting factor (the ratio between the number of times the factor was found to be significant and the number of times it was studied). In the following subsections we will present only the best and potential predictors at the individual level. Building on that categorization, Díez and McIntosh (2009) grouped their findings into three life cycle stages based on the IS lifecycle as categorized by Bhattacherjee and Premkumar (2004): pre-implementation, implementation and post-implementation.
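The classification logic described above can be paraphrased in a few lines of code. This is a minimal sketch under our reading of Díez and McIntosh (2009): we assume the weighting factor is the share of studies in which a factor proved significant, and the thresholds and example figures are hypothetical, chosen only to make the mechanics concrete.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    times_studied: int
    times_significant: int

    @property
    def weight(self) -> float:
        # Assumed definition: significant findings / total studies.
        return self.times_significant / self.times_studied

def classify(factor: Factor, min_studies: int = 5, cutoff: float = 0.7) -> str:
    """Toy best/potential/worst classification; thresholds are invented."""
    if factor.times_studied < min_studies:
        return "potential"  # too little evidence to call it best or worst
    return "best" if factor.weight >= cutoff else "worst"

# Hypothetical example factors, not Díez and McIntosh's actual data.
for f in [Factor("user participation", 12, 10),
          Factor("task technology fit", 4, 3)]:
    print(f.name, round(f.weight, 2), classify(f))
```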

Pre-Implementation

The main process of pre-implementation comprises the design and development of the IS. The factors classified by Díez and McIntosh (2009) as potential (P) or best (B) predictors at this stage are: communication between users and developers (P); existence of a sponsor/champion (P); IS planning sophistication (P); task technology fit (P); and user participation (B).

Implementation

Díez and McIntosh (2009) defined implementation as “(…) the process whereby an individual or organization deploys an IS in its work. It implies acquisition, diffusion and later assimilation of the IS because of its perceived benefits.” Their proposed concept of implementation is related to ‘adoption’ (Karahanna et al, 1999; Thong, 1999). For this life cycle stage, Díez and McIntosh identified eight factors (best predictors) which may have an influence at the implementation stage: behavioral intention, computer experience, perceived usefulness, subjective norms, system quality, top management support, user support, and user training.

Post-Implementation

According to Díez and McIntosh (2009), “Once the IS has been implemented, individuals and organizations will, after a period of exposure, formally and informally (i) validate the usefulness of the IS (Sibley and Kumar, 1990; Smith and Smith, 2007), (ii) check its future use (Sibley and Kumar, 1990), and (iii) justify investments made in the implementation of the new technology (Fitzgerald, 1998).” Because post-implementation is one of the least studied phases, only potential factors were identified by Díez and McIntosh (2009) at the individual level. The potential factors at this stage are: ease of use, perceived usefulness, user participation in the development and implementation, user satisfaction, user’s confirmation of expectation and user self-esteem. Moreover, this stage is where the net benefits will be most fully reflected.

Table 3 summarizes the best and potential factors at each stage of the IS lifecycle process as identified by this literature study. Best predictors are indicated with the letter B, and potential predictors with the letter P.

Table 3: Independent variables and IS life cycle phases, with Best (B) and Potential (P) predictors indicated

| Pre-implementation | Implementation | Post-implementation |
|---|---|---|
| Communication between users-developers (P) | Top management support (B) | Perceived usefulness (P) |
| Existence of a sponsor (P) | Behavioral intention (B) | Ease of use (P) |
| Task Technology Fit (P) | Perceived usefulness (B) | User participation in the development and implementation (P) |
| User participation (B) | Computer experience (B) | User satisfaction (P) |
| IS planning sophistication (P) | Subjective norms (B) | User confirmation of expectation (P) |
| | System quality (B) | User self-esteem (P) |
| | User support (B) | |
| | User training (B) | |

An additional acceptance model: the Unified Theory of Acceptance and Use of Technology (UTAUT)

Besides the four acceptance models used to evaluate the Web application, another acceptance model was included in our analysis: the Unified Theory of Acceptance and Use of Technology (UTAUT). UTAUT assesses the likelihood of success for new technology introductions and helps explain the drivers of acceptance, in order to intervene (training, marketing, etc.) when users are less inclined to adopt and use new systems (Venkatesh et al, 2003). It was added because of its widespread use and multidimensional approach. This model “(…) is based on an extensive literature review and empirical comparisons of eight different models: the theory of reasoned action, the technology acceptance model, the theory of planned behavior, a model combining the technology acceptance model and the theory of planned behavior, the model of PC utilization, the motivational model, the social cognitive theory and the innovation diffusion theory including their extensions (…)” (Adell, 2009, p. 41). The UTAUT acceptance characteristics are shown in Table 4.


Table 4 The UTAUT acceptance model

| Acceptance Model | Description | Acceptance Characteristic | Acronym |
|---|---|---|---|
| Unified theory of acceptance and use of technology (UTAUT) | UTAUT assesses the likelihood of success for new technology introductions and helps explain the drivers of acceptance (…) when users are less inclined to adopt and use new systems (Venkatesh et al, 2003). | Performance expectancy | PE |
| | | Effort expectancy | EE |
| | | Attitude toward using technology | ATT |
| | | Social influence | SI |
| | | Facilitating conditions | FC |
| | | Self-efficacy | SE |
| | | Anxiety | ANX |
| | | Behavioral intention to use the system | BI |

The consolidated framework

In this section we present and analyze the developed consolidated framework for IS success. This framework aims to contribute to theory development by consolidating well-accepted models and frameworks into a single one to assess IS success, and to practice by offering practitioners the consolidated framework as a guideline for IS success assessment. The framework resulted from solving a practitioner problem; consequently, it was developed and applied simultaneously in two real-life case scenarios at the pre-implementation stage.

The core of the framework: what, how and when

The core of the framework consists of three operative questions – what to measure, how to measure and when to measure – for which practitioners use different models at each phase (see Figure 5). These models serve as tools in a toolbox and together form a consolidated measurement framework. The order convention presented in this paper was established by how we faced the practical evaluation task during the pre-implementation of the Web apps during the research process. However, in a real-life context, the order of what, how and when may change according to the evaluator’s needs. For the first question, what to measure, the updated D&M model (dependent variables) together with the determinants (independent variables) (Petter et al, 2013) provide a framework that can be used to identify important factors leading to IS success.


Figure 5 Practitioners’ framework for IS success measurement

For the second question, how to measure, as argued by Petter et al (2008), researchers have tended to focus on a single IS success variable at a time (e.g. system quality) or a specific acceptance characteristic (e.g. perceived usefulness and/or ease of use) in previous studies. It is our belief that the combination of different acceptance models can work as a toolbox for IS success measurement. For the last question, when to measure, we acknowledge that IS has a lifecycle and that the behavior of the users changes over time. This change of behavior may be reflected in IS success variables emerging and receding over time; therefore, it is important to know when to measure, in order to gain a better understanding of IS success. Díez and McIntosh’s (2009) framework helps identify which factors weigh more at the three different IS lifecycle stages. As argued by Bhattacherjee and Premkumar (2004), a three-period model allows us to examine not only belief and attitude changes among users over time, but also the rate of such changes across time from early to later stages of ICT usage.

Having a consolidated framework that considers the IS lifecycle in the evaluation of IS success can benefit different stakeholders. Software developers who do the evaluation at the pre-implementation stage can detect weaknesses in the design and improve the ICT tool at a stage when changes are easier, cheaper and have less negative impact. Managers can track users’ satisfaction levels with technology usage (at least during the initial stages of ICT usage, when such change is prevalent), identify sources of any dissatisfaction, and intervene accordingly to avoid possible ICT abandonment.

Mapping the relevant IS success variables and acceptance characteristics across the IS lifecycle

Table 5 presents the mapping of the IS success variables proposed by Petter et al (2013) and Díez and McIntosh (2009) to the acceptance characteristics of the five reviewed models, and the lifecycle stages at which they matter. The categorization of the IS success variables according to their relevance at each IS lifecycle stage is based on Díez and McIntosh’s findings. The acceptance characteristics related to the five included acceptance models are used to evaluate several IS success variables. For instance, from TAM, perceived usefulness (PU) evaluates task compatibility, and perceived ease of use (PEOU) evaluates system quality (Petter et al, 2013). With TTF, information quality, system quality, service quality, and user training can be measured. UTAUT may help with intention to use, attitude toward technology, and so forth.
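As a reading aid, the sketch below shows how the what/how/when mapping of Table 5 could be operationalized as a simple lookup when planning an evaluation. It is our own illustrative Python encoding; the mapping extract is abridged rather than the complete published table, and the function name is hypothetical.

```python
# Abridged, illustrative extract of the Table 5 mapping:
# stage -> IS success variable -> (model, characteristic) pairs.
FRAMEWORK = {
    "pre-implementation": {
        "Task compatibility": [("TTF", "Task technology fit")],
        "System quality": [("TAM", "Perceived ease of use"),
                           ("IBM-US", "Interface quality")],
        "Information quality": [("TTF", "Right data"),
                                ("IBM-US", "Information quality")],
    },
    "implementation": {
        "Intention to use": [("UTAUT", "Behavioral intention to use the system")],
        "Subjective norms": [("UTAUT", "Social influence")],
    },
    "post-implementation": {
        "User satisfaction": [("IBM-US", "Overall")],
        "Self-efficacy": [("UTAUT", "Self-efficacy")],
    },
}

def evaluation_plan(stage: str) -> list[str]:
    """List what to measure at a given stage and how to operationalize it."""
    plan = []
    for variable, instruments in FRAMEWORK.get(stage, {}).items():
        hows = ", ".join(f"{model}: {char}" for model, char in instruments)
        plan.append(f"{variable} -> {hows}")
    return plan

for line in evaluation_plan("pre-implementation"):
    print(line)
```

A practitioner could extend FRAMEWORK with the remaining rows of Table 5 and query it once per lifecycle stage to assemble the corresponding questionnaire items.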


Table 5 – Mapping of different IS success variables, acceptance models and characteristics to each IS life cycle phase: Pre-implementation, Implementation and Post-implementation, with specification of related questionnaire items

| IS success variable (Petter et al, 2013) | IS success factor, Díez and McIntosh (2009) | Pre-impl. | Impl. | Post-impl. | Acceptance model: characteristic | Items |
|---|---|---|---|---|---|---|
| Task compatibility | Task technology fit (P) / Perceived usefulness (B) | X (TTF) | X (PU) | X (PU) | TAM: Perceived usefulness; SKAS: Perceived benefit; UTAUT: Performance expectancy | Q4, Q5, Q9; U6, RA1, RA5 |
| Task difficulty | | | | | SKAS: Perceived benefit | Q8 |
| Playfulness | | | | | UTAUT: Social influence | SF2, SF4 |
| User involvement | User participation (B) | X | | | UTAUT: Social influence | OE7 |
| Relationship with developers | Communication between users and developers (P) | X | | | UTAUT: Facilitating conditions | PBC2, PBC5 |
| Management support | Top management support (B) / Existence of a sponsor-champion (P) | X (Sponsor) | X (Top management) | | UTAUT: Attitude toward technology | AF1 |
| Management processes | IS planning sophistication (P) | X | | | UTAUT: Anxiety | ANX1, ANX4 |
| Organizational competence | Computer experience (B) | | X | | SKAS: Perceived benefit | Q8 |
| IT infrastructure | | XX | | | | |
| Extrinsic motivation | | | | | | |
| Attitude toward technology | Attitudes | | | | UTAUT: Attitude toward technology | |
| Enjoyment | Enjoyment | | | | SKAS: Perceived benefit | |
| Trust | Trust | | | | UTAUT: Attitude toward technology; SKAS: Trust and control; UTAUT: Anxiety | AFFECT1, AF2; ANX2, ANX3 |
| Self-efficacy | User self-esteem (P) | | | X | UTAUT: Self-efficacy | SE1, SE2, SE4, SE6, SE7 |
| User expectations | User confirmation of expectation (P) | | | X | UTAUT: Effort expectancy | EOU3 |
| System quality | System quality (B) / Ease of use (P) | X | X (System quality) | X (Ease of use) | IBM-US: System usefulness, Interface quality; TAM: Perceived usefulness, Perceived ease of use; TTF: Flexibility, Systems reliability; UTAUT: Effort expectancy, Facilitating conditions | EOU6, PBC5; Q14, Q15 |
| Service quality | Professionalism IS | X | X | | TTF: Assistance; UTAUT: Facilitating conditions | FC3 |
| Information quality | | X | X | | IBM-US: Information quality; TTF: Right data, Accessibility, Locatability, Right level of detail; SKAS: Compliance; UTAUT: Performance expectancy | Q11, Q12, Q13, Q17, Q18 |
| Intention to use | Behavioral intention (B) | | X | | UTAUT: Behavioral intention to use the system | BI1, BI2, BI3; Q19 |
| Use | | | | | IBM-US: Overall | |
| User satisfaction | User satisfaction (P) / User support (B) | | X (User support) | X (User satisfaction) | IBM-US: Overall | Q20 |
| Net benefits | | | | | | |
| Subjective norms | Subjective norms (B) | | X | | UTAUT: Social influence | SN1, SN2 |
| User training | User training (B) | | X | | TTF: Training | |


Discussion

Practitioners want to ensure the IS success of the ICT tools they design, develop and/or implement. In the present work we provide a consolidated framework that practitioners can use when assessing IS success. This consolidated framework was applied to two real-life case scenarios to illustrate its use at the pre-implementation stage of two Web app prototypes. To evaluate the IS success of the prototypes, the consolidated framework not only included task technology fit (or task compatibility), as in Díez and McIntosh (2009), but also considered other IS success variables defined by Petter et al (2013). Seven IS success variables were selected: system quality (where some usability aspects were included), information quality, intention to use, user satisfaction, attitude toward technology, task compatibility and net benefits. These IS success variables were measured by the 20-item questionnaire, except for the net benefits, which were measured by the cycle time.

Several IS success variables at the pre-implementation stage proposed by Díez and McIntosh (2009) and Petter et al (2013) may be hard to measure – for example, communication between users and developers, existence of a sponsor, IS planning sophistication and user participation (see Table 5) – but including them in the evaluation is crucial to ensuring success. Aware of their importance, in studies A and B we tried to interact with users and let them participate in the development of the Web apps in order to guarantee that the Web apps fit their needs and contain the right information. The early evaluation and close interaction with the final users helped us pinpoint areas of improvement regarding the content of the information and the usability of the Web app. Even though we have paid attention to user participation, other factors such as management support and the existence of a sponsor are crucial, and if they are not considered there might be a negative effect (e.g. Wilkinson, 2011). Even though management support and a sponsor may be hard to measure, we propose this can be done through the UTAUT social influence characteristic.

Another important IS success variable that is not identified by Díez and McIntosh (2009) but is defined by Petter et al (2013) is IT infrastructure (marked as XX in Table 5). In study B, the reception of both the Wi-Fi and the cellular network connection was weak, and navigation in the Web app was very slow during the tests. This problem did not let us validate the effectiveness of the Web app. Without an IT infrastructure that supports the ICT tools, it is not possible to take advantage of the benefits of those tools; instead, the impact will become negative and the users will not perceive any benefits at all. This is perhaps an indication of the need to periodically update and consolidate theoretical models, as new issues emerge as time passes and accessible technology matures and develops.

With our empirical studies we contribute to a multidimensional perspective in the same way that Gable et al (2003) did for implemented enterprise resource planning systems, and Petter et al (2008) did from a more holistic perspective. Our studies intend to give a broader picture and a better understanding of IS success for Web applications. In addition to the multidimensional approach, we included the IS lifecycle dimension as proposed by Díez and McIntosh (2009). There are other studies that include the IS lifecycle perspective, e.g. Bhattacherjee and Premkumar’s (2004) work, which examined two factors – belief and attitude toward technology – across the lifecycle. Yet, the aforementioned studies lack the multidimensional perspective.

Limitations

Like most previous studies, our study focuses on the individual level only. The organizational aspect was not considered in this study because the Web app was developed as a proof of concept to show the participating companies the possible benefits and use of Web apps as a learning tool to help standardize work at the individual level. At this stage of our research, we have concentrated on the pre-implementation stage and the IS success variables that may have an impact on the success of the Web apps. A longitudinal study is recommended to follow up the rest of the IS lifecycle for the same type of studies.

An issue when mapping Díez and McIntosh’s IS factors alongside Petter et al’s (2013) IS success variables was that the two models prioritize each variable’s relevance differently. According to Díez and McIntosh, some of the IS success variables in Table 5 are best or optimal predictors at a specific IS lifecycle stage (marked B), while the others are merely present at the mentioned IS lifecycle stage. For example, Díez and McIntosh considered attitude as not relevant, whereas Petter et al (2013) considered it an important factor. In addition, there were two IS factors found by Díez and McIntosh, user training and subjective norms, which are not considered important IS success variables at the overall IS level by Petter et al (2013), who state: “Managers can also work to develop training programs to influence self-efficacy (…)”. A further challenge presented while mapping was that the same variable was defined differently by the different authors of the acceptance models. In the end, mapping was performed on the basis of the authors’ own interpretation of similarities and differences between the two models.


Another important difference is the categorization of dependent and independent variables; for example, user satisfaction is a dependent variable at the pre-implementation stage, but an independent variable at the implementation and post-implementation stages. Hence, the IS lifecycle adds dynamics to the IS success variables.

Conclusions

This paper continues the work of establishing frameworks and practitioner approaches for evaluating IS success in manufacturing-related ICT development. This is carried out by consolidating existing well-supported models for IS success (what), and adding the aspects of how and when to measure the IS success variables that have been found in the literature to have an impact on IS success. The resultant framework points out how practitioners can choose and measure the right IS success variables at the right time, using appropriate questionnaire items, during different stages of the IS life cycle. Choosing which factors are most relevant depends on the purpose of the IT/ICT tools and the target users. Ultimately, there is no simple answer, because the holistic picture of what influences IS success has been recognized to be socio-technical and complex – our belief is that a contemporary evaluation framework for IS success should reflect this.

Acknowledgements

The authors would like to acknowledge the research funding from the Swedish Innovation Agency VINNOVA through the project “The Operator of the Future”. The authors are deeply grateful to the managers, operators and technicians of the participating companies. Without their experience and support we could not have concretized the practical aspects of our research.

References

ADELL E (2009) Driver experience and acceptance of driver support systems – a case of speed adaptation. PhD thesis, Lund University.
BENBYA H and MCKELVEY B (2006) Toward a complexity theory of information systems development. Information Technology & People, 19(1), 12–34.
BHATTACHERJEE A and PREMKUMAR G (2004) Understanding Changes in Belief and Attitude toward Information Technology Usage: A Theoretical Model and Longitudinal Test. MIS Quarterly, 28(2), 229–254.
BLAIKIE N (2000) Designing Social Research: The Logic of Anticipation. Wiley.
DAVIS F D (1989) Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319–339.
DELONE W H and MCLEAN E R (1992) Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3(1), 60–95.
DELONE W H and MCLEAN E R (2003) The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19(4), 9–30.


DÍEZ E and MCINTOSH B S (2009) A review of the factors which influence the use and usefulness of information systems. Environmental Modelling and Software, 24(5), 588–602.
DUBOIS A and GADDE L-E (2002) Systematic combining: an abductive approach to case research. Journal of Business Research, 55(7), 553–560.
FITZGERALD G (1998) Evaluating information systems projects: a multidimensional approach. Journal of Information Technology, 13, 15–27.
GABLE G, SEDERA D and CHAN T (2003) Enterprise systems success: a measurement model. Twenty-Fourth International Conference on Information Systems, 576–591.
GOODHUE D (1998) Development and Measurement Validity of a Task-Technology Fit Instrument for User Evaluations of Information Systems. Decision Sciences, 29(1), 105–138.
KARAHANNA E, STRAUB D W and CHERVANY N L (1999) Information Technology Adoption Across Time: A Cross-Sectional Comparison of Pre-Adoption and Post-Adoption Beliefs. MIS Quarterly, 23(2), 183–213.
LEAVITT H J (1965) Applied organizational change in industry: structural, technological and humanistic approaches. In Handbook of Organizations. Rand McNally & Co.
LEWIS J R (1995) IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57–78.
MATERA M, RIZZO F and CARUGHI G (2006) Web usability: principles and evaluation methods. In Web Engineering (MENDES E and MOSLEY N, Eds), pp. 143–180, Springer Berlin Heidelberg.
NIELSEN J (1994) Usability Engineering. Elsevier Science.
ONG B K (2012) Grounded Theory Method (GTM) and the Abductive Research Strategy (ARS): a critical analysis of their differences. International Journal of Social Research Methodology, 15(5), 417–432.
PETTER S, DELONE W and MCLEAN E (2008) Measuring information systems success: models, dimensions, measures, and interrelationships. European Journal of Information Systems, 17(3), 236–263.
PETTER S, DELONE W and MCLEAN E R (2013) Information systems success: the quest for the independent variables. Journal of Management Information Systems, 29(4), 7–62.
SABHERWAL R (1999) The Relationship Between Information System Planning Sophistication and Information System Success: An Empirical Assessment. Decision Sciences, 30(1), 137–167.
SIBLEY E H and KUMAR K (1990) Post Implementation Evaluation of Computer-Based Information Systems: Current Practices. Communications of the ACM, 33(2), 203–212.
SMITH J and SMITH P (2007) Environmental Modelling: An Introduction. OUP Oxford.
STRÖMBERG H (January 2014) Personal communication.
THONG J (1999) An integrated model of information systems adoption in small businesses. Journal of Management Information Systems, 15(4), 187–214.
TJAHJONO B (2009) Supporting shop floor workers with a multimedia task-oriented information system. Computers in Industry, 60(4), 257–265.


VENKATESH V, MORRIS M G, DAVIS G B and DAVIS F D (2003) User acceptance of information technology: toward a unified view. MIS Quarterly, 27(3), 425–478.
WILKINSON J (2011) Dead on Delivery: How a Successful Project Failed. PMI Virtual Library, 1–4.
XIA W and LEE G (2003) Complexity of information systems development projects: conceptualization and measurement development. Journal of Management Information Systems, 22(1), 45–83.
