Information System Success: Individual and Organizational Determinants1

by Rajiv Sabherwal2 University of Missouri Curators Professor and Emery C. Turner Professor of Information Systems College of Business Administration, University of Missouri – St. Louis 8001 Natural Bridge Road, St. Louis, MO 63121 Phone: 314-516-6490 • Email: [email protected]

Anand Jeyaraj Doctoral Candidate in Information Systems College of Business Administration, University of Missouri – St. Louis 8001 Natural Bridge Road, St. Louis, MO 63121 Phone: 314-516-4882 • Email: [email protected]

Charles Chowa Doctoral Candidate in Information Systems College of Business Administration, University of Missouri – St. Louis 8001 Natural Bridge Road, St. Louis, MO 63121 Phone: 314-516-4883 • Email: [email protected]

Forthcoming in Management Science Original Submission: August 2004 First Revision: November 2005 Second Revision: February 2006 Accepted: April 2006

1. We are grateful to the Associate Editor, the four anonymous reviewers, and the Department Editor for their insightful comments on previous versions of the paper. Their comments have been very helpful in improving the paper. The paper has also benefited from several suggestions the first author received from participants of research seminars at Louisiana State University, University of Maryland, University of Oklahoma, and University of Illinois, Chicago.

2. Contact author.

Information System Success: Individual and Organizational Determinants

Abstract

Despite considerable empirical research, results on the relationships among constructs related to information system (IS) success, as well as the determinants of IS success, are often inconsistent. A comprehensive understanding of IS success thus remains elusive. In an attempt to address this situation, which may partly be due to the exclusion of potentially important constructs from prior parsimonious models of IS success, we present and test a comprehensive theoretical model. This model explains interrelationships among four constructs representing the success of a specific information system (user satisfaction, system use, perceived usefulness, system quality), and the relationships of these IS success constructs with four user-related constructs (user experience with ISs, user training in ISs, user attitude towards ISs, and user participation in the development of the specific IS) and two constructs representing the context (top-management support for ISs and facilitating conditions for ISs). To test the model, we first used meta-analysis to compute a correlation matrix for the constructs in the model based on 612 findings from 121 studies published between 1980 and 2004, and then used this correlation matrix as input for a LISREL analysis of the model. Overall, we found excellent support for the theoretical model. The results underline the importance of user-related and contextual attributes in IS success and raise questions about some commonly believed relationships.

Keywords: Information system success, user satisfaction, system use, system quality, perceived usefulness, meta-analysis, structural equation modeling.

1. Introduction

Information system (IS) success and its determinants have long been considered critical to the field of information systems (Bailey and Pearson 1983; DeLone and McLean 1992; Seddon 1997; Rai, Lang, and Welker 2002). However, empirical results in this area are inconsistent, and a synthesis across the numerous empirical studies is needed (Rai et al. 2002). The main objective of this paper is to provide further insights into the success of an IS that is adopted or used by individuals within the organization, and the determinants of IS success, by empirically integrating prior research in this area. IS success is viewed from the perspective of individual users, and not from the perspective of senior business executives1. As DeLone and McLean (1992) and Rai et al. (2002) suggest, the observed empirical relationships among the constructs related to IS success might be due to the exclusion of other factors affecting them. This problem could be mitigated by examining IS success along with its potential determinants. Therefore, this paper addresses the following specific questions: (1) How do the various constructs reflecting IS success affect each other? (2) How do these IS success constructs depend on constructs characterizing the users and the context? To pursue these questions, this paper develops a comprehensive theoretical model, including constructs related to the context, the users, and IS success. The theoretical model is tested through a combination of meta-analysis and structural equation modeling.

2. Theoretical Development

2.1. Overview of the Theoretical Model

The theoretical model is presented in a top-down fashion. The broad theoretical model, which identifies the research propositions, is presented first. In the subsequent sub-sections, the detailed theoretical model is proposed by using the prior literature on IS success and its determinants to develop the hypotheses related to each proposition. The theoretical model relies on several theoretical areas: expectancy theory (Vroom 1964; Thompson, Higgins, and Howell 1991), theory of reasoned action (TRA) (Fishbein and Ajzen 1975; Davis, Bagozzi, and Warshaw 1989), theory of planned behavior (TPB) (Ajzen 1991; Taylor and Todd 1995), technology acceptance model (TAM) (e.g., Davis 1989), unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al. 2003), social cognitive theory (SCT) (Bandura 1986; Compeau and Higgins 1995), and innovation diffusion theory (IDT) (Rogers 1983; Moore and Benbasat 1991). Space considerations inhibit detailed discussion of these theoretical foundations here, but they have been extensively discussed in prior literature (e.g., Venkatesh et al.).

Figure 1 presents the broad theoretical model, which includes three broad concepts. The first concept, IS success, is examined using four constructs – system quality, perceived usefulness, user satisfaction, and system use (DeLone and McLean 1992; Seddon 1997; Rai et al. 2002). Proposition 1 represents the interrelationships among these four constructs. Four constructs – user experience with information systems (ISs), user attitude towards ISs, user training in ISs, and user participation in the development of the specific IS – characterize users. Proposition 2 focuses on the interrelationships among these constructs, and Proposition 3 reflects their effects on IS success. The third concept – the context – is represented by two constructs: top-management support for ISs and facilitating conditions for ISs. Proposition 4 deals with the relationship between these two constructs, while Propositions 5 and 6 focus on their effects on user-related constructs and IS success, respectively. Table 1 identifies the ten constructs that comprise the theoretical model along with the related constructs from the paper's theoretical foundations and the related variables from prior empirical studies. Of these ten constructs, user participation in the development of the specific IS and the four IS success constructs pertain to the specific IS, while the other five constructs pertain to information systems in general.

1. The interested reader can learn more about the business value of IT in Melville, Kraemer, and Gurbaxani (2004).

2.2. Information System Success

Based on a comprehensive survey of prior literature, DeLone and McLean (1992) proposed, but did not empirically test, a model of IS success that included six constructs: system quality, information quality, use, user satisfaction, individual impact, and organizational impact. Seddon (1997) used theoretical considerations to modify DeLone and McLean's model. He distinguished between actual impacts and expected impacts, and incorporated the additional construct of perceived usefulness (Davis 1989). Moreover, he pointed out that system use in the DeLone and McLean model has three possible meanings: a behavior, a proxy for benefits, and an event in a process leading to individual or organizational impact. Seddon viewed system use as a behavior that reflects an expectation of net benefits from using the system. Overall, Seddon's model included three types of constructs: measures of information and system quality, system use as behavior, and general measures of net benefits from system use.

Rai et al. (2002) further built on DeLone and McLean (1992) and Seddon (1997). They viewed perceived usefulness as being related to individual impacts because it is based on several of the constructs DeLone and McLean had linked to individual impacts, such as improved individual productivity. Rai et al. focused on five constructs (system quality, information quality, perceived usefulness, user satisfaction, and system use) and represented system quality and system use in terms of ease of use and system dependence, respectively. Based on a survey of 274 users of an integrated student information system, they tested DeLone and McLean's model, Seddon's model, and a modified Seddon model, including a correlational path between perceived usefulness and system use. Rai et al. found the modified Seddon model to perform the best. Constructs from this model are used in this paper to represent IS success, but information quality is excluded as few empirical studies have examined it in relation to the other constructs in the model.

Figure 2 depicts the proposed relationships among the constructs related to IS success. These hypotheses are more specific and detailed representations of Proposition 1, are numbered H1a to H1e, and are identical to the relationships included in Rai et al.'s (2002) modified Seddon (1997) model. Due to space considerations, the hypotheses are not formally stated but are indicated in the figures and within the text. As shown in the figure, system quality, which is defined in terms of the system's reliability, ease of use, and response time (Rai et al.), is expected to facilitate perceived usefulness (Hypothesis 1a, or H1a), which is defined as the degree to which an individual believes that use of the system enhances the individual's productivity and job performance (Davis 1989). H1a is consistent with TAM, which suggests that perceived ease of use affects perceived usefulness (Davis 1989). Like other hypotheses about the interrelationships among constructs representing IS success, this hypothesis is consistent with Seddon and Rai et al. System quality (H1b) and perceived usefulness (H1c) are expected to facilitate user satisfaction, which is defined as the extent to which the user believes that the information system meets his or her information requirements (Ives, Olson and Baroudi 1983). H1b and H1c are consistent with TAM and TPB, where beliefs about the system affect attitudes related to using the system (Rai et al.; Wixom and Todd 2005). User satisfaction is posited to facilitate system use2 (H1d), which is defined as the behavior of using the system (Seddon 1997). H1d is consistent with TAM and TPB, which consider attitudes towards using the system as influencing system use. In addition, perceived usefulness is posited to be correlated with system use (H1e). Rai et al. (2002) added this correlational path when modifying Seddon's model. H1e is consistent with the expectancy theory of motivation, according to which individuals assess the consequences of their actions in terms of likely rewards and modify their behavior based on the desirability of rewards (Vroom 1964; Thompson et al. 1991). H1e is also consistent with DeLone and McLean's (1992) model, which includes a path from system use to individual impact (represented by Rai et al. and in this paper as perceived usefulness), and with Seddon's model, as Rai et al. point out (p. 65).

2.3. User-Related Constructs

User attributes have an important role in the eventual success of an IS (Guimaraes and Igbaria 1997). This study includes four user-related constructs, as shown in Figure 3: user experience with ISs, user attitude towards ISs, user training in ISs, and user participation in the development of the specific IS. For simplicity, they are called user experience, user attitude, user training, and user participation, respectively, in Figure 3 and at most places in the paper. Of these, the first three constructs relate to ISs in general, while user participation focuses on the specific IS.

User experience with ISs may be defined as the duration or level of an individual's prior use of computers and ISs in general (Guimaraes and Igbaria 1997). User training in ISs is defined as the extent to which an individual has been trained about ISs through college courses, vendor training, in-house training, and self-study (Igbaria, Guimaraes and Davis 1995). It reflects such prior training about ISs in general, and not training about the specific IS being developed. User experience is hypothesized to affect user training (H2a), based on the expectation that users who have gained greater experience with ISs would have encountered greater opportunities to receive training with respect to ISs, and would have encountered greater need for such training (DeLone 1988). User experience is also hypothesized to affect user attitude towards ISs (H2b). Triandis (1971) has defined attitude as "an idea, charged with affect, that predisposes a class of actions to a particular class of social situation" (p. 2). Fishbein and Ajzen (1975) view attitude as the affect that one feels against or for some object or behavior, and differentiate between attitudes towards objects (e.g., "This system is excellent") and attitudes related to behaviors (e.g., "I hate using the new system"). Incorporating both these aspects, we define user attitude towards ISs as a user's affect, or liking, for ISs and for using them (Venkatesh et al. 2003). Moreover, user attitude towards ISs focuses on the users' general attitude towards ISs, and not towards the specific system being developed (Thompson et al. 1991); the latter attitude is incorporated in user satisfaction (Rai et al. 2002; Wixom and Todd 2005). We define user participation in the development of the specific IS as the assignments, tasks, and behaviors that users, or their representatives, perform during the IS development (ISD) project, or the user's psychological state of involvement in the project3. User training is posited to facilitate user participation (H2c). ISD teams would seek greater participation from users who have received IS training in the past, and such users may themselves be more motivated to participate in ISD projects (Guimaraes, Staples, and McKeen 2003). User attitude is also posited to affect user participation (H2d), consistent with the argument that a less favorable attitude towards ISs might lead to users not contributing to the ISD project, and not feeling psychologically involved (Thompson et al. 1991; Hartwick and Barki 1994).

2. However, the measures of system use in some empirical studies are based on the respondents' perceptions of use.
Effects of User-related Constructs on System Success: User experience with ISs enables the formation of habits related to using systems (Triandis 1971). Therefore, consistent with prior literature (Guimaraes and Igbaria 1997), user experience is hypothesized to enhance system use (H3a), as shown in Figure 4. User training in ISs is posited to facilitate system quality (H3b). User training enhances users' self-efficacy and skills with respect to ISs (Agarwal and Prasad 2000; Compeau and Higgins 1995). As a result, users set higher standards and make greater contributions to ISD efforts (Agarwal and Prasad). User training can thus directly (H3b) and indirectly (through user participation; H2c and H3f, which is discussed later) lead to better systems (Guimaraes et al. 2003). Users with a more positive attitude towards ISs are likely to be more satisfied with the new system and to view it as more useful (Guimaraes and Igbaria 1997). Moreover, users who are favorably inclined towards ISs are more likely to use the new system to a greater extent, even if they may not have participated in its development (Igbaria 1990). This is consistent with Triandis's (1971) argument that behavior is influenced by what people would like to do (i.e., attitudes). Therefore, user attitude is hypothesized to affect perceived usefulness (H3c), user satisfaction (H3d), and system use (H3e). Prior literature (e.g., Ives and Olson 1984; Barki and Hartwick 1994) leads us to posit that user participation in the development of a specific IS affects all four aspects of IS success: system quality (H3f), perceived usefulness (H3g), user satisfaction (H3h), and system use (H3i). Theories of cognition help explain the effect of user participation on system quality and system use (e.g., through improved understanding of the specific system and its features), whereas theories of motivation help explain its effect on perceived usefulness and user satisfaction (e.g., through increased commitment and reduced resistance to change) (Ives and Olson).

3. Barki and Hartwick (1994) distinguish between "involvement" and "participation," but most studies have used the terms interchangeably, both before and after 1994. Our definition of user participation includes both "participation" and "involvement."

2.4 The Context

Two constructs representing the context for IS development and use – top-management support for ISs and facilitating conditions for ISs – were included. Top-management support for ISs refers to the senior executives' favorable attitude toward, and explicit support for, ISs (e.g., Doll 1985). Facilitating conditions for ISs reflect the processes and resources that facilitate an individual's ability to utilize information systems (Thompson et al. 1991). In TPB, facilitating conditions are construed as an element of perceived behavioral control. Both these constructs relate to ISs in general, rather than the specific IS. For simplicity, they are called top-management support and facilitating conditions at most places in the paper.


When top management is highly supportive of ISs, greater resources are likely to be allocated to develop and support ISs (Yap 1989), enhancing facilitating conditions for ISs (Thong, Yap and Raman 1996). We therefore posit top-management support of ISs to be positively associated with facilitating conditions (H4). Also, when the level of top-management support is high, senior executives are more likely to attend project meetings related to the specific IS, participate in important decisions, and monitor the project (Thong et al.). This would lead to a greater willingness on the part of the users to participate in the specific ISD project (Markus 1983; Purvis, Sambamurthy and Zmud 2001), which is consistent with arguments based on the effects of social norms (Compeau and Higgins 1995) and social influence (Venkatesh et al. 2003). We therefore posit top-management support for ISs to enhance user participation in the development of a specific IS (H5a). Facilitating conditions for ISs, such as the presence of help desks and technical support teams, enable individuals to gain experience with, and learn about, ISs (Taylor and Todd 1995). Therefore, facilitating conditions for ISs are expected to influence user experience with ISs (H5b) and user training in ISs (H5c). Facilitating conditions are also posited to enhance user attitude towards ISs (H5d); better facilitating conditions imply circumstances that enhance the pleasure associated with, and allay the anxiety about, using ISs (Venkatesh et al. 2003). Top-management support for ISs is expected to directly affect IS success. There is considerable evidence of the effect of top-management support on IS success in prior literature (Doll 1985; Jarvenpaa and Ives 1991; Purvis et al. 2001). Top-management support for ISs in general promotes the quality of the specific system by facilitating the allocation of needed resources during the project (Thong et al. 1996).
Symbolic actions of support by senior managers also contribute to successful implementation (Sharma and Yetton 2003). Indeed, lack of top-management support is a critical barrier to IS use (Guimaraes and Igbaria 1997; Igbaria et al. 1995). Thus, top-management support is posited to positively affect all four aspects of IS success (Hypotheses 6a-6d).

2.5 Some Concluding Remarks about the Theoretical Model

The overall theoretical model is presented in Figure 4. This model builds on the broad theoretical model (Figure 1) and incorporates the hypotheses for relationships among constructs related to IS success (Figure 2), users (Figure 3), and the context (Hypothesis 4), and the relationships across these four concepts. In developing the theoretical model, some inconsistencies in prior theories and empirical results needed to be addressed. When faced with such inconsistencies, we preferred to rely on theory over empirical results, because the latter would be captured through the meta-analysis. Moreover, we preferred more recent theoretical work, as it usually builds on prior theories. For example, user satisfaction has been found to affect system use (Baroudi, Olson and Ives 1986), the opposite causal effect has been found (Lee, Kim and Lee 1995), and a non-significant relationship has been found between these two constructs (Ang and Soh 1997). To address this inconsistency, we relied on Rai et al.'s (2002) modified Seddon model of IS success, as it builds on prior work by DeLone and McLean (1992) and Seddon (1997), and included a path from user satisfaction to system use. The relationship between facilitating conditions for ISs and system use is another example of inconsistency in prior literature. Facilitating conditions have been argued to affect use, either directly or through behavioral intention (Thompson et al. 1991; Taylor and Todd 1995), but the relationship between these constructs has also been found to be non-significant (e.g., Mawhinney and Lederer 1990). We excluded a direct path from facilitating conditions for ISs to system use, especially since this effect may be explained through user experience with ISs and user attitude towards ISs. The theoretical model includes 10 constructs and 27 hypotheses, although several potential hypotheses are explicitly excluded. User experience with ISs and user training in ISs are each posited to only affect system quality and system use, while user attitude is posited to affect perceived usefulness, user satisfaction, and system use, but not system quality.
Top-management support for ISs is hypothesized to directly affect only user participation and not the other three user-related constructs. Instead, it is argued to affect facilitating conditions, which are posited to affect the other three user-related constructs. We also do not hypothesize several possible direct paths between user-related constructs and IS success (although user participation is hypothesized to affect all four dimensions of system success). Consistent with Ang and Soh (1997), user experience is posited to affect system use, but not perceived usefulness, user satisfaction, or system quality; indirect paths through user training, user attitude, and user participation may explain these relationships. User experience is also not posited to directly affect user participation, although there may be indirect effects through user attitude and user training. Moreover, user attitude and user training are not hypothesized to affect each other. Although these constructs may seem related, we believe it is more likely that they are both influenced in similar directions by facilitating conditions and user experience. Improved facilitating conditions for ISs and greater user experience with ISs would enhance users' attitude towards ISs and the amount of training they have received in ISs. By contrast, if facilitating conditions are poor and user experience with ISs is low, it is likely that users would have a poor attitude towards ISs and a low level of training in ISs.
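As a compact summary of the theoretical model, the ten constructs and 27 hypothesized paths described above can be transcribed and sanity-checked in code. The construct names are our own shorthand rather than the paper's labels, and H1e is a correlational rather than a causal path:

```python
# Hypothesized paths of the theoretical model (Sections 2.2-2.4),
# transcribed from the text; names are illustrative shorthand.
HYPOTHESES = {
    # Proposition 1: interrelationships among IS success constructs
    "H1a": ("system_quality", "perceived_usefulness"),
    "H1b": ("system_quality", "user_satisfaction"),
    "H1c": ("perceived_usefulness", "user_satisfaction"),
    "H1d": ("user_satisfaction", "system_use"),
    "H1e": ("perceived_usefulness", "system_use"),  # correlational path
    # Proposition 2: interrelationships among user-related constructs
    "H2a": ("user_experience", "user_training"),
    "H2b": ("user_experience", "user_attitude"),
    "H2c": ("user_training", "user_participation"),
    "H2d": ("user_attitude", "user_participation"),
    # Proposition 3: user-related constructs -> IS success
    "H3a": ("user_experience", "system_use"),
    "H3b": ("user_training", "system_quality"),
    "H3c": ("user_attitude", "perceived_usefulness"),
    "H3d": ("user_attitude", "user_satisfaction"),
    "H3e": ("user_attitude", "system_use"),
    "H3f": ("user_participation", "system_quality"),
    "H3g": ("user_participation", "perceived_usefulness"),
    "H3h": ("user_participation", "user_satisfaction"),
    "H3i": ("user_participation", "system_use"),
    # Propositions 4-6: contextual constructs
    "H4":  ("top_management_support", "facilitating_conditions"),
    "H5a": ("top_management_support", "user_participation"),
    "H5b": ("facilitating_conditions", "user_experience"),
    "H5c": ("facilitating_conditions", "user_training"),
    "H5d": ("facilitating_conditions", "user_attitude"),
    "H6a": ("top_management_support", "system_quality"),
    "H6b": ("top_management_support", "perceived_usefulness"),
    "H6c": ("top_management_support", "user_satisfaction"),
    "H6d": ("top_management_support", "system_use"),
}

constructs = {c for pair in HYPOTHESES.values() for c in pair}
print(len(HYPOTHESES), len(constructs))  # 27 hypotheses, 10 constructs
```

Encoding the model this way makes the counts stated in the text (10 constructs, 27 hypotheses) directly checkable.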

3. Research Methods

3.1 Meta-Analysis

We employed meta-analysis methods suggested by Hunter and Schmidt (1990) to compute corrected correlations among the constructs. Meta-analysis refers to a set of procedures for accumulating and analyzing descriptive statistics reported by individual studies (Alavi and Joachimsthaler 1992). The three steps in a meta-analysis – identifying the individual studies to be included in the analysis; coding the individual studies; and accumulating the findings reported by the individual studies – are explained below.

Identifying Individual Studies for Meta-Analysis: A preliminary search of the ABI/INFORM database for articles focusing on the four IS success4 constructs indicated seven contextual and user-related antecedent constructs: top-management support for ISs, facilitating conditions for ISs, user experience with ISs, user training in ISs, user attitude towards ISs, user participation in the development of the specific system, and the quality of ISD team. The seven antecedent constructs and the four constructs representing IS success were then used to search for the studies that comprise the meta-analysis sample.

In order to identify studies on all the relationships in our theoretical model, we focused on one bivariate relationship at a time. Thus, we conducted the online search for 55 relationships among the four IS success constructs and the seven antecedents of IS success. We searched two online databases (ABI/INFORM and JSTOR) for articles published in 18 journals from 1980 to 2004. We first searched for the various terms associated with each pair of constructs5 within the full text of the potential articles, when available, and otherwise within the abstract and citations. This search revealed 917 potential studies. After reading the full text of each paper, 501 articles that were not relevant to IS success or its antecedents were excluded6. A closer review of the remaining 416 studies led to 345 studies being excluded due to one or more7 reasons. We excluded: (a) 126 studies (e.g., Lamb and Kling 2003) that provide a theoretical review or results of qualitative research; (b) 48 empirical studies (e.g., Bailey and Pearson 1983) that develop or refine the measure of one of the IS success constructs but do not investigate its relationship with any other construct in the model; (c) 45 studies (e.g., Baronas and Louis 1988) utilizing laboratory experiments8; (d) 117 studies (e.g., Sethi and King 1999) that reported statistics that could not be converted into Pearson correlations; (e) 17 studies (e.g., Guinan, Cooprider and Faraj 1998) that dealt with constructs not related to IS development, adoption, or success; (f) 26 studies (e.g., Torkzadeh and Dhillon 2002) that were not set in a traditional organizational context, i.e., which did not examine an organizational IS used by the organization's members; and (g) eight studies (e.g., Bajaj and Nidumolu 1998) that review the IS literature. Thus, 71 of the 416 studies were identified as being suitable for meta-analysis.

A search of the bibliographies of these 71 articles yielded 35 additional journal articles that were suitable for meta-analysis. We also contacted authors of research-in-progress papers presented at conferences to request unpublished papers. Moreover, we identified doctoral dissertations using the lists published by MIS Quarterly and Dissertation Abstracts International.

4. We searched for articles with one or more of several terms related to information systems (i.e., "information system," "information technology," "computer," "software," "hardware," and "computing technology") along with one or more of several terms related to IS success: "satisfaction," "quality," "use," "usefulness," and their variants (e.g., "usage").
This process identified 24 additional studies, leading to 130 studies in total. However, no study had examined the relationship between quality of ISD team and user training. In addition, only one study had examined the relationship of quality of ISD team with facilitating conditions and user experience. We therefore excluded quality of ISD team, and dropped the three studies that focused on relationships involving quality of ISD team. The remaining 127 studies are identified in Appendix A of the Online Supplement. To ensure independence of samples for the meta-analysis, we eliminated six studies that used the same data sets as another study in the sample. The meta-analysis is based on findings reported by the remaining 121 studies9.

5. The search was done using one or more of the aforementioned search terms related to information systems, along with one or more of the search terms related to each construct involved in the relationship. In a pilot run of the search procedure before adopting it, two authors independently searched for two relationships involving four constructs, and obtained identical results.

6. Two examples of the articles excluded due to this reason are Hirschheim and Lacity (2000), which examined IS outsourcing, and Sabherwal and King (1995), which examined the strategic IS decision-making process.

7. The sum of the numbers of studies excluded due to reasons (a)-(g) exceeds 345 because some studies were excluded due to more than one reason; e.g., 15 of the 46 studies excluded due to reason (c) would also have been excluded due to reason (d).

8. This is because it is considered inappropriate to combine laboratory experiments with studies that do not include manipulations, experimental controls, and random assignment of subjects (e.g., surveys) in a meta-analysis (Egger, Ebrahim, and Smith 2002; Parker et al. 2003; Roth, Bobko, and McFarland 2005).

Coding Studies for Meta-Analysis: For relationships involving IS success constructs, we coded the study only if the data had been provided by the end-users and not by other informants (e.g., Choe 1996). Moreover, for each bivariate relationship, we coded only one observation from a single study, with two exceptions. First, if the data was split into, and reported only for, independent subgroups (e.g., Sanders and Courtney 1985), the results for each individual group were coded and used in the analysis. Second, if a study was based on longitudinal data and separately reported the results for each period (e.g., Venkatesh 2000), we combined the various correlations as follows. If the study included an odd number of periods, we used the correlation and sample size for the mid-point of the sample. If the study included an even number of periods, we used the average of the correlations for the two middle periods and the smaller of the corresponding sample sizes. Thus, only one correlation was included for meta-analysis from each longitudinal study. No correlation used for the meta-analysis focused on the pre-implementation situation.

To enable consistency in coding, we designed a coding sheet and formulated coding rules (e.g., about the preference for multi-item measures over single-item measures). Initially, two authors independently coded data from the same studies. We obtained variable names, contexts, and the relevant statistics (including

correlations, reliabilities, and sample sizes) and categorized the variables into the constructs examined in this paper. Inter-rater reliability for the categorization was excellent (94.7 percent agreement, with Cohen's Kappa of 0.91), with disagreements being resolved through discussion. Subsequently, each of the same two authors independently coded about half of the remaining studies. Cumulatively, we coded 612 findings10, which are identified in Appendix B of the Online Supplement, for the 45 bivariate relationships among the constructs.

Accumulating Findings: We employed the following procedure to obtain the mean corrected effect sizes for each relationship in our model. We first obtained partially corrected correlations by correcting all reported effect sizes for measurement errors11. Next, to correct for sampling errors, we computed the weighted (by sample size) mean of the partially corrected effect sizes (Hunter and Schmidt 1990; Hedges and Olkin 1995). This procedure was repeated for all 45 relationships, producing a matrix of fully-corrected correlations involving the ten constructs. Table 2 contains the meta-analysis results, including the means and standard deviations obtained for each construct. It also shows, for each bivariate relationship, the corrected correlation, the cumulative sample size, the number of studies included in the meta-analysis, and the Failsafe N12.

Meta-analysis procedures typically favor studies with statistically significant results over studies with non-significant results (Cooper and Hedges 1994). We alleviated such publication bias to some extent by including unpublished doctoral dissertations and conference proceedings. We tested for publication bias using Failsafe Ns, funnel plots, and χ2 tests (the latter two are described in Appendix C of the Online Supplement). Failsafe N varied from 7 to 294 across the 45 relationships, with a median of 45.

9. They are distributed over time as follows: 1982 (1), 1984 (1), 1985 (3), 1986 (6), 1987 (3), 1988 (1), 1989 (5), 1990 (7), 1991 (7), 1992 (13), 1993 (6), 1994 (8), 1995 (7), 1996 (7), 1997 (9), 1998 (5), 1999 (6), 2000 (9), 2001 (6), 2002 (2), 2003 (6), and 2004 (3). They are distributed across sources as follows: ACM Computing Surveys (0), Academy of Management Journal (3), Academy of Management Proceedings (1), Administrative Science Quarterly (0), the Americas Conference on Information Systems (0), Communications of the ACM (1), Database (0), Decision Sciences (10), Decision Support Systems (6), IEEE Transactions on Engineering Management (2), Information and Management (29), Information Systems Research (2), International Conference on Information Systems Proceedings (6), Journal of Association for Information Systems (1), Journal of End-User Computing (1), Journal of Management Information Systems (15), Management Science (5), MIS Quarterly (17), Omega (7), Organization Science (0), unpublished doctoral dissertations (14), and Quality Management Journal (1).
The Failsafe Ns (and the funnel plots and χ2 tests) provide confidence in the robustness of the correlation matrix with respect to the possible exclusion of studies with non-significant results. We conducted additional analysis, described in Appendix C, to compare the correlation matrix based on the entire set of studies with the correlation matrix in which each 10

11 12

The coding yielded 630 findings, as indicated in Appendix B of the Online Supplement. Of these, 18 were excluded because: (i) the reliability for either variable in the bivariate relationship was below 0.60; or (ii) the sample size in the individual study was above 500 and the reliabilities for at least one variable was not reported. This was done by dividing the observed correlation by the square root of the product of the reliabilities of the two variables (Bamberger, Kluger and Suchard, 1999). For any relationship, Failsafe N indicates the number of additional studies (with null results) needed to render the results for that relationship non-significant, at a pre-specified level (p < 0.05 in this study) (Williams and Livingstone 1994). Page 12

correlation is based on a homogeneous sub-set of observations. This analysis did no find a statistically significant difference between the two correlation matrices. We therefore used the correlation matrix based on all studies in the subsequent LISREL analysis (as recommended by Brown and Peterson 1993).
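The accumulation rules above are straightforward to express in code. The following Python sketch is our illustrative paraphrase of the procedure, not the authors' software; the function names are ours, and the Failsafe N shown is the common Rosenthal-style formulation, which may differ in detail from the Williams and Livingstone (1994) version used in the paper.

```python
import math

def corrected_r(r, rel_x, rel_y):
    """Disattenuate an observed correlation for measurement error:
    divide by the square root of the product of the two variables'
    reliabilities (Hunter and Schmidt 1990)."""
    return r / math.sqrt(rel_x * rel_y)

def weighted_mean_r(rs, ns):
    """Correct for sampling error: sample-size-weighted mean of the
    partially corrected correlations across studies."""
    return sum(r * n for r, n in zip(rs, ns)) / sum(ns)

def longitudinal_r(rs, ns):
    """Pick one correlation per longitudinal study: the mid-period value
    when the number of periods is odd; otherwise the average of the two
    middle correlations with the smaller of their sample sizes."""
    k = len(rs)
    if k % 2 == 1:
        return rs[k // 2], ns[k // 2]
    lo, hi = k // 2 - 1, k // 2
    return (rs[lo] + rs[hi]) / 2, min(ns[lo], ns[hi])

def failsafe_n(zs, z_crit=1.645):
    """One common (Rosenthal-style) Failsafe N: additional null-result
    studies needed to push the combined one-tailed p above .05."""
    return (sum(zs) ** 2) / (z_crit ** 2) - len(zs)
```

For example, an observed correlation of 0.30 between two variables with reliabilities of 0.81 and 0.64 would be disattenuated to 0.30 / 0.72 ≈ 0.42 before being pooled across studies.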

3.2 Structural Equation Modeling

The individual corrected correlations in the matrix produced by the meta-analysis are based on different sample sizes, but LISREL requires a single sample size for the entire matrix. Prior literature recommends several possibilities, including using the minimum sample size (Tett and Meyer 1993) and the harmonic mean sample size (Viswesvaran and Ones 1998). Adopting the more conservative approach, we used the minimum sample size. We computed the mean and standard deviation of each research variable across all studies for which they were reported. We set the reliabilities of all variables to one and the error variances to zero, because the corrected correlations obtained from a meta-analysis are considered free of measurement errors (Hunter and Schmidt 1990).

Test of the Theoretical Model: The LISREL analysis began with the theoretical model, with one exogenous latent construct (top-management support) and nine endogenous latent constructs (the other nine constructs). The results for the structural model corresponding to the theoretical model, given in Figure 4, were unsatisfactory, as shown in Table 3. Five paths were deleted based on non-significant t-statistics (p < 0.05, two-tailed test). These paths are discussed below. H1c and H1d, concerning the effects of perceived usefulness on user satisfaction and of user satisfaction on system use, respectively, were not supported (t = 1.07 and 1.38, respectively). However, the three constructs involved depend on several common constructs (top-management support, user attitude, user participation, and system quality), and these relationships may have been found to be significant in earlier studies due to the exclusion of such common antecedents.
The non-support for H3i (t = 1.63), concerning the effect of user participation on system use, may be because this effect is partly mediated through perceived usefulness, and partly explained by both user participation and system use depending on top-management support. The non-support for H2d (t = 1.38), concerning the effect of user attitude on user participation, may be related to the diverse nature
of participation. Whereas H2d was based on a positive view of user participation, such participation may sometimes be critical. Users with a negative attitude towards ISs may participate more in the ISD project even though they are resisting, rather than contributing to, the project. Also, greater user participation may be due to IS developers’ efforts to convince users with a negative attitude towards ISs of the system’s usefulness (Markus 1983). Thus, the relationship between user attitude and user participation may be curvilinear rather than linear. Finally, the non-support for H6a (t = 1.27), representing the effect of top-management support on system quality, may be due to this relationship being mediated through user characteristics. When viewed along with the support for H6b, H6c, and H6d, this non-support for H6a indicates that top-management support for ISs may not improve the quality of the focal system, but it might create a situation where individuals make greater use of the system, perceive it as more useful, and feel more satisfied with it. The results for the emergent model are given in Figure 5. It includes three unexpected paths that were included based on theoretical considerations (Marcoulides and Hick 1993) and modification indices (MIs) of 10.0 or more (Denison, Hart and Kahn 1996). One unexpected path is from user attitude to system quality (MI = 44.20). A direct path for this relationship was not included due to the expectation that it would be mediated through user participation. Along with the non-significance of the path from user attitude to user participation, this direct path from user attitude to system quality indicates that with the same level of participation in the ISD project, users with a more positive attitude towards ISs would make more valuable contributions to the IS. 
By contrast, a similar level of participation by users with a more negative attitude towards ISs might reflect their resistance to the project or their negotiations with the developers (Markus 1983). The second unexpected path is from user experience to user participation (MI = 10.99). A direct path from user experience to user participation was not included due to the expectation that this effect would be mediated through user attitude and user training. However, the attitudinal component of such a mediated effect is absent from the emergent model due to the non-significance of the path from user attitude to user participation. This result suggests that user experience leads to greater user participation, over and above its effects through user training. Users with greater experience with ISs would have greater motivation, and may be more likely to be
invited by IS developers, to participate during ISD projects (e.g., Guimaraes et al. 2003), even if they have received a similar amount of training in ISs as less experienced users. The third unexpected path is from system quality to system use (MI = 10.80). Based on Rai et al. (2002), this path was not hypothesized, but it was included in the DeLone and McLean (1992) model, which considers system quality as affecting both system use and user satisfaction. This path is also consistent with TAM, UTAUT, and IDT, which consider perceived ease of use – an important aspect of system quality – to affect system use, either directly or indirectly through behavioral intention (Venkatesh et al. 2003). The resulting model has excellent fit, as shown in Table 3. The MIs for all the excluded paths are below 10.0 and all the included paths are significant (p < 0.05, two-tailed test).

Sensitivity Analysis: To further evaluate the robustness of our results, we tested the emergent model under several other conditions, as described in Appendix D of the Online Supplement. We applied the meta-analytic procedures to four different randomly selected sub-samples of our full sample of studies, and then conducted LISREL analyses using the resulting correlation matrices. Also, instead of setting the reliabilities of the variables to 1.00 (as was done in the testing of the theoretical model, because the meta-analysis procedures correct for reliabilities), we set the reliabilities of all the variables to 0.90 and the error variances to the variance of the scale multiplied by one minus the reliability (Jöreskog and Sörbom 2001). We also tested the emergent model using the harmonic mean sample size (1703) instead of the minimum sample size (326). These tests support the emergent model, as discussed in Appendix D.

Alternative Models: The proposed model represents one combination of expected relationships among the ten constructs we have examined.
The underlying theoretical rationale for this model has been presented earlier. However, it is possible to argue for alternative models that organize these ten constructs differently and posit a different set of relationships. To further assess the accuracy of the emergent model, we examined whether it would differ substantially if we started from a different initial model. We considered two alternative models as potential starting points.


The first alternative initial model was based on the DeLone and McLean (1992) model of system success instead of Rai et al.'s (2002) modified Seddon (1997) model. Thus, it included a path from system use to perceived usefulness instead of the correlation between these constructs, a path from user satisfaction to perceived usefulness instead of the opposite causal path, and a path from system quality to system use instead of a path from system quality to perceived usefulness. The paths from context-related and user-related constructs were the same as in the theoretical model shown in Figure 4. When this alternative model was used as the starting point, and the above approach was followed to drop and add paths, the emergent model was the same as the model shown in Figure 5, with one difference: it included the path from system use to perceived usefulness instead of the correlation between them. The fit statistics for this model are nearly identical to those for the emergent model shown in Figure 5. Thus, despite the differences in the two initial models, the emergent models are quite similar. The emergent model shown in Figure 5 has greater theoretical support than the alternative emergent model because Rai et al. built on DeLone and McLean's and Seddon's models, and DeLone and McLean's model included individual impact rather than perceived usefulness. The second alternative initial model involved four changes from the theoretical model in Figure 4. We included additional paths to the IS success constructs from facilitating conditions for ISs, which imply the presence of an organizational and technical infrastructure that helps in overcoming barriers to IS success (Venkatesh et al. 2003), and from user experience with ISs, which implies a greater ability to rely on personal experience in gauging the system's quality and usefulness.
Whereas Figure 4 includes only one of the eight possible paths from facilitating conditions or user experience to the four IS success constructs (i.e., from user experience to system use), this alternative model included three additional paths: from facilitating conditions to system use, and from user experience to user satisfaction and perceived usefulness. These paths have received inconsistent support in the prior literature. In addition, this alternative model excluded the path from user participation to perceived usefulness, as this effect may be mediated through system quality. When this model was used as the starting point, the emergent model was identical to that in Figure 5. Thus, the results of the sensitivity analysis as well as the alternative models support the emergent model given in Figure 5.
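The two adjustments used in the sensitivity analysis – substituting the harmonic mean sample size for the minimum, and deriving error variances from an assumed reliability of 0.90 – are simple to compute. A minimal sketch follows; the sample sizes shown are placeholders, not the paper's data (the actual per-relationship cumulative Ns appear in Table 2):

```python
from statistics import harmonic_mean

# Cumulative sample sizes per bivariate relationship (placeholders).
ns = [326, 900, 2500, 4100]

n_min = min(ns)             # conservative single N used in the main LISREL run
n_harm = harmonic_mean(ns)  # alternative N used in the sensitivity analysis

def error_variance(scale_variance, reliability=0.90):
    """Error variance implied by an assumed reliability
    (Joreskog and Sorbom 2001): scale variance * (1 - reliability)."""
    return scale_variance * (1.0 - reliability)
```

With reliabilities fixed at 1.0, as in the main analysis, the error variances are zero; the 0.90 setting checks that the emergent model is not an artifact of that idealization.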

4. Discussion

4.1 Limitations

Before we discuss our findings, it is important to note the study's limitations. First, the study is based on the reported statistics from a large number of prior empirical studies, which enables the testing of the comprehensive model. However, as with other meta-analyses, this assumes that it is meaningful to combine results based on different variables and measures across different studies. It also makes our results dependent on the quality of the prior studies. To minimize these effects, we took the precautions recommended for meta-analyses, but our results should nevertheless be viewed in the light of this inherent limitation of meta-analyses. Second, we had to exclude some constructs due to the low number of prior empirical studies examining them. For example, we could not include quality of ISD team, a contextual factor that might depend on top-management support for ISs and influence user participation and system quality. We also had to exclude behavioral intention, which is an important construct in technology adoption theories and might mediate the relationships between system use and its contextual and user-related antecedents. Third, we could not include moderating effects in the model. The exclusion of moderating effects was due to our inability to test them in a structural equation model without access to the original data from each study, but it also helped keep the scope of the study manageable. However, it precluded examining how the relationships in the model might vary across different situations. Consequently, the model best represents the relationships when the user population comprises a mix of users in terms of age and gender. Moreover, the model is more representative of the situation when system use is voluntary.13 Finally, for each longitudinal study that only reported results for the individual periods, we used the correlation from the mid-point of the study.
Thus, none of the correlations used in the meta-analysis pertained to the pre-implementation period. Moreover, due to the requirements of the meta-analysis, we had to exclude case studies and laboratory experiments. We also excluded studies that were not conducted in a traditional organizational context, such as when a system implemented by one organization is used by individuals at its customer organizations. Therefore, the emergent model best applies to the post-implementation situation and to information systems in organizational contexts.

13: In terms of the cumulative sample size, 19.5 percent of the sample clearly involved voluntary use, whereas only 4.9 percent clearly involved mandatory use. Most (74.2 percent) of the sample was from studies that did not explicitly state whether use was voluntary or mandatory, and the other 1.4 percent was from studies combining voluntary and mandatory use.

4.2 Findings

Our first research question directly relates to the motivation for the paper: would the previously found relationships among the four IS success constructs hold if the effects of contextual and user-related antecedents are included? The hypotheses related to this question were based on Rai et al.'s (2002) model of system success. Our results provide mixed support for these hypotheses. They support the expected paths from system quality to perceived usefulness (H1a) and user satisfaction (H1b), and the correlation between perceived usefulness and system use (H1e), but not the paths from perceived usefulness to user satisfaction (H1c) and from user satisfaction to system use (H1d). Also, as discussed above, one path – from system quality to system use – which was not included in Rai et al.'s model, was added. Thus, the model of IS success needs to be modified when contextual and user-related antecedents of IS success are incorporated. Our second research question pertains to the constructs affecting IS success. Proposition 3 and the associated hypotheses (H3a–H3i) represent the expected effects of user-related constructs, while Proposition 6 and the associated hypotheses (H6a–H6d) represent the proposed effects of top-management support. Eleven of these 13 hypotheses were supported. The lack of support for the hypothesized effect of user participation on system use (H3i) might be because this effect is mediated through system quality and perceived usefulness. The lack of support for the hypothesized effect of top-management support on system quality (H6a) may be understood in the light of the dependence of system quality on user participation, user attitude, and user training, each of which directly or indirectly (through facilitating conditions) depends on top-management support. Thus, top-management support does not directly lead to better quality systems, but it enables circumstances in which better systems are developed.
Also, one unexpected significant path, from user attitude to system quality, was found, as discussed above.


The results support three of the four hypothesized relationships among user-related constructs (H2a–H2d), the hypothesis about the relationship between the two aspects of the context (H4), and all five hypotheses about the effects of the context on user-related constructs (H5a–H5e). The non-supported hypothesis about the effect of user attitude on user participation (H2d) and the unexpected significant path from user experience to user participation may be interrelated. Along with the support for the paths from top-management support to user participation (H5a) and from user training to user participation (H2c), they suggest that even if users do not have a positive attitude towards ISs, they participate more in ISD if they are experienced and trained in ISs, and perceive top managers as supporting ISs.

4.3 Implications for Research

The results of the study have implications for the model of IS success. They question some commonly believed relationships, such as those between perceived usefulness and user satisfaction, and between user satisfaction and system use. User satisfaction was found to depend on system quality but not on perceived usefulness, and system use was found to depend on system quality but not on user satisfaction. These results indicate that some of the constructs representing IS success, which do not affect each other but depend on some common factors, might appear to affect each other if those common factors are excluded. Specifically, the relationships between user satisfaction and system use, and between perceived usefulness and user satisfaction, may depend on other common factors; this possibility has been recognized in prior research (Rai et al. 2002). User attitude towards ISs in general is one construct that might explain the non-significant results concerning the effect of user satisfaction on system use and the effect of perceived usefulness on user satisfaction. Whereas user satisfaction is "typically viewed as the attitude a user has towards an information system" (Wixom and Todd 2005, p. 87), and reflects the user's attitude towards the specific system, user attitude reflects the user's attitude towards information systems in general. DeLone and McLean (1992) note that studies involving user satisfaction as the dependent variable should also include user attitude towards ISs in general because user satisfaction and user attitude towards ISs may be interrelated. We included user attitude towards ISs in this study, and found it to affect user satisfaction and system use, as well as system
quality and perceived usefulness. However, the inclusion of user attitude towards ISs – along with system quality, which affects perceived usefulness, user satisfaction, and system use – might be the reason we did not find perceived usefulness to affect user satisfaction or user satisfaction to affect system use. Future research on IS success models should therefore include user attitude towards ISs in general, at least as a control variable. Results of this study are generally consistent with the research on technology adoption and use, including the theoretical and empirical work on TRA, TPB, TAM, IDT, SCT, and UTAUT, although we were unable to include behavioral intention, a key construct mediating effects on use in these models, or test for moderating effects. Perceived usefulness and system quality, which incorporates perceived ease of use, affect system use, which is consistent with technology adoption models. For example, in UTAUT, performance expectancy and effort expectancy (which relate to perceived usefulness and system quality, respectively) are argued to affect behavioral intention, which in turn affects use. The non-significant effect of user satisfaction on system use (which is contrary to IS success models, as noted above) is consistent with technology adoption and use models (Wixom and Todd 2005), which have found user satisfaction to be a weak predictor of system use (Davis et al. 1989), especially when perceived usefulness and ease of use (which is incorporated within system quality) are included (Venkatesh et al. 2003). Finally, the non-significant direct effect of facilitating conditions for ISs on system use is also consistent with prior research in this area (Mawhinney and Lederer 1990). In addition to the above consistency with prior research on technology adoption and use, the paper also includes some results that may influence future research in this area. In discussing the UTAUT model, Venkatesh et al. (2003, p. 
470) call for incorporating "causal antecedents of the constructs used within the model," and this study provides insights into some such antecedents. More specifically, it indicates that four aspects related to ISs in general, rather than the specific IS, may affect system quality (which incorporates perceived ease of use), perceived usefulness, and system use. These include user attitude towards ISs (which affects system quality, perceived usefulness, and system use), top-management support for ISs (which influences perceived usefulness and system use), user training in ISs (which affects system quality), and user experience with ISs (which impacts system use). Prior literature on technology adoption and use has considered
some constructs related to these four constructs, but they have pertained to the specific system rather than ISs in general. Some examples are user satisfaction (which, as noted above, reflects user attitude towards the specific IS), social influence factors (which include top-management support for the specific IS), self-efficacy (which is somewhat related to user training about the specific IS), and user experience with the specific system (which has generally been used as a moderator). The results indicate that it may be useful for future research on technology adoption and use to incorporate these four constructs related to ISs in general as antecedents of constructs related to ease of use, perceived usefulness, and use of a specific system. The theoretical model also included user participation, a construct that has been emphasized in the ISD literature. Prior literature on the effect of user participation on system success has generally focused on system use (Ives and Olson 1984; Hartwick and Barki 1994) or system quality (Ives and Olson 1984; Guimaraes et al. 2003). The results support the expected effect of user participation on system quality, but not its expected effect on system use. Instead, they indicate that user participation leads to better quality systems, which are perceived to be more useful. This result seems intuitively appealing. Moreover, the significant effect of user attitude towards ISs on system quality, along with the non-significant effect of user attitude towards ISs on user participation, suggests that users with a more favorable attitude towards ISs in general would make more valuable contributions to ISD even with the same amount of participation. Thus, greater use of the system results from an amalgam of several factors and not from greater user participation alone. These findings should be investigated in future research on user participation in IS development.
This paper implicitly assumes that the relationships included in the theoretical model are stable across different kinds of organizations, systems, and users. This approach was needed for reasons identified in the limitations, but it differs from the contingency approach adopted by Venkatesh et al. (2003) in developing UTAUT, which used a parsimonious model with ten constructs: four determinants, four moderators, behavioral intention, and use. Venkatesh et al.'s use of a contingency approach provides valuable insights into the moderators. However, we were able to include a more comprehensive model of IS success, and to test the relationships among all ten constructs instead of focusing on the various direct and moderating effects on
behavioral intention and system usage. Future research can build on the emergent model in this paper by developing a contingency-theoretic extension that views the relationships as varying under different conditions. Possible moderators include voluntariness of IS adoption and user characteristics such as age and gender. Voluntariness may be the more important moderator, because use of a given IS is usually either voluntary or mandatory for all users, whereas users typically span a mix of ages and genders. Characteristics of the IS (e.g., its size and complexity) may also moderate the underlying relationships by influencing the resources needed and available for developing and using it.

4.4 Implications for Practice

The results of this study also have some potentially useful implications for practice. First, the results related to the IS success constructs indicate that system quality and perceived usefulness affect system use, but user satisfaction does not. Therefore, system developers and managers should strive to enhance system quality (and ease of use, which it incorporates) and perceived usefulness rather than user satisfaction. They should direct their attention toward factors leading to better quality systems rather than factors that lead to increased user satisfaction with the system. Second, the paper indicates that system quality, which affects all the other aspects of system success, is improved through user training in ISs, a more favorable user attitude towards ISs, and greater user participation in the development of the specific system. While the importance of user participation has been recognized in the prior literature, its effect may be confined to the specific system being developed. In contrast, developing a more favorable user attitude towards ISs in general14 and providing users with IS training may be better long-term strategies, as they affect the quality of future ISs and also directly or indirectly lead to greater system use. Managers should provide users with opportunities to receive in-house or vendor-based IS training, which may also improve user attitude towards ISs. User attitude towards ISs may also be improved through enhanced facilitating conditions for ISs and through communication with other individuals who are favorably inclined towards ISs. Moreover, user attitude may improve and user training may accumulate over time, making user participation in ISD less important.

14: However, improved user attitude towards ISs in general may be a valuable side effect of user participation in a specific project.

Third, the paper reiterates the importance of top-management support and facilitating conditions. Top-management support motivates greater user participation and leads to greater IS success in terms of perceived usefulness, user satisfaction, and system use. Facilitating conditions for ISs (e.g., technical support, help desks, and online user assistance) do not directly affect IS success, but they provide the necessary context for users to gain experience with ISs, receive IS training, and develop a better attitude towards ISs. Experience, training, and attitude, in turn, directly or indirectly affect IS success. Thus, the paper indicates that the quality of the specific system and four constructs related to ISs in general – user training, user attitude, top-management support, and facilitating conditions – are critical to IS success. If IS developers and managers focus on these aspects, user participation in the development of the specific system, user satisfaction, perceived usefulness, and system use would improve as well.

4.5 Conclusion

This study has developed and tested a comprehensive model of IS success and its contextual and user-related determinants. A rigorous test of this model using a combination of meta-analysis and structural equation modeling provides insights into the ways in which the constructs representing IS success affect each other, especially when they are viewed in the light of their determinants. Based on extensive prior research, this empirical investigation helps advance knowledge of IS success and its determinants beyond the prior models of IS success (DeLone and McLean 1992; Seddon 1997; Rai et al. 2002), and also contributes to research on technology adoption and use (e.g., Davis et al. 1989; Venkatesh et al. 2003). Thus, this paper provides considerable support for the arguments made in prior IS literature but also raises some interesting questions for future research on IS success and its determinants.

References

Agarwal, R., J. Prasad. 2000. "A Field Study of the Adoption of Software Process Innovations by Information Systems Professionals." IEEE Transactions on Engineering Management 47(3) 295-308.

Ajzen, I. 1991. "The Theory of Planned Behavior." Organizational Behavior and Human Decision Processes 50(2) 179-211.


Alavi, M., E.A. Joachimsthaler. 1992. "Revisiting DSS Implementation Research: A Meta-Analysis of the Literature and Suggestions for Researchers." MIS Quarterly 16 95-116.

Ang, J., P.H. Soh. 1997. "User Information Satisfaction, Job Satisfaction, and Computer Background: An Exploratory Study." Information & Management 32(5) 255-266.

Bailey, J.E., S.W. Pearson. 1983. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction." Management Science 29(5) 530-545.

Bajaj, A.S., R. Nidumolu. 1998. "A Feedback Model to Understand Information System Usage." Information & Management 33(4) 213.

Bamberger, P.A., A.N. Kluger, R. Suchard. 1999. "The Antecedents and Consequences of Union Commitment: A Meta-analysis." Academy of Management Journal 42(3) 304-318.

Bandura, A. 1986. Social Foundations of Thought and Action: A Social Cognitive Theory. Prentice Hall, Englewood Cliffs, NJ.

Barki, H., J. Hartwick. 1994. "Measuring User Participation, User Involvement, and User Attitude." MIS Quarterly 18(1) 59-82.

Baronas, A.K., M.R. Louis. 1988. "Restoring a Sense of Control during Implementation: How User Involvement Leads to System Acceptance." MIS Quarterly 12(1) 111-124.

Baroudi, J.J., M.H. Olson, B. Ives. 1986. "An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction." Communications of the ACM 29(3) 232-238.

Bentler, P.M., D.G. Bonett. 1980. "Significance Tests and Goodness of Fit in the Analysis of Covariance Structures." Psychological Bulletin 88(3) 588-606.

Brown, S.P., R.A. Peterson. 1993. "Antecedents and Consequences of Salesperson Job Satisfaction: Meta-analysis and Assessment of Causal Effects." Journal of Marketing Research 30(1) 63-77.

Browne, M.W., R. Cudeck. 1989. "Single Sample Cross-Validation Indices for Covariance Structures." Multivariate Behavioral Research 24(4) 445-455.

Carmines, E., J. McIver. 1981. "Analyzing Models with Unobserved Variables: Analysis of Covariance Structures." In Social Measurement: Current Issues, G.W. Bohrnstedt, E.F. Borgatta (Eds.), Sage Publications, Beverly Hills, CA, 65-115.

Choe, J. 1996. "The Relationships among Performance of Accounting Information Systems, Influence Factors, and Evolution Level of Information Systems." Journal of Management Information Systems 12(4) 215-239.

Compeau, D.R., C.A. Higgins. 1995. "Application of Social Cognitive Theory to Training for Computer Skills." Information Systems Research 6(2) 118-143.

Cooper, H., L.V. Hedges. 1994. The Handbook of Research Synthesis. Russell Sage Foundation, New York, NY.

Davis, F.D. 1989. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology." MIS Quarterly 13(3) 319-339.

____, R.P. Bagozzi, P.R. Warshaw. 1989. "User Acceptance of Computer Technology: A Comparison of Two Models." Management Science 35(8) 982-1001.


DeLone, W.H. 1988. "Determinants of Success for Computer Usage in Small Business." MIS Quarterly 12(1) 51-61.
____, E.R. McLean. 1992. "Information Systems Success: The Quest for the Dependent Variable." Information Systems Research 3(1) 60-95.
Denison, D.R., S.L. Hart, J.A. Kahn. 1996. "From Chimneys to Cross-functional Teams: Developing and Validating a Diagnostic Model." Academy of Management Journal 39(4) 1005-1024.
Doll, W.J. 1985. "Avenues for Top Management Involvement in Successful MIS Development." MIS Quarterly 9(1) 17-35.
Egger, M., S.H. Ebrahim, G.D. Smith. 2002. "Where Now for Meta-analysis?" International Journal of Epidemiology 31(1) 1-5.
Fishbein, M., I. Ajzen. 1975. Belief, Attitude, Intention, and Behavior. Addison-Wesley, Reading, MA.
Guimaraes, T., M. Igbaria. 1997. "Client/server System Success: Exploring the Human Side." Decision Sciences 28(4) 851-876.
____, D.S. Staples, J.D. McKeen. 2003. "Empirically Testing some Main User-Related Factors for Systems Development Quality." The Quality Management Journal 10(4) 39-54.
Guinan, P.J., J.G. Cooprider, S. Faraj. 1998. "Enabling Software Development Team Performance during Requirements Definition: A Behavioral versus Technical Approach." Information Systems Research 9(2) 101-125.
Hartwick, J., H. Barki. 1994. "Explaining the Role of User Participation in Information System Use." Management Science 40(4) 440-465.
Hedges, L.V., I. Olkin. 1985. Statistical Methods for Meta-analysis. Academic Press, Orlando, FL.
Hirschheim, R., M. Lacity. 2000. "The Myths and Realities of Information Technology Insourcing." Communications of the ACM 43(2) 99.
Hunter, J.E., F.L. Schmidt. 1990. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Sage Publications, London.
Igbaria, M. 1990. "End-User Computing Effectiveness: A Structural Equation Model." Omega 18(6) 637-652.
____, T. Guimaraes, B. Davis. 1995. "Testing the Determinants of Microcomputer Usage via a Structural Equation Model." Journal of Management Information Systems 11(4) 87-104.
Ives, B., M.H. Olson. 1984. "User Involvement and MIS Success: A Review of Research." Management Science 30(5) 586-603.
Jarvenpaa, S.L., B. Ives. 1991. "Executive Involvement and Participation in the Management of Information Technology." MIS Quarterly 15(2) 205-227.
Jöreskog, K.G., D. Sörbom. 2001. LISREL 8: User's Reference Guide, 2nd ed. Scientific Software International, Chicago, IL.
Lamb, R., R. Kling. 2003. "Reconceptualizing Users as Social Actors in Information Systems Research." MIS Quarterly 27(2) 197.


Lee, S.M., Y.R. Kim, J. Lee. 1995. "An Empirical Study of the Relationships among End-User Information Systems Acceptance, Training, and Effectiveness." Journal of Management Information Systems 12(2) 189-202.
Marcoulides, G.A., R.H. Heck. 1993. "Organizational Culture and Performance: Proposing and Testing a Model." Organization Science 4(2) 209-225.
Markus, L.M. 1983. "Power, Politics, and MIS Implementation." Communications of the ACM 26(6) 430-444.
Mawhinney, C.H., A.L. Lederer. 1990. "A Study of Personal Computer Utilization by Managers." Information & Management 18(5) 243-253.
Melville, N., K. Kraemer, V. Gurbaxani. 2004. "Review: Information Technology and Organizational Performance: An Integrative Model of IT Business Value." MIS Quarterly 28(2) 283-322.
Moore, G., I. Benbasat. 1991. "Development of an Instrument to Measure Perceptions of Adopting an Information Technology Innovation." Information Systems Research 2(3) 192-222.
Parker, C.P., B.B. Baltes, S.A. Young, J.W. Huff, R.A. Altmann, H.A. Lacost, J.E. Roberts. 2003. "Relationships between Psychological Climate Perceptions and Work Outcomes: A Meta-analytic Review." Journal of Organizational Behavior 24(4) 389-416.
Purvis, R.L., V. Sambamurthy, R.W. Zmud. 2001. "The Assimilation of Knowledge Platforms in Organizations: An Empirical Investigation." Organization Science 12(2) 117.
Rai, A., S.S. Lang, R.B. Welker. 2002. "Assessing the Validity of IS Success Models: An Empirical Test and Theoretical Analysis." Information Systems Research 13(1) 50-69.
Rogers, E.M. 1983. Diffusion of Innovations. The Free Press, New York, NY.
Roth, P.L., P. Bobko, L.A. McFarland. 2005. "A Meta-analysis of Work Sample Test Validity: Updating and Integrating Some Classic Literature." Personnel Psychology 58(4) 1009-1037.
Sabherwal, R., W.R. King. 1995. "An Empirical Taxonomy of the Decision-making Processes concerning Strategic Applications of Information Systems." Journal of Management Information Systems 11(4) 177-214.
Sanders, G.L., J.F. Courtney. 1985. "A Field Study of Organizational Factors Influencing DSS Success." MIS Quarterly 9(1) 77-93.
Seddon, P.B. 1997. "A Respecification and Extension of the DeLone and McLean Model of IS Success." Information Systems Research 8(3) 240-254.
Sethi, V., R.C. King. 1999. "Nonlinear and Noncompensatory Models in Information Satisfaction Measurement." Information Systems Research 10(1) 87-96.
Sharma, R., P. Yetton. 2003. "The Contingent Effects of Management Support and Task Interdependence on Successful Information Systems Implementation." MIS Quarterly 27(4) 533.
Taylor, S., P. Todd. 1995. "Understanding Information Technology Usage: A Test of Competing Models." Information Systems Research 6(2) 144-176.
Tett, R.P., J.P. Meyer. 1993. "Job Satisfaction, Organizational Commitment, Turnover Intention, and Turnover: Path Analyses Based on Meta-analytic Findings." Personnel Psychology 46(2) 259-293.
Thompson, R.L., C.A. Higgins, J.M. Howell. 1991. "Personal Computing: Toward a Conceptual Model of Utilization." MIS Quarterly 15(1) 125-143.

Thong, J.Y.L., C. Yap, K.S. Raman. 1996. "Top Management Support, External Expertise and Information Systems Implementation in Small Businesses." Information Systems Research 7(2) 248-267.
Torkzadeh, G., G. Dhillon. 2002. "Measuring Factors that Influence the Success of Internet Commerce." Information Systems Research 13(2) 187.
Triandis, H.C. 1971. Attitude and Attitude Change. John Wiley and Sons, New York, NY.
Venkatesh, V. 2000. "Determinants of Perceived Ease of Use: Integrating Control, Intrinsic Motivation, and Emotion into the Technology Acceptance Model." Information Systems Research 11(4) 342-365.
____, M.G. Morris, G.B. Davis, F.D. Davis. 2003. "User Acceptance of Information Technology: Toward a Unified View." MIS Quarterly 27(3) 425-478.
Viswesvaran, C., D.S. Ones. 1998. "Theory Testing: Combining Psychometric Meta-Analysis and Structural Equation Modeling." Personnel Psychology 48 865-885.
Vroom, V. 1964. Work and Motivation. John Wiley and Sons, New York, NY.
Williams, C.R., L.P. Livingstone. 1994. "Another Look at the Relationship between Performance and Voluntary Turnover." Academy of Management Journal 37(2) 269-298.
Wixom, B., P.A. Todd. 2005. "A Theoretical Integration of User Satisfaction and Technology Acceptance." Information Systems Research 16(1) 85-102.
Yap, C.S. 1989. "Issues in Managing Information Technology." The Journal of the Operational Research Society 40(7) 649-659.


Table 1: Study Constructs

Top-management Support for ISs
  Definition: Top-management support for, and favorable attitude toward, ISs in general.
  Related constructs in theoretical models: Top-management support (ISD literature); Subjective norm (TRA, TAM, TPB); Social influence (UTAUT).
  Related variables in empirical research (number of studies): Management Support (14); Top-management Support (11); Others (12).

Facilitating Conditions for ISs
  Definition: The processes and resources that facilitate an individual's ability to utilize ISs.
  Related constructs in theoretical models: Facilitating conditions (UTAUT); Perceived behavioral control (TPB); Compatibility (IDT).
  Related variables in empirical research (number of studies): Facilitating Conditions (7); End-user Computing/Internal Computing/Organizational/Technical Support (9); Information Center (6); Others (13).

User Experience with ISs
  Definition: The duration or level of an individual's prior use of computers and ISs.
  Related constructs in theoretical models: Experience (UTAUT).
  Related variables in empirical research (number of studies): Computer Experience (12); Experience (6); DSS/EIS/Email/CASE/PC/Microcomputer Experience (6); Others (8).

User Training in ISs
  Definition: The extent to which an individual has been trained about ISs, through courses, training, manuals, and so on.
  Related constructs in theoretical models: Indirectly related to self-efficacy (SCT).
  Related variables in empirical research (number of studies): Training (13); User Training (6); End-user Training (2); Others (10).

User Attitude towards ISs
  Definition: A user's affect, or liking, for ISs and for using them.
  Related constructs in theoretical models: Attitude towards behavior (TRA, TPB); Attitude towards using technology (UTAUT); Affect (SCT); Anxiety (SCT).
  Related variables in empirical research (number of studies): Attitudes (8); Attitude (5); Affect (4); Attitude towards Computers (4) or EUC (2); Anxiety (2); Computer Anxiety (2); Others (11).

User Participation in the Development of the Specific IS
  Definition: The tasks and behaviors that users perform during the ISD process, or the users' psychological state of involvement in the project.
  Related constructs in theoretical models: User participation (ISD literature); User involvement (ISD literature).
  Related variables in empirical research (number of studies): User Involvement (10); User Participation (10); Involvement (2); Participation (2); Others (12).

System Quality
  Definition: The quality of the system, in terms of reliability, ease of use, and response time.
  Related constructs in theoretical models: System quality (IS success models); Perceived ease of use (TAM); Ease of use (IDT); Effort expectancy (UTAUT).
  Related variables in empirical research (number of studies): Ease of Use (16); System Quality (14); Perceived Ease of Use (10); Shell/DSS/System Characteristics (4); Others (11).

Perceived Usefulness
  Definition: The degree to which an individual believes that using the system enhances his or her productivity and job performance.
  Related constructs in theoretical models: Perceived usefulness (TAM, IS success models); Performance expectancy (UTAUT).
  Related variables in empirical research (number of studies): Usefulness (6); Perceived Usefulness (27); Impact on End-users' Jobs (4); Impact (3); Perceived DSS Benefits (2); Others (14).

User Satisfaction
  Definition: The extent to which the user believes that the system meets his or her information requirements.
  Related constructs in theoretical models: User satisfaction (IS success models).
  Related variables in empirical research (number of studies): User Satisfaction (28); User Information Satisfaction (11); End-User Satisfaction or Computing Satisfaction (8); Overall Satisfaction (5); Others (11).

System Use
  Definition: The individual's behavior of, or effort put into, using the system.
  Related constructs in theoretical models: System use (IS success models); Usage (TAM, UTAUT).
  Related variables in empirical research (number of studies): System Use, Usage, or Utilization (22); Use, Usage, or Utilization (17); Frequency of Use (8); Others (15).

Table 2: Results of Meta-Analysis
Number of studies and (failsafe N) are given above the diagonal. Mean corrected correlation coefficient and (cumulative sample size^a) are given below the diagonal.

Constructs: Top-management Support for ISs (TMS); Facilitating Conditions for ISs (FC); User Experience with ISs (EXP); User Training in ISs (TRG); User Attitude towards ISs (ATT); User Participation in the development of the specific IS (PART); System Quality (SQ); Perceived Usefulness (PU); User Satisfaction (US); System Use (SU).

       Mean (S.D.)  TMS          FC           EXP          TRG          ATT          PART         SQ           PU           US           SU
TMS    4.64 (1.30)  --           16 (134)     11 (22)      14 (17)      9 (31)       6 (14)       14 (36)      20 (116)     14 (81)      20 (76)
FC     4.19 (1.61)  0.47 (3002)  --           10 (28)      9 (25)       11 (48)      3 (7)        8 (24)       15 (45)      9 (45)       16 (42)
EXP    4.20 (2.11)  0.15 (2592)  0.19 (2498)  --           13 (57)      10 (58)      3 (14)       8 (42)       16 (77)      15 (36)      19 (114)
TRG    3.64 (1.17)  0.11 (2369)  0.19 (2243)  0.27 (2586)  --           9 (23)       3 (20)       10 (40)      11 (40)      14 (39)      19 (72)
ATT    5.37 (1.32)  0.22 (2036)  0.27 (2797)  0.34 (2665)  0.18 (1991)  --           6 (13)       12 (91)      17 (153)     10 (64)      17 (92)
PART   3.22 (1.70)  0.17 (823)   0.16 (326)   0.28 (409)   0.38 (409)   0.16 (777)   --           11 (42)      12 (70)      28 (179)     5 (19)
SQ     4.53 (1.42)  0.18 (2917)  0.20 (1627)  0.31 (1923)  0.25 (1722)  0.43 (2512)  0.24 (1531)  --           35 (294)     12 (101)     21 (134)
PU     4.74 (1.20)  0.34 (4555)  0.20 (3184)  0.29 (3852)  0.23 (2119)  0.50 (3599)  0.34 (1897)  0.47 (7367)  --           17 (122)     30 (246)
US     4.83 (1.15)  0.34 (1829)  0.30 (2652)  0.17 (2418)  0.19 (2509)  0.37 (2045)  0.37 (4486)  0.47 (2499)  0.41 (3312)  --           24 (120)
SU     4.25 (2.23)  0.24 (4096)  0.18 (3411)  0.35 (5056)  0.24 (3495)  0.32 (4077)  0.24 (669)   0.37 (4022)  0.46 (6508)  0.30 (4253)  --
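The legend of Table 2 describes pooling correlations across studies with sample-size weights and cumulating the sample sizes. The sketch below is illustrative only: the values are hypothetical, and it assumes the basic sample-size-weighted scheme; the paper's corrected correlations also adjust for statistical artifacts (e.g., measurement error), which is omitted here.

```python
# Illustrative sketch of meta-analytic pooling for one bivariate
# relationship: each study contributes a correlation r and a sample
# size n; the pooled estimate weights r by n, and the cumulative
# sample size is the simple sum (cf. note a of Table 2).

def pool_correlations(findings):
    """findings: list of (r, n) pairs from individual studies."""
    total_n = sum(n for _, n in findings)
    mean_r = sum(r * n for r, n in findings) / total_n
    return mean_r, total_n

# Hypothetical findings, not values from the study:
findings = [(0.40, 150), (0.55, 90), (0.30, 260)]
mean_r, total_n = pool_correlations(findings)
```

Larger studies therefore pull the pooled estimate toward their own correlations, which is why the cumulative sample sizes are reported alongside each mean corrected correlation.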

Table 3: Fit Results for Theoretical and Emergent Models

Criterion                                          Theoretical Model   Emergent Model   Recommended Values   Reference
χ2 to degrees of freedom ratio                     4.73                1.56             < 3.0                Carmines and McIver (1981)
Goodness of fit index (GFI)                        0.95                0.98             > 0.90               Bentler and Bonett (1980)
Adjusted goodness of fit index (AGFI)              0.85                0.95             > 0.90               Bentler and Bonett (1980)
Normed fit index (NFI)                             0.93                0.98             > 0.90               Bentler and Bonett (1980)
Root mean square error of approximation (RMSEA)    0.09                0.04             < 0.08               Browne and Cudeck (1993)
Standardized root mean square residual (SRMR)      0.09                0.05             < 0.08               Browne and Cudeck (1993)
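Two of the criteria in Table 3 follow directly from a model's χ2, degrees of freedom, and sample size. A minimal sketch, using hypothetical inputs rather than the LISREL results reported in the table:

```python
import math

# Sketch of two common SEM fit heuristics: the chi-square-to-df
# ratio (conventionally acceptable below 3.0) and RMSEA
# (conventionally acceptable below 0.08).

def chi2_df_ratio(chi2, df):
    # Smaller is better; penalizes misfit relative to model size.
    return chi2 / df

def rmsea(chi2, df, n):
    # Root mean square error of approximation; a negative
    # numerator (chi2 < df) is truncated to zero.
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical model, not the paper's analysis:
ratio = chi2_df_ratio(62.4, 40)   # within the < 3.0 guideline
fit = rmsea(62.4, 40, 300)
```

Indices such as GFI, AGFI, NFI, and SRMR additionally require the fitted and observed covariance matrices (or a baseline model), so they cannot be recovered from χ2 and df alone.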

a: Each sample size in Table 2 is the sum of sample sizes from the individual studies that reported the correlation for that bivariate relationship.
