Information & Management 39 (2002) 503–511

A validation of the end-user computing satisfaction instrument in Taiwan

Roger McHaney a,*, Ross Hightower b, John Pearson c

a Department of Management, College of Business Administration, Kansas State University, Manhattan, KS 66506, USA
b College of Business Administration, University of Central Florida, P.O. Box 161991, Orlando, FL 32816-1991, USA
c Department of Management, College of Business and Administration, Southern Illinois University at Carbondale, Carbondale, IL 62901, USA

Received in revised form 8 February 2001; accepted 13 May 2001

Abstract

This article focuses on the psychometric stability of the end-user computing satisfaction (EUCS) instrument by Doll and Torkzadeh when applied to Taiwanese end-users of typical business software applications. Using a survey of 342 users, this research provides evidence that the instrument is a valid and reliable measure in Taiwanese applications. Given this evidence, managers and software product developers can confidently apply the instrument in the investigation of competing tools, features, and technologies in Taiwan. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: End-user computing satisfaction; International MIS; User satisfaction

1. Introduction

Management information systems (MIS) research has been criticized for its lack of standardization, failure to develop well-defined outcome measures, and lax methodological rigor [8,17,18,32]. Recent business trends, such as globalization, internationalization, and the widespread use of the Internet, have created an increased need for dependable ways to measure success of, and satisfaction with, an organization's IS. The search for an appropriate dependent variable to measure user satisfaction has been largely academic and has resulted in a bewildering array of instrument choices and considerations. To compound the problem, managers and decision-makers face increasingly complex sets of parameters to consider in the development and implementation of IS [4,30]. For computer-based applications to be used effectively in the global business environment [20,31], a better understanding of the factors that influence a successful implementation needs to be developed. Underlying factors affecting user satisfaction have been investigated empirically in a number of different ways, and yet researchers have yet to quantify a dependent variable representing system success [1–3,9,12,21,22]. Although much of the research has been academic in nature, its intent has been to provide a valid and reliable surrogate for measuring the level of user satisfaction with an organization's IS. One method of accomplishing this goal is to extend the generalizability of an existing instrument.



This approach has been stressed by various IS researchers as necessary and appropriate until a comprehensive dependent variable has been developed and universally accepted. Although a comprehensive instrument for measuring user satisfaction does not yet exist, accepted surrogate measures are presently in use. One such measure is the Doll and Torkzadeh [10] end-user computing satisfaction (EUCS) instrument. Past research has demonstrated instrument validity (content validity, construct validity, and reliability) as well as internal validity and statistical conclusion validity [11,14–16,27,28]. The results of these studies add evidence to the argument that external validity and generalizability are present; however, to date, this instrument has not been tested in a Taiwanese setting. The purpose of the present study is to determine whether the Doll and Torkzadeh instrument maintains psychometric stability in cross-cultural settings. In past studies, the generalizability of EUCS and other instruments has been questioned by their developers, and researchers have suggested that these instruments be tested prior to application in new areas. If evidence supports psychometric stability, managers and software product developers can confidently apply the instrument in the investigation of competing tools, features, and technologies. This investigation focuses on establishing the construct validity, internal validity, and reliability of the Doll and Torkzadeh EUCS instrument in a cross-cultural setting. If the hypothesized psychometric properties of this instrument are consistent with prior studies, use of the EUCS instrument can be confidently extended to Taiwanese applications.

2. End-user computing satisfaction

The implementation of IS technology has been an uncertain process. Some systems are successful and others are not. While it may seem convenient to categorize outcomes in this manner, it is not that simple. Success cannot always be characterized as a 'yes' or 'no' proposition. A particular system may be viewed as a success by some stakeholders and as a failure by others. Because most systems can be measured from a variety of perspectives and may serve an array of needs within an organization, it becomes difficult to classify a system in a binomial fashion.

Instead, it becomes necessary to classify it on a continuum of values ranging from failure to success. In the absence of clear, objective measures, researchers and practitioners have turned to perceptual surrogates. User satisfaction, as an indicator of system success, probably originated with Cyert and March [7]. They suggested that an IS that meets the needs of the user reinforces satisfaction with the system. The measure assumes that the user will be dissatisfied with the system if it does not provide information in a satisfactory form. The technical quality of a system is irrelevant, since a technically superior system would still not be considered successful if it did not meet the needs of the users.

Literally hundreds of articles have been published on user satisfaction or closely related constructs over the years. Although some have questioned the validity of user satisfaction as a measure of system success [29], the preponderance of the evidence suggests that user satisfaction is a useful measure of system success [23]. To identify the determinants of success, a researcher must first be able to operationalize a measure. Many empirical studies in the area of information systems have been concerned with this task [2,3]. DeLone and McLean present an organized view of this quest for a dependent variable in system success: "Different researchers have addressed different aspects of success, making comparisons difficult and building a cumulative tradition for I/S research similarly elusive". This sentiment is also reflected in a statement by Jarvenpaa, Dickson and DeSanctis: "Another factor that has contributed to weak and inconclusive results is the use of a great number of measuring instruments, many of which may have problems with reliability and validity".

A simple approach would be to ask users a single-item question such as 'Was your system a success?'. At first glance, this may seem an easy, straightforward way of obtaining a dependent variable for success. Upon deeper reflection, obvious problems come to light; the single-item approach has been criticized as ambiguous and prone to misunderstanding. When the single-item approach is applied in a Taiwanese setting, language and cultural differences (e.g. in the definition of success) might heighten this misunderstanding.

Another approach for obtaining a reliable dependent variable would be to construct a new instrument for the Taiwanese environment.


The process of building a new instrument is not easy. Straub outlines a procedure that includes methods for instrument validation, internal validity, statistical conclusion validity, and external validity. While this approach could be employed to construct a measure of information system success, yet another instrument would be added to the great number already in existence. DeLone and McLean's concern for a cumulative tradition of consistent information system research and Jarvenpaa, Dickson, and DeSanctis' plea for standardization of IS instruments would have to be ignored. For these and other reasons, an existing instrument was used as a surrogate measure for success.

Doll and Torkzadeh proposed an instrument for measuring end-user computing satisfaction (EUCS). Like Davis, Doll and Torkzadeh developed an instrument with dimensions of ease of use, content, accuracy, format, and timeliness. Their instrument was specifically designed to work within the end-user computing environment. In 1994, Doll and coworkers revisited their model constructs and tested five alternative configurations for EUCS. That study provided evidence that the instrument comprises five sub-scales subsumed by a single second-order construct, EUCS. The current form of the instrument was used here. Doll and Torkzadeh used a multistep process to validate their instrument and found it to be generalizable across several applications. However, Taiwanese applications were not among them. Therefore, it is important to ensure that the psychometric properties of the instrument remain constant. The instrument is of particular interest here because the applications tested can be categorized as end-user computing.

3. Methods

3.1. Subjects

The target population for this study was knowledge workers: specifically, individuals whose primary work-related activities were information-based and required the use of IT to complete these activities. Representatives from 25 companies were identified and asked to participate in this study. The companies (all located in Taiwan) represented a diverse group of industries, including the public sector, manufacturing, consulting, health care, transportation, and finance.


Each representative was asked to distribute approximately 20 questionnaires to a randomly selected group of knowledge workers throughout the organization.

3.2. Instrument

The survey package contained a cover letter from the organizational representative, an additional letter from the researchers explaining the purpose of the study, and the questionnaire. All respondents were guaranteed confidentiality of individual responses, and only summary statistics were returned to participating organizations. The questionnaire consisted of two parts. The first involved 14 demographic questions designed to solicit information about the respondents, their organization, and the extent to which they used a computer at work and at home. The second part was Doll and Torkzadeh's EUCS instrument: 12 questions designed to measure the respondent's satisfaction with the computer system they most frequently use. The instrument was designed to provide a composite measure of user satisfaction, as well as measures on the dimensions of information content, accuracy, format of information, ease of use, and timeliness. The exact form of the EUCS instrument used in this study is shown in Fig. 1. Five-point Likert-type scales were used to score the responses.

3.3. Demographics

The response rate was 68.4% (342 usable responses out of 500 questionnaires distributed). This high return rate can most likely be attributed to the use of a corporate representative/sponsor in the dissemination and collection of the survey instrument. The sample was almost evenly split between males (48%) and females (52%); the typical respondent was 31.2 years old, had worked for their organization approximately 5.7 years, and represented a wide range of positions within the organization. The target group was knowledge workers, or individuals whose primary work-related activities were information-based. Almost 74% of the respondents indicated that they used their computer several times a day to complete work-related activities, while the remainder used their computers somewhat less frequently.


Fig. 1. Form of EUCS instrument.

Typical software packages used included spreadsheets, database packages, word processors, and statistical packages. These responses provide evidence that the respondents were knowledge workers and fell within the targeted population.

4. Analysis and results

Table 1 reports simple statistics and correlations for each element of the instrument. The internal consistency (Cronbach alpha) for each factor was: content = 0.81, accuracy = 0.80, format = 0.65, ease of use = 0.79, and timeliness = 0.66. Reliability for the overall 12-item instrument was calculated to be 0.90, which compares favorably with the overall alpha of 0.92 in the original Doll and Torkzadeh study. Table 2 reports corrected item–total correlations and the alpha of the instrument if an item is deleted. Individual item–total correlations are all significant, ranging from a low of 0.57 to a high of 0.68. The alpha with each item deleted remains at 0.89, indicating good reliability. An examination of the sub-scales reveals significant item–total correlations ranging from 0.59 to 0.76, with the alpha for each deleted construct ranging from 0.79 to 0.82.
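For readers who wish to reproduce these reliability statistics on their own data, the sketch below shows the standard computations. It assumes the 12 EUCS item responses sit in a pandas DataFrame with columns C1–C4, A1, A2, F1, F2, E1, E2, T1, and T2; the file name and column labels are illustrative, not artifacts of the original study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_analysis(items: pd.DataFrame) -> pd.DataFrame:
    """Corrected item-total correlation and alpha-if-item-deleted for each item."""
    rows = {}
    for col in items.columns:
        rest = items.drop(columns=col)
        rows[col] = {
            # Correlate each item with the total of the remaining items.
            "corrected_item_total_r": items[col].corr(rest.sum(axis=1)),
            # Alpha recomputed on the remaining items.
            "alpha_if_deleted": cronbach_alpha(rest),
        }
    return pd.DataFrame(rows).T

# Hypothetical usage: `responses` holds the 12 EUCS items scored 1-5.
# responses = pd.read_csv("eucs_survey.csv")   # columns C1..C4, A1, A2, F1, F2, E1, E2, T1, T2
# print(cronbach_alpha(responses))             # overall 12-item alpha (0.90 in this study)
# print(item_analysis(responses))              # Table 2-style diagnostics
```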


Table 1
Correlation matrices and simple statisticsa

Item correlations

      C1    C2    C3    C4    A1    A2    F1    F2    E1    E2    T1
C2   0.63
C3   0.57  0.54
C4   0.46  0.46  0.45
A1   0.41  0.36  0.34  0.42
A2   0.39  0.29  0.33  0.40  0.67
F1   0.40  0.34  0.41  0.39  0.39  0.48
F2   0.43  0.44  0.37  0.46  0.59  0.54  0.48
E1   0.42  0.45  0.43  0.36  0.36  0.35  0.38  0.36
E2   0.45  0.45  0.39  0.32  0.35  0.39  0.37  0.41  0.65
T1   0.54  0.56  0.54  0.52  0.35  0.33  0.39  0.41  0.36  0.37
T2   0.45  0.49  0.40  0.59  0.43  0.35  0.30  0.38  0.33  0.27  0.49

Sub-scale and overall instrument correlations

               Content   Accuracy   Ease of use   Format   Timeliness
Accuracy        0.505
Ease of use     0.561     0.438
Format          0.588     0.636     0.488
Timeliness      0.740     0.467     0.419         0.499
Overall EUCS    0.898     0.744     0.719         0.786    0.799

Individual items: simple statistics

Item   Mean   S.D.
C1     3.33   0.83
C2     3.40   0.81
C3     3.22   0.88
C4     3.36   0.90
A1     3.59   0.89
A2     3.58   0.87
F1     3.51   0.86
F2     3.53   0.84
E1     3.46   0.85
E2     3.51   0.84
T1     3.36   0.89
T2     3.37   0.95

Sub-scales: simple statistics

Factor         Mean    S.D.
Content        13.30   2.72
Accuracy        7.17   1.60
Format          7.04   1.47
Ease of use     6.96   1.54
Timeliness      6.73   1.58
Overall EUCS   41.21   7.1

a Sample size = 342. The dispersion column is labeled S.D. here: the printed values are consistent with standard deviations (the overall EUCS value is smaller than the sum of the sub-scale values, which variances of sums could not be).

Construct validity was assessed using confirmatory factor analysis to determine whether the hypothesized factor structure exists in the collected data [5]. Doll and Torkzadeh originally proposed a five-factor structure. Subsequent research and additional analysis have provided evidence that EUCS is a multifaceted construct consisting of five sub-scales and a single overall second-order construct [6].


Table 2
Reliability analysis

Item          Corrected item–total    Alpha of entire instrument if
              correlation             item or construct is deleted
C1            0.68                    0.89
C2            0.66                    0.89
C3            0.62                    0.89
C4            0.64                    0.89
A1            0.62                    0.89
A2            0.59                    0.89
F1            0.57                    0.89
F2            0.64                    0.89
E1            0.58                    0.89
E2            0.57                    0.89
T1            0.64                    0.89
T2            0.59                    0.89

Sub-scales
Content       0.76                    0.79
Accuracy      0.61                    0.81
Format        0.68                    0.80
Ease of use   0.59                    0.82
Timeliness    0.69                    0.79

The second-level structure is a single factor, EUCS, which is composed of the original first-order factors of content, accuracy, format, ease of use, and timeliness. Results of the current study are compared to model 4, the five first-order factors/one second-order factor solution recommended by Doll and coworkers. LISREL 8 [13,19] was used to test the fit of the hypothesized model against the collected data [24–26]. Fig. 1 contains the a priori factor structure that was tested. Table 3 presents the goodness-of-fit indexes for this study and compares them to the values reported by Doll and coworkers. The absolute indexes (GFI = 0.98, AGFI = 0.91, and RMSR = 0.04) compare favorably with the values reported by Doll and coworkers, indicating a good model–data fit. The chi-square statistic divided by the degrees of freedom also indicates a reasonable fit at 2.62 [33]. LISREL's maximum likelihood estimates of the standardized parameters are presented in Table 4 for the observed variables and Table 5 for the latent variables.
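The original analysis was run in LISREL 8. As a hedged illustration only, the same second-order model can be specified in lavaan-style syntax with the open-source Python package semopy; the model description mirrors Fig. 1, the data file name is hypothetical, and fit indexes from a different estimator and package will not match Table 3 exactly.

```python
import pandas as pd
import semopy

# Five first-order factors and one second-order EUCS factor
# (model 4 of Doll and coworkers), in lavaan-style syntax.
MODEL_DESC = """
Content    =~ C1 + C2 + C3 + C4
Accuracy   =~ A1 + A2
Format     =~ F1 + F2
EaseOfUse  =~ E1 + E2
Timeliness =~ T1 + T2
EUCS       =~ Content + Accuracy + Format + EaseOfUse + Timeliness
"""

# Hypothetical file name; the 342 responses from this study are not public.
data = pd.read_csv("eucs_survey.csv")

model = semopy.Model(MODEL_DESC)
model.fit(data)  # maximum likelihood estimation by default

# Standardized loadings and structural coefficients (cf. Tables 4 and 5).
print(model.inspect(std_est=True))

# Chi-square, d.f., GFI, AGFI, NFI, RMSEA, and other fit indexes (cf. Table 3).
print(semopy.calc_stats(model).T)
```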

Table 3
Goodness-of-fit indexes

                                         Current study   Doll et al. [11] study
Chi-square (d.f.)                        115.6 (44)      185.81 (50)
Chi-square/d.f.                          2.62            3.72
Normed fit index (NFI)                   0.98            0.940
Goodness-of-fit index (GFI)              0.98            0.929
Adjusted goodness-of-fit index (AGFI)    0.91            0.889
Root mean square residual (RMSR)         0.04            0.035

Table 4
Standardized parameter estimates and t-values

        Current study                     Doll et al. [11] study
Item    Factor loading   R-square         Factor loading   R-square
                         (reliability)                     (reliability)
C1      0.76 (16.1)      0.57             0.826a           0.68
C2      0.76 (16.2)      0.58             0.852 (20.36)    0.73
C3      0.70 (14.5)      0.49             0.725 (16.23)    0.53
C4      0.68 (13.9)      0.46             0.822 (19.32)    0.68
A1      0.82 (17.0)      0.68             0.868a           0.76
A2      0.81 (16.7)      0.66             0.890 (20.47)    0.79
F1      0.64 (12.1)      0.40             0.780a           0.61
F2      0.76 (14.6)      0.58             0.829 (17.89)    0.69
E1      0.81 (15.8)      0.65             0.848a           0.72
E2      0.81 (15.9)      0.66             0.880 (16.71)    0.78
T1      0.74 (14.4)      0.54             0.720a           0.52
T2      0.67 (12.9)      0.44             0.759 (13.10)    0.58

a Indicates a parameter fixed at 1.0 in the original solution; t-values for item factor loadings are indicated in parentheses.


Table 5
Structural coefficients and t-valuesa

              Current study                      Doll et al. [11] study
Factor        Standard          R-square         Standard          R-square
              structural        (reliability)    structural        (reliability)
              coefficient                        coefficient
Content       0.74 (22.4)       0.55             0.912 (17.67)     0.68
Accuracy      0.80 (26.8)       0.64             0.822 (16.04)     0.73
Format        0.90 (40.9)       0.80             0.993 (18.19)     0.53
Ease of use   0.78 (25.6)       0.62             0.719 (13.09)     0.68
Timeliness    0.72 (20.8)       0.52             0.883 (13.78)     0.76

a Indicates a parameter fixed at 1.0 in the original solution; t-values for factor structural coefficients are indicated in parentheses.

Table 4 compares the factor loadings, corresponding t-values, and R-square values for this study with those reported by Doll and coworkers. All items have significant loadings on their corresponding factors, indicating good construct validity. R-square values range from 0.40 to 0.68, providing evidence of acceptable reliability for all individual items. Table 5 provides the standard structural coefficients and corresponding t-values, as well as R-square values, for the latent variables. The standard structural coefficients indicate the validity of the latent constructs, with values ranging from 0.72 to 0.90. The t-values are all significant, and the R-square values range from 0.52 to 0.80, indicating acceptable reliability for all factors.
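As an aside, the two columns of Table 4 can be checked against one another: in a completely standardized solution, each item's R-square (its reliability) is simply the square of its standardized loading. A minimal illustration using a few of the loadings above; the small discrepancies reflect rounding of the published loadings to two decimals.

```python
# In a standardized CFA solution, item reliability R^2 equals the
# squared standardized loading: R^2 = lambda^2.
loadings = {"C1": 0.76, "A1": 0.82, "F1": 0.64, "T2": 0.67}  # from Table 4
for item, lam in loadings.items():
    print(f"{item}: loading = {lam:.2f}, implied R-square = {lam ** 2:.2f}")
# Prints 0.58, 0.67, 0.41, 0.45; Table 4 reports 0.57, 0.68, 0.40, 0.44,
# with the gaps attributable to rounding in the published loadings.
```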

5. Conclusions

5.1. Limitations

This research examined validation issues related to the EUCS instrument and its administration to IS users in Taiwan. Although the instrument retained its psychometric properties in this application, the results cannot be generalized to all international applications without further testing. The current findings do, however, strengthen the argument that EUCS remains valid outside the United States.

5.2. Summary

The purpose of this research was to determine whether an IS instrument commonly used as a surrogate measure for success, EUCS, can be applied in a Taiwanese setting. Before a manager, decision-maker, or software developer can obtain the benefits of using a standardized instrument to compare competing software packages, features, or approaches, it is important to make sure that the instrument maintains its psychometric properties. The results indicate that the EUCS instrument did retain these properties. In addition, this study provides further evidence that EUCS is a multifaceted construct consisting of sub-scales (content, accuracy, format, ease of use, and timeliness) and a single overall second-order construct, EUCS.

References

[1] D.A. Adams, R.R. Nelson, P.A. Todd, Perceived usefulness, ease of use, and usage of information technology: a replication, MIS Quarterly 16 (2), 1992, pp. 227–247.
[2] J.E. Bailey, S.W. Pearson, Development of a tool for measuring and analyzing computer user satisfaction, Management Science 29 (5), 1983, pp. 530–545.
[3] J.J. Baroudi, M.H. Olson, B. Ives, An empirical study of the impact of user involvement on system usage and information satisfaction, Communications of the ACM 29 (3), 1986, pp. 232–238.
[4] S. Blili, L. Raymond, S. Rivard, Impact of task uncertainty, end-user involvement, and competence on the success of end-user computing, Information & Management 33 (3), 1998, pp. 137–153.
[5] K.A. Bollen, Structural Equations with Latent Variables, Wiley, New York, 1989.
[6] W.W. Chin, R.N. Newsted, The importance of specification in causal modeling: the case of end-user computing satisfaction, Information Systems Research 6 (1), 1995, pp. 73–81.
[7] J. Cyert, J.G. March, A Behavioral Theory of the Firm, Prentice-Hall, Englewood Cliffs, NJ, 1963.
[8] W.H. DeLone, E.R. McLean, Information systems success: the quest for the dependent variable, Information Systems Research 3 (1), 1992, pp. 60–95.


[9] F.D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly 13 (4), 1989, pp. 319–340.
[10] W.J. Doll, G. Torkzadeh, The measurement of end-user computing satisfaction, MIS Quarterly 12 (2), 1988, pp. 259–274.
[11] W.J. Doll, W. Xia, G. Torkzadeh, A confirmatory factor analysis of the end-user computing satisfaction instrument, MIS Quarterly 18 (4), 1994, pp. 453–461.
[12] M.J. Ginzberg, Key recurrent issues in the MIS implementation process, MIS Quarterly 5 (2), 1981, pp. 47–59.
[13] L.A. Hayduk, Structural Equation Modeling with LISREL, Johns Hopkins University Press, Baltimore, MD, 1987.
[14] A.R. Hendrickson, K. Glorfeld, T.P. Cronan, On the repeated test-retest reliability of the end-user computing satisfaction instrument: a comment, Decision Sciences 25 (4), 1994, pp. 655–667.
[15] A.R. Hendrickson, P.D. Massey, T.P. Cronan, On the test-retest reliability of perceived usefulness and perceived ease of use scales, MIS Quarterly 17 (2), 1993, pp. 227–230.
[16] B. Ives, M.H. Olson, J.J. Baroudi, The measurement of user information satisfaction, Communications of the ACM 26 (10), 1983, pp. 785–793.
[17] S.L. Jarvenpaa, G.W. Dickson, G. DeSanctis, Methodological issues in experimental IS research, MIS Quarterly 9 (2), 1985, pp. 141–156.
[18] A.M. Jenkins, Research methodologies and MIS research, in: E. Mumford, et al. (Eds.), Research Methods in Information Systems, Elsevier, North Holland, 1985, pp. 103–117.
[19] K.G. Jöreskog, D. Sörbom, LISREL 8: Structural Equation Modeling with the SIMPLIS Command Language, Scientific Software International, Chicago, IL, 1993.
[20] O. Khalil, M. Elkordy, The relationship between user satisfaction and systems usage: empirical evidence from Egypt, Journal of End-User Computing 11 (2), 1999, pp. 21–28.
[21] W.R. King, B.J. Epstein, Assessing information system value, Decision Sciences 14 (1), 1983, pp. 34–45.
[22] M.A. Mahmood, Systems development models: a comparative investigation, MIS Quarterly 11 (3), 1987, pp. 293–311.
[23] M.A. Mahmood, J.M. Burn, L.A. Gemoets, C. Jacquez, Variables affecting information technology end-user satisfaction: a meta-analysis of the empirical literature, International Journal of Human-Computer Studies 52, 2000, pp. 751–771.
[24] H.W. Marsh, The structure of masculinity/femininity: an application of confirmatory factor analysis to higher-order factor structures and factorial invariance, Multivariate Behavioral Research 20, 1985, pp. 427–449.
[25] H.W. Marsh, D. Hocevar, Application of confirmatory factor analysis to the study of self-concept: first- and second-order factor models and their invariance across groups, Psychological Bulletin 97 (3), 1985, pp. 562–582.
[26] H.W. Marsh, D. Hocevar, A new, more powerful approach to multitrait-multimethod analysis: application of second-order confirmatory factor analysis, Journal of Applied Psychology 73 (1), 1988, pp. 107–117.

[27] R.W. McHaney, T.P. Cronan, Computer simulation success: on the use of the end-user computing satisfaction instrument, Decision Sciences 29 (2), 1998, pp. 525–536.
[28] R.W. McHaney, R. Hightower, D. White, EUCS test-retest reliability in representational model decision support systems, Information & Management 36, 1999, pp. 109–119.
[29] N.P. Melone, A theoretical assessment of the user satisfaction construct in information systems research, Management Science 36 (1), 1990, pp. 76–91.
[30] S.C. Palvia, N.L. Chervany, An experimental investigation of factors influencing predicted success in DSS implementation, Information & Management 29 (1), 1995, pp. 43–54.
[31] C. Shayo, R. Guthrie, M. Igbaria, Exploring the measurement of end-user computing success, Journal of End-User Computing 11 (1), 1999, pp. 5–14.
[32] D.W. Straub, Validating instruments in MIS research, MIS Quarterly 13 (2), 1989, pp. 147–166.
[33] B. Wheaton, B. Muthén, D.F. Alwin, G.F. Summers, Assessing reliability and stability in panel models, in: D.R. Heise (Ed.), Sociological Methodology, Jossey-Bass, San Francisco, CA, 1977.

Roger McHaney is an Associate Professor of Management Information Systems at Kansas State University. Prior to starting an academic career, Dr. McHaney was employed by the Jervis B. Webb Company, where he developed discrete simulation models for managerial decision-makers in corporations such as General Motors, Goodyear, Ford, IBM, Chrysler, Kodak, Caterpillar, The Los Angeles Times, and The Boston Globe. His current research interests include information systems success measures, innovative uses for simulation languages, and uses of the genetic algorithm in simulation settings. Dr. McHaney holds a PhD from the University of Arkansas, where he specialized in Computer Information Systems and Quantitative Analysis. He has published in journals such as Decision Sciences, Decision Support Systems, Information & Management, Simulation, International Journal of Production Research, Simulation & Gaming, and The Journal of End-User Computing.

Ross Hightower received his MS and PhD in Decision Sciences from Georgia State University. After 9 years at Kansas State University, he moved to the University of Central Florida, where he is, once again, an Assistant Professor. His research has been published in journals such as Decision Sciences, Information Systems Research, and Information & Management. His main research interests are in the areas of computer-mediated communication, information exchange, and user acceptance of technology. He has served as a consultant to companies such as Bank America, IBM, BAPCO, and United Capital Insurance.

John Pearson is an Associate Professor of Information Technology at Southern Illinois University at Carbondale. He has been on the faculties of Kansas State University and St. Cloud State University. He has published articles in Communications of the ACM, Information & Management, Decision Support Systems, Journal of Computer Information Systems, Journal of Strategic Information Systems, Journal of Management Information Systems, and Public Administration Quarterly. Dr. Pearson has been active in both national and international conferences. His primary teaching and research interests are in the areas of Internet applications, learning and training, and quality management issues.
