Measuring Acceptance of Computer-Mediated Communication Systems

Starr Roxanne Hiltz
Department of Computer and Information Science, New Jersey Institute of Technology, Newark, NJ 07102

Kenneth Johnson
Department of Psychology, Upsala College, East Orange, NJ 07019

Three dimensions of acceptance for Computer-Mediated Communication Systems (CMCS) were only moderately interrelated in a longitudinal study of users of four systems: use, subjective satisfaction, and benefits. The methodological objective of this study was to identify generalizable factor structures for acceptance of CMCS, based on a small set of items. Analysis of the items measuring subjective satisfaction identified four factors: satisfaction with the interface, feelings that the system's performance was productive and stimulating, ability of CMCS to support expressive interpersonal communications, and problems with CMCS as a mode of communication and information exchange. Two components of benefits were identified: impacts on productivity and impacts on career advancement. The findings suggest that future studies of CMCSs in particular, and perhaps of computer-based information systems in general, should not assume that usage alone or subjective satisfaction alone is an adequate measure of successful implementation. Use, subjective satisfaction, and perceived benefits may vary independently.

Introduction

"Acceptance" or "success" of computer systems or new communication technologies is sometimes assumed to be unidimensional. For instance, if employees use an interactive computer system, it may be defined by management as "successful." "Technicists" (Mowshowitz, 1981) or "systems rationalists" (Kling, 1980) may assume that if a computer system is well designed, it will be used; if it is being used, the users must like it; and therefore it must be having the intended beneficial impacts. However, many social analyses of computing assume that whether or not systems will be used, and whether or not they will have the intended beneficial effects on users as individuals and on productivity enhancement for organizations, is much more problematic. (See, for instance, Keen, 1981; Attewell & Rule, 1984; Strassman, 1985.)

Received December 9, 1987; revised February 29, 1988; accepted March 4, 1988.
© 1989 by John Wiley & Sons, Inc.
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE. 40(6):386-397, 1989    CCC 0002-8231/89/060386-12$04.00

In order for research results relating to the acceptance or success of computer-based information systems to be cumulative, information scientists must build a common conceptual framework. They must develop valid, reliable, and generalizable measures that can be applied to whole classes of systems, rather than only to specific systems. This article reports on a study which made progress toward these goals for one class of information system, Computer-Mediated Communication Systems (CMCS). The most common form of CMCS is "electronic mail" or message systems, which deliver discrete text communications from a sender to one or more recipients via computer networks. Computerized conferencing systems provide software structures oriented toward the support of extended group communication on a common task or topic (Hiltz & Turoff, 1978; Rice, 1984; Johansen, Vallee, & Spangler, 1979; Uhlig, Farber, & Bair, 1979; Hiltz & Turoff, 1985). Most frequently approached as a medium of communication (e.g., Rice, 1984), CMCSs are simultaneously a type of computer-based information system. Understanding user behavior in regard to CMCS can benefit from prior studies of computer-based information systems as well as from research traditions derived from studies of group communication. Prior research on CMCS and MIS was drawn upon to develop the concept of three dimensions of acceptance or success (use, satisfaction, and benefits); measures of these dimensions; and hypotheses about their interrelationships and correlates. Baseline and four-month follow-up questionnaires were sent to almost 1000 new users of three computer conferencing systems and one electronic mail system. Objectives of the study included identifying the factors or underlying constructs which form the components of user satisfaction and benefits of CMCS; examining how satisfaction and benefits are related to use and to each other across a variety of systems; and examining what classes of predictors are most strongly associated with each of the three dimensions of acceptance. Data related to the former two objectives will be reported in this article. (See Hiltz, Kerr, & Johnson, 1985 for complete results.)

Acceptance is often popularly equated with amount of use of a CMCS or MIS. However, this is an inadequate measure:

Hours of use is not a completely valid measure of acceptance of computer-mediated communication systems. Ideally, one would supplement the amount of use as an indicator with subjective ratings of a system's acceptability and potential benefits

(Kerr & Hiltz, 1982, p. 58). Lack of use cannot be equated with "rejection": the user may have no task or reason for using a system, may not have convenient access, or may lack an understanding of what the system can do and how to operate it. On the other hand, employees may be ordered to use a system, and thus may accumulate high usage statistics, while at the same time disliking the system and believing that its costs or disadvantages outweigh its benefits (Kerr & Hiltz, 1982; Powers & Dickson, 1973). In addition, as will be discussed further below, measures of amount of time spent using a system are not as straightforward as might be thought at first. Hence, in this study, acceptance is defined to mean "successful implementation or adoption" of a CMCS or MIS. We expected to find moderate levels of correlation among the three dimensions of acceptance, supporting the assumption that they are distinct but related concepts. The factor structures for subjective satisfaction and benefits reported in this article may be useful in helping to build a standard set of measures for future studies of acceptance and impacts of CMCS. The findings on the extent of intercorrelation vs. independence of the use, subjective satisfaction, and benefits dimensions of acceptance have implications for studies of a much wider class of information systems.

Computer-Mediated Communications Systems: Related Research

There is extensive literature on CMCS, encompassing hundreds of books and articles. (For reviews, see Rice, 1984; Kerr & Hiltz, 1982; Rice, 1980; Hiltz, 1986; Steinfield, 1986b; Culnan & Markus, 1987.) The first systematic, empirical studies of CMCS took place at the Institute for the Future. Their taxonomy of "elements of group communication" (Vallee, Johansen, Randolph, & Hastings, 1974) served as a starting point for subsequent studies at the New Jersey Institute of Technology. Hiltz (1978, 1984) undertook a longitudinal case study of scientific research communities on EIES, the Electronic Information Exchange System at NJIT. It examined correlates of use, perceived productivity enhancement, and some aspects of subjective satisfaction.

A second project (Hiltz et al., 1985; Kerr & Hiltz, 1982) systematically compared the findings related to 30 possible predictors of acceptance of CMCSs, for all studies with published evaluations. The evaluators were asked to reexamine their data and report their findings within a common framework. Studies for which correlates of acceptance were reported include a variety of systems, applications, and organizations (Martino & Bregenzer, 1981; Bregenzer & Martino, 1980; Umpleby, 1980; McCarroll, 1980; Siegel, 1980; P. Johnson-Lenz & T. Johnson-Lenz, 1980, 1981; Lamont, 1980; Stevens, 1980; Kerr, 1979; Guillaume, 1980; Adler & Lipinski, 1981; Lipinski, Spang, & Tydeman, 1980; Adriansson, 1980; Bair, 1974; Edwards, 1977; Tapscott, Greenberg, & Sartor, 1981). Evidence was sparse and conflicting for many of the correlates of user acceptance of CMCS. The conflicting results might be attributable to different indicators of acceptance, different user populations, or differences among the systems. In addition, much of the data reported for the prior synthesis of research on CMCS was qualitative, making it difficult to determine the interaction among variables or their relative power in predicting acceptance. The research reported here is the first to systematically include identical measures of the same variables in a single longitudinal study that includes a variety of different types of users and different CMCSs. Some prior research on teleconferencing focused on the appropriateness of alternative communication modes for task-oriented vs. social-emotional functions. For example, a controlled laboratory experiment on small group problem solving (Hiltz, Johnson, & Turoff, 1986) compared the process and outcome of computerized conferences vs. face-to-face discussions. In computer conferences, there was proportionately more task-oriented communication associated with decision quality, and less social-emotional communication associated with ability to reach agreement.
Modes of communication differ in "social presence": the feeling that a medium is personal, warm, and sociable rather than impersonal, cold, and unsociable (Rice, 1984; Short, Williams, & Christie, 1976). The paucity of nonverbal cues in CMCS may limit information that serves to improve perception of communication partners, to regulate social interaction, and to provide a social context for communication. On the other hand, participants may explicitly increase overt social-emotional expressions, such as greetings (Duranti, 1986) and paralinguistic cues (Carey, 1980), to compensate for the missing communication channels. Some analysts have asserted that CMCSs are unsuitable for social-emotional communication (Hiemstra, 1982), whereas others have described high levels of social-emotional content, which may escalate to "flaming" (Hiltz & Turoff, 1978; Rice & Love, 1987; Sproull & Kiesler, 1986). Steinfield (1986) has reported the results of a study of 220 users of an electronic mail system within a single organization. Not surprisingly, two distinct system-use factors were identified: task-related uses and socio-emotional uses. As will be detailed below, the factor structures for subjective satisfaction which we identified can be seen as a refinement of this dichotomy, with task-related and socio-emotional aspects of satisfaction each having two components.

Prior Studies of Acceptance of MIS

Our conceptualization of dimensions of acceptance and exploration of their interrelationship also draws upon prior studies of Management Information Systems (MIS). Zmud's review of correlates of MIS success (1979) reports a number of studies showing a consistently positive association between MIS use and MIS satisfaction (Barrett, Thornton, & Cabe, 1968; Lucas, 1975, 1978; Maish, 1979; Schewe, 1976; Swanson, 1974; Vasarhelyi, 1977), and observes that "preconceived attitudes toward MIS are associated with MIS usage to a much greater extent than MIS satisfaction." Zmud concludes that "the relationship between attitudes, involvement, satisfaction, and usage is quite complex," and that "no studies have investigated this issue." He also laments that many of the MIS-based studies related to this area "were of a laboratory nature and/or utilized students as subjects," and calls for studies in "real" MIS environments (Zmud, 1979, pp. 974-975). Bailey and Pearson (1983) developed and partially validated a measure of user information satisfaction with MISs that included a series of semantic differential scales for each of 39 dimensions. These were refined and further validated by Ives, Olson, and Baroudi (1983), who use the term "system success" to mean essentially the same thing as our term, "acceptance." In Ives and Olson (1984), system success is broken down into four dimensions: measures of system quality, system usage, "information satisfaction," and measures of changes in user behavior or attitudes. The last three correspond closely to our three dimensions of acceptance of CMCS. The first dimension that they identify, measures of system quality, generally is applied to the value, cost, timeliness, accuracy, etc. of the reports produced by an MIS (e.g., Boland, 1978; Lucas, 1976).
This set of measures is not applicable to CMCS, since there are no "reports" produced of the type generated by MIS. Studies of MIS usage include Lucas (1975), Ives and Olson (1984), and King and Rodriguez (1978, 1981); measures of "information satisfaction" are included in Powers and Dickson (1973), Maish (1979), Gallagher (1974), and Olson and Ives (1981). It should be noted that our dimension of "satisfaction" refers not to the products of using a system, but rather to the process of interacting with the system. Prior studies of MIS that encompassed measures of changes in user behavior or attitudes as a component of "success" include Maish (1979), Ives and Olson (1984), King and Rodriguez (1978), and Kaiser and Srinivasan (1980). The strength of the empirical link between satisfaction and use, and between use of computer systems and productivity increases or other measurable benefits, has been examined with far from consistent results. Evans (1976) suggested that there are minimum levels of satisfaction below which people will refuse to use a system at all. Powers and Dickson (1973) concluded that user satisfaction is the most critical criterion in determining the success or failure of a computer system. Lucas (1975) found only a weak relationship between information system usage by salesmen and their performance. In a different context, Neumann and Segev (1980) show a weak correlation between satisfaction of bank managers with their MIS and the bank branch's overall performance. On the other hand, Strassman (1985) examined 40 companies, and concluded that use of computer technology, as indicated by expenditures on information technology, "showed no correlation with management productivity." Recently, Baroudi, Olson, and Ives (1986) reported the results of a study of user information satisfaction with MIS based on the responses of 200 production managers of manufacturing organizations. Their measure of system use was based on user self-reports rather than on objective monitoring of amount of use. A statistically significant but modest correlation of .28 between measures of use and satisfaction was observed. They also tested directionality of the relationship between use and satisfaction with alternative path models, and concluded that users' satisfaction with the system leads to greater system usage. In another study related to identifying the dimensions of acceptance of MIS, Culnan (1984) reports the results of factor analyses for "dimensions of accessibility to online information," including one for a commercially available electronic mail system. Culnan's subjects for the electronic mail analysis were 25 graduate students enrolled in a course on office automation. Among her findings were that variables intended to measure an anticipated "ease of use" factor did not all load unambiguously as a distinct dimension of accessibility. As will be seen below, we also found that some specific items or variables load on more than one user satisfaction factor.
Method

Samples and Data

The data for this study include baseline and follow-up questionnaires plus system usage time for new users of four CMCSs. We purposely selected four systems that differ substantially in terms of both functionality and user populations. This was in keeping with our objective of identifying factor structures and correlates of acceptance which would be generalizable, rather than tied to one specific system or type of user. All prospective new users (persons issued accounts) during the sampling periods were designated as participants in the study. Baseline questionnaires were distributed to 348 new users of EIES (the Electronic Information Exchange System, operated by the New Jersey Institute of Technology as both a utility and an R&D facility), and 234 new users of the Swedish COM system at its "QZ" (Stockholm University) installation. The COM questionnaires were translated


into Swedish for those respondents not fluent in English. Both EIES and COM are not-for-profit, academic-based computerized conferencing systems, and both include a linear conferencing structure and a message system. Both are also fairly small in terms of total membership, with under 2000 users total at the time of the study. Directories include descriptions of members and conferences available for communication. However, COM's interface is quite different from that of EIES. For instance, the EIES interface allows a choice of menus or commands, but always appears the same at any location within the system. The COM interface presents a limited menu of options at any choice point, arranged according to an "artificial intelligence" approach that selects the most likely choices at that particular point in an interaction sequence. Another difference is that EIES includes many more specialized subsystems than does COM, such as systems for report generation and dissemination, online surveys, and data bases. In addition, 197 users of a commercial, publicly available conferencing system ("PUBLICON") and 156 users of a commercial electronic mail system ("INTMAIL"), both in the United States, were included. PUBLICON's basic conferencing structure is a "branching" type, unlike that of EIES and COM. (See Hiltz and Murray, 1985 for an explanation of structuring variations in CMCS.) The PUBLICON version studied did not include a full electronic mail capability. INTMAIL includes the "standard" kind of in-basket and out-basket mail-handling facilities of most commercial message systems. It has no conferencing capabilities, and the entire "user manual" is a folder, as compared to the much more voluminous documentation for the conferencing systems. Both of these systems operate on networks with tens of thousands of users. There are no overall directories included in the systems.

Differences Among Users of the Four Systems

Characteristics of the users and of the tasks which they were performing online also varied among the four systems. A fundamental difference is that all INTMAIL users worked for the same organization; the acronym "INTMAIL" was chosen to indicate that their CMCS was being used as an internal mail system. The other three CMCSs had users scattered among many different organizations, and were employed primarily for inter- rather than intra-organizational communication. A second basic difference is cultural: most of the COM users are Swedish or European, whereas most of the users of the other three systems are American. The typical EIES user was a member of a task-oriented group, had a terminal or microcomputer at home, had infrequently communicated with distant group members before system use, and was a senior executive or manager with a master's degree or doctorate. EIES also had the largest proportion of novices in the use of computers. Very few PUBLICON users belonged to a task-oriented group; on the contrary, they wandered onto the system because they were "just curious" and were likely to be looking for entertainment or exploration. Unlike most of the users of other systems, they were paying for their online time themselves. INTMAIL users (except for a handful who described themselves as consultants) worked in business, rather than government, academia, or other types of organizations. Four out of five were managers or executives. Only one out of ten had a terminal or microcomputer at home. INTMAIL users were most likely to have felt "required" to use the system as a condition of their employment. Since they were using INTMAIL to support their everyday internal corporate communications, they reported the highest importance ratings for communication. The modal COM user was a Swede employed by academia (30%) or government (25%) in a technical staff position, using the system for information exchange about technical subjects. Because the characteristics of the users of the systems varied, when differences in results occur, we will not be able to determine whether these differences are attributable to software characteristics or to user characteristics. On the other hand, if results are consistent across systems, this will provide strong evidence for their generalizability.

Data Collection Procedures and Response Rates

For all systems except PUBLICON, baseline questionnaires were included with the distribution of account access or documentation information, or sent immediately after an account was established by electronic request. PUBLICON users did not have to ask for an account; they chose it as one among a number of services available on a national network. Without a directory, participation of these users had to be solicited with a message requesting mailing address, which was sent to all first-time sign-ons. Thus the PUBLICON sample is self-selected, including only those who responded to an address request for mailing the baseline questionnaire, whereas the samples for the other three systems automatically included all new accounts. Follow-up questionnaires were mailed to each sampled participant after four months of system use. Items measuring subjective satisfaction with the systems and productivity-related variables were included only on a "long" version of the follow-up. For EIES and COM, we had data on cumulative time online before sending the follow-ups. Those identified as "dropouts" (less than four hours total online) were sent only a short follow-up with a checklist of reasons for their limited use. Nonrespondents received at least three follow-up attempts (online reminder messages, a reminder postcard, and then a complete second copy of the questionnaire), and administrators associated with the target systems co-signed the cover letters and urged cooperation. Nevertheless, response rates for the follow-up varied from a low of 40% for COM to a high of only 56% for EIES. Because of different sample sizes and response rates, the combined sample is not evenly divided (Table I). About one-third


TABLE I. Questionnaire response rates by system.*

System      Both (%)   Pre-use only   Follow-up only   None   Total sample   Number long follow-ups
EIES           46           14              10           30    100% = 348            145
QZCOM          22           13              18           47    100% = 234             39
PUBLICON       49           25               6           20    100% = 197            106
INTMAIL        28           15              22           36    100% = 156             76
ALL            38           16              13           33    100% = 935            366

*Chi square = 95, p = .001.
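Table I's chi-square statistic can be approximately recomputed from the rounded percentages and sample totals. This is a sketch: counts are reconstructed as percentage × N / 100, so the result only approximates the value computed from the original raw counts.

```python
# Sketch: approximately reproduce Table I's chi-square test of
# independence (response pattern x system) from rounded percentages.
# The paper reports chi-square = 95, p = .001; reconstruction from
# rounded values will land near, not exactly on, that figure.
import numpy as np
from scipy.stats import chi2_contingency

totals = {"EIES": 348, "QZCOM": 234, "PUBLICON": 197, "INTMAIL": 156}
# Percentages per system: [both, pre-use only, follow-up only, none]
pcts = {
    "EIES":     [46, 14, 10, 30],
    "QZCOM":    [22, 13, 18, 47],
    "PUBLICON": [49, 25,  6, 20],
    "INTMAIL":  [28, 15, 22, 36],
}

observed = np.array(
    [[round(p * totals[s] / 100) for p in pcts[s]] for s in totals]
)
chi2, pval, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {pval:.2g}")
```

With a 4 × 4 contingency table the test has (4 − 1)(4 − 1) = 9 degrees of freedom, and the reconstructed statistic is far into the rejection region, consistent with the reported p = .001.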

of the all-systems sample consists of EIES users, instead of the intended one-fourth of the total sample. This should be remembered when looking at the "all-systems" data.¹

Factor Structures and Reliability

Factor analysis (principal axis factor analysis with varimax rotation; Norusis, 1984) was chosen as the technique to reduce a large number of items measuring different aspects of subjective satisfaction and perceived benefits to a smaller number of underlying constructs. The number of components or factors needed to adequately describe the data is determined by examining the eigenvalues (a measure of variance accounted for); we followed the common criterion of retaining factors with eigenvalues greater than or equal to 1.00. Having identified the factors, the scores for each individual on each factor were added to the case records. No system other than EIES has a sufficient number of "long follow-up" responses to support a reliable factor analysis for the subjective satisfaction and benefits variables. Thus, the strategy adopted to test the reliability of the factors derived from the all-systems combined data was to divide it into EIES vs. the other three systems combined. Only if the analysis replicates the same factor structures when the EIES sample is examined separately from the other systems can we assert that we have identified valid and reliable factors which are generalizable across a variety of CMCS.

Measures of Amount of Use
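The extraction procedure described in the preceding subsection (retain factors with eigenvalues ≥ 1.00, then apply varimax rotation) can be sketched as follows. This is a minimal illustration on a handcrafted correlation matrix, not the study's data, and the plain eigendecomposition below stands in for a full principal-axis solution (which would iteratively re-estimate communalities).

```python
# Sketch: Kaiser criterion (retain eigenvalues >= 1.00) plus varimax
# rotation, on a synthetic 6-variable correlation matrix with two
# blocks of correlated items. Not the study's data.
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a (p x k) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T
            @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):
            break
        criterion = s.sum()
    return loadings @ rotation

# Two blocks of three items each: r = .6 within a block, 0 across.
R = np.eye(6)
for block in ([0, 1, 2], [3, 4, 5]):
    for i in block:
        for j in block:
            if i != j:
                R[i, j] = 0.6

eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]      # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

retain = eigvals >= 1.00               # Kaiser criterion
k = int(retain.sum())                  # two factors for this matrix
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
rotated = varimax(loadings)
print(k, np.round(rotated, 2))
```

Varimax is an orthogonal rotation, so it redistributes loadings toward "simple structure" (each item loading mainly on one factor) without changing the total variance explained.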

System utilization measures may focus on "intensity of use" by looking at hours of use within a given time period, or may focus on "cumulative use" or experience at a particular point in time. Our basic measure does both: it is a count of the total hours of connect time for all users at approximately four months after first sign-on.

¹Theoretically, responses could have been weighted so that each system would contribute one-fourth of the variance on the "all systems" data. However, the proportion of responses by system varies greatly for different items, depending upon whether one is looking at items chosen from the preuse, short follow-up, or long follow-up; and whether the items deal with questions on the group or task, which were answered only by respondents who belonged to a specific group and had a specific task to accomplish. Thus, it is not possible to assign a single weighting factor for respondents in this study.


System use was measured automatically by system monitor statistics collected as part of billing procedures. These data are more accurate and complete than recall would be, but are still far from perfectly valid measures of relative amount of system use. They were obtained for users approximately four months after receiving their accounts. Elapsed time is not exact; the usage statistics are produced only once a month, whereas new users can begin in midmonth. A second problem is that a user might not have actually begun regular use on the day an account was established or first used. Time online does not distinguish between "active" use (composing and sending) and "passive" use (receiving and reading). Ideally, alternative measures of use would count such things as the number and length of items sent, and the number and length of items received (and presumably read). However, hours of use, collected monthly as part of billing procedures, is the only usage data that were available for all systems. Time online is connect time; this is affected by modem baud rate, and by whether a user composes and reads while online, or uses a microcomputer to upload and download. In the latter case, the total amount of time spent on composing input for the system and on reading output from the system is much greater than the "connect time." Thus, "connect time" is an incomplete measure of time spent on system use for many of those using a microcomputer as a terminal. Some in-house mail and conferencing systems do not tie up outside telephone lines and are not charged for. Users may stay logged on for hours, ready to receive any incoming items. Connect time would grossly overstate "use" in such cases. The four systems included in this study all charged for connect time by the minute, and also involved tying up a telephone line to dial in. Under these circumstances, connect time is not likely to be inflated by users being logged on but totally inactive.
The data on connect time after four months were rounded to the nearest hour; this variable is labelled "Time4." Anything less than 30 minutes of use was thus counted as "zero hours." The mean for all systems was 14 hours, with a range from zero to 646 hours (N = 925). The data are severely skewed towards the high-use end (skewness = 10.7) and do not resemble a normal distribution (kurtosis = 183). The variable Time4 was transformed to produce two alternative measures of system use, each of which has cer-
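The shape statistics reported here, and the effect of a log transformation on them, can be illustrated on synthetic connect-time data (assumed lognormal-like for the sketch; the study's actual usage records are not reproduced):

```python
# Sketch: connect-time distributions like Time4 are typically severely
# right-skewed, and a log transform pulls in the long tail. Synthetic
# data only; parameters below are invented, not fitted to the study.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
hours = np.round(rng.lognormal(mean=1.8, sigma=1.3, size=925))  # Time4-like

raw_skew = skew(hours)
log_skew = skew(np.log1p(hours))   # log(1 + hours) accommodates zero hours

print(f"raw: skew = {raw_skew:.1f}, kurtosis = {kurtosis(hours):.1f}")
print(f"log: skew = {log_skew:.2f}")
```

The log-transformed variable is far closer to symmetric, which is why log hours behaves better than raw hours in correlation and factor analyses of such data.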


TABLE 4. Post-use system satisfaction: factor names and loadings.

Factors: Interface, Performance, Unexpressive, Mode Problems.
Items: Overall, Stimulate, Understand, Courteous, Hard, Impersonal, Frustrating, Waste, Unproductive, Express, Impression, Distracted, Constrained, Overload.
Panels: all four systems combined; EIES only (N = 145); other three systems (N = 214). Each panel also reports percent variance, cumulative percent, and eigenvalue per factor.
[Individual loadings are illegible in the source scan.]

ier to reach people with whom the user needed to communicate. Obtaining information or ideas shows a somewhat weaker correlation with the productivity factor. High scores indicate a lack of perceived productivity improvements of this nature. "Career" is related most strongly to the items on long-term and short-term contributions to career advancement; whether the system provided leads or other useful information, or increased the "stock of ideas," are also highly correlated. High scores indicate a perception that the system did not have any of these desirable impacts on the professional career. When the sample is split to check reliability of the factor structure, the coefficients are very similar for EIES only vs. the other three systems combined. This indicates that the factor structure has stability across different CMCS (see Table 5).

Relationships Among Dimensions of Acceptance

For all systems combined, the correlations between time online and the other dimensions of acceptance tend to be strongest for the log of hours online, as compared to raw hours or broadly categorized hours of use (Table 6). Logtime is significantly related to all measures of both subjective satisfaction and outcome, with the coefficients ranging from a weak .14 for satisfaction with the system interface to a moderate -.35 for perceptions that it is difficult to conduct expressive communications via this medium ("Unexpressive"). Because varimax rotation produces orthogonal factors, the intercorrelations among the factors measuring the same dimension of acceptance are by definition weak or insignificant. Thus, besides the column of six coefficients for Logtime, the other coefficients of particular interest in Table 6 are the eight for the relationship between each of the four subjective satisfaction factors and the two outcome factors. These are all significant, but not very strong.
TABLE 5. Benefit factors: factor loadings for the eight outcome items (Quantity, Quality, Useful, Reach, Short term, Long term, Leads, Stock) on the Productivity and Career factors, with percent of variance, cumulative variance, and eigenvalues, reported for all systems combined, for EIES only (N = 140), and for the other three systems combined (N = 210).

The only exception to this pattern is the intercorrelation between the largely redundant factors: perception of system performance and perception of productivity-enhancing outcomes as a result of system use. The strongest of the intercorrelations between nonredundant factors comprising subjective satisfaction and benefits is .31, between Unexpressive and Career. Users who do not feel that the medium is satisfactory for expressive communications are not likely to feel that its use results in career advancement. Evidently, it is "personal" networking that provides the contacts that may aid professionals in their careers, and those who do not feel that the medium is personal in nature will not try to use it for such activities.

When broken down for the four separate systems (tables available on request from the authors), the pattern of intercorrelations for the 14 key coefficients shows a marked difference between INTMAIL and the three conferencing systems. For the internal mail system, there are no significant relationships between amount of use as measured by Logtime and either the subjective satisfaction factors or the benefit factors. The only relationship that is both substantively and statistically significant for INTMAIL is between feeling that the system is unsatisfactory for expressive communication and perceiving no contribution to career advancement. For the three conferencing systems considered individually, there are individual coefficients that are not statistically significant, but no such overall absence of correlations. For example, for EIES there is no significant relationship between Logtime and Mode Problems, but the relationship between Logtime and productivity enhancement is stronger than average (.29). For COM, with the smallest sample, many of the coefficients are not significant, but the relationship between the subjective satisfaction factors and perceived career advancement is particularly strong (e.g., .34 for Career advancement with both Interface and Unexpressiveness).

PUBLICON shows the strongest relationship between system use as measured by Logtime and the four subjective satisfaction factors, with coefficients ranging from .20 to .45. In sum, the hypothesis that the three dimensions of acceptance would be moderately interrelated is generally supported for the three conferencing systems, though the strength and significance of the intercorrelations are uneven rather than consistent. It is not supported for the simple mail system,


JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE-November 1989

TABLE 6. Correlation matrix and descriptive statistics for the acceptance factors, all four systems combined. Variables: raw hours used, time categories, Logtime, Interface, Performance, Unexpressive, Mode Problems, no Productivity increase, and no Career advancement; means, standard deviations, and numbers of cases are also reported. Significance levels: (a) p < .01; (b) p < .001.

for which there is no relationship between use and either subjective satisfaction or perceived benefits.
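The advantage of the log-transformed usage measure reported above can be illustrated with a small simulation. The sketch below uses entirely synthetic data and hypothetical effect sizes (it is not the study's data): connect time is strongly right-skewed, so correlating its logarithm with an attitude score typically yields a larger Pearson coefficient than raw hours do.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic illustration: raw connect hours are lognormal (right-skewed),
# while the attitude score is linear in log-hours, mimicking "Logtime".
n = 1000
log_hours = rng.normal(loc=2.0, scale=1.0, size=n)      # latent log usage
hours = np.exp(log_hours)                               # raw hours: skewed
satisfaction = 0.8 * log_hours + rng.normal(scale=1.0, size=n)

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float(np.mean(xs * ys))

r_raw = pearson_r(hours, satisfaction)
r_log = pearson_r(np.log(hours), satisfaction)
print(f"raw hours:   r = {r_raw:.2f}")
print(f"log(hours):  r = {r_log:.2f}")
```

Under these assumptions the log measure shows the stronger coefficient, which is consistent with the pattern reported for Logtime versus raw or categorized hours.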

Summary and Discussion

We purposely sampled four CMCSs that were very different in terms of functionality, user interface, and types of users. Would it be possible to identify factor structures for subjective satisfaction and for benefits that would apply not just to one specific system or type of user, but were generalizable? The factor structures for user satisfaction and perceived benefits identified in this study are related to the prior theoretical and methodological literature on CMCS and MIS. Moreover, the identified constructs have face validity as clearly different and basic aspects of reactions to CMCS, and they replicate across different systems. For both the subjective satisfaction and the benefits dimensions of acceptance, the factor structures and factor loadings are almost identical for the split samples vs. the all-systems combined sample.

It can be argued that a different factor analysis procedure, a different rotation technique, or a different analyst identifying and naming the factors which emerged could have obtained different results. However, the results for subjective satisfaction are also similar to those obtained by Steinfield (1986) in his study of a different electronic mail system. They expand the "task" vs. "social-emotional" dichotomy, which has a long tradition in communications research (Bales, 1950). Subjective satisfaction comprises two components that are primarily task related and deal with CMCS as a computer system: satisfaction with the interface, and satisfaction with system performance. Subjective satisfaction also includes two factors that are primarily concerned with the social-emotional aspects of CMCS as a mode of human communication: problems with expressing and understanding feelings in writing, and general discomfort or limitations with CMCS as a mode of communication. Benefits comprise two sorts of outcomes: productivity enhancement, and individual career enhancement through increased personal networking.

The "busy" professionals using the systems we studied probably would not have allocated the time to complete an extensive set of items. Our goal was to identify a small set of items measuring satisfaction and benefits that would nevertheless encompass the major dimensions of these constructs. Prior iterative research (Bailey & Pearson, 1983; Ives, Olson, & Baroudi, 1983) on "user information satisfaction" points out that measures can be significantly enhanced by such procedures as establishing test-retest reliability, performing more tests of construct validity, and testing the instrument on other samples. This is true of the measures and factors reported in this article. Our results should serve as a starting point for the further development of scales to measure aspects of user acceptance of computer-mediated communication systems, and the results of our factor analyses should be taken as preliminary identifications of dimensions to be included.

This analysis has established that acceptance of CMCS is multidimensional. Though system use, subjective satisfaction, and benefits are all significantly correlated, the correlations are moderate in size for the three conferencing systems. For the internal mail system, increases in use do not result in increases in either subjective satisfaction or perceived benefits. This may be due to the limited capabilities of the software for the internal mail system, and/or to the greater ease of use of the mail system, so that benefits occur more quickly. With more complex conferencing systems, there tends to be a longer learning period during which users discover the value of a variety of features useful for supporting long-term collaborative group tasks (Hiltz & Turoff, 1981).
The lack of correlation between use and perceived benefits for the internal mail system may also be related to the fact that it had the largest proportion of users who felt "required" to use the system. Thus, employees would continue to use it even if they perceived no value. Future studies of CMCS in particular, and perhaps of interactive MIS in general, should not assume that usage alone or subjective satisfaction alone are adequate indicators of acceptance or success of a system implementation. Use, subjective satisfaction, and perceived benefits may vary independently. This is the most important theoretical, methodological, and practical implication of our findings.
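The varimax rotation and split-sample stability check discussed above can be sketched in a few lines. The code below is an illustrative reconstruction with synthetic loadings, not the study's data or software; the `varimax` implementation is the standard Kaiser algorithm, and Tucker's congruence coefficient is one common way to compare factor structures across subsamples.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of a p x k factor-loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # SVD-based update of the orthogonal rotation matrix
        u, s, vT = np.linalg.svd(
            loadings.T
            @ (rotated**3 - (gamma / p) * rotated @ np.diag((rotated**2).sum(axis=0)))
        )
        rotation = u @ vT
        d_new = s.sum()
        if d != 0.0 and d_new / d < 1 + tol:
            break
        d = d_new
    return loadings @ rotation

def congruence(x, y):
    """Tucker's congruence coefficient between two loading vectors."""
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# A clean two-factor "simple structure" for 8 hypothetical items,
# deliberately mixed by a 30-degree rotation to mimic unrotated output.
simple = np.array(
    [[.8, .1], [.7, .2], [.8, .1], [.7, .2],
     [.1, .8], [.2, .7], [.1, .8], [.2, .7]]
)
theta = np.pi / 6
mix = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
unrotated = simple @ mix

recovered = varimax(unrotated)

# Compare each original factor with its best-matching rotated factor
# (sign and column order are arbitrary after rotation).
phis = [max(abs(congruence(recovered[:, j], simple[:, i])) for j in range(2))
        for i in range(2)]
print("congruence with original structure:", [round(p, 3) for p in phis])
```

A rotation leaves each item's communality unchanged, so the check is purely about how the common variance is distributed across factors; congruence coefficients near 1.0 for the split halves are what "almost identical factor structures" amounts to numerically.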

Acknowledgments

Work on this project was partially supported by a grant from the National Science Foundation (DCR 8121865). The opinions expressed are solely those of the authors and do not necessarily represent those of the National Science Foundation. Many people contributed to this project. Elaine Kerr assisted with the questionnaire design and supervised the data collection for two of the four systems studied. Robert Arms, Cally Bark, Robert Ballentine, Seth Bostrum, Lincoln Brown, Christine Bullens, Claudia Gonzalez, Judy Hinds, Sonia Khalil, Tanmay Kumar, Patricia Lipkus, Margaretta Mattsen, Robert Michie, David Morgan, Jacob Palme, George Reinhart, Danielle Roshirt, Ellen Schreihofer, Andra Stam, Harry Stevens, Bart Voyce, and Lisa Voyce also assisted in this study. We are also grateful to our anonymous reviewers for their helpful suggestions. An earlier version of this article was presented at the Human Technology Interest Group sessions of the International Communication Association, Montreal, May 1987.

References

Adler, R. P., & Lipinski, H. M. (1981). HUB: A computer-based communication system. In S. R. Hiltz & E. B. Kerr (Eds.), Studies of computer-mediated communication: A synthesis of findings. Newark, NJ: New Jersey Institute of Technology.

Adriansson, L. (1980). Group communication through computer: Social psychological studies of attitudes to and experience with the COM system and its effects on the work environment. Gothenburg, Sweden: Department of Psychology, University of Gothenburg.

Attewell, P., & Rule, J. (1984). Computing and organizations: What we know and what we don't know. Communications of the ACM, 27, 1184-1191.

Bailey, J. E., & Pearson, S. W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29, 519-523.

Bair, J. H. (1974). Evaluation and analysis of an augmented knowledge workshop: Final report for phase I (RADC-TR-74-79). Griffiss Air Force Base, NY: Rome Air Development Center.

Bales, R. F. (1950). Interaction process analysis: A method for the study of small groups. Reading, MA: Addison-Wesley.

Baroudi, J. J., Olson, M. H., & Ives, B. (1986). An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29, 232-238.

Barrett, G. V., Thornton, C. L., & Cabe, P. A. (1968). Human factors evaluation of a computer-based information storage and retrieval system. Human Factors, 10, 431-436.

Boland, R. J. (1978). The process and product of system design. Management Science, 24, 887-898.

Bregenzer, J., & Martino, J. P. (1980). Futures research group experience with computerized conferencing. In M. M. Henderson & M. J. MacNaughton (Eds.), Electronic communication: Technology and impacts (AAAS Selected Symposium 52, pp. 65-70). Boulder, CO: Westview Press.

Carey, J. (1980). Paralanguage in computer mediated communication. Proceedings of the Association for Computational Linguistics, 61-63.

Culnan, M. J. (1984). The dimensions of accessibility to online information: Implications for implementing office information systems. ACM Transactions on Office Information Systems, 2, 141-150.

Culnan, M. J., & Bair, J. H. (1983). Human communication needs and organizational productivity: The potential impact of office automation. Journal of the American Society for Information Science, 34, 215-221.

Culnan, M. J., & Markus, M. L. (1987). Information technologies. In F. M. Jablin, L. L. Putnam, K. H. Roberts, & L. W. Porter (Eds.), Handbook of organizational communication: An interdisciplinary perspective (pp. 420-443). Newbury Park, CA: Sage.

Duranti, A. (1986). Framing discourse in a new medium: Openings in electronic mail. The Quarterly Newsletter of the Laboratory of Comparative Human Cognition, 8, 64-71.

Edwards, G. C. (1977). An analysis of usage and related perceptions of NLS: A computer-based text processing and communications system. Montreal: Bell Canada HQ Business Development.

Evans, J. (1976, October). Measures of computer and information systems productivity: Key informant interviews (Tech. Report APR-20546/TR-5). Pittsburgh, PA: Westinghouse Research Labs.

Gallagher, C. A. (1974). Perceptions of the value of a management information system. Academy of Management Journal, 17, 46-55.

Guillaume, J. (1980). Computer conferencing and the development of an electronic journal. Canadian Journal of Information Science, 21-29.

Heimstra, G. (1982). Teleconferencing, concern for face, and organizational culture. In M. Burgoon (Ed.), Communication Yearbook 6 (pp. 874-904). Beverly Hills, CA: Sage.

Hiltz, S. R. (1978). The impact of a new communications medium upon scientific research communities. Journal of Research Communication Studies, 1, 111-124.

Hiltz, S. R. (1984). Online communities: A case study of the office of the future. Norwood, NJ: Ablex.

Hiltz, S. R. (1986). Recent developments in teleconferencing and related technology. In A. E. Cawkell (Ed.), Handbook of information technology and office systems (pp. 823-850). Amsterdam: North-Holland.

Hiltz, S. R., Johnson, K., & Turoff, M. (1986). Experiments in group communication via computer, 1: Face-to-face vs. computerized conferences. Human Communication Research, 13, 225-252.

Hiltz, S. R., & Kerr, E. B. (1981). Studies of computer-mediated communications systems: A synthesis of the findings (Res. Rep. 16). Newark, NJ: Computerized Conferencing and Communications Center, New Jersey Institute of Technology.

Hiltz, S. R., Kerr, E. B., & Johnson, K. (1985). Determinants of acceptance of computer-mediated communication systems: A longitudinal study of four systems (Res. Rep. 22). Newark, NJ: Computerized Conferencing and Communications Center, NJIT.

Hiltz, S. R., & Turoff, M. (1978). The network nation: Human communication via computer. Reading, MA: Addison-Wesley.

Hiltz, S. R., & Turoff, M. (1981). The evolution of user behavior in a computerized conferencing system. Communications of the ACM, 24, 739-751.

Hiltz, S. R., & Turoff, M. (1985). Structuring computer-mediated communications to avoid information overload. Communications of the ACM, 28, 680-689.

Ives, B., & Olson, M. H. (1984). User involvement and MIS success: A review of research. Management Science, 30, 586-603.

Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26, 785-793.

Johansen, R., DeGrasse, R., Jr., & Wilson, T. (1975). Group communication through computers, Vol. 5: Effects on working patterns. Menlo Park, CA: Institute for the Future.

Johansen, R., Vallee, J., & Spangler, K. (1979). Electronic meetings: Technological alternatives and social choices. Reading, MA: Addison-Wesley.

Johnson-Lenz, P., & Johnson-Lenz, T. (1980). JEDEC/EIES project: Standardization in minicomputer/LSI products via electronic information exchange (Final report to the National Science Foundation).

Johnson-Lenz, P., & Johnson-Lenz, T. (1981). The evolution of a tailored communications structure: The Topics system (Res. Rep. 14). Newark, NJ: Computerized Conferencing and Communications Center, NJIT.

Kaiser, K. M., & Srinivasan, A. (1980). The relationship of user attitudes toward design criteria and information systems success. National AIDS Conference Proceedings, 201-203.

Keen, P. (1981). Information systems and organizational change. Communications of the ACM, 24, 24-33.

Kerr, E. B. (1979). Conferencing via computer: Evaluation of computer-assisted planning and management of the White House Conference on Library and Information Services. In Information for the 1980s: A final report of the White House Conference on Library and Information Services (pp. 767-805). Washington, DC: U.S. Government Printing Office.

Kerr, E. B., & Hiltz, S. R. (1982). Computer-mediated communication systems: Status and evaluation. New York: Academic Press.

King, W. R., & Rodriguez, J. I. (1978). Evaluating MIS. MIS Quarterly, 2, 43-52.

King, W. R., & Rodriguez, J. I. (1981). Participative design of strategic decision support systems: An empirical assessment. Management Science, 27, 717-726.

Kling, R. (1980). Social analyses of computing: Theoretical perspectives in recent empirical research. Computing Surveys, 12, 61-110.

Lamont, V. C. (1980). Computer conferencing: The Legitech experience. In L. A. Parker & C. H. Olgren (Eds.), Teleconferencing and interactive media. Madison, WI: Extension Center for Interactive Programs, University of Wisconsin.

Lipinski, H., Spang, S., & Tydeman, J. (1980). Supporting task-focussed communication. In A. R. Benenfeld & E. J. Kazlauskas (Eds.), Communicating information: Proceedings of the 43rd ASIS annual meeting (pp. 158-160). White Plains, NY: Knowledge Industry Publications.

Lucas, H. C., Jr. (1975). Why information systems fail. New York: Columbia University Press.

Lucas, H. C., Jr. (1975). Performance and the use of an information system. Management Science, 21, 908-919.

Lucas, H. C., Jr. (1976). The implementation of computer-based models. New York: National Association of Accountants.

Lucas, H. C., Jr. (1978). The use of an interactive information storage and retrieval system in medical research. Communications of the ACM, 21, 197-205.

Maish, A. M. (1979). A user's behavior toward his MIS. MIS Quarterly, 3, 39-52.

Martino, J. P., & Bregenzer, J. M. (1981). A trial of computerized conferencing among a group of futures researchers. In S. R. Hiltz & E. B. Kerr (Eds.), Studies of computer-mediated communication: A synthesis of findings (Final report to the National Science Foundation). Newark, NJ: New Jersey Institute of Technology.

McCarroll, J. H. (1980). EIES for a community involved in R&D for the disabled. In M. M. Henderson & M. J. MacNaughton (Eds.), Electronic communication: Technology and impacts (AAAS Selected Symposium 52, pp. 71-76). Boulder, CO: Westview Press.

Moshowitz, A. (1981). On approaches to the study of social issues in computing. Communications of the ACM, 24, 146-155.

Neumann, S., & Segev, E. (1980). Evaluate your information systems. Journal of Systems Management, 31.

Norusis, M. J. (1984). SPSS/PC. Chicago: SPSS Inc.

Olson, M. H., & Ives, B. (1981). User involvement in system design: An empirical test of alternative approaches. Information and Management, 4, 183-196.

Pelz, D. C., & Andrews, F. M. (1966). Scientists in organizations: Productive climates for research and development. New York: John Wiley.

Powers, R. F., & Dickson, G. W. (1973). MIS project management: Myths, opinions, and reality. California Management Review, 15, 147-156.

Rice, R. E. (1980). Computer conferencing. In B. Dervin & M. Voigt (Eds.), Progress in communication sciences: Vol. 2 (pp. 215-240). Norwood, NJ: Ablex.

Rice, R. E., & Associates. (1984). The new media: Communication, research and technology. Beverly Hills, CA: Sage.

Rice, R. E., & Love, G. (1987). Electronic emotion: A content and network analysis of a computer-mediated communication network. Communication Research, 14, 85-108.

Schewe, C. D. (1976). The management information system user: An exploratory behavioral analysis. Academy of Management Journal, 19, 577-590.

Short, J. A., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. New York: Wiley.

Siegel, E. R. (1980). Use of computer conferencing to validate and update NLM's hepatitis data base. In M. M. Henderson & M. J. MacNaughton (Eds.), Electronic communication: Technology and impacts (AAAS Selected Symposium 52, pp. 87-95). Boulder, CO: Westview Press.

Sproull, L., & Kiesler, S. (1986). Reducing social context cues: Electronic mail in organizational communication. Management Science, 32, 1492-1512.

Steinfield, C. W. (1986a). Computer-mediated communication in an organizational setting: Explaining task-related and socioemotional uses. In Communication Yearbook 9 (pp. 777-804). Beverly Hills, CA: Sage.

Steinfield, C. W. (1986b). Computer-mediated communication systems. In M. E. Williams (Ed.), Annual Review of Information Science and Technology: Vol. 21 (pp. 167-202). White Plains, NY: Knowledge Industry Publications.

Stevens, C. H. (1980). Many-to-many communication through inquiry networking. World Future Society Bulletin, 14, 31-35.

Strassmann, P. A. (1985). Information payoff: The transformation of work in the electronic age. New York: Macmillan.

Swanson, E. B. (1974). Management information systems: Appreciation and involvement. Management Science, 21, 178-188.

Tapscott, H. D., Greenberg, M., & Sartor, L. (1981). Predicting user acceptance of integrated office systems. Unpublished report, Toronto, Ontario.

Uhlig, R., Farber, D. J., & Bair, J. H. (1979). The office of the future: Communication and computers. Amsterdam: North-Holland.

Umpleby, S. A. (1980). Computer conferencing on general systems theory: One year's experience. In M. M. Henderson & M. J. MacNaughton (Eds.), Electronic communication: Technology and impacts (AAAS Selected Symposium 52, pp. 55-64). Boulder, CO: Westview Press.

Vallee, J., Johansen, R., Randolph, R., & Hastings, A. C. (1974). Group communication through computers, Vol. 2: A study of social effects (Report R-33). Menlo Park, CA: Institute for the Future.

Vallee, J., Johansen, R., Lipinski, H., Spangler, K., & Wilson, T. (1978). Group communication through computers, Vol. 4: Social, managerial, and economic issues. Menlo Park, CA: Institute for the Future.

Vasarhelyi, M. A. (1977). Man-machine planning systems: A cognitive style examination of interactive decision making. Journal of Accounting Research, 15, 138-153.

Zmud, R. W. (1979). Individual differences and MIS success: A review of the empirical literature. Management Science, 25, 966-979.
