NOKOBIT-98

Sesjon 2:2a

Technology acceptance in situations with alternative technologies

Tom R. Eikebrokk¹ and Øystein Sørebø²

¹ Faculty of Economics & Social Science, Agder College
² Department of Strategy and Management, Norwegian School of Economics and Business Administration

ABSTRACT

The Technology Acceptance Model (TAM) has been widely used to predict user acceptance and use based on perceived usefulness and ease of use. However, empirical tests of the model have given mixed and inconclusive results. In part, this may be because the tests have generally treated every setting as a one-choice situation, that is, a situation where the user has no alternative technologies to choose among. In this paper we argue that some situations are multiple-choice situations and that these situations require a particular approach. To address this issue, the paper reports on a study of 1009 white-collar workers in a Norwegian oil company who could choose between several technologies when communicating with others. The setting provides empirical data for assessing how well TAM explains the usage of the different communication technologies when the situation is treated as a one-choice situation versus a multiple-choice situation. The results demonstrate that TAM explains usage behavior for 5 out of 7 technologies better when the model is applied in accordance with the real situation, that is, a multiple-choice reality. Although this result has important implications for the study of technology acceptance, future studies will have to explore the potential importance of technology comparisons for variables beyond those included in this research. The existence of situations involving alternative technologies could have important implications for the development and testing of several of the theoretical models available in the IS field.

INTRODUCTION

Companies all over the world continue to invest a substantial amount of their economic resources in information technology (IT). The reasons for this investment vary, but one of the overriding beliefs is that IT offers the potential for improving white-collar performance. That is, IT can increase efficiency in writing, calculation, presentation, communication, etc. In management information systems (MIS) research this potential gain is often regarded as being obstructed by users' unwillingness to accept and utilize the technology. Because of the persistence and importance of this problem, understanding the factors that influence acceptance has been a long-standing issue in MIS research. One of the most widely used models to explain user acceptance is Davis' Technology Acceptance Model (TAM) (Davis 1989; Davis et al. 1989). This model suggests that acceptance is determined by the users' perceptions of usefulness and ease of use. However, the success in explaining user acceptance with this model has been mixed and inconclusive. In part, this may be because TAM is usually applied with the implicit assumption that there is no difference between single- and multiple-choice situations; that is, every situation is treated as presenting only one specific technology to the potential users. Hence, when the actual situation is a multiple-choice situation including alternative technologies, research progress may be stimulated by a more conscious application of TAM in accordance with this particular situation.


Computer users today are frequently exposed to multiple-choice situations. For example, information workers may not only evaluate an Internet browser when they need information about a topic; they may as well evaluate all available alternatives in the context (e.g. books, colleagues and CD-ROMs). Even if this evaluation of alternatives is an obvious reality, TAM is usually applied as if every situation were a single-target situation. The main problem with this common simplification is that when researchers (or MIS practitioners) actually face a multiple-choice situation, they may disregard a lot of available and important information in the setting. As a consequence, it may be difficult to obtain valid predictions and explanations of technology acceptance with TAM. Consequently, in situations where users can choose among several alternative technologies, it should be more appropriate to apply TAM in accordance with a multiple-choice reality. In the present research we utilize a modified form of TAM and empirically examine its ability to predict usage behavior in a multiple-choice situation. We are particularly interested in how well we can predict and explain user behavior when TAM is based on relative evaluations and usage versus when TAM is based on nominal evaluations and usage. This comparison characterizes the contrast between the evaluations made in practice by a user in a multiple-choice situation and the common single-target approach that researchers select when they investigate this situation. After presenting the major characteristics of TAM, we discuss a cross-sectional study of 1009 white-collar workers in a Norwegian oil company which provides empirical data for assessing how well relative evaluations, contrasted with nominal evaluations, explain the usage of different communication technologies.

THEORY

TAM, introduced by Davis (1986), is an adaptation of Ajzen and Fishbein's (1980) Theory of Reasoned Action (TRA), specifically tailored for modeling user acceptance of information systems. In the last decade, TAM has been the most commonly employed model of IT usage, receiving considerable empirical support (e.g., Davis 1989; Davis et al. 1989; Mathieson 1991; Taylor & Todd 1995). In this article we question the conception of the attitude object in TAM. Both TAM and other TRA-related models assume that behavior can be explained by investigating the attitudes towards engaging in a particular type of behavior which includes the attitude object. In TAM, behavior in terms of using information technologies has been explained by investigating the perceived usefulness and ease of use which the individual experiences or expects when using the specific technology. The model has been used to explain and predict the use of a number of technologies such as word processors, spreadsheets, databases, and communication technologies including electronic mail. The results from these studies are not conclusive (Moore & Benbasat 1991). Some studies have found positive correlations between ease of use, usefulness and self-reported use, whereas other studies have found that even though individuals express positive attitudes towards the technology, the technology is used rarely or not at all. The empirical evidence of positive attitudes followed by low user acceptance cannot be explained within TAM or TRA themselves. One possible explanation for the mixed empirical results can be the implicit assumption in these models: both TRA and TAM assume that there is only one attitude object that influences the behavior. In some situations this assumption is true, but in others it is not.

Hence, there are situations where the user has no alternative to the specific technology that is studied, and there are other situations, like the use of electronic mail, where the user has a number of alternative methods available for sending a message. Instead of using e-mail, the person can use the phone, send a letter or use another e-mail system if available. Clearly, in this situation alternative technologies represent several attitude objects that might influence the individual's attitudes towards using electronic mail. If we study this particular user and his evaluations of the usefulness and ease of use of electronic mail, we could in fact run the risk of not being able to predict his behavior, since other technologies that we did not investigate were more positively evaluated and hence preferred. In other words, we believe that the user's attitudes towards using one specific technology, such as e-mail, are formed through a process where the outcomes of using several technologies are compared. The user will most likely prefer the technology that, compared to the others, is evaluated as the best. Because of this implicit assumption in TRA and TAM, it seems that these models were not intended to explain use in situations where the user has a set of technologies to choose from. As a consequence, when alternative systems exist, the models can only demonstrate, but not explain, the lack of adoption or use of the specific system studied. On the other hand, if we include alternative technologies, the models can also tell us why one particular system is not used. By comparing alternatives, the models can overcome the common problem of empirical studies that are unable to explain the observed low correlations between positive attitudes toward a particular system and the use of that system. In order to overcome this problem and increase the understanding of individuals' technology acceptance in organizations, we need a theory that can handle multiple-choice situations. One way of achieving this is to change the definitions of usefulness, ease of use and use in TAM to account for the individual's comparisons between technologies. If the user prefers the best system in situations with alternative systems, the concepts in TAM must be adjusted to reflect the outcome of these comparisons. Perceived usefulness is in the original version of TAM defined as "the degree to which a person believes that using a particular system would enhance his or her job performance" (Davis 1989).

We suggest a more precise definition to account for multi-system situations: "the degree to which a person believes that using a particular system would be better than alternative systems in enhancing his or her job performance". Likewise, we suggest the following definition of ease of use: "the degree to which a person believes that using a particular system would require the least effort compared to alternative systems". Finally, we suggest that use is defined as relative use: "the degree to which a person uses a particular system compared to the total use of all alternative systems". With these definitions we no longer assume that there is one attitude object, in terms of one particular system, that is reflected in the individual's attitudes towards using that system.
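The three relative constructs can be stated compactly. With k alternative systems, let U_i and EOU_i denote a respondent's nominal usefulness and ease-of-use scores for system i, and n_i the number of messages sent with it (this notation is ours, introduced here for illustration, not taken from the original paper):

```latex
U_i^{\mathrm{rel}} = \frac{U_i}{\sum_{j=1}^{k} U_j}, \qquad
EOU_i^{\mathrm{rel}} = \frac{EOU_i}{\sum_{j=1}^{k} EOU_j}, \qquad
Use_i^{\mathrm{rel}} = \frac{n_i}{\sum_{j=1}^{k} n_j}
```

Each relative score is thus the system's share of the respondent's total across all alternatives, so the scores for one respondent sum to one per construct.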

RESEARCH MODEL AND HYPOTHESIS

The two independent variables, perceived usefulness and ease of use, along with the dependent variable system use, form the conceptual model shown in Figure 1. The two independent variables are expected to affect the dependent variable directly.

[Figure 1 is a box diagram: "External factors: alternative systems" feed the two constructs "Usefulness of one system relative to the total usefulness of all systems" and "Ease of use of one system relative to the total ease of use of all systems", which in turn point to "Use of one system relative to the total use of all systems".]

Figure 1: TAM in multiple-choice situations


As indicated by Figure 1 and the previous text, our argument in this paper can be tested through the following hypothesis: There are positive correlations between user satisfaction concepts, in terms of perceived usefulness and ease of use, and technology acceptance in terms of system use. In situations where the potential users have alternative technologies, the relationships between user evaluations and use of a specific technology are more positive when alternative technologies are integrated in the model than when such alternatives are not included.

METHODOLOGY

In order to test this hypothesis, a survey was conducted in a Norwegian oil company. A questionnaire was distributed to a random sample of 1009 respondents. Prior to the survey, the questionnaire was pretested on 17 respondents from different hierarchical positions in the company. 495 questionnaires were returned, representing a response rate of 49%. Of all possible responses, 9% were missing; these missing values were estimated using the EM algorithm (see Little & Rubin 1987 for a full description of this method). Questionnaires with more than 60% missing responses were deleted from the sample, resulting in a final sample of 472 complete questionnaires.
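To illustrate the kind of EM-based estimation of missing responses referred to above, here is a minimal sketch under a multivariate normal model. This is not the authors' code, and for brevity the M-step omits the conditional-covariance correction of full EM, so it amounts to regression imputation iterated to convergence in the spirit of Little & Rubin (1987):

```python
import numpy as np

def em_impute(X, n_iter=50, tol=1e-6):
    """Iteratively impute NaN entries under a multivariate normal model.

    Each pass estimates the mean and covariance from the current
    completed data, then replaces each missing entry with its
    conditional mean given the row's observed entries."""
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    # Start from column means.
    filled = np.where(miss, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        mu = filled.mean(axis=0)
        cov = np.cov(filled, rowvar=False) + 1e-9 * np.eye(X.shape[1])
        prev = filled.copy()
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # Conditional mean of missing entries given observed ones.
            beta = cov[np.ix_(m, o)] @ np.linalg.inv(cov[np.ix_(o, o)])
            filled[i, m] = mu[m] + beta @ (filled[i, o] - mu[o])
        if np.max(np.abs(filled - prev)) < tol:
            break
    return filled
```

With strongly correlated items, the imputed value converges toward the value predicted by the regression on the observed items, which is the behavior one wants for Likert-type scale data.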

Systems

Seven systems were examined in this study. All respondents had access to three electronic mail systems (Memo, Lotus Notes Mail, Lotus Notes Databases). Lotus Notes Databases is an object storage facility that was used to distribute, store and organize electronic messages to members of each respondent's network. The respondents could also use four traditional communication technologies (dyadic meetings, formal group meetings, telephone and internal mail). Use of the systems was not mandatory; it was left to each respondent to decide which technology to use when sending messages.

Operational measures of system use

System utilization was measured as self-reported use. For every system, respondents were asked to record the number of work-related messages they had sent each day. Except for telephone, use per week was also included as an additional option for less frequent users. The answers were combined into a common measure of technology usage. In order to test the hypothesis, the answers were also combined into a measure of the relative use of each system compared to the total use of all systems.
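The relative-use transformation described above can be sketched as follows. The system names match the study, but the counts are invented for the example, and the weekly-to-daily conversion assumes a five-day work week, which the paper does not specify:

```python
# Hypothetical self-reported messages per day for one respondent.
daily_messages = {
    "Memo": 2.0,
    "Lotus Notes Mail": 5.0,
    "Lotus Notes Databases": 1.0,
    "Dyadic meetings": 1.0,
    "Formal group meetings": 0.5,
    "Telephone": 8.0,
    "Internal mail": 2.0 / 5.0,  # reported as 2 per week; assume 5 workdays
}

total_use = sum(daily_messages.values())

# Relative use: each system's share of the respondent's total message volume.
relative_use = {system: n / total_use for system, n in daily_messages.items()}
```

By construction the relative-use scores sum to one per respondent, so each score expresses a system's share of the respondent's total communication rather than an absolute frequency.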

Measures of usefulness and ease of use

The items used to construct the perceived usefulness (U) and ease of use (EOU) scales were adopted from Davis (1989). The wording in both scales was adjusted to fit the specific technologies included in this study. The response options were anchored on a 6-point Likert-type scale ranging from (1) strongly disagree to (6) strongly agree. During pretesting, several respondents reported problems interpreting items 6, 7 and 8 in Davis' original scale for EOU; these items were deleted from the scale. Items 1 and 2 for internal mail were also perceived as problematic and were deleted. Item 3 was meaningful only for Lotus Notes Mail and was deleted for all other technologies. The resulting scale consisted of 4 items for internal mail, 7 items for Lotus Notes Mail, and 6 items for the other technologies.


The pretest also revealed problems in interpreting items 5, 6 and 8 in Davis' original scale for usefulness, and these items were therefore deleted. The resulting scale consisted of 7 of the 10 items in the original scale. After instrument validation, the data collected using these scales were transformed into a measure of U and EOU for every system relative to the total U and EOU reported for all seven systems. In order to test the hypothesis, these relative scores were compared to the nominal scores for each system.

RESULTS

Table 1 shows the results from confirmatory factor analysis of convergent and discriminant validity for the measurement scales and items used in this study. Discriminant validity was assessed by comparing the model fit in nested two-factor models which combined usefulness and ease of use into one two-factor model for each technology. It is indicative of discriminant validity if the model fit for the restricted model, where the correlation between the factors is fixed to 1.0, is significantly lower than in the freely estimated model. Because our data depart from the assumption of multivariate normality, we used a robust comparative fit index which adjusts for this departure (Satorra & Bentler 1988). Convergent validity was assessed by computing the factor reliability for each of the measurement scales; it is indicative of convergent validity if the factor reliability scores for usefulness and ease of use for each technology are above 0.7 (Jöreskog 1971). The reliability of each item in the scales was assessed by squaring its factor loading. According to Fornell and Larcker (1981), these values indicate the average variance extracted for each item, and levels above 0.5 indicate acceptable item reliability, because the item then explains more systematic variance in its factor than error variance. As Table 1 shows, all scales have discriminant validity, and every item in these scales has reliability above the recommended level. The table also shows that usefulness is sufficiently represented by its respective items for each technology, indicated by factor reliability above the recommended level. For ease of use, the scores on factor reliability are slightly below the recommended level for dyadic meetings and telephone; for all other technologies the factor reliability is above the recommended level.

Table 1. Instrument validation - results from confirmatory factor analysis

Variables                  Indicators   Model fit(a)    Factor          Average variance  Discriminant validity(d)
                           (retained)   RCFI (freely    reliability(b)  extracted(c)      RCFI (restricted
                                        estimated)                      (AVE)             model)

Ease of use
  Lotus Notes Mail         5 of 8       0.973**         0.895           0.63              0.807**
  Lotus Notes Databases    4 of 7       0.972**         0.860           0.61              0.870**
  Memo                     6 of 8       0.963**         0.904           0.61              0.828**
  Dyadic Meetings          2 of 7       0.982**         0.674           0.51              0.922**
  Group Meetings           2 of 7       0.980**         0.760           0.62              0.903**
  Telephone                2 of 7       0.980**         0.671           0.51              0.872**
  Internal Mail            2 of 7       0.988*          0.752           0.57              0.813**

Usefulness
  Lotus Notes Mail         7 of 7       0.973**         0.948           0.72              0.807**
  Lotus Notes Databases    7 of 7       0.972**         0.946           0.71              0.870**
  Memo                     7 of 7       0.963**         0.943           0.70              0.828**
  Dyadic Meetings          4 of 7       0.982**         0.863           0.61              0.922**
  Group Meetings           4 of 7       0.980**         0.880           0.65              0.903**
  Telephone                3 of 7       0.980**         0.784           0.55              0.872**
  Internal Mail            2 of 7       0.988*          0.795           0.66              0.813**

(a) RCFI (Robust CFI) shows the overall fit for the measurement models. RCFI is scaled to adjust for the departure from the assumption of multivariate normality (Satorra & Bentler 1988). Recommended value (Hu & Bentler 1995): > 0.90.
(b) Factor reliability. Recommended value (Jöreskog 1971): > 0.70.
(c) Average variance extracted (AVE). Recommended value (Fornell & Larcker 1981): > 0.50.
(d) Discriminant validity for nested models, where a restricted version of the two-factor model (covariance between factors restricted to 1.0) is compared to a freely estimated version. When the fit of the restricted model is poorer, in terms of lower RCFI, than that of the freely estimated model, the measurement models have discriminant validity.
* p
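The reliability statistics reported in Table 1 can be reproduced from standardized factor loadings. Here is a minimal sketch of Jöreskog's (1971) factor (composite) reliability and Fornell & Larcker's (1981) AVE; the loadings in the usage example are invented for illustration, not the study's estimates:

```python
import numpy as np

def composite_reliability(loadings):
    """Joreskog's (1971) factor reliability from standardized loadings,
    assuming uncorrelated measurement errors."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam**2              # error variance per indicator
    return float(lam.sum()**2 / (lam.sum()**2 + error_var.sum()))

def average_variance_extracted(loadings):
    """Fornell & Larcker's (1981) AVE: mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam**2).mean())

# Usage with made-up loadings for a three-indicator scale:
lam = [0.8, 0.8, 0.8]
cr = composite_reliability(lam)        # about 0.84, above the 0.70 cutoff
ave = average_variance_extracted(lam)  # 0.64, above the 0.50 cutoff
```

Squaring an individual loading gives the per-item reliability used in the text, so an item with a standardized loading above about 0.71 clears the 0.5 item-reliability threshold.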