J. EDUCATIONAL COMPUTING RESEARCH, Vol. 26(2) 133-153, 2002

DEVELOPING THE COMPUTER USER SELF-EFFICACY (CUSE) SCALE: INVESTIGATING THE RELATIONSHIP BETWEEN COMPUTER SELF-EFFICACY, GENDER AND EXPERIENCE WITH COMPUTERS

SIMON CASSIDY
PETER EACHUS
University of Salford

ABSTRACT

The article describes the development and validation of the 30-item Computer User Self-Efficacy (CUSE) Scale. Self-efficacy beliefs have been identified as a factor which may contribute to the success with which a task is completed. Because of the increasing reliance on computer technologies in all aspects of life, it is important that the construct is measured accurately and appropriately. In particular, the article focuses on the measurement of computer self-efficacy in student computer users and its relevance to learning in higher education. The scale was found to have high levels of internal and external reliability and construct validity. Results showed significant positive correlations between CSE and both computer experience and familiarity with computer software packages (which were significant predictors of CSE), and that owning a computer and computer training were associated with increased CSE. In addition, males showed significantly higher CSE than females. It is suggested that the scale may be used to identify individuals, in particular students, who will find it difficult to exploit a learning environment which relies heavily on computer technologies. Once identified, motivational and personal control issues can be addressed with these individuals.

© 2002, Baywood Publishing Co., Inc.

INTRODUCTION

Social Cognitive Theory

The construct of self-efficacy has emerged as a central facet of social cognitive theory. Social cognitive theory posits that behavior is best understood in terms of


“triadic reciprocality” [1], where behavior, cognition, and the environment exist in a reciprocal relationship and thereby influence, and are to a great extent determined by, each other. Self-efficacy can be defined as the beliefs a person holds about their capability to successfully perform a particular behavior or task. An important distinction needs to be drawn between self-efficacy, which deals with beliefs about the ability to perform actions, and locus of control theory [2], which is concerned with beliefs about the outcomes of such actions. For example, an individual may believe that their environment is, in principle, controllable (i.e., exhibiting an internal LOC) but that they personally lack the skills or ability with which to exert such control (i.e., exhibiting low self-efficacy beliefs).

Levels of self-efficacy are thought to be determined by factors such as previous experience (success and failure), vicarious experience (observing others’ successes and failures), verbal persuasion (from peers, colleagues, relatives), and affective state (emotional arousal, e.g., anxiety). Self-efficacy levels have been shown to be related to choice of task, motivational level and effort, and perseverance with the task. Because self-efficacy is based on self-perceptions regarding particular behaviors, the construct is considered to be situation specific or domain sensitive. That is, a person may exhibit high levels of self-efficacy (indicating a high level of confidence) within one domain, for example sport, while simultaneously exhibiting low levels of self-efficacy within another domain such as academic ability. The suggestion made by Bandura is that the perception that one has the capabilities to perform a task will increase the likelihood that the task will be completed successfully [1].

Computer Self-Efficacy

Self-efficacy beliefs have been shown to influence behavior in a wide variety of contexts, e.g., mental and physical health [1, 3], academic achievement [4, 5], and stock market investment [6]. The current article examines self-efficacy beliefs in the context of computer use. Computers are becoming increasingly commonplace in all aspects of everyday life, offering the user more sophisticated and more complex applications of technology. The human-computer interface is becoming increasingly intuitive, but for the inexperienced user it still poses formidable problems. The power of modern computers has the potential to affect many facets of our everyday lives, but for many people the ability to exert that power is limited by an inability to control that potential. This inability may be real—in that the individual genuinely may not have the necessary skills or abilities—or it may simply be a belief which results in incapacity and poor motivation, as in the case of self-efficacy expectations.

Self-efficacy beliefs have repeatedly been reported as a major factor in understanding the frequency and success with which individuals use computers. Compeau and Higgins tested several hypotheses related to a hypothetical linear model of computer use based on social cognitive theory [7]. In their study,


individuals with high self-efficacy used computers more, enjoyed using them more, and experienced less computer-related anxiety. Level of enjoyment and anxiety were also identified as significant factors in computer use. The importance of self-efficacy in explaining computer use was also demonstrated by Hill, Smith, and Mann [8], who found that computer self-efficacy beliefs affected whether individuals chose to use computers irrespective of their beliefs about the value of doing so.

Experience and Computer Self-Efficacy

Computer experience has also been associated with levels of computer self-efficacy. Torkzadeh and Koufteros found that the computer self-efficacy of a sample of 224 undergraduate students increased significantly following a computer training course [9]. Hill and colleagues also found a significant positive correlation between previous computer experience and computer self-efficacy beliefs in a sample of 133 female undergraduates [8]. They also found that experience influenced behavioral intentions to use computers only indirectly, through self-efficacy beliefs. The suggestion here, then, is that it is the type of computer experience which is important rather than computer experience per se. Thus, positive past experience with computers will increase self-efficacy beliefs while negative experience will reduce them. This view is supported by Ertmer, Evenbeck, Cennamo, and Lehman, who found that although positive computer experience increased computer self-efficacy, the actual amount of experience (i.e., time on task) was not correlated with the self-efficacy beliefs of undergraduate students [10]. A further study by Cassidy and Eachus found that levels of computer self-efficacy failed to increase in a sample of undergraduate students following completion of an introductory information technology module [11]. This was despite an increase in self-reported familiarity with computers (i.e., number of software applications used) in the group. This suggests that it is the quality, not the quantity, of experience which is the critical factor in determining self-efficacy beliefs.

Gender Differences in Computer Self-Efficacy

Some studies have reported gender differences as a contributing factor in self-efficacy beliefs. Miura found males to have significantly higher computer self-efficacy than females in a sample of undergraduate students [12]. Males also scored higher on perceived relevance of computer skills to future career, interest in knowing how a computer works, and intentions to take computer courses. More recent work investigating gender differences in computer self-efficacy indicates that the difference may be related to the perceived masculinity of the task in question. Murphy, Coover, and Owen found gender differences in self-efficacy for advanced skills and mainframe computer skills, with men showing higher self-efficacy on both [13]. There was, however, no gender difference for beginning-


level computer skills. Torkzadeh and Koufteros also found gender differences in self-efficacy for computer file and software management, but not for beginning-level, mainframe, or advanced skills [9]; this difference disappeared following training. Busch also reports higher levels of self-efficacy in men for complex but not simple tasks across different computer applications (i.e., word processing, spreadsheets), independent of training (i.e., working with the applications for one year) [14]. It appears that it is the complexity of the task which determines any gender difference in computer self-efficacy: the more complex the task, the higher its perceived masculinity, and hence the higher men’s self-efficacy for such tasks.

Measuring Computer Self-Efficacy

The nature of self-efficacy as an egocentric construct demands that it be measured directly, rather than indirectly. Self-efficacy is therefore measured using self-report scales, and a number of these have been developed to measure self-efficacy in the specific domain of computer use. Vasil, Hesketh, and Podd developed a 9-item measure of computer self-efficacy for use with school children [15]. Children rate their level of confidence for learning nine specific computer-related tasks on a 10-point Likert scale. Miura measured computer self-efficacy in a sample of university students using 15 computer-related tasks categorized as computer programming, computer course work, and personal uses of computers [12]. Respondents rate their perceived level of confidence for completing each of the tasks; the sum of these ratings provides an overall composite score. Hill and colleagues’ scale consists of only four items, and doubts have been raised regarding its validity as a measure of computer self-efficacy given that the majority of items relate only to the general nature of computing [8; cf. 7]. Other scales often only incorporate measures of computer self-efficacy as components. An example is the Computer Attitude Scale, which includes a 10-item Computer Confidence sub-scale [16]. The Computer Technologies Survey includes a comprehensive 46-item sub-scale measuring self-efficacy in relation to specific computer technologies such as word processing, e-mail, and print functions, but does not provide an overall composite score for self-efficacy [17]; instead it indicates self-efficacy levels for individual technologies. Busch developed a scale to measure self-efficacy beliefs in relation to two specific software packages, WordPerfect and Lotus 1-2-3 [14]. The scale includes sub-sections to measure self-efficacy for simple and complex tasks on each of the packages. Each sub-section comprises three tasks on which subjects rate themselves on a 5-point scale according to their level of confidence in completing the task. Murphy and colleagues provide a 32-item scale which presents respondents with a series of beginning, advanced, and mainframe computer tasks; for each task, respondents indicate their level of agreement with the statement “I feel confident . . .” followed by the task [13]. Finally, Compeau and Higgins have


developed a 10-item scale concerned with general computer use in the context of completing a job [7]. Respondents use a 10-point Likert scale to judge how confident they would be completing a particular [hypothetical] job with the aid of a new [hypothetical] software package and with varying levels of support. The scale was developed using a large sample of mainly graduate and postgraduate business professionals and managers. Although the scale has been widely accepted as an important contribution to the measurement of CSE, Compeau and Higgins suggest two major potential limitations brought about by the hypothetical nature of the scale scenario [7]: first, respondents may not be capable of imagining the scenario in the detail necessary to answer the questions; second, the scale may be measuring, to some degree, learning self-efficacy in addition to computer self-efficacy (see Table 1).

While each of these scales is of some value to the measurement of computer self-efficacy, there are limitations to currently available measures. Several instruments present reliability problems because they comprise too few items, and may not be valid in the current context because of the nature of the items [e.g., 8]. Other instruments have been developed using populations of children [e.g., 15] or business professionals [e.g., 7], which may limit their use with more general populations. Some instruments show a more fundamental flaw of bias, with all items positively worded [13]. The most substantive limitation of current measures relates to the task specificity of the items comprising many of the scales. Items with high task specificity dominate all the instruments mentioned, some to a greater degree than others. While it is conceded that task specificity fits with the notion that situation-specific measures are superior indicators of self-efficacy, asking subjects to respond to component tasks such as saving files, or to tasks which relate purely to the functioning of a particular software package, takes the notion beyond its useful limits. This is particularly so given the current uniformity of software environments provided by Windows, which makes reference to specific packages relatively obsolete. Highly task-specific scales therefore limit their own application. As such, there is a need for an instrument capable of providing a more general, domain-specific measure of computer self-efficacy.

Here we describe the development and validation of the 30-item Computer User Self-Efficacy (CUSE) scale, designed to measure general computer self-efficacy in an adult student population. The rationale for the development of such a scale relates, in general terms, to the impact computers are having on many aspects of life and, in particular, to the increasing reliance in higher education on computer technology to support learning. Increasingly, students are expected, perhaps because of the intuitive nature of the human-computer interface, to be proficient users of an array of software applications. There is often little in the way of formal training, and low self-efficacy may be a significantly limiting factor for students exploring new applications vital for academic progress—the Internet being a prime example.


Table 1. Examples from Instruments Measuring Computer Self-Efficacy

Vasil et al. (1987)
  Example item: “Switch on the computer”; “Copy files to keep back-ups”
  Response format: “No confidence” (0) / “Complete confidence” (5)

Busch (1995)
  Example item: Three simple and three complex tasks in WordPerfect and Lotus 1-2-3
  Response format: “No confidence at all” (0) / “Completely confident” (5)

Miura (1987)
  Example item: Computer literacy in relation to tasks in three areas: programming / personal computer use / computer course work
  Response format: “Not very confident” (10) / “Completely confident” (100)

Hill et al. (1989)
  Example item: “I will never know how to use a computer”; “Computer errors are very difficult to fix”
  Response format: “Totally agree” (1) / “Totally disagree” (5)

Murphy et al. (1989)
  Example item: “I feel confident using the computer to write a letter or essay”; “I feel confident troubleshooting computer problems”
  Response format: “Very little confidence” (1) / “Quite a lot of confidence” (5)

Compeau and Higgins (1995)
  Example item: “I could complete the job using the software package if: there was no one around to tell me what to do; if someone showed me how to do it first”
  Response format: “Not at all confident” (1) / “Totally confident” (10)

Kinzie and Delcourt (1991)
  Example item: “I feel confident accessing previous files with a word processing program”
  Response format: “Strongly disagree” (1) / “Strongly agree” (4)

The development of an appropriate measure of computer self-efficacy may enable students ‘at risk’ to be identified at an early stage. In addition to validating the scale, the study investigates the relationships between experience with computers and CSE, and between gender and CSE, positing that experience, computer training, familiarity with software packages, and ownership of a computer will all be associated with increased CSE [8, 9], and that males may exhibit significantly higher levels of CSE than females [12-14].


METHOD—PHASE ONE: 47-ITEM COMPUTER USER SELF-EFFICACY SCALE

The construct domain of computer self-efficacy was sampled using items generated by experienced and inexperienced staff (academic and administrative) and computer users within the University Faculty of Health Care and Social Work Studies. These items constituted an initial 47-item scale on which respondents indicated their level of agreement or disagreement with each statement along a 6-point Likert scale. The items were of a general yet domain-specific nature, e.g., “I consider myself to be a skilled computer user” (see Appendix A for further examples). Affirmation bias was controlled for by wording half of the statements negatively, so that a “disagree” response was needed to add positively to the composite self-efficacy score. A high score therefore indicates high computer self-efficacy.

Additional Measures

In addition to the 47-item computer user self-efficacy scale (Part Two of the instrument), the following related factors were measured in Part One of the instrument (see Appendices A and B):

Computer Experience—measured on a 5-point self-report Likert scale from “none” (scored 1) to “extensive” (scored 5). No further data were collected relating to type of experience.

Familiarity with Software Packages—respondents ticked a generic list of nine software packages (e.g., “word processing packages,” “spreadsheets,” “statistics packages”), with the option to specify additional packages not listed; each package contributed equally to a summative total based on the number of packages used.

Computer Training—whether respondents had ever attended a computer training course (YES/NO). Again, no further data were collected relating to type of course.

Computer Ownership—whether respondents owned their own computer (YES/NO).

Sample

The target population for the instrument development study (phases one and two) was university students. A total of 101 participants were randomly sampled from a population of university students following a variety of degree programs in the Faculty of Health. The sample represented both young and mature students and consisted of 16 males and 85 females with a mean age of 29.19 years (SD = 7.7, range 18-52). While females are over-represented in the sample, this reflects the student enrollment profile in the Faculty and in universities worldwide, where there is an increasing trend for female students to outnumber male students [cf. 18-21]. As such, it is considered reasonable to generalize findings to a general population of university students.


Procedure

All participants were asked to complete the instrument (Parts One and Two) anonymously and voluntarily during a normal lecture period.

Results

Preliminary analyses in phase one of instrument development found the instrument to have acceptable psychometric properties.

Reliability

Internal reliability, as measured by Cronbach’s alpha, was high (alpha = 0.94), indicating a particularly high degree of internal consistency (i.e., the average correlation between scale items approaches the optimum coefficient of 1). This is evidence of homogeneity among items and demonstrates that, despite being a multi-item scale, each item contributes to the measurement of a single construct.
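For readers wishing to reproduce this kind of reliability check, Cronbach's alpha can be computed directly from a respondents-by-items response matrix. The sketch below uses invented random responses purely to illustrate the calculation; it is not the study's data, and uncorrelated random items will naturally yield a much lower alpha than the 0.94 reported here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 6-point responses (100 respondents x 47 items), for illustration only;
# a coherent scale with shared variance among items would produce a higher alpha.
rng = np.random.default_rng(0)
responses = rng.integers(1, 7, size=(100, 47))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```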

Validity

The construct validity of the instrument was demonstrated by significant positive correlations between computer self-efficacy and both computer experience (r = 0.55, p < 0.001) and familiarity with software packages (r = 0.53, p < 0.001). As previous studies [8, 14, 22, 23] have reported convergence between computer self-efficacy and these variables, the findings support the notion that the instrument is measuring what it purports to measure (computer self-efficacy), and therefore its validity.
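These convergent correlations are ordinary Pearson coefficients between the summed scale score and the single-item experience and familiarity measures. A minimal sketch, using short hypothetical arrays in place of the study data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical stand-ins for the study variables (one value per respondent).
cse_total = np.array([152, 101, 134, 118, 160, 95, 127, 141, 110, 149])
experience = np.array([5, 2, 4, 3, 5, 1, 3, 4, 2, 5])    # 1 = none .. 5 = extensive
familiarity = np.array([7, 2, 5, 3, 8, 1, 4, 6, 2, 7])   # number of packages used

r_exp, p_exp = pearsonr(cse_total, experience)
r_fam, p_fam = pearsonr(cse_total, familiarity)
print(f"CSE x experience:  r = {r_exp:.2f}, p = {p_exp:.3f}")
print(f"CSE x familiarity: r = {r_fam:.2f}, p = {p_fam:.3f}")
```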

Scale Refinement

Factor and item analyses conducted on data collected in this part of the study suggested that the scale was unidimensional. The number of items could therefore be reduced without adversely affecting the psychometric properties of the instrument. Through a process of selection based on reliability coefficients and factor loadings, the 47-item scale was refined to 30 items (see Appendix A).

METHOD—PHASE TWO: 30-ITEM COMPUTER USER SELF-EFFICACY SCALE

The second phase of the study was concerned with both assessing the psychometric properties of the refined 30-item scale and investigating the relationship between self-efficacy and computer experience, use of software packages (i.e., familiarity), computer training, computer ownership, and gender (measured as in


Phase One, using Part One of the instrument without modification [Appendices A and B]).

Sample

The total sample (N) was 212, with a mean age of 26.18 years (SD = 8.31, range 18-55). There were 94 males and 113 females in the sample (gender data were missing for five participants). While the target population for phase two remained university students, the actual sample consisted of five groups of participants: four groups were university students sampled from the Faculties of Health and Computing, and a fifth group from outside the University completed the CSE scale via the Internet. Internet responses came from a worldwide and diverse population including academics, undergraduate and postgraduate students, and human resource professionals. The rationale for the inclusion of discrete groups within the sample was to generate validity data for the instrument. Given that experience with computers, and technology in general, is likely to influence levels of computer self-efficacy [8], the instrument should be able to discriminate between participant groups who are known to differ in their experience and familiarity with computers. The four student groups were therefore selected on the basis of likely computer experience, given the nature of the course on which they were enrolled, and through consultation with course leaders who were able to comment on professional placements and entry requirements relevant to computer experience. The first group was first-year physiotherapy students (n = 48, mean age 23.37, SD 5.28, 10 males, 38 females) who, as a group, would have relatively minimal computer experience. The group deemed to have extensive computer experience was software engineering students (n = 65, mean age 21.34, SD 4.24, 59 males, 2 females). Between these two extremes were a group of first-year radiographers (n = 27, mean age 22.69, SD 5.56, 3 males, 24 females) who regularly use electronic equipment, a group of post-registration nurses (n = 31, mean age 33.14, SD 7.66, all female) who rarely use computers, and a group of Internet users (n = 41, mean age 34.33, SD 8.79, 22 males, 18 females) who have at least moderate experience of computers. It was predicted that there would be a significant difference in computer user self-efficacy scores between the five groups, with the software engineers scoring higher than all other groups. The inclusion of the Internet group also broadened the sample parameters, thereby increasing the extent to which the instrument could be used in a wider population.

Procedure

All groups of students completed the refined instrument (Parts One and Two) voluntarily during normal lecture time, using a pseudonym to enable retest data to be collected. The Internet user group [recruited through Internet user groups] completed the identical instrument online. Respondents in the Internet group


returned their responses via e-mail. In order to gather retest data, all groups—except the software engineers and Internet users—completed the scale a second time one month later.

Results

Findings from phase two of the study provide strong support for the reliability and validity of the Computer User Self-Efficacy Scale.

Reliability

Internal consistency of the 30-item scale, measured using Cronbach’s alpha, was high (alpha = 0.97, N = 184). Test-retest reliability over a one-month period was also high and statistically significant (r = 0.86, N = 74, p < 0.0005).

Validity

Construct validity was assessed, as in phase one of the study, by correlating the self-efficacy scores with a self-reported measure of computer experience and with the number of computer packages used (i.e., familiarity). Both correlations were significant: experience correlated at r = 0.79, p < 0.0005, N = 212, and familiarity at r = 0.75, p < 0.0005, N = 210. Criterion validity (known-groups method) was assessed by comparing total computer self-efficacy scores across the five groups. A one-way ANOVA identified a significant main effect for group (F(4, 207) = 50.66, p < 0.0005). Post hoc analysis showed, as predicted, that the software engineers scored significantly higher than all other groups; Internet users scored higher than all groups except software engineers; radiographers scored higher than nurses and physiotherapists; and there was no difference between nurses and physiotherapists: i.e., software engineers > Internet users > radiographers > nurses = physiotherapists (see Table 2).

Table 2. Mean Computer Self-Efficacy Scores by Group

Group                  n     Mean (a)   SD
Software engineers     65    159.05     15.41
Internet users         41    144.02     15.36
Radiographers          27    115.68     26.81
Physiotherapists       48    109.08     31.27
Nurses                 31    101.52     30.50

(a) Minimum score 30 / maximum score 180.
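The known-groups comparison reported above is a one-way ANOVA followed by post hoc contrasts. The sketch below simulates group scores from the published means and SDs purely for illustration, and uses Bonferroni-corrected pairwise t-tests as one possible post hoc procedure; the article does not specify which post hoc test the authors used.

```python
from itertools import combinations
import numpy as np
from scipy.stats import f_oneway, ttest_ind

# Hypothetical group scores (total CUSE, range 30-180), simulated from Table 2;
# the individual-level data are not published.
rng = np.random.default_rng(1)
groups = {
    "software engineers": rng.normal(159, 15, 65),
    "internet users":     rng.normal(144, 15, 41),
    "radiographers":      rng.normal(116, 27, 27),
    "physiotherapists":   rng.normal(109, 31, 48),
    "nurses":             rng.normal(102, 30, 31),
}

f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# One simple post hoc option: pairwise t-tests with a Bonferroni correction.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t = {t:.2f}, p(Bonferroni) = {min(p * len(pairs), 1.0):.4g}")
```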


Training and Self-Efficacy, Experience, and Familiarity (see Table 3)

The sample was divided into those who had (n = 98) and had not (n = 112) attended a computer training course, according to their response to the question “Have you ever attended a computer training course?” Independent t-tests revealed that the trained group had significantly higher self-efficacy (t(208) = 3.06, one-tailed, p < 0.002). The trained group was also more experienced, according to the 5-point self-report scale from “none” to “extensive” (t(208) = 3.6, two-tailed, p < 0.0005), and familiar with a greater number of packages, according to the self-report item asking respondents to indicate the computer software packages they had used (t(206) = 2.88, two-tailed, p < 0.004).
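An independent-samples t-test of this kind can be run as follows. The trained and untrained scores here are simulated from the means and SDs in Table 3 rather than taken from the study data, and the one-tailed p value is obtained by halving the two-tailed result when the difference lies in the predicted direction.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical CUSE totals for trained and untrained respondents (illustrative only).
rng = np.random.default_rng(2)
trained = rng.normal(138, 29, 98)
untrained = rng.normal(124, 36, 112)

t_stat, p_two_tailed = ttest_ind(trained, untrained)
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2
df = len(trained) + len(untrained) - 2
print(f"t({df}) = {t_stat:.2f}, one-tailed p = {p_one_tailed:.4f}")
```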

Gender and Self-Efficacy, Experience, and Familiarity (see Table 4)

When grouped according to gender, males (n = 94) had higher self-efficacy scores (t(205) = 9.72, one-tailed, p < 0.0005), were more experienced (t(205) = 11.44, one-tailed, p < 0.0005), and were familiar with a greater number of packages (t(205) = 10.78, one-tailed, p < 0.0005) than females (n = 113). Training did not affect the gender difference, with males continuing to show higher self-efficacy scores than females in both the trained (t(92) = 6.15, one-tailed, p < 0.001) and untrained groups (t(109) = 6.49, one-tailed, p < 0.0005).

Computer Ownership and Self-Efficacy, Experience, and Familiarity (see Table 5)

Comparative analysis was carried out by grouping the sample according to whether or not they owned their own computer.

Table 3. Mean Computer Self-Efficacy, Experience, and Familiarity Scores for Trained and Untrained Groups

                        Self-efficacy        Experience (a)     Familiarity (b)
Group                   Mean      SD         Mean     SD        Mean     SD
Trained (n = 98)        137.96    28.67      3.86     1.1       4.91     2.19
Untrained (n = 112)     124.37    35.67      3.3      1.2       3.99     2.38

(a) Rating 1 to 5, increasing with experience.
(b) Score between 0 and infinity.


Table 4. Mean Computer Self-Efficacy, Experience, and Familiarity Scores for Males and Females

                        Self-efficacy        Experience         Familiarity
Group                   Mean      SD         Mean     SD        Mean     SD
Males (n = 94)          150.44    23.12      4.35     0.89      5.96     1.79
Females (n = 113)       113.68    31.22      2.91     0.91      3.12     1.95

Table 5. Mean Computer Self-Efficacy, Experience, and Familiarity Scores for Participants Who Did and Did Not Own a Computer

                        Self-efficacy        Experience         Familiarity
Owns a computer         Mean      SD         Mean     SD        Mean     SD
Yes (n = 141)           141.17    27.93      4.0      1.02      5.17     2.21
No (n = 71)             110.48    33.3       2.72     0.88      2.97     1.83

Those participants who owned their own computer had higher self-efficacy (t(210) = 7.07, two-tailed, p < 0.0005), were more experienced (t(210) = 9.09, two-tailed, p < 0.0005), and were familiar with more packages (t(210) = 7.18, two-tailed, p < 0.0005).

Regression Analyses

In order to assess the relative contribution of experience, familiarity with packages, computer training, computer ownership, age, and gender to the explanation of computer self-efficacy, stepwise regression analyses were carried out. Experience was the most important predictor, accounting for 63.51 percent of the variability in CSE (R² = .63509, F(1, 196) = 341.121, p < 0.0005). Familiarity with software packages was also a significant predictor, accounting for a further 4.23 percent of the variability in CSE (R² change = .04234, F change (2, 195) = 25.597, p < 0.0005). None of the other variables were found to contribute significantly to CSE (p > 0.05). As the correlations between independent variables were less than .49, multicollinearity was not considered to be at such a level (i.e., above .8) as to destabilize regression coefficients [24].
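The variance-partitioning logic behind these figures can be illustrated with a simple two-step (hierarchical) regression: fit CSE on experience alone, then add familiarity and take the change in R². The data below are simulated and the entry order is imposed by hand, so this is a sketch of the idea rather than the stepwise procedure the authors ran.

```python
import numpy as np

def r_squared(y: np.ndarray, X: np.ndarray) -> float:
    """R^2 for an ordinary least-squares fit of y on X (intercept added here)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hypothetical data standing in for the study variables.
rng = np.random.default_rng(3)
n = 200
experience = rng.uniform(1, 5, n)                     # 5-point experience rating
familiarity = rng.poisson(4, n).astype(float)         # number of packages used
cse = 30 + 22 * experience + 4 * familiarity + rng.normal(0, 12, n)

r2_step1 = r_squared(cse, experience[:, None])                          # experience only
r2_step2 = r_squared(cse, np.column_stack([experience, familiarity]))   # add familiarity
print(f"step 1 R^2 = {r2_step1:.3f}")
print(f"R^2 change when familiarity is added = {r2_step2 - r2_step1:.3f}")
```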


DISCUSSION

The primary objective of the study was to develop and validate a scale to measure computer user self-efficacy. This objective was met, with the refined 30-item Computer User Self-Efficacy Scale achieving more than satisfactory levels of internal reliability (alpha = .97) and external reliability (r = .86). Validity of the scale is indicated by its ability to discriminate empirically between samples grouped according to their expected levels of computer competency, and by positive correlations between CSE and experience (r = .79) and between CSE and familiarity with software packages (r = .75). The instrument is generic within the confines of computer use and has been developed using a relatively diverse sample of [mainly] student computer users. On this basis, it is suggested that the scale offers a level of external validity which is superior to many existing measures of CSE. The use of the scale is not limited to specific computer technologies and, it is suggested—given the inclusion of general Internet users in the sample—it may be appropriate for use in general adult populations of computer users.

As with previous research [e.g., 8, 9], CSE correlated significantly with self-report measures of experience with computers and with the number of computer software packages used. Owning a computer was also significantly associated with increased CSE. This fits with the general notion that self-efficacy beliefs develop as a result of interaction with the environment [1]. More importantly perhaps, computer training (i.e., having participated in some form of computer training) was associated with significantly increased CSE. If we venture to assume that training constitutes positive experience, this further validates the finer points of the self-efficacy construct, where positive outcomes, which the individual can reasonably attribute to internal, ability-based sources, contribute significantly to increased self-efficacy beliefs. These findings are supported by Torkzadeh and Koufteros [9], who reported increased CSE following training, and by Ertmer and colleagues [10], who found that it was positive computer experience—not simply “time on task” [which may well include negative experience]—which contributed to increased CSE. It should be noted, however, that other studies [11] have failed to show any impact of training on CSE, suggesting that training does not inherently address self-efficacy needs.

When gender differences in self-efficacy beliefs were explored, males were found to have significantly higher CSE than females. Such findings have been repeatedly reported [e.g., 12], with males also scoring higher for perceived relevance of computer skills to future career, interest in knowing how a computer works, and intentions to take computer courses. Interestingly, some studies report that the gender difference in CSE is negated following training [9]. Such an effect was not present in the current study, with males consistently showing higher CSE than females in both trained and untrained groups. Some studies have explained the gender difference in terms of the perceived masculinity of the task [13, 14],


with males only showing higher CSE than females for those tasks perceived to be masculine (usually the more complex tasks) but not for beginning-level tasks. Given that the instrument used in the current study still revealed gender differences, despite being non-task-specific, perceived masculinity is unlikely to be the solitary determining factor in gender differences in CSE.

As Bandura would predict, the results of stepwise multiple regression analysis demonstrated that the most important predictors of CSE were experience with computers and familiarity with computer software packages, accounting for 63.51 percent and 4.23 percent of the variance in CSE respectively [1]. When controlling for these factors, gender, computer training, computer ownership, and age were not found to be significant predictors of CSE. This implies that experience with computers and familiarity with software packages are important considerations when explaining the effects of gender, training, and computer ownership on CSE. It may be that the effect of these variables on CSE is no more than a function of the indirect effects of experience and use of software packages.

The increasing reliance in higher education on computer technology to support learning, along with the potential effects of self-efficacy beliefs on students’ motivation to exploit the intuitive nature of the human-computer interface, provided a rationale for the development of the CUSE Scale. It is suggested that the scale could be utilized to identify students with low CSE, which may prove an immediate, as well as a long-term, obstacle to academic progress. Without intervention, such students may remain poorly motivated and perceive themselves as having little personal control over their learning environment. In order to make full use of the CUSE Scale in this sense, it will be necessary to assess the predictive validity of the scale by employing it in further longitudinal studies. It is already apparent, however, from the findings of the current study, that CSE beliefs are associated, to a significant extent, with experiential learning, including computer training, exposure to a variety of software applications, and access to computing hardware. In addition, CSE beliefs are significantly associated with gender, a result which is not an artifact of exposure to computer training as has been previously suggested.

Even at this stage, these results have important implications for higher education and the management of the learning environment, applying equally to education as a whole, as well as extending to the world of commerce and industry. In order to employ and effectively exploit computer technology, educators need to address, as a key issue, computer user self-efficacy beliefs in learners. In a study exploring the area of academic self-efficacy, Cassidy and Eachus report findings which demonstrate how educational programs can be designed to increase students’ capability beliefs—rather than knowledge of the topic per se—for specific academic areas, and show that improved capability beliefs are significantly related to attainment of learning objectives [25]. By being aware of the factors which determine capability beliefs in computer users (CSE), and having access to an instrument capable of measuring these beliefs (CUSE Scale), educators can both develop programs


which enhance self-efficacy beliefs and put in place mechanisms which support individuals with negative self-efficacy beliefs. Such an approach is likely to increase computer use—surely the fundamental goal of the study of computer self-efficacy—and improve the degree to which learners might meet learning objectives set by programs which utilize, to any degree, a virtual learning environment.

APPENDIX A: Computer User Self-Efficacy Scale

The purpose of this questionnaire is to examine attitudes toward the use of computers. The questionnaire is divided into two parts. In Part 1 you are asked to provide some basic background information about yourself and your experience of computers, if any. Part 2 aims to elicit more detailed information by asking you to indicate the extent to which you, personally, agree or disagree with the statements provided.

Part 1:

Your name _______________________________________________________
Your age ____________________
Your sex:   M   F

Experience with computers:
  none    very limited    some experience    quite a lot    extensive

Please indicate (tick) the computer packages (software) you have used:
  Word processing packages
  Spreadsheets
  Databases
  Presentation packages (e.g., Harvard Graphics, CorelDRAW)
  Statistics packages
  Desktop publishing
  Multimedia
  Other (specify) ____________________________________________________


Do you own a computer?   YES   NO

Have you ever attended a computer training course?   YES   NO

Part 2:

Below you will find a number of statements concerning how you might feel about computers. Please indicate the strength of your agreement/disagreement with the statements using the 6-point scale shown below. Tick the box (i.e., between 1 and 6) that most closely represents how much you agree or disagree with the statement. There are no correct responses; it is your own views that are important.

Each statement is rated on the same scale:
strongly disagree 1 2 3 4 5 6 strongly agree

1. Most difficulties I encounter when using computers, I can usually deal with.
2. I find working with computers very easy.
3. I am very unsure of my abilities to use computers.
4. I seem to have difficulties with most of the packages I have tried to use.
5. Computers frighten me.
6. I enjoy working with computers.
7. I find that computers get in the way of learning.
8. DOS-based computer packages don't cause many problems for me.
9. Computers make me much more productive.
10. I often have difficulties when trying to learn how to use a new computer package.
11. Most of the computer packages I have had experience with, have been easy to use.
12. I am very confident in my abilities to make use of computers.
13. I find it difficult to get computers to do what I want them to.
14. At times I find working with computers very confusing.
15. I would rather that we did not have to learn how to use computers.
16. I usually find it easy to learn how to use a new software package.
17. I seem to waste a lot of time struggling with computers.
18. Using computers makes learning more interesting.
19. I always seem to have problems when trying to use computers.
20. Some computer packages definitely make learning easier.
21. Computer jargon baffles me.
22. Computers are far too complicated for me.
23. Using computers is something I rarely enjoy.
24. Computers are good aids to learning.
25. Sometimes, when using a computer, things seem to happen and I don't know why.
26. As far as computers go, I don't consider myself to be very competent.
27. Computers help me to save a lot of time.
28. I find working with computers very frustrating.
29. I consider myself to be a skilled computer user.
30. When using computers I worry that I might press the wrong button and damage it.

Thank you for your time

APPENDIX B: Scoring the Computer User Self-Efficacy Scale

Part 1

Experience with computers—This question is scored using a standard Likert format where “none” is scored as 1 and “extensive” is scored as 5.

Number of computer packages used—The respondent scores 1 for each package used, and these are summed to give a total score.


Part 2

Items 1 to 30 are all scored on a 6-point Likert scale.

Items 1, 2, 6, 8, 9, 11, 12, 16, 18, 20, 24, 27, and 29 are positively worded, and the respondent’s response is recorded as the actual scale score for these items; e.g., a response of 4 to item 1 is scored as 4, i.e.:

Strongly Disagree 1 2 3 4 5 6 Strongly Agree

Items 3, 4, 5, 7, 10, 13, 14, 15, 17, 19, 21, 22, 23, 25, 26, 28, and 30 are negatively worded and are scored in reverse, i.e.:

Strongly Agree 1 2 3 4 5 6 Strongly Disagree

A scale score for these items is obtained by subtracting the respondent’s response from 7; e.g., a response of 4 to item 3 is scored as 3. Summing the scores for all 30 items gives the total self-efficacy score. Using this scoring method, a high total scale score indicates more positive computer self-efficacy beliefs.
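The scoring rule described above translates directly into code. The following sketch (the function name and input format are illustrative, not part of the published instrument) reverse-scores the negatively worded items and sums all thirty responses:

```python
# Item numbering follows the scale; these are the positively worded items listed above.
POSITIVE_ITEMS = {1, 2, 6, 8, 9, 11, 12, 16, 18, 20, 24, 27, 29}

def score_cuse(responses: dict[int, int]) -> int:
    """Total CUSE score from a mapping of item number (1-30) to response (1-6)."""
    if set(responses) != set(range(1, 31)):
        raise ValueError("Responses are required for all 30 items.")
    total = 0
    for item, value in responses.items():
        if not 1 <= value <= 6:
            raise ValueError(f"Item {item}: response must be between 1 and 6.")
        total += value if item in POSITIVE_ITEMS else 7 - value  # reverse-score negatives
    return total

# Example: answering 4 to every item gives 13*4 + 17*(7-4) = 103.
print(score_cuse({i: 4 for i in range(1, 31)}))
```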

REFERENCES

1. A. Bandura, Social Foundations of Thought and Action: A Social Cognitive Theory, Prentice-Hall, Englewood Cliffs, New Jersey, 1986.
2. J. B. Rotter, Generalised Expectancies for Internal Versus External Control of Reinforcement, Psychological Monographs, 80:1, 1966.
3. R. Schwarzer, Self-Efficacy: Thought Control of Action, Hemisphere, London, 1992.
4. P. Eachus, Development of the Health Student Self-Efficacy Scale, Perceptual and Motor Skills, 77, p. 670, 1993.
5. P. Eachus and S. Cassidy, Self-Efficacy, Locus of Control and Styles of Learning as Contributing Factors in the Academic Performance of Student Health Professionals, Proceedings of the First Regional Congress of Psychology for Professionals in the Americas, Mexico City, 1997.
6. P. Eachus, Locus of Control, Self-Efficacy and Attributional Style of Investment Professionals, unpublished Ph.D. thesis, Manchester Metropolitan University, 1994.
7. D. R. Compeau and C. A. Higgins, Computer Self-Efficacy: Development of a Measure and Initial Test, MIS Quarterly, June 1995.
8. T. Hill, N. D. Smith, and M. F. Mann, Role of Efficacy Expectations in Predicting the Decision to Use Advanced Technologies: The Case of Computers, Journal of Applied Psychology, 72:2, pp. 307-313, 1987.


9. G. Torkzadeh and X. Koufteros, Factorial Validity of a Computer Self-Efficacy Scale and the Impact of Computer Training, Educational and Psychological Measurement, 54:3, pp. 813-821, 1994.
10. P. A. Ertmer, E. Evenbeck, K. S. Cennamo, and J. D. Lehman, Enhancing Self-Efficacy for Computer Technologies Through the Use of Positive Classroom Experiences, Educational Technology, Research & Development, 42:3, pp. 45-62, 1994.
11. S. Cassidy and P. Eachus, The Role of Computer Training in Determining Levels of Computer Self-Efficacy in Students, Proceedings of Computers in Psychology (CiP), University of York, United Kingdom, 2000.
12. I. T. Miura, The Relationship of Computer Self-Efficacy Expectations to Computer Interest and Course Enrollment in College, Sex Roles, 16:5/6, 1987.
13. C. A. Murphy, D. Coover, and S. V. Owen, Development and Validation of the Computer Self-Efficacy Scale, Educational and Psychological Measurement, 49, pp. 893-899, 1989.
14. T. Busch, Gender Differences in Self-Efficacy and Attitudes Towards Computers, Journal of Educational Computing Research, 12:2, pp. 147-158, 1995.
15. L. Vasil, B. Hesketh, and J. Podd, Sex Differences in Computing Behaviour Among School Pupils, New Zealand Journal of Educational Studies, 22:2, pp. 201-214, 1987.
16. B. H. Lloyd and C. Gressard, Reliability and Factorial Validity of Computer Attitude Scales, Educational and Psychological Measurement, 42:2, pp. 501-505, 1984.
17. M. B. Kinzie and M. Delcourt, Computer Technologies in Teacher Education: The Measurement of Attitudes and Self-Efficacy, paper presented at the American Educational Research Association, Chicago, ERIC Document Reproduction Service No. ED 331 891, 1991.
18. College is ‘Not Cool’—Women Outnumber Men in Student Population [online], BBC Education, November 17th, 1999. Available at: [Accessed April 2001].
19. Facts and Figures 2000-2001 [online], University of Sussex Press and Communication Office. Available at: [Accessed April 2001].
20. C. Ballantyne, Are They Glad They Came? First Year Students’ Views of Their University Experience, in Flexible Futures in Tertiary Teaching, A. Herrmann and M. M. Kulski (eds.), Proceedings of the 9th Annual Teaching Learning Forum, Curtin University of Technology, Perth, February 2-4, 2000.
21. T. G. Mortenson, Where Are the Boys? The Growing Gender Gap in Higher Education, College Board Review, 188, pp. 8-17, August 1999.
22. R. Koul and P. Rubba, An Analysis of the Reliability and Validity of Personal Internet Teaching Efficacy Beliefs Scale, Electronic Journal of Science Education, 4:1, 1999.
23. C. A. Decker, Training Transfer: Perceptions of Computer Use Self-Efficacy Among Employees, Journal of Technical and Vocational Education, 14:2, 1998.
24. A. Bryman and D. Cramer, Quantitative Data Analysis with SPSS for Windows, Routledge, London, 1997.


25. S. Cassidy and P. Eachus, Learning Style, Academic Belief Systems, Self-Report Student Proficiency and Academic Achievement in Higher Education, Educational Psychology, 20:3, pp. 307-322, 2000.

Direct reprint requests to: Dr. Simon Cassidy School of Community, Health Sciences and Social Care University of Salford Frederick Road Salford M6 6PU United Kingdom