Original article doi: 10.1111/j.1365-2729.2007.00257.x
Technology-assisted learning: a longitudinal field study of knowledge category, learning effectiveness and satisfaction in language learning

W. Hui,* P.J.-H. Hu,† T.H.K. Clark,‡ K.Y. Tam‡ & J. Milton§

*College of Information Technology, Zayed University, Abu Dhabi, United Arab Emirates
†Accounting and Information Systems, David Eccles School of Business, University of Utah, Salt Lake City, Utah, USA
‡Information and Systems Management, School of Business and Management, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China
§Language Center, School of Humanities, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China
Abstract
A field experiment compares the effectiveness of, and satisfaction with, technology-assisted learning and face-to-face learning. The empirical evidence suggests that technology-assisted learning effectiveness depends on the target knowledge category. Building on Kolb’s experiential learning model, we show that technology-assisted learning improves students’ acquisition of knowledge that demands abstract conceptualization and reflective observation but adversely affects their ability to obtain knowledge that requires concrete experience. Technology-assisted learning supports vocabulary learning better than face-to-face learning does but is comparatively less effective in developing listening comprehension skills. In addition, according to empirical tests, perceived ease of learning and learning community support significantly predict both perceived learning effectiveness and learning satisfaction. Overall, the results support our hypotheses and research model and suggest that instructors should consider the target knowledge when weighing technology-assisted learning options or designing a Web-based course. In addition, a supportive learning community can make technology-assisted learning easier for students and increase their learning satisfaction.
Keywords
control group, empirical, information systems, Internet, language learning, satisfaction, World Wide Web.
Accepted: 2 August 2007. Correspondence: Wendy Hui, College of Information Technology, Zayed University, PO Box 4783, Abu Dhabi, United Arab Emirates. Email: [email protected]

© 2007 The Authors. Journal compilation © 2007 Blackwell Publishing Ltd. Journal of Computer Assisted Learning (2008), 24, 245–259

Introduction

Advances in information and communications technologies have brought about exciting opportunities for fundamental changes in education. Instructors increasingly leverage available technologies to enhance their students’ learning experiences, such as by creating vivid, playful, interactive learning environments that support multimedia presentations, adaptive online exercises, and virtual discussions with greater student control of learning and pacing. Of particular importance is the Internet, which provides a conveniently accessible and easy-to-use global platform that supports a wide array of learning and knowledge dissemination activities. Not surprisingly, use of Web-based courses and training programs has grown at a rapid pace (Allen & Seaman 2006) and increased the course preparation and administration efficiency of instructors, who can allocate more time and attention to other core learning activities. Some researchers, including Zhang et al. (2004), suggest technology-assisted learning can substitute for some conventional, face-to-face, classroom-based learning. In this context, an instructor can deliver course materials through a designated Web site, from which students access materials and interact with the instructor (and perhaps peers) remotely. A competing view holds that technology-assisted learning should be used only to complement face-to-face learning, which demands a hybrid or blended approach that leverages the respective strengths of each type of learning, such as using technology-assisted learning in some areas rather than replacing face-to-face, classroom-based learning altogether. According to Masie (2002) and Frederickson et al. (2004), this hybrid approach represents a preferable and arguably more advantageous means of using technology-assisted learning. However, with either approach, a fundamental question remains: does the use of technology-assisted learning improve students’ learning effectiveness and satisfaction? Although advocates are optimistic about the value of technology-assisted learning, which offers greater learner control over time, location, pace and repetition (MacFarlane 1992), prior studies generate inconsistent empirical evidence that fails to demonstrate convincingly the relative advantages of technology-assisted over face-to-face learning (Bernard et al. 2004). An emergent contingency approach calls for investigations of the key factors or conditions in which technology-assisted learning results in favourable learning effectiveness and satisfaction (Smith et al. 2006). Several factors have been studied, including gender (Taylor & Nikolova 2004) and learning style (Aragon et al.
2002), but the target knowledge to be disseminated by the instructor and acquired by students represents an important consideration that has received little (if any) attention. No learning settings are universally and equally effective across all subject areas or learning objectives, because some embrace explicit knowledge, whereas others entail tacit knowledge.1 Compared with tacit knowledge, explicit knowledge may be more effectively delivered to and acquired by learners in a technology-assisted learning setting (Rosenberg 2001). Therefore, we expect greater learning effectiveness and satisfaction when technology-assisted learning
supports students’ accumulation of explicit knowledge rather than tacit knowledge. Thus, we underscore the importance of target knowledge, which can moderate the effectiveness and satisfaction associated with technology-assisted learning. In addition, target knowledge, whether explicit or tacit, may have an important interaction effect with the learning medium. A review of extant literature suggests limited empirical investigations of the direct or interaction effects of target knowledge in technology-assisted learning. In addition, we consider learning satisfaction in technology-assisted learning by investigating important antecedents. With this background, we propose a factor model that explains students’ learning satisfaction and empirically test the model using evaluative responses collected from a longitudinal field experiment. We focus on students’ learning of English as a foreign language, which typically spans different aspects of language learning, including vocabulary, grammar, listening, speaking and reading. Although related, these aspects of language learning intrinsically pertain to different types of knowledge (Brown 1995) and demand different types of learning support. The diversity of language learning aspects thus represents a natural scenario for investigating the effects of target knowledge in technology-assisted learning. Furthermore, language learning is multifaceted (Nunan et al. 1987) and becoming increasingly important as globalization expands. As Russell (2007) comments, accelerating international exchanges and interactions are opening opportunities for people who know foreign languages. Tonkin (2003, pp. 150–151) also highlights the importance of learning foreign languages, noting that ‘language learning is . . . a fundamental element in self-understanding’. Therefore, we examine the following research questions: (1) Is learning effectiveness associated with technology-assisted learning contingent on target knowledge? 
and (2) What are the essential antecedents of learning satisfaction in technology-assisted learning? Our longitudinal field experiment investigates the effects of technology-assisted learning by comparing students’ learning effectiveness across different important aspects of English learning with technology-assisted versus face-to-face learning. Our first study group contains students who use face-to-face learning exclusively, whereas the second uses both technology-assisted and face-to-face learning (i.e. hybrid approach, similar to mainstream practices) (Goodyear et al.
2005). Our study design thus supports an analysis of between-group differences with respect to the target knowledge category and the combined effect of target knowledge and learning medium.2 We also use the collected data to test the proposed learning satisfaction model, which consists of essential satisfaction antecedents (i.e. perceived learning community support, course learnability, learning effectiveness) in technology-assisted learning.

Literature review and motivation

Previous research into learning effectiveness in technology-assisted learning
Technology-assisted learning has profound and lasting impacts on education and has thus received considerable attention from researchers and educators alike. Considerable effort has been devoted to comparing the learning effectiveness of technology-assisted learning with that of conventional, face-to-face learning. Several studies report positive effects of technology-assisted learning, including Johnson et al. (2000), who compare learning methods in human resource development and show that students in the technology-assisted group perceive the instructor more positively and rate the overall course quality higher than their counterparts in the face-to-face group. Abraham (2002) designs a virtual classroom for student learning about information systems and finds that although technology-assisted learning improves learning feedback to students, the resulting learning effectiveness is not significantly better than that observed in face-to-face, classroom-based learning. In contrast, other studies indicate that technology-assisted learning does not result in improved learning effectiveness. For example, Aragon et al. (2002) investigate student success and report comparable success for both learning types. Similarly, Piccoli et al. (2001) show that learning performance is comparable between students using technology-assisted learning and those learning from face-to-face instruction. Finally, Newby (2002) observes higher anxiety among students in open laboratories than among those in closed laboratories and attributes the difference to the relative perceived availability of instructors. The cumulative empirical evidence pertaining to learning effectiveness thus remains equivocal. A meta-analysis by Bernard et al. (2004) suggests that the impact of technology-assisted learning is not significant, consistent with Clark’s (1983) contention that the delivery medium has only marginal effects on the outcomes of planned instruction, measured according to learning effectiveness or satisfaction.

Previous research into learning satisfaction in technology-assisted learning
A review of extant literature on the critical topic of learning satisfaction (Allen et al. 2002; Wang 2003) suggests limited investigations of the essential factors that affect learning satisfaction, even though such investigations are particularly important when considering the relatively high dropout rate associated with technology-assisted learning (Hiltz & Wellman 1997; Kumar et al. 2001). Scrutiny and empirical examinations of key satisfaction antecedents can offer insights into effective strategies for mitigating barriers to learning satisfaction. Consistent with Keller (1983), we define learning satisfaction as the perception of being able to achieve success and positive feelings about achieved outcomes. Furthermore, on the basis of an extensive literature review, we identify three essential satisfaction determinants: learning effectiveness (Keller 1983; Wang 2003), perceived course learnability (Roca et al. 2006) and perceived learning community support (Wang 2003; Liaw 2004; Chou & Liu 2005). Norman and Spohrer (1996) note that measuring learning effectiveness by test scores may not be appropriate for assessing the quality or experience of students’ learning, which affects student retention (Neumann & Finaly-Neumann 1989). Following the suggestion by Hiltz et al. (2000), in addition to measuring learning effectiveness objectively by test scores, we examine perceived learning effectiveness, which refers to the extent to which a student believes he or she has acquired specific skills. Learnability also represents a critical dimension of learning evaluations (Bødker & Graves Petersen 2000). According to Martin-Michiellot and Mendelsohn (2000), materials delivered in an easy-to-learn fashion can enhance students’ learning effectiveness and satisfaction. In this study, perceived course learnability refers to the degree to which a student considers the course materials delivered through technology-assisted or face-to-face learning easy to learn.
In addition, previous studies report an important, positive effect of collaborative learning on people’s
learning experiences (Powers & Mitchell 1997). Consistent with Wang (2003), we define perceived learning community support as the extent to which a learning environment creates an active, strongly bonded community that encourages and facilitates knowledge exchanges among peers and their instructors.

Experiential learning model and implications for language learning
The experiential learning model assumes an iterative nature of learning through experience, from reflection and conceptualization to action and then enhanced experience (Osland et al. 2001). According to this model, technology-assisted learning may be less effective for some aspects of language skills. For example, by engaging in live speaking drills or role plays, students can recognize their speaking problems directly and concretely (i.e. concrete experience), which enables them to reflect on how to improve (i.e. reflective observation) and develop intuitions or general rules (i.e. abstract conceptualization) for similar scenarios in the future (i.e. active experimentation). Such iterative processing reinforces student learning, but technology-assisted learning provides only limited support in this sense. However, technology-assisted learning may better support other aspects of language learning because of the convenient access it offers to learning materials pertinent to vocabulary, reading, or grammar, which students may study repetitively at their preferred time and pace. In technology-assisted learning, students can learn thoroughly and repetitively about the proper use of vocabulary and sentence structure in an asynchronous and ubiquitous manner. Similarly, students can improve their reading comprehension more effectively through technology-assisted learning.

Motivation
The inconsistent findings from previous research result partly from the omission of important confounding factors that moderate the impacts of technology-assisted learning on learning effectiveness or satisfaction (Joy & Garcia 2000). Considerable efforts have been undertaken to reconcile inconsistent findings from prior studies by identifying key moderating factors that can influence learning effectiveness or
satisfaction. For example, Taylor and Nikolova (2004) examine the relationship between gender and learning performance in the context of computer-based reading of Spanish as a second language. They report a significant difference between high- and average-performing male students but show insignificant differences between high- and average-performing female students. Using the learning style inventory developed by Dunn et al. (1989), Cohen (2001) shows that the learning medium leads to differential learning preferences. On the other hand, Neuhauser (2002) reports insignificant differences among students who vary in learning style in online versus conventional learning environments. When Aragon et al. (2002) study learning achievement, they find students learn as effectively in online as in conventional classroom settings, regardless of their differences in learning style, preference, motivation, task engagement, or cognitive control. Overall, previous research provides insufficient empirical evidence to support any important impacts on students’ learning effectiveness and satisfaction resulting from the use of technology-assisted learning. Consistent with Mayer (2003), however, we believe technology-assisted learning can facilitate certain teaching methods better than others, which means the appropriate question to ask is not simply whether technology-assisted learning is better than face-to-face learning but rather which aspects of technology-assisted learning benefit which kinds of learners in acquiring which types of knowledge. We focus on the issue of knowledge types, which has received less research attention than learner characteristics such as learning styles (Aragon et al. 2002; Neuhauser 2002), gender (Taylor & Nikolova 2004), or intrinsic motivation (Martens et al. 2004). In addition, knowledge type can determine the effectiveness of knowledge transfer, because explicit knowledge is more appropriate for transfer over an electronic channel (Rosenberg 2001).
Qvortrup (2006) considers teaching a special form of communication that can pass on some knowledge. Thus, a learning medium is not inherently good or bad; rather, each medium supports a different type of knowledge acquisition (see Mayer 2003). We offer empirical support of this view through our investigation of how technology-assisted learning affects the learning effectiveness of different aspects of second language acquisition.
Hypotheses and research model
We compare technology-assisted and face-to-face learning on the basis of a specific technology-assisted learning platform (described in the following section). Overall, we postulate favourable technology-assisted learning effectiveness associated with those aspects of English learning in which live human interactions are not essential for the learning activities, as specified by the experiential learning model. We objectively measure students’ learning effectiveness using test scores on listening, vocabulary and grammar exercises. Of these three fundamental language factors, listening skills require the most human interaction, because students gain comprehension by familiarizing themselves with the spoken language, often through concrete experiences with native English speakers, which online settings typically cannot support (e.g. bandwidth constraints). An online learning environment can provide listening exercises, but the effectiveness may not be comparable to classroom-based learning because all students in the classroom acquire listening comprehension when one student engages in a speaking exercise with the instructor. As a result, technology-assisted learning should offer less learning support through concrete experience, which diminishes the effectiveness of the learning cycle conceptualized by Kolb (1976). However, in the acquisition of vocabulary and grammar skills, concrete experience plays a lesser role, so the electronic channel can provide effective lessons. Because the learning materials are available to the students anytime and anywhere, they can absorb materials better at their own pace and take the time to reflect on the proper use of words and grammar. Therefore, we expect technology-assisted learning to enhance the learning of vocabulary and grammar.

H1: Students in the face-to-face group show greater improvement in listening comprehension than their counterparts in the technology-assisted learning group.
H2: Students in the technology-assisted learning group show greater improvement in vocabulary than their counterparts in the face-to-face group.

H3: Students in the technology-assisted learning group show greater improvement in grammar than their counterparts in the face-to-face group.
In addition, we examine students’ satisfaction with technology-assisted learning using a factor model that
contains key satisfaction determinants, such as perceived learning community support, learnability and effectiveness. We empirically test the model using structural equation modeling. Existing pedagogical theories emphasize the socially constructed nature of learning, which indicates it essentially involves sharing and negotiation (Gulz 2005). In turn, social interactions represent critical criteria of learning effectiveness (Alavi 1994). Neo (2003) empirically supports collaborative learning for enhancing students’ problem-solving and critical thinking skills. Accordingly, we posit a positive correlation between perceived learning community support and learning effectiveness. In addition, a learning task that imposes lighter cognitive loads benefits those students who have limited prior knowledge about the subject (Ayres 2006), and different representations of learning materials affect the speed of learning (Martin-Michiellot & Mendelsohn 2000). Therefore, perceived course learnability or ease of learning should predict learning effectiveness.

H4: In technology-assisted learning, perceived learning community support is positively correlated with perceived learning effectiveness.

H5: In technology-assisted learning, perceived learnability is positively correlated with perceived learning effectiveness.
Finally, the ultimate goal of learning is knowledge acquisition. As suggested by Keller (1983), learning satisfaction relates directly to perceptions and feelings about learning effectiveness or outcomes. Therefore, we expect a positive correlation between perceived learning effectiveness and learning satisfaction. As noted by Norman and Spohrer (1996), learning effectiveness alone is insufficient to explain satisfaction; rather, a learning environment that facilitates knowledge sharing and social interactions makes learning increasingly enjoyable. Both Liaw (2004) and Chou and Liu (2005) reveal that information and experience sharing among peers and group members increases students’ learning satisfaction. In addition to being a predictor of perceived learning effectiveness, perceived learning community support should have a direct positive impact on learning satisfaction. A relatively learnable course gives students a sense of satisfaction because they overcome challenges they encounter during the learning process, consistent with the theory of planned behavior, which states that perceived
behavioral control directly affects attitudes towards an activity (Ajzen 1985, 1988, 1991).

H6: In technology-assisted learning, perceived effectiveness is positively correlated with learning satisfaction.

H7: In technology-assisted learning, perceived learning community support is positively correlated with learning satisfaction.

H8: In technology-assisted learning, perceived course learnability is positively correlated with learning satisfaction.
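The structure of the hypothesized model can be illustrated with a simple path-analysis approximation. The paper estimates the full model with structural equation modeling, but the five paths reduce to two linear regressions: one for perceived effectiveness (H4, H5) and one for learning satisfaction (H6–H8). The sketch below is a minimal illustration on synthetic data; every variable name, sample size and coefficient is hypothetical, not the study’s data.

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination; X includes an intercept column."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical Likert-style scores for 200 simulated subjects.
rng = random.Random(42)
n = 200
support = [rng.uniform(1, 7) for _ in range(n)]
learnability = [rng.uniform(1, 7) for _ in range(n)]
effect = [0.5 * s + 0.3 * l + rng.gauss(0, 0.5) for s, l in zip(support, learnability)]
satisf = [0.4 * e + 0.3 * s + 0.2 * l + rng.gauss(0, 0.5)
          for e, s, l in zip(effect, support, learnability)]

# H4, H5: perceived effectiveness ~ community support + learnability.
b_eff = ols([[1.0, s, l] for s, l in zip(support, learnability)], effect)
# H6-H8: satisfaction ~ perceived effectiveness + community support + learnability.
b_sat = ols([[1.0, e, s, l] for e, s, l in zip(effect, support, learnability)], satisf)

print([round(b, 2) for b in b_eff], [round(b, 2) for b in b_sat])
```

Because the synthetic data are generated with positive path weights, all estimated slopes come out positive, mirroring the signs hypothesized in H4–H8; a proper SEM would additionally model measurement error in the latent constructs.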
Figure 1 depicts our proposed research model for explaining learning satisfaction in technology-assisted learning. It suggests that learning satisfaction is determined jointly by perceived learning effectiveness (H6), learning community support (H7) and learnability (H8). In addition, perceived effectiveness depends on perceived learning community support (H4) and perceived learnability (H5). We use structural equation modeling to test our research model, using data we collect in our study.

Technology-assisted learning platform and study design

Overview of the focal Web site
The technology-assisted learning platform we investigate is an interactive, multimedia Web site specifically designed to support university students’ learning of the fundamental aspects of the English language: reading, speaking, listening, vocabulary and writing. At the time of our study, this Web site contained six distinct modules targeting particular aspects of learning English (e.g. Getting to Know You, Spoken versus Written English, Vocabulary Review).
Fig 1 Hypotheses and research model for explaining learning satisfaction. (The diagram shows positive paths from perceived course learnability to perceived effectiveness (H5) and to learning satisfaction (H8), from perceived learning community support to perceived effectiveness (H4) and to learning satisfaction (H7), and from perceived effectiveness to learning satisfaction (H6).)
Each module provides students with online instructions, exercises and diagnostic feedback. Similar to many Web-based learning systems, this site offers little support of live interactions between students and the instructor but does include an array of built-in functions that enable students to access, review and practise with programmed multimedia content repeatedly at their preferred time and pace. The system also supports different learning activities through easy-to-use look-up tools (available in the context menu) and online self-diagnosis resources designed to improve students’ accuracy, literacy, or fluency. From the site, students can check their progress and standings in the course and practise the current study unit repeatedly. The course Web site supports reading comprehension by providing hyperlinks that point to required or recommended papers on other Web sites. The studied site further supports a virtual voice discussion forum that facilitates English-speaking skills. This online forum allows student groups (i.e. two or three students) to submit weekly English-speaking exercises. All students installed a software program on their computers to record conversations with group members on a particular topic specified by the instructor. The instructor can access the recorded conversations submitted before the specified due date and perform assessment and diagnosis accordingly. In addition, the Web site provides learning activities designed to overcome pronunciation problems commonly encountered by Cantonese speakers. For example, the sounds of English words that are difficult for native Cantonese speakers to pronounce are listed; students can listen to how native English speakers pronounce these words according to the International Phonetic Alphabet, an internationally recognized phonetic system to represent sounds. With these technology-assisted learning tools and functionalities, individual students learn to differentiate weak (i.e.
unstressed, pronounced quickly and softly) and strong (i.e. stressed) forms of English words by listening to songs and normal conversations among native speakers. From recorded conversations, students learn to comprehend the semantic meanings of words and observe the way native speakers customarily connect words together. Students must complete exercises after listening to songs or conversation. To help students build their vocabulary effectively and systematically, the Web site provides word list
tools, such as dictionaries, thesauruses and word neighbours. Students can add and annotate words or phrases for each unit; as a course requirement, each student must add and annotate at least 20 new words or expressions to his or her ‘personal word list’. The Web site also allows students to test their vocabulary from their personal word list or the key vocabulary words of a unit they have already completed. To improve their writing skills, students regularly submit writing exercises electronically (in MS Word format) for critiquing and grading by the instructor. The Writing module has a built-in dictionary, thesaurus and grammar check designed to identify mistakes prior to submission. The learning system thus reinforces students’ understanding of vocabulary and grammar. When critiquing a submitted written exercise, the instructor adds comments to the document by clicking on icons in an MS Word toolbar that automatically insert ‘canned’ comments and references summarized from common or repetitive errors. This functionality generates additional detailed diagnoses and comments on consistency. The Web site also assists instructors in recognizing and describing errors and allows them to include personal comments at a later time. The instructor’s comments can be logged so the student and the instructor can track and revisit recurring problems. The system also grades submitted exercises automatically according to the number and types of positive and negative comments it receives. In line with the hybrid approach prevalent in modern teaching practices (Masie 2002), the technology-assisted learning method embraces both technology-supported and face-to-face learning.

Study design
Experimental design

The primary research method is field experimentation, which allows us to examine the underlying relationships of this phenomenon in its natural setting. A computer program assigned subjects to either the technology-assisted learning or the face-to-face learning scenario according to their class schedules, which creates a between-groups design. Our control group uses face-to-face learning exclusively, whereas the treatment group receives a combination of face-to-face and technology-assisted learning. Similar to most learning systems, the course Web site is asynchronous and provides little
support of live feedback or simultaneous interactions between subjects and instructors. However, it houses multimedia course materials, such as online instructions, exercises, illustrations, diagnostic feedback and tests pertinent to different fundamental aspects of English training.

Subjects

The participants are first-year students at a major university in Hong Kong who enrolled in the freshman English class mandated by the university. The target course is offered in multiple sessions, and students were assigned to different sessions according to their class schedules. The control-group subjects met in the classroom twice as often as did the treatment-group subjects but had no access to the course Web site. All subjects received monetary compensation for their time and effort.

Dependent variables and measurements

Our dependent variables are learning effectiveness (objective and perceived), perceived course learnability, perceived learning community support and learning satisfaction. We measure learning effectiveness objectively by comparing the difference between the pre- and poststudy test scores, from tests conducted at the beginning and end of the semester. The principal instructor in charge of the freshman English course designed both tests, which focus on vocabulary, grammar and listening. We also assess learning effectiveness subjectively using subjects’ self-reported assessments, collected after completion of the study. We examine subjects’ learning satisfaction and assessments of perceived course learnability and learning community support (Piccoli et al. 2001; Aragon et al. 2002; Wang 2003) by adapting previously validated question items to operationalize each investigated construct, with some minor wording changes appropriate to the targeted learning context. Our use of previously validated question items addresses measurement problems that likely prompted equivocal results in prior research (Phipps & Merisotis 1999).
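The objective effectiveness measure rests on gain scores (poststudy minus prestudy test score) compared between the two groups, as hypothesized in H1–H3. One common way to test such a between-groups difference is Welch’s two-sample t-test on the gain scores, sketched below on synthetic data; the scores, gains and group sizes are hypothetical, not the study’s data.

```python
import math
import random

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of
    freedom, which do not assume equal group variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2a, se2b = va / len(a), vb / len(b)
    t = (ma - mb) / math.sqrt(se2a + se2b)
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (len(a) - 1) + se2b ** 2 / (len(b) - 1))
    return t, df

rng = random.Random(7)
# Hypothetical vocabulary test scores for 40 subjects per group.
pre_f2f = [rng.gauss(60, 8) for _ in range(40)]    # face-to-face group
pre_tal = [rng.gauss(60, 8) for _ in range(40)]    # technology-assisted group
post_f2f = [p + rng.gauss(4, 3) for p in pre_f2f]  # mean gain of about 4 points
post_tal = [p + rng.gauss(8, 3) for p in pre_tal]  # mean gain of about 8 (H2 pattern)

gain_f2f = [b - a for a, b in zip(pre_f2f, post_f2f)]
gain_tal = [b - a for a, b in zip(pre_tal, post_tal)]

t, df = welch_t(gain_tal, gain_f2f)
print(f"t = {t:.2f}, df = {df:.1f}")
```

With the simulated advantage for the technology-assisted group, the t statistic is large and positive, matching the direction H2 predicts; the real analysis would apply the same comparison separately to listening, vocabulary and grammar scores.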
All question items are based on a seven-point Likert scale, with 1 as ‘strongly disagree’ and 7 as ‘strongly agree’. To reduce potential anchoring or floor (ceiling) effects that may induce monotonous responses, we randomly sequence the items in the questionnaire. The specific question items we use to
measure each investigated construct appear in the Appendix.

Data collection

Our data are longitudinal, collected in the fall semester (September–December) of 2004. At the beginning of the semester, each subject took an English test online, and this score serves as a baseline against which we evaluate the subject’s learning effectiveness at the end of the semester. We also gather demographic information. At the end of the semester, we measure subjects’ assessments of perceived learning effectiveness and learning satisfaction. For each data collection step, we use documented scripts to inform all subjects explicitly of our objectives and address any concerns about privacy-related issues. In particular, we convey our intent to perform data analyses at an aggregate level rather than in any personally identifiable manner and ensure subjects’ convenient access to their responses and assessments.

Data analysis and results
A total of 507 subjects, or 29.4% of the first-year student population, voluntarily took part in the study. We remove incomplete responses from our analysis, including those from students who did not complete both pre- and poststudy tests, the learning style assessment, or the poststudy survey. As a result, our effective sample includes 438 subjects who averaged 19.1 years of age and were fairly balanced in their gender distribution. Notably, more male than female subjects appear in the technology-assisted learning group, whereas more female than male subjects were in the face-to-face group. We summarize some important demographic information in Table 1, which shows that subjects in both groups were largely comparable in age and advanced-level English examination scores.3 Approximately half of the subjects in the face-to-face group majored in business, whereas many subjects in the technology-assisted learning group were engineering students. Science students accounted for approximately 30% of the subjects in each group. Thus, it appears that more abstract thinkers joined the face-to-face group than the technology-assisted learning group (63% versus 53%), and more reflective observers were in the face-to-face group than in the technology-assisted learning group (38% versus 28%). However, our analysis shows that these differences do not have a statistically significant effect on any of the dependent variables we investigate. Subjects in both groups are comparable in their general computer competency, Internet experience and usage. We validate our instrument's reliability and convergent and discriminant validity. Specifically, we assess reliability using Cronbach's alpha; as we summarize in
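The group-comparability check described above can be illustrated with a two-sample test. The sketch below computes Welch's t-statistic (which does not assume equal variances) on simulated ages; the group sizes and means mirror Table 1, but the data themselves are synthetic and purely illustrative:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t-statistic (unequal variances allowed)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(0)
f2f = rng.normal(19.0, 1.0, 243)  # simulated ages, face-to-face group
tal = rng.normal(19.2, 1.0, 195)  # simulated ages, technology-assisted group
t_stat = welch_t(f2f, tal)
```

A |t| value well below conventional critical values (about 1.96 for large samples) would indicate no significant age difference between the groups.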
Table 1. Summary of subject demographics.

                              Face-to-face                            Technology-assisted
Age (years)                   19.0                                    19.2
Gender                        Male: 107 (44.0%)                       Male: 135 (69.2%)
                              Female: 136 (56.0%)                     Female: 60 (30.8%)
Affiliated school             Business: 121 (49.8%)                   Business: 45 (23.1%)
                              Engineering: 46 (18.9%)                 Engineering: 99 (50.8%)
                              Science: 76 (31.3%)                     Science: 51 (26.2%)
A-Level English exam score    A = 6; B = 22; C = 54;                  A = 1; B = 11; C = 28;
                              D = 79; E = 39; F = 4                   D = 65; E = 73; F = 0
Learning style                Abstract conceptualization: 153 (63%)   Abstract conceptualization: 103 (53%)
                              Concrete experience: 90 (37%)           Concrete experience: 92 (47%)
                              Active experimentation: 150 (62%)       Active experimentation: 140 (72%)
                              Reflective observation: 93 (38%)        Reflective observation: 55 (28%)
Computer skills               4.26 (on a 7-point scale)               4.71 (on a 7-point scale)
Average Internet usage        20 h: 53 (22%)                          20 h: 76 (39%)
per week
© 2007 The Authors. Journal compilation © 2007 Blackwell Publishing Ltd
Table 2, the alpha value of each investigated construct exceeds or is close to 0.7, the commonly suggested threshold for reliability (Nunnally & Bernstein 1994). The alpha values observed indicate our measurement items have satisfactory reliability. We also assess the instrument’s convergent and discriminant validity by performing a principal components analysis using the Varimax method with Kaiser normalization rotation. As we show in Table 3, the items that measure the same constructs exhibit considerably higher loadings than do those for measuring other constructs. The eigenvalue of each extracted factor exceeds 1.0, the common threshold value. Overall, our analysis shows that the instrument exhibits adequate convergent and discriminant validity.
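As a concrete illustration of the reliability check described above, Cronbach's alpha can be computed directly from an item-response matrix. This is a minimal sketch using synthetic 7-point Likert responses (the data are made up for illustration, not the study's actual responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return k / (k - 1) * (1 - item_variances / total_variance)

# synthetic responses: three items tracking one latent trait plus noise
rng = np.random.default_rng(1)
latent = rng.integers(2, 7, size=200).astype(float)
responses = np.clip(latent[:, None] + rng.normal(0, 0.7, size=(200, 3)), 1, 7)
alpha = cronbach_alpha(responses)
```

Because the three synthetic items share a common latent trait, the resulting alpha comfortably exceeds the 0.7 threshold the text cites.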
Technology-assisted versus face-to-face learning
For each of the learning aspects we study, we perform the following regression:4
Score in November = α + β1 ∗ Score in September + β2 ∗ Technology-Assisted Learning   (1)

where 'technology-assisted learning' is a dummy variable with a value of 1 if the subject is in the technology-assisted learning group and 0 otherwise. If the coefficient of technology-assisted learning is significant, we conclude there is a significant difference between technology-assisted and face-to-face learning. As we show in Table 4, technology-assisted learning has a significant impact on students' performance with
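Equation 1 can be estimated by ordinary least squares on a design matrix containing an intercept, the September score, and the group dummy. The sketch below simulates data whose 'true' coefficients match the listening-comprehension estimates reported in Table 4 (55.50, 0.30, -2.77) and recovers them; all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 438                                       # effective sample size from the study
sept = rng.normal(60, 10, n)                  # simulated September (prestudy) scores
tal = (rng.random(n) < 0.45).astype(float)    # 1 = technology-assisted group, 0 = face-to-face

# simulate November scores from eq. (1) with the Table 4 listening estimates
nov = 55.50 + 0.30 * sept - 2.77 * tal + rng.normal(0, 5, n)

# OLS via least squares: columns are intercept, beta1, beta2
X = np.column_stack([np.ones(n), sept, tal])
coef, *_ = np.linalg.lstsq(X, nov, rcond=None)
alpha_hat, b1_hat, b2_hat = coef
```

A negative and significant estimate of β2, as in the simulated data, corresponds to the treatment group scoring lower after controlling for the baseline score.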
Table 2. Summary of descriptive statistics and construct reliability analysis.

Construct                                            Mean   SD     Alpha
Perceived learning effectiveness (six items)         4.54   1.11   0.82
Learning satisfaction (six items)                    4.31   1.24   0.90
Perceived course learnability (three items)          4.51   1.14   0.66
Perceived learning community support (three items)   3.94   1.19   0.67

Table 3. Reliability and discriminant validity of the study instrument.
          Component extracted
Item      Factor 1   Factor 2   Factor 3   Factor 4
LS-1      0.74
LS-2      0.71
LS-3      0.75
LS-4      0.63
LS-5      0.64
LS-6      0.68
PLE-1                0.67
PLE-2                0.69
PLE-3                0.67
PLE-4                0.74
PLE-5                0.66
PLE-6                0.58
PCL-1                           0.71
PCL-2                           0.81
PCL-3                           0.56
PLCS-1                                     0.54
PLCS-2                                     0.72
PLCS-3                                     0.82
Eigenvalue                 7.79    1.38    1.25    1.07
Percentage of variance
explained                  21.10   20.80   11.48   10.48

Note: Loadings less than 0.5 are not shown.
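The Varimax rotation used in the factor analysis above can be sketched in a few lines of NumPy. This is a minimal illustration of the standard orthogonal Varimax algorithm (it omits the Kaiser normalization the study applies), run on a small made-up loading matrix:

```python
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=100):
    """Orthogonal varimax rotation of a p x k factor-loading matrix."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)          # accumulated rotation matrix
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # SVD of the gradient of the varimax criterion
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p)
        )
        R = u @ vt
        if s.sum() < d * (1 + tol):   # converged
            break
        d = s.sum()
    return L @ R

# hypothetical unrotated loadings for two factors (illustration only)
unrotated = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.9], [0.1, 0.8]])
rotated = varimax(unrotated)
```

Because the rotation is orthogonal, each item's communality (sum of squared loadings across factors) is unchanged; only the distribution of loadings across factors is simplified, which is what makes the factor structure in Table 3 easier to interpret.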
Table 4. Analysis of the effects of technology-assisted learning.

                                  Parameter estimate   T-statistic   P-value
Dependent variable: listening comprehension score in November
  Intercept                       55.50                24.57
  Score in September              0.30                 9.25
  Technology-assisted learning    -2.77                -2.21
Dependent variable: vocabulary score in November
  Intercept
  Score in September
  Technology-assisted learning
Dependent variable: grammar score in November
  Intercept
  Score in September
  Technology-assisted learning