Decision Sciences Journal of Innovative Education Volume 14 Number 2 April 2016 Printed in the U.S.A.

© 2016 Decision Sciences Institute

EMPIRICAL RESEARCH

The Determinants of Students' Perceived Learning Outcomes and Satisfaction in University Online Education: An Update*

Sean B. Eom†
Department of Accounting, Harrison College of Business, Southeast Missouri State University, Cape Girardeau, MO 63701, e-mail: [email protected]

Nicholas Ashill
Department of Marketing and Information Systems, School of Business Administration, American University of Sharjah, P.O. Box 26666, Sharjah, United Arab Emirates, e-mail: [email protected]

ABSTRACT

A stream of research over the past decade that identifies predictors of e-learning success suggests that there are several critical success factors (CSFs) that must be managed effectively to fully realize the promise of e-learning. Grounded in constructivist learning theories, this study advances previous work on CSFs in university online education. Structural equation modeling is applied to examine the determinants of students' satisfaction and their perceived learning outcomes in the context of university online courses. The independent variables of motivation (intrinsic and extrinsic), student self-regulation, dialogue (instructor-student and student-student), instructor, and course design are examined as potential determinants of online learning outcomes. A total of 372 responses from students who had completed at least one online course at a university in the Midwestern United States were used to examine the structural model. Findings indicate that instructor-student dialogue, student-student dialogue, instructor, and course design significantly affect students' satisfaction and learning outcomes. However, neither extrinsic student motivation nor student self-regulation has a significant relationship with user satisfaction or learning outcomes. Finally, intrinsic student motivation affects learning outcomes but not user satisfaction. The findings suggest that course design, instructor, and dialogue are the strongest predictors of user satisfaction and learning outcomes.

∗ Editor-in-Chief’s Note: This article was processed and reviewed independent of the Special Issue. Neither the authors nor the third Guest Editor of the Special Issue were involved at any stage of the editorial review process. † Corresponding Author.

Subject Areas: Critical success factors, Distance education/distance learning, Learning outcomes, Structural equation modeling, and Student satisfaction.

INTRODUCTION

Over the past decade, there has been a steady increase in the delivery of online education across the globe. A recent Gallup poll and multiple other indicators suggest that we are entering a golden age of e-learning and that e-learning could be at a "tipping point." Americans' trust in the quality of e-learning has also grown: thirty-seven percent of Americans agree or strongly agree that online colleges and universities offer high-quality education (Bidwell, 2014), and this trust has grown by more than 20% in the past two years. In the United States, the Babson Survey Research Group reported that during the period 2002–2012, the number of students who had taken at least one online course increased for the 10th year in a row, from 1.6 million in 2002 to 7.1 million (33.5% of all American college students) in Fall 2012. The increasing level of online enrollment has not, however, reached a plateau (Bidwell, 2014).

Despite the potential promise and strategic importance of e-learning, concerns regarding its effectiveness remain, and its acceptance is far from universal. These concerns include a lack of discipline on the part of e-learners, low retention rates (Allen & Seaman, 2013), and poor educational experiences for learners (Eom & Arbaugh, 2011). A stream of research identifying predictors of e-learning success has emerged over the past decade (Alshare, Freeze, Lane, & Wen, 2011; Arbaugh et al., 2009; Eom & Arbaugh, 2011; Eom, Ashill, & Wen, 2006; Martín-Rodríguez, Fernández-Molina, Montero-Alonso, & González-Gómez, 2015; Mashaw, 2012; Sun, Tsai, Finger, Chen, & Yeh, 2008; Xu, Huang, Wang, & Heales, 2014), and this stream suggests that there are several critical success factors (CSFs) that must be managed effectively to fully realize the promise of e-learning. All predictor and criterion variables (e.g., the several hundred in Arbaugh, Hwang, & Pollack, 2010) used in empirical research on e-learning are directly or indirectly related to the effectiveness of e-learning systems.

The primary objective of this study is to identify and investigate the CSFs of students' perceived learning outcomes and satisfaction in university online education. Understanding the underlying theories of learning upon which e-learning is based is necessary for identifying these CSFs. Previous research (Eom, Ashill, & Wen, 2006) examined several predictors of user satisfaction and learning outcomes in the context of university online education, including course structure, instructor feedback, self-motivation, learning style, interaction, and instructor knowledge and facilitation. However, that research did not consider the effects of positive and constructive interaction (dialogue) in its conceptualization, and its measure of course design exhibited poor measurement properties. Moreover, its measure of motivation did not capture either the intrinsic or the extrinsic dimension of student motivation.

Table 1: Goals, major assumptions, and implications of the constructivist models

Constructivism
  Goals: Forms abstract concepts (knowledge is constructed).
  Assumptions: Individuals construct knowledge and learn better when they discover things themselves at their own time and pace.
  Implications: Self-regulated learning is essential; deep cognitive engagement and independent discovery; the instructor becomes the creative mediator of the learning process.
  Constructs extracted: Student Self-Regulation, Intrinsic Motivation, Extrinsic Motivation, Instructor

Collaborativism
  Goals: Promotes socialization and shared understandings of a group of learners.
  Assumptions: Knowledge is socially and collaboratively constructed; knowledge is created through sharing.
  Implications: Involvement, dialogue, and feedback are critical to learning.
  Constructs extracted: Instructor-Student Dialogue, Student-Student Dialogue

Cognitive Information Processing
  Goals: Improves learners' cognitive information processing abilities.
  Assumptions: Individuals have different preferred learning styles and learn better when teaching methods and course design match their learning styles.
  Implications: Courses must be designed to fit a wide range of e-learners' learning styles, such as visual, aural, read/write, and kinesthetic.
  Constructs extracted: Course Design

This study advances previous research on CSFs and e-learning effectiveness in online education by identifying and testing the importance of a wider set of constructs pivotal to perceived learning outcomes and student satisfaction. These constructs are grounded in constructivist learning models and include self-regulated learning, motivation (intrinsic and extrinsic), instructor, dialogue, and course design.

The next section discusses the defining characteristics of e-learning and presents a research model, derived from learning theory, illustrating the CSFs that affect e-learning system outcomes. This is followed by a description of the cross-sectional survey used to collect data and the results of a Partial Least Squares (PLS) analysis of the research model. The final section outlines the implications of the results for higher education institutions, discusses the limitations of the study, and proposes a future research agenda.

THEORIES OF LEARNING

The defining characteristics of e-learning are derived from the constructivist model of learning. The underlying premise of the model is that knowledge is constructed, as opposed to being transferred from the instructor to students, which is the view of the objectivist or behaviorist model (Piaget, 1977; Vygotsky, 1978). The constructivist model provides a theoretical base for studying e-learning environments (Jonassen, Davidson, Collins, & Haag, 1995). Several other learning models are considered to be extensions of the constructivist model. These include collaborativism, socioculturism, the cognitive information processing model (Leidner & Jarvenpaa, 1995), discovery learning (Bruner, 1985; Vygotsky, 1978), and facilitated learning (Rodgers, 1983). These models all assume that knowledge is constructed rather than instructed. However, they disagree about the way in which knowledge is best constructed.

Within the constructivist paradigm, one school of thought (constructivism) believes that knowledge is constructed individually and independently and that students learn better when they discover knowledge themselves at their own time and pace. During the independent discovery process, it is necessary for students to become self-regulated learners as well as active learners. Educational psychologists such as Pintrich and De Groot (1990) believe that self-regulated learning strategies are not enough to enable student achievement, and that students must be motivated to apply self-regulated learning strategies. Some believe that the core of self-regulated learning is self-motivation (Smith, 2001). Another implication of constructivism is the changing role of instructors from taking center stage to becoming creative mediators and facilitators of the learning process.

Another school of thought, collaborativism, assumes that knowledge is socially and collaboratively constructed through sharing. Accordingly, involvement, interaction, and dialogue between students and between the instructor and students are viewed as critical ingredients in the success of e-learning. A third school of thought, grounded in the cognitive information processing model, assumes that individual learners have different preferred learning styles and thus learn better when teaching methods and course design match their learning styles. The implication is that courses must be designed to fit a wide range of e-learners' learning styles.

Figure 1: System view of e-learning systems. [Figure summary: an input-process-output model. Inputs: students (motivation, engagement/efforts), the instructor (course design, facilitation, communication behaviors including feedback), and the LMS/IT (information quality, system quality). Process: the learning/cognitive process (perception, attention, cognitive load, coding, retrieval/transfer, and metacognition); student self-regulation, comprising motivation (intrinsic and extrinsic) and learning strategies (time management, metacognition, effort regulation, critical thinking, rehearsal, elaboration, and organization); dialogue (student-student and student-instructor); and learner characteristics: learning styles (physiological), personality (affective), information processing style (cognitive), and psychological differences (psychological). Outputs: learning outcomes and satisfaction.]

Table 1 highlights the goals, major assumptions, and implications of the constructivist models, adapted from Leidner and Jarvenpaa (1995). The last column shows the research constructs in our conceptual model (Figure 1) that were extracted from a review of the three learning models: self-regulated learning, motivation, instructor, dialogue, and course design. We view e-learning as an open system of three entities (students, instructors, and learning management systems, LMS) that continuously interact with one another and with their environments to optimize e-learning outcomes and student satisfaction (Figure 1).

The theoretical foundation of the research model (Figure 2) is based on the constructivist learning theories discussed above, and the model itself is derived from a synthesis of the Virtual Learning Environment (VLE) effectiveness model of Piccoli et al. (2001) and the Technology-Mediated Learning (TML) research framework of Alavi and Leidner (2001). The VLE model postulates that two antecedents, a human dimension and a design dimension, determine the effectiveness of e-learning systems. The human dimension is concerned with two human entities, students and instructors, and their various attributes, while the design dimension includes the LMS, self-regulated learning and learner control, course design, and interactions among the human entities. The terms LMS and VLE are often used interchangeably; a VLE refers to an operating system and specialized learning management software that allow students and the instructor to plan, organize, monitor, coordinate, and control learning activities to facilitate the learning process and optimize the desired learning outcomes.

Figure 2: Conceptual model of expected relationships. [Figure summary: the model links seven exogenous constructs (intrinsic student motivation, extrinsic student motivation, student self-regulation, instructor-student dialogue, student-student dialogue, instructor activities, and course design) to two endogenous constructs: user satisfaction (hypotheses H1a, H2a, H3a, H4a, H5a, H6a, and H7a) and learning outcomes (hypotheses H1b, H2b, H3b, H4b, H5b, H6b, and H7b).]

There are two distinct types of process that produce learning outcomes. The first, created and managed by the instructor, is specified in the course structure and design. The second characterizes students' psychological and cognitive learning; the two processes are interdependent. The TML framework consists of two major inputs, instructional strategy and information technology, that affect psychological learning processes, which in turn affect learning outcomes, including learners' affective reactions to e-learning (satisfaction). The students' learning/cognitive process is composed of a series of phases (perception, attention, cognitive load, coding, retrieval/transfer, and metacognition) that are supported by different types of memory (sensory memory, working memory, and long-term memory) (Alonso, López, Manrique, & Viñes, 2005).

The cornerstone of the TML research framework is that the psychological learning process is the primary antecedent of learning outcomes. It is affected by multiple dimensions of learners' characteristics, including biological characteristics/senses (physiological dimension), personality characteristics such as attention, emotion, motivation, and curiosity (affective dimension), information processing styles such as logical analysis or "gut" feelings (cognitive dimension), and psychological/individual differences (psychological dimension) (Dunn, Beaudry, & Klavas, 1989).

Of these multiple dimensions of learners' characteristics, prior research in e-learning and TML by information systems researchers has identified several key themes as CSFs of e-learning outcomes, including motivation (Eom et al., 2006; Hiltz, 1997), IT-enabled collaborative learning (Leidner & Fuller, 1997), and interaction. The TML research framework contributed to the development of our research model, which includes the constructs of course design and instructor (instructional strategy), extrinsic student motivation, intrinsic student motivation, student self-regulation, instructor-student dialogue, and student-student dialogue (psychological learning processes). However, the focus of the research is not on examining the interdependence of the constructs influencing perceived learning outcomes and the satisfaction of e-learners. Constructivist learning models (Table 1) and the TML research framework (as illustrated in Figure 1) both suggest that there are several pivotal elements of e-learning. However, the TML framework focuses on understanding the relationships between instructional strategy (course design), information technology, psychological learning processes, and learning outcomes.

CSFS OF E-LEARNING SYSTEMS

Motivation

Several attributes of students, the primary participants of e-learning systems, have been the subject of intense research over the past decade (Bitzer & Janson, 2014). Bitzer and Janson examined 85 articles published in peer-reviewed journals between 2000 and 2013, identifying 31 attributes of students that have a significant effect on satisfaction and learning outcomes. These include prior experience with the LMS, computer experience, self-efficacy, learning styles, motivation, metacognition, and learning engagement. Of these, we focus on motivation, self-regulated learning (including metacognition), and learning engagement. The term "metacognition" is often used to represent self-regulated learning. Metacognition is the kernel of self-regulated learning theory because it helps students plan, monitor, and control learning activities through the regulation of cognition and learning experiences. It consists of metacognitive regulation, metacognitive knowledge, and metacognitive experience (Flavell, 1979).

A previous study (Eom, Ashill, & Wen, 2006) failed to include self-regulated learning as a CSF of e-learning success. This noninclusion is a critical shortcoming of that work: self-regulated learning is the foundation of the constructivism on which distance learning theory is built, as shown in Table 1, and it is a pivotal learning strategy for achieving the intended e-learning outcomes. The present study excludes student learning style, which was a focus of previous work (Eom et al., 2006), because learning style is a parameter of e-learners that cannot be influenced by the instructor.

Student motivation is a psychological construct that activates the self-regulation process (Zimmerman, 2008). According to Ryan and Deci (2000, p. 56), intrinsic motivation is the psychological feature that makes an individual carry out an activity for its inherent satisfaction, for fun, or for the challenge entailed, rather than for some separable consequence. Extrinsic motivation, on the other hand, causes an individual to take an action toward a goal to attain a separable outcome such as a reward or recognition.

In Eom et al. (2006), intrinsic motivation was measured by two indicators. This study, however, measures the effects of both intrinsic and extrinsic motivation. Continuing research on motivation has produced empirical evidence of positive links between intrinsic motivation and satisfaction (Eom et al., 2006), motivation and student performance (Castillo-Merino & Serradell-López, 2014), social media engagement and motivational factors (Alt, 2015), and individual players' peer intrinsic and extrinsic motivation and intention to learn collaboratively and individually in a game-based learning environment (Kong, Kwok, & Fang, 2015). According to Castillo-Merino and Serradell-López (2014), motivation has the most direct, positive, and significant effect on students' achievement. We therefore hypothesize:

H1a: Students with a higher level of intrinsic motivation in online courses will report a higher level of user satisfaction.

H1b: Students with a higher level of intrinsic motivation in online courses will report higher learning outcomes.

H2a: Students with a higher level of extrinsic motivation in online courses will report a higher level of user satisfaction.

H2b: Students with a higher level of extrinsic motivation in online courses will report higher learning outcomes.

Self-regulation

One major assumption of the constructivist learning model is that learners learn better when they discover things themselves at their own time and pace, as shown in Table 1. This assumption further implies that when students are self-regulated and independent learners, they will be more successful in an online learning environment. According to Zimmerman (1989, p. 329), self-regulated learners are "metacognitively, motivationally, and behaviorally active participants in their own learning process. Such students personally initiate and direct their own efforts to acquire knowledge and skill rather than relying on teachers, parents, or other agents of instruction." He further theorized that students' use of self-regulated learning (SRL) strategies was strongly associated with superior academic functioning and was a significant predictor of students' academic outcomes. Students' self-regulated learning has three essential features: self-regulated students (1) select and use their self-regulated learning strategies to achieve desired learning outcomes, (2) continuously monitor the learning process and are responsive to self-oriented feedback about learning effectiveness, and (3) activate their interdependent motivational processes (Zimmerman, 1990).

The extant literature has shown that students' use of SRL strategies (metacognition, time management, and effort regulation) in a traditional face-to-face learning environment is strongly associated with positive learning outcomes (Richardson, Abraham, & Bond, 2012). In the context of e-learning, students' metacognitive self-regulation and self-esteem in online courses have been shown to be positively correlated with their cognitive and emotional engagement (Pellas, 2014).

Cognitive engagement refers to students' active participation in and intellectual efforts to create/construct new knowledge in the learning process using cognitive and metacognitive strategies. Metacognitive strategies refer to a wide range of strategies used by learners to become aware of and be in control of mental thought, including understanding their cognitive processes, learning about their learning styles, becoming aware of their cognitive biases, and determining the most effective problem-solving strategies. Emotional engagement is concerned with high levels of students' interest in and positive attitudes or values towards the learning process. A systematic review of research from 2004 to 2014 on self-regulated learning strategies and academic achievement in online higher education learning environments reveals that the strategies of time management, metacognition, effort regulation, and critical thinking are positively correlated with academic outcomes, but there is less empirical support for the positive impact of rehearsal, elaboration, and organization (Broadbent & Poon, 2015). We therefore hypothesize:

H3a: Students with a higher level of self-regulation in online courses will report a higher level of user satisfaction.

H3b: Students with a higher level of self-regulation in online courses will report higher learning outcomes.

Instructor-Student Dialogue and Student-Student Dialogue

Unlike traditional face-to-face classes that primarily use the lecture method of teaching, collaborativism assumes that knowledge is socially and collaboratively constructed through the shared understanding of a group of learners and draws on many different learning models, such as social collaborative learning, interactive learning, and discovery learning (Bruner, 1985; Vygotsky, 1978). The crucial component of the constructivist/cooperative learning model is interaction with other learners (Leidner & Jarvenpaa, 1995; Slavin, 1990). Interaction is a CSF that reduces the sense of distance in e-learning and promotes positive online learning experiences (Boling, Hough, Krinsky, Saleem, & Stevens, 2012). The qualitative study by Boling et al. revealed that most participants viewed courses involving little interaction with other students as being less helpful and as making them feel disconnected from their instructors and peers, compared with courses and programs that were more interactive.

In the e-learning literature, interaction and dialogue have often been used interchangeably, and many empirical studies have measured the effects of all types of interaction (negative, neutral, and positive) on learning outcomes and satisfaction. In this study, the term dialogue refers to purposeful, constructive, meaningful interaction that is valued by each party. Dialogue promotes learning through active participation and enables deep cognitive engagement for developing higher-order knowledge (Moore, 1997; Muirhead & Juwah, 2004). Empirical investigation of the relationship between interaction and both satisfaction and learning outcomes has been an ongoing research topic over the past decade. Unfortunately, considerable empirical research on e-learning in general (Arbaugh & Rau, 2007; Eom et al., 2006; Kuo, Walker, Schroder, & Belland, 2014; Wilson, 2007), and on the effects of interaction on learning outcomes specifically, has produced conflicting and inconsistent results due to different measures of dependent and independent constructs/variables, methodological problems, and the lack of a commonly accepted conceptual model. Learning outcomes are generally measured by either grades received or perceived learning outcomes.

However, many studies (Ekwunife-Orakwue & Teng, 2014; Kellogg & Smith, 2009; Wilson, 2007) have failed to establish a connection between interaction and learning outcomes as measured by course grades, or between interaction and e-learners' satisfaction. Course grades depend not only on interaction but also on many other factors, including the student's level of intelligence and effort. This study therefore focuses on the effects of meaningful interaction (dialogue) on perceived learning outcomes and satisfaction.

Although the studies examined here are not exhaustive, they give some indication of the inconclusive nature of the findings. For example, Kuo et al. (2014) found that student–instructor (SI) interaction is a significant predictor of student satisfaction but student–student (SS) interaction is not. Arbaugh et al. (2007) showed that all interactions (SS and SI) have a positive and significant effect on student learning outcomes; however, only SS interaction is significant in predicting satisfaction with the delivery medium, and its effect is negative. Another study by Arbaugh and Benbunan-Fich (2007) reported quite different results: SS interaction does not have a significant effect on perceptions of learning outcomes, but SI interaction is significantly associated with higher perceptions of learning. A possible contributing factor to the inconsistent findings is the mixed use of interaction and dialogue. This study focuses on the relationship between dialogue (positive and meaningful interaction) and two dependent constructs, students' satisfaction and learning outcomes. Not every interaction enhances or increases students' learning outcomes or satisfaction, as indicated by some studies (Grandzol & Grandzol, 2010; Wan, Wang, & Haggerty, 2008). These studies used student and faculty time spent in interaction as a measure of the level of interaction, and concluded that increased interaction may diminish desired program reputation. Furthermore, the volume and length of postings by students and instructors in online forums do not necessarily indicate the quality of the forums or of learning (Mazzolini & Maddison, 2007). Only meaningful interaction counts: meaningful interaction directly influences learners' intellectual growth, stimulates learners' intellectual curiosity, and helps them engage in constructive learning activities that directly influence their learning outcomes (Hirumi, 2002; Moore, 1997; Vrasidas & McIsaac, 1999; Woo & Reeves, 2007). We therefore hypothesize:

H4a: A higher level of perceived dialogue between the instructor and students in online courses will lead to a higher level of student satisfaction.

H4b: A higher level of perceived dialogue between the instructor and students in online courses will lead to higher learning outcomes.

H5a: A higher level of perceived dialogue between students and students in online courses will lead to a higher level of student satisfaction.

H5b: A higher level of perceived dialogue between students and students in online courses will lead to higher learning outcomes.

Instructor

One of the fundamental differences between the constructivist and behaviorist models is the role of the instructor in the learning process.

Unlike traditional face-to-face classes that primarily utilize lectures, the primary role of the instructor in e-learning is to be the "guide on the side" who supports learner-centered active learning, instead of the "sage on the stage" (Collison, Elbaum, Haavind, & Tinker, 2000; Heuer & King, 2004). According to the social collaborative learning model, students learn through the shared understanding of a group of learners. Instruction thus becomes communication-oriented, and the instructor becomes a discussion leader.

Some researchers on distance education point out that the role of the instructor has generally been a neglected area of research in online environments (Arbaugh, 2010). Arbaugh investigated two different roles of the instructor, a formal role (teaching presence) and an informal role (immediacy behaviors), and found that both were positive and significant predictors of student-perceived learning outcomes and satisfaction in online MBA courses. The formal role involves course design, the act of facilitating discourse, and direct instruction of cognitive and social processes to produce meaningful and educationally worthwhile learning outcomes (Anderson, Rourke, Garrison, & Archer, 2001). The informal role refers to communication behaviors that reduce social and psychological distance between students and the instructor, including calling students by their first name, using humor, and making complimentary or prompt comments on assignments. Hung and Chou (2015) identified students' perceptions of five online-instructor roles in blended and online learning environments: course designer and organizer, discussion facilitator, social supporter, technology facilitator, and assessment designer. Students receiving immediate feedback perceived it to be more useful for learning than delayed feedback and had a more positive attitude towards feedback (Kleij, Eggen, Timmers, & Veldkamp, 2012).

In an earlier empirical study (Eom et al., 2006), the instructor construct consisted of a mix of indicators representing the instructor's knowledge and roles as facilitator and intellectual stimulator. The current study excludes instructor knowledge as an indicator; although instructor knowledge is desirable, it is becoming less important than the instructor's role as a facilitator, as previously discussed. The formal role of the instructor is to facilitate, monitor, and provide timely and helpful feedback on assignments, exams, or projects. The informal role includes expressing a caring attitude and being responsive to student concerns, being a social supporter (Bailey & Card, 2009; Hung & Chou, 2015), exhibiting immediacy behavior (Arbaugh, 2010), and creating a learning community characterized by an atmosphere of trust and reciprocal concern (Wilson, Ludwig-Hardman, Thornam, & Dunlap, 2004). We thus hypothesize:

H6a: Students who perceive instructor activities in online courses more favorably will report a higher level of student satisfaction.

H6b: Students who perceive instructor activities in online courses more favorably will report higher learning outcomes.

Course Design

Course design is part of the formal role of the instructor.

According to Moore (1991, p. 3), the course structure "expresses the rigidity or flexibility of the program's educational objectives, teaching strategies, and evaluation methods," and describes "the extent to which an education program can accommodate or be responsive to each learner's individual needs." Course design is concerned with the planning and design of the course structure and with the process, engagement, interaction, and evaluation aspects of the course. A widely accepted course design standard is the Quality Matters (QM) rubric. QM is an international organization that represents broad inter-institutional collaboration and a shared understanding of online course quality (https://www.qualitymatters.org). The most recent (2014) QM rubric presents 43 standards within eight categories to improve and certify the design of online and blended courses. (1) Course overview and introduction: the instructor introduces the purpose and structure of the course and its various components (modules) in the learning management system and syllabus. (2) Learning objectives: the instructor states learning objectives and measurable learning outcomes from the learner's perspective; the relationship between learning objectives and course activities must be clearly expressed. (3) Assessment and measurement: a clear grading policy is required, and stated learning objectives must be measured in assessments using specific and descriptive criteria. (4) Instructional materials: all instructional materials should be current and contribute to the achievement of the stated learning objectives. The QM rubric includes four other categories: learner interaction, course technology, learner support, and accessibility and usability. The last category requires alternative means of access to course materials to meet the needs of diverse learners, such as visually or aurally impaired learners.

Course design and course structure can influence the learning process and learning outcomes, as shown in Swan et al. (2011). This is because "online classes are more successful in supporting deep learning when they are characterized by a community of inquiry" (Rubin & Fernandes, 2013, p. 125). We therefore theorize that course design and structure will be strongly correlated with user satisfaction and perceived learning outcomes, especially when course material is organized into logical and understandable components, is interesting, and stimulates e-learners' desire to learn. We thus hypothesize:

H7a: Students who perceive course design in online courses more favorably will report a higher level of user satisfaction.

H7b: Students who perceive course design in online courses more favorably will report higher learning outcomes.

METHODOLOGY

Survey Instrument

After conducting an extensive literature review, we designed a list of questions logically associated with the factors in the model (Appendix A).

The survey questionnaire was based on previous work (Eom et al., 2006) and was in part adapted from the commonly administered IDEA (Individual Development & Educational Assessment) student rating system developed by Kansas State University. In addition, questionnaire items on motivation and student self-regulation were adapted in part from the AIM inventory (Shia, 1998), the college student inventory (Stratil, 1988), and the Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich, Smith, Garcia, & McKeachie, 1993), an 81-item, self-reported instrument designed to measure college students' motivational orientations and use of different learning strategies (Pintrich, Smith, Garcia, & McKeachie, 1991).

Of the three questions (6, 7, and 8) used to measure student intrinsic motivation, two (6 and 7) were selected from the intrinsic goal orientation subset (4 items) of the motivation section of the MSLQ instrument, and one was selected from the academic intrinsic motivation (AIM) inventory (Shia, 1998). Measuring motivation as a psychological construct has been an important ongoing issue (Touré-Tillery & Fishbach, 2014). Some educational psychologists have postulated a tripartite taxonomy of intrinsic motivation (IM): IM-to-know, IM-to-accomplish things, and IM-to-experience stimulation (Vallerand et al., 1992). IM-to-know refers to participation in an activity for its inherent satisfaction, curiosity to learn, and the challenge arising from a need to know and understand (Ryan & Deci, 2000). IM-to-accomplish things is the desire to participate in and complete an activity (Vallerand et al., 1992). All four items of the intrinsic goal orientation subset of the MSLQ are concerned with the measurement of intrinsic motivation to know and understand. To capture another dimension of intrinsic motivation (IM-to-accomplish things), it was necessary to add question 8 from the AIM inventory (item 23).

Of the three questions (9, 10, and 11) used to measure student extrinsic motivation, two (9 and 10) were selected from the extrinsic goal orientation subset (4 items) of the motivation section of the MSLQ instrument, and one was selected from the AIM inventory (Shia, 1998). Three of the four MSLQ questions use grades as measures of external incentives; they reflect two types of external incentive, grades and recognition from others (e.g., family, friends, employer). One item (recognition from peers) was selected from the AIM inventory (Shia, 1998) in lieu of items 7 and 13 (representing grades) from the MSLQ instrument. The questions thus diversify the sources of external incentives to include recognition from peers and others, and grades.

Questions on self-regulated learning (30 through 33) were adapted from the learning strategies section of the MSLQ instrument. The MSLQ learning strategies scales comprise nine subscales: rehearsal, elaboration, organization, critical thinking, metacognitive self-regulation, time/study environment management, effort regulation, peer learning, and help seeking. Four items were selected from the pool of 50 items based on the three essential features of self-regulated learners (Zimmerman, 1990).
Zimmerman theorized that self-regulated students (1) plan, set goals, organize, monitor, and evaluate the learning process by themselves (question 30, MSLQ item 78), (2) continuously monitor the learning process and are responsive to self-oriented feedback about learning effectiveness (question 32), and (3) activate their interdependent motivational processes (effort regulation, as captured in question 31, MSLQ item 74) by working to achieve the goal.

Self-regulated students also select and use specific learning strategies on the basis of feedback in the earlier stages (question 33, MSLQ item). These three questions capture the essential features of self-regulated learners; the last question, on elaboration, reflects selected learning strategies, which vary by student. Question 32 (monitoring) was drawn from item 24 of the self-regulatory learning section of the college student inventory.

Questions 12 and 13 asked students about the formal roles of the instructor, who facilitates, monitors, and provides timely and helpful feedback on assignments, exams, or projects. Question 14 asked students about another formal role of the instructor, that of a facilitator who stimulates students to exert intellectual effort beyond that required by face-to-face classes. The intellectual efforts of students are necessary for various types of learning, such as independent discovery and cognitive engagement (Vygotsky, 1978), discovery learning (Bruner, 1985), and project-type learning (Dewey, 1938). Questions 15 and 16 measured the roles characterized by immediacy behaviors. This informal instructor role includes expressing a caring attitude and being responsive to student concerns, being a social supporter (Bailey & Card, 2009; Hung & Chou, 2015), exhibiting immediacy behavior (Arbaugh, 2010), and creating a learning community characterized by an atmosphere of trust and reciprocal concern (Wilson et al., 2004).

Questions related to course design were based on the QM standards. Question 25 explored learning objectives (Category 2); Question 26 focused on the logical structure of the modules (Category 1); Questions 27 and 28 asked whether a variety of instructional materials were used to enable e-learners to achieve course objectives (Category 4); and Question 29 was derived from the assessment and measurement category (Category 3).

Sample

E-mail addresses of 3,285 students were identified from student data files associated with every online course delivered through the online program of a university in the Midwestern United States. The 41 survey questions were created using SurveyMonkey. The survey URL and instructions were sent to all 3,285 e-mail addresses. A total of 382 valid, unduplicated responses were received (an 11.63% response rate). Of these responses, 10 incomplete responses with missing values were deleted. Appendix B summarizes the characteristics of the final sample of 372 students.

Research Method

PLS, a component-based path modeling technique (Chin, 1998), was used to test the research model. Unlike covariance-based structural analysis such as LISREL, the objective of PLS is to explain variance in the endogenous variables of a model that has managerial relevance (such as user satisfaction and learning outcomes). PLS is particularly well suited to operationalizing research models in an applied setting (Edvardsson, Johnson, Gustafsson, & Strandvik, 2000). Tests of the measurement model included estimating internal consistency and assessing convergent and discriminant validity (Hair, Hult, Ringle, & Sarstedt, 2013).

To evaluate the structural model, the R² values of the endogenous constructs and the size, t-statistics, and significance level of the structural path coefficients were computed using the bootstrap re-sampling procedure. Bootstrapping with 1,000 samples and sample sizes equal to the original sample size is fundamental to evaluating the significance of path coefficients (Efron & Tibshirani, 1993). To assess the extent of common method bias, the Harman one-factor test was performed following the approach described by Podsakoff et al. (2003). All items measuring the model constructs were entered into a common factor analysis with OBLIM rotation. The results revealed an eight-factor structure with no one factor accounting for more than 50% of the variance, suggesting the absence of common method bias.
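For readers who wish to replicate this evaluation logic, the sketch below illustrates the two resampling/diagnostic steps in Python. It is a minimal sketch rather than the authors' code: estimate_pls_paths is a hypothetical placeholder for whatever PLS-SEM engine is used, and the Harman check is implemented as the common simplified variant based on the first unrotated factor of the item correlation matrix.

```python
import numpy as np
import pandas as pd

def estimate_pls_paths(data: pd.DataFrame) -> np.ndarray:
    """Hypothetical placeholder: return the vector of structural path
    coefficients produced by a PLS-SEM estimator for `data`."""
    raise NotImplementedError("plug in a PLS-SEM engine here")

def bootstrap_path_tstats(data: pd.DataFrame, n_boot: int = 1000, seed: int = 0) -> np.ndarray:
    """Bootstrap re-sampling (each resample the same size as the original
    sample) to obtain standard errors and t-statistics for path coefficients."""
    rng = np.random.default_rng(seed)
    original = estimate_pls_paths(data)
    n = len(data)
    boot_estimates = np.empty((n_boot, original.size))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)           # sample rows with replacement
        boot_estimates[b] = estimate_pls_paths(data.iloc[idx])
    std_err = boot_estimates.std(axis=0, ddof=1)
    return original / std_err                       # t-statistic per path

def harman_single_factor_share(items: pd.DataFrame) -> float:
    """Simplified Harman one-factor check: share of total variance captured
    by the first (unrotated) factor of the item correlation matrix.  A value
    above roughly .50 would point to possible common method bias."""
    eigenvalues = np.linalg.eigvalsh(items.corr().to_numpy())
    return float(eigenvalues.max() / eigenvalues.sum())
```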

Measurement (Outer) Model Estimation

All individual item loadings (Table 2) were close to or above .70 and highly significant based on the bootstrap results of PLS. Falk and Miller (1992) and Chin (1998) stated that most loadings should be at least .60 and ideally greater than .70, indicating that each measure accounts for 50% or more of the variance of the underlying latent variable (Bagozzi & Baumgartner, 1994; Fornell & Larcker, 1981; Hair et al., 2013). Loadings of .5 or .6 are considered acceptable if there are additional indicators in the block for comparative purposes (Chin, 1998). With the exception of two items measuring student motivation, loadings were greater than .70, indicating general convergence of the indicators on their respective constructs. These two items were retained because they were theoretically grounded and there were other measures in the block for comparison purposes.

Construct reliability was assessed using two measures: the composite reliability measure of internal consistency and the average variance extracted (AVE). The internal consistency measure is similar to Cronbach's alpha, except that the latter presumes, a priori, that each indicator of a construct contributes equally (i.e., loadings are set to unity). All reliability measures were above the recommended level of .70 (Gefen, Straub, & Boudreau, 2000), indicating adequate internal consistency (Table 2). The AVE scores also exceeded the minimum threshold of .5 (Fornell & Larcker, 1981), ranging from .51 to .87.

Discriminant validity is established when variables do not cross-load on two or more constructs; in other words, each observed variable loads highly on its theoretically assigned construct and not on other constructs. In PLS, discriminant validity is assessed using two methods. First, item loadings and cross-loadings were examined. This analysis revealed that the correlations of each construct with its own measures were higher than its correlations with any other measures. Second, the square root of the AVE for each construct was compared with the correlations between that construct and the other constructs in the model. Adequate discriminant validity is demonstrated when the square root of the AVE for each construct is larger than the correlation between the construct and any other construct in the model (Fornell & Larcker, 1981). Table 3 illustrates that the square root of each AVE value is larger than any correlation among any pair of latent variables, thereby indicating discriminant validity.
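As an illustration of how the reliability statistics in Table 2 can be computed from standardized loadings, the following minimal Python sketch implements composite reliability and AVE. The numeric example uses the intrinsic student motivation loadings reported in Table 2 and reproduces values close to the reported IC of .75 and AVE of .51, with small differences attributable to rounding of the published loadings.

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """Composite reliability: (sum of loadings)^2 divided by
    ((sum of loadings)^2 + sum of error variances, i.e. 1 - loading^2)."""
    l = np.asarray(loadings, dtype=float)
    numerator = l.sum() ** 2
    return float(numerator / (numerator + (1.0 - l ** 2).sum()))

def average_variance_extracted(loadings) -> float:
    """AVE: mean of the squared standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    return float((l ** 2).mean())

# Intrinsic student motivation loadings from Table 2 (Q6, Q7, Q8).
intrinsic = [0.84, 0.74, 0.51]
cr = composite_reliability(intrinsic)        # about 0.75, matching the reported IC
ave = average_variance_extracted(intrinsic)  # about 0.50-0.51, close to the reported AVE
sqrt_ave = ave ** 0.5                        # compared with inter-construct correlations
                                             # for the Fornell-Larcker criterion (Table 3)
print(f"CR = {cr:.2f}, AVE = {ave:.2f}, sqrt(AVE) = {sqrt_ave:.2f}")
```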

Table 2: Model validation results

Construct and Items            Loading   t-Statistic   IC    AVE
Intrinsic Student Motivation                           .75   .51
  Q6                           .84       18.49
  Q7                           .74       11.28
  Q8                           .51        4.69
Extrinsic Student Motivation                           .79   .56
  Q9                           .69        4.01
  Q10                          .84        7.89
  Q11                          .71        4.65
Student Self-Regulation                                .84   .56
  Q30                          .77        4.68
  Q31                          .74        3.74
  Q32                          .73        2.81
  Q33                          .76        4.36
Instructor-Student Dialogue                            .94   .80
  Q21                          .89       25.30
  Q22                          .91       28.53
  Q23                          .86       21.69
  Q24                          .92       27.77
Student-Student Dialogue                               .97   .87
  Q17                          .91       41.57
  Q18                          .96       53.05
  Q19                          .95       53.35
  Q20                          .91       38.20
Instructor Activities                                  .92   .75
  Q12                          .91       32.54
  Q13                          .84       24.29
  Q14                          .79       22.56
  Q15                          .82       24.50
  Q16                          .89       29.86
Course Design                                          .91   .67
  Q25                          .79       22.63
  Q26                          .81        2.82
  Q27                          .82       21.57
  Q28                          .82       22.68
  Q29                          .83       20.84
User Satisfaction                                      .92   .75
  Q38                          .88       36.60
  Q39                          .92       43.54
  Q40                          .70       15.36
  Q41                          .95       37.35
Learning Outcomes                                      .93   .77
  Q34                          .88       33.89
  Q35                          .87       31.33
  Q36                          .87       25.41
  Q37                          .87       32.79

Notes: IC: internal consistency; AVE: average variance extracted. All significant p