The Pennsylvania State University College of Education Adult Education Program
BEST PRACTICES IN ONLINE STATISTICS EDUCATION: A REVIEW OF LITERATURE
A Master’s Paper in Adult Education
By Whitney Alicia Zimmerman
Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Education Fall 2012
Signature: ________________________________________ Date: _________________ Dr. Gary William Kuhne Associate Professor of Education Primary Reader
Abstract
Suggestions for best practices for teaching statistics online at the post-secondary level were compiled. Literature related to teaching online and teaching statistics was reviewed, as well as the available literature specifically on teaching statistics online. Recurring themes included the application of the community of inquiry model, the importance of affective factors, cognitive factors, and considerations of the content. In addition to the critical review of literature, the paper concludes with plans for future research in the form of a doctoral dissertation.
Table of Contents
Chapter 1: Introduction .......... 1
  My Position .......... 2
  About Online Education .......... 4
  Summary .......... 17
Chapter 2: Best Practices in Online Teaching .......... 18
  Environment .......... 18
  Choosing Technologies .......... 21
  Considerations of the Content .......... 22
  Learner Control .......... 27
  Student-Student Interactions .......... 28
  Instructor-Student Interactions .......... 31
  Professional Development .......... 33
  Conclusions .......... 34
Chapter 3: Best Practices in Teaching Introductory Statistics .......... 36
  Fundamental Statistical Principles .......... 36
  Affective Factors .......... 44
  Cognitive Considerations .......... 54
  Other Considerations .......... 56
  Summary .......... 57
Chapter 4: Review of Literature in Online Statistics Education .......... 58
  Affect Research .......... 58
  Achievement Research .......... 60
  Instructional Considerations .......... 62
  Conclusions .......... 65
Chapter 5: Final Discussion .......... 67
  Plans for Dissertation Research .......... 69
  Final Conclusions .......... 71
References .......... 72
Chapter 1: Introduction
Online statistics education is an important topic that affects many adult students who are working towards online degrees. In the fall of 2010, more than 6.1 million students were taking at least one online course (Allen & Seaman, 2011). Many of those online learners are likely to be required to take a statistics course at some point in their academic careers (see Onwuegbuzie & Wilson, 2003; Zeidner, 1991, as cited by DeVaney, 2010). Completing a high-quality statistics course will serve students in the long run: they will use their statistical knowledge in their careers and in their everyday lives (Gal, 2004; Gelman & Nolan, 2002; Kosonen & Winne, 1995). There is a relatively large body of literature on teaching online, as well as some on teaching statistics in traditional settings, but much less literature combines the two to provide suggestions for best practices in teaching statistics online. Though statistics education has a longer history than online education, research in the area of statistics education has only become of great interest in recent decades (Garfield & Ben-Zvi, 2004). The purpose of this paper is to review literature related to teaching statistics online in order to compile a list of best practices. This will help both instructors and instructional designers who work with online statistics courses. It will help me not only as an instructor, but also as a researcher; the paper concludes with a proposal for a related dissertation study.
This chapter introduces you to my perspective as the author and provides general background information on online education. The “My Position” section serves as a miniature interpretive autobiography. The following section introduces the reader to some of the most influential concepts in distance education. Each of these sections is vitally important to understanding the remainder of the paper.
My Position
It is important to acknowledge my previous experiences and viewpoints on learning because they influence how I interpret the ideas presented in this paper. They also highly influence how I present my ideas and what I choose to focus on. Readers should take this into account when interpreting this paper.
Personal Background and Goals
My academic background is in psychology and education. In college I was a psychology major and took many courses in elementary and secondary education. I was involved with two research projects. I assisted with research using rats to examine the effects of age on time perception; that study was based on behaviorist principles. My senior year I conducted an independent research project on gender differences in the social and cognitive play of preschool students, based on the works of Piaget (1962) and Parten (1932). My interest in online statistics education obviously did not arise from either of those experiences. Rather, I believe it developed as a result of my experiences as a student. As a student, I have taken 12 statistics courses; 4 of those courses were entirely online, while many of the others utilized supplementary online resources. I have also taken many online courses in education, psychology, and other social sciences. As a student, I have
noticed that the instructional methods that work best in an online statistics course are quite different from the methods that work best in an online course in the discipline of education. I have also had experiences from the instructor perspective. I served as a teaching assistant for an introductory educational statistics course for three semesters. I independently taught an intensive six-week summer course in intermediate educational statistics. As a teaching assistant and instructor of face-to-face statistics courses, I have explored some of the available online tools, with mixed results. Some of the methods I have used have been based on literature specific to statistics education or higher education in general. I am completing concurrent degrees in Adult Education and Educational Psychology. While completing this Master’s Paper in Adult Education, I am also beginning a doctoral dissertation in Educational Psychology. Through the research I have done to complete this paper, I was able to narrow my focus and plan a dissertation study, which will be discussed in Chapter 5.
My Beliefs on Learning
My views on learning have been highly influenced by my experiences in psychology. My beliefs are primarily cognitive-based with some behaviorist influences. In recent years my focus has been on cognitive load theory, a cognitively based theoretical framework, and, to a lesser extent, multimedia learning (see Mayer, 2005). Research based on cognitive load theory is vast, as shown by a review of instructional design research published between 1980 and 2008 that found that “Cognitive Load Theory” was the second most used phrase and
that six of the ten most cited articles were based on the theory (Ozcinar, as cited by Paas, van Gog, & Sweller, 2010). Cognitive load theory, which is the foundation of my perspective, is discussed in terms of three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is the inherent complexity of the material to be learned (Plass, Moreno, & Brünken, 2010). Extraneous cognitive load is produced by factors other than the complexity of the material, such as the instructional methods used. Germane cognitive load refers to the cognitive resources devoted to the learning processes (Plass et al., 2010). Germane cognitive load differs from the intrinsic and extraneous types in that it is not additive; it is independent of the other two (Sweller, 2010). Cognitive load theory will be discussed in later chapters in relation to teaching online and teaching statistics.
About Online Education
Online education refers to courses that are offered with an online component. Within that definition, there are varying levels of online and face-to-face components. For this paper, the descriptions proposed by Allen and Seaman (2011) will be used. A “traditional” course is one with no online component; the entire course is delivered in a face-to-face setting (note that a written correspondence course or a telecourse could also fit this description, though such courses will not be included in the majority of the discussions in this paper). A “web facilitated” course is one that is primarily taught in a face-to-face setting with less than thirty percent of the content being delivered online; most on-campus courses that use a course management system such as ANGEL, Blackboard, or WebCT fall into this category. A “blended” or “hybrid” course is one in which thirty to eighty
percent of the course content is delivered via online means and the remaining proportion of content is delivered during face-to-face meetings. Finally, an “online” course is one in which more than eighty percent of the course content is delivered online; these are the courses with which this paper is primarily concerned (I. E. Allen & Seaman, 2011). Online education is a relatively young discipline that may be thought of as a subcategory of distance education, and sometimes of adult education as well. Though online education is a somewhat new field, it has been growing rapidly. From 2009 to 2010, enrollment in online courses saw a ten percent increase while enrollment in all of higher education saw only a one percent increase. Nearly one-third of all students in higher education take some online courses (I. E. Allen & Seaman, 2011). The Internet is now the most popular form of distance education (Cejda, 2010). There is a great abundance of literature on teaching online, which will be discussed in Chapter 2. There are also cultural factors related to online teaching and learning. In Asian countries, online education is not as popular as it is in America. Similarly, Arab students have been shown to be more apprehensive about online education than American students (Yu, 2011). This paper will focus specifically on online education in the United States, though even within the American context it is important for instructors to be aware of possible cultural differences in their virtual classrooms.
Online Learners
“The learner is the most important element of the online learning environment...” (Conceição, 2007, p. 6)
Online learners are typically thought of as adult learners. The stereotypical online learner is a busy adult with career, family, and numerous other responsibilities that make attending a physical campus inconvenient or impossible. The flexibility that often comes with online coursework gives students the opportunity to pursue a higher level of education when they otherwise may have been unable to do so (Conceição, 2007; Noel-Levitz, 2011). While this is not always the case (Hansman & Mott, 2010), distance education and online education have strong ties to the field of adult education. Thus, it is important to discuss what is known about adult learners and online education before turning to the real purpose of this paper, online statistics education. In terms of demographics, a recent large-scale survey of nearly 100,000 online students at over 100 institutions found that two-thirds of online students were female. The majority (fifty-five percent) of students were 35 years old or older, and an additional thirty percent were between the ages of 25 and 34 years. Thirty percent of the students surveyed were graduate-level students. These students were all given a survey of their perceptions. Students who are primarily online students were compared with those who are primarily on-campus students; online students reported higher levels of satisfaction in all areas that were measured: institutional perceptions, enrollment services, instructional services, academic services, and student services. On average, online students placed more importance on institutional perceptions than on any other area, and significantly more than on-campus students did (Noel-Levitz, 2011). Not all adult learners are returning students. Being an adult learner is more a matter of individual characteristics than of age. In recent years, it has been
noted that more students who previously fell into the category of traditional students are now classified as adult learners. This is due to characteristics such as parenthood, marital status, employment status, military experience, and other responsibilities that are associated with being an adult (Hansman & Mott, 2010). According to the aforementioned large-scale survey of online students, some of the top reasons for enrolling in online courses included convenience, flexible pacing, work schedule, program requirements, reputation, and cost (Noel-Levitz, 2011). Moore and Kearsley (2005) discuss why adults choose distance education courses. They reiterate that adult students tend to be very busy individuals with family, career, and other obligations. Time and energy are limited, and distance coursework may be the most efficient means of obtaining the desired education. Adult learners are both internally and externally motivated. Many pursue education for personal development, while others do so to improve their career status and thus increase their incomes (Hansman & Mott, 2010). Although adults usually have years of experience as students, for many there is a gap in time between educational endeavors. This can lead to anxiety (M. G. Moore & Kearsley, 2005). An empirical study of first-year online doctoral students found that students tended to have a moderate level of anxiety. Anxiety towards online learning was higher than anxiety towards computers or the Internet. Anxiety decreased slightly over the course of a semester (Bolliger & Halupa, 2012). Some anxiety can have a positive influence on the learning experience. However, if anxiety is too high it can interfere with learning (Teigen, 1994; Yerkes & Dodson, 1908) and may lead to students dropping out (M. G. Moore & Kearsley, 2005).
Theories of Distance Education
Before teaching online can be discussed, it is necessary to have a basic understanding of the common theories in the field of adult and online education on which much of the research in the field is based. Knowles’s model of andragogy is a theory of adult learning; it will be discussed along with some of its criticisms. The community of inquiry model, commonly attributed to Garrison, Anderson, and Archer, will be discussed, as will Moore’s theory of transactional distance.
Andragogy
Just as pedagogy refers to the teaching of traditional-aged students (as well as K-12 students), Knowles’s concept of andragogy deals with teaching adults. It is based on the premise that children and adults learn differently. The theory of andragogy gives information about adult students and how they learn, which can be applied to the adult students who comprise the majority of online learners. The primary tenets of andragogy are that adults are independent and like to have some control over their learning experiences; adults prefer to have some input into what and how they learn; adults need to know why they are learning the given material and must perceive it as useful; adults do not simply accept everything they are told or read; adults have a wealth of personal experiences that can be incorporated into their learning; adults are ready to learn; adults often learn to solve current problems, not possible future problems; and adults have a higher level of intrinsic motivation than children (Chan, 2010; McGrath, 2009; Merriam, 2001; M. G. Moore & Kearsley, 2005).
Knowles’s model of andragogy is not universally accepted. Personally, I see his concepts as preferences rather than requirements. While adults may have these preferences, this does not mean that they will learn best if their instruction is based upon them. I also disagree with the idea that adults are intrinsically motivated. Many adults pursue higher education as a means of obtaining higher-paying employment, which is extrinsic motivation (Hansman & Mott, 2010). In the field, a common criticism is that andragogy lacks empirical evidence. According to Taylor and Kroth (2009), “the general body of literature critiquing andragogy asserts that it lacks the fundamental characteristics of a science because it cannot be measured” (p. 7). There are questions concerning whether or not andragogy is something that can be tested and used in empirical research. If it cannot be tested, then it cannot possibly be disproven. This leads to a greater philosophical debate as to whether a theory is really a theory if it cannot be disproven. The lack of clarification as to what andragogy looks like in practice has also been criticized. Classroom tests and grades are not in accordance with andragogical principles, which makes measuring the effectiveness of the application of the principles difficult.
Community of Inquiry
A community of inquiry has been defined as “a group of individuals who collaboratively engage in purposeful critical discourse and reflection to construct personal meaning and confirm mutual understanding” (“Community of Inquiry,” 2011). The community of inquiry model, as visually represented in Figure 1, describes the educational experience as being comprised of social presence, cognitive presence, and teaching presence
(Garrison, Anderson, & Archer, 2000). The three types of presence work together to form the educational experience (“Community of Inquiry,” 2011). This theory, or “framework,” was derived from the literature available at the time. The researchers found that much of the literature was focused on social and cognitive aspects as well as aspects of the facilitator (Garrison & Archer, 2007).
Figure 1: Elements of an Educational Experience (Garrison et al., 2000)
The community of inquiry model has been supported by empirical research. A community of inquiry survey has been developed to measure student perceptions of social, cognitive, and teaching presence in online courses (Arbaugh et al., 2008). A validation study
using a sufficient number of participants (N = 412) yielded good psychometric properties; the Cronbach’s alpha measures of internal consistency for the three subscales were high, ranging from .92 to .96, and a factor analysis confirmed that the items loaded on the three components (Díaz, Swan, Ice, & Kupczynski, 2010). Research studies have found that social, cognitive, and teaching presences are interrelated. Wanstreet and Stein (2011), for example, found a high correlation between social and cognitive presence (r = .799, p < .001) and a moderate correlation between teaching and social presence (r = .492, p = .038). The correlation between teaching and cognitive presence, however, was weak and not statistically significant (r = .219, p = .382). These correlations were calculated using 418 “units of meaning” obtained from an online course’s chat space. Structural equation modeling techniques have also been used to confirm the structure of the community of inquiry model (Garrison, Cleveland-Innes, & Fung, 2010). The community of inquiry model has been used to predict student perceptions in an online MBA course, where it was also confirmed to be comprised of three distinct constructs (Arbaugh, 2008). Garrison and Arbaugh (2007) have recognized the need for more research in online courses, especially in disciplines other than education. Research studies have found differences between disciplines; Arbaugh, Bangert, and Cleveland-Innes (2010) found differences, especially in cognitive presence, between courses in applied and pure disciplines. It has also been suggested that the model needs to be applied to whole courses as opposed to focusing on discussions (Archer, 2010; Shea, Hayes, & Vickers, 2010).
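For readers less familiar with the internal consistency statistic reported above, Cronbach’s alpha can be computed directly from an item-score matrix using the standard formula α = (k / (k − 1)) · (1 − Σ s²_item / s²_total). The following sketch uses entirely hypothetical Likert-style responses, not data from the validation studies cited here:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from 5 respondents to a 4-item subscale
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # → 0.93
```

An alpha in the .9 range, as in this toy example, would be interpreted the same way as the subscale values reported by Díaz et al. (2010): the items covary strongly enough to be treated as measuring a single construct.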
Most recently, a revised community of inquiry model has been proposed; a graphic representation is presented in Figure 2. The researchers have proposed a fourth type of presence: learning presence. They have also proposed new directional relationships (Shea et al., 2012). Another study by the same group of researchers has shown that learning presence may serve as a moderating variable for the other types of presence; factors of the learner may impact how they perceive teaching, social, and cognitive presences (Shea & Bidjerano, 2012). Research on this revised model is currently limited because it is in its earliest stages. More research is needed before the model can be adopted more widely.
Figure 2: Revised community of inquiry model (Shea et al., 2012)
Transactional Distance One of the most influential frameworks in distance education to date, the theory of transactional distance began in the early 1970s with Moore’s work on distance education
and learner autonomy, though the term “transactional distance” was not used until the early 1980s (Giossos, Koutsouba, Lionarakis, & Skavantzos, 2009; Shearer, 2010). According to Moore (2007), his early work was “an attempt to establish the identity of a previously unrecognized field of research… this theory would identify and describe teaching and learning that did not take place in classrooms, but took place in different locations” (p. 89). Transactional distance is not concerned with the physical distance between learners and instructors but rather with the hypothetical space within which their relationships exist (Giossos et al., 2009; Gorsky & Caspi, 2005; M. G. Moore, 1993). It provides a framework for discussing the interactions between dialog, structure, and learner autonomy (Shearer, 2009). Moore (2007) depicts this as a linear relationship between the level of dialog and the level of structure of a course, as seen in the more complex graphical representation in Figure 3. This depiction shows that as the amount of dialog increases, the transactional distance decreases. As the level of individualization increases, the transactional distance decreases (Stover, 2004). The less individualized a course is, the more structure it has. As structure increases, the level of necessary dialog decreases and transactional distance increases (Shearer, 2010).
Figure 3: Relationship between dialog, structure, and transactional distance (Stover, 2004)
The concept of transactional distance has been applied in the areas of educational theory, research, practice, and policy. It has been used as the building block of theoretical discussions and empirical research studies in distance education (Kang & Gyorke, 2008; Shearer, 2009; Shin, 2001). Applications to practice and policy range from discussions of the distance education policy deficit in Africa (Gokool-Ramdoo, 2009) to applications in secondary education settings (Murphy & Rodríguez-Manzanares, 2008) and postgraduate settings (Falloon, 2011). While Moore’s theory is seminal in the field of online education, it is not universally supported. The field lacks sufficient empirical studies to validate the theory. A review of published empirical studies by Gorsky and Caspi (2005) found that in the publications they reviewed, “either data only partially supported the theory or, that if they apparently did so, the studies lacked reliability, construct validity, or both” (Abstract). They argue that the
theory may be a “tautology” based on the idea that more dialog results in a sense of less distance in students. Based on my experiences with online education involving a high level of learner-learner communication, I agree with the argument of Gorsky and Caspi (2005).
Assumption: No Significant Difference
An assumption of this paper is that there is not a significant difference in the outcomes or overall quality between online courses and traditional face-to-face courses. While this is not yet universally accepted, it is supported by research (Russell, 2001). In a recent report, Allen and Seaman (2011) found that “one-third of all academic leaders continue to believe that the learning outcomes for online education are inferior to those of face-to-face instruction” (p. 5). The proportion of academic leaders who believe that online education is equivalent or superior to face-to-face education is increasing, though, from fifty-seven percent in agreement in 2003 to sixty-seven percent in agreement in 2011 (I. E. Allen & Seaman, 2011). Research articles on distance education published as early as 1928 have provided evidence for what may be referred to as the “no significant difference phenomenon.” In his book, The No Significant Difference Phenomenon, Russell (2001) compiled 355 research studies and found that an overwhelming proportion of those studies showed no significant difference between distance education and face-to-face education. Only a fraction of the studies included in Russell’s research utilized the Internet as the primary mode of communication. The earliest study reviewed was a 1928 study on correspondence study (Crump, as cited by Russell, 2001).
More recent meta-analyses have also addressed the differences between online and face-to-face instruction. Allen, Bourhis, Burrell, and Mabry (2002) studied the differences between courses offered face-to-face and those offered at a distance via different methods such as video, the Internet, or written correspondence. They found a statistically significant difference in favor of face-to-face courses in terms of student satisfaction, though the practical significance as measured by effect size was negligible. A more recent meta-analysis commissioned by the U.S. Department of Education examined differences between face-to-face and online/blended courses. Of the 50 contrasts extracted from the research studies examined, 11 were in favor of online/blended learning and 3 were in favor of face-to-face instruction. When comparing completely online courses to face-to-face courses, the results were neither statistically significant nor practically significant (mean effect size Hedges’ g+ = +0.05, p = .46). The researchers did find that using online instruction to supplement face-to-face instruction was beneficial (mean effect size Hedges’ g+ = +0.35, p < .0001) (Means, Toyama, Murphy, Bakia, & Jones, 2010). Online courses are thought to produce results similar to those of traditional face-to-face courses. It may be argued, however, that the courses typically studied are the most well-designed courses (Russell, 2001). With that in mind, we may still conclude that online education has the potential to produce equivalent results. The purpose of this paper, however, is not to address the quality or potential of online education; it is to identify best practices for teaching statistics online. This paper is written with the assumption that a statistics course can be successfully presented in the online format.
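To make the effect sizes above concrete: Hedges’ g is a standardized mean difference (Cohen’s d) multiplied by a small-sample bias correction. The sketch below uses hypothetical group summaries, not values drawn from the Means et al. (2010) report:

```python
import math

def hedges_g(m1: float, m2: float, s1: float, s2: float, n1: int, n2: int) -> float:
    """Hedges' g: standardized mean difference with a small-sample correction."""
    # Pooled standard deviation of the two groups
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                   # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)    # bias correction factor
    return d * correction

# Hypothetical contrast: online group (mean 82, sd 10, n 30)
# versus face-to-face group (mean 81, sd 10, n 30)
print(round(hedges_g(82, 81, 10, 10, 30, 30), 3))  # → 0.099
```

A g near 0.1, as in this toy contrast, would be interpreted as negligible in practical terms, much like the g+ = +0.05 reported for fully online versus face-to-face courses.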
Summary
The preceding chapter served to give the reader a sense of the perspective of the writer. Cognitive theories of learning, as well as the community of inquiry model and transactional distance theory in distance education, underpin the discussions in the following chapters. The assumption that online education is not inherently inferior to traditional face-to-face instruction must also be accounted for, as the purpose of this paper is not to lobby for online education but instead to provide suggestions for best practices. Chapter 1 included a very basic overview of the discipline of online education. Chapter 2 will focus on best practices in teaching online, addressing both the design of an online course and the interactions that occur within it; in other words, what happens before the course begins and what happens while the course is running. Chapter 3 will explore the existing literature on teaching statistics. The majority of that chapter will focus on teaching statistics in the traditional face-to-face setting because the literature in that setting is quite rich. Chapter 4 will look specifically at literature pertaining to teaching statistics online. Finally, Chapter 5 will include a summary of findings with suggestions for best practices and plans for future research in the form of a doctoral dissertation.
Chapter 2: Best Practices in Online Teaching
There are many books available that provide information about online teaching. This chapter will summarize and synthesize the suggestions frequently made in many of those books. Empirical research studies will be included to provide supporting evidence. The literature review process began with an outline developed using the books on teaching online. From there, empirical studies were found to build on the topics commonly discussed in the books. This chapter is not meant to be an exhaustive summary of all online teaching concepts; that would be well beyond its scope. Rather, the goal is to identify the main concepts related to online teaching that can be applied to teaching statistics online in Chapters 4 and 5. The online course design process is time-consuming and complex when compared to traditional course design (Ragan & Terheggen, 2003). There are many factors of the learners and the content that must be considered during the process. One of the major factors is the overall structure of the course management system. In most cases, instructors and even course designers have little control over the course management system that their institution uses as a platform for their courses; thus, the selection of course management systems will not be discussed. The focus here will be on aspects of teaching online that the instructor typically has some control over; aspects that are typically the responsibility of the institution will not be discussed.
Environment Philosophers have questioned where online learning actually takes place (for example, Kolb, 2000; McKie, 2000). The conversation here, however, will be more applied
and practical in nature. Both the physical environment that the learner is in and the virtual environment of the course are important to consider. In a traditional face-to-face course the instructor is in front of his or her students and can monitor their level of attention. This is not possible in an online course. Each student is likely in a different environment; some or many of those environments may be less than ideal, with distractions and no instructor present to keep them on task. While instructors have little or no control over where their students complete their coursework, they may make a point at the beginning of the course to discuss the importance of working in a good physical learning environment (Stavredes, 2011). The students are often separated geographically from one another and from the instructor, but, as the community of inquiry model and transactional distance theory note, there is a more psychological type of distance that must be addressed (Garrison & Archer, 2007; M. G. Moore, 2007). The interactions that students experience are a vital part of their online learning experience, not only for the direct learning that can occur but also for the social aspect that leads students to feel that they are part of a learning community (Bach, Haynes, & Lewis Smith, 2007). Being in a different physical location than their instructors can cause anxiety for students (Blake, 2000). While instructors cannot completely alleviate all of their students' anxieties, they can often reduce anxieties using the methods discussed throughout this paper. The structure of an online course should be conducive to learning. Bach, Haynes, and Lewis Smith (2007) illustrate this by pointing out that the way one behaves in a church, in most cases, is very different from how he or she behaves in a football stadium. The
environment influences behaviors. "Learning spaces need to be populated by learning communities" (Bach et al., 2007, p. 99). It is important to set the tone for the course on the first day. Students should be welcomed to the class and given direction so they do not feel lost or left out of the group. Online courses frequently begin with the instructor and students introducing themselves to one another via email or a course discussion board (Draves, 2000). This gives students an opportunity to practice using the course tools before they begin digging into the course content, while at the same time building a sense of community and likely increasing student retention (Ragan & Terheggen, 2003). The course environment should take into account cultural issues. While it may not be possible to create a learning environment that is ideal for learners from all cultural backgrounds, instructors and instructional designers should be aware of and sensitive to cultural differences. Ways to incorporate cultural awareness include activities that require students to discuss their cultural backgrounds, ensuring that course materials are not culturally biased, and monitoring student interactions for instances of stereotyping or other culturally insensitive tones (Liu, 2007). Emotion can also play a large role in how students approach online learning. Achievement emotions affect students' self-regulatory behaviors as observed through their study habits. Achievement emotions can be negatively impacted by feelings of isolation or by frustration associated with navigating online course materials. Experts suggest that online instructors and instructional designers should consider students' emotions when designing and teaching in the online setting (Artino & Jones, 2012). More specifically, empirical research has found that positive achievement emotions such as enjoyment have
positive impacts on student learning behaviors. Negative achievement emotions, especially boredom, have negative impacts on students' learning behaviors. Attention should be given to these achievement emotions when designing and teaching online courses (Tempelaar, Niculescu, Rienties, Gijselaers, & Giesbers, 2012).
Choosing Technologies Before selecting the technologies to be incorporated into a course, a number of factors must be considered: student characteristics, geographic locations of students, technology available to students, students' goals, goals of the organization, and financial considerations. All of these factors will influence the technologies that are selected (Shearer, 2007). The use of technology should be thought of as a way to encourage the formation of a community within the course (Ragan, 2000). Students need to be able to use the technology of the course (Bach et al., 2007). In terms of cognition, the extraneous cognitive load imposed by the technology should be limited. If a student is using a new or complicated technology, his or her ability to focus on the content will be limited. When introducing a new technology, the complexity of the material should be minimal to allow learners to focus their cognitive resources on mastering the technology. When the technology has been mastered, the complexity of the material may be increased (Stavredes, 2011; Sweller, 2010). The technological requirements of the course should be made clear at the beginning, and students should be given the opportunity to acquire the necessary skills before the course content is introduced (Ragan & Terheggen, 2003).
Technology should be selected that helps students meet the stated learning objectives of the course, not vice versa. The goals of the course may require synchronous or asynchronous communications, which dictate the appropriate technologies. Synchronous communication technologies include live chat rooms, video or audio communication methods such as Skype, and live interaction applications like Second Life; these all require learners to be available and active at the same time. Asynchronous communication technologies, such as online discussion boards and email, do not require all learners to participate at the same time, which makes them more flexible (Sweat-Guy, 2007). They may also allow students time to reflect before posting (Black, 2005), which may increase their confidence (Rahman, Yasin, Yassin, & Nordin, 2011). Asynchronous communication has been criticized for its minimal social presence, when compared to synchronous communication, which may lead to students' perceptions of isolation (McInnerney & Roberts, 2004). Research studies have compared the two types of communication methods and have found mixed results. Some have found that online students tend to prefer synchronous communication (see Sweat-Guy, 2007, p. 87) while others have found that students prefer asynchronous communication (see Black, 2005). The communication methods that are selected should take into account characteristics of the students, for example their technological skills, schedules, and time zones, as well as the learning objectives of the course.
Considerations of the Content There are three types of interaction through which learning occurs: learner-instructor, learner-learner, and learner-content. The amount of each of these types of interaction should be based on characteristics of the students and the nature of the course
content (Shearer, 2007). The instructional methods selected should always be congruent with the learning objectives of the lesson and course. In many cases it is not appropriate to simply translate the design of a face-to-face course to the online format. The design of an online course should take advantage of the online setting and its often asynchronous nature (Bach et al., 2007). The roles of the students, instructors, and environment are different in an online course. Teaching strategies that work in a face-to-face course may not work in an online course; in other words, pedagogical content knowledge may be environmentally dependent (Conceição, 2007). The complexity and nature of the course content must be taken into account when designing an online course. Cognitive load theory is a model of learning that pays special attention to working memory, where information is consciously processed; it deals with the complexity of the content. Mayer's theory of multimedia learning, on the other hand, takes the nature of the content into account. Both of these theories are highly applicable to the online learning context. Research on cognitive load theory is vast; in a review of publications between 1980 and 2008, Ozcinar (as cited by Paas et al., 2010) found that "cognitive load theory" was the second most popular phrase used in instructional design research. The theory can, and should, be applied to online instructional design in order to create learning opportunities for students that use their cognitive resources most effectively and efficiently. According to educational psychologist and well-known cognitive load theorist John Sweller (2005), "instructional design that proceeds without reference to human cognition is likely to be random in its effectiveness" (p. 28). It is imperative that the cognitive processes
of learners be taken into account when developing instructional materials. This is especially important in online educational settings, where instructional designers need to choose the appropriate instructional methods. Within the theory, cognitive load is discussed in terms of three different types: intrinsic, extraneous, and germane. Intrinsic cognitive load deals with the complexity brought by the material to be learned; extraneous load is the complexity brought by factors other than the difficulty of the material, such as the instructional methods being used or the manner in which the material is presented. Germane load is somewhat different from the other two in that it refers to the cognitive resources within the student that can be devoted to learning the material (i.e., the resources that can be devoted to working with intrinsic load). Working memory has a limited capacity; thus, the more cognitive resources that must be devoted to extraneous cognitive load, the fewer that can be devoted to intrinsic cognitive load (Mayer, 2005; Plass et al., 2010; Sweller, 2010). The amount of intrinsic, extraneous, and germane load associated with a task depends on characteristics of the learner. Intrinsic load will likely be higher for a novice learner as compared to an expert learner because the material will be much more difficult for the novice. Extraneous cognitive load will be higher in learners who are not familiar or comfortable with the instructional methods or technologies being used. On the other hand, germane cognitive load is likely to be higher for an expert learner when compared to a novice learner (Brünken, Plass, & Moreno, 2010). Cognitive overload occurs when the learner does not have the cognitive capacity to handle all of the information they need to
process. This leads to frustration and an inability to perform at the optimal level (Rutkowski & Saunders, 2010). Podcasts and other forms of audio media are commonly used in online courses. The use of such media may be convenient for many online learners who are busy adults; they may listen to recorded materials while driving to work or while at the gym. Unfortunately, audio is not always the most effective way to present instructional material; in many cases it can be quite ineffective. In discussing the use of audio media in online courses, Clark (2005) notes that audio is "transient" (p. 601), meaning that it is continuously moving. Segments of audio made available to online students should be kept short and should not contain information that students will need to refer back to. The use of podcasts should be limited to material that is simple enough for students to comprehend at the pace of the audio, and podcasts should be kept short to decrease the occurrence of cognitive overload or loss of interest (Clark, 2005). The use of audio in conjunction with visual displays requires a separate discussion. The use of dual modalities should be well thought out. Online courses have the capacity to use dual modalities (i.e., audio and visual material together); this may aid in learning if used effectively, but the cognitive processes by which students use these instructional materials must be taken into account. The auditory and visual channels are considered to be two subsystems of working memory that can work together to create more efficient learning experiences. Many research studies have shown that students who receive complementary audio and visual instructional materials together learn more efficiently than students who receive all instructional materials visually, such as a diagram with text (Brünken, Plass, & Leutner,
2004). The key here is that the audio and visual materials must be complementary and not redundant. Clark (2005) is an excellent resource, providing a list of guidelines for the use of audio with visual displays. These include using audio to explain complicated visual artifacts and omitting unnecessary components. The effort needed to comprehend a complicated visual display may be lessened by the inclusion of audio that explains it. It is better to use audio to explain complicated visual material than to use text, as the use of audio and visual material together maximizes the use of the students' cognitive resources. Unnecessary audio, such as background music or the reading of text that is also displayed, needlessly increases cognitive demands and should be avoided (Clark, 2005). Videos are sometimes used in online courses as a way to present content. Care should be used with animations, as they may increase students' extraneous cognitive load. Research has shown that in some cases using static images is more efficient and effective than using animations; in many cases static images and animations produce similar results, though there are things that can be done to improve their effectiveness (Ayres & Paas, 2007). Giving students control over animations has been shown to help, for example by making pause and rewind buttons available and easy to navigate (Hasler, Kersten, & Sweller, 2007). In general, the simplest method of delivery that meets the lesson and course objectives should be used. When introducing a new instructional method, it is best to begin with content that the student is already familiar with. The high extraneous cognitive load of
a new teaching method coupled with the high intrinsic cognitive load of new content will likely lead to cognitive overload (see Sweller, 2005, 2010).
Learner Control Learner control refers to the amount of autonomy that students experience. It is an important factor when selecting appropriate instructional methods. Online courses typically require a higher level of learner control than courses with a face-to-face component do (Yukselturk & Yildirim, 2008), and they tend to be more learner-centered as opposed to teacher-centered (Liu, 2007). In the online setting it is particularly important to consider the amount of autonomy that learners are expected to have. For example, a student may do very well in a course with a high level of autonomy if she has high levels of motivation and aptitude, while a student with low levels of motivation and aptitude may do better in a course with more structure and less learner autonomy (Shearer, 2007). Course design should take into account students' time management abilities. Expectations of how much time should be devoted to the course, as well as where within the course time should be spent, should be clear (Bach et al., 2007). Self-efficacy, locus of control, prior knowledge, and metacognitive skills have also been discussed as they impact students' efforts (Hannafin, Hill, Oliver, Glazer, & Sharma, 2003; Stavredes, 2011). Students' motivation levels may influence their abilities to be successful online learners. In fact, Stavredes (2011) devotes an entire chapter of her book Effective Online Teaching to theories of motivation; she cites Moore's theory of transactional distance and argues that online instruction requires the use of different techniques than face-to-face instruction, as the physical distance between learners and instructors may lead to feelings of
isolation, making it difficult for students to stay motivated. Many experts are in favor of the use of self-directed learning concepts in online education, which is often synonymous with independent or autonomous learning (Anderson, 2007; Draves, 2000).
Student-Student Interactions The community of inquiry model's social presence component emphasizes the importance of social factors in online learning. Students establish themselves as individuals within the course while building relationships with other students and gaining their trust. This leads to the formation of a community of inquiry (Ragan, 2000; Stavredes, 2011). This is especially important in online courses that require social interactions in the form of discussion boards or other collaborative work (Stavredes, 2011). Research has shown that student satisfaction is lower when there is a lack of student-student interaction (for example, Swan, 2001; Yukselturk & Yildirim, 2008). The incorporation of student-student interaction in an online course positively impacts students' motivation levels (Chou, as cited in Sweat-Guy, 2007) and lessens feelings of isolation (Liu, 2007). Discussion boards are frequently used in online courses. While some believe that discussion boards are simply used to recreate the interactions that occur in face-to-face courses, others argue that they do much more. Levine (2007), for example, states that online discussion boards can be used to encourage higher-order thinking in a constructivist manner through students' interactions with their classmates. "The discussion board has the potential to provide the basis for creating a climate whereby the learning process is not limited by the traditions of face-to-face instruction" (Levine, 2007, p. 68). The learning that occurs via
online discussion boards is often tied to social constructivist theories of learning (Black, 2005). In an online course it is often customary to have students write introductions at the beginning of the semester (Ko & Rossen, 2010). In a face-to-face setting students can physically see characteristics of their classmates that may influence how they act, react, and perceive their communications. In an online environment students do not have the visual cues that give them information about their classmates' ages, races, disabilities, or personalities. Gender may also not be a given in the online classroom, as many names are not gender specific. There may be instances when this type of anonymity is desirable in an attempt to control prejudice; in many cases, however, it is helpful for students to learn more about their classmates in order to reduce anonymity and increase the sense of community (Blake, 2000). Many students complain that they do not like group work. Group projects are often included in online courses with the argument that they prepare students for situations they will likely face in the real world. Group work encourages students to learn from one another (Stavredes, 2011). Instructors play an important role in group projects. Stavredes (2011) summarizes the instructor's role in group work with five points: (1) setting the stage, (2) creating the environment, (3) modeling the process, (4) guiding the process, and (5) evaluating the process (see pp. 143-144). "By focusing on criteria that relate to being a good team member, learners can be satisfied that those learners who choose not to participate fully will not be able to earn the same grade as those who do, which can lead to greater satisfaction
with team activities" (Stavredes, 2011, p. 144). Experienced educators often suggest having students complete peer evaluations (Ragan & Terheggen, 2003; Stavredes, 2011). Common conflicts in online groups include learners who do not contribute, culturally related differences, disagreements, poor communication, technical issues, and aggression (Stavredes, 2011). Stavredes (2011) recommends having a predetermined conflict resolution process for groups; if students cannot resolve their conflict within their group, then the instructor may need to intervene. Outside of group work there are additional forms of misbehavior that may occur in an online course that instructors need to address (Tropp, 2007). Online instructors have noted that students behave differently in an online environment; they are more likely to take risks and say things that they may not say if they were face-to-face with their classmates (Black, 2005; Joinson, 2007). This has been described as the online disinhibition effect. Individuals may act out of character by oversharing about their personal lives or by being overly kind; this has been labeled benign disinhibition and may be a tactic used to develop one's own personal identity (Suler, 2004). Online misbehaviors may be caused by a different type of disinhibition: toxic disinhibition. This occurs when individuals engage in behaviors that they would not engage in in the real world, such as arguing with others, using insensitive language, or inhospitably criticizing others. In the field of psychology, toxic disinhibition has been described as "a blind catharsis, a fruitless repetition compulsion, and an acting out of unsavory needs without any personal growth at all" (Suler, 2004, p. 321). Similarly, "flaming" may occur. There are numerous definitions of flaming, though most agree that it involves negative
communications that may be perceived as rambling (Joinson, 2007). These concepts have been backed up by research showing that individuals engage in more spontaneous self-disclosure in online communications than in face-to-face communications (Joinson, 2001). "Suler (2004) identifies six main factors that lead to an 'online disinhibition effect'… dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority" (Joinson, 2007). Online communications in which participants can see each other, such as in a video chat, decrease the frequency of these types of behaviors (Joinson, 2001). Negative types of communication can be managed in the online class by decreasing students' sense of anonymity and making them more aware of the lack of social cues in the online environment (Joinson, 2007). Instructors need to monitor student interactions and intervene when toxic behaviors occur (Tropp, 2007). Psychological aspects of student interactions should be considered by instructors. The aforementioned discussion of the online disinhibition effect is one example of this. Efficacy may also have a great impact on student participation. A study of online discussions in a postgraduate course found that some students had low Internet efficacy and were not familiar with online discussion boards. In those cases it is helpful to provide students with guidance and instructor expectations (Rahman et al., 2011).
Instructor-Student Interactions Instructor presence is another component of the community of inquiry model. It has a positive impact on student satisfaction. "The bottom line is that you must continuously monitor and engage with learners to help them persist" (Stavredes, 2011, p. 151). Stavredes (2011) lists five primary purposes of instructor-learner interaction: (1) encourage
participation; (2) encourage knowledge construction and critical thinking; (3) monitor student progress; (4) provide feedback; and (5) encourage students to be self-directed (see p. 164). Interactions between instructors and students can decrease the transactional distance perceived by students and help alleviate some students' feelings of isolation. Instructors should build a positive relationship with students early in the semester (Stavredes, 2011). Interactions with students should be timely to facilitate their cognitive development and avoid a decline in motivation (Rahman et al., 2011). They should also be positive and not judgmental (Van Keuren, 2006). Research studies have shown that the use of audio and video communication decreases transactional distance and increases teaching presence by making instructors seem more real (Borup, West, & Graham, 2012; Mathieson, 2012). However, recall the cognitive load implications of using video for instructional purposes (see Clark, 2005). Videos may be most appropriate as a way to introduce students to the course or a new unit; they should not be used as the primary content delivery method. Stavredes (2011) lists eight types of interactions that instructors may use to encourage the cognitive development of students: (1) prompts, (2) elaboration, (3) clarification, (4) weaving, (5) perspectives, (6) inferences and assumptions, (7) implications, and (8) summaries (see pp. 157-158). Most of these are self-explanatory. Weaving is a technique that combines the contributions of multiple students (Stavredes, 2011). The length of the instructor's postings should be kept short, as students may not read long
postings in their entirety. Humor should be used cautiously; sarcasm cannot always be read as intended and may be misinterpreted by students (Draves, 2000).
Professional Development Online faculty professional development is important, as many online instructors, especially in the discipline of post-secondary statistics, do not have a formal background in teaching and learning. Online instructors have learned from personal experiences and from professional development experiences. For many, online teaching is a new concept. While nearly all have had experiences as students in face-to-face settings that they have learned from, far fewer have had experiences as online students. Professional development opportunities vary from institution to institution. Around ninety-four percent of institutions with online programs offer some sort of training or mentoring for their faculty who teach online; seventy-two percent offer internally organized professional development and fifty-eight percent facilitate informal mentoring (I. E. Allen & Seaman, 2011). After performing a literature search, Moore (2006) concluded that "overall, the picture regarding the faculty response to opportunities for training and development in teaching at a distance is not very bright" (p. 61). He found that faculty participation was higher in instances where attendance was required or rewards were given. More research into the impact of professional development may lead to more institutions placing an emphasis on it.
Conclusions There is an abundance of books and articles on the topic of online instruction. After reviewing many of them, I wrote this chapter focusing on the recurring themes. The community of inquiry model and Moore's theory of transactional distance could be related to many of the prominent topics in online education. I was also able to draw from the field of cognition to apply cognitive load theory and multimedia learning concepts. The topic of teaching online will be revisited in Chapter 4 with the discussion of teaching statistics online. The following set of bullet points summarizes the key issues and suggestions for best practices that were uncovered in the review of literature on teaching online. Though these points were not explicitly addressing online statistics education, they may be applied in that specific context.
• Create a positive and professional environment
• Begin an online course by asking students to introduce themselves on a class discussion board; encourage interaction
• When selecting technology and learning activities, take into account the learning objectives and the complexity of the materials
• When designing and teaching online, take into account characteristics of the students (motivation, need for structure, etc.)
• Monitor student interactions; intervene if toxic behaviors occur
• Be timely in responding to student requests and providing feedback
• Be explicit in communications; do not use sarcasm or humor that may not be communicated well via email
• Professional development opportunities should be provided by institutions and attendance should be required
Chapter 3: Best Practices in Teaching Introductory Statistics "Statistics teaching can be more effective if teachers determine what it is they really want students to know and do as a result of their course – and then provide activities designed to develop the performance they desire" (Garfield, 1995, p. 32)
Just as the area of online teaching is broad, the research on teaching introductory statistics is also quite broad. This chapter will first discuss the fundamental principles of introductory statistics that underlie many of the topics taught in such a course. After discussions concerning content, topics related to instruction will be discussed. Emphasis will be placed on the affective factors associated with online statistics courses. Cognitive factors will also be considered.
Fundamental Statistical Principles From my personal experiences and after an extensive review of literature on teaching statistics, four key principles in introductory statistics were selected for focus here: variation, central tendency, probability theory, and statistical reasoning. These four principles are recurring themes in the topics typically covered in introductory statistics courses. This chapter focuses primarily on studies and theories that were developed in the face-to-face setting. These principles also warrant great attention in the online environment. Methods for teaching statistics specifically in an online setting will be the focus of the final two chapters. Along with descriptions of each of the four key principles that have been selected, suggestions for teaching will be given. I suggest that these key principles be explicit in an
introductory statistics course. They are the "big ideas" that should be at the center of the course (Wiggins & McTighe, as cited by P. Cobb & McClain, 2004, p. 381). Variation Variation is a key concept in understanding statistics (Hammerman & Rubin, 2004; Wild & Pfannkuch, 1999). It was a recurring theme in the articles and books reviewed; "it is the fact that all processes vary which creates the need for statistics" (Meletiou-Mavrotheris & Lee, 2002, p. 24). It is because of variation that we need statistics (G. Cobb & Moore, 1997). Understanding the concept of variation is essential if students are going to understand statistics; therefore, it should also be a central part of teaching statistics (Gould, 2004; Meletiou, 2002; Pfannkuch & Wild, 2004; Reading & Shaughnessy, 2004). It has been argued that variability should be given more of a focus than probability theory when teaching introductory statistics (Ballman, 1997) and that it may be the root of many students' statistical misconceptions (Meletiou, 2000). Despite the increased attention being given to variability, the research on how students reason about variability is underdeveloped (Ben-Zvi & Garfield, 2004a; Meletiou, 2000). In order for students to understand statistical concepts they must have an understanding of the variation, or noise, that occurs in a sample of data (Garfield & Ben-Zvi, 2004). The difference between explained and unexplained variation plays a vital role in many statistical procedures (Pfannkuch & Wild, 2004). Older methods of teaching statistics do not focus on this but rather focus on measures of centrality and sometimes randomness; thus students do not develop an understanding of the importance of variation (Meletiou, 2000). In some cases statisticians want to decrease variance and in other cases they want to
increase it. It is a complex concept, but one that is necessary in order to fully understand inferential statistics (Gould, 2004). Visual representations using computer software may aid some students' understandings of variability (Hammerman & Rubin, 2004). In a study of elementary school children, the concepts of growing samples and shapes were used to encourage reasoning about variability. The children explored the shape of distributions, such as weight, and were able to reason that there would be more people near the mean than at the extremes (Bakker, 2004). Though that particular study, as well as many others, focused on elementary-aged children and the focus of this overall paper is on adult students, many of the concepts do transfer. The inclusion of statistics in the K-12 curriculum is relatively recent, with attention given to the subject in the 1980s, for example by the National Council of Teachers of Mathematics (Shaughnessy, 1992). Central Tendency The statistical principle of central tendency is typically introduced at the beginning of introductory statistics courses. It is at the center of many of the inferential statistics topics covered, such as t-tests and analysis of variance (ANOVA), and is involved in nearly all topics covered, such as z score transformations, correlation, and regression (Coladarci, Cobb, Minium, & Clarke, 2011; Glass & Hopkins, 1996; Reading & Shaughnessy, 2004). Though some experts believe that it is given too much attention (Reading & Shaughnessy, 2004), without an understanding of central tendency statistics students would be lost. Research on the development of knowledge of central tendency has been conducted on elementary school-aged children. It shows that learners develop at least a
basic understanding of measures of central tendency long before they are undergraduate or graduate students in a formal statistics course (for example, Konold & Pollatsek, 2004; Mokros & Russell, 1995; Strauss & Bichler, 1988). Changes have been made at the K-12 level since the 1990s to include more statistics instruction in an attempt to increase students’ statistical reasoning and understanding of concepts such as central tendency and variability (Reading & Shaughnessy, 2004). While there is a fair amount of research focusing on the initial development of an understanding of central tendency and its implications for K-12 education, the focus in this paper is on teaching statistics to adults at the undergraduate or graduate level. Experts suggest that much of what is known about younger students may also be applied to adult learners (Gal & Garfield, 1997a). Related to central tendency, the Central Limit Theorem is a very important statistical principle that should be a focus of an introductory statistics course (Konold & Pollatsek, 2004). “The Central Limit Theorem provides a theoretical model of the behavior of sampling distributions, but students often have difficulty mapping this model to applied contexts. As a result, students fail to develop a deep understanding of the concept of sampling distributions and therefore often develop only a mechanical knowledge of statistical inference. Students may learn how to compute confidence intervals and carry out tests of significance, but they are not able to understand and explain related concepts, such as interpreting a p-value” (Chance, DelMas, & Garfield, 2004, p. 311).
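The sampling-distribution behavior the Central Limit Theorem describes can be made concrete with a short simulation. The following is a minimal sketch (the skewed population values are invented for illustration, not drawn from any cited study): samples of a given size are drawn repeatedly, each sample mean is recorded, and the collection of means tightens around the population mean as the sample size grows.

```python
import random
import statistics

def sampling_distribution_of_mean(population, n, reps=5000, seed=1):
    # Repeatedly draw samples of size n and record each sample mean.
    # The CLT predicts these means pile up around the population mean,
    # with a spread (the standard error) that shrinks as n increases.
    rng = random.Random(seed)
    return [statistics.mean(rng.choices(population, k=n))
            for _ in range(reps)]

# A skewed "population": mostly small values, a few large ones.
population = [1] * 70 + [5] * 20 + [20] * 10

for n in (2, 10, 50):
    means = sampling_distribution_of_mean(population, n)
    print(n, round(statistics.mean(means), 2), round(statistics.stdev(means), 2))
```

Printing the mean and standard deviation of the simulated sample means for several sample sizes shows the center staying near the population mean while the spread shrinks, which is exactly the behavior the applets discussed in the literature let students explore interactively.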
Interactive computer applets are available that allow students to manipulate factors such as sample size or normality (for example, Ogden, 1996; “Sampling distribution,” n.d.). However, research has shown that simply making these applets available for students to use does not help them gain a strong conceptual understanding of sampling distributions. The use of such applets can be enhanced by asking students to make predictions before running the applet (Chance et al., 2004).
Probability Theory
Probability is relevant in everyday life. Advertisers frequently make claims involving probabilities, such as “4 out of 5 doctors recommend this product.” Prescriptions come with warnings such as “20% of people who take this drug experience severe side effects.” Regardless of their education levels, adults should be able to understand probabilities as they impact their lives (Gal, 2004; Metz, 1997). Though some experts may disagree (Ballman, 1997; Meletiou-Mavrotheris & Lee, 2002; D. S. Moore, 1999), basic probability theory does play an important role in learning statistics. It is vital for understanding the concept of p-values, which underlie statistical tests and are the primary tool for making decisions in modern-day statistical practice, which relies heavily on statistical software. A basic understanding of probability and chance is necessary (Gal & Garfield, 1997a). Misconceptions concerning conditional probabilities, such as those related to statistical testing, are common. More specifically, there is confusion as to why we reject the null hypothesis but do not accept it. This may be related to students’ understandings of
causality and conditionality. Researchers recommend that this issue be combated with the use of real-life examples (Shaughnessy, 1992).
Statistical Reasoning
Statistical reasoning is a principal statistical concept as it pulls together the procedural and the practical. “Statistical reasoning may be defined as the way people reason with statistical ideas and make sense of statistical information (Garfield & Chance, 2000). This involves making interpretations based on sets of data, representations of data, or statistical summaries of data. Much of statistical reasoning combines ideas about data and chance, which leads to making inferences and interpreting statistical results. Underlying this reasoning is a conceptual understanding of important ideas, such as distribution, centre, spread, association, uncertainty, randomness, and sampling” (Garfield, 2003, p. 23). According to Tempelaar, Gijselaers, and van der Loeff (2006), “statistical reasoning is based upon an understanding of important concepts such as distribution, location and variation, association, randomness, and sampling, and aims at making inferences and interpreting statistical results” (¶ 1). Statistical reasoning may be difficult for students for a number of reasons. For one, the abstract nature of statistics creates hurdles for learners. The logic behind the null and alternative hypotheses, and being able to reject the null hypothesis but not accept the alternative (instead we “fail to reject the null”), also runs counter to human intuition (DelMas, 2004).
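The “fail to reject” logic described above can be made concrete with a small simulation. The sketch below is illustrative only (the coin-flip scenario and numbers are invented, not drawn from the cited studies): the p-value answers “if the null hypothesis were true, how often would data at least this extreme occur?”, which is why a large p-value licenses only a failure to reject the null, never its acceptance.

```python
import random

def simulated_p_value(observed_heads, n_flips, reps=10000, seed=7):
    # Two-sided p-value by simulation: under the null hypothesis of a
    # fair coin, count how often a result at least as extreme as the
    # observed one occurs. A large p-value means the data are merely
    # consistent with the null -- we "fail to reject" it, which is not
    # the same as showing the coin is fair.
    rng = random.Random(seed)
    expected = n_flips / 2
    threshold = abs(observed_heads - expected)
    extreme = 0
    for _ in range(reps):
        heads = sum(rng.random() < 0.5 for _ in range(n_flips))
        if abs(heads - expected) >= threshold:
            extreme += 1
    return extreme / reps

# 60 heads in 100 flips: the simulated p-value lands near the exact
# binomial value of about .057 -- not small enough to reject at .05.
print(simulated_p_value(60, 100))
```

Working through an example like this lets students see that a non-significant result reflects how often chance alone produces such data, rather than positive evidence for the null hypothesis.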
Experts in the field of statistics education argue that statistical reasoning should be the focus of introductory statistics courses, as opposed to the more traditional focus on procedures and mathematical calculations (Ben-Zvi & Garfield, 2004b). “Statistical reasoning needs to become an explicit goal of instruction if it is to be nourished and developed” (DelMas, 2004, p. 91). An international researcher tells a story about a visit to an American 5th grade classroom during a statistics lesson. The teacher asked a question unrelated to measures of centrality, and a student “thoughtlessly muttered ‘meanmedianmode,’ as if it were one word” (Bakker, 2004, p. 64). The purpose of this example is to show that students do not always use statistical reasoning but rather attempt to regurgitate what they have been taught. Being able to compute the mean, median, and mode is only useful if students know when to do so. Some view statistics as belonging to the liberal arts, with connections to mathematics, because the focus should not be on the computational aspects but rather on the conceptual underpinnings (D. S. Moore, 1998); in fact, “statistics did not originate within mathematics” (Pfannkuch & Wild, 2004, p. 17). The mathematical skills necessary for the typical introductory statistics course are minimal (G. Cobb & Moore, 1997), with the most complex procedures being square roots and the order of operations. Thus, statistics should not be considered a discipline to be taught by mathematicians (D. S. Moore, 1988). The focus of statistics courses should be on the application of statistical concepts and not on mathematics. Closely related to statistical reasoning is the notion of statistical literacy. This includes the ability to be a responsible consumer of statistical information by being able to interpret and critically evaluate it, as well as the ability to communicate with others concerning
such information. Statistical literacy is relevant in a number of situations. Most directly, students will need to be able to understand research in their fields of study. Statistical literacy also relates to students’ personal lives, whether they are gambling at a casino or reading a newspaper article (Gal, 2004). Empirical research supports the idea that learning statistical rules does transfer to everyday contexts (Kosonen & Winne, 1995).
Students’ Misconceptions
While statistics makes intuitive sense to some students, for many it may seem irrational (Hawkins, 1996). Some of the misconceptions that I have encountered as an instructor and through my review of the literature will be discussed here; the issues discussed are by no means exhaustive. Somewhat of a self-fulfilling prophecy, though not in the conventional psychological sense, occurs when students have data but fail to base their conclusions on the descriptive and inferential statistics that they have performed. Personal beliefs may impact how students interpret their statistical findings, which may lead to misinterpretations (Pfannkuch & Wild, 2004). Students are more critical of results that oppose their initial expectations (Meletiou, 2000). The Statistical Reasoning Assessment (SRA) is an instrument that measures students’ statistical reasoning skills as well as their misconceptions. It was originally developed as a measure to be used with a new high school-level statistics curriculum (Tempelaar et al., 2006). It consists of 20 multiple-choice items related to statistics and probability (Garfield, 2003). Those 20 items may be used to compute 8 correct-reasoning subscales and 8 misconception subscales (Tempelaar et al., 2006). While the short length of the assessment
makes it attractive, its psychometric properties are questionable. Correlations between SRA results and final course grades are low (Garfield, 1998, and Garfield & Chance, 2000, both as cited by Tempelaar et al., 2006). Correlations between items are also low (Garfield, 1998, Garfield & Chance, 2000, and Liu, 1998, all as cited by Tempelaar et al., 2006).
Affective Factors
The affective aspects of learning statistics should not be ignored (Gordon, 2004). Many students fear their required statistics courses and come into the class with a high level of anxiety, sometimes humorously referred to as “statisticophobia” (Dillon, 1988; Dunn, Smith, & Beins, 2007). For some students, statistics is a hurdle standing in the way of obtaining their degrees. Their anxieties may be so high that they put off taking their required statistics course until one of their final semesters (Onwuegbuzie & Wilson, 2003). Non-cognitive factors, such as attitudes, personal interests, expectations, and motivation levels, can impact students’ performance (Ashaari, Judi, Mohamed, & Wook, 2011; Yi & Jeon, 2009). My personal experiences as well as the literature on statistics education support these notions. Attitudes and beliefs can play a huge role in the statistics classroom. As a teaching assistant and instructor I have witnessed what may best be described as a self-fulfilling prophecy; some students are so overwhelmed with fear, anxiety, and feelings of low self-efficacy that it interferes with their ability to put forth their best efforts. Some experts recommend monitoring students’ attitudes and using that information when planning for their learning (Cashin & Elmore, 2005; Gal, Ginsburg, & Schau, 1997).
We will later see, in an examination of the Yerkes-Dodson Law, that some anxiety is a good thing. But for some students the level of anxiety is so high that it is debilitating. The first step to helping those with anxiety is assessing it (Ghinassi, 2010). In the following section I will review some of the most frequently used surveys of attitudes and anxiety specific to the statistics classroom.
Measures of Statistics Attitudes and Anxieties
Numerous instruments for measuring statistics-related anxiety and attitudes have been developed, for example the Statistics Anxiety Rating Scale, the Statistics Attitude Survey, the Attitudes Toward Statistics scale, and the Survey of Attitudes Toward Statistics. Empirical research on the attitudes and anxieties of statistics students has often depended on these measures. I may rely on one or more of these measures in my dissertation or future research. These instruments have been subjected to studies of their psychometric properties, and there is empirical evidence supporting the relationship between scores on these assessments and student performance in introductory statistics courses (Cashin & Elmore, 2005). The Survey of Attitudes Toward Statistics (SATS) was initially developed more than fifteen years ago and was originally designed with four subscales: affect, cognitive competence, value, and difficulty. The original form (SATS-28) consists of 28 items rated on a 7-point Likert-type scale (Cashin & Elmore, 2005; Schau, Stevens, Dauphinee, & Del Vecchio, 1995). The SATS was developed and refined using a relatively large sample of American students. Both undergraduate and graduate students were initially involved in the development of the assessment. However, only undergraduate students were included in the factor analysis that provides evidence for the instrument’s four-factor structure because
of noted differences between undergraduate and graduate courses (Schau et al., 1995). Since its original development, a 36-item form of the survey (SATS-36) has also become popular, and both four- and six-factor models have been tested. The six-factor structure contains the four original factors plus interest and effort. Pre- and post-test versions of the instrument are also available: identical items in future and past tense (VanHoof, Kuppens, Sotos, Verschaffel, & Onghena, 2011). The estimated time to complete the survey is 10 minutes (Schau, 2005a). Since the SATS’s development in 1995, a Korean version of the assessment has been developed: the K-SATS. A validation study conducted in Korea showed that the K-SATS may consist of five factors: interest, effort, cognitive competence, value, and difficulty (Yi & Jeon, 2009). The SATS has also been translated and studied in Italian (Chiesi & Primi, 2009), German (Strobl, Dittrich, Seiler, Hackensperger, & Leisch, 2010; Zimprich, 2012), and Dutch (VanHoof et al., 2011). Examinations of the psychometric properties of the SATS have focused on its structure. Confirmatory factor analysis methods were used with the SATS-36 to compare four- and six-factor models. Because items are on an ordinal-level scale, Robust Maximum Likelihood (RML) methods were selected. Polychoric correlations were used to estimate the underlying variables that were then used in the confirmatory analyses of the different factor structures. Covariance methods with Satorra-Bentler adjustments were also used. Modified versions of each model were also tested after initial analyses were used to eliminate some items. While the difference between the two models was very small, the six-factor model was shown to be superior (VanHoof et al., 2011).
A different instrument with a very similar name, the Attitudes Toward Statistics scale, was developed by a researcher at the University of Nebraska-Lincoln in the mid-1980s. Items were written to focus on students’ attitudes toward the class they were enrolled in and attitudes toward the application of statistics in their major discipline. These two subscales are labeled “course” and “field,” respectively, and they share little variance (Wise, 1985). The Statistics Attitude Survey is noted as being the first instrument of statistics attitudes to be developed (Cashin & Elmore, 2005). The Survey consists of 34 items on a traditional 5-point Likert scale. Analyses of data collected from undergraduate and graduate students at The Pennsylvania State University have shown that it has a one-dimensional structure (Roberts & Bilderback, 1980). A study of the Statistics Attitude Survey and the Attitudes Toward Statistics scale showed that they were very highly correlated (observed r = .88; with adjustment for measurement error, r = .96). That study was performed in response to criticisms from the developer of the Attitudes Toward Statistics scale (Wise), who claimed that many of the Statistics Attitude Survey’s items were inappropriate because they were measuring statistics achievement rather than attitudes (Roberts & Reese, 1987). While there is much overlap among all of these attitude instruments, as inferred from the positive correlations identified in validation studies (Cashin & Elmore, 2005; Schau et al., 1995), the most popular instrument in the literature appears to be the Survey of Attitudes Toward Statistics, which has been used in numerous research studies since its original development (Schau, 2005b).
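An adjustment for measurement error of the kind reported above is typically computed with Spearman’s correction for attenuation, which divides the observed correlation by the square root of the product of the two instruments’ reliabilities. The sketch below illustrates the arithmetic only; the reliability values of 0.92 are assumptions chosen so the numbers line up with the reported figures, not the values actually used by Roberts and Reese (1987).

```python
import math

def correct_for_attenuation(r_observed, rel_x, rel_y):
    # Spearman's correction for attenuation: an observed correlation is
    # pulled toward zero by measurement error, so dividing by the square
    # root of the product of the two reliabilities estimates the
    # correlation between the error-free constructs.
    return r_observed / math.sqrt(rel_x * rel_y)

# Illustrative reliabilities (0.92 each is an assumption for this demo).
print(round(correct_for_attenuation(0.88, 0.92, 0.92), 2))  # 0.96
```

The correction shows how an observed r of .88 can imply a near-unity correlation between the underlying constructs once unreliability in both measures is taken into account.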
The Statistics Anxiety Rating Scale (STARS) is different from the aforementioned instruments as it focuses more clearly on anxiety toward statistics. It has been cited as the most frequently used measure of statistics anxiety. The full survey consists of 51 items rated on 5-point scales (Papousek et al., 2012). It is estimated that the entire scale can be completed in about 15 minutes (Schneider, 2011). The 51 items load onto 6 subscales: (1) test and class anxiety, (2) interpretation anxiety, (3) fear of asking for help, (4) self-concept, (5) worth of statistics, and (6) fear of statistics teachers. Subscales 1 through 3 can be labeled “STARS-Anxiety” and subscales 4 through 6 “STARS-Attitudes” (Papousek et al., 2012). Some research studies have chosen to use a subset of the subscales; for example, DeVaney (2010) used only the first three subscales, which focus on anxiety. As with the SATS, the STARS has been translated and studied in various languages, such as German (Papousek et al., 2012). Its psychometric properties have also been confirmed using a sample of students from the United Kingdom (Hanna, Shevlin, & Dempster, 2008).
Impact of Affective Factors
Attitudes
Students’ attitudes and beliefs concerning statistics are formed in a number of ways before they ever set foot in my classroom. Many graduate-level students have taken a required statistics course as undergraduates, and their experiences there may heavily influence their predetermined beliefs about their graduate-level course. Other experiences also influence perceptions, such as what other students have said. A study that examined the preconceptions of high school students with no previous formal experience with statistics found that many associated statistics with numbers, graphs, and descriptive
statistics such as those frequently reported in sports, for example batting average in baseball (Gal & Ginsburg, 1994, as cited by Gal et al., 1997). Other students believe that statistics is a branch of mathematics, and their attitudes toward mathematics generalize to statistics (Gal et al., 1997). Other researchers have shown that students’ attitudes toward statistics are not entirely negative. A study of undergraduate students in a business statistics course used the Survey of Attitudes Toward Statistics and obtained results showing more favorable attitudes toward statistics than other similar studies have found. The researchers found a positive relationship between students’ prior experiences with statistics and their attitudes; students with some prior knowledge had more positive attitudes. A positive relationship was also found between confidence and attitudes; students who were confident that they would succeed in the statistics course had more positive attitudes (Mills, 2004). The aforementioned study by Mills (2004) and others have examined the impact of gender on attitudes toward statistics and have found mixed results in terms of statistical significance. Even the statistically significant results tend to have low effect sizes. The conflicting findings should be examined in greater detail. There may be an interaction between gender and another variable, such as academic major, that impacts the relationship between gender and attitude. Or there may be measurement-related issues, or simply the impact of sample size on statistical analyses. Mills’ study did find a difference between males and females; females had more negative attitudes. A study in Germany used a translated form of the Survey of Attitudes Toward Statistics and also found gender differences in the attitudes of first-year undergraduate students enrolled in a mandatory introductory statistics
course; females had more negative attitudes (Strobl et al., 2010). Similar results were observed with the Statistics Attitude Scale; correlations between both pre- and post-test scores and gender were statistically significant (pretest r = .26, posttest r = .19), which corresponds to a very low effect size of approximately r2 = .04, signifying that about 4% of the variance in attitude scores is shared with variance in gender (Roberts & Saxe, 1982). Other studies have not found gender differences; for example, Cashin and Elmore (2005) found much lower correlations between gender and the subscales of the SATS, ranging from r = .02 to r = .11, which indicate extremely low effect sizes (r2 = .0004 to r2 = .0121) and virtually no practical significance.
Anxiety
Research in the field of psychology has examined the relationship between anxiety and performance. Most notably, the Yerkes-Dodson Law depicts this relationship as an inverted U-shaped curve (Teigen, 1994). Though originally developed using animal research (see Broadhurst, 1957; Yerkes & Dodson, 1908), it is also widely applied to human learning. As depicted in Figure 4, the relationship between arousal level (in this case anxiety) and performance is moderated by the complexity of the task. Subfigures (b) and (c) are of primary interest here. They show that when a task is simple, the optimal level of anxiety is higher than when a task is difficult. When a task is easy, anxiety may actually improve performance. When a task is difficult, a high level of anxiety is debilitating, as it may restrict what the learner can attend to (Hanoch & Vitouch, 2004; Teigen, 1994). More recently, the Yerkes-Dodson Law has been expanded upon, though its primary tenets as applied in this setting remain constant (Hanoch & Vitouch, 2004).
Figure 4: Yerkes-Dodson Law (Teigen, 1994)
Research on gender differences in anxiety also tends to find little practical significance. A study in the United States used the Statistics Anxiety Rating Scale and found statistically significant gender differences, but measures of practical significance were negligible (Rodarte-Luna & Sherry, 2008). A study of undergraduate students, also using the Statistics Anxiety Rating Scale, did not find gender differences, nor did it find any differences between ethnic groups or students of different ages: traditional (18-24 years old) versus non-traditional (25+ years old) (Bui & Alfaro, 2011). A study of students in a business
statistics course did find differences between traditional and non-traditional students in their subscale scores on the Statistics Anxiety Rating Scale. Non-traditional students had consistently higher levels of anxiety, though only the results on the Test and Class Anxiety subscale approached statistical significance (t(119) = 1.36, p = .089). Measures of effect size were neither given nor calculable due to a lack of data concerning variability (Bell, 2003). A study that measured anxiety using a single item did find statistically significant gender differences and a relatively high effect size. Students rated their anxiety on an 11-point scale from “not at all anxious” to “extremely anxious.” Females gave significantly higher ratings, indicating higher levels of anxiety (t(54) = 2.77, with a large effect size, d > 0.8); the correlation between gender and anxiety was r = .35 (thus r2 = .1225). However, the study also examined gender differences in performance and found no significant difference; if anything, females consistently outperformed males (Bradley & Wygant, 1998). These findings may be due to the impact of the research design: a single, universal question may measure perception directly, while the use of multiple, less direct questions may measure the construct in a different way. Some of the fears of students may be rooted in their apprehensions concerning mathematics (Onwuegbuzie & Wilson, 2003). As previously discussed, the focus of statistics is not on mathematics but rather on conceptual understandings of statistical concepts (see G. Cobb & Moore, 1997; D. S. Moore, 1988; Pfannkuch & Wild, 2004). Informing students that the focus is not on mathematics may alleviate some of their worries. The first day of class is a good time to get a sense of students’ fears and work toward alleviating them (Jacobs,
1988). Some professors suggest that students approach introductory statistics more like they would approach a language course than a mathematics course (Hastings, 1988).
Efficacy
In addition to students’ attitudes and anxieties toward statistics, their self-efficacy in relation to their statistics abilities is related to their performance in their statistics course (Finney & Schraw, 2003). A study of undergraduate psychology students at an Australian university found that the majority of students did not want to enroll in a statistics course but did so because it was a requirement; the students who did willingly enroll had higher grades in the course than those who had negative initial attitudes (Gordon, 2004). Some of this correlation may be due to the accuracy of students’ efficacy beliefs. Psychological research has shown that expectations impact behaviors whether they are accurate or inaccurate (Bandura, 1977, 1978, 1986). If instructors can improve students’ expectations, they may in turn improve their attitudes and performance.
Summary of Affective Influences
To summarize the discussion of the impact of affect in statistics courses, instructors should be aware of the impact that anxiety and attitudes can have on student performance. There are numerous quick measures of attitudes and anxiety that are appropriate for classroom use. Some anxiety is helpful and can improve performance, but too much anxiety may be detrimental (Teigen, 1994). Students with unhealthy levels of anxiety may benefit from counseling with a trained professional psychologist.
Cognitive Considerations
The research-based principles of cognitive load theory can be used to guide the creation of appropriate instructional materials. “Cognitive load theory suggests that effective instructional material facilitates learning by directing cognitive resources toward activities that are relevant to learning rather than towards preliminaries to learning” (Chandler & Sweller, 1991, p. 293). Many of the concepts of cognitive load theory discussed earlier in this paper are also applicable to statistics instruction. The use of examples has been examined from a variety of angles. First, cognitive load theory principles will be applied to the use of examples in an introductory statistics course. Then, the need for examples using real-life data will be discussed. When using examples, the initial examples should be simple (Gelman & Nolan, 2002). Statistics textbooks and lectures typically include fully worked examples, and research has shown that students prefer to study with worked examples (Atkinson, Catrambone, & Merrill, 2003; Chew, 2007). The examples developed and chosen for use in class should be selected carefully, with all aspects of the examples and the needs of the students in mind. The reasoning and rationale behind examples should be explicitly available to students (Chew, 2007). Experts also suggest that the use of real data aids student learning (Ben-Zvi & Garfield, 2004b; DelMas, 2004; D. S. Moore, 1997). Statistics is not merely concerned with numbers and mathematical functions; rather, statistics exist within a context (G. Cobb & Moore, 1997). Students who learn using contextually relevant examples, that is, those related to
their own field of study, show greater long-term transfer (Godfrey, 1988). Using real data reinforces the importance of learning statistics, shows how statistics is relevant in different disciplines, and presents real-life applications of the statistical material being learned (Ballman, 2000). Having students work with data that they have collected themselves also promotes the development of statistical reasoning skills (DelMas, 2004). Gelman and Nolan's (2002) book Teaching Statistics: A Bag of Tricks includes numerous ideas for activities that involve using real data; they argue that this helps students learn as they become more actively involved. The use of computer software to perform statistical operations has also been a source of debate. It is necessary for students to be able to use statistical software in order to be efficient and successful in real life. However, the use of such software may impact their understanding of the underlying statistical concepts. Thus, many experts argue that students should be taught using conceptual formulas before software is introduced (D. S. Moore, 1997). Problem-solving research is also highly applicable (Atkinson et al., 2003; Catrambone, 1994). Researchers have examined how students use worked examples and have found that oftentimes they simply memorize the steps used in the examples (Catrambone, 1994). Students who focus on the conditions of the examples tend to learn better; looking beyond the surface features of a problem is extremely helpful when learners are later asked to transfer what they have learned to a new and different problem (Chi, Bassok, Lewis, Reimann, & Glaser, 1989). Learners who focus on subgoals within examples and try to understand why each step is taken also typically have better rates of transfer. Examples can
be designed to emphasize the important features of a problem to aid student learning (Catrambone, 1994). Problem-solving research in the discipline of statistics shows that using conceptual formulas aids transfer better than using computationally simpler methods. Students who learn with conceptual formulas perform better on tasks involving near and far transfer, and they are also better at explaining their processes (Atkinson et al., 2003).
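The distinction between conceptual and computationally simpler formulas can be illustrated with the sample variance. The sketch below (the data values are invented for illustration) contrasts the deviation-score formula, which makes the idea of spread around the mean visible, with the algebraically equivalent raw-score formula, which is easier to compute by hand but obscures the concept.

```python
def variance_conceptual(xs):
    # Deviation-score ("conceptual") formula: the sum of squared
    # deviations from the mean, divided by n - 1. The idea of spread
    # around the mean is visible in the computation itself.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def variance_computational(xs):
    # Raw-score ("computational") formula: algebraically equivalent,
    # requiring only running sums, but it hides the notion of
    # deviation from the mean.
    n = len(xs)
    sum_x = sum(xs)
    sum_x2 = sum(x * x for x in xs)
    return (sum_x2 - sum_x ** 2 / n) / (n - 1)

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(variance_conceptual(data))    # both formulas print the same value
print(variance_computational(data))
```

The two functions always agree; the research cited above suggests that students who learn with the first form transfer their understanding better, even though the second is less work by hand.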
Other Considerations
The primary focus of statistics instruction in this paper is on the more traditional and currently widespread methods. Recent years have seen the introduction of the randomization-based method of introductory statistics instruction, in contrast to the consensus methods. Research on randomization-based instructional methods is relatively new; the randomization-based method has been shown to improve long-term retention without any negative impact on other types of learning (Tintle, Topliff, Vanderstoep, Holmes, & Swanson, 2012). It is also important to mention that, though not discussed in greater detail here because it is worthy of an entire paper itself, an often difficult part of instructional design is the development of assessment materials. Merely assessing students’ abilities to perform the necessary mathematical computations is not enough, as it does not assess their overall understanding of the content. “The challenge faced by all educators involved in statistics education is to identify assessment methods that are able to elicit and reveal student learning” (Gal & Garfield, 1997, p. 7). While research has shown that students prefer assessments that are untimed and allow for the use of some supporting materials, such as open-note exams or
the use of “cheat sheets” (Onwuegbuzie, 2000), it is the responsibility of instructors and instructional designers to select assessments that are in line with the objectives of the course. For more discussion on assessment in statistics courses, I recommend the book The Assessment Challenge in Statistics Education, edited by Gal and Garfield (1997b).
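The randomization-based method mentioned above replaces reliance on theoretical sampling distributions with direct resampling: if a treatment made no difference, the group labels are arbitrary, so reshuffling them generates the null distribution of the test statistic. The following is a minimal two-sample permutation test sketch (the exam-score data are invented for illustration, not taken from any cited study):

```python
import random
import statistics

def permutation_test(group_a, group_b, reps=10000, seed=3):
    # Randomization-based inference: under the null hypothesis the
    # group labels carry no information, so shuffling the pooled data
    # and re-splitting it generates the null distribution of the
    # difference in means directly, with no distributional assumptions.
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / reps  # two-sided p-value

scores_a = [78, 85, 90, 72, 88, 81]  # hypothetical exam scores
scores_b = [70, 74, 80, 68, 77, 73]
print(permutation_test(scores_a, scores_b))
```

Because the logic is "shuffle and see how often chance alone produces a difference this large," the method lets students reason about inference directly from data, which is the pedagogical appeal reported in the randomization-based curriculum research.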
Summary
The research presented in this chapter was primarily derived with the face-to-face learning environment in mind. However, these concepts are also applicable to teaching statistics in the online setting. The following is a summary of the key issues that instructional designers and instructors of online statistics courses should be aware of:
Emphasize the key underlying principles (e.g., variation, central tendency, probability theory)
Assess the attitudes and anxiety levels of students and address any issues identified
Reduce unnecessary anxiety by reducing transactional distance
Keep cognitive load theory and other educational research in mind when developing examples and other course materials
Use real data when possible (Ben-Zvi & Garfield, 2004b; DelMas, 2004; D. S. Moore, 1997)
Chapter 4: Review of Literature in Online Statistics Education “The most effective teachers will have a substantial knowledge of pedagogy and technology, as well as comprehensive knowledge about and experience applying the content they present.” (D. S. Moore, 1997, p. 134)
Moving from the traditional face-to-face setting to an online setting should be done with care. From discussions in both online education and statistics education, it is generally accepted that technology should be used to meet the needs of the content (D. S. Moore, 1997). Instructional methods should be chosen with the needs of the course content at the forefront. Research specifically on online statistics education is relatively limited. Searches for “online” and “Internet” in the Statistics Education Research Journal returned zero results; a search for “distance” returned only one result (“SERJ - Statistics Education Research Journal,” 2010). A thorough review of the literature from 1999-2009 on statistics courses taught at a distance came to overall positive conclusions. The reviewers noted that most of the literature on online education in other disciplines can also be applied to online statistics education, and in their conclusion they made a plea for future research, both quantitative and qualitative, utilizing strong research methods (Mills & Raju, 2011).
Affect Research

Earlier, the impact of psycho-emotional factors on students’ perceptions and performance in online environments and in statistics courses was discussed. Many students experience a great deal of anxiety associated with taking an online course and with
taking an introductory statistics course. The online setting may lead to feelings of isolation (McInnerney & Roberts, 2004). It also requires different types of study skills in order to be successful; for example, a high level of motivation and good time management skills (Hannafin et al., 2003; Stavredes, 2011; Yukselturk & Yildirim, 2008). Statistics is a subject that many students fear (Dillon, 1988; Dunn et al., 2007). In this section, research studies and other publications that highlight the impact of affective factors in online statistics courses are reviewed. Some of the instruments for measuring attitudes and anxiety in statistics students discussed in Chapter 3 have been applied to students in the online setting. One study used the Survey of Attitudes Towards Statistics (SATS) and a revised form of the Statistics Anxiety Rating Scale to compare students in online and face-to-face sections of a graduate-level educational statistics course. Students in the study were primarily adults, with an average age in the early 30s. Students in the face-to-face group tended to have more positive attitudes towards the class at the beginning of the semester; however, online students were more likely to have improved attitudes from the beginning to the end of the course (DeVaney, 2010). Similar findings were obtained in a study in Israel that used a two-factor measure: (1) motivation and satisfaction and (2) teaching and learning. Students in an online section of the course had consistently lower mean ratings than those in a face-to-face section, but, again, students in the online section saw a greater increase from the beginning to the end of the course (Katz & Yablon, 2003). A research study from Thailand compared students taking an online business statistics course to those taking the course in a traditional face-to-face setting. Measures of
affect, cognitive competence, value, and easiness were taken before and after the course. Students in the two groups were comparable before the beginning of the course. At the end of the 16-week course, students in the online course improved on each subscale while those in the traditional course stayed the same or declined. The online group significantly outperformed the traditional group in terms of affect, cognitive competence, and value; however, differences in terms of perceptions of easiness were not significant. Through their use of quantitative and qualitative methods, the researchers concluded that the online students had more positive attitudes towards statistics overall (Suanpang, Petocz, & Kalceff, 2004). That particular study was conducted abroad; thus, results may be different when considering American students.
Achievement Research

The quality of online courses has been the subject of many debates. Numerous empirical studies have been performed on online statistics courses that address quality, typically through the examination of student performance. A study of a biostatistics course compared students taking the course on campus, as a hybrid course, and completely online. Both undergraduate and graduate students were included in the study, and no significant differences were found between students in the three conditions in terms of their final grades, exam grades, homework grades, or project grades (Evans et al., 2007). A similar study of first-year undergraduate students from a university in Israel compared students who completed a mandatory introductory statistics course online versus on campus. Again, no significant differences were found between the
final exam scores of students in the two conditions (Katz & Yablon, 2003). This is consistent with empirical studies reviewed by other researchers (Tudor, 2006). Similar to the studies of Evans et al. (2007) and Katz and Yablon (2003), a study of students taking courses entitled “Applied Statistics for Industry I” and “Applied Statistics for Industry II” did not find differences between the final letter grades of on- and off-campus students. The majority of students were graduate-level students. The course, offered at Iowa State University, was an introductory-level course that focused on the practical aspects of statistics. Video was the primary mode of instruction. The author did note a difference between the two groups in terms of their class projects: off-campus students’ projects were often related to their work and thus were more interesting (Stephenson, 2001). Contrary to some of the aforementioned studies, a study of students in a business statistics course found that students in the online section outperformed those in the face-to-face section. The study included 159 students who were enrolled in a traditional lecture section of the course on campus and 48 students who were enrolled in an online section of the course. In an attempt to make the two sections as equivalent as possible, both were taught by the same instructor. Because students were able to select the section of the course they enrolled in, characteristics of the students were analyzed. There were no differences between the two sections in terms of the number of semester hours enrolled, experience with the online platform (WebAssign), commuting status, or employment status. However, students in the online section worked more hours on average, were more likely to be female, had higher GPAs, were older, and were more likely to have experience as an online
student (Dutton & Dutton, 2005). This emphasizes the differences between the student populations who tend to enroll in online and face-to-face courses and is in line with what is known about adult learners, as discussed in Chapters 1 and 2.
Instructional Considerations

In addition to the empirical studies previously discussed, literature related to the instructional design and instruction of online statistics courses was also reviewed. Suggestions from empirical studies are also discussed here in an attempt to synthesize what is known about effective techniques for teaching statistics in an online setting. James Madison University has developed a model for designing an online introductory statistics course with underpinnings in both information processing theory and constructivism. The model incorporates seven principles of teaching undergraduate students:
1. Encouraging student-faculty interaction
2. Encouraging cooperation among students
3. Encouraging active learning
4. Giving prompt feedback
5. Emphasizing time on task
6. Communicating high expectations
7. Respecting diverse talents and ways of learning (Harris, Mazoué, Hamdan, & Casiple, 2007)
James Madison University has also applied the Just-in-Time Teaching (JiTT) method. In a face-to-face course, JiTT asks students to respond to questions online before coming to
class. This gives the instructor information about the students’ levels of understanding before class and allows the lesson to be customized with this in mind (Benedict & Anderton, 2004; Harris et al., 2007). JiTT techniques could also be applied to a completely online course. This would give instructors a means to identify students’ misconceptions before formal assessments. Instructional designers and online instructors should also take into account some of the common issues that students in online statistics courses face. Living away from campus may make it more difficult for a student to obtain the necessary materials. Textbooks typically need to be purchased online, so it is important to give students ample notice concerning the textbooks or other materials that they will need to purchase before the course begins. Not having access to campus computer labs also means that students need to purchase the appropriate statistical software; again, students need ample time to purchase software (Evans et al., 2007). The cost of the software that students are required to purchase should be taken into account before the course is designed. If possible, lower-priced software should be selected, especially in an introductory statistics course where students will not use the extra features of more expensive software. Some universities are making it possible for students to access statistical software from off-campus locations. For example, The Pennsylvania State University has made Minitab and SAS, along with other software, available to students off campus through its Remote Applications Service (“Remote application services,” 2012). Other universities make student versions of software available for purchase at a lower cost (Stephenson, 2001). Instructors should be aware of
what is available at their university and share that information with students before the course begins. Also in planning an online statistics course, instructors should allocate a sufficient amount of time for working with their students during the course. Because the primary form of communication is email, instructors should anticipate more emails from their online students than from their face-to-face students (Evans et al., 2007). Clarity of communication, in both instructor communications and the design of the course, is extremely important in the online setting (Sloboda, 2005). Having high-quality teaching assistants can help the primary instructor greatly by reducing workload. Surveys collected from students at the end of statistics courses have shown that they desire interaction with the instructor(s) and feedback on their assignments (Tudor, 2006). The use of discussion boards is encouraged to promote interaction. While it may be difficult, discussion board questions should be written that require students to engage in higher-level learning. Answers to the questions should not simply be found in the textbook or lecture notes, but should require a higher level of synthesis. Sloboda (2005) provides the following examples of appropriate discussion board questions: “1. Give an example of where regression analysis can be used. How is regression analysis being used in your workplace, or how should it be used? Can you think of examples specifically related to strategy formulation and implementation? 2. What is correlation analysis? How can correlation analysis be used in a business decision or examples specifically related to strategy formulation and
implementation? How can correlation analysis be misused to explain a cause-and-effect relationship?” (p. 5). In addition to contributing to students’ knowledge acquisition, discussion board interactions can also increase students’ perceptions of presence as related to the community of inquiry model of online learning (see “Community of Inquiry,” 2011; Garrison & Archer, 2007).
Conclusions

It may be necessary for instructors of online statistics courses to consult research in other disciplines due to the limited availability of research specifically on online statistics education. In a review of online statistics literature, Mills and Raju (2011) note that it is appropriate to apply research from other disciplines. The review of literature specifically on teaching statistics online uncovered a number of interesting findings that instructors should be aware of:
Students in online statistics courses tend to have poorer attitudes at the beginning of the course than students taking a statistics course in a face-to-face setting (DeVaney, 2010; Katz & Yablon, 2003)
There are cultural differences in terms of students’ attitudes (for example, Suanpang et al., 2004)
Students in face-to-face statistics courses do not generally outperform those in online courses (Evans et al., 2007; Katz & Yablon, 2003; Tudor, 2006)
There are differences between the students who elect to enroll in online courses and those who take courses on campus (Dutton & Dutton, 2005)
Online students may not have access to all of the resources that on-campus students do; consider this when requiring students to purchase expensive software and textbooks (Evans et al., 2007; Stephenson, 2001)
It is possible to incorporate discussion boards into an online statistics course; however, well-designed questions are required to encourage interaction (Sloboda, 2005)
Chapter 5: Final Discussion

A recurring theme in the review of literature was the applicability of the community of inquiry model and the theory of transactional distance. As a student who has taken online statistics courses, I could easily apply these models to my personal experiences as well. Future research should apply the community of inquiry model to online statistics courses. This is of particular interest because the nature of statistics courses is quite different from that of courses in the liberal arts, humanities, or social sciences; this is in line with suggestions made by experts in the field (see Arbaugh et al., 2010; Garrison & Arbaugh, 2007). The need to acknowledge the characteristics of the content was addressed in terms of both teaching online and teaching statistics. More specifically, cognitive load theory was applied in a number of ways. The technology selected for use in an online statistics course should take into account the complexity of the material and the learning objectives of the lesson. If a new, complicated type of technology is being introduced, the content should be familiar to students to avoid cognitive overload. Cognitive load should also be taken into account when creating course materials (see Paas et al., 2010; Sweller, 2005). Audio and video should be used with caution (Clark, 2005). There is a great deal of research related to the creation of examples. Experts in statistics education suggest that examples should use real-life data (Ben-Zvi & Garfield, 2004b; DelMas, 2004; D. S. Moore, 1997). Another theme was the importance of affective factors. Related to the community of inquiry model, isolation occurs when there is a lack of social presence (“Community of Inquiry,” 2011; Garrison & Archer, 2007; Liu, 2007). Statistics courses do not often require
the level of interaction that courses in other disciplines do due to the nature of the content. This may lead to feelings of isolation among students. In Chapter 3, student-student and student-instructor interactions were discussed. Chapter 4 included examples of good discussion board questions that could be used to prompt student interaction. The application of psychological constructs, such as attitudes and anxiety, to the specific area of statistics education is of great interest to me. There are a number of instruments available to measure both statistics attitudes and statistics anxiety. The most popular instruments seem to be the Survey of Attitudes Toward Statistics (Schau, 2003, 2005a, 2005b) and the Statistics Anxiety Rating Scale (Cruise, Cash, & Bolton, 1985, as cited by Papousek et al., 2012). The discussion of affect in face-to-face courses in Chapter 3 is also applicable to the online setting. While some anxiety has a positive impact on performance, too much anxiety can be debilitating (Teigen, 1994). Online statistics instructors should acknowledge that their students likely have some negative attitudes and anxieties towards both statistics and online learning. In the online environment, instructors cannot see their students; they must use other methods to gauge their students’ attitudes and anxieties, and they may need to be more explicit in these assessments. I have proposed a study for my dissertation that will address some of the issues that I have uncovered related to online statistics education. I plan to further explore the impact of attitudes and anxiety on learning. Psychometric properties of the SATS instrument will be examined in a new way. Comparisons will be made between undergraduate and graduate students as well as online and on-campus students. A criticism of previous studies is that
they tend to focus on more specific populations, such as undergraduate education majors or MBA students (Arbaugh et al., 2010; Garrison & Arbaugh, 2007). By using introductory courses taught through the Department of Statistics, students from a variety of majors will be included in the study.
Plans for Dissertation Research

This review of the literature on online and statistics education has helped me to discover new areas of interest and to narrow down my dissertation research. My dissertation research will continue to focus on statistics education and the implications of teaching and learning online. It will also examine the application of psychological constructs such as attitudes and anxiety.

Research Design

Attitudes and anxiety levels in undergraduate and graduate students completing introductory statistics courses online and on campus will be compared in a 2x2 factorial design. Undergraduate-level participants will be students in STAT200. Graduate-level participants will be students in STAT500. On-campus students will be enrolled in courses at the University Park Campus. Online students will be enrolled through World Campus or in an online section of the course offered through University Park. Instrumentation will include the Survey of Attitudes Towards Statistics (SATS; Schau et al., 1995) and a subset of items from the Statistics Anxiety Rating Scale (STARS). The longer version of the SATS, with an assumed six-factor structure, will be used (SATS-36). The demographic questionnaire that accompanies the SATS will also be administered (see Schau, 2003). The full version of the STARS consists of 51 items on six subscales. For the study in
question, only two subscales of the STARS will be used: (1) test and class anxiety and (2) interpretation anxiety. This will be done to keep the length of the assessment to a minimum and to reduce respondent fatigue. A pre-test-only design will be used. Students will have the option to voluntarily participate in the research study. Their anonymity will be assured. They will be informed that their participation in the study will help to improve their experience as well as the experience of other students like them who will be enrolled in introductory statistics courses in the future. At the discretion of the course instructors, extra credit may be given to students who participate in the study.

Analyses

Analyses will focus on measurement issues and on research questions related to the application of the constructs in real-life teaching situations. In terms of measurement issues, the underlying structure of the instruments will be compared for students on campus and online as well as for undergraduate- and graduate-level students. Previous measurement studies will provide guidance. VanHoof et al. (2011) used Robust Maximum Likelihood (RML) techniques when examining scores from the SATS-36 because items are measured on an ordinal-level scale; more specifically, they chose to use RML for ordinal data with polychoric correlations and RML with covariances. The present study will replicate these procedures. Polychoric correlations are used to estimate the underlying continuous variables, which are then used with confirmatory factor analysis techniques to examine the structure. The covariance matrix procedures with Satorra-Bentler adjustments will be used just as they were by
VanHoof et al. (2011). These methods will be used in conjunction with the categorical level (undergraduate versus graduate) and medium (online versus on campus) variables to confirm that the six-factor structure is consistent across all conditions. In addition to the psychometric analyses, comparisons will be made using analysis of variance (ANOVA) techniques. Factorial ANOVA or multivariate analysis of variance (MANOVA) techniques will be used to test for differences between students of different academic levels and for students taking the course via the different mediums. Correlational and regression techniques will be used to examine the relationships between the SATS-36 subscales, STARS subscales, and demographic information.
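The planned 2x2 group comparison can be sketched in code. The following is a minimal illustration only, not the study's actual analysis: the variable names (`level`, `medium`, `sats_affect`), the cell size, and the simulated scores are all hypothetical stand-ins for real SATS-36 subscale data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated scores for illustration only; the real study would use
# observed SATS-36 subscale scores from STAT200/STAT500 students.
rng = np.random.default_rng(42)
n_per_cell = 50  # hypothetical cell size

df = pd.DataFrame({
    "level": np.repeat(["undergraduate", "graduate"], 2 * n_per_cell),
    "medium": np.tile(np.repeat(["online", "campus"], n_per_cell), 2),
})
df["sats_affect"] = rng.normal(loc=4.5, scale=1.0, size=len(df))

# 2x2 factorial ANOVA: main effects of academic level and medium,
# plus their interaction.
model = smf.ols("sats_affect ~ C(level) * C(medium)", data=df).fit()
table = anova_lm(model, typ=2)  # Type II sums of squares
print(table)
```

The `C(level) * C(medium)` formula expands to both main effects and the interaction term. An analogous multivariate comparison over several subscales at once could be fit with `statsmodels.multivariate.manova.MANOVA.from_formula`.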
Final Conclusions

I began to write this paper with multiple purposes. As an instructor, I wanted to improve my practice. Similarly, I wanted to create something that could help other instructors. Finally, I wanted to use this project as a first step towards my dissertation research. I feel that I was successful in reaching all of these goals. While preparing this paper I read many books and articles about online education and statistics education. I now feel well-versed in both of these areas as a result of this experience. This will help me in the future as an instructor and as I prepare my dissertation. I hope to continue my research in online education and statistics education after graduation. Through the publication of related research, the work that I have done and will do will reach the instructional designers and instructors who can then implement its conclusions in their work.
References

Allen, I. E., & Seaman, J. (2011). Going the distance: Online education in the United States, 2011. Babson Park, MA. Retrieved from http://www.onlinelearningsurvey.com/reports/goingthedistance.pdf Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. American Journal of Distance Education, 16(2), 83–97. Anderson, B. (2007). Independent learning. In M. G. Moore (Ed.), Handbook of Distance Education (2nd ed., pp. 109–122). Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Arbaugh, J. B. (2008). Does the community of inquiry framework predict outcomes in online MBA courses? The International Review of Research in Open and Distance Learning, 9(2). Retrieved from www.irrodl.org/ Arbaugh, J. B., Bangert, A., & Cleveland-Innes, M. (2010). Subject matter effects and the community of inquiry (CoI) framework: An exploratory study. Internet and Higher Education, 13, 37–44. Arbaugh, J. B., Cleveland-Innes, M., Díaz, S. R., Garrison, D. R., Ice, P., Richardson, J. C., & Swan, K. (2008). Developing a community of inquiry instrument: Testing a measure of the community of inquiry framework using a multi-institutional sample. Internet and Higher Education, 11, 133–136. Archer, W. (2010). Beyond online discussions: Extending the community of inquiry framework to entire courses. Internet and Higher Education, 13, 69.
Artino, A. R., & Jones, K. D. (2012). Exploring the complex relations between achievement emotions and self-regulated learning behaviors in online learning. Internet and Higher Education, 15(3), 170–175. Ashaari, N. S., Judi, H. M., Mohamed, H., & Wook, T. M. T. (2011). Student’s attitude towards statistics course. Procedia Social and Behavioral Sciences, 18, 287–294. Atkinson, R. K., Catrambone, R., & Merrill, M. M. (2003). Aiding transfer in statistics: Examining the use of conceptually oriented equations and elaborations during subgoal learning. Journal of Educational Psychology, 95(4), 762–773. Ayres, P., & Paas, F. (2007). Making instructional animations more effective: A cognitive load approach. Applied Cognitive Psychology, 21, 695–700. Bach, S., Haynes, P., & Lewis Smith, J. (2007). Online learning and teaching in higher education. New York: Open University Press. Bakker, A. (2004). Reasoning about shape as a pattern in variability. Statistics Education Research Journal, 3(2). Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ3(2)_Bakker.pdf Ballman, K. (1997). Greater emphasis on variation in an introductory statistics course. Journal of Statistics Education, 5(2). Retrieved from http://www.amstat.org/publications/jse/v5n2/ballman.html Ballman, K. (2000). Real data in classroom examples. In T. L. Moore (Ed.), Teaching statistics: Resources for undergraduate instructors (pp. 11–18). Washington, DC: Mathematical Association of America.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavior change. Psychological Review, 84(2), 191–215. Bandura, A. (1978). Reflections on self-efficacy. Advances in Behavioral Research and Therapy, 1, 237–269. Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall, Inc. Bell, J. (2003). Statistics anxiety: The nontraditional student. Education, 124(1), 157–162. Ben-Zvi, D., & Garfield, J. B. (2004a). Research on reasoning about variability: A forward. Statistics Education Research Journal, 3(2). Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ3(2)_forward.pdf Ben-Zvi, D., & Garfield, J. B. (2004b). Statistical literacy, reasoning, and thinking: Goals, definitions, and challenges. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 3–15). Dordrecht, The Netherlands: Kluwer Academic Publishers. Benedict, J. O., & Anderton, J. B. (2004). Applying the Just-in-Time Teaching approach to teaching statistics. Teaching of Psychology, 31, 197–199. Black, A. (2005). The use of asynchronous discussion: Creating a text of talk. Contemporary Issues in Technology and Teacher Education, 5(1), 5–24. Retrieved from http://www.citejournal.org/vol5/iss1/languagearts/article1.cfm Blake, N. (2000). Tutors and students without faces or places. In N. Blake & P. Standish (Eds.), Enquiries at the interface: Philosophical problems of online education (pp. 201–216). Malden, MA: Blackwell Publishers.
Bolliger, D. U., & Halupa, C. (2012). Student perceptions of satisfaction and anxiety in an online doctoral program. Distance Education, 33(1), 81–98. Borup, J., West, R. E., & Graham, C. R. (2012). Improving online social presence through asynchronous video. Internet and Higher Education, 15(3), 195–203. Bradley, D. R., & Wygant, C. R. (1998). Male and female differences in anxiety about statistics are not reflected in performance. Psychological Reports, 82, 245–246. Broadhurst, P. L. (1957). Emotionality and the Yerkes-Dodson Law. Journal of Experimental Psychology, 54(5), 345–352. Brünken, R., Plass, J. L., & Leutner, D. (2004). Assessment of cognitive load in multimedia learning with dual-task methodology: Auditory load and modality effects. Instructional Science, 32, 115–132. Brünken, R., Plass, J. L., & Moreno, R. (2010). Current issues and open questions in cognitive load research. In J. L. Plass, R. Moreno, & R. Brünken (Eds.), Cognitive Load Theory (pp. 253–272). New York: Cambridge University Press. Bui, N. H., & Alfaro, M. A. (2011). Statistics anxiety and science attitudes: Age, gender, and ethnicity factors. College Student Journal, 45(3), 573–585. Cashin, S. E., & Elmore, P. B. (2005). The Survey of Attitudes Towards Statistics Scale: A construct validity study. Educational and Psychological Measurement, 65(3), 509–524. Catrambone, R. (1994). Improving examples to improve transfer to novel problems. Memory & Cognition, 22(5), 606–615. Cejda, B. (2010). Online Education in Community Colleges. In R. L. Garza Mitchell (Ed.), Online Education (pp. 7–16). Hoboken, NJ: Wiley Periodicals, Inc.
Chan, S. (2010). Applications of andragogy in multi-disciplined teaching and learning. Journal of Adult Education, 39(2), 25–35. Chance, B., DelMas, R. C., & Garfield, J. B. (2004). Reasoning about sampling distributions. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 295–323). Dordrecht, The Netherlands: Kluwer Academic Publishers. Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. Chew, S. L. (2007). Designing effective examples and problems for teaching statistics. In D. S. Dunn, R. A. Smith, & B. C. Beins (Eds.), Best practices for teaching statistics and research methods in the behavioral sciences (pp. 73–91). Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13(2), 145–182. Chiesi, F., & Primi, C. (2009). Assessing statistics attitudes among college students: Psychometric properties of the Italian version of the “Survey of Attitudes toward Statistics (SATS).” Learning and Individual Differences, 19, 309–313. Clark, R. C. (2005). Multimedia learning in e-courses. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 589–616). New York: Cambridge University Press.
Cobb, G., & Moore, D. S. (1997). Mathematics, statistics, and teaching. The American Mathematical Monthly, 104(9), 801–823. Cobb, P., & McClain, K. (2004). Principles of instructional design for supporting the development of students’ statistical reasoning. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 375–395). Dordrecht, The Netherlands: Kluwer Academic Publishers. Coladarci, T., Cobb, C. D., Minium, E. W., & Clarke, R. B. (2011). Fundamentals of statistical reasoning in education (3rd ed.). Hoboken, NJ: John Wiley & Sons, Ltd. "Community of Inquiry." (2011). Retrieved from http://communitiesofinquiry.com/ Conceição, S. C. O. (2007). Understanding the environment for online teaching. In S. C. O. Conceição (Ed.), Teaching strategies in the online environment (pp. 5–11). Hoboken, NJ: Wiley Periodicals, Inc. DeVaney, T. A. (2010). Anxiety and attitudes of graduate students in on-campus v. online statistics courses. Journal of Statistics Education, 18(1). Retrieved from http://www.amstat.org/publications/jse/v18n1/devaney.pdf DelMas, R. C. (2004). A comparison of mathematical and statistical reasoning. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 79–95). Dordrecht, The Netherlands: Kluwer Academic Publishers. Dillon, K. (1988). Statisticophobia. In M. E. Ware & C. L. Brewer (Eds.), Handbook for teaching statistics and research methods (p. 3). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc. Draves, W. A. (2000). Teaching online. River Falls, WI: Learning Resources Network.
Dunn, D. S., Smith, R. A., & Beins, B. C. (2007). Overview: Best practices for teaching statistics and research methods in the behavioral sciences. In D. S. Dunn, R. A. Smith, & B. C. Beins (Eds.), Best practices for teaching statistics and research methods in the behavioral sciences (pp. 3–11). Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Dutton, J., & Dutton, M. (2005). Characteristics and performance of students in an online section of business statistics. Journal of Statistics Education, 13(3). Retrieved from http://www.amstat.org/publications/jse/v13n3/dutton.html. Díaz, S. R., Swan, K., Ice, P., & Kupczynski, L. (2010). Student ratings of the importance of survey items, multiplicative factor analysis, and the validity of the community of inquiry survey. The Internet and Higher Education, 13(1-2), 22–30. Evans, S. R., Wang, R., Yeh, T.-M., Anderson, J., Haija, R., Mcbratney-Owen, P. M., Peeples, L., et al. (2007). Evaluation of distance learning in an “introduction to biostatistics” class: A case study. Statistics Education Research Journal, 6(2), 59–77. Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ6(2)_Evans.pdf Falloon, G. (2011). Making the connection: Moore’s theory of transactional distance and its relevance to the use of a virtual classroom in postgraduate online teacher education. Journal of Research on Technology in Education, 43(3), 187–209. Finney, S. J., & Schraw, G. (2003). Self-efficacy beliefs in college statistics courses. Contemporary Educational Psychology, 28(2), 161–186. Gal, I. (2004). Statistical literacy. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 47–78). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Gal, I., & Garfield, J. B. (1997a). Curricular goals and assessment challenges in statistics education. In I. Gal & J. B. Garfield (Eds.), The assessment challenge in statistics education (pp. 1–13). Amsterdam, Netherlands: IOS Press.
Gal, I., & Garfield, J. B. (Eds.). (1997b). The assessment challenge in statistics education. Amsterdam, Netherlands: IOS Press.
Gal, I., Ginsburg, L., & Schau, C. (1997). Monitoring attitudes and beliefs in statistics education. In I. Gal & J. B. Garfield (Eds.), The assessment challenge in statistics education (pp. 37–51). Amsterdam, Netherlands: IOS Press.
Garfield, J. B. (1995). How students learn statistics. International Statistical Review / Revue Internationale de Statistique, 63(1), 25–34.
Garfield, J. B. (2003). Assessing statistical reasoning. Statistics Education Research Journal, 2(1), 22–32. Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ2(1).pdf
Garfield, J. B., & Ben-Zvi, D. (2004). Research on statistical literacy, reasoning, and thinking: Issues, challenges, and implications. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 397–409). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87–105. Retrieved from http://communitiesofinquiry.com/sites/communityofinquiry.com/files/Critical_Inquiry_model.pdf
Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. Internet and Higher Education, 10, 157–172.
Garrison, D. R., & Archer, W. (2007). A theory of community of inquiry. In M. G. Moore (Ed.), Handbook of distance education (2nd ed., pp. 77–88). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Garrison, D. R., Cleveland-Innes, M., & Fung, T. S. (2010). Exploring causal relationships among teaching, cognitive and social presence: Student perceptions of the community of inquiry framework. Internet and Higher Education, 13, 31–36.
Gelman, A., & Nolan, D. (2002). Teaching statistics: A bag of tricks. New York: Oxford University Press.
Ghinassi, C. W. (2010). Anxiety. Greenwood (eBook).
Giossos, Y., Koutsouba, M., Lionarakis, A., & Skavantzos, K. (2009). Reconsidering Moore’s transactional distance theory. European Journal of Open, Distance, and E-Learning, 2. Retrieved from http://www.eurodl.org/index.php?article=374
Glass, G. V., & Hopkins, K. D. (1996). Statistical methods in education and psychology (3rd ed.). Boston, MA: Allyn and Bacon.
Godfrey, A. B. (1988). Should mathematicians teach statistics? The College Mathematics Journal, 19(1), 8–10.
Gokool-Ramdoo, S. (2009). Policy deficit in distance education: A transactional distance. The International Review of Research in Open and Distance Learning, 10(4). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/702/1326
Gordon, S. (2004). Understanding students’ experiences of statistics in a service course. Statistics Education Research Journal, 3(1), 40–59. Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ3(1)_gordon.pdf
Gorsky, P., & Caspi, A. (2005). A critical analysis of transactional distance theory. Quarterly Review of Distance Education, 6(1), 1–11.
Gould, R. (2004). Variability: One statistician’s view. Statistics Education Research Journal, 3(2). Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ3(2)_Gould.pdf
Hammerman, J. K., & Rubin, A. (2004). Strategies for managing statistical complexity with new software tools. Statistics Education Research Journal, 3(2). Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ3(2)_Hammerman_Rubin.pdf
Hanna, D., Shevlin, M., & Dempster, M. (2008). The structure of the statistics anxiety rating scale: A confirmatory factor analysis using UK psychology students. Personality and Individual Differences, 45(1), 68–74.
Hannafin, M., Hill, J. R., Oliver, K., Glazer, E., & Sharma, P. (2003). Cognitive and learning factors in web-based distance learning environments. In M. G. Moore & W. Anderson (Eds.), Handbook of distance education (1st ed., pp. 245–260). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Hanoch, Y., & Vitouch, O. (2004). When less is more: Information, emotional arousal and the ecological reframing of the Yerkes-Dodson Law. Theory & Psychology, 14(4), 427–452.
Hansman, C. A., & Mott, V. W. (2010). Adult learners. In C. E. Kasworm, A. D. Rose, & J. M. Ross-Gordon (Eds.), Handbook of adult and continuing education (2010 ed., pp. 13–23). Thousand Oaks, CA: SAGE Publications, Inc.
Harris, C. M., Mazoué, J. G., Hamdan, H., & Casiple, A. R. (2007). Designing an online introductory statistics course. In D. S. Dunn, R. A. Smith, & B. C. Beins (Eds.), Best practices for teaching statistics and research methods in the behavioral sciences (pp. 93–108). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Hasler, A. S., Kersten, B., & Sweller, J. (2007). Learner control, cognitive load and instructional animation. Applied Cognitive Psychology, 21, 713–729.
Hastings, M. W. (1988). Statistics: Challenge for students and the professor. In M. E. Ware & C. L. Brewer (Eds.), Handbook for teaching statistics and research methods (pp. 6–7). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Hawkins, A. (1996). Myth-conceptions! In J. B. Garfield & G. Burrill (Eds.), Research on the role of technology in teaching and learning statistics (pp. 1–14). University of Granada, Spain: The International Statistical Institute.
Jacobs, K. W. (1988). Instructional techniques in the introductory statistics course: The first class meeting. In M. E. Ware & C. L. Brewer (Eds.), Handbook for teaching statistics and research methods (p. 4). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Joinson, A. N. (2001). Self-disclosure in computer-mediated communication: The role of self-awareness and visual anonymity. European Journal of Social Psychology, 31(2), 177–192.
Joinson, A. N. (2007). Disinhibition and the internet. In J. Gackenbach (Ed.), Psychology and the internet (2nd ed., pp. 75–92). Burlington, MA: Academic Press.
Kang, H., & Gyorke, A. S. (2008). Rethinking distance learning activities: A comparison of transactional distance theory and activity theory. Open Learning, 23(3), 203–214.
Katz, Y. J., & Yablon, Y. B. (2003). Online university learning: Cognitive and affective perspectives. Campus-Wide Information Systems, 20(2), 48–54.
Ko, S., & Rossen, S. (2010). Teaching online: A practical guide (3rd ed.). New York: Routledge.
Kolb, D. (2000). Learning places: Building dwelling thinking online. In N. Blake & P. Standish (Eds.), Enquiries at the interface: Philosophical problems of online education (pp. 135–148). Malden, MA: Blackwell Publishers.
Konold, C., & Pollatsek, A. (2004). Conceptualizing an average as a stable feature of a noisy process. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 169–199). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Kosonen, P., & Winne, P. H. (1995). Effects of teaching statistical laws on reasoning about everyday problems. Journal of Educational Psychology, 87(1), 33–46.
Levine, S. J. (2007). The online discussion board. In S. C. O. Conceição (Ed.), Teaching strategies in the online environment (pp. 67–74). Hoboken, NJ: Wiley Periodicals, Inc.
Liu, Y. (2007). Building and sustaining learning communities. In N. A. Buzzetto-More (Ed.), Principles of effective online teaching (pp. 137–153). Santa Rosa, CA: Informing Science Press.
Mathieson, K. (2012). Exploring student perceptions of audiovisual feedback via screencasting in online courses. The American Journal of Distance Education, 26(3), 143–156.
Mayer, R. E. (2005). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 31–48). New York: Cambridge University Press.
McGrath, V. (2009). Reviewing the evidence on how adult students learn: An examination of Knowles’ model of andragogy. Adult Learner: The Irish Journal of Adult and Community Education, 99–110.
McInnerney, J. M., & Roberts, T. S. (2004). Online learning: Social interaction and the creation of a sense of community. Educational Technology & Society, 7(3), 73–81.
McKie, J. (2000). Conjuring notions of place. In N. Blake & P. Standish (Eds.), Enquiries at the interface: Philosophical problems of online education (pp. 123–133). Malden, MA: Blackwell Publishers.
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Retrieved from http://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf
Meletiou, M. (2000). Developing students’ conceptions of variation: An untapped well in statistical reasoning (Unpublished doctoral dissertation). The University of Texas at Austin.
Meletiou, M. (2002). Concepts of variation: A literature review. Statistics Education Research Journal, 1(1), 46–52. Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ1(1).pdf
Meletiou-Mavrotheris, M., & Lee, C. (2002). Teaching students the stochastic nature of statistical concepts in an introductory statistics course. Statistics Education Research Journal, 1(2), 22–37. Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ1(2).pdf
Merriam, S. B. (2001). Andragogy and self-directed learning: Pillars of adult learning theory. New Directions for Adult & Continuing Education, (89), 3–14.
Metz, K. E. (1997). Dimensions in the assessment of students’ understanding and application of chance. In I. Gal & J. B. Garfield (Eds.), The assessment challenge in statistics education (pp. 223–238). Amsterdam, Netherlands: IOS Press.
Mills, J. D. (2004). Students’ attitudes toward statistics: Implications for the future. College Student Journal, 38(3), 349–361.
Mills, J. D., & Raju, D. (2011). Teaching statistics online: A decade’s review of the literature about what works. Journal of Statistics Education, 19(2). Retrieved from http://www.amstat.org/publications/jse/v19n2/mills.pdf
Mokros, J., & Russell, S. J. (1995). Children’s concepts of average and representativeness. Journal for Research in Mathematics Education, 26(1), 20–39.
Moore, D. S. (1988). Should mathematicians teach statistics? The College Mathematics Journal, 19(1), 3–7.
Moore, D. S. (1997). New pedagogy and new content: The case of statistics. International Statistical Review / Revue Internationale de Statistique, 65(2), 123–137.
Moore, D. S. (1998). Statistics among the liberal arts. Journal of the American Statistical Association, 93(444), 1253–1259.
Moore, D. S. (1999). What shall we teach beginners? International Statistical Review / Revue Internationale de Statistique, 67(3), 250–252.
Moore, M. G. (1993). Theory of transactional distance. In D. Keegan (Ed.), Theoretical principles of distance education (pp. 22–38). New York: Routledge.
Moore, M. G. (2006). Editorial: Faculty professional development. American Journal of Distance Education, 20(2), 61–63.
Moore, M. G. (2007). The theory of transactional distance. In M. G. Moore (Ed.), Handbook of distance education (2nd ed., pp. 89–105). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Moore, M. G., & Kearsley, G. (2005). Distance education: A systems view (2nd ed.). Belmont, CA: Wadsworth.
Murphy, E., & Rodríguez-Manzanares, M. A. (2008). Revisiting transactional distance theory in a context of web-based high-school distance education. Journal of Distance Education, 22(2), 1–14.
Noel-Levitz. (2011). 2011 research report: National online learners priorities report. Iowa City, IA. Retrieved from https://www.noellevitz.com/documents/shared/Papers_and_Research/2011/PSOL_report 2011.pdf
Ogden, R. T. (1996). Central Limit Theorem applet. Retrieved from http://www.stat.sc.edu/~west/javahtml/CLT.html
Onwuegbuzie, A. J. (2000). Attitudes toward statistics assessments. Assessment & Evaluation in Higher Education, 25(4), 321–339.
Onwuegbuzie, A. J., & Wilson, V. A. (2003). Statistics anxiety: Nature, etiology, antecedents, effects, and treatments – a comprehensive review of the literature. Teaching in Higher Education, 8(2), 195–209.
Paas, F., van Gog, T., & Sweller, J. (2010). Cognitive load theory: New conceptualizations, specifications, and integrated research perspectives. Educational Psychology Review, 22, 115–121.
Papousek, I., Ruggeri, K., Macher, D., Paechter, M., Henne, M., Weiss, E. M., Schulter, G., et al. (2012). Psychometric evaluation and experimental validation of the Statistics Anxiety Rating Scale. Journal of Personality Assessment, 94(1), 82–91.
Parten, M. B. (1932). Social participation among preschool children. Journal of Abnormal and Social Psychology, 27, 243–269.
Pfannkuch, M., & Wild, C. (2004). Towards an understanding of statistical thinking. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 17–46). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Piaget, J. (1962). Play, dreams, and imitation in childhood. New York: W. W. Norton & Company.
Plass, J. L., Moreno, R., & Brünken, R. (2010). Introduction. In J. L. Plass, R. Moreno, & R. Brünken (Eds.), Cognitive load theory (pp. 1–6). New York: Cambridge University Press.
Ragan, L. C. (2000). Good teaching is good teaching: The relationship between guiding principles for distance and general education. Journal of General Education, 49(1), 10–22.
Ragan, L. C., & Terheggen, S. L. (2003). Workload management strategies for the online environment. University Park, PA. Retrieved from http://www.worldcampus.psu.edu/pdf/fac/workload_strat.pdf
Rahman, S., Yasin, R. M., Yassin, S. F. M., & Nordin, N. M. (2011). Examining psychological aspects in online discussion. Procedia - Social and Behavioral Sciences, 15, 3168–3172.
Reading, C., & Shaughnessy, J. M. (2004). Reasoning about variation. In D. Ben-Zvi & J. B. Garfield (Eds.), The challenge of developing statistical literacy, reasoning and thinking (pp. 201–226). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Remote application services. (2012). Retrieved from https://cms.psu.edu/section/content/Default.asp?WCI=pgDisplay&WCU=CRSCNT&ENTRY_ID=114D126CF57749EA986054BD06CD33DA
Roberts, D. M., & Bilderback, E. W. (1980). Reliability and validity of a statistics attitude survey. Educational and Psychological Measurement, 40(1), 235–238.
Roberts, D. M., & Reese, C. M. (1987). A comparison of two scales measuring attitudes towards statistics. Educational and Psychological Measurement, 47(3), 759–764.
Roberts, D. M., & Saxe, J. E. (1982). Validity of a statistics attitude survey: A follow-up study. Educational and Psychological Measurement, 42(3), 907–912.
Rodarte-Luna, B., & Sherry, A. (2008). Sex differences in the relation between statistics anxiety and cognitive/learning strategies. Contemporary Educational Psychology, 33, 327–344.
Russell, T. L. (2001). The no significant difference phenomenon. International Distance Education Certification Center (IDEC).
Rutkowski, A.-F., & Saunders, C. S. (2010). Growing pains with information overload. Computer, 43(6), 94–96.
SERJ – Statistics Education Research Journal. (2010). Retrieved July 11, 2012, from http://www.stat.auckland.ac.nz/~iase/publications.php?show=serj
"Sampling distribution." (n.d.). Retrieved from http://onlinestatbook.com/stat_sim/sampling_dist/index.html
Schau, C. (2003). Survey of Attitudes Toward Statistics: SATS-36 Pre. Retrieved from http://www.evaluationandstatistics.com/bizwaterSATS36monkey.pdf
Schau, C. (2005a). SATS background. Retrieved from http://www.evaluationandstatistics.com/id18.html
Schau, C. (2005b). Others’ references that include the SATS. Retrieved from http://www.evaluationandstatistics.com/id22.html
Schau, C., Stevens, J., Dauphinee, T. L., & Del Vecchio, A. (1995). The development and validation of the Survey of Attitudes Toward Statistics. Educational and Psychological Measurement, 55(5), 868–875.
Schneider, W. R. (2011). The relationship between statistics self-efficacy, statistics anxiety, and performance in an introductory graduate statistics course (Doctoral dissertation). University of South Florida. Retrieved from http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=4530&context=etd
Shaughnessy, J. M. (1992). Research in probability and statistics: Reflections and directions. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 465–494). New York: Macmillan Publishing Company.
Shea, P., & Bidjerano, T. (2012). Learning presence as a moderator in the community of inquiry model. Computers & Education, 59(2), 316–326.
Shea, P., Hayes, S., Smith, S. U., Vickers, J., Bidjerano, T., Pickett, A., Gozza-Cohen, M., et al. (2012). Learning presence: Additional research on a new conceptual element within the community of inquiry (CoI) framework. Internet and Higher Education, 15, 89–95.
Shea, P., Hayes, S., & Vickers, J. (2010). Online instructional effort measured through the lens of teaching presence in the community of inquiry framework: A re-examination of measures and approach. International Review of Research in Open and Distance Learning, 11(3), 127–154.
Shearer, R. (2007). Instructional design and the technologies: An overview. In M. G. Moore (Ed.), Handbook of distance education (2nd ed., pp. 219–232). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Shearer, R. (2009). Transactional distance and dialogue: An exploratory study to refine the theoretical construct of dialogue in online learning (Doctoral dissertation). The Pennsylvania State University.
Shearer, R. (2010). Transactional distance and dialogue in online learning. 26th Annual Conference on Distance Teaching & Learning. Retrieved from http://www.uwex.edu/disted/conference/Resource_library/proceedings/29897_10.pdf
Shin, N. (2001). Beyond interaction: Transactional presence and distance learning (Doctoral dissertation). The Pennsylvania State University.
Sloboda, B. W. (2005). Improving the teaching of statistics online: A multi-faceted approach. The Journal of Educators Online, 2(1), 1–13. Retrieved from http://www.thejeo.com/Archives/Volume2Number1/SlobodaFinal.pdf
Stavredes, T. (2011). Effective online teaching: Foundations and strategies for student success. San Francisco, CA: Jossey-Bass.
Stephenson, W. R. (2001). Statistics at a distance. Journal of Statistics Education, 9(3). Retrieved from http://www.amstat.org/publications/jse/v9n3/stephenson.html
Stover, A. B. (2004). Learning architecture online: Distance education. Retrieved from http://home.comcast.net/~abstover/learning_arch/learning_arch_05.html
Strauss, S., & Bichler, E. (1988). The development of children’s concept of the arithmetic average. Journal for Research in Mathematics Education, 19, 64–80.
Strobl, C., Dittrich, C., Seiler, C., Hackensperger, S., & Leisch, F. (2010). Measurement and predictors of a negative attitude towards statistics among LMU students. In T. Kneib & G. Tutz (Eds.), Statistical modelling and regression structures: Festschrift in honour of Ludwig Fahrmeir (pp. 217–230). Berlin: Springer-Verlag.
Suanpang, P., Petocz, P., & Kalceff, W. (2004). Students’ attitudes to learning business statistics: Comparison of online and traditional methods. Educational Technology & Society, 7, 9–20.
Suler, J. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7(3), 321–326.
Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22(2), 306–331.
Sweat-Guy, R. (2007). The role of interaction in e-learning. In N. A. Buzzetto-More (Ed.), Principles of effective online teaching (pp. 83–103). Santa Rosa, CA: Informing Science Press.
Sweller, J. (2005). Implications of cognitive load theory for multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 19–30). New York: Cambridge University Press.
Sweller, J. (2010). Element interactivity and intrinsic, extraneous, and germane cognitive load. Educational Psychology Review, 22, 123–138.
Taylor, B., & Kroth, M. (2009). Andragogy’s transition into the future: Meta-analysis of andragogy and its search for a measurable instrument. Journal of Adult Education, 38(1), 1–11.
Teigen, K. H. (1994). Yerkes-Dodson: A law for all seasons. Theory & Psychology, 4, 525–547.
Tempelaar, D. T., Gijselaers, W. H., & van der Loeff, S. S. (2006). Puzzles in statistical reasoning. Journal of Statistics Education, 14(1). Retrieved from http://www.amstat.org/publications/jse/v14n1/tempelaar.html
Tempelaar, D. T., Niculescu, A., Rienties, B., Gijselaers, W. H., & Giesbers, B. (2012). How achievement emotions impact students’ decisions for online learning, and what precedes those emotions. Internet and Higher Education, 15(3), 161–169.
Tintle, N., Topliff, K., Vanderstoep, J., Holmes, V.-L., & Swanson, T. (2012). Retention of statistics concepts in a preliminary randomization-based introductory statistics curriculum. Statistics Education Research Journal, 11(1), 21–40. Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ11(1)_Tintle.pdf
Tropp, L. (2007). My computer ate my homework: Deconstructing misbehavior in online learning. In N. A. Buzzetto-More (Ed.), Principles of effective online teaching (pp. 247–264). Santa Rosa, CA: Informing Science Press.
Tudor, G. E. (2006). Teaching introductory statistics online: Satisfying the students. Journal of Statistics Education, 14(3). Retrieved from www.amstat.org/publications/jse/v14n3/tudor.html
Van Keuren, J. (2006). Web-based instruction: A practical guide for online courses. Lanham, MD: The Rowman & Littlefield Publishing Group, Inc.
VanHoof, S., Kuppens, S., Sotos, A. E., Verschaffel, L., & Onghena, P. (2011). Measuring statistics attitudes: Structure of the Survey of Attitudes Toward Statistics (SATS-36). Statistics Education Research Journal, 10(1), 35–51. Retrieved from http://www.stat.auckland.ac.nz/~iase/serj/SERJ10(1)_Vanhoof.pdf
Wanstreet, C. E., & Stein, D. S. (2011). Presence over time in synchronous communities of inquiry. American Journal of Distance Education, 25, 162–177.
Wild, C., & Pfannkuch, M. (1999). Statistical thinking and empirical enquiry. International Statistical Review / Revue Internationale de Statistique, 67(3), 223–248.
Wise, S. L. (1985). The development and validation of a scale measuring attitudes toward statistics. Educational and Psychological Measurement, 45(2), 401–405.
Yerkes, R. M., & Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology, 18(5), 459–482.
Yi, H. S., & Jeon, S. (2009). Validation study of Korean version of Survey of Attitudes Toward Statistics (K-SATS). Korean Journal of Applied Statistics, 22(5), 1115–1129.
Yu, R. (2011, May). eLearning’s benefits are obvious: Why don't they like it? Learning Solutions Magazine. Retrieved from http://www.learningsolutionsmag.com/articles/674/elearnings-benefits-are-obvious-why-dont-they-like-it
Yukselturk, E., & Yildirim, Z. (2008). Investigation of interaction, online support, course structure and flexibility as the contributing factors to students’ satisfaction in an online certificate program. Journal of Educational Technology & Society, 11(4), 51–65.
Zimprich, D. (2012). Attitudes toward statistics among Swiss psychology students. Swiss Journal of Psychology, 71(3), 149–155.