THE RELATIONSHIP BETWEEN PART-TIME ONLINE FACULTY’S TECHNOLOGICAL, PEDAGOGICAL, AND CONTENT KNOWLEDGE AND STUDENT GRADES by Wadad Kaaki Copyright 2016
A Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in Higher Education Administration
University of Phoenix
ABSTRACT Current research from 2010 to 2016 indicates online learner grades have dropped at for-profit virtual institutions. During this same period, part-time online faculty accounted for 80% to 90% of online faculty at for-profit virtual institutions. There is evidence of low online learner grades in an era of increased use of part-time online faculty. The purpose of this quantitative non-experimental study was to examine the relationship between seven self-reported predictor variables of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners’ grades. A total of 81 out of 148 faculty members participated in the TPACK Survey. Multiple linear regression and the Pearson r correlation coefficient were used to analyze the data. Results of the analyses indicated the seven self-reported predictor variables of technological, pedagogical, and content knowledge of part-time online instructors did not predict online learners’ grades. Study findings imply that the domains of technological, pedagogical, and content knowledge of part-time online instructors do not account for low online learners’ grades. Close analysis of other predictor variables that may account for low online learner grades is recommended.
DEDICATION This dissertation is dedicated to my parents who I miss every day. I know they would be proud to see what became of their youngest daughter. I also dedicate this dissertation to my husband, children, and family. This dissertation is a symbol of hope, perseverance, hard work, and effort. I am the first one in my entire family to pursue higher education and I believe that this degree has helped me encourage, and empower women in my family, and it has allowed me to become a better role model to those that want to follow my lead. My sister Ferial has always been my friend and cheerleader; she has a special place in my heart forever. I will always be grateful for the unconditional love and support of those around me in my educational journey and life, thank you to all that supported me when things became almost impossible.
ACKNOWLEDGMENTS I would like to express my deepest appreciation to numerous individuals. First, I would like to thank the members of my cohort enrolled in the Doctor of Philosophy in Higher Education Administration at University of Phoenix for staying in touch and checking up on me. Second, I appreciate and acknowledge my dissertation chair and committee members, Dr. Lauryl Lefebvre, Dr. Barbara Baethe, and Dr. Abdelmagead Elbiali, for their continuous feedback, support, and guidance in making this dissertation possible. My committee supported me at the most difficult stages of my dissertation journey, and for that, I am blessed and honored to have worked with such a talented committee. I am forever grateful for their contributions.
TABLE OF CONTENTS Contents
Page
List of Tables ..........................................................................................................x
List of Figures ........................................................................................................xi
Chapter 1: Introduction ............................................................................................1
Background of the Problem .....................................................................................2
Problem Statement ...................................................................................................3
Purpose of the Study ................................................................................................4
Significance of the Study .........................................................................................6
Nature of the Study ..................................................................................................7
Research Questions and Hypotheses .......................................................................8
Theoretical Framework ..........................................................................................10
Definitions..............................................................................................................14
Assumptions...........................................................................................................15
Scope, Limitations, and Delimitations ...................................................................15
Scope ..........................................................................................................16
Limitations .................................................................................................16
Delimitations ..............................................................................................17
Summary ................................................................................................................18
Chapter 2: Review of the Literature.......................................................................20
Title Searches, Articles, Research Documents, and Journals ................................20
Online Education ...................................................................................................21
Brief history ...............................................................................................22
Recent Expansion...................................................................................................23
TPACK ..................................................................................................................26
Instructional Quality ..............................................................................................28
Pedagogy and Theory ............................................................................................31
Virtual Faculty Profile ...........................................................................................36
Responsibilities and Attitudes................................................................................37
Curriculum and Design ..........................................................................................39
Human Resources and Recruitment ......................................................................42
Faculty training ..........................................................................................43
Faculty training models..............................................................................45
Professional development ..........................................................................47
Cycles of professional development ..........................................................51
Faculty Effectiveness .............................................................................................53
Conclusion .................................................................................................57
Chapter 3: Methodology ........................................................................................62
Research Method ...................................................................................................62
Research Design.....................................................................................................63
Research Questions and Hypotheses .....................................................................65
Population and Sampling .......................................................................................66
Informed Consent and Confidentiality...................................................................68
Data Collection ......................................................................................................69
Instrumentation ......................................................................................................71
Reliability and Validity ..........................................................................................72
Data Analysis .........................................................................................................75
Summary ................................................................................................................78
Chapter 4: Results ..................................................................................................79
Research Design and Method ................................................................................80
Faculty Population and Sample Demographics .....................................................80
Faculty Professional Development Frequency ......................................................83
Reliability Analysis ................................................................................................84
Student Mean Grades .............................................................................................85
Research Questions/Hypotheses Results ...............................................................87
Research Questions and Hypotheses .....................................................................88
Research Question 1/Hypothesis 1 ............................................................89
Research Question 2/Hypothesis 2 ............................................................92
Research Question 3/Hypothesis 3 ............................................................94
Research Question 4/Hypothesis 4 ............................................................96
Summary ................................................................................................................98
Chapter 5: Conclusions and Recommendations ....................................................99
Conclusions ..........................................................................................................100
Limitations ...........................................................................................................106
Implications..........................................................................................................107
Recommendations for Practice............................................................................109
Recommendations for Further Research..............................................................113
Summary ..............................................................................................................116
References ............................................................................................................120
Appendix A: Permission to Use Existing Survey ................................................148
Appendix B: Premises, Recruitment, and Name (PRN) Use Permission ............149
Appendix C: Informed Consent Agreement ........................................................150
Appendix D: TPACK Survey - Electronic...........................................................152
Appendix E: Electronic Survey Host Website Privacy Policy SurveyMonkey® ...160
Appendix F: Histogram for Student Grades ........................................................173
Appendix G: Histogram for Technological Knowledge ......................................174
Appendix H: Histogram for Content Knowledge ................................................175
Appendix I: Histogram for Pedagogical Knowledge ...........................................176
Appendix J: Histogram for Pedagogical Content Knowledge .............................177
Appendix K: Histogram for Technological Content Knowledge ........................178
Appendix L: Histogram of Technological Pedagogical Knowledge ...................179
Appendix M: Histogram of Technological Pedagogical Content Knowledge ....180
LIST OF TABLES
Table 1: Age of Part-Time Online Faculty ............................................................81
Table 2: Race of Part-Time Online Faculty ...........................................................82
Table 3: Online Teaching Experience of Part-Time Online Faculty .....................82
Table 4: Professional Development .......................................................................83
Table 5: Reliability Coefficients ............................................................................84
Table 6: Descriptive Statistics ...............................................................................85
Table 7: Skewness and Kurtosis Coefficients ........................................................86
Table 8: Regression Coefficients for Student Grades ............................................91
Table 9: Regression Coefficients for Student Grades ............................................93
Table 10: Regression Coefficients for TPACK .....................................................96
Table 11: Summary of Hypotheses Tested and Outcomes ....................................97
LIST OF FIGURES
Figure 1: Technological Pedagogical Content Knowledge (TPACK) ...................12
Figure 2: Histogram of Standardized Residuals for Student Grade .......................90
Figure 3: Scatterplot of Standardized Residuals and Standardized Predicted Values ...91
Figure 4: Histogram of Standardized Residuals for Student Grade .......................92
Figure 5: Scatterplot of Standardized Residuals and Standardized Predicted Values ...93
Figure 6: Histogram for Technological Pedagogical Content Knowledge ............94
Figure 7: Scatterplot of Standardized Residuals and Standardized Predicted Values ...95
Chapter 1 Introduction Online learning has been recognized as an alternate route to education and a distinct style of learning and attending classes because of the unique dynamics of online learning pedagogy (Allen & Seaman, 2007, 2009, 2011; Collopy & Arnold, 2009; Crawford-Ferre & Wiest, 2012; Lacey, 2013; Tipple, 2010). Instructors who facilitate online courses require technological, pedagogical, and content knowledge in order for learners to benefit from the courses (Mishra & Koehler, 2006; Shulman, 1986). The TPACK theoretical framework measures self-reported responses of instructors in three main areas, technology, pedagogy, and content knowledge, and in four other domain areas where the constructs of technology, pedagogy, and content knowledge overlap, for a total of seven areas. The TPACK Survey tool was used to measure the self-reported responses of part-time online faculty working at a private for-profit virtual institution. Online instructors are usually trained using a corporate method consisting of a short-term course-training module (Mayadas, Bourne, & Bacsich, 2009). The training and assessment of instructors may not be effective when modules are poorly designed, when instructors do not apply module concepts to the actual courses instructed, or when instructors lack extensive knowledge of and experience with interactions in virtual settings (Jayaram et al., 2012). Online faculty may also be ineffective if department leaders do not offer purposive training and applicable practice in the seven domains of technological, pedagogical, and content knowledge (Ball & Cohen, 1999; Mishra & Koehler, 2006). Chapter 1 outlines the background of the problem and the problem to be researched. Introduced in this chapter
are the purpose and significance of the research, the research questions and hypotheses, and an overview of the theoretical framework supporting the topic of this research study. Background of the Problem Online enrollment has grown significantly at degree-granting postsecondary institutions; online course enrollment was 16,611,710 in 2002, or 9.6% of the total students enrolled, while in 2011 course enrollment was 20,994,113, representing a growth of 32% of the total students enrolled in online courses (Allen & Seaman, 2013). Ninety percent of all higher education institutions in the United States, including virtual and brick-and-mortar institutions, offer online courses (Collopy & Arnold, 2009; Crawford-Ferre & Wiest, 2012; Lacey, 2013; Tipple, 2010). In 2012, 4,726 private for-profit 4-year virtual institutions offered degree programs and courses identical in curriculum to traditional campus-based programs and courses (Ginder & Sykes, 2013). In 2012, 61.3% of online learners in the United States were enrolled in private for-profit 4-year virtual institutions (Ginder & Sykes, 2013; NCES, 2014). In a fast-paced, technologically advanced world (Gabriel & Kaufield, 2008), an online instructor must keep up with technology and the changes that can affect teaching performance quality. Higher education administrators, especially academic affairs departments, strive to ensure that they hire qualified personnel who can support the institution’s mission and goals. Once hired, online instructors require close evaluation of instructional skill, experience, and technological, pedagogical, and content knowledge when instructing courses (Gabriel & Kaufield, 2008; Mishra & Koehler, 2006; Mishra, Koehler, & Kereluik, 2009).
Problem Statement The general problem is that online learners achieve lower grades than learners who attend traditional classrooms, and online learners are more likely to drop or fail courses than learners in traditional classrooms (Council on Educational Technology & Learning Innovation [CETLI], 2013; Johnson & Mejia, 2014; Nistor & Neubauer, 2010; Xu & Jaggars, 2013). A longitudinal study comprising 750,000 students in 2012 indicated that 79.4% of students who attended online courses successfully completed their courses, while 85.9% of students who attended traditional courses completed their courses (NCES, 2014). The high percentage of incomplete online courses persisted from 2002 to 2014 in every subject area, affirming that learners are less likely to succeed in online courses (Johnson & Mejia, 2014; NCES, 2014). The Community College Research Center at Columbia University examined 500,000 community and technical college courses, in both online and face-to-face formats, in the state of Washington. The study included 40,000 students, and results indicated that learners enrolled in online courses received lower grades than did learners in traditional courses. The mean grade point average for online learners was 2.77 on a 4.0 scale, and students in traditional courses averaged a 2.98 grade point average on a 4.0 scale (Xu & Jaggars, 2013). Part-time online faculty account for between 80% and 90% of faculty at private for-profit virtual institutions (NCES, 2014). The specific problem is that there is evidence of low online postsecondary learner grades in an era of increased use of part-time online faculty (Johnson & Mejia, 2014; NCES, 2014; Roby, Ashe, Singh, & Clark,
2013; Xu & Jaggars, 2013). The Online Learning Consortium (OLC, 2011) reported that 72% of virtual universities in the United States offered full-time online faculty preparation using their own internal faculty training programs. Based on a sample of 2,512 virtual universities, faculty instructional training might not be effective, as only 10% to 30% of online faculty improved in performance according to faculty evaluations (OLC, 2011). Existing literature regarding low online learner grades is limited and does not address the predictor variables of part-time online faculty members, who make up the majority of instructors at for-profit virtual institutions. Newly employed part-time online instructors in technologically driven educational settings often arrive in the first year of the profession inexperienced in the seven areas of technology, pedagogy, and content knowledge (Mishra & Koehler, 2006; Mishra et al., 2009; Schmidt et al., 2010). Accordingly, the self-reported predictor variables of part-time online faculty may have some relationship with online learners’ grades. Purpose of the Study The purpose of this quantitative non-experimental study was to examine the relationship between the self-reported predictor variables technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK) of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners’ grades. Part-time online faculty account for between 80% and 90% of faculty at private for-profit virtual institutions (NCES, 2014). Part-time online faculty appointees
at this institution type were the focus of this study because part-time online faculty currently account for a large percentage of instructors at virtual institutions in the United States. The TPACK Survey instrument measures seven self-reported constructs or domains that branch from technological knowledge, pedagogical knowledge and content knowledge. In this study, the TPACK Survey measured the seven self-reported predictor variables technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK) of part-time online instructors. Volunteer participants self-reported their own perceptions and evaluated their instructional practices within the seven domains of technology, pedagogy, and content knowledge. The critical difference between actual and self-reported knowledge is that self-reported knowledge is how participants perceive, measure, and rate themselves according to their own judgment (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). A private for-profit virtual university was selected for the setting of this study given the exclusive attention to online education as part of the institutional mission. Private for-profit virtual universities have increased recruiting part-time online faculty to instruct online courses (Allen & Seaman, 2009, 2011; Cross & Goldenberg, 2009; Heyman, 2010; NCES, 2010; West, 2010). On average, between 80% and 90% of faculty hired at for-profit virtual institutions worked on a part-time basis in 2013 (NCES, 2014).
Significance of the Study Improvement in the online learning environment is an ever-changing process in higher education. Instructors are required to adjust to continuous updates of course design, content, and student needs in order to make content meaningful, easier to comprehend, and applicable to current events and real-life experiences (Allen & Seaman, 2007, 2009, 2013; Heyman, 2010; Ko & Rosen, 2001; Palloff & Pratt, 2003; Shiffman, 2009). Research indicates that many online instructors lack quality faculty instructional training and professional development opportunities (Allen & Seaman, 2011, 2013; Cook, Dickerson, Annetta, & Minogue, 2011; OLC, 2011). This study’s findings can provide direction on how to prioritize faculty assessment and professional development in the distance learning sectors for any higher education institution offering courses online and can, consequently, benefit learners who choose to learn online. Hence, a study that focuses on the seven domains of technological, pedagogical, and content knowledge of part-time online faculty and online student grades at a virtual institution would be valuable. An examination of the seven domains of technological, pedagogical, and content knowledge of part-time online faculty and online learners’ grades could help identify and formulate solutions for faculty so that retention and better grades may be achieved. Information collected from this study may assist higher education sectors in supporting online education departments with understanding and characterizing the seven domains of technological, pedagogical, and content knowledge of part-time online faculty that may predict the grades of online learners.
Nature of the Study This study was conducted using a quantitative non-experimental approach to examine the relationship between the self-reported predictor variables technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK) of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners’ grades. Quantitative research was suited to this study’s aim of examining the relationship between the self-reported predictor variables and the criterion variable. Testing for a prediction between two or more variables constitutes a quantitative research approach when analyzing numeric indicators and nominal and ordinal data (Cooper & Schindler, 2008; Miles & Shevlin, 2001; Stangor, 2011). A quantitative study was appropriate for examining the relationship between the self-reported predictor variables (the seven domains of technological, pedagogical, and content knowledge of part-time online instructors) and the criterion variable (student grades). The intention of this study was not to determine cause and effect, but rather to investigate the prediction between the self-reported predictor variables and the criterion variable. Non-experimental research is classified as either cross-sectional or longitudinal based on the timing of data collection (Burns & Grove, 2005; Cooper & Schindler, 2008; Marczyk, DeMatteo, & Festinger, 2005). A faculty TPACK Survey (Schmidt et al., 2010) (see Appendix D) collected self-reported data about the part-time online instructors’ seven areas of technological, pedagogical, and content knowledge; student grades within the courses taught by the same instructors were also a part of the study.
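The analytic approach described above can be illustrated with a brief sketch. The following code is a minimal, hypothetical example (not the study's actual analysis script) of how multiple linear regression and a Pearson r correlation could test whether self-reported TPACK domain scores predict mean student grades; the file name, column names, and data are assumptions for illustration only.

    # Minimal illustrative sketch; file and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm
    from scipy.stats import pearsonr

    # One row per participating instructor: seven TPACK subscale means and
    # the mean grade earned by that instructor's online learners.
    df = pd.read_csv("tpack_faculty_grades.csv")

    # Multiple linear regression (e.g., RQ1): do TK, PK, and CK predict grades?
    X = sm.add_constant(df[["TK", "PK", "CK"]])
    model = sm.OLS(df["student_grade"], X).fit()
    print(model.summary())  # coefficients, R-squared, and p-values

    # Pearson r (e.g., RQ4): is overall TPACK related to grades?
    r, p = pearsonr(df["TPACK"], df["student_grade"])
    print(f"Pearson r = {r:.3f}, p = {p:.3f}")

In such a sketch, a non-significant regression model and a near-zero Pearson r would correspond to retaining the null hypotheses, which is the pattern later reported in this study's results.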
Burns and Grove (2005) affirmed that collecting data one time constitutes a cross-sectional data collection method. For this study, data collection was cross-sectional: the TPACK Survey (Schmidt et al., 2010) was distributed one time, in the form of a questionnaire, via an online survey link (SurveyMonkey.com®) to volunteer part-time online instructors at the private for-profit virtual institution. The associated students’ grades for the courses were also cross-sectional, collected only one time, and provided by the student affairs department at the same institution. A non-experimental design was appropriate for this study because little information existed about the phenomenon under study. An experimental design did not apply to this study because experimental design entails manipulation or treatment of the variables within a study (Field, 2013). This study examined only observed frequencies, and data were collected without any treatment or manipulation of the predictor variables and criterion variable. Research Questions and Hypotheses The purpose of this quantitative non-experimental study was to examine the relationship between the self-reported predictor variables technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK) of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners’ grades. The following research questions guided the study: RQ1: What is the relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors?
H10: There is no statistically significant relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors. H1a: There is a statistically significant relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors. RQ2: What is the relationship between the online learners’ grades and selfreported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors? H20: There is no statistically significant relationship between online learners’ grades and self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors. H2a: There is a statistically significant relationship between online learners’ grades and self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors. RQ3: What is the relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors?
H30: There is no statistically significant relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. H3a: There is a statistically significant relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. RQ4: To what extent, if any, is self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors related to online learners’ grades? H40: Self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time instructors is not related to online learners’ grades. H4a: Self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time instructors is related to online learners’ grades. Theoretical Framework This study was based on the theoretical framework lens of Mishra and Koehler (2006). Mishra and Koehler (2006) specialized in a professional development and instructional experience theoretical framework that is referred to as the Technological Pedagogical Content Knowledge (TPCK) or (TPACK) framework. Shulman (1986, 1987) first developed and introduced the pedagogical content knowledge (PCK) framework; however, one decade later, technology had expanded, and became more advanced. Mishra and Koehler (2006) added technology (T) and extended the framework
to apply to current instructional design, especially in online instructional settings. Technological Pedagogical Content Knowledge was renamed TPACK to make it easier to reference, according to Mishra and Koehler (2006). Although TPACK emphasizes and is based on three areas (technological, pedagogical, and content knowledge), these domain areas intersect with one another to form seven domains or constructs. The TPACK framework highlights that online instructors need a range of technological, pedagogical, and content knowledge to improve student understanding and achievement within the courses instructed. The theory focuses on three major areas with a total of seven overlapping constructs: technology (T), pedagogy (P), and content knowledge (CK). According to this framework, the seven overlapping domains of technology, pedagogy, and content knowledge promote instructional knowledge and a better understanding of, and purpose for, effective learning within the courses instructed (Hofer & Grandgenett, 2012; Mishra et al., 2009). TPACK focuses largely on instructors’ technological skill development, the environment of the instructor, and experience in the actual use of technology. Mishra and Koehler’s (2006) TPACK model includes the following seven domain areas and the connections among the three main areas (see Figure 1):
Figure 1: Technological Pedagogical Content Knowledge (TPACK) Note: Reproduced by permission of the publisher, Koehler, M. & Mishra, P. © 2012 by tpack.org. Technological Knowledge (TK) refers to common technologies such as digital books, the Internet and digital video, learning management systems (LMS), software, and methods of presenting information, as well as newly developed technologies (Mishra & Koehler, 2006). Technological Content Knowledge (TCK) is the use of technological knowledge to enrich content for effective instruction (Mishra & Koehler, 2006). Technological Pedagogical Knowledge (TPK) is the use of technological knowledge to enrich pedagogy for effective instruction (Mishra & Koehler, 2006).
Content Knowledge (CK) refers to the subject matter being taught and the structure of that knowledge, or what instructors know about the subject they instruct (Mishra & Koehler, 2006). Pedagogical Knowledge (PK) includes the process and practice or methods of instruction, such as purpose, standards, methods to instruct, and strategies for student learning to occur (Shulman, 1986). Pedagogical Content Knowledge (PCK) is a creative blend of pedagogy and content knowledge used to instruct effectively (Shulman, 1986). Technological Pedagogical Content Knowledge (TPACK/TPCK) is the collective knowledge of how to incorporate content knowledge with applicable pedagogical methods, using developing technology, so that learners are able to learn and grasp subject matter (Mishra & Koehler, 2006; Voogt, Fisser, Pareja, Tondeur, & van Braak, 2013). The combined knowledge of technology, content, and pedagogy involves ways to succeed in technology-based learning (Mishra & Koehler, 2006; Voogt et al., 2013). The TPACK model has been utilized, researched, and applied in a myriad of current studies involving the assessment of all types of faculty and of teachers within preservice training assessments (Ball & Cohen, 1999; Greenberg, 2010; Gregory & Salmon, 2013; Jaipal & Figg, 2010; Kereluik, Mishra, Fahnoe, & Terry, 2013; Moore, 2013; Özgün-Koca, Meagher, & Edwards, 2010; Özmantar, Akkoç, Bingölbali, Demir, & Ergene, 2010; Voogt et al., 2013; Ward, 2010). The framework’s precursor was first developed to prepare instructors to become effective and competent before the advent of today’s multifaceted technologies (Shulman, 1986). The current TPACK is an updated version of
PCK, as it addresses the knowledge that helps instructors expand their use of technology in ways that support pedagogical strategies. TPACK is a versatile framework that can be especially useful for online, technology-driven courses in education (Hofer & Grandgenett, 2012; Mishra et al., 2009). Definitions The following definitions of terms help to ensure consistency and support throughout the study. Online faculty training: Faculty training refers to the instructional preparation an instructor receives at the institution prior to instructing online courses (Puzziferro, 2005). Online instructional experience: Online instructional experience refers to the experience an instructor possesses in instructing online courses, recognized by successful courses taught or experience with the Learning Management System (Seaman, 2009). Online learner grades: Online learner grades or student grades are points or scores earned, or the letter grade achieved, for a course completed by an online learner (Allen & Seaman, 2011). Part-time faculty: Part-time faculty are temporary instructors who are compensated for filling part-time positions. Part-time faculty are also at-will employees who are hired either per class or per hour (less than 30 hours a week) and do not instruct full time (Lieberwitz, 2007). Professional development: Continuous, organized training or applicable activities to improve or change beliefs, inform, supplement, and offer reinforcement of current skills and knowledge to improve student performance (NEA, 2011).
Retention: Retention refers to the number of students who stay or remain enrolled in one term as a percentage of the number of students who started in the previous term; that is, maintaining student enrollment through an educational program until graduation (Heyman, 2010; NCES, 2011). Assumptions A quantitative research method was employed to minimize qualitative factors of human biases and judgments that can be prone to error (Pagano, 2010; Vogt, 2007). One assumption in this study was the honesty of the volunteer part-time online instructors who responded to the survey questionnaire regarding their self-reported responses in the seven areas of technological, pedagogical, and content knowledge. Self-reported responses are subject to how participants perceive, measure, and rate themselves according to their own judgment, which was assumed to be accurate. A second assumption was that student grades are better when instructors are proficient in the virtual learning setting with regard to the seven areas of technological, pedagogical, and content knowledge. Another assumption was that the student scores collected from the student affairs division were accurate, represented the students’ achieved scores, and did not contain errors. The final assumption was that the TPACK Survey (Schmidt et al., 2010) instrument for faculty was an accurate measurement of the volunteer participants’ self-reported predictor variables in the seven areas of technological, pedagogical, and content knowledge. Scope, Limitations, and Delimitations The focus of this study was only on part-time online faculty, as they currently make up the largest group of faculty members at for-profit virtual institutional settings in the
United States. A quantitative research method and the TPACK Survey (Schmidt et al., 2010) were used to examine the predictive relationship between the seven self-reported domains of technological, pedagogical, and content knowledge of part-time online instructors working at a private for-profit virtual institution and online learners’ grades. Scope. The scope of this study included part-time online faculty and online learners’ grades at a private for-profit virtual institution, which involved analyzing the seven self-reported domain areas of technological, pedagogical, and content knowledge of part-time online instructors and the scores earned by online learners using multiple linear regression (MLR) and the Pearson r correlation coefficient. The TPACK Survey (Schmidt et al., 2010) was administered only to part-time online faculty members and was designed to measure the self-reported technological, pedagogical, and content knowledge domains. Grades of the online learners who attended courses taught by the same part-time instructors were disclosed by the institution’s student affairs department. This study was designed to collect, examine, and analyze data for part-time online instructors and students. The study aimed to identify whether the seven domain areas of technological, pedagogical, and content knowledge of part-time online instructors at a virtual for-profit institution had a predictive relationship with online student grades or achievement within the course. Limitations. Limitations are potential restrictions that affect the generalizability and usefulness of findings and are considered weaknesses of a study (Creswell, 2009; Simon, 2011). The self-reported measures of all seven domains of technological, pedagogical, and content knowledge of part-time online instructors may not have applied to all instructors at virtual for-profit institutions in the United States.
The TPACK Survey (Schmidt et al., 2010) contained self-reported data provided by volunteer participants, which was also a limitation. The instrument and methods used in data collection did not explore all the characteristics of all online instructors and online learners, as many characteristics were not part of the scope of the study. Hence, the findings were not generalizable to all part-time online instructors who instruct at other locations. Volunteers differ from non-volunteers in that volunteers may be more opinionated and motivated than other participants (Vogt, 2005). Voluntary participation can lead to bias and invalid conclusions, such as inaccurate generalizations from the participants to the larger population (Vogt, 2005). In addition, self-reported responses are subject to bias factors, such as common method bias or social-desirability bias, also referred to as SDB (Podsakoff et al., 2003). Negative affectivity (NA) and positive affectivity (PA) are personality-related bias factors that can occur when self-reported surveys are utilized. For example, the self-reported information of volunteer participants with high negative affectivity is more likely to be biased in a negative fashion, reflecting their negative perceptions of themselves or of the questions. Negative affectivity increases associations among variables, especially self-reported variables that reflect self-perceptions that are emotional, personal, or experience related (Spector, 2006). Delimitations. Delimitations for this study included the sample size and the selection of specific volunteer participants in order to make the study more practical by limiting the amount of data to analyze. However, this study represented only part-time online faculty at one private for-profit virtual institution, and not any institution at the
community college level or public sector. The results were not generalizable outside the population from which the sample was drawn because of the uniqueness of the sample available for this study. Full-time instructors at the private for-profit virtual institution were excluded from this study; only volunteer part-time online instructors were invited, and those who volunteered were surveyed. The grades of the students within the classes instructed by the same instructors were also analyzed. Full-time or tenured online instructors were not the focus of the study. Data collected did not apply to all instructors at other private for-profit virtual institutions, public institutions, or non-profit institutions. The findings from this study did not imply cause and effect; emphasis was placed on the relationship between the self-reported predictor variables and the criterion variable. Even if the data indicated a prediction between variables, it did not prove that they caused one another (Marczyk et al., 2005; Miles & Shevlin, 2001). Summary The purpose of this quantitative non-experimental study was to examine the relationship between the self-reported predictor variables technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK) of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners’ grades. The problem is that there is evidence of low online postsecondary learner grades and retention of online postsecondary learners in an era of increased use of part-time online faculty. The seven domain areas of technological, pedagogical, and
content knowledge of part-time online faculty may contribute to the retention and success of online learners. The amount of self-reported technological, pedagogical, and content knowledge that part-time online faculty hold at the time they instruct an online course may be related to online student success rates or grades. Chapter 2 reviews the literature relevant to the self-reported predictor variables of part-time online faculty, the seven areas of technological, pedagogical, and content knowledge, and the criterion or outcome variable, student grades.
Chapter 2 Literature Review Chapter 2 contains a discussion of the research findings with regard to online faculty, training, professional development, the growth in part-time online faculty, and the seven areas of the TPACK. This chapter presents the historical and theoretical framework that underpins the study. Articles and studies reviewed in this chapter contribute to an understanding of the phenomenon of instructional preparation and experience, the seven areas of technological, pedagogical, and content knowledge of part-time online faculty, and the appropriateness of the self-reported predictor and criterion variables used in this study. The focus of the literature review is to gain insight into research conducted in the areas that contribute to the effectiveness and proficiency of part-time online faculty. Title Searches, Articles, Research Documents, and Journals The resources consulted for this literature review were the University of Phoenix Library, the EBSCOhost database, the ProQuest database, LexisNexis, JSTOR, dissertations and abstracts, and Sage Knowledge. The titles used for collecting articles and other documents for this dissertation included keywords such as online, online education, online learning, student success, technological knowledge, content knowledge, pedagogical knowledge, online training, distance education, instructional pedagogy, higher education, college and university faculty, faculty evaluation, faculty professional development, effective learning, lifelong learning, and effective teaching. While conducting searches covering the topics of online instructional training, areas of TPACK, online instructional experience, and student retention and success, a
myriad of data and research documents were found that discussed faculty preparation, professional development, online training, student retention, student satisfaction with courses, and online learning theory. Another theme found in the literature was faculty development and evaluation. Online Education Online learning is defined as an accessible form of interactive distance learning, often referred to as e-learning, virtual learning, or technology-enhanced learning (TEL) (Moore, Dickson-Deane, & Galyen, 2011). A major advantage of choosing to learn online is that geographical limitations are reduced (Kolowich, 2010); because of its versatility and flexibility, learning online enables many learners with family obligations and work schedules to learn wherever they have computer access and at their preferred time. Online learners who chose to attend online courses at traditional campuses had backgrounds and demographics similar to those of learners at traditional campuses who were not attending online courses. Study findings revealed that these traditional online learners tended to be older, more were female (63%) than male, and half of the learners had at least one child and resided within 40 miles of their traditional college (Doyle, 2009). There are many reasons that learners choose online learning. Research has indicated that students and instructors favor online courses for two main reasons: flexibility and convenience (Allen & Seaman, 2009, 2013; Kolowich, 2010; McKerlich, Riis, Anderson, & Eastman, 2011; Pastore & Carr-Chellman, 2009; Rowh, 2007; Sher, 2009). Due to the growth of online learning, enrollment has increased; hence, the involvement of student and academic affairs is vital for students to
complete their online programs and excel in their educational goals (Allen & Seaman, 2009, 2013; Beck, 2010; Kolowich, 2010). Anderson (2008) approached online learning as a set of responses and actions to address the strategic plan of a university. Understanding learning and success requires examining formative and summative assessment and learning strategies in online courses, because assessment practices lead to different types of learning and support learning in all modalities (Arend, 2007). Hence, the seven constructs of technological, pedagogical, and content knowledge (TPACK) of instructors also require assessment to improve and refine instructional quality, which can inform the nature and design of useful TPACK practices that aid instructors in gaining proficiency in their roles. Brief history. When online education was first introduced in the 1980s, many educational institutions were skeptical about whether online learning was a valid mode of education and whether it was effective. Traditional face-to-face and web-based or video learning modes have also been enhanced through web-based learning to include book resources and supplemental materials online, such as e-books, reading resources, and course materials. Later, in the 1990s, the computer era and online learning environments began to emerge (Palloff & Pratt, 2007). Distance education was once a home-study or video-based learning method; it has now expanded into web-based learning using platforms, LMSs (Learning Management Systems), and other organized interfaces, with continuous innovation taking place.
Currently, online instruction and learning have become integrated with the use of the World Wide Web, which affects the different ways that students can learn (Keengwe & Kidd, 2010). The virtual learning landscape in higher education continues to advance, and technological advancement has enabled electronic libraries, communication, learning resources, meetings, and the course setting to take place in a virtual learning atmosphere. The online learning community has no limitations or boundaries and continues to evolve and improve in presentation (Harris & Parrish, 2006). Recent Expansion Online programs have grown and become more common in order to meet the needs of students so that learners can achieve educational goals. The need for flexibility, convenience, and affordability is a common reason for online program growth in the past decade, giving students access to a degree or certificate that might not be possible for them in a traditional classroom environment (Allen & Seaman, 2009). Allen and Seaman (2009) reported that one in four students was enrolled in an online course and that the number of online learners in the fall of 2008 stood at 4.6 million students, which was a 17% increase from the previous year. Their data were derived from the Online Learning Consortium (OLC), which surveyed 2,500 virtual postsecondary institutions. A more recent study by Allen and Seaman (2013) showed that online course enrollment was 16,611,710 in 2002, or 9.6% of the total students enrolled, while in 2011 course enrollment was 20,994,113, a growth of 32% of the total students enrolled in online courses (Allen & Seaman, 2013).
Indeed, online education has grown and expanded. Online educational programs and courses have spread and grown rapidly; however, faculty members in higher education initially did not accept the online learning modality and were skeptical about online education (Weimer, 1990). Now there is wider acceptance of online education; in 2012, one-third of chief academic officers believed online learning is an acceptable method of instruction and learning (Allen & Seaman, 2013; Lacey, 2013). Another major study that took place between 2008 and 2009 was the Sloan National Commission on Online Learning Benchmarking Study. A total of 231 interviews took place involving institutional representatives (presidents and chancellors, chief academic officers, online learning administrators, faculty leaders, professors, and online learners) across 45 separate virtual institutions. The study included close to one million campus-based college students and a minimum of 100,000 online learners (Seaman, 2009). This study was designed to illuminate how robust online learning initiatives are supported and created by key organizational strategies, processes, and procedures (Seaman, 2009). This qualitative study also emphasized the importance of the relationship between part-time online faculty and online learners, but it did not explore possible variables such as student grades and their relationship with online faculty or the areas of TPACK. In 2012, the Sloan Consortium completed a follow-up study, which reported the number of students taking at least one online course; the results exceeded 6.7 million, an increase of over 570,000 from the previous year, with 32% of students in higher education enrolled in at least one course online (Allen & Seaman, 2013). The Babson Survey Research Group (2013) conducted a survey titled Survey of Online Learning. The survey revealed that in 2013 over 6.7 million learners attended
online courses. These findings agree with the data collected in the 2012 OLC study. Currently, online education occurs worldwide and is integrated into public school systems. Online education has been recognized by the Higher Learning Commission (HLC) and major accrediting bodies as equal to traditional instruction (Higher Learning Commission, 2013). The United Nations Educational, Scientific, and Cultural Organization (UNESCO) World Conference on Higher Education (1998) addressed how the world changes, along with diversity, as a global unity of change, especially in higher education. The three special characteristics of the changes were that they are constant and uninterrupted, rapid and tending to accelerate, and affect every arena, including social life and activity (UNESCO, 1998). UNESCO’s (1998) message is still regarded as a guideline for higher education changes, particularly for distance education, because distance learning emerged as an answer to meet the changes in how students want to learn (Moore, 2013). The United States Distance Learning Association (USDLA) has continued to research online learning and has remained committed to it since 1986 (Rickard, 2010). However, to remain at the forefront of the technologically driven world, developmental revisions and changes must continue to occur that motivate positive behaviors in students and career-driven success in virtual learning environments (HLC, 2013). A main concern currently is competitiveness within the global economy, which requires the acquisition of skills critical to building students’ personal understanding and interpretation. These skills are critical thinking, complex problem solving, and communication or collaboration (Gradel & Edson, 2009, 2010). Colleges and universities are expanding their curricula to address the need to adopt and meet the requirements set forth, as
seen in the growth in online learning and programs, with corresponding increased emphasis on instructional training and experience. The Higher Education Act of 1965 regulated, restricted, and limited Title IV Student Financial Assistance in distance education programs for institutions to remain eligible to participate in Title IV programs (U.S. Department of Education, 2014). The newer Higher Education Opportunity Act of 2008 (HEOA) addressed changes that did not limit online learning but instead acknowledged the importance of, and access to, online learning, further validating this method of learning among those who had not favored or approved of it. Online learning has begun to be accepted and has emerged as equivalent to face-to-face learning because of lively course designs and curriculum that replicates what is found on traditional ground campuses, mirrored through the online course platform and enhanced with advanced technology. TPACK The TPACK conceptual framework is a foundation of effective instructional practices that focus on current learning that involves technology within courses (Mishra et al., 2009). Instructors deliver beneficial courses with a mixture of understanding related to technological concepts, how to deliver content using current technological tools, and constructive methods to instruct using instructional pedagogy (Mishra et al., 2009; Shulman, 1986). Another element of the TPACK framework covers the instructor’s self-reported experience with subject matter, or content knowledge of which instructional concepts are simple and which are more difficult to learn, to ensure learners understand and relate to the materials and concepts within any course (Mishra et al., 2009; Shulman, 1986).
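To make the seven TPACK domains more concrete, the sketch below shows how self-reported domain scores are commonly derived from Likert-type survey items as subscale means grouped by domain. The item names and response values are hypothetical and do not reproduce the actual items of the Schmidt et al. (2010) instrument.

    # Hypothetical Likert-type (1-5) responses, with items grouped by TPACK domain prefix.
    import pandas as pd

    responses = pd.DataFrame({
        "TK_1": [4, 5, 3], "TK_2": [4, 4, 3],   # technological knowledge items
        "PK_1": [5, 4, 4], "PK_2": [5, 5, 4],   # pedagogical knowledge items
        "CK_1": [4, 4, 5], "CK_2": [5, 4, 5],   # content knowledge items
        # PCK, TCK, TPK, and TPACK items would follow the same naming pattern.
    })

    domains = ["TK", "PK", "CK", "PCK", "TCK", "TPK", "TPACK"]

    # Each domain score is the mean of that domain's items for each instructor.
    scores = {
        d: responses.filter(regex=f"^{d}_").mean(axis=1)
        for d in domains
        if not responses.filter(regex=f"^{d}_").empty
    }
    print(pd.DataFrame(scores))

Scores computed in this way would serve as the self-reported predictor variables, with each instructor's mean student grade as the criterion variable.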
Mishra and Koehler (2006) stated that when student learning environments allow opportunities for instructors and students to study subject matter with the use of technology, more meaning is derived from the content. Understanding how to use effective instruction and evaluation within online instruction may not be a developed skill for some part-time online instructors, especially if the part-time online instructor is hired to fill in for just one course without knowledge of the objectives for the course. Some part-time instructors may not know much about course or program dynamics. According to Mishra and Koehler (2006), instructors need to tap into students' prior knowledge, have knowledge of epistemological theories, aid in the development of newer theories, and reinforce older knowledge and previous understandings. Online instructors need to know when and why to use certain technological activities in online courses, and have the TPACK knowledge of how to do so (Kereluik et al., 2013). Jaipal and Figg (2010) examined TPACK within courses that utilized technology with pre-service teachers. Jaipal and Figg (2010) created a seven-week course of study in which TPACK was the focus. The tasks and activities in the seven-week course of study included technology-enhanced activities for context and purpose, pedagogical dialogue, and technological knowledge (TK) development using demonstrations of how to apply skills and knowledge in technology. All elements of TPACK were administered in the training module. Results of this study indicated that planning and applying technology within courses was the most important task for online teachers across various subjects. Planning and applying technology within the course requires technological knowledge. Student grades and student outcomes were not part of the study.
The TPACK framework was intended to measure instructional knowledge, mainly technological, pedagogical, and content knowledge; from these three main constructs, a total of seven overlapping domain areas emerge. TPACK currently focuses on explaining the use of digital technology to enhance online instruction; however, the framework does not address how instructors acquire or develop this knowledge. The instructor's self-reported knowledge and experience in online instruction includes proficiency with computer systems, hardware, and the understanding of standard software tools, such as web browsers, e-mail, and word processors (Mishra & Koehler, 2006). According to Mishra and Koehler (2006), technological knowledge exceeds the basic knowledge of completing simple tasks such as installing and upgrading hardware or software, managing data, and keeping up with basic technological advancements. Online instructors need to master many other tasks and be knowledgeable of advanced technology, in addition to having basic computer experience. Shulman (1986) and Mishra and Koehler (2006) stated that instructors must have knowledge and understanding of the subject matter that they instruct, especially its theories and concepts, and be able to connect thoughts and ideas about the subject matter, which is content knowledge. Mishra and Koehler (2006) also stated that it is necessary for instructors to have unique knowledge about teaching methodology that involves the processes, practices, and methods embedded in educational reasoning, standards, and goals, which is pedagogical knowledge.
Instructional Quality
Instructional quality and instructional material are measured in many ways and on a larger scale than just course-by-course within the institution. Accrediting bodies,
organizations such as the Online Learning Consortium (OLC), previously known as the Sloan Consortium, and the Quality Matters Higher Education Rubric are common means of measuring instructional quality (Council on Higher Education Accreditation, 2002). The OLC has been committed to assisting in creating, improving, and redeveloping online learning communities (OLC, 2014). The OLC's online instructional quality benchmark is titled the Quality Scorecard. The modules of the Quality Scorecard are:
Institutional Support
Technology Support
Course Development / Instructional Design
Course Structure
Teaching & Learning
Social and Student Engagement
Faculty Support
Student Support
Evaluations & Assessment
Polly, Mims, Shepherd, and Inan (2010) highlighted that faculty support is essential to instructors, and support activities should include examples of methods so that an instructor can recognize the relationship between technology and content and how technology is utilized for the content areas. In addition, the relationship between technology and pedagogy can be guided by faculty support (Mishra & Koehler, 2006). Faculty support reinforces the understanding of technology, including content and pedagogy, and especially pedagogies that promote learning. Instructors should be
knowledgeable about the intersections and overlapping areas of TPACK (technology, pedagogy, and content knowledge) (Mishra & Koehler, 2006; Özgün-Koca et al., 2010). In addition, networking opportunities are methods by which faculty support is achievable and can be an advantage at the institution (Lefebvre, 2008). The Quality Matters Higher Education Rubric (QM) contains a set of eight general standards for online course design that support the learning outcomes of instructors and learners and ensure course components are linked to learning objectives. The standards are:
Course Overview and Introduction
Learning Objectives (Competencies)
Assessment and Measurement
Instructional Materials
Course Activities and Learner Interaction
Course Technology
Learner Support
Accessibility and Usability
The QM does not account for instructional training. However, the QM rubric does account for the curriculum design and program accountability of online course development, which applies to course development, instructional design, content structure, and course activities implemented in the course (Shattuck, Zimmerman, & Adair, 2014). Piña and Bohn (2014) conducted a study that focused on the creation of quality rubrics and assessment of online instructional quality and course design that included
course observations and student surveys. According to Piña and Bohn (2014), results from typical surveys conducted to yield evidence about the actions and behaviors of part-time online instructors cannot be evaluated fairly, since not all instructors are a part of the course design process. The knowledge part-time instructors possess concerning course development and design is not always at an expert level; hence the newer study and rubrics were created to evaluate the actions and behaviors of online instructors. Research findings indicated that the student-centered approach is utilized in online learning and demonstrated in curriculum design (Allen & Seaman, 2009; Beck, 2010; Gardner, 2009; Pastore & Carr-Chellman, 2009; Taylor & Holley, 2009). Online presence, discussions, use of examples, a welcoming attitude, supportive learning, and flexibility are some of the needs of learners in online environments (Cole & Kritzer, 2009). Both curriculum and design promote activities for instructor involvement in the process of learning and reinforce online presence, discussion, and support of online learning for a student-centered approach. Instructional quality is promoted and conducted at the organizational level and not just within the course (Allen & Seaman, 2009; Friedman, 2011; Mishra & Koehler, 2009; Shulman, 1986; Taylor & Holley, 2009).
Pedagogy and Theory
Online learning theory and pedagogy explain why and how learning happens through curiosity and questioning; the principles and theories involved are proposed to assist in understanding how online learners learn (Harasim, 2012). Pedagogies and theories apply to both instructors and students. When learning
theories surfaced in the 20th century, behaviorist, cognitive, and constructivist theories were the three major theoretical frameworks that shaped the study of learning (Harasim, 2012). Now, in the 21st century, earlier constructivist models, namely computer-mediated communication (CMC) theory, have evolved into online collaborative learning (OCL) by Harasim (2012) and the community of inquiry (COI) by Garrison, Anderson, and Archer (1999) (Bates, 2014). The COI documented three components that focus on online learning: teaching presence, cognitive presence, and social presence (Garrison & Arbaugh, 2007). The first element, teaching presence, can be found within the course design, support, and direction of cognitive and social processes that promote personalized, meaningful, and positive educational learning experiences. The second element, cognitive presence, refers to how learners build and make meaning by practicing reflection and dialogue (Garrison et al., 1999, 2001; Xin, 2012). Lastly, social presence refers to the formation of community, relationship, and cooperation within online courses, as actual people within the course exchange ideas. Swan and Ice (2010) stated that educators worldwide have utilized and applied the COI framework successfully because of its meaningful and effective analysis of online learning, and because it applies to instruction and instructional training as much as it applies to students and learning. An understanding of learning and comprehension principles, as noted by Ambrose, Bridges, DiPietro, Lovett, and Norman (2010), aids instructors to:
See why certain teaching approaches are or are not supporting students' learning.
Generate or refine teaching approaches and strategies that more effectively foster student learning in specific contexts.
Transfer and apply these principles to new courses.
An instructor would need to learn how to motivate and engage learners and invoke prior knowledge. Social, cognitive, and cultural differences in learners involve different learning motivators. Learning and comprehension principles require a deeper understanding from the instructor regarding the backgrounds of learners (Ambrose et al., 2010). Kolb (1984) described learning pedagogy as a process of knowledge that occurs through an interchange of experience. Kolb (1981, 1984) regarded learning as a four-stage cycle consisting of concrete experience (CE), a feeling dimension; reflective observation (RO), a watching dimension; abstract conceptualization (AC), a thinking dimension; and active experimentation (AE), a doing dimension. Surveys and interviews that reflect on the four learning paradigms of the Kolb model would show how learners perceive instruction and comprehend according to certain activities offered in an online platform, and whether the facilitator reinforced certain learning strategies throughout the course. The uniqueness of learners and their learning objectives requires careful consideration of several dimensions. One example is the following seven dimensions of effective practice for faculty introduced by Chickering and Gamson (1987):
encourage student-faculty contact
encourage cooperation among students
encourage active learning
provide prompt feedback to students
emphasize time on task
communicate high expectations
respect diverse talents and ways of learning
Utilizing a learning management system (LMS) can organize the instructional content and experiences of instructors, but the values, attitudes, and beliefs, which are also a part of the instructional content, can be difficult to change. The mindset of online instructional development and continued improvement is necessary in all methods of instruction, including online instructional approaches (Moore, 2013). Technological knowledge of the LMS is vital in online learning environments (Mishra & Koehler, 2006). Schubert-Irastorza and Fabry (2011) conducted a study about the satisfaction of online learners within their courses; the implications were that in order to improve instruction, instructors should partake in certain tasks, namely:
1. Organize instruction and provide clear expectations.
2. Provide timely and meaningful feedback.
3. Be actively present.
Online learners and instructors in online learning communicate through the delivery of content. The study of Schubert-Irastorza and Fabry (2011) regarded how students felt about their learning experiences as a method of assessing how instructors contributed to their courses online. Sher (2009) conducted a study which showed that the interaction of instructors with students and the interaction of students with students in online courses contributed to
student satisfaction and learning. Effectiveness of instructors is centered on the interaction that can increase student knowledge (Andreson, 2009; Bradley, 2009; Fabry & Elder, 2013) in all settings. Building student knowledge requires that the material be of value to the student. Educational quality has two approaches, conventional and industrial (Hirumi, 2005). The goals of these two approaches or movements are similar in that they both assure that online learning is effective and efficient for students and users. The difference between the two approaches is that conventional quality focuses on the quality of online learning while industrial quality focuses on practicality, proficiency, and expertise (Hirumi, 2005). According to Harasim (2012), online instructional pedagogy and theory is divided into three areas to better understand online instructional initiatives:
Designing courses with effectiveness in mind for the online environment of students.
Evaluating to measure how students are learning.
Referencing past research on how the history of successes and failures can mold newer changes.
Goodyear (2005) developed a framework that addressed understanding different pedagogical approaches in online learning or educational settings within an organizational context. The Goodyear (2005) model discussed how four types of pedagogy (philosophy, high-level pedagogy, pedagogical strategy, and pedagogical tactics) support educational settings. In addition, the educational setting requires student activities that involve a combination of tasks and a learning environment that results in positive learning outcomes. Online learning theory and pedagogy are rooted in
observations and events surrounding the new technologies within courses and are determined through experimental practices such as trial-and-error and the adaptation of conventional didactic exercises to online environments, while combining formal (primary, secondary, or tertiary) and non-formal (training, certification, professional development) learning (Harasim, 2012).
Virtual Faculty Profile
There is plenty of relevant literature pertaining to part-time faculty in higher education, but not enough to compile a detailed profile of virtual part-time faculty at for-profit virtual colleges and universities. Literature that concerns part-time online faculty does not effectively communicate the experiences, behaviors, motivators, or views of the profession in detail. The American Federation of Teachers (AFT) collected data from 500 part-time instructors at two-year and four-year institutions of higher education. The findings on the ages of part-time faculty were that 33% of part-time faculty were between the ages of 18 and 44, 31% were between 45 and 54, and 36% were age 55 and over (Hart Research Associates, 2010). According to Levin and Hernandez (2014), current literature regarding part-time faculty revealed at least four notable themes of concern:
The growth in value of part-time faculty.
Illustrative evidence regarding characteristics, employment, duties, job satisfaction, organizational position, and workload.
The negative effects of part-time faculty in higher educational institutions on retention rates.
Diversity and types of part-time instructors.
Accrediting bodies have pressured higher education institutions into hiring full-time faculty. Since the rapid growth of part-time online faculty in online educational settings, 80-90% of virtual instructors are part-time instructors (NCES, 2014). Over the past two decades, part-time online faculty have been a concern and are underrepresented in recent studies, especially regarding the impact of part-time online faculty on retention rates and the success of online learners (McLean, 2006). West (2010) stated that diverse experience and expertise are qualities that part-time faculty bring to the higher education setting and that make part-time faculty valuable employees. Lefebvre (2008) summarized several studies and demographics of part-time online faculty working at brick-and-mortar institutions and virtual universities. The summary indicated that part-time online faculty are more educated than full-time faculty: between 70% and 90% of part-time online faculty members held terminal degrees, while only 26% of full-time faculty held terminal degrees. Part-time online faculty are also older, retired, and have extensive work experience (Lefebvre, 2008).
Responsibilities and Attitudes
The responsibilities and attitudes of part-time online faculty in higher education are contingent on the employment terms, conditions, and types of governance of the campuses at which they are employed. Organizational structure, campus mission, and the values of the institutional culture are the foundation of the role and attitudes of part-time online faculty. Issues that part-time online faculty face are rooted within the online organizational development structure, milieu, and job performance expectations (Levin & Hernandez, 2014; LoBasso, 2013; Palloff & Pratt, 2003). According to current studies, part-time faculty are not content with the salary, benefits, job security, health insurance, faculty campus benefits, seniority, increased
opportunities for more online courses, or appointed instructional employment contracts (Dolan, 2011; Kramer, Gloeckner, & Jacoby, 2014; Langen, 2011; U.S. Department of Education, 2010). In addition, office equipment, office space in which to hold office hours, and a work phone are benefits that are not supplied to part-time faculty (Hart Research Associates, 2010; Langen, 2011). Part-time online instructors, in contrast to part-time traditional campus instructors, often feel secluded or left out of organizational decision-making processes and changes, and do not know about things that occur within the campus (Caruth & Caruth, 2013; Kramer et al., 2014; Langen, 2011; Levin & Hernandez, 2014; LoBasso, 2013). Instructors at for-profit institutions claimed they are not supported enough by the online department after hire and experience minimal contact with the actual staff or department chairs (Langen, 2011; Sixl-Daniell, Williams & Wong, 2006). Seclusion from organizational decision making may also result in part-time faculty not being able to understand or know how to acquire the newer skills required to instruct, especially if courses are redesigned, which can hinder the instructor's experience and skills in the courses that they instruct. According to LaPointe, Terosky, and Heasley (2015), community and collegiality are missing assets for part-time online faculty. Part-time online faculty members do not interact as much as full-time faculty with other faculty members or the online campus community, which can lead to a feeling of isolation (Caruth & Caruth, 2013; Dolan, 2011; Fabry & Elder, 2013). Some online instructors, especially part-time online instructors, may not be involved in campus or organizational activities, may not have knowledge of what changes are under way, and may lack interaction with the
administration (Langen, 2011; Levin & Hernandez, 2014). Instructional communication, reflection, and even collaboration with co-workers are limited and minimal (Beck, 2010; Dolan, 2011; Duncan & Barnett, 2009; Fabry & Elder, 2013). These factors separate part-time faculty from full-time faculty and the organization. Eagan, Jaeger, and Grantham (2015), along with the Higher Education Research Institute (HERI), conducted a study of 4,000 part-time faculty who were employed at over 300 colleges and universities. Based on surveys and follow-up interviews, 73% of part-time instructors were not satisfied working as part-time instructors (Eagan et al., 2015). The part-time faculty members of this study were not satisfied because teaching online requires a large amount of labor, has little monetary compensation, and has insufficient opportunities to interact with students (Bradley, 2009; Clift, 2009; Cook-Wallace, 2012; Jaschik & Lederman, 2013). Faculty mentioned that the feeling of exhaustion originated from poorly developed instructional material, where the faculty must know how to correct and modify what is presented in the curriculum so that it applies to the key learning points discussed (Hart Research Associates, 2010; Hirumi, 2008). The feeling of burnout can also be caused by the time and effort it takes to revise course material when an instructor does not have knowledge of the prerequisites, key learning goals, and program of study course sequence.
Curriculum and Design
Online educators struggle in presenting and supplementing the instructional design of a course (Desai, Hart, & Richards, 2008). Some instructors tend to overload students because of their lack of experience in online instruction (Dykman & Davis, 2008). According to Brigance (2011), online courses and their design are in a constant
process of change, and curricular modifications are essential to address the needs of careers, constituents, and societal transformations. Castle and McGuire (2010) discussed the vital essentials of competent instructional design as a mixture of advanced technology and competent instructors who can connect with learners. The construction of content requires constant updates because of the changes occurring with online learners and their expectations (NACOL, 2009). Such expectations relate to the use of social networking and the visually and auditorily enhanced technological advancements occurring within the World Wide Web. Newer changes in online instruction are composed of a set of best practices and virtual classroom instructional strategies that are used when instructing online courses (Allen & Seaman, 2009; Anderson, 2008; Arend, 2007; Artino, 2008; Cassidy, 2011). Curriculum and design of online courses follow the same developmental curriculum requirements as ground campuses because the courses must follow the same criteria and quality standards of the institution. Expert knowledge in pedagogy and technology is required for effective course content and design to apply to the curriculum accordingly (Caplan & Graham, 2004). When courses are offered online, the courses still require the same amount of effort from faculty and students, if not more, than traditional ground campus courses. Courses contain lectures, syllabi, weekly discussions, assignments, and methods of assessment so that learners can experience a full online course (Ko & Rosen, 2001; Palloff & Pratt, 2003). Student mastery of learning outcomes in online courses also relies heavily on curriculum and design within online courses, or how the course is set up for learning to take place.
Many instructors that are involved in content and curriculum development are also involved in the instructional design process (Shea, Sau Li, & Pickett, 2006). Syllabus, content and grading must be clear within the courses, which is also part of the course development process. Part-time faculty may not have the authority to change the syllabus or content where they are employed. Some faculty may be able to supplement the course. Supplementing courses with outside resources can make the course more interesting and allows sharing of current events and real life examples within the course as links. A qualitative case study conducted by Abdellatief, Abu Bakar, Marzanah, and Abdullah (2011) indicated that course development and curriculum promotes the success of both instruction and learning online, particularly within high-quality educational systems. From Abdellatief et al. (2011) we learned that instructional material and preparation is necessary to meet the needs of diverse learners and that faculty steer the direction of the curriculum to benefit learners. An alignment of instructional materials with the content presented organizes the learning where every core element within the course is in harmony within course design. Newton and Hagemeier (2011) conducted a qualitative focus group study on curriculum and design. Their study indicated curriculum development activities require skills that are attained by experience and instructional preparation of instructors to promote positive learning in online courses. The required skills varied according to the personalities, and roles of instructor, without focus to instructional preparation and experience.
Özmantar et al. (2010) conducted a qualitative study of the online course website for mathematics students to analyze the structure of online content and 53 tasks within the online courses. TPACK components were incorporated in the assessment and within the paradigms of assessing the course and conducting the training (TK, PK, and CK). The conclusion of the study indicated that the instructors felt they became instructors instead of learners after the training. When course material and course design are poorly developed or missing vital factors for instruction to be effective, an online instructor will need to know how to fill in the missing gaps, and to make the content more comprehensible, which can be time consuming and exhausting (Moore, 2013). Instructors can work with connecting to student in online courses as the connectedness of instructors can change attitudes, increase test performance, allow a chance for learning, increase retention rates, and form learning populations (Schullo, Hilbelink, Venable, & Barron, 2007). However, learning experience is not reinforced by the content material of a course alone (Castle & McGuire, 2010). The online instructor must reinforce content and create methods of revision and modification of material quite often so that the learning material is more coherent, if necessary, through practice and experience (Ball & Cohen, 1999; Castle & McGuire, 2010; Cross & Goldenberg, 2009). Human Resources and Recruitment As higher education continues to progress with an increase in virtual faculty and growth of online learners, fostering educational excellence in staff is still a priority (Kezar & Lester, 2009). Hiring part-time online faculty is a rapidly growing practice at institutions of higher education. Many higher education institutions have just recently started to recruit part-time online faculty because of diversification and addition of newer
online programs and online courses (Hawkins, Barbour, & Graham, 2012). The department of academic affairs is committed to quality instruction and retention of staff by working closely with the human resources department in recruiting, hiring, training, promoting ongoing professional development, evaluating, retaining, and ensuring faculty members are performing efficiently (Birnbaum, 1988; McQuiggan, 2012). Human resource management in higher education comprises numerous constructs in the processes of recruiting part-time faculty to fill virtual positions. The cost of recruiting and training employees requires a successful hiring strategy, which causes human resources managers to accept only trained and experienced employees as a preferred qualification (Shammot, 2014). However, there is little statistical data on faculty hiring practices at virtual universities (Lefebvre, 2008). The human resources departments at for-profit virtual campuses communicate with department chairs so that positions are addressed and the need for a position to be filled is expressed when appropriate. Both of these departments require closer analysis, and each division operates under different rules and requirements. For example, with regard to the budgeting and funding available for hiring part-time instructors, studies have shown it is less costly to hire part-time instructors than to hire full-time instructors (Dolan, 2011; Monks, 2007).
Faculty training. Faculty training refers to how an online instructor is prepared and given prompts and guidelines to instruct in an online learning platform or LMS. Instructional training is usually conducted by the distance-learning department within the institution prior to employment to ensure that certain tasks are mastered in order for an
instructor to be able to maneuver within the online course and content efficiently (Mayadas et al., 2009). Online faculty are trained using a corporate or general method of instructional training applied to all online faculty (Allen & Seaman, 2013; Bedford, 2009; Mayadas et al., 2009). The method is similar to an orientation, where the instructor is shown how the LMS works, where assignments are located, and how to grade or participate in discussion. Some virtual institutions only provide instructional training that informs the instructor how to maneuver within the course module, without specific instructional strategies. Online training and preparation of online faculty usually lasts a few weeks and shows the instructor what the platform does and how to e-mail or communicate from the platform, grade papers, conduct discussions, and manage an online classroom. Instructional training does not teach strategies in course instruction and methods to improve, but rather the overall corporate management of the course (Allen & Seaman, 2009, 2013; Aragon & Johnson, 2008; Bedford, 2009; Cross & Goldenberg, 2009; Mayadas et al., 2009). The TPACK theoretical framework supports macro and micro levels of training that can apply to instructional practices (Porras-Hernandez & Salinas-Amescua, 2013). The amount of formal training in online instruction also differs for part-time online faculty and relies on the organizational departments that employ and train them. The amount of experience can also differ, as there are instructors who do not always instruct online and do not instruct often. Leadership in higher education is the foundational planning that allows training and changes to occur within the institution, changes that exist because of particular activities in training faculty
(Bedford, 2009; Bush, 1984). Adapting the skill to using unlimited learning and teaching tools is overlooked when preparation or training for instruction is introduced to online instructors as the training for instructors usually involves specific training using the course platform and not the actual way to present course content (Beck, 2010; Beebe, Vonderwell, & Boboc, 2010; Bowden, 2009). The amount of interaction is not the same in all online colleges and universities, as there are many methods for online or distance learning which require different ways and methods of interacting (Beck, 2010; Gradel & Edson, 2009). Training that involves the actual LMS is more applicable to daily tasks instructors practice within the online course (Lonn & Teasley, 2009). Fabry and Schubert (2009) affirmed that interactivity and communication are both key elements in successful online instruction based on a quasi-experimental study involving 255 students for 23 online courses and 17 online instructors to determine effectiveness of online instructors. From the study, training and expertise in shaping instructors to become active participants, that can provide clear directions, expectations, and be available were paradigms created in a plan for increasing instructor-student and student-student interaction in online courses (Fabry & Schubert, 2009; Fabry & Elder, 2013). Faculty training models. When instructors are trained to instruct online, they need to have knowledge relative to learning and understand how to promote learning online. Hiring and retaining qualified, experienced, and savvy instructors has been an approach in sharpening the power of offering a rich curriculum in both online and traditional courses (Johnstone, 2007). Instructors need to find ways to promote learning
when challenged with technology and lack of face-to-face interaction (Allen & Seaman, 2011, 2013; Bedford, 2009; Hart, 2012; Schulte, 2010), which may not be a simple task without proper guidance, practice, and training, as concluded from qualitative case studies, interviews, and survey analyses. The basis for Ball and Cohen's (1999) theory was Shulman's (1986) PCK framework. Ball and Cohen's theory explored instructional improvement efforts linked to professional development, course training, and online instructional experience pedagogies. Experience, knowledge, skill, and personal attributes specific to online instruction, according to Ball and Cohen (1999), should be exclusively implemented in online instructional training and preparation in order to be beneficial to the instructor and learner in the online learning environment. Purposive training, activities, and experience are the main domains of Ball and Cohen's (1999) theoretical approach to instruction. Purposive instructional training is training that specifically addresses the skills in online instruction that are necessary within the course and that apply to content knowledge, course design, and management of an online course. Experience in online instruction entails an adoption of mastered practices, such as common deliverables, effective communication, timely feedback, and an overall understanding of how online learners learn and methods to guide them to succeed. The practice-based theory of Ball and Cohen (1999) conceptualized that effective instruction as a profession is mastered by training and experience that apply to the professional practice. Specific to instructional training, Ball and Cohen (1999) identified that the concept of practice for online instructors is grounded in professional development and training with an emphasis on student learning outcomes.
Harris and Hofer (2011) also recommended an activity types approach model that integrated instructional planning with an emphasis on standards and curriculum focused on learning outcomes. The focus is not on the technologies that lead to the outcome but on the application of technological pedagogical content knowledge (TPCK) to professional development skill practices that aid in instructional success. Instructors who apply strategies specific to context promote student learning (Harris & Hofer, 2011; Palloff & Pratt, 2011). Ghilay and Ghilay (2014) conducted a mixed methods study that involved the perceptions of 20 lecturers drawn from 10 different institutional training evaluations of faculty and staff. The Training to Management of Online Courses (TMOC) model was used to help course managers and some instructors create, develop, and deliver courses. The training model activities used to assess the training methods included lectures, booklets, instructional video clips, personal guidance, classroom and home practice, annual projects, and pedagogical implications of technology. The study concluded that training modules should train staff members systematically, especially before staff members begin to manage online courses, and that training should continue during the process of instruction. The study did not analyze student success, satisfaction, retention, grades, or the relation to instructional outcomes. Online instructional training models can prompt and coach an online instructor to readily perform such tasks as repairing and aligning the content to the course learning objectives. Course training modules do not guarantee that all instructors will actually do these tasks correctly and apply them to the courses (Ball & Cohen, 1999; Cross & Goldenberg, 2009; Palloff & Pratt, 2011; Shea & Bidjerano, 2010).
Professional development. Professional development practices and goals that apply to the seven constructs of technological, pedagogical and content knowledge are changes and improvements of the practice of instructors, the actual change of attitudes and beliefs of the instructors, and a change in learning outcomes, such as improved grades (Guskey, 2000; Huberman, 1985). Strategic planning of professional development activities at colleges and universities are necessary processes in addressing the needs of learners through professional development planning, which is driven to support institutional effectiveness on a larger scale (Harris & Martin, 2012; Lake, 2003). Students in higher education are evolving and newer programs are created to meet the diverse learning needs of learners, hence, the professional development activities of faculty can be proposed to address the needs of learners (Friedman, 2011; Gregory & Salmon, 2013; Herman, 2012; Palloff & Pratt, 2011). The professional development planning and training of faculty employed within online campuses requires a specialization and understanding of online learning as a different approach than traditional classroom learning (Ball & Cohen, 1999; Cook et al., 2011; Hart, 2012; Moore, 2013). Toch (2010) agreed for the need of technological proficiency of instructors in the newer technology-based learning environments as an advantage to the learning environment. The Professional and Organizational Development Network in Higher Education (POD Network) is an organization that was established in 1976 to support faculty development, professional development, and organizational development initiatives. The POD Network assists and finds methods to support human resources in the development of professional development that is applicable to the nature of the tasks and goals of the
educational departments that require support in enhancing the skills and experiences of faculty or staff. POD Network studies have indicated a large need for the integration of technology-based training for members of higher education (McKee, Johnson, Ritchie, & Tew, 2013). The research of McKee et al. (2013) indicated that the use of technology and a variety of practices and experiences in professional development enhance online pedagogy and assist learners to succeed in online courses. As part of the TPACK framework, online instructional experience is how previous practice and knowledge, combined with the pedagogical awareness of an online instructor, enable the promotion of online learning, especially the way the course feels and the important details within the course structure that enable student learning to take place (Koehler, Mishra, Kereluik, Shin, & Graham, 2014; Mishra et al., 2009). The TPACK framework suggests that professional development aligned with effective instructional practices is driven by all areas that overlap in the TPACK framework. Shattuck et al. (2014) noted that expertise is a critical demographic in the profession of an online instructor. At least two years of current online teaching experience, rigorous training in the quality of online instruction, and adherence to the domains in the QM rubric are useful when peer reviewers are selected to conduct course and instructional reviews (Shattuck et al., 2014). Experience was assessed in how the QM was applied within courses by online instructors. Greenberg (2010) conducted a study and suggested that the QM rubric aids in the development of a quality product by conducting professional development and involving all faculty in the online sector, from course designers to students. Monroe (2011) conducted another study involving the QM rubric. This study specifically focused on
instructional designers, instructors, subject matter experts, and administrators. Results of the study and application of the QM rubric enabled the design of courses but did not assist with instructional training of faculty in online instruction; results for part-time faculty were not mentioned. Ward (2011) utilized the QM rubric at the University of Akron, conducting a study to develop a process for new online instructors to improve online learning. TPACK (Technological, Pedagogical, and Content Knowledge) was the conceptual framework of that study. Results indicated that the QM training and course revision process assisted online faculty in understanding online interaction; however, part-time online instructors were not part of the study. Lye (2013) conducted a study that integrated the TPACK framework to train instructors in higher education for an Information Communication Technology (ICT) program in Malaysia. The study concluded that implementing the TPACK model left the training and professional development department with many questions about the accountability of the questions asked in the survey and the measurement of training after the study was conducted. Lye (2013) concluded that the training of academic staff is a continuous developmental process, that obstacles sometimes arose, and that finding ways to prepare instructors to incorporate online technology was not integrated correctly and warranted more observation. The online instructor's experience as a facilitator is quite different from that of a traditional campus instructor. Previous studies regarding faculty evaluation via end-of-course surveys and internal faculty reviews indicated some faculty are not performing as efficiently as they should (Baker-Doyle, 2010). Some faculty concerns about online
teaching are related to whether an instructor is employed full-time (Bowden, 2009). Some part-time faculty members also do not perceive a need for growth and lifelong learning, because of isolation, lack of respect, and a missing sense of belonging or feeling of exclusion at the workplace, and consequently do not value professional development activities (Baker-Doyle, 2010; Kezar & Lester, 2009). Anderson (2008) indicated that content area knowledge, technological expertise, and persistence are key elements of success in online instruction. Moser (2007) stated that many virtual instructors lack the technological and pedagogical skill to incorporate educational technology into the courses they instruct. Palloff and Pratt (2011) stated that bringing newer streaming technologies into the classroom is done by master or experienced instructors who take risks and incorporate technology in online classrooms. Professional development is one way of introducing ways for instructors to apply newer technologies in the online classroom (Palloff & Pratt, 2011). Friedman (2011) discussed the Continuing Professional Development (CPD) model. This model follows a formal and systemized method to aid instructors in improving skills, reinforcing existing skills, and gaining newer knowledge in the working environment. The CPD model supports knowledge and skills and warrants that methods of professional development should remain current and appropriate to the constant changes in the learning needs and environment of online instruction (Friedman, 2011).
Cycles of professional development. Professional development usually follows employment and takes place after training and instruction have already been conducted (McQuiggan, 2012). Professional development practices of past theories are not always
developed for a part-time online faculty member because of the lack of inclusion of part-time faculty in campus professional development activities (Shattuck et al., 2014). The Center for Community College Student Engagement (2010) conducted a survey in which 658 colleges in the United States participated. The results of this survey indicated that there is a need for faculty professional development that encompasses instructional strategies that can engage students. The activities within professional development need to be aligned with the course content and organization of tasks to make meaning for students and to enable positive instructional output. Gregory and Salmon (2013) noted that professional development for online instructors is usually offered annually at higher education institutions, and that professional development is often not content related or aimed at the instructional practices of the online instructor's courses. Some professional development activities may address certain concerns such as plagiarism, using certain software, or learning how to use new features in the LMS; the TPACK model categorizes LMS activities within the TK dimension, and subject-specific technology parallels TCK (Mishra et al., 2009). Putman, Smith, and Cassady (2009) drew attention to three major components of the critical mechanisms that lead to the achievement of self-improvement and developmental skills in professional development: reflection, taking risks within a safe environment, and continuous feedback to promote improvement. Successful training programs should focus on the development of the knowledge, skills, and strategies for teaching and learning (Moore, 2013). At times, the content that is presented for training does not apply to the actual content being taught, but instead, it just assists with navigation within the courses and completing certain tasks within the course (Ball &
Cohen, 2009). In addition, online faculty need opportunities to collaborate with other members to benefit from activities of professional development (Harris & Martin, 2012). Professional development plans that are designed for online instructors must also emphasize that assessment for learning is a process, and an understanding of student-centered assessment can aid in better learning and accurate measurements in assessment (Beebe et al., 2010). Attaining quality within an online instructional program surpasses an elaborate course design; it is an entirely intricate mechanism of creativity and zealous instructional approach. In order to complete professional development that can benefit an instructor online, an instructor must first recognize the newer trends that have emerged, such as social networking, videos, podcasts, and webcam conferences, to name a few. Technology will require constant improvement to address and learn newer ways of instructing online courses (Allen & Seaman, 2009, 2011, 2013; Mishra & Koehler, 2006; Palloff & Pratt, 2011; Shulman, 1986). Grossman and Loeb (2010) discussed the practice-based frameworks of professional development of instructors and indicated that effective instructors ought to be recruited and selected, since instructors are the core of learning at all levels of instruction. The more informed and involved an online instructor is with technology, the more the instructor can offer learners who are immersed in the latest trends of communication online and offline (Mishra et al., 2009; Moore, 2013).
Faculty Effectiveness
Effectiveness of part-time faculty has been addressed in several studies, but not at for-profit virtual institutions, according to current research. Learning and evaluation emerged together when conducting literature searches because, essentially, evaluation of
effectiveness is a part of the learning process (Rastgoo & Namvar, 2010) since learners are granted grades or points for completing their courses. For online learners, success in online courses is just as important as it is for traditional learners that attend courses on ground because learners need to pass a course and move forward with the degree program or to complete a required course which may be mandatory. Retention rates of learners, faculty performance, and grade points averages in addition to graduation rates have been a concern in online sectors (Johnson & Mejia, 2014; LoBasso, 2013; Palloff & Pratt, 2011; Roby et al., 2013). Grades are a result of assessment, and assessment is a universal application that measures individual endorsement, and improves teaching and quality of feedback on instruction (Bonnel & Boehm, 2011; Peterson & Irving, 2008). Different types of assessment are utilized for different areas of required analysis. For example, Palomba and Banta (1999) defined assessment as an organized collection, analysis, and application of information regarding educational programs proposed mainly to enhance and improve student learning and development. Some assessment techniques online include a project or portfolio that is developed over weeks, while other forms of assessment may be quizzes or short answer assignments for learners. In this study, assessment is the final assessed grade achieved in a course, what a learner earns at the end of the grading term for that specific course. Pereira et al. (2011) developed a pedagogical model that utilized an e-folio as an alternative assessment for online learners. The model noted that learning tools and instructional strategies could add value and enhancement to learning. These strategies and learning tools allow versatility to students when learning online, with an aim to meeting multiple learning intelligences, utilize collaboration, reflection, and online
learning community (Beck, 2010). TPACK was developed around helping instructors apply technological, pedagogical, and content knowledge so that learners within the online course achieve and succeed in the courses attended. Johannesen (2013) explained that effectiveness with training modules and the LMS is not assessed enough in educational settings to aid in effective instructor outcomes. On an administrative level, Pereira et al. (2011) noted that satisfaction of learners, metacognition or previous experiences, as well as administrative involvement justified the success of programs for online learners. Both instructors and learners require solid knowledge about what they are involved in when it comes to the courses in which they participate. Arend (2007) stated that the quality of student learning comes through the cognitive processes students use to study, sometimes referred to as learning strategies. In an online learning environment, when learning does not take place in a face-to-face setting, how an instructor teaches using different learning strategies is a skill. When the student is learning independently, the student may not experience the instructional strategies used by the instructor, or the instructor may not have the experience to apply learning strategies within online instruction. Concerning student grades and the instructional experience of instructors utilizing technology, Bernard et al. (2009) conducted a study that involved both students and instructors in the training of the LMS and the courses instructed. Instructors were allowed to edit the syllabi, adjust and create assignments, and formulate discussion questions of their choice. The results of the study's findings were that instructional experience and a deep understanding of technology in virtual learning are an
advantage that allows both students and instructors to interact effectively for better student comprehension and success. The faculty in this study were not part-time instructors. Galy, Downey, and Johnson (2011) studied indicators for success of students and grades in online classes; the results indicated that online student success relied on three components: perceived usefulness, perceived ease of use, and the students' ability to work independently. How learners apprehend and perceive the course modules is modeled by a combination of instructional presentation and course design, which involves instructional training. Jeffcoate (2010) reported that student success in online courses results from the creation of course material pertinent to the tasks students practice within their employment and from allowing students to pace the course to their own needs. However, many learners have no concept of where to locate services like tutoring and library content online, and faculty are a way to introduce these resources to online learners (Thomsett-Scott & May, 2009). Results of studies also indicated that learning along with participation is more meaningful than nonparticipation (Hart, 2012; Ormrod, 2013; Palloff & Pratt, 2011). Eagan and Jaeger (2009) conducted a study that assessed student transfer rates to 4-year colleges and universities and the effect of part-time faculty. The sample included over 1.5 million students in 107 community colleges in California. Student characteristics and academic achievement were analyzed, as well as gender, race and ethnicity, age, citizenship, major, financial aid information, and enrollment status. The technique used for this study was hierarchical generalized linear modeling (HGLM). The findings concluded that exposure to part-time faculty resulted in fewer student transfers to a 4-year
institution. Wang (2009) indicated that the learning pedagogy of better student outcomes is rooted in the methods and approaches of instructors, and that effective instructors combine the three TPACK elements with other positive characteristics to engage and affect learners.
Conclusion
The literature reviewed for this study involved examining and organizing theories and studies, and identifying critical gaps, agreements, and disagreements between the literature and the themes surrounding part-time online faculty members. The theoretical framework (TPACK) established and paved the direction for the assessment of part-time online faculty's seven self-reported domains of technological, pedagogical, and content knowledge. Technological, pedagogical, and content knowledge, the core of the TPACK, were explored; where the three domains overlap, they yield PCK, TCK, and TPK, for a total of seven domain areas. The current literature reviewed focused on technology, pedagogy, and content, especially literacy, using the TPACK framework (Ball & Cohen, 1999; Greenberg, 2010; Gregory & Salmon, 2013; Jaipal & Figg, 2010; Kereluik et al., 2013; Lye, 2013; Moore, 2013; Özgün-Koca et al., 2010; Özmantar et al., 2010; Ward, 2010). The TPACK framework and studies concerning part-time online instructional knowledge, experience, students, and professional training and development support the study's predictor variables (seven domains of technological, pedagogical, and content knowledge) and criterion variable (student grades). Since part-time online faculty are the largest group of employees at for-profit institutions, part-time online faculty were the center of the research and literature review.
From the studies regarding part-time online faculty, one can infer that part-time online faculty are diverse members of the virtual higher education milieu who are underrepresented in current research because of the rapid growth of part-time online faculty in the past 10 years. Literature that described part-time online faculty included the recent growth of part-time faculty, instructional quality, TPACK, pedagogy and theory, the virtual faculty profile, responsibilities and attitudes, curriculum and design, human resources and recruitment, faculty training, faculty training models, professional development, cycles of professional development, and faculty effectiveness. There is an absence of quantitative research designs that have used the TPACK assessment model and survey to measure the seven self-reported domains of technological, pedagogical, and content knowledge of part-time online faculty in higher education since the rapid growth of part-time online faculty. The growth of part-time online faculty members in higher education gave rise to a new research study. For example, a number of research studies have been conducted about part-time online faculty to examine the dynamics that surround the career in online learning environments. Professional development, training, and experience in online instruction were recognized as necessary requirements in the career of online instruction in the existing literature. Other dynamics of the career of online instruction also involved administrative challenges, as part-time online faculty addressed the lack of involvement in campus activities and organizational changes such as curriculum development, course content, and changes in which they do not participate. Ball and Cohen's (1999) theoretical model presented purposive training and activities for the career of online instructors and the involvement with the achievement of
learners. Content knowledge, course design, and course management were the center of the theoretical model. Training in TPACK and applying the training purposely toward student outcomes might be rewarding for expanding professional development or training for new or existing faculty. Previous research studies regarding the domains of technological, pedagogical, and content knowledge focused on the knowledge of technology within course training of pre-service teachers through the inclusion of certain activities within training modules (Harris & Hofer, 2011; Jaipal & Figg, 2010). Development of quality content through many techniques, and applying strategies for instructors to achieve better learning outcomes, was abundant in previous research. Previous studies regarding the TPACK theoretical model focused on observing how preparation of instructors resulted in changes in performance, and how instructors selected learning activities and technologies (Ball & Cohen, 1999; Ghilay & Ghilay, 2014; Harris & Hofer, 2011; Mishra & Koehler, 2006; Schmidt et al., 2010). TPACK studies focus on how instructors become more deliberate and goal driven, how instructors plan course instruction and become more student-centered, and how they become aware of distinct standards in technology integration (Ball & Cohen, 1999; Ghilay & Ghilay, 2014; Harris & Hofer, 2011; Mishra & Koehler, 2006). The assessment methods and tools that measure TPACK constructs focus on before-and-after-training self-assessment. TPACK studies employ self-reported assessment. In other studies, an activities approach model was suggested with an emphasis on course criteria and outcomes; it did not regard technology alone but also included technological pedagogical content knowledge (TPCK) and contained
professional development skill practices intended to aid instructional success. Instructors who apply strategies that relate specifically to context promote student learning (Ball & Cohen, 1999; Harris & Hofer, 2011; Mishra & Koehler, 2006; Palloff & Pratt, 2011). Studies of self-reported TPACK and assessment of performance have been grounded in instructors' thoughts and self-perceptions about their own use of technology within instruction, collected as self-assessments. There is an abundance of published studies and literature in which TPACK and all of its domain areas are the foci. The majority of existing studies are qualitative, mixed, and mainly experimental in nature. Some TPACK studies did not include instructors who may be experienced professionals or employed part-time in the online field. Previous research concentrated on pedagogical knowledge (PK) without combining it with technological knowledge (TK). Further research on the demographics of the pre-service teachers who participated in TPACK research, as well as on newly hired part-time faculty members, is necessary. Such research would help characterize how experienced that population was compared to part-time online instructors, and clarify whether TPACK self-assessments were used more because of newly developed experience or because of instructor training. This study proposed to address a gap in the literature regarding part-time online instructors' self-reported domain areas of technological, pedagogical, and content knowledge and their relationship to students' grades in a predictive manner. Earlier studies did not include student grades as a criterion to investigate whether part-time online instructors' self-reported technological, pedagogical, and content knowledge predicted online student grades.
Current research indicated a gap in the literature: existing studies examined full-time rather than part-time online faculty, and no single study addressed TPACK in relation to both student grades and effective instruction. Chapter 3 outlines the research method and the design of the study.
Chapter 3
Methodology
The purpose of this quantitative non-experimental study was to examine the relationship between the self-reported predictor variables of technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK) of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners' grades. Chapter 3 includes an in-depth description of the research methodology, followed by the research design guiding this study. The research questions and hypotheses were reiterated. The target population was described, and the sample and sampling method were explained. The data collection steps were explained and the instrument used in the study was described. The data analysis plan used to answer the research questions was explained. The chapter concludes with an overview of reliability and validity measures.
Research Method
There were two choices of research methods for this study, qualitative or quantitative (Cohen, Cohen, West, & Aiken, 2003; Neuman, 2011; Steinberg, 2011; Vogt, 2007). Babbie (2010) indicated that quantitative methods involve objectively measuring and numerically analyzing data gathered through polls, questions, or surveys. When the goal of a study is to test a hypothesis of a predictive relationship between predictor variables and a continuous criterion variable, a quantitative study is suggested (Steinberg, 2011; Vogt, 2007).
Applying a qualitative method for this study was not feasible because qualitative methods do not analyze relationships between variables using numbers or scales; rather, qualitative studies would describe the lived learning experiences and instructional experiences of both learners and instructors (Denzin & Lincoln, 2007). The approach of quantitative research is quite different from qualitative research; there was no direct contact, case studies, or interviewing of participants in this study (Creswell, 2009; Denzin & Lincoln, 2007; Marczyk et al., 2005; Stangor, 2011). The data collected were gathered and analyzed through quantitative methods. Denzin and Lincoln (2007) affirmed that quantitative studies differ from qualitative studies mainly because quantitative studies measure "…quantity, amount, intensity, or frequency" (p. 14), as suggested for this study. The process of conducting this study did not directly influence or experiment with any of the variables of interest but instead analyzed relationships between variables, which aligns with a correlational method (Field, 2013). A quantitative study was appropriate because the data collected about online students (grades) and part-time online instructors (the seven domains of technological, pedagogical, and content knowledge) were not qualitative in nature. Students were not contacted or asked about their lived experiences or feelings, and no interviews or face-to-face discussion occurred. The collected data were represented and analyzed in the form of numbers or scales of quantity.
Research Design
There are three types of quantitative research designs: non-experimental, quasi-experimental, and experimental (Cohen et al., 2003; Marczyk et al., 2005; Neuman, 2011; Steinberg, 2011; Vogt, 2007). This study used a quantitative, non-experimental
research design in an attempt to establish the relationship between the self-reported predictor variables in the seven domain areas of technological knowledge, pedagogical knowledge, and content knowledge of part-time online instructors and the criterion variable, the grades of the online learners who attended the courses taught by the same instructors. According to Leedy and Ormrod (2010), a correlational design is most appropriate when data collection involves two or more variables with specific measurements for each variable and data analysis aims to identify any interrelationships. Non-experimental research was appropriate for this study because the participants were not under any treatment, control, or influence (Vogt, 2007). An experimental research design entails manipulation, treatments, or condition changes for the participants within the study and can support causal inferences about the relationships among variables (Cohen et al., 2003; Marczyk et al., 2005). Quantitative studies use descriptive statistics to present summaries of the data collected; descriptive statistics concentrate on summarizing and describing data (Marczyk et al., 2005). There are four types of descriptive statistics: measures of central tendency (mean and median scores within a distribution), measures of dispersion (variance and standard deviation, describing the scatter of deviations from the mean), measures of relative position (percentiles or z-scores), and measures of association (correlation, the ways scores vary together, or covariance) (Cramer, 1998; Vogt, 2007). Quantitative correlational studies use measures of association between variables, which were utilized in this study to help describe the significance and nature of relationships in the data. Measures of association, however, do not describe cause and effect; the intent of this study was to assess the association and relationship between the predictor variables
and the criterion variable. Therefore, this study used a quantitative, correlational, nonexperimental research design in order to answer the research questions and hypotheses. Research Questions and Hypotheses RQ1: What is the relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors? H10: There is no statistically significant relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors. H1a: There is a statistically significant relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors. RQ2: What is the relationship between the online learners’ grades and selfreported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors? H20: There is no statistically significant relationship between online learners’ grades and self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors. H2a: There is a statistically significant relationship between online learners’ grades and self-reported pedagogical content knowledge (PCK), technological
content knowledge (TCK), and technological pedagogical knowledge (TPK), of part-time online instructors. RQ3: What is the relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors? H30: There is no statistically significant relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. H3a: There is a statistically significant relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. RQ4: To what extent, if any, is self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors related to online learners’ grades? H40: Self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time instructors is not related to online learners’ grades. H4a: Self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time instructors is related to online learners’ grades.
Population and Sampling
The group of interest, or population, chosen for this study was part-time online faculty members at a private for-profit virtual institution in the United States. The participants were recruited through the Academic Affairs department, which sent an e-mail to part-time online instructors employed at the institution. Online students were not contacted, but their grades were part of the study. The number of online part-time faculty employed at the institution was between 207 and 225; for the term in which the survey took place, only 148 part-time online faculty were available. The part-time online faculty employed at the institution were between the ages of 30 and 65 and resided in scattered regions of the United States. The number of students per class per instructor ranged from 10 to 35. Sampling is choosing a small group from a larger group and studying the small group, or sample, to learn about the larger group, or population (Suskie, 1996; Vogt, 2007). The researcher used convenience sampling to secure a pool of volunteer participants who met inclusion criteria. Convenience sampling is a form of non-probability sampling in which the researcher recruits individuals who meet specific inclusion criteria to participate in a study (Farrokhi, 2012; Johnson & Christensen, 2014). The researcher sent the survey to the entire available population, so each individual had the same probability of being in the final sample.
For this study, a power analysis with recommended parameters consisted of a power of at least .80, an alpha of .05, and at least a small effect size for a proper sample size to be calculated (Howell, 2013). G*Power 3.1.9 was used to calculate an appropriate sample size using the above parameters. Using a multiple linear regression with 3 predictor variables, a sample of at least 77 participants was needed to warrant empirical validity (Faul, Erdfelder, Buchner, & Lang, 2014).
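The a priori sample size can be checked outside of G*Power. The short Python sketch below is illustrative only and is not the G*Power session used in this study; it assumes a medium effect size (Cohen's f² = .15), since that assumption, together with α = .05, power = .80, and three predictors, yields a minimum sample in the neighborhood of 77.

    # Minimal sketch of an a priori power analysis for the overall F test of a
    # multiple linear regression with three predictors (assumed f^2 = 0.15).
    from scipy import stats

    def regression_power(n, n_predictors, f2, alpha=0.05):
        """Power of the overall F test for a regression fit to n observations."""
        df1 = n_predictors
        df2 = n - n_predictors - 1
        ncp = f2 * n                                 # noncentrality parameter
        f_crit = stats.f.ppf(1 - alpha, df1, df2)    # critical F under the null
        return stats.ncf.sf(f_crit, df1, df2, ncp)   # P(noncentral F > critical F)

    n = 10
    while regression_power(n, n_predictors=3, f2=0.15) < 0.80:
        n += 1
    print(n)  # approximately 77 under these assumptions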
Informed Consent and Confidentiality
An Informed Consent Agreement form (see Appendix C) containing a confidentiality statement was sent to potential participants via the academic operations department of the virtual institution in order to identify whether they wished to volunteer for the study. Participants who agreed to volunteer by selecting "I accept the above terms" within the Informed Consent Agreement were directed by the link to the survey. Participants who chose the option "I do not accept the above terms" were not directed to the survey and received a message thanking them for their time. To protect confidentiality and anonymity, no names or personal information were requested from either online instructors or online learners. No identifying information could be revealed because the survey questions did not allow for any commenting; only multiple-choice responses were recorded and sent with blind identifiers. The virtual institution also could not see or access responses or hold participants accountable for their personal responses. Instructors were assigned a unique identity code (ID) to access the survey, and this ID code was used to match student grades. The survey site does not have access or the ability to identify the Internet
Protocol (IP) addresses, locations, e-mails, names, or personal information of participants. The privacy policy of the Survey Monkey™ website (see Appendix E) also prevented any information from being shared or exposed to unauthorized parties. To maintain participant privacy, no demographic questions such as institution name or location were asked. Any volunteer participant was free to withdraw from the study without penalty or loss of privileges by exiting the online survey or by communicating through a method shown on the letter of invitation to participate, such as e-mail, phone, or mail (see Appendix C). Only participants who understood and agreed to participate in this voluntary study were allowed to access the survey. After the study was conducted and finalized, all data were password protected and will be stored on the researcher's computer for 3 years. After 3 years, all data will be shredded, deleted, and destroyed. No one will have access to the information or disclose private information about participants or the virtual institution.
Data Collection
A private for-profit virtual institution that offers online undergraduate and graduate degrees allowed this study of part-time online faculty to be conducted (see Appendix B). The private for-profit institution agreed to distribute invitations via e-mail and the Internet. The e-mail included a web link to the Survey Monkey™ survey and the Informed Consent form (Appendix C) and was sent through academic affairs to the population of part-time online instructors. The private for-profit virtual institution provided the grades that students earned in the courses taught by the part-time online instructors who volunteered and participated in the survey. Part-time online instructors' responses were anonymous. The data collected (student grades and scores)
were composed through the student affairs department of the virtual institution and without disclosure of student names. For this study, a faculty survey was conducted online. The part-time online instructors (volunteer participants) received a solicitation via e-mail that was sent out by the academic operations division of the private for-profit virtual institute to undisclosed recipients or participants to invite them to the study and describe the purpose of the study. Students were not contacted; only their grades were disclosed by the student affairs department and were matched up according to the instructor ID. This maximized the anonymity of participants and their students and reduced the risk of a breach of confidentiality. For this study, the survey solicitation was conducted via Internet or web as this was the simplest method to manage and control large groups and it is less costly than mail, less bothersome than telephone calls, and protects the confidentiality of participants (Dillman, 2007). There are advantages and disadvantages when using surveys to collect data (Keppel & Zedeck, 1989; Vogt, 2007). Surveys that are mailed through the postal system are costly, home-mailing addresses may not always be accurate, mail may be thrown away once it arrives, mail can get lost, or the survey may reach the wrong person it was intended to reach. Participants may also delay in mailing back the survey forms that could risk delay in the collection of data. A part-time faculty solicitation e-mail (see Appendix C) was sent only to parttime online instructors (volunteer participants). The e-mail message contained a direct web link to the site that hosted the actual online survey questionnaire (SurveyMonkey.com®) and the Informed Consent form. Responses were anonymous, and no record or personal details were included after the volunteer participants had
completed and submitted their responses. The sets of data were used to analyze each volunteer participant’s responses and Technological Pedagogical Content Knowledge (TPACK) scores for technological knowledge, pedagogical knowledge and content knowledge for all seven areas of the TPACK. The student operations division reported the grades of students that attended the courses with the same instructors, and student grades were also anonymous. Instrumentation The TPACK Survey (Schmidt et al., 2010) for part-time virtual faculty participants addressed three major constructs technological knowledge, pedagogical knowledge and content knowledge for part-time online instructors with a total of seven overlapping domain areas. The TPACK Survey was created by Schmidt et al. (2010) as a technique to measure each of the seven domains of the TPACK (technological, pedagogical, and content knowledge) with the final instrument consisting of 47 Likertscale items (Koehler et al., 2014; Schmidt et al., 2010). Permission was granted in order to use the TPACK Survey instrument, and to slightly modify questions and wording or remove sections for research purposes (see Appendix A). Technological knowledge, pedagogical knowledge, and content knowledge and overlapping areas were included in the survey. The rationale and process used to select this survey for this study was relevant to the literature regarding the predictor variables (technological knowledge, pedagogical knowledge, and content knowledge and overlapping areas) of part-time online instructors. The TPACK Survey (Schmidt et al., 2010) questions pertaining to part-time online technological knowledge, pedagogical knowledge, and content knowledge and overlapping areas were divided into groups. The first set of questions gathered how
much content knowledge part-time online faculty self-reported that they possessed; the second set of questions gathered how much technological knowledge part-time online faculty self-reported that they possessed; and the third set of questions gathered how much pedagogical knowledge part-time online faculty self-reported that they possessed. The three predictor variables (technological knowledge, pedagogical knowledge, and content knowledge) were ordinal data measured on a 5-point Likert-scale. For example, an item from the technological knowledge portion stated, “I know how to solve my own technical problems.” A sample item from the pedagogical knowledge portion stated, “I know how to assess student performance in a classroom.” Moreover, a sample item from the content knowledge scale stated, “I have sufficient knowledge about the subject I instruct.” Each item had responses ranging from “Strongly Disagree” = 1 to “Strongly Agree” = 5. Responses for all of the items that correspond to each construct were averaged in order to obtain one overall score for each of the three predictor variables. Higher composite scores represented a higher proficiency in the constructs. Student grades were also an area of the actual study and were measured according to the average points received in courses. Data released included student percentages earned for the courses instructed by the participating instructors, which were treated as a ratio variable, from 0 to 100, with higher numbers indicating a higher proficiency within the course. Reliability and Validity Reliability is a vital element that determines if a study is consistent and yields reliable results (Fox, 2008). There are three different types of reliability issues in a study: stability reliability, representative reliability, and equivalence reliability (Neuman, 2011).
Stability reliability concerns patterns in the data, or changes in the data, over time. Representative reliability concerns how well the measurements yielded by the instrument hold up across other groups and different classes. Equivalence reliability involves multiple tests in order to measure consistent and stable results (Vogt, 2007). When results are consistent, they are highly reliable (Neuman, 2011; Vogt, 2007). Instrument reliability means the instrument measures variables in a predictable, stable, and consistent manner (Marczyk et al., 2005). If reliability is low, a researcher will not be able to detect the relationship between the predictor variables and the criterion variable even if one exists. In order for an instrument to be valid, it must measure the variables identified in the research questions (Cooper & Schindler, 2008). Research validity is subject to four threats: construct, internal, external, and statistical (Marczyk et al., 2005). Construct validity refers to whether or not the instrument measures what it is proposed to measure (Fox, 2008; Wilcox, 2008). The construct validity of the TPACK Survey (Schmidt et al., 2010) rests on the instrument measuring the seven domain areas of technological, pedagogical, and content knowledge represented by the predictor variables. Internal validity confirms that the research design chosen is the correct one; a study with internal validity has a single, clear explanation of the results involving the predictor variables, the criterion or outcome variable, and the extent of their relationship. Cronbach's alpha and exploratory factor analysis on each domain of the TPACK established construct validity of the TPACK Survey instrument, as supported by Schmidt et al. (2010). Archambault and Crippen (2009) and Schmidt et al. (2010) both pilot tested the TPACK for instrument
validity and found it statistically reliable in measuring the self-reported seven domains of technological, pedagogical, and content knowledge of part-time online instructors. External validity determines how well the study findings would yield similar results and generalized to other populations (Cooper & Schindler, 2008). The use of convenience sampling yields a potential threat to external validity. According to Vogt (2007) unrepresentative sampling poses a threat to validity in research design because it reduces the “…ability to generalize from a sample to a population” (p. 82). Findings from the data analysis may not be generalizable to the greater population, if the sample is not comparable to the target population. To minimize the potential threat to external validity, the researcher recruited a diverse sample of participants to provide data for the study. A description of the target population and sample was provided to allow the reader to determine how well the sample reflects the composition of the population. A threat to external validity is Type I Error (false positive), which exists when a null hypothesis is rejected when it is correct (Vogt, 2007). A Type II Error (false negative) can occur if the null hypothesis is not rejected when there is actually an effect present (Marczyk et al., 2005; Vogt, 2007). However, in this study, Type I and Type II threats were minimized by conducting a power analysis in order to ascertain the minimum sample size needed to ensure validity. One more threat to external validity is that non-experimental studies do not conclude cause and effect or impact as experimental studies to determine the cause and effect and impact between variables. A nonexperimental approach was chosen because of the inability to perform an experimental study on participants. The approach was further justified because of the nature of the study and intent to examine only the predictive relationship of online students’ grades by
the seven domains of technological, pedagogical, and content knowledge of part-time online instructors at a private for-profit virtual institution. Researchers have established high levels of internal validity and internal consistency of the TPACK Survey (Schmidt et al., 2010). Schmidt et al. (2010) established acceptable internal consistency by observing Cronbach's alpha values between 0.75 and 0.92 for each of the seven overlapping constructs of the TPACK Survey (Koehler et al., 2014). Replication in other studies utilizing the same research design indicates a reliable research technique because the outcomes produced similar results. In addition, the population, part-time online instructors at one private for-profit virtual institution, supported the reliability of this study because the sample size suggested for the study was large enough to generalize the results.
Data Analysis
Data collected were uploaded into SPSS version 21.0 for Mac. All data were evaluated for missing values and extreme cases. The existence of extreme cases, or outliers, was assessed through the evaluation of standardized values. Standardized values were generated for each of the TPACK scores, and each individual case was inspected for values above 3.29 or below -3.29 (Tabachnick & Fidell, 2012). All cases with missing data were evaluated to determine whether the missing values occurred at random. Volunteer participants who did not complete major sections of the survey were excluded from the final sample. Descriptive statistics were assessed to describe the demographics of the sample and the variables of interest used in the analyses. Frequencies and percentages were calculated for ordinal data, and means and standard deviations were evaluated for continuous data (Howell, 2013). To answer research questions 1-3, multiple linear
regression (MLR) was conducted. Multiple linear regression is an appropriate analysis when the purpose of the research is to assess whether there is a predictive relationship between a set of continuous or categorical predictor variables and a continuous response variable (Stevens, 2009). The following regression equation (main effects model) was used:

y = β0 + β1x1 + β2x2 + β3x3 + ε

where y = the response variable, β0 = the constant (intercept), β1 = the first regression coefficient, β2 = the second regression coefficient, β3 = the third regression coefficient, x1 through x3 = the predictor variables, and ε = the residual error (Tabachnick & Fidell, 2012). A total of three regression analyses were conducted, reflecting research questions 1-3. For Research Question 1, the self-reported predictor variables were technological, pedagogical, and content knowledge of part-time online instructors, and the criterion variable was online learners' grades. The self-reported predictor variables for Research Question 2 were pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge of part-time online instructors, and the criterion variable was online learners' grades. For Research Question 3, the self-reported predictor variables were pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge, and the criterion variable was TPACK of part-time online instructors. The standard entry method for MLR was used; the standard method enters all of the predictors simultaneously into the model. Predictors were evaluated based on what each added to the prediction of the criterion variable beyond the predictability provided by all other predictors (Tabachnick & Fidell, 2012).
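As an illustration of how such a model can be fit, the sketch below uses Python's statsmodels library rather than the SPSS procedure reported in this study. The data frame and column names (df, "TK", "PK", "CK", "student_grade") are assumptions made for the example and do not reflect the study's actual files; the sketch shows the Research Question 1 predictor set and also computes the variance inflation factors discussed below.

    # Minimal sketch: standard-entry multiple linear regression with three predictors.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    # Assumed layout: one row per instructor, with composite TPACK scores and the
    # mean grade earned by that instructor's students (illustrative file name).
    df = pd.read_csv("tpack_scores_and_grades.csv")

    X = sm.add_constant(df[["TK", "PK", "CK"]])   # all predictors entered simultaneously
    y = df["student_grade"]                       # criterion variable

    model = sm.OLS(y, X).fit()
    print(model.fvalue, model.f_pvalue)           # overall F test of the model
    print(model.rsquared_adj)                     # adjusted R-squared
    print(model.summary())                        # per-predictor t tests and coefficients

    # Variance inflation factors; values above 10 would suggest multicollinearity.
    vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
    print(vifs)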
The F test was used to assess whether the model that included the predictor variables jointly predicted the criterion variable. The multiple coefficient of determination, R², was examined to determine how much variance in the response variable was accounted for by the group of predictors. A t test was examined to evaluate each predictor for significance, and beta (B) coefficients were used to determine the size of the prediction effect for each significant predictor variable. For significant predictors, a one-unit increase in the predictor variable indicates that the criterion variable increases or decreases by the size of the unstandardized beta coefficient, holding all other predictors constant. The assumptions of MLR were assessed before any analyses were conducted. The assumptions included normality, homoscedasticity, and the absence of significant multicollinearity. The assumption of normality assumed that the distribution relating the predictor variables and the outcome variable followed a bell-shaped curve, while homoscedasticity assumed that values of the predictors were equally distributed along the regression line. Both normality and homoscedasticity were examined using scatter plots (Tabachnick & Fidell, 2012). The absence of multicollinearity presumed that the predictor variables were not closely related and was evaluated using variance inflation factors (VIF). A violation of the assumption was suggested by VIF values over 10, which indicated multicollinearity (Stevens, 2009).
To examine Research Question 4, a Pearson r correlation analysis was conducted. The variables of interest for the analysis were TPACK and online learners' grades. Pearson correlation analysis was appropriate because of the intention to assess the relationship between two variables. Given that all variables were continuous (interval/ratio data) and the hypotheses sought to assess the relationships, or how the distributions of the z-scores varied together, Pearson r correlation analysis was the appropriate bivariate statistic (Pagano, 2010). Pearson correlation is the bivariate measure of association describing the strength and direction of the relationship between two continuous variables (Pallant, 2010). Correlation coefficients vary from -1 to 1. A correlation of -1 indicates a perfect negative relationship, and a correlation of +1 indicates a perfect positive relationship. A correlation coefficient of 0 indicates no relationship. Positive coefficients indicate a direct relationship, in which one variable increases as the other increases; negative coefficients indicate an indirect relationship, in which one variable decreases as the other increases.
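A minimal sketch of this bivariate analysis, reusing the illustrative data frame assumed in the regression sketch above (the column names are not the study's actual variable names), is shown below.

    # Pearson r correlation between self-reported TPACK and mean student grade.
    from scipy import stats

    r, p = stats.pearsonr(df["TPACK"], df["student_grade"])
    print(f"r = {r:.2f}, p = {p:.3f} (two-tailed)")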
Summary
Research method and design complement one another, together addressing the planning and the techniques of conducting research. Clarity in planning and managing the entire research effort is vital in quantitative studies (Cohen et al., 2003; Vogt, 2007). Chapter 3 discussed the rationale for selecting a quantitative rather than a qualitative design. The steps regarding the collection of student grades and part-time online faculty self-reported data were described, and the voluntary and confidential aspects of participation were explained. The seven constructs of the self-reported predictor variables of part-time online instructors' technological knowledge, pedagogical knowledge, and content knowledge were operationalized using the TPACK Survey (Schmidt et al., 2010). Part-time online instructors' self-reported constructs of technological knowledge, pedagogical knowledge, and content knowledge, along with their students' grades, were the focus of this study. Chapter 4 presents the results of the data analyses in order to answer the research questions.
Chapter 4 Results There is evidence of low online postsecondary learner grades (Johnson & Mejia, 2014; Roby et al., 2013; Xu & Jaggars, 2013) and retention of online postsecondary learners (Council on Educational Technology & Learning Innovation [CETLI], 2013; Johnson & Mejia, 2014; NCES, 2014; Nistor & Neubauer, 2010). Part-time online faculty account for between 80-90% of faculty at private for-profit virtual institutions (NCES, 2014). The purpose of this quantitative non-experimental study was to examine the relationship between self-reported predictor variables (technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK) of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners’ grades. Chapter 4 presents the results of the data for this study and will include an overview of research design and method, faculty population and sample demographics, faculty professional development frequency and student grade, and descriptive statistics. The chapter concludes with the research questions and hypotheses testing results. Multiple linear regression (MLR), Pearson r, and descriptive analysis were used to analyze the data. Relationships that are statistically meaningful between the self-reported predictor variables and criterion variable were identified. Reporting of validity and reliability measures for the study included reliability analysis, data screening, and descriptive statistics.
Research Design and Method
A quantitative design utilizing multiple linear regression analysis and Pearson r correlation analysis to examine the predictor variables and criterion variable was used in the study because of the nature of the research questions and hypotheses. Quantitative research methods utilize numbers, hypotheses, and statistical analyses to address the research questions posed for a study (Cooper & Schindler, 2008; Miles & Shevlin, 2001; Stangor, 2011). In this study, the procedures performed consisted of analyses of measurable data collected from volunteer participants (part-time online faculty). This quantitative study measured the predictive relationship between the self-reported predictor variables and the criterion (outcome) variable, employing a non-experimental correlational method that did not directly influence any of the variables of interest but instead analyzed the predictive relationships between variables (Field, 2013).
Faculty Population and Sample Demographics
The population chosen for this study was part-time online faculty members at a private for-profit virtual institution in the United States. Participants were recruited through the academic affairs department, which sent an e-mail to part-time online instructors employed at the institution. A convenience sampling approach was employed to secure volunteer participants for the study. Eighty-one part-time online faculty members participated in the TPACK Survey (Schmidt et al., 2010), for a response rate of 57%. Three survey responses were partially completed and were not included in the total. There was only one missing value, which was replaced with the series mean because the sample size was small. The TPACK Survey (Schmidt et al., 2010) was distributed via the web to 148 part-time online faculty members at a private for-profit
virtual institution headquartered in the United States in January 2016 via a SurveyMonkey.com® link. The operations program manager at the virtual institution e-mailed all potential participants to invite them. The survey was administered to the entire available population, so each individual had the same probability of being in the final sample. Participants were given the option to volunteer and participate in the survey; if they did not want to volunteer, they could opt out and exit without any penalty or obligation. The sample was smaller than the estimated population because only 148 part-time online faculty members were available for the January 2016 term. The faculty sample that received the questionnaire was representative of the population of interest. The institution shared that 55% of the population were male and 45% were female. Regarding age, the institution reported that the largest share of the population was between the ages of 30 and 59. With regard to race, 75% of the population were white, and 25% were black, other, or from multiple races. Degrees held for the population were 89% master's degrees and 11% doctorate degrees. From the data analysis, the majority of respondents (53.1%, n = 43) were males and 46.9% (n = 38) were females. Regarding age, 48.1% (n = 39) were 18-44, and the remaining 51.9% (n = 42) were 45 years of age or older, as presented in Table 1.
Table 1
Age of Part-Time Online Faculty

Age        n      %      Cumulative %
18-29       1     1.2        1.2
30-44      38    46.9       48.1
45-59      30    37.0       85.2
60+        12    14.8      100.0
Total      81   100.0
Regarding race, the majority of part-time online faculty were white (72.8%, n = 59); Asian faculty (6.2%, n = 5) and black or African-American faculty (6.2%, n = 5) were equally represented. Faculty race is presented in Table 2.
Table 2
Race of Part-Time Online Faculty

Race                                          n      %     Valid %
White                                        59    72.8      73.8
Black or African-American                     5     6.2       6.3
Asian                                         5     6.2       6.3
Native Hawaiian or other Pacific Islander     1     1.2       1.3
From multiple races                          10    12.3      12.5
Not Answered                                  1     1.2
Total                                        81   100.0
Eighty-five percent (n = 69) of part-time online faculty self-reported that they held master's degrees and 15% (n = 12) of part-time online faculty held doctorate degrees. Regarding online teaching experience, 8.6% (n = 7) of part-time online faculty had one year or less; 32.1% (n = 26) had 1-4 years, and 59.3% (n = 48) of part-time online faculty had five years or more. Online teaching experience is presented in Table 3.
Table 3
Online Teaching Experience of Part-Time Online Faculty

Teaching Experience      n      %      Cumulative %
1 year or less           7     8.6        8.6
1-4 years               26    32.1       40.7
5-10 years              27    33.3       74.1
10 or more years        21    25.9      100.0
Total                   81   100.0
Faculty Professional Development Frequency
The annual amount of self-reported professional development in the areas of technology, content knowledge, and pedagogy for part-time online faculty volunteer participants was ascertained by three questions on the survey. Frequency responses ranged from 1 (no professional development for the year) to 4 (at least four professional development events per year). The highest self-reported professional development endorsement was in content knowledge (mean score of 2.68), indicating more frequent professional development in that area. Across all responses, participants on average self-reported completing professional development activities two to three times a year, as presented in Table 4.
Table 4
Professional Development

Professional Development Item                                                                                                      N   Minimum   Maximum    M     SD
Technology: "As an online instructor, how often do you practice professional development that is specialized in technology?"     81      1         4      2.4   1.1
Content/Knowledge: "As an online instructor, how often do you practice professional development specialized in the content that you instruct?"     81      1         4      2.7   1.1
Pedagogy: "As an online instructor, how often do you practice professional development specialized in the pedagogy of online instruction?"     81      1         4      2.4   1.1

Note. 1 = None, 2 = Once or twice a year, 3 = 3 times a year, 4 = At least 4 times a year.
Reliability Analysis
The reliability of the TPACK Survey (Schmidt et al., 2010) instrument constructs for the sample was tested with Cronbach's alpha. Reliability is a measure of consistency. Cronbach's alpha and exploratory factor analysis in each domain of the TPACK established construct validity of the TPACK Survey instrument, and Schmidt et al. (2010) also tested the reliability of all areas of the TPACK Survey. Archambault and Crippen (2009) and Schmidt et al. (2010) both pilot tested the TPACK Survey for instrument validity and found it statistically reliable in measuring all seven self-reported constructs of the technological, pedagogical, and content knowledge of part-time online instructors. The formula used to test reliability computes the average correlation among the items measuring a construct on a survey instrument; therefore, a construct or area must contain a minimum of two items for its reliability to be computed. Reliability for the constructs that could be tested ranged from α = .94 for technological knowledge to α = .97 for technological pedagogical knowledge. The minimum acceptable reliability is .70. Reliability coefficients are presented in Table 5.
Table 5
Reliability Coefficients

Construct                                              N of Items   Cronbach's alpha
Technological Knowledge (TK)                                6            0.943
Content Knowledge (CK)                                      3            0.950
Pedagogical Knowledge (PK)                                  7            0.960
Pedagogical Content Knowledge (PCK)                         1             N/A
Technological Content Knowledge (TCK)                       1             N/A
Technological Pedagogical Knowledge (TPK)                   9            0.968
Technological Pedagogical Content Knowledge (TPACK)         1             N/A
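As an illustration of how these coefficients can be computed, the sketch below implements the standard Cronbach's alpha formula in Python; it is not the SPSS procedure used in the study, and the column names are hypothetical. A construct measured by a single item (PCK, TCK, and TPACK here) has no computable alpha, which is why those cells are reported as N/A.

    # Minimal sketch: Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        k = items.shape[1]                           # number of items in the construct
        item_var = items.var(axis=0, ddof=1)         # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
        return (k / (k - 1)) * (1 - item_var.sum() / total_var)

    # Example with hypothetical column names for the six TK items:
    # tk_alpha = cronbach_alpha(survey[["TK1", "TK2", "TK3", "TK4", "TK5", "TK6"]])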
Student Mean Grades
Student grades for each part-time online instructor ranged from a mean of 14.5 to 100 (M = 68.07, SD = 17.51). On a traditional grading scale, this means that the part-time faculty who volunteered to take the survey had mean student grades in their courses equivalent to a "D" (M = 68, or 68%). Part-time faculty participants had the highest self-reported endorsements in the areas of content knowledge (M = 4.37, SD = 0.91) and pedagogical knowledge (M = 4.12, SD = 0.91). Faculty participants had the lowest self-reported endorsements in the areas of technological pedagogical knowledge (M = 3.71, SD = 1.01) and technological knowledge (M = 3.90, SD = 0.88). Descriptive statistics are presented in Table 6.
Table 6
Descriptive Statistics

Variable                                               N   Minimum   Maximum     M      SD
Student Grade                                         81    14.5       100      68.1   17.5
Technological Knowledge (TK)                          81     1           5       3.9    0.88
Content Knowledge (CK)                                81     1           5       4.37   0.91
Pedagogical Knowledge (PK)                            81     1.71        5       4.12   0.91
Pedagogical Content Knowledge (PCK)                   81     1           5       4.15   0.92
Technological Content Knowledge (TCK)                 81     1           5       3.95   1.19
Technological Pedagogical Knowledge (TPK)             81     1.33        5       3.71   1.01
Technological Pedagogical Content Knowledge (TPACK)   81     2           5       4.07   0.88
The data were screened for normality with skewness and kurtosis statistics and with histograms. Distributions of scores with skewness and kurtosis coefficients between ±2 are considered to be within normal limits. As indicated in Table 7, the data approximated normality.
Table 7
Skewness and Kurtosis Coefficients

                                                            Skewness                Kurtosis
Variable                                              N   Statistic   Std. Error   Statistic   Std. Error
Student Grade                                         81    -0.361       0.267       -0.093       0.529
Technological Knowledge (TK)                          81    -1.020       0.267        1.120       0.529
Content Knowledge (CK)                                81    -1.570       0.267        2.120       0.529
Pedagogical Knowledge (PK)                            81    -0.832       0.267       -0.414       0.529
Pedagogical Content Knowledge (PCK)                   81    -1.080       0.267        0.968       0.529
Technological Content Knowledge (TCK)                 81    -1.170       0.267        0.462       0.529
Technological Pedagogical Knowledge (TPK)             81    -0.568       0.267       -0.651       0.529
Technological Pedagogical Content Knowledge (TPACK)   81    -0.830       0.267        0.182       0.529
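A comparable screening can be run outside of SPSS. The sketch below reuses the illustrative data frame assumed in the earlier regression sketch (not the study's actual variable names) and computes skewness, excess kurtosis, and the number of standardized values beyond the ±3.29 threshold described in Chapter 3.

    # Minimal sketch of the normality and outlier screening for each variable.
    from scipy import stats

    columns = ["student_grade", "TK", "CK", "PK", "PCK", "TCK", "TPK", "TPACK"]
    for col in columns:
        skew = stats.skew(df[col])
        kurt = stats.kurtosis(df[col])             # excess kurtosis (0 for a normal distribution)
        z = stats.zscore(df[col])
        n_extreme = int((abs(z) > 3.29).sum())     # cases flagged as potential outliers
        print(f"{col}: skew={skew:.3f}, kurtosis={kurt:.3f}, extreme cases={n_extreme}")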
The skewness and kurtosis coefficients were, respectively, -.361 and -.093 for student grades (histogram presented in Appendix F); -1.02 and 1.12 for self-reported technological knowledge (TK; Appendix G); -1.57 and 2.12 for content knowledge (CK; Appendix H); -.832 and -.414 for pedagogical knowledge (PK; Appendix I); -1.08 and .968 for pedagogical content knowledge (PCK; Appendix J); -1.17 and .462 for technological content knowledge (TCK; Appendix K); -.568 and -.651 for technological pedagogical knowledge (TPK; Appendix L); and -.83 and .182 for technological pedagogical content knowledge (TPACK; Appendix M).
Research Questions/Hypotheses Results
Four research questions and four related hypotheses were investigated. To answer research questions 1-3, multiple linear regressions were conducted. Multiple linear regression is an appropriate analysis when the purpose of the research is to assess whether there is a predictive relationship between a set of continuous or categorical predictor variables and a continuous response variable (Stevens, 2009). To examine Research Question 4, a Pearson r correlation was conducted. The assumptions of MLR were assessed before any analyses were conducted. The assumptions included normality, homoscedasticity, and no significant multicollinearity. Homoscedasticity assumes that values of the predictors are equally distributed along the regression line. Both normality and homoscedasticity were examined using scatter plots (Tabachnick & Fidell, 2012). The normality of the residuals was analyzed; a residual is the difference between the observed value and the model-predicted value of the dependent variable. Cases with standardized residuals exceeding ±3 were excluded from the analyses. One problem in multiple linear regression analysis is multicollinearity, which exists when moderate to high intercorrelations are present among the predictor variables (Stevens, 2009). In order to assess multicollinearity, variance inflation factors (VIF) were computed; VIF values over 10 would signal a multicollinearity concern (Stevens, 2009). None of the
predictor variables measured 10, or were intercorrelated; hence, the absence of multicollinearity presumed that the predictor variables were not closely related. Research Questions and Hypotheses RQ1: What is the relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK) and content knowledge (CK) of part-time online instructors? H10: There is no statistically significant relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors. H1a: There is a statistically significant relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors. RQ2: What is the relationship between the online learners’ grades and selfreported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors? H20: There is no statistically significant relationship between online learners’ grades and self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors. H2a: There is a statistically significant relationship between online learners’ grades and self-reported pedagogical content knowledge (PCK), technological
content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors. RQ3: What is the relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors? H30: There is no statistically significant relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. H3a: There is a statistically significant relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK), and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. RQ4: To what extent, if any, is self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors related to online learners’ grades? H40: Self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time instructors is not related to online learners’ grades. H4a: Self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time instructors is related to online learners’ grades. Research Question 1/Hypothesis 1. What is the relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge
(PK), and content knowledge (CK) of part-time online instructors? The three selfreported predictor variables were technological knowledge, pedagogical knowledge, and content knowledge of part-time online instructors. The outcome or criterion variable was online learners’ grades. Analysis of residuals indicated one statistical outlier, which was excluded. The remaining residuals ranged from -2.37 to 1.98. The normal histogram for the residuals is presented in Figure 2.
Figure 2. Histogram of Standardized Residuals for Student Grade
The data were examined for homoscedasticity. A scatterplot confirmed that the data met the assumption of homoscedasticity as presented in Figure 3.
Figure 3. Scatterplot of Standardized Residuals and Standardized Predicted Values
The variance inflation factors (VIF) for the predictor variables ranged from 1.47 to 3.00. Since the values were less than 10, no significant problem of multicollinearity was observed. However, the regression model was not statistically significant, F(3, 76) = 1.64, p = .187; adjusted R² = .02. Regression coefficients are presented in Table 8.
Table 8
Regression Coefficients for Student Grade

Variable                    B       SE B      β        t        p      VIF
(Constant)                51.09     9.91              5.16     .000
Technological Knowledge    4.77     2.55    0.253     1.87     .065    1.47
Content Knowledge          0.017    3.48    0.001     0.005    .996    3.00
Pedagogical Knowledge     -0.238    3.36   -0.013    -0.071    .944    2.82

Note. Dependent variable = Student Grade.
H10 stated that there is no statistically significant relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors. The regression model
was not statistically significant, F(3, 76) = 1.64, p = .187; adjusted R2 = .02. Therefore, the null hypothesis was not rejected. Research Question 2/Hypothesis 2. What is the relationship between the online learners’ grades and self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of parttime online instructors? The three self-reported predictor variables were pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge of part-time online instructors. The outcome variable was online learners’ grades. The residuals ranged from -2.85 to 1.88, which was in the range of normality. The normal histogram for the residuals is presented in Figure 4.
Figure 4. Histogram of Standardized Residuals for Student Grade
The data were examined for homoscedasticity. A scatterplot confirmed that the data met the assumption of homoscedasticity as shown in Figure 5.
Figure 5. Scatterplot of Standardized Residuals and Standardized Predicted Values
The variance inflation factors (VIF) for the predictor variables ranged from 1.60 to 2.36. Since the values were less than 10, no significant problem of multicollinearity was observed. The regression model was not statistically significant, F(3, 77) = 1.24, p = .301; adjusted R² = .009. Regression coefficients are presented in Table 9.
Table 9
Regression Coefficients for Student Grade

Variable                               B       SE B      β        t        p      VIF
(Constant)                           50.51     9.35              5.40     .000
Pedagogical Content Knowledge         3.33     2.67    0.176     1.25     .216    1.60
Technological Content Knowledge      -0.283    2.51   -0.019    -0.113    .911    2.36
Technological Pedagogical Knowledge   1.31     2.94    0.076     0.445    .658    2.34
Note: Dependent Variable = Student Grade. H20 stated that there is no statistically significant relationship between online learners’ grades and self-reported pedagogical content knowledge (PCK), technological
content knowledge (TCK), and technological pedagogical knowledge (TPK) of parttime online instructors. The regression model was not statistically significant, F(3, 77) = 1.24, p = .301; adjusted R2 = .009. Therefore, the null hypothesis was not rejected. Research Question 3/Hypothesis 3. What is the relationship between selfreported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors? The three self-reported predictor variables were pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge of part-time online instructors. The outcome variable was self-reported TPACK (Technological Pedagogical Content Knowledge). After excluding one case with a residual outside the range of normality, the residuals ranged from -3.09 to 2.13. The normal histogram for the residuals is presented in Figure 6.
Figure 6. Histogram for Technological Pedagogical Content Knowledge
The data were examined for homoscedasticity. A scatterplot confirmed that the data met the assumption of homoscedasticity. See Figure 7.
Figure 7. Scatterplot of Plot Standardized Residuals and Standardized Predicted Values The variance inflation factors (VIF) for the predictor variables ranged from 1.60 to 2.36. Since the values were less than 10, no significant problem of multicollinearity was observed. The regression model was statistically significant, F(3, 76) = 42.27, p < .001; adjusted R2 = .61. This means that 61% of the variance in TPACK can be explained by the predictor variables. Examination of the univariate statistics revealed that each predictor variable was related significantly and positively to TPACK. Pedagogical Content Knowledge was significantly and positively related to TPACK (β = .21, t = 2.39; p = .019). Self-reported technological content knowledge was significantly and positively related to TPACK (β = .27, t = 2.50; p = .014). Self-reported technological
pedagogical knowledge was significantly and positively related to TPACK (β = .42, t = 3.89; p < .001). Regression coefficients are presented in Table 10.
Table 10
Regression Coefficients for TPACK

Variable                               B       SE B      β        t       p      VIF
(Constant)                            1.24    0.285             4.37    .000
Pedagogical Content Knowledge         0.195   0.081    0.213    2.39    .019    1.60
Technological Content Knowledge       0.191   0.077    0.270    2.50    .014    2.36
Technological Pedagogical Knowledge   0.348   0.090    0.417    3.89    .000    2.34
Note. Dependent Variable = TPACK (Technological Pedagogical Content Knowledge).

H3₀ stated that there is no statistically significant relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) and self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. The regression model was statistically significant, F(3, 76) = 42.27, p < .001; adjusted R² = .61. Since all of the predictor variables were significantly related to TPACK, the null hypothesis was rejected.
Research Question 4/Hypothesis 4. To what extent, if any, is self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors related to online learners’ grades? Research question four/hypothesis four was investigated with the Pearson r. There was no significant relationship between self-reported TPACK and online learners’ grades, r(79) = .14, p = .221, two-tailed.
H4₀ stated that self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time instructors is not related to online learners’ grades. There was no significant relationship between self-reported TPACK and online learners’ grades, r(79) = .14, p = .221, two-tailed. Therefore, the null hypothesis was not rejected. Table 11 provides a summary of all alternate hypotheses tested and their outcomes.

Table 11
Summary of Hypotheses Tested and Outcomes
Hypothesis                                            Statistical Test              Significance   Outcome
H1: Technological, pedagogical, and content           Multiple Linear Regression    p = .187       Not Supported
    knowledge of part-time online instructors does
    statistically predict online learners’ grades.
H2: Pedagogical content knowledge, technological      Multiple Linear Regression    p = .301       Not Supported
    content knowledge, and technological pedagogical
    knowledge of part-time online instructors does
    predict online learners’ grades.
H3: Pedagogical content knowledge, technological      Multiple Linear Regression    p < .001       Supported
    content knowledge, and technological pedagogical
    knowledge does predict TPACK (Technological
    Pedagogical Content Knowledge) of part-time
    online instructors.
H4: TPACK is related to online learners’ grades.      Pearson r                     p = .221       Not Supported
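As a minimal illustration of the correlation analysis behind Research Question 4 above, the sketch below computes a two-tailed Pearson r between hypothetical per-instructor TPACK scores and mean learner grades; the DataFrame and column names are assumptions, not the study's actual data set.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-instructor data: self-reported TPACK score and the mean
# grade earned by that instructor's online learners (placeholder values, n = 81).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "tpack_score": rng.uniform(1, 5, 81),
    "mean_learner_grade": rng.uniform(50, 95, 81),
})

# Two-tailed Pearson correlation, as used for Research Question 4.
r, p = pearsonr(df["tpack_score"], df["mean_learner_grade"])
print(f"r({len(df) - 2}) = {r:.2f}, p = {p:.3f}")
```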
Summary
Four research questions and related hypotheses were investigated. It was determined that self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) significantly predicted self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. However, self-reported technological, pedagogical, and content knowledge of part-time online instructors did not predict online learners’ grades. Self-reported pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge of part-time online instructors also did not predict online learners’ grades. TPACK was not significantly related to online learners’ grades. Online learner grade averages were considered low (M = 68, or 68%) for the courses attended online. Implications of these results will be discussed in Chapter 5.
Chapter 5: Conclusions and Recommendations
Online postsecondary learners achieve lower grades than learners who attend traditional classrooms, and online learners are more likely to drop or fail online courses than learners in traditional courses. Part-time faculty account for 80% to 90% of faculty at private for-profit virtual institutions and therefore represent a large share of the instructors at these institutions. The purpose of this quantitative non-experimental study was to examine the relationship between the self-reported predictor variables (technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK)) of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners’ grades. The results of the study indicated that the self-reported areas of technological, pedagogical, and content knowledge of part-time online instructors did not predict online learner grades. However, self-reported pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge were significantly related to part-time online instructors’ self-reported TPACK, the area of the theoretical model where all constructs overlap, as shown in Figure 1. Chapter 5 contains information based on the study findings. The main features of this chapter are conclusions, limitations, implications, recommendations for practice, recommendations for further research, and a summary.
Conclusions
The four research questions and hypotheses guided the selection of the research design, making it possible to examine quantitatively the relationship between part-time online faculty in higher education, who currently represent between 80% and 90% of faculty at virtual institutions, and online learners’ grades, which are significantly lower in virtual courses. The study’s findings were informed by Mishra and Koehler’s (2006) TPACK theoretical model of advanced online instructional environments. There were seven self-reported predictor variables: technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK). The outcome variable was online learner grades. Answering the four research questions involved assessing relationships using multiple linear regressions and the Pearson r correlation coefficient. Demographic data were collected to better understand and classify the participants. Demographic findings included age, sex, race, online teaching experience, degrees held, and frequency of professional development activities in the categories of technology, pedagogy, and content knowledge. The following is a discussion of conclusions and results in accord with the research questions.
RQ1: What is the relationship between online learners’ grades and self-reported technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) of part-time online instructors?
The first research question posed was whether self-reported technological, pedagogical,
and content knowledge (TK, PK, CK) of part-time online instructors predicted online learners’ grades. The goal of the first research question was to analyze the self-reported technological, pedagogical, and content knowledge (TK, PK, CK) of part-time online instructors to see whether the three constructs had a predictive relationship to online learners’ grades. Results for these three areas of the TPACK theoretical model led to the conclusion that self-reported technological, pedagogical, and content knowledge did not predict or have a relationship with learners’ grades. A significant relationship was not observed between self-reported technological, pedagogical, and content knowledge of part-time online instructors and online learners’ grades.
Self-reported technological, pedagogical, and content knowledge were inclusive areas within the TPACK theoretical framework that combined technologies such as digital books, the Internet, digital video, the LMS, and software with the methods used to present subject matter through current technologies (Mishra & Koehler, 2006). As in this study, participants in previous studies completed the TPACK Survey by self-evaluating themselves on the same constructs. Although participants were able to self-assess their own knowledge of technology, pedagogy, and content, they may not have accurately evaluated whether they actually applied what they were asked about within their courses. Results may therefore be limited by the self-reported nature of the data.
RQ2: What is the relationship between online learners’ grades and self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) of part-time online instructors?
The second research question posed was whether self-reported pedagogical content
knowledge, technological content knowledge, and technological pedagogical knowledge (PCK, TCK, TPK) of part-time online instructors predicted online learners’ grades. The goal of the second research question was to analyze self-reported pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge of part-time online instructors to see whether the three constructs had a predictive relationship to online learners’ grades. Results for these three areas of the TPACK theoretical model led to the conclusion that self-reported PCK, TCK, and TPK did not predict or have a relationship with learners’ grades.
Prior studies assessed the self-reported TPACK constructs using qualitative, quasi-experimental, or mixed methods approaches to help understand the way the constructs were applied by instructors in faculty training, technology use, and course content development. Previous studies concentrated on using the TPACK model as a guide for developing the areas of the TPACK within courses (Fabry & Schubert, 2009; Harris & Hofer, 2010; Mishra & Koehler, 2006). Activity ideas, usage of technology, instructional strategies, and specific prompts were included in developed courses tailored to the areas of the TPACK framework so as to blend content with technological usage, effective instructional strategies, and solid activities that support the subject matter (Harris & Hofer, 2010). Because results in previous studies were qualitative or experimental, participant responses in this study may have differed.
RQ3: What is the relationship between self-reported pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK) and self-reported TPACK (Technological Pedagogical
Content Knowledge) of part-time online instructors?
The third research question posed was whether self-reported pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge (PCK, TCK, TPK) predicted self-reported TPACK of part-time online instructors. The goal was to measure whether self-reported pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge could predict self-reported TPACK, that is, how all areas of the constructs of self-reported technology, pedagogy, and content knowledge together support student learning outcomes. Results indicated that self-reported PCK, TCK, and TPK predicted self-reported TPACK of the part-time online instructors. PCK, TCK, and TPK are the constructs that overlap the areas of technology, pedagogy, and content knowledge in the TPACK model. This significant finding may be due to the structure of the technologically driven TPACK conceptual framework. The PCK, TCK, and TPK constructs, however, did not show a comparable relationship with student grades. The structure of the TPACK model, in which the seven areas overlap, may have been closely aligned and produced the prediction of self-reported TPACK because the areas within each construct rely on one another within the theoretical TPACK framework. Results indicated a mutual reliance of the TPACK constructs on one another.
RQ4: To what extent, if any, is self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors related to online learners’ grades?
The fourth research question asked to what extent, if any, TPACK is related to online learners’ grades. The goal was to measure whether self-reported TPACK, or the core
constructs within the TPACK framework, had any relationship or correlation to online learners’ grades. The results indicated there was no relationship between self-reported TPACK and online learner grades. Research Question 4 moved the analysis beyond looking at the parts of self-reported technology, pedagogy, and content knowledge and incorporated all the areas to assess how strongly instructors agreed or disagreed that they could integrate each area into their teaching strategy comprehensively. The analysis indicated there was no correlation between self-reported TPACK and student grades. Other studies have not tested the TPACK constructs against one another; hence, more research is recommended to illuminate the relationships among the constructs as they apply to part-time online faculty so that more accurate conclusions can be drawn.
Demographic information regarding the educational background of part-time online faculty collected in this study indicated that 89% of part-time online faculty held master’s degrees and 11% held doctorate degrees. Participants in previous TPACK research studies in the literature were preservice instructors who held only bachelor’s degrees or were entering master’s programs (Fabry & Schubert, 2009; Harris & Hofer, 2010; Mishra & Koehler, 2006; Schmidt et al., 2010). Other notable demographic data concerned the experience held by instructors. Data collected regarding online instructional experience indicated that only 8.6% of part-time online faculty had one year or less of experience, 32.1% had 1-4 years of experience, and 59.3% had five or more years of experience. Previous TPACK research studies emphasized that their participants were predominantly preservice or first-year instructors with little or no instructional experience who were newly introduced to the profession.
Findings from this study regarding professional development indicated that content knowledge activities were the most common form of professional development, as this was the highest score self-reported by the participants. The results supported previous studies’ findings that TPACK assessment can enrich content for effective instruction, blending pedagogy and content knowledge to instruct effectively while utilizing technological knowledge to enrich pedagogy (Mishra & Koehler, 2006; Schmidt et al., 2010).
Participants in previous studies were interviewed using open-ended prompts so they could discuss with the interviewer how and why they used certain areas of the self-reported TPACK constructs; this helped the interviewer and participants assess how much the participants knew, their experiences, and the use of technological knowledge they perceived while instructing courses (Fabry & Schubert, 2009; Harris & Hofer, 2010). Prior TPACK studies did not measure the TPACK performance of part-time online instructors against student grades and did not evaluate the frequency and areas of professional development that part-time online instructors participated in annually.
Participants in this study were not given choices within the research questions to assess other variables that may be part of their online instructional profession. Such variables may include assessment of the constraints in the learning environment, the LMS, and the ability to discuss their understanding of their own technology, pedagogy, and content knowledge. Many areas of professional practice were not included in the TPACK self-assessment survey. The TPACK assessment tool was utilized in the way it was designed,
which might not yield enough variables to correspond with high or low student grades.
A side observation from this study was that current research supported the study’s results regarding low grades for online learners. The findings affirmed that students who attend online courses achieve lower grades, as the average online student grade in this study was 68%. The findings supported previous research related to distance education, part-time online faculty members, low online learner grades, and the growth of part-time online faculty (Allen & Seaman, 2013; Harris & Martin, 2012; Mishra et al., 2009; Porras-Hernandez & Salinas-Amescua, 2013).
Limitations
There were various limitations of the study. One limitation was the concern regarding the honesty and perception of volunteer participants about their own self-reported technological, pedagogical, and content knowledge (TPACK), as the survey itself comprised self-reported data. The self-reported data and the honesty of self-evaluation of the part-time faculty participants may not have yielded accurate responses. Opinions of a faculty member’s own mastery of TPACK may or may not be accurate, as some participants may have viewed themselves as proficient in certain areas of the TPACK Survey (Schmidt et al., 2010) instrument when in reality they were not. Other limitations included two study design assumptions: that student grades equated to student academic performance, and that student academic performance or outcomes reflected the quality of faculty teaching and associated professional development. Online grades also did not reflect students’ dropped courses, and online grades were not directly linked to instructors’ self-assessments during data collection and
analysis.
There are numerous distinct variables assessed by the TPACK. Compacting all areas of the TPACK into one research question would not have provided the opportunity to consider each variable to the necessary extent, because the analysis needed to move beyond looking only at the parts of technology, content, and pedagogy. Incorporating the TPACK constructs allowed the researcher to assess comprehensively how strongly instructors agreed or disagreed that they could integrate each area into their teaching strategy.
The limitation of examining a small population could be addressed by sampling additional private for-profit organizations to gain access to a larger pool of part-time online faculty. Data were collected from only one virtual institution, limiting the number of responses. Another limitation was the size of the instructor sample used in the study. The number of part-time online instructors was approximately 148, of which 81 voluntarily participated in the study. The sample was drawn from part-time online faculty at one private for-profit virtual institution. Results may have differed if all faculty members and student grades at the virtual institution were the sample focus or if additional institutions were included in the study. The type of faculty and the sample size limited the generalizability of the research results. The last limitation was the dependence on quantitative data instead of qualitative data. In a qualitative study, participants would have discussed their perceptions regarding the domains of the TPACK constructs and their experiences regarding low performance of learners in online classes.
Implications
The implications of this study are that there may be other factors outside of the
factors investigated that relate to the recent decline in online learners’ grades. There are three vital factors of instructional knowledge for instructors: the understanding of content, the understanding of teaching, and the understanding of technology (Mishra, Koehler, & Henriksen, 2011). The study results imply the importance of understanding and expanding current knowledge regarding the domains of technological, pedagogical, and content knowledge theory and online learners’ grades. Close analysis of instructional factors gathered from the literature in the areas of faculty preparation, training, and professional development that acknowledges the importance of online learner grades, achievement, and improved methods of assessing online learners is recommended.
Online learners’ grades achieved within online courses may not be an appropriate or accurate measurement of student learning outcomes in virtual learning environments. The methods of evaluating learners’ performance within online courses may need to be addressed differently than using a traditional grading system within current virtual institutions, so that online learners are evaluated appropriately. Perhaps a newer grading system for assessing online learners differently could replace current online grading systems in the future.
Another implication of the study, regarding self-reported content knowledge, is that professional development activities that focus on areas of technology and pedagogy could be enriched so learners achieve better grades, with close attention to content and purposive activities. Professional development activities that involve technology, pedagogy, and content knowledge can generate
meaningful learning (McKee et al., 2013; Palloff & Pratt, 2011). Learning more about the seven domains of technological, pedagogical, and content knowledge of part-time online faculty may be useful to institutional leaders in higher education institutions offering online courses, even though the results of this study indicated no relationship between instructors’ TPACK and online learners’ grades. There are many more variables not covered in the TPACK that may have a direct impact or causal relationship with online learners’ grades. Institutional leaders may want to consider a closer examination of curriculum, design, course development, instructional quality, responsibilities and attitudes, human resources, recruitment, cycles of professional development, faculty effectiveness, and experience in delivering instruction, which would help in the better assessment of TPACK performance.
Given the growth of online courses and of part-time online instructors teaching them, part-time faculty members warrant thorough examination. Technology, pedagogy, and content knowledge are areas that online instructors draw on daily within online course instruction. Higher education leaders can continue to foster student learning and address the concern for student success and retention rates in online learning environments while ensuring that part-time online faculty are applying the seven areas of technology, pedagogy, and content knowledge within their courses.
Recommendations for Practice
Higher education stakeholders in online departments and human resources can plan to provide different methods of faculty orientation based on the TPACK framework
for new part-time online faculty members. Part-time online faculty can participate in orientations that prepare them with competency in the seven areas of technology, pedagogy, and content knowledge for application in their future courses with student learning outcomes in mind. Orientation for new appointees would not just introduce how the online courses and the LMS function. Orientation would also demonstrate how to create lessons that promote usage of the TPACK areas for student mastery of the learning outcomes, measured not by grades achieved by students but by performance on specific tasks. The findings of Research Question 3 affect how orientation programs might be designed. The TPACK framework supports student learning outcomes, and the findings of this study indicated that all areas of the self-reported constructs of technology, pedagogy, and content knowledge had a relationship to one another (TPACK). Part-time online faculty orientation may include hands-on practice, technology assessment, content analysis, mentoring, and mock courses that require specific prompts that trigger TPACK mastery with close observation. Participants in the orientation could be evaluated by other members and given opportunities to choose how they apply all areas of the TPACK domains to their lessons. Lastly, orientation may focus on how part-time online faculty can better assess and award learners’ grades so online learners receive an accurate record of their learning accomplishments.
Stakeholders and department chairs can support, develop, and promote professional development, in alignment with the human resources department, with opportunities that address the seven areas of technology, pedagogy, and content knowledge within specific activities that can supplement the talents of existing part-time online instructors. Part-time online faculty can participate in professional development
through newer methods, such as identifying where part-time faculty are weakest and building activities that would help improve skills that are not often practiced in the profession. When self-reported professional development frequency was assessed in the areas of technology, pedagogy, and content knowledge, the findings indicated that professional development activities involving content knowledge were the most practiced by part-time online instructors. The least practiced areas of professional development were technology and pedagogy activities. Activities that supplement and blend both technology and pedagogy could be created and specially designed to increase the technological and pedagogical professional development of part-time online faculty. Professional development could be ongoing, given the shift of the knowledge base over time, to address specific areas of instruction requiring improvement.
An expansion of the TPACK theoretical model to include other constraints, such as limitations within the online instructional and learning environment, could extend the findings of TPACK self-assessments. Although there was no relationship between self-reported TPACK and student grades, stakeholders in higher education can still integrate professional development activities that improve online learner experiences and achievement. Professional development can still be student-centered, as online learning has continued to expand and online retention continues to be problematic. Hence, professional development activities that provide instructors with better methods of assessing and evaluating online learners may support achievement of better grades in online courses.
Online faculty training occurs before courses are instructed. Online training modules for part-time online faculty can be enhanced by developing activities, within the actual training, in the areas of TPACK that can directly affect the courses instructed by part-time online faculty in the seven areas of technology, pedagogy, and content knowledge. Department leaders and human resources departments can create customized faculty training rubrics, best practices, and specific prompts for a variety of subjects that categorize activities for each TPACK area. An online instructor may face many constraints in the practice of teaching; identifying these constraints and addressing them within training may be valuable before instructors are allowed to instruct. Faculty training directed toward TPACK may be difficult to measure; however, evaluation of certain performances or tasks, instead of self-reported evaluation, could lead to a better understanding of what types of training promote better mastery of areas in TPACK. How faculty are evaluated by administrators is discussed in the following section. Training that utilizes different methods of assessment of online faculty members and concentrates on the difficulties of online instruction, in addition to TPACK area barriers, may benefit online instructors. Training that focuses on improving techniques for assessing student learning may, alternatively, produce better assessment of online learner grades.
Faculty evaluation methods with close regard to TPACK domains can be improved by expanding the constructs of the theoretical model rather than limiting the paradigms to only the areas of technology, pedagogy, and content knowledge. For example, the addition of one or two more areas that supplement student learning outcomes could help to illuminate the effects of TPACK on student performance. In addition, faculty may
need to be evaluated by others rather than through self-assessment. Development of different paradigms that measure or allow different areas to be studied could aid department leaders in evaluating instructors directly rather than relying on the self-analysis the TPACK warrants. Frequency of faculty evaluation is vital to ensure instructors remain competent and keep up with the continuous changes that occur in online learning environments. Since the findings indicated there was no relationship between self-reported TPACK and grades, department chair evaluation of faculty could perhaps include review of course grade distributions and grading rubrics. Faculty evaluation can be performed internally, within the courses, and can be linked to course data and student learning outcomes within the courses instructed. Full participation of online learners would be necessary to evaluate faculty using a newer faculty evaluation method. Current evaluation techniques mentioned in this study did not specifically address areas of the TPACK framework or the methods instructors used to evaluate themselves. Instead, previous studies regarded the improvements that instructors perceived rather than actual performance evaluation or what was achieved. If faculty performance evaluations included the TPACK model constructs, a newer evaluation method could be developed.
Recommendations for Further Research
The goal of the researcher was to provide an improved understanding of self-reported TPACK and online student grades by examining the relationship between the self-reported TPACK domains of part-time online faculty and online learners’ grades. Even though the results of the study indicated no relationship existed regarding self-reported
TPACK areas and online learner grades, this does not preclude other relationships or areas that were not part of this study. Researchers may consider exploring, as a follow-up, other factors that may affect online learners’ grades, with a concentration on part-time faculty TPACK. Effective professional development opportunities, activities, and goals that focus on student performance, as reflected by achievement, would require more analysis. A discussion of some of these areas provides recommendations for further research.
The study’s results did not conclude that self-reported technological, pedagogical, and content knowledge (TPACK) of part-time online instructors had any relationship to online learners’ grades. A future study that measures the actual TPACK of online instructors over time, with careful regard to how each area of the TPACK is measured through an assessment of rigorous tasks instead of self-reported data, would be robust. For example, one future research idea for virtual institutions would be the creation of an assessment tool that aligns the TPACK Survey areas with student outcomes specific to the TPACK areas while analyzing student assessment. Future researchers could consider developing a TPACK measurement or assessment instrument that does not utilize self-reported data and can still measure the TPACK areas of the model by scoring participants according to skill within areas of the constructs. As previous research indicated that online learner grades are lower than learner grades at brick-and-mortar schools, a qualitative study involving more than one institution and a larger number of participants, including all faculty, is another study design idea. There are other factors that may help predict online learner grades, and the success of
online learners may not be related to only the TPACK areas of their online instructors. An examination of student performance outcomes rather than grades would be another recommendation, with close regard to how student performance could be improved by implementing certain tasks for each of the seven constructs of the TPACK model. A mixed methods study that uses both quantitative and qualitative research could be done to augment the findings from this study. A qualitative study could also be conducted in which instructors report about their skills and areas of TPACK and describe their own lived experiences, challenges, and perceptions about their roles and tasks as part-time online faculty.
Another qualitative study suggestion would be to interview online learners and ask them about their online learning experience with part-time online faculty using the same predictors (TPACK), asking questions about the students’ observations of their instructors’ usage of TPACK. Students may also share why they are not performing well in their online courses with regard to instructors’ TPACK, revealing whether a cause-and-effect relationship exists. Many other predictor variables concerning online instructors were not included in this study. More than one higher education institution may also be studied to explore institutional and student profile differences regarding other faculty members. For example, other private and non-profit virtual institutions could be examined. If a larger sample, additional participants, and many virtual institutions were part of future studies, newer data could perhaps yield different results and assist in better studying the relationship between self-reported TPACK and online learners’ grades. Some follow-up study suggestions include:
- Examining participants by observing TPACK over a series of courses or a full year.
- Preparing another assessment for an observer that can gauge the TPACK competency of faculty instead of having faculty self-report about themselves.
- Interviewing online learners to ask them about their experiences and observations of online instructors and why some grades were high or low.
Summary
The purpose of this quantitative non-experimental study was to examine the relationship between the self-reported predictor variables of technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), and technological pedagogical content knowledge (TPACK) of part-time online faculty working at a private for-profit virtual institution and the criterion variable, online learners’ grades. The relationship of the variables was determined using the TPACK Survey (Schmidt et al., 2010), multiple regression analysis, and Pearson r correlation analysis. The statistical analysis software used for data analysis was IBM SPSS version 21 for Mac. The population was part-time online faculty from a private for-profit virtual institution in the United States. The participants were 81 of 148 part-time online faculty members. Four research questions guided the study.
Results of this research study add to the body of knowledge in higher education administration, especially in online instructional divisions and human resources, by revealing existing practices and applications of technology, instructional assessment, preparation, and training of online faculty. The results provide higher education leaders
support for future part-time faculty training initiatives that utilize preparation methods and techniques for offering optimum online student support. The findings may inform a newer best practices model that can benefit online virtual institutions in retaining students and improving student grades and in preparing specifically designed training modules while prioritizing student learning outcomes. Higher education administrators in online learning departments may use the findings from this study to help develop successful online instructors by analyzing the areas of technological, pedagogical, and content knowledge of their part-time online faculty members (PCK, TCK, TPK) and applying specifically targeted tasks that may influence improvement in online student grades according to the subject or activities within the course. Technology, pedagogy, and content knowledge practices promote instructional knowledge and a better understanding of and purpose for effective instruction, according to recent studies and the TPACK theoretical framework (Mishra & Koehler, 2006; Niess, 2011; Schmidt et al., 2010).
The study findings suggest there are additional dynamics that affect the way part-time online faculty members perceive, achieve, self-report, and assess themselves regarding their technological, pedagogical, and content knowledge while instructing online courses. Such dynamics include how the courses are set up by the institution, the actual LMS, how much freedom an instructor is given to design or revise syllabi and course content, and whether instructors are given opportunities to learn newer methods of using the LMS to apply technology in the courses instructed online (OLC, 2014; Shea & Bidjerano, 2010). Existing literature validates that part-time online faculty are a rapidly growing group of instructors in higher education, with continued projected growth that has lasted
over five years. They bring professional backgrounds, work experiences, and an array of diverse skills, ranging from medical, business, human services, and organizational leadership fields, to the online classroom. Part-time faculty are underrepresented in decision-making within online departments and may not be given opportunities or support by the institution to practice areas within TPACK or prepare activities that focus on student learning outcomes.
The findings from this study did not conclude that any area of the TPACK of part-time online instructors predicted or related to online learners’ grades. However, self-reported pedagogical content knowledge, technological content knowledge, and technological pedagogical knowledge (PCK, TCK, and TPK) significantly predicted self-reported TPACK (Technological Pedagogical Content Knowledge) of part-time online instructors. PCK, TCK, and TPK are the three core areas within the TPACK theoretical model where components overlap with other core components of the model. The TPACK framework, especially the components that form its core areas, may prompt further studies on developing professional development in the rapidly growing, technologically driven instructional atmosphere of online learning. The information generated in this study could lead to exploration of and future modifications to existing procedures in the assessment of online learners; the evaluation, professional development, and training of part-time online instructors; and the augmentation of TPACK assessment methods and administration. Higher education stakeholders have the opportunity to reflect on how they are addressing the areas of technological, pedagogical, and content knowledge of their part-time and even full-time faculty members who instruct online courses, mainly for reasons other than concern for
the impact on student grades. However, a deeper understanding of the assessment and achievement of online learners is necessary to determine whether student achievement is due to the TPACK areas or to other factors in the educational milieu.
REFERENCES
Abdellatief, M., Abu Bakar, M. S., Marzanah, A. J., & Abdullah, R. (2011). A technique for quality evaluation of e-learning from developer’s perspective. American Journal of Economics and Business Administration, 3(1), 157-164. doi:10.3844/ajebasp.2011. Allen, I. E., & Seaman, J. (2007). Online nation: Five years of growth in online education. Babson Park, MA: Babson Survey Research Group. Allen, I. E., & Seaman, J. (2009). Learning on demand: Online education in the United States. Babson Park, MA: Babson Survey Research Group. Allen, I. E., & Seaman, J. (2011). Going the distance: Online education in the United States. Babson Park, MA: Babson Survey Research Group. Allen, I. E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Babson Park, MA: Babson Survey Research Group. Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass. Anderson, T. (2008). The theory and practice of online learning (2nd ed.). Athabasca, AB: Athabasca University Press. Aragon, S., & Johnson, E. (2008). Factors influencing completion and noncompletion of community college online courses. American Journal of Distance Education, 22(3), 146-157. doi:10.1080/08923640802239962
Archambault, L., & Crippen, K. (2009). Examining TPACK among K-12 online distance educators in the United States. Contemporary Issues in Technology and Teacher Education, 9(1). doi:10.1080/15391523.2009.10782535 Arend, B. D. (2007). Course assessment practices and student learning strategies in online courses. Journal of Asynchronous Learning Networks,11(4), 3-12. Retrieved from http://sloanconsortium.org/jaln/v11n4/course-assessmentpractices-and-student-learning-strategiesonline-courses. Artino, A. (2008). Practical guidelines for online instructors. TechTrends, 52(3), 37- 45. doi:10.1007/s11528-008-0153-x Babbie, E. R. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. Baker-Doyle, K. (2010). Beyond the labor market paradigm: A social network perspective on teacher recruitment and retention. Education Policy Analysis Archives, 18(26), 1-14. Retrieved from http://epaa.asu.edu/ojs/article/view/836 Ball, D., & Cohen, D. (1999). Developing practice, developing practitioners. In L. Darling-Hammond, & G. Sykes (Eds.) Teaching as the learning profession: Handbook for policy and practice (pp. 3-32). San Francisco, CA: Jossey- Bass. Bates, T. (2014, November 24). Two design models for online collaborative learning: Same or different? [Weblog comment]. Online Learning and Distance Education Resources. Retrieved from http://www.tonybates.ca/2014/11/28/two-designmodels-for-online-collaborative-learning-same-or-different/
Beck, V. S. (2010). Comparing online and face-to-face teaching and learning. Journal on Excellence in College Teaching, 21(3), 95-108. Retrieved from http://celt.muohio.edu/ject/issue.php?v=21&n=3 Bedford, L. (2009). The professional adjunct: An emerging trend in online instruction. Online Journal of Distance Learning Administration, 12(3). Retrieved from http://www.westga.edu/~distance/ojdla/fall123/bedford123.html Beebe, R., Vonderwell, S., & Boboc, M. (2010). Emerging patterns in transferring assessment practices from F2F to online environments. Electronic Journal of eLearning, 8(1), 1-12. Retrieved from http://academicconferences.org/ejournals.htm Bernard, R. M., Abrami, P. C., Borokhovski, C., Wade, A. Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243-1289. doi:10.3102/0034654309333844 Birnbaum, R. (1988). How colleges work: The cybernetics of academic organization and leadership. San Francisco, CA: Jossey-Bass. Bonnel, W., & Boehm, H. (2011). Improving feedback to students online: Teaching tips from experienced faculty. The Journal of Continuing Education in Nursing, 42(11), 503-509. doi:10.3928/00220124-20110715-02
Bowden, R. G. (2009). The postsecondary professoriate: Problems of tenure, academic freedom, and employment law. Academy of Educational Leadership Journal, 13(3), 17-36. Retrieved from http://www.alliedacademies.org/academy-ofeducational-leadership-journal/ Bradley, J. (2009). Promoting and supporting authentic online conversations-which comes first-the tools or instructional design? International Journal of Pedagogies & Learning, 5(3), 20-31. doi:10.5172/ijpl.5.3.20 Brigance, S. K. (2011). Leadership in online learning in higher education: Why instructional designers for online learning should lead the way. Performance Improvement, 50(10), 43-48. doi:10.1002/pfi.20262\ Burns, N., & Grove, S. K. (2005). The practice of nursing research: Conduct, critique, and utilization. St. Louis, MO: Elsevier/Saunders. Bush, R.N. (1984). Effective staff development. Making our schools more effective: Proceedings of three state conferences. San Francisco, CA: Far West Laboratory for Educational Research and Development. Caplan, D., & Graham, R. (2004). The development of online courses. In T. Anderson (Ed.), Theory and practice of online learning (2nd ed., pp. 245-263). Edmonton, AB: Athabasca University Press. Caruth, G., & Caruth, D. (2013). Adjunct faculty: Who are these unsung heroes of academe? Current Issues in Education, 16(3), 1-11. Retrieved from http://cie.asu.edu/ojs/index.php/cieatasu/article/viewFile/1269/528
Cassidy, S. (2011). Self-regulated learning in higher education: Identifying key component processes. Studies in Higher Education, 36, 8, 989-1000. doi:10.1080/03075079.2010.503269 Castle, S. R., & McGuire, C. J. (2010). An analysis of student self-assessment of online, blended, and face-to-face learning environments: Implications for sustainable education delivery. International Education Studies, 3(3), 36-40. Retrieved from http://www.ccsenet.org/journal/index.php/ies/article/download/5745/5308 Chickering, A. W., & Gamson, Z. F. (1987, March). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39, 3-7. Retrieved from https://www.conahec.org/resource/aahe-bulletin Clift, E. (2009, May 21). I’ll never do it again. Chronicle of Higher Education, 55(38). Retrieved from http://chronicle.com/article/Ill-Never-Do-It-Again/44250/ Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Erlbaum. Collopy, R. M. B., & Arnold, J. M. (2009). To blend or not to blend: Online and blended learning environments in undergraduate teacher education. Issues in Teacher Education, 18(2), 85-101. Retrieved from http://www1.chapman.edu/ITE/ Cook-Wallace, M. K. (2012, Summer). Testing the significance of core components of online education. The Business Review, Cambridge, 19(2), 64-70.
Cook, M., Dickerson, D. L., Annetta, L. A., & Minogue, J. (2011). In-service teachers’ perceptions of online learning environments. Quarterly Review of Distance Education, 12(2), 73-79. Retrieved from http://www.infoagepub.com/quarterlyreview-of-distance-education.html Cole, J. E., & Kritzer, J. B. (2009). Strategies for success: Teaching an online course. Rural Special Education Quarterly, 28(4), 36-40. Retrieved from http://edu543spring2012.wikispaces.com/Best+Practices+in+Teaching+Online+Courses Cooper, D. R., & Schindler, P. S. (2008). Business research methods (10th ed.). New York, NY: McGraw-Hill/Irwin. Council of Educational Technology and Learning Innovation (2013). Online technology and the future of higher education. Boston, MA: Boston University. Retrieved from http://www.bu.edu/edtechcouncil/symposium Council on Higher Education Accreditation. (2002). Accreditation and assuring quality in distance learning. Washington, DC: CHEA Institute for Research and Study of Accreditation and Quality Assurance. Retrieved from http://www.chea.org/pdf/mono_1_accred_distance_02.pdf Cramer, D. (1998). Fundamental statistics for social research: Step by step calculations and computer techniques using SPSS for Windows. New York, NY: Routledge Academic. Crawford-Ferre, H., & Wiest, L. R. (2012). Effective online instruction in higher education. Quarterly Review of Distance Education, 13(1), 11-14. Retrieved from http://www.infoagepub.com/quarterly-review-of-distance-education.html
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage. Cross, J. G., & Goldenberg, E. N. (2009). Off-track profs: Nontenured teachers in higher education. Cambridge, MA: MIT Press. Desai, M. S., Hart, J. & Richards, T. C. (2008). E-learning: Paradigm shift in education. Education, 129(2), 328-333. Retrieved from http://www.editlib.org/p/107217/ Dillman, D. (2007). Mail and internet surveys (2nd ed.). Hoboken, NJ: Wiley. Dolan, V. (2011). The isolation of online adjunct faculty and its impact on their performance. The International Review of Research in Open and Distance Learning, 12(2), 62-77. Athabasca, AB: Athabasca University Press. Doyle, W. R. (2009). Online education: The revolution that wasn’t. The Magazine of Higher Learning, 41(3), 56-58. doi:10.3200/CHNG.41.3.56-58 Duncan, H., & Barnett, J. (2009). Learning to teach online: What works for pre-service teachers. Journal of Educational Computing Research, 40(3), 357-376. doi:10.2190/ec.40.3.f Dykman, C. A., & Davis, C. K. (2008). Online education forum: Part two-teaching online versus teaching conventionally. Journal of Information Systems Education, 19(2), 157-164. Retrieved from http://www.jise.appstate.edu/Issues/19/V19N2P157abs.p Eagan, K., & Jaeger, A. (2009). Effects of exposure to part-time faculty on community college transfer. Research in Higher Education, 50(2), 168-188. doi:10.1007/s11162-008-9113-8
Eagan, M. K., Jaeger, A. J., & Grantham, A. (2015, May-June). Supporting the academic majority: Policies and practices related to part-time faculty’s job satisfaction. The Journal of Higher Education, 86(3), 448-483. doi:10.1353/jhe.2015.0012 Fabry, D. L., & Elder, D. (2013, Spring). Improving online teaching effectiveness through reflection and collaboration. Perspectives in Learning, 14(1). Columbus, GA: Columbus State University. Fabry, D. L., & Schubert, C. (2009). Increasing interactivity in the online environment. In T. Bastianens et al. (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 2562-2567). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE). Farrokhi, F., & Mahmoudi-Hamidabad, A. (2012). Rethinking convenience sampling: Defining quality criteria. Theory and Practice in Language Studies, 2(4), 784. Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2014). G*Power (Version 3.1.9) [Computer software]. Universität Kiel, Germany. Retrieved from http://www.gpower.hhu.de/en/html Field, A. P. (2013). Discovering statistics using IBM SPSS statistics: And sex and drugs and rock 'n' roll. Thousand Oaks, CA: Sage. Fox, J. (2008). Applied regression analysis and generalized linear models (2nd ed.). Thousand Oaks, CA: Sage. Friedman, A. L. (2011). Continuing professional development: Lifelong learning of millions. New York, NY: Routledge Academic.
Gabriel, M. A., & Kaufield, K. J. (2008). Reciprocal mentorship: An effective support for online instructors. Mentoring and Tutoring: Partnership in Learning, 16(3), 311-327. doi:10.1080/13611260802233480 Galy, E., Downey, C., & Johnson, J. (2011). The effect of using e-learning tools in online and campus-based classrooms on student performance. Journal of Information Technology Education: Research, 10(1), 209-230. Santa Rosa, CA: Informing Science Institute. Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. The Internet and Higher Education, 10(3), 157-172. doi:10.1016/j.iheduc.2007.04.001 Garrison, D. R., Anderson, T., & Archer, W. (1999). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 87-105. doi:10.1016/s1096-7516(00)00016-6 Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23. doi:10.1080/08923640109527071 Gardner, S. K. (2009). Student development theory: A primer. ASHE Higher Education Report, 34(6), 15-28. doi:10.1002/aehe.v34:6 Ghilay, Y., & Ghilay, R. (2014). TMOC: A model for lecturers' training to management of online courses in higher-education. I-Manager’s Journal of Educational Technology, 11(2), 6-16. Ginder, S., & Sykes, A. (2013). Characteristics of Exclusively Distance Education Institutions, by State: 2011-2012 (NCES 2013-172). U.S. Department of
Education. Washington, DC: National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubsearch. Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australasian Journal of Educational Technology, 21(1), 82-101. Retrieved from http://ajet.org.au/index.php/AJET Gradel, K., & Edson, A. J. (2009). Putting universal design for learning on the higher ed agenda. Journal of Educational Technology Systems, 38(2), 111-121. doi:10.2190/et.38.2.d Gradel, K. & Edson, A. J. (2010). Cooperative learning: Smart pedagogy and tools for online and hybrid courses. Journal of Educational Technology Systems, 39(2), 193–212. doi: 10.2190/et.39.2.i Green, J. A., & Azevedo, R. (2007). A theoretical review of Winne and Hadwin's model of self-regulated learning: New perspectives and directions. Review of Educational Research, 77,334–372. doi:10.3102/003465430303953 Greenberg, G. (2011). From the ground up: Conceptions of quality in course design for Web-support education. (Unpublished doctoral dissertation). Ohio State University, Columbus, OH. Gregory, J., & Salmon, G. (2013). Professional development for online university teaching. Distance Education, 34(3), 256-270. doi:10.1080/01587919.2013.835771 Grossman, P., & Loeb, S. (2010). Learning from multiple routes: The variation in teacher preparation pathways can propel our understanding of how to prepare teachers. Education Leadership, 67(8), 22-27. Retrieved from
http://cepa.stanford.edu/content/learning-multiple-routes-variation-teacherpreparation-pathways-can-propel-our-understanding Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press. Harasim, L. (2012). Learning theory and online technologies. New York, NY: Routledge Academic. Harris, J. B., & Hofer, M. J. (2011). Technological pedagogical content knowledge (TPCK) in action: A descriptive study of secondary teachers’ curriculum-based, technology-related instructional planning. Journal of Research on Technology in Education, 43(3), 211-229. doi:10.1080/15391523.2011.10782570 Harris, H. S., & Martin, E. W. (2012). Student motivations for choosing online classes. International Journal for the Scholarship of Teaching and Learning, 6(2). Retrieved from http://digitalcommons.georgiasouthern.edu/ij-sotl/vol6/iss2/11/ Harris, D., & Parrish, D. (2006). The art of online teaching: Online instruction versus in-class instruction. Journal of Technology in Human Services, 24(2/3), 105-117. doi:10.1300/j017v24n02_06 Hart Research Associates. (2010). A national survey of part-time and adjunct higher education faculty. Washington, DC: American Federation of Teachers. Retrieved from http://www.aft.org/sites/default/files/aa_partimefaculty0310.pdf Hart, C. (2012). Factors associated with student persistence in an online program of study: A review of the literature. Journal of Interactive Online Learning, 11(1), 19-42. Retrieved from http://www.aft.org
Hawkins, A., Barbour, M. K., & Graham, C. (2012). Everybody is their own island: Teacher disconnection in a virtual school. The International Review of Research in Open and Distance Learning, 13(2), 123-143. Retrieved from http://files.eric.ed.gov/fulltext/EJ983276.pd Herbert, M. (2006). Staying the course: A study in online student satisfaction and retention. Online Journal of Distance Learning Administration, 9(4). Retrieved from http://www.westga.edu/~distance/ojdla/winter94/herbert94.htm Herman, J. H. (2012). Faculty development programs: The frequency and variety of professional development programs available to online instructors. Journal of Asynchronous Learning Networks, 16(5), 87-102. Retrieved from http://onlinelearningconsortium.org/read/journal-issues/ Heyman, E. (2010). Overcoming student retention issues in higher education online programs Online Journal of Distance Learning Administration, 13(4). Retrieved from http://www.westga.edu/~distance/ojdla/winter134/heyman134.html Higher Learning Commission (HLC) (2013). Retrieved from https://www.hlcommission.org/ Hirumi, A. (2005). In search of quality: An analysis of e-learning guidelines and specifications. Quarterly Review of Distance Education, 6(4), 309-329. Retrieved from http://www.infoagepub.com/index.php?id=89&i=14 Hofer, M., & Grandgenett, N. (2012). TPACK development in teacher education: A longitudinal study of preservice teachers in a secondary M.A.Ed. program. Journal of Research on Technology in Education, 45(1), 83-106.
Howell, D. C. (2013). Statistical methods for psychology (8th ed.). Belmont, CA: Wadsworth, Cengage Learning. Hrastinski, S. (2008, November). Asynchronous and synchronous e-learning, Educause Quarterly, 51-55. Retrieved from https://net.educause.edu/ir/library/pdf/eqm0848.pdf Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78-82. doi:10.1016/j.compedu.2008.06.009 Huberman, M. (1985) What knowledge is of most worth to teachers? A knowledge-use perspective. Teaching and Teacher Education, 1, 251-262. doi:10.1016/0742-051x(85)90008-3 Jaipal, K., & Figg, C. (2010). Unpacking the “total package”: Emergent TPACK characteristics from a study of preservice teachers teaching with technology. Journal of Technology and Teacher Education, 18(3), 415-440. Chesapeake, VA: SITE. Jaschik, S. & Lederman, D. (2013, August). The 2012 Inside Higher Ed survey of faculty attitudes on technology. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/survey/survey-faculty-attitudes-technology Jayaram, K., Moffit, A., & Scott, D. (2012). Breaking the habit of ineffective professional development. McKinsey on Society. Retrieved from https://mckinseyonsociety.com/breakingthe-habit-of-ineffective-professionaldevelopment-forteachers/
Jeffcoate, J. (2010). How postgraduate students engage with online course material and activities. Innovation in Teaching and Learning in Information and Computer Sciences, 9(1), 42-52. doi:10.11120/ital.2010.09010042
Johanneson, M. (2013). The role of virtual learning environments in a primary school context: An analysis of inscription of assessment practices. British Journal of Educational Technology, 44, 2-312. doi:10.1111/j.1467-8535.2012.01296.x
Johnson, B., & Christensen, L. B. (2014). Educational research: Quantitative, qualitative, and mixed approaches. Los Angeles, CA: Sage.
Johnson, H., & Mejia, M. C. (2014, May). Online learning and student outcomes in California's community colleges. Public Policy Institute of California. Retrieved from http://www.ppic.org/content/pubs/rb/RB_514HJRB.pdf
Johnstone, S. M. (2007). Advancing campus efficiencies: A companion for campus leaders in the digital era. Bolton, MA: Wiley, John & Sons.
Keegan, D. (1990). The foundations of distance education (2nd ed.). London, UK: Routledge Academic.
Keengwe, J., & Kidd, T. T. (2010). Towards best practices in online learning and teaching in higher education. MERLOT Journal of Online Learning and Teaching, 6(2), 533-541. Retrieved from http://jolt.merlot.org/vol6no2/keengwe_0610.pdf
Keppel, G., & Zedeck, S. (1989). Data analysis for research designs: Analysis of variance and multiple regression/correlation approaches. New York, NY: W. H. Freeman and Company.
Kereluik, K., Mishra, P., Fahnoe, C., & Terry, L. (2013). What knowledge is of most worth: Teacher knowledge for 21st century learning. Journal of Digital Learning in Teacher Education, 29(4), 127-140. doi:10.1080/21532974.2013.10784716
Kezar, A., & Lester, J. (2009). Supporting faculty grassroots leadership. Research in Higher Education, 50(7), 715-740. doi:10.1007/s11162-009-9139-6
Ko, S., & Rosen, S. (2001). Teaching online: A practical guide. Boston, MA: Houghton Mifflin.
Kolb, D. A. (1981). Learning styles and disciplinary differences. The Modern American College, 232-255.
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall.
Koehler, M., & Mishra, P. (2009). What is Technological Pedagogical Content Knowledge (TPACK)? Contemporary Issues in Technology and Teacher Education, 9(1). Association for the Advancement of Computing in Education (AACE).
Koehler, M. J., Mishra, P., Kereluik, K., Shin, T. S., & Graham, C. R. (2014). The technological pedagogical content knowledge framework. In Handbook of research on educational communications and technology (pp. 101-111). New York, NY: Springer.
Kolowich, S. (2010, July). Buying local, online. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2010/07/23/online.
Kothari, C. R. (2007). Quantitative techniques. New Delhi, IN: UBS Publishers, Ltd.
Kramer, A. L., Gloeckner, G. W., & Jacoby, D. (2014). Roads scholars: Part-time faculty job satisfaction in community colleges. Community College Journal of Research and Practice, 38(4), 287-299. doi:10.1080/10668926.2010.485005 Lacey, K. (2013, June). Administrators and Faculty Split on Online Learning’s Value. University Business. Retrieved from http://www.universitybusiness.com/article/administrators-and-faculty-splitonline-learning%E2%80%99s-value Lake, E. (2003). Course development cycle time: A framework for continuous process improvement. Innovative Higher Education, 28(1), 21-33. doi:10.1023/a:1025411517749 Langen, J. M. (2011). Evaluation of adjunct faculty in higher education institutions. Assessment & Evaluation in Higher Education, 36(2), 185-196. doi:10.1080/02602930903221501 LaPointe Terosky, A., & Heasley, C. (2015). Supporting online faculty through a sense of community and collegiality, Online Learning, 19(3), 147-161. Leedy, P. D., & Ormrod, J. E. (2010). Practical research: Planning and design (9th ed.). Upper Saddle River, NJ: Prentice Hall. LeFebvre, L. (2008). Demographics, employment motivations, and roles of part-time faculty at virtual universities. New Directions for Higher Education, 3, 37-44. doi:10.1002/he.311 Levin, J. S., & Hernandez, V. M. (2014). Divided identity: Part-time faculty in public colleges and universities. Review of Higher Education, 37(4), 531-557. doi:10.1353/rhe.2014.0033
Lieberwitz, R. L. (2007). Faculty in the corporate university: Professional identity, law and collective action. Cornell Journal of Law and Public Policy, 16(2). Retrieved from http://www.lawschool.cornell.edu/research/JLPP/upload/Lieberwitz.pdf
LoBasso, L. M. (2013). An investigation into the training of instructors of online graduate education courses at institutions of higher education (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3603016)
Lonn, S., & Teasley, S. D. (2009). Saving time or innovating practice: Investigating perceptions and uses of Learning Management Systems. Computers & Education, 53(3), 686-694. doi:10.1016/j.compedu.2009.04.008
Lye, L. T. (2013). Opportunities and challenges faced by private higher education institution using the TPACK model in Malaysia. Procedia-Social and Behavioral Sciences, 91, 294-305. doi:10.1016/j.sbspro.2013.08.426
Marczyk, G., DeMatteo, D., & Festinger, D. (2005). Essentials of research design and methodology. Hoboken, NJ: John Wiley & Sons.
Mayadas, A., Bourne, J., & Bacsich, P. (2009). Online education today. Science, 323(5910), 85-89. doi:10.1126/science.116887
McKee, C. W., Johnson, M., Ritchie, W. F., & Tew, W. M. (2013). Professional development of the faculty: Past and present. New Directions for Teaching & Learning, 2013(133), 15-20. doi:10.1002/tl.20042
McKerlich, R., Riis, M., Anderson, T., & Eastman, B. (2011). Student perceptions of teaching presence, social presence and cognitive presence in a virtual world.
MERLOT Journal of Online Learning and Teaching, 7(3). Retrieved from http://jolt.merlot.org/vol7no3/mckerlich_0911.pdf McLean, J. (2006). Forgotten faculty: Stress and job satisfaction among distance educators. Online Journal of Distance Learning Administration, 9(2). Retrieved from http://www.westga.edu/~distance/ojdla/ McQuiggan, C. A. (2012). Faculty development for online teaching as a catalyst for change. Journal of Asynchronous Learning Networks, 16, 27–61. Menchaca, M. P., & Bekele, T. A. (2008). Learner and instructor identified success factors in distance education. Distance Education, 29(3), 231-252. doi:10.1080/01587910802395771 Meyer, K. A., & Murrell, V. S. (2014). A national survey of faculty development evaluation outcome measures and procedures. Online Learning, 18(3). Retrieved from http://olj.onlinelearningconsortium.org/index.php/olj/article/view/450 Miles, J., & Shevlin, M. (2001). Applying regression and correlation: A guide for students and researchers. Thousand Oaks, CA: Sage. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108, 1017–1054. doi:10.1111/j.1467-9620.2006.00684.x Mishra, P., Koehler, M. J., & Henriksen, D. (2011). The seven trans-disciplinary habits of mind: Extending the TPACK framework towards 21st century learning. Educational Technology, 51(2), 22. Retrieved from http://danahhenriksen.com/wp-content/uploads/2014/07/Perceiving.pdf
Mishra, P., Koehler, M., & Kereluik, K. (2009). The song remains the same: Looking back to the future of educational technology. TechTrends: Linking Research & Practice to Improve Learning, 53(5), 48-53. doi:10.1007/s11528-009-0325-3
Monks, J. (2007). The relative earnings of contingent faculty in higher education. Journal of Labor Research, 28(3), 487-501. doi:10.1007/s12122-007-9002-5
Monroe, R. (2011). Instructional design and online learning: A quality assurance study (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3489828)
Moore, M. G. (2013). Handbook of distance education (3rd ed.). New York, NY: Routledge Academic.
Moore, J. L., Dickson-Deane, C., & Galyen, K. (2011). E-learning, online learning, and distance learning environments: Are they the same? Internet and Higher Education, 14(2), 129-135. doi:10.1016/j.iheduc.2010.10.001
Moser, F. Z. (2007). Faculty adoption of educational technology. Educause Quarterly, 30, 66-69. Retrieved from http://www.educause.edu/node/634
Motte, K. (2013). Strategies for online educators. Turkish Online Journal of Distance Education, 14(2), 257-266. Retrieved from http://tojde.anadolu.edu.tr
National Education Association (NEA). (2011). Standards for professional learning. Retrieved from http://www.nea.org/home/48345.htm
Newton, G. D., & Hagemeier, N. E. (2011). A curriculum development simulation in a graduate program. American Journal of Pharmaceutical Education, 75(9), 184. doi:10.5688/ajpe759184
Neuman, W.L. (2011). Social research methods: Qualitative and quantitative approaches (7th ed.). Boston, MA: Allyn & Bacon. Niess, M. L. (2011). Investigating TPACK: Knowledge growth in teaching with technology. Journal of Educational Computing Research, 44(3), 299-317. doi: 10.2190/EC.44.3.c Nishikant, S. (2009). The paradigm shift for adult education: from educational slavery to learning freedom of human brain with synaptic learning. In T. Kidd (Ed.), Online education and adult learning: New frontiers for teaching practices. Hershey, PA: IGI Global. Nistor, N., & Neubauer, K. (2010). From participation to dropout: Quantitative participation patterns in online university courses. Computers & Education, 55(2), 663-672. doi:10.1016/j.compedu.2010.02.026. North American Council for Online Learning. (2009). Retrieved from http://www.nacol.org Online Learning Consortium (OLC) (2011). 2011 Survey of Online Learning. Retrieved from http://onlinelearningconsortium.org/ Online Learning Consortium (OLC) (2014). 2011 Survey of Online Learning. Retrieved from http://onlinelearningconsortium.org/ Ormrod, J.E. (2013). Educational Psychology: Developing Learners (8th ed.). Boston, MA: Pearson. Özgün-Koca, A., Meagher, M., & Edwards, M. T. (2010). Preservice teachers’ emerging TPACK in a technology-rich methods class. The Mathematics Educator, 19(2), 10-20. Retrieved from
http://tme.coe.uga.edu/wpcontent/uploads/2012/08/v19n2_OzgunKoca-MeagherEdwards.pdf Özmantar, M., Akkoç, H., Bingölbali, E., Demir, S., & Ergene, B. (2010). Preservice mathematics teachers' use of multiple representations in technology-rich environments. Eurasia Journal of Mathematics, Science & Technology Education, 6(1), 19-36. Pagano, R. R. (2010). Understanding statistics in the behavioral sciences (9th ed.). Belmont, CA: Wadsworth, Cengage Learning. Pallant, J. (2010). SPSS survival manual: A step by step guide to data analysis using SPSS (4th ed.). Maidenhead: Open University Press/McGraw-Hill. Palloff, R. M., & Pratt, K. (2003). The virtual student: A profile and guide to working with online learners. San Francisco, CA: Jossey-Bass. Palloff, R. M., & Pratt, K. (2007). Building online learning communities. San Francisco, CA: Jossey-Bass. Palloff, R. M., & Pratt, K. (2011). The excellent online instructor: Strategies for professional development. San Francisco, CA: Jossey-Bass. Palomba, C.A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass. Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students (2nd ed.). San Francisco, CA: Jossey-Bass. Pastore, R., & Carr-Chellman, A. (2009). Motivations for residential students to participate in online courses. Quarterly Review of Distance Education, 10(3), 263-277.
Pereira, A., Oliveira, I., Tinoca, L., Amante, L., de Jesus Relvas, M., Pinto, M. C. T., & Moreira, D. (2011). Evaluating continuous assessment quality in competencebased education online: the case of the e-folio. European Journal of Open, Distance and E-Learning. Retrieved from http://www.eurodl.org/ Peterson, E. R., & Irving, S. E. (2008). Secondary school students' conceptions of assessment and feedback. Learning and Instruction, 18(3), 238-250. doi:10.1016/j.learninstruc.2007.05.001 Piña, A. A. & Bohn, L. (2014). Assessing online faculty: More than student surveys and design rubrics. Quarterly Review of Distance Education, 15(3), 25-34, 37-48. Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879-903. Polly, D., Mims, C., Shepherd, C. E., & Inan, F. (2010). Evidence of impact: Transforming teacher education with preparing tomorrow's teachers to teach with technology (PT3) grants. Teaching and Teacher Education: An International Journal of Research and Studies, 26(4), 863-870. doi:10.1016/j.tate.2009.10.024 Porras-Hernandez, L. H., & Salinas-Amescua, B. (2013). Strengthening TPACK: A broader notion of context and the use of teacher’s narratives to reveal knowledge construction. Journal of Educational Computing Research, 48, 223-244. doi:10.2190/ec.48.2.f Putman, S., Smith, L., & Cassady, J. (2009). Literacy research and instruction. Association of Literacy Educators and Researchers. doi:10.1080/19388070802251988
Puzziferro, M. (2005). Managing virtual adjunct faculty: Applying the seven principles of good practice. Online Journal of Distance Learning Administration, 8(2). Retrieved from http://www.westga.edu/~distance/ojdla/summer82/schnitzer82.htm Radford, A. W. (2011). Learning at a Distance: Undergraduate Enrollment in Distance Education Courses and Degree Programs (NCES 2012-154). U.S. Department of Education. Washington, DC: National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubsearch. Rastgoo, A., & Namvar, Y. (2010). Assessment approaches in virtual learning. The Turkish Online Journal of Distance Education, 11. Rickard, W. (2010). The efficacy (and inevitability) of online learning in higher education. Boston, MA: Pearson. Roby, T., Ashe S., Singh, N., & Clark, C. (2013). Shaping the online experience: How administrators can influence student and instructor perceptions through policy and practice. The Internet and Higher Education. 17, 29-36. doi: 10.1016/j.iheduc.2012.09.004 Rowh, M. (2007, October). E-Learning: The anytime, anywhere option. Career World, 36(2), 22-25. Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2010). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123-149. doi:10.1080/15391523.2009.10782544
Schubert-Irastorza, C., & Fabry, D. L. (2011). Improving student satisfaction with online faculty performance. Journal of Research in Innovative Teaching, 4(1), 168-179. Retrieved from http://www.nu.edu/assets/resources/pageResources/journal-ofresearch-in-innovative-teaching-volume-4.pdf Schullo, S., Hilbelink, A., Venable, M., & Barron, A. (2007). Selecting a virtual classroom system: Elluminate Live vs Macromedia Breeze (Adobe Connect Professional). MERLOT Journal of Online Learning and Teaching, 3(4), 331-345. Retrieved from http://jolt.merlot.org/vol3no4/hilbelink.htm Schulte, M. (2010). Faculty perceptions of technology distance education transactions: Qualitative outcomes to inform teaching practices. The Journal of Educators Online, 7(2). Retrieved from http://www.thejeo.com/ Seaman, J. (2009). Online learning as a strategic asset. Volume II: The paradox of faculty voices: Views and experiences with online learning. Washington, DC and Babson Park, MA: Association of Public and Land-grant Universities and Babson Survey Research Group. Retrieved from http://www.aplu.org/document.doc?id=1879 Shammot, M. M. (2014). The role of human resources management practices represented by employee's recruitment and training and motivating in realization competitive advantage. International Business Research, 7(4), 55-72. doi: 10.5539/ibr.v7n4p55 Shattuck, K. I., Zimmerman, W. A., & Adair, D. (2014). Continuous improvement of the QM Rubric and review processes: Scholarship of integration and application. Internet Learning, 3(1). doi: 10.18278/il.3.1.3
Shea, P., & Bidjerano, T. (2010). Learning presence: Towards a theory of self-efficacy, self-regulation, and the development of a communities of inquiry in online and blended learning environments. Computers & Education, 55(4), 1721-1731. doi:10.1016/j.compedu.2010.07.017 Shea, P., Sau Li, C., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and web-enhanced college courses. The Internet and Higher Education, 9(3), 175-190. doi: 10.1016/j.iheduc.2006.06.005 Sher, A. (2009). Assessing the relationship of student-instructor and student-student interaction to student learning and satisfaction in the Web-based learning environment. Journal of Interactive Online Learning, 8(2), 102–120. Retrieved from http://www.ncolr.org/jiol/issues/pdf/8.2.1.pdf Shiffman, C. (2009). The emerging academician: The rise of the online adjunct faculty. (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3344730) Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. doi:10.3102/0013189x015002004 Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57, 1-23. doi:10.17763/haer.57.1.j463w79r56455411 Simon, M. K. (2011). Dissertation and scholarly research: Recipes for success. Seattle, WA: Dissertation Success, LLC. Sixl-Daniell, K., Williams, J.B., & Wong, A. (2006). A quality assurance framework for recruiting, training and retaining virtual adjunct faculty. Online Journal of
Distance Learning Administration, 9(1). Retrieved from http://www.westga.edu/~distance/ojdla/spring91/daniell91.htm Spector, P. (2006). Method variance in organizational research: Truth or urban legend? Organizational Research Methods, 9, 221- 232. doi: 10.1177/1094428105284955 Stangor, C. (2011). Research methods for the behavioral sciences (4th ed.). Mountain View, CA: Cengage. Steinberg, W. J. (2011). Statistics alive (2nd ed.). Thousand Oaks, CA: Sage. Stevens, J. P. (2009). Applied multivariate statistics for the social sciences (5th ed.). Mahwah, NJ: Routledge Academic. Suskie, L. A. (1996). Questionnaire survey research: What works (2nd ed.). Tallahassee, FL: The Association for Institutional Research. Swan, K., & Ice, P. (2010). The community of inquiry framework ten years later: Introduction to the special issue. Internet and Higher Education, 13(1-2), 1-4. doi:10.1016/j.iheduc.2009.11.003 Tabachnick, B. G., & Fidell, L. S. (2012). Using multivariate statistics (6th ed.). Boston, MA: Pearson. Taylor, B. & Holley, K. (2009). Providing academic and support services to students enrolled in online degree programs. College Student Affairs, 28(1), 81–102. Retrieved from http://www.sacsa.org/?page=18 Thomsett-Scott, B. & May, F. (2009). How may we help you? Online education faculty tell us what they need from libraries and librarians. Journal of Library Administration, 49(1-2), 111–135. doi:10.1080/01930820802312888
Tipple, R. (2010, Spring). Effective leadership of online adjunct faculty. Online Journal of Distance Learning Administration, 13(1). Retrieved from http://www.westga.edu/~distance/ojdla/spring131/tipple131.html Toch, T. (2010, April). In an era of online learning, schools still matter. Phi Delta Kappan, 91(7), 72-73. Retrieved from http://pdkintl.org/publications/kappan/ United Nations Educational, Scientific, and Cultural Organization (UNESCO) (1998, July). World Conference on Higher Education. Paris, France. Retrieved from http://www.unesco.org/education/educprog/wche/principal/ag-21-e.html U.S. Department of Education, National Center for Education Statistics (NCES) (2011). Learning at a distance. Undergraduate enrollment in distance education courses and degree programs. Retrieved from http://nces.ed.gov/pubs2012/2012154.pdf U.S. Department of Education, National Center of Education Statistics (NCES) (2014). Enrollment in Distance Education: Web tables. Retrieved from http://nces.ed.gov/pubs2014/2014023.pdf Vogt, P. W. (2007). Quantitative Research Methods for Professionals. Boston, MA: Allyn and Bacon. Voogt, J., Fisser, P., Pareja, N., Tondeur, J., & van Braak, J. (2013). Technological pedagogical content knowledge (TPACK): A review of the literature. Journal of Computer Assisted Learning, 29, 109–120. doi:10.1111/j.1365-2729.2012.00487.x Ward, C. (2011, November). The development of technological pedagogical content knowledge (TPACK) in instructors using Quality Matters training, rubric, and
peer collaboration. [2010 QM Research Grant]. Presentation at the 3rd Annual Quality Matters Conference, Baltimore, MD. Ward, M. E., Peters, G., & Shelley, K. (2010). Student and faculty perceptions of the quality of online learning experiences. International Review of Research in Open and Distance Learning, 11(3), 57-77. Retrieved from http://www.irrodl.org/index.php/irrodl Weimer, M. (1990). Improving College Teaching: Strategies for Developing Instructional Effectiveness. San Francisco, CA: Jossey-Bass. West, E. (2010). Managing adjunct professors: Strategies for improved performance. Academy of Educational Leadership Journal, 14(4), 21-36. Retrieved from http://www.alliedacademies.org/affiliate-academies-ael.php Wilcox, R. R. (2008). Comparing dependent Pearson correlations: The overlapping case. Unpublished technical report, Department of Psychology, University of Southern California, Los Angeles, CA. Wang, Q. (2009). Designing a web-based constructivist learning environment. Interactive Learning Environments, 17(1), 1-13. doi:10.1080/10494820701424577 Xin, C. (2012). A critique of the community of inquiry framework. Journal of Distance Education, 26(1), 1-13. Retrieved from http://www.ijede.ca/index.php/jde Xu, D., & Jaggars, S. S. (2013). Adaptability to online learning: Differences across types of students and academic subject areas. Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/publications/adaptability-to-onlinelearning.html
Appendix A Permission to Use Existing Survey
Appendix B Premises, Recruitment, and Name (PRN) Use Permission
Appendix C Informed Consent Agreement
INFORMED CONSENT: PARTICIPANTS 18 YEARS OF AGE AND OLDER
Dear Participant,
My name is Wadad Kaaki and I am a student at the University of Phoenix working on a doctorate degree in Higher Education Administration. I am doing a research study entitled The relationship between part-time online faculty's technological, pedagogical, and content knowledge and student grades. The purpose of the research study is to examine the extent of the relationship between the technological, pedagogical, and content knowledge of part-time online faculty working at a for-profit virtual institution and student grades considering scores of learners attending the courses taught by the same instructors.
Your participation will involve less than 15 minutes of your time to take an online survey. You can decide to be a part of this study or not. Once you start, you can withdraw from the study at any time without any penalty or loss of benefits. The results of the research study may be published but your identity will remain confidential and your name will not be made known to any outside party.
In this research, there are no foreseeable risks to you. Although there may be no direct benefit to you, a possible benefit from your being part of this study is to improve and enhance online instruction and find innovative ways of improving online instructional skills.
If you have any questions about the research study, please call me at xxx-xxx-xxxx (phone) or xxxx@xxxx (e-mail). For questions about your rights as a study participant, or any concerns or complaints, please contact the University of Phoenix Institutional Review Board via e-mail at
[email protected]. As a participant in this study, you should understand the following:
1. You may decide not to be part of this study or you may want to withdraw from the study at any time. If you want to withdraw, you can do so without any problems.
2. Your identity will be kept confidential and all responses are anonymous and cannot identify you.
3. Wadad Kaaki, the researcher, has fully explained the nature of the research study and has answered all of your questions and concerns.
4. Data will be kept in a secure and locked area. The data will be kept for three years, and then destroyed.
5. The results of this study may be published.
"By clicking on the survey link in this form, you agree that you understand the nature of the study, the possible risks to you as a participant, and how your identity will be kept confidential. When you click on the link in this form, this means that you are 18 years old or older and that you give your permission to volunteer as a participant in the study that is described here."
Survey Link:
ID Code:
Appendix D TPACK Survey-Electronic
Appendix E Electronic Survey Host Website Privacy Policy
SurveyMonkey® Privacy Policy
Effective July 7, 2015
This privacy policy applies to all the products, services and websites offered by SurveyMonkey Inc., SurveyMonkey Europe Sarl, SurveyMonkey Europe, SurveyMonkey Brasil Internet Ltda. and their affiliates, except where otherwise noted. We refer to those products, services and websites collectively as the "services" in this policy. Some services have supplementary privacy statements that explain in more detail our specific privacy practices in relation to them. Unless otherwise noted, our services are provided by SurveyMonkey Inc. inside of the United States, by SurveyMonkey Brasil Internet Ltda. inside of Brazil, and by SurveyMonkey Europe everywhere else.
TRUSTe. SurveyMonkey Inc. has been awarded TRUSTe’s Privacy Seal signifying that this privacy policy and our privacy practices have been reviewed by TRUSTe, an independent third party, for compliance with TRUSTe’s program requirements, which include transparency, accountability, and choice regarding the collection and use of your personal information. European Safe Harbors. SurveyMonkey Inc. complies with the US-EU and US-Swiss Safe Harbor Frameworks developed by the U.S. Department of Commerce regarding the collection, use and retention of personal information from EU member countries and Switzerland. We have certified, and TRUSTe has verified, that we adhere to the Safe Harbor Privacy Principles of notice, choice, onward transfer, security, data integrity, access and enforcement. View our certification on the U.S. Department of Commerce’s Safe Harbor website. Questions? For questions regarding our privacy policy or practices, contact SurveyMonkey by mail at 101 Lytton Avenue, Palo Alto, CA 94301, USA, or electronically through this form. You may contact TRUSTe if feel your question has not been satisfactorily addressed. Key Privacy Points: The Stuff You Really Care About
IF YOU CREATE SURVEYS: Your survey data is owned by you. Not only that, but SurveyMonkey treats your surveys as if they were private. We don’t sell them to anyone and we don’t use the survey responses you collect for purposes unrelated to you or our services, except in a limited set of circumstances (e.g. if we are compelled by a subpoena, or if you’ve given us permission to do so). We safeguard respondents’ e-mail addresses. To make it easier for you to invite people to take your surveys via e-mail, you may upload lists of e-mail addresses, in which case SurveyMonkey acts as a mere custodian of that data. We don’t sell these email addresses and we use them only as directed by you and in accordance with this policy. The same goes for any e-mail addresses collected by your surveys. We hold your data securely. Read our Security Statement for more information. Survey data is stored on servers located in the United States. More information about this is available if you are located in Canada or Europe. SurveyMonkey will process your survey data on your behalf and under your instructions (including the ones agreed to in this privacy policy). IF YOU ANSWER SURVEYS: Surveys are administered by survey creators. Survey creators conduct tens of thousands of surveys each day using our services. We host the surveys on our websites and collect the responses that you submit to the survey creator. If you have any questions about a survey you are taking, please contact the survey creator directly as SurveyMonkey is not responsible for the content of that survey or your responses to it. The survey creator is usually the same person that invited you to take the survey and sometimes they have their own privacy policy. Are your responses anonymous? This depends on how the survey creator has configured the survey. Contact them to find out, or click here to read more about respondent anonymity. We don’t sell your responses to third parties. SurveyMonkey doesn’t sell or share your survey responses with third party advertisers or marketers (although the survey creator might, so check with them). SurveyMonkey merely acts as a custodian on behalf of the survey creator who controls your data, except as further described in this privacy policy with regard to public surveys. If you think a survey violates our Terms of Use or may be engaging in illegal activity, click here to report it. Survey Creators & Survey Respondents SurveyMonkey is used by survey creators (people who create and conduct surveys online) and survey respondents (people who answer those surveys). The information we
receive from survey creators and survey respondents and how we handle it differs, so we have split this privacy policy into two parts. Click on the one that applies to you: Privacy for Survey Creators Privacy for Survey Respondents PRIVACY FOR SURVEY CREATORS What information does SurveyMonkey collect? When you use SurveyMonkey, we collect information relating to you and your use of our services from a variety of sources. These are listed below. The sections afterward describe what we do with this information. Information we collect directly from you Registration information. You need a SurveyMonkey account before you can create surveys on SurveyMonkey. When you register for an account, we collect your username, password and e-mail address. If you choose to register by using a third party account (such as your Google or Facebook account), please see “Information from third parties” below. Billing information. If you make a payment to SurveyMonkey, we require you to provide your billing details, such as a name, address, e-mail address and financial information corresponding to your selected method of payment (e.g. a credit card number and expiration date or a bank account number). If you provide a billing address, we will regard that as the location of the account holder. Account settings. You can set various preferences and personal details on pages like your account settings page. For example, your default language, timezone and communication preferences (e.g. opting in or out of receiving marketing e-mails from SurveyMonkey). Address book information. We allow you to import e-mail addresses into an Address Book and associate e-mail addresses with e-mail invitation collectors so you can easily invite people to take your surveys via e-mail. We don’t use these e-mail addresses for our own purposes or e-mail them except at your direction. Survey data. We store your survey data (questions and responses) for you. Other data you intentionally share. We may collect your personal information or data if you submit it to us in other contexts. For example, if you provide us with a testimonial, or participate in a SurveyMonkey contest.
We don’t share or abuse your respondents’ e-mail addresses. Rest assured, SurveyMonkey will not e-mail your survey respondents or people in your Address Book except at your direction. We definitely don’t sell those e-mail addresses to any third parties. Information we collect about you indirectly or passively when you interact with us Usage data. We collect usage data about you whenever you interact with our services. This may include which webpages you visit, what you click on, when you performed those actions, and so on. Additionally, like most websites today, our web servers keep log files that record data each time a device accesses those servers. The log files contain data about the nature of each access, including originating IP addresses. Device data. We collect data from the device and application you use to access our services, such as your IP address and browser type. We may also infer your geographic location based on your IP address. Referral data. If you arrive at a SurveyMonkey website from an external source (such as a link on another website or in an e-mail), we record information about the source that referred you to us. Information from third parties. We may collect your personal information or data from third parties if you give permission to those third parties to share your information with us. For example, you have the option of registering and signing into SurveyMonkey with your Facebook account details. If you do this, the authentication of your logon details is handled by Facebook and we only collect information about your Facebook account that you expressly agree to share with us at the time you give permission for your SurveyMonkey account to be linked to your Facebook account. Information from page tags. We use third party tracking services that employ cookies and page tags (also known as web beacons) to collect aggregated and anonymized data about visitors to our websites. This data includes usage and user statistics. E-mails sent by SurveyMonkey or by users through our services may include page tags that allow the sender to collect information about who opened those e-mails and clicked on links in them. We do this to allow the e-mail sender to measure the performance of their e-mail messaging and to learn how to improve e-mail deliverability and open rates. 2. How does SurveyMonkey use the information we collect? We treat your survey questions and responses as information that is private to you. We know that, in many cases, you want to keep your survey questions and responses (which we collectively refer to as “survey data”) private. Unless you decide to share your survey questions and/or responses with the public (such as by making
the survey questions and responses available via a public link), we do not use your survey data other than as described in this privacy policy or unless we have your express consent. We do not sell your survey data to third parties without your permission Generally, we use the information we collect from you in connection with providing our services to you and, on your behalf, to your survey respondents. For example, specific ways we use this information are listed below. (See the next section of this privacy policy to see who we share your information with.) However, this privacy policy is not intended to restrict our use of survey questions or responses that you have chosen to make available online through a public link. To provide you with our services. o
This includes providing you with customer support, which requires us to access your information to assist you (such as with survey design and creation or technical troubleshooting).
o Certain features of our services use the content of your survey questions and responses and your account information in additional ways. Feature descriptions will clearly identify where this is the case. You can avoid the use of your survey data in this way by simply choosing not to use such features. For example, by using our Question Bank feature to add questions to your surveys, you also permit us to aggregate the responses you receive to those questions with responses received by other Question Bank users who have used the same questions. We may then report statistics about the aggregated (and deidentified) data sent to you and other survey creators.
o If you choose to link your SurveyMonkey account to a third party account (such as your Google or Facebook account), we may use the information you allow us to collect from those third parties to provide you with additional features, services, and personalized content.
To manage our services. We internally use your information, including certain survey data, for the following limited purposes:
o To monitor, maintain, and improve our services and features. We internally perform statistical and other analysis on information we collect (including usage data, device data, referral data, question and response data and information from page tags) to analyze and measure user behavior and trends, to understand how people use our services, and to monitor, troubleshoot and improve our services, including to help us evaluate or devise new features. We may use your information for internal purposes designed to keep our services secure and operational, such as for troubleshooting and testing purposes, and for service improvement, marketing, research and development purposes.
o To enforce our Terms of Use.
o To prevent potentially illegal activities.
o To screen for and prevent undesirable or abusive activity. For example, we have automated systems that screen content for phishing activities, spam, and fraud.
To create new services, features or content. We may use your survey data and survey metadata (that is, data about the characteristics of a survey) for our internal purposes to create and provide new services, features or content. In relation to survey metadata, we may look at statistics like response rates, question and answer word counts, and the average number of questions in a survey and publish interesting observations about these for informational or marketing purposes. When we do this, neither individual survey creators nor survey respondents will be identified or identifiable unless we have obtained their permission. To facilitate account creation and the logon process. If you choose to link your SurveyMonkey account to a third party account (such as your Google or Facebook account), we use the information you allowed us to collect from those third parties to facilitate the account creation and login process. For more information, click here. To contact you about your service or account. We occasionally send you communications of a transactional nature (e.g. service-related announcements, billing-related matters, changes to our services or policies, a welcome e-mail when you first register). You can’t opt-out of these communications since they are required to provide our services to you. To contact you for marketing purposes. We will only do this if you have consented to our contacting you for this purpose. For example, during the account registration process we will ask for your permission to use your information to contact you for promotional purposes. You may opt-out of these communications at any time by clicking on the “unsubscribe” link in them, or changing the relevant setting on your My Account page. To respond to legal requests and prevent harm. If we receive a subpoena or other legal request, we may need to inspect the data we hold to determine how to respond. 3. With whom do we share or disclose your information? We don’t sell your survey data, unless you expressly permit us to! When might we disclose your survey data to third parties? Only for a limited number of reasons. We share your information with our service providers who help us to provide our services to you. We contractually bind these service providers to keep your information confidential and to use it only for the purpose of providing their services. For example, we use payment processors who help us to process credit card
transactions. By using our services, you authorize SurveyMonkey to sub-contract in this manner on your behalf.In rare circumstances, we may share information if required by law, or in a corporate restructuring or acquisition context (see below for more details). Sharing your surveys with the public. By default, your surveys are private. You are able to control who can take your survey by changing your collector settings. For example, surveys can be made completely public (and indexable by search engines),password protected, or distributed to a restricted list of people. You can also choose to share your survey responses instantly or at a public location. We recognize that you have entrusted us with safeguarding the privacy of your information. Because that trust is very important to us, the only time we will disclose or share your personal information or survey data with a third party is when we have done one of three things, in accordance with applicable law: (a) given you notice, such as in this privacy policy; (b) obtained your express consent, such as through an opt-in checkbox; or (c) de-identified or aggregated the information so that individuals or other entities cannot reasonably be identified by it. Where required by law, we will obtain your express consent prior to disclosing or sharing any personal information. We may disclose: Your information to our service providers. We use service providers who help us to provide you with our services. We give relevant persons working for some of these providers access to your information, but only to the extent necessary for them to perform their services for us. We also implement reasonable contractual and technical protections to ensure the confidentiality of your personal information and data is maintained, used only for the provision of their services to us, and handled in accordance with this privacy policy. Examples of service providers include payment processors, hosting services, e-mail service providers, and web traffic analytics tools. Your account details to your billing contact. If your details (as the account holder) are different to the billing contact listed for your account, we may disclose your identity and account details to the billing contact upon their request (we also will usually attempt to notify you of such requests). By using our services and agreeing to this privacy policy, you consent to this disclosure. Your e-mail address to your organization. If the e-mail address under which you’ve registered your account belongs to or is controlled by an organization, we may disclose that e-mail address to that organization in order to help it understand who associated with that organization uses SurveyMonkey, and to assist the organization with its enterprise accounts. (Please do not use a work e-mail address for our services unless you are authorized to do so, and are therefore comfortable with this disclosure.)
Aggregated or de-identified information to third parties to improve or promote our services. No individuals can reasonably be identified or linked to any part of the information we share with third parties to improve or promote our services. The presence of a cookie to advertise our services. We may ask advertising networks and exchanges to display ads promoting our services on other websites. We may ask them to deliver those ads based on the presence of a cookie, but in doing so will not share any other personal information with the advertiser. Our advertising network partners may use cookies and page tags or web beacons to collect certain non-personal information about your activities on this and other websites to provide you with targeted advertising based upon your interests. If you do not wish to have this information used for the purpose of serving you such targeted ads, you may opt-out at http://preferences-mgr.truste.com/. You will continue to receive generic ads. Your information if required or permitted by law. We may disclose your information as required or permitted by law, or when we believe that disclosure is necessary to protect our rights, and/or to comply with a judicial proceeding, court order, subpoena, or other legal process served on us. Your information if there’s a change in business ownership or structure. If ownership of all or substantially all of our business changes, or we undertake a corporate reorganization (including a merger or consolidation) or any other action or transfer between SurveyMonkey entities, you expressly consent to SurveyMonkey transferring your information to the new owner or successor entity so that we can continue providing our services. If required, SurveyMonkey will notify the applicable data protection agency in each jurisdiction of such a transfer in accordance with the notification procedures under applicable data protection laws. Information you expressly consent to be shared. For example, we may expressly request your permission to provide your contact details to third parties for various purposes, including to allow those third parties to contact you for marketing purposes. (You may later revoke your permission, but if you wish to stop receiving communications from a third party to which we provided your information with your permission, you will need to contact that third party directly.) 4. What are your rights to your information? You can: Update your account details. You can update your registration and other account information on your My Account page. Information is updated immediately. Access and correct your personal information. You may access and correct the personal information that SurveyMonkey holds about you. This right may be 167
exercised by visiting your My Account page or by contacting customer support. This right is subject to some exceptions, such as where giving you access would have an unreasonable impact on the privacy of other individuals. We will respond to your request for access or correction within a reasonable time and, where reasonable and practicable to do so, we will provide access to your personal information in the manner requested by you. Download/backup your survey data. Depending on what subscription plan you have, we provide you with the ability to export, share and publish your survey data in a variety of formats. This allows you to create your own backups or conduct offline data analysis. See here for downloading instructions. Delete your survey data. Deleting survey data in the ways described on this page will not permanently delete survey data immediately. As long as you maintain an account with us, we may retain your deleted data for a limited time in case you delete something by accident and need to restore it (which you can request by contacting customer support). To the extent permitted by law, we will permanently delete your data if you request to cancel your account. However, if your data was previously made available to the public through a public link, additional copies of your data may remain available on the Internet even after your account has been deleted. Cancel your account. To cancel and delete your account, please contact customer support. Deleting your account will cause all the survey data in the account to be permanently deleted from our systems within a reasonable time period, as permitted by law, and will disable your access to any other services that require a SurveyMonkey account. We will respond to any such request, and any appropriate request to access, correct, update or delete your personal information within the time period specified by law (if applicable) or without excessive delay. We will promptly fulfill requests to delete personal data unless the request is not technically feasible or such data is required to be retained by law (in which case we will block access to such data, if required by law). For how long do we retain your data? We generally retain your data for as long as you have an account with us, or to comply with our legal obligations, resolve disputes, or enforce our agreements. Data that is deleted from our servers may remain as residual copies on offsite backup media for up to approximately 12 months afterward. We describe our retention practices in more detail in this FAQ 5. Security, cookies and other important information Changes to this privacy policy. We may modify this privacy policy at any time, but if we do so, we will notify you by publishing the changes on this website. If we determine the changes are material, we will provide you with additional, prominent notice as is appropriate under the circumstances, such as via e-mail or in another conspicuous manner reasonably designed to notify you. If, after being informed of these changes, you do not cancel your subscription and continue to use our services beyond the advance-notice period, you will be considered as having expressly 168
consented to the changes in our privacy policy. If you disagree with the terms of this privacy policy or any updated privacy policy, you may close your account at any time. Security. Details about SurveyMonkey’s security practices are available in our Security Statement. We are committed to handling your personal information and data with integrity and care. However, regardless of the security protections and precautions we undertake, there is always a risk that your personal data may be viewed and used by unauthorized third parties as a result of collecting and transmitting your data through the internet. Data locations. Our servers are based in the United States, so your personal information will be hosted and processed by us in the United States. Your personal information may also be processed in, or transferred or disclosed to, countries in which SurveyMonkey subsidiaries and offices are located and in which our service providers are located or have servers. You can view where our offices are located on the Office Locations page. Cookies. We use cookies on our websites. Cookies are small bits of data we store on the device you use to access our services so we can recognize repeat users. Each cookie expires after a certain period of time, depending on what we use it for. We use cookies for several reasons: o
To make our site easier to use. If you use the "Remember me" feature when you sign into your account, we may store your username in a cookie to make it quicker for you to sign in whenever you return to SurveyMonkey.
o For security reasons. We use cookies to authenticate your identity, such as confirming whether you are currently logged into SurveyMonkey.
o To provide you with personalized content. We may store user preferences, such as your default language, in cookies to personalize the content you see. We also use cookies to ensure that users can't retake certain surveys that they have already completed.
o To improve our services. We use cookies to measure your usage of our websites and track referral data, as well as to occasionally display different versions of content to you. This information helps us to develop and improve our services and optimize the content we display to users.
o To advertise to you. We, or our service providers and other third parties we work with, may place cookies when you visit our website and other websites or when you open e-mails that we send you, in order to provide you with more tailored marketing content (about our services or other services), and to evaluate whether this content is useful or effective. For instance, we may evaluate which ads are clicked on most often, and whether those clicks lead users to make better use of our tools, features and services. If you don't want to receive ads that are tailored to you based on your anonymous online activity, you may "opt-out" of many of the companies that are involved in such tailoring by going to http://www.aboutads.info. Opting out in this way does not mean you will not receive any ads; it just means that you will not receive ads from such companies that have been tailored to you based on your activities and inferred preferences.
o Google Analytics. In addition to the above, we have implemented on our websites and other services certain Google Analytics features that support Display Advertising, including re-targeting. Visitors to our websites may opt-out of certain types of Google Analytics tracking, customize the Google Display Network ads by using the Google Ad Preferences Manager and learn more about how Google serves ads by viewing its Customer Ads Help Center. If you do not wish to participate in Google Analytics, you may also download the Google Analytics opt-out browser add-on.
Click here for more details about our cookies. We don’t believe cookies are sinister, but you can still choose to remove or disable cookies via your browser. Refer to your web browser’s configuration documentation to learn how to do this. Please note that doing this may adversely impact your ability to use our services. Enabling cookies ensures a smoother experience when using our websites. By using our websites and agreeing to this privacy policy, you expressly consent to the use of cookies as described in this policy. Blogs and Forums. Our website offers publicly accessible blogs and community forums. You should be aware that any information you provide in these areas may be read, collected, and used by others who access them. We’re not responsible for any personal information you choose to submit in these areas of our site. To request removal of your personal information from our blog or community forum, contact customer support. In some cases, we may not be able to fulfill your request and we will let you know why. Online Tracking. We currently do not process or comply with any web browser’s “do not track” signal or other similar mechanism that indicates a request to disable online tracking of individual users who visit our websites or use our services (unless otherwise stated in a service-specific privacy statement). Safety of Children and COPPA. Our services are not intended for and may not permissibly be used by individuals under the age of 13. SurveyMonkey does not knowingly collect personal data from persons under 13 or allow them to register. If it comes to our attention that we have collected personal data from such a person, we may delete this information without notice. If you have reason to believe that this has occurred, please contact customer support. English version controls. Non-English translations of this privacy policy are provided for convenience. In the event of any ambiguity or conflict between translations, the English version is authoritative.
6. Additional information for European Union users SurveyMonkey provides some of its services to users in the EU through SurveyMonkey Europe, located at 2 Shelbourne Buildings, Second Floor, Shelbourne Road, Dublin 4, Ireland. “Personal data”. For users located in the EU, references to “personal information” in this policy are equivalent to what is commonly referred to as “personal data” in the EU. About IP addresses. Our servers record the incoming IP addresses of visitors to our websites (whether or not the visitor has a SurveyMonkey account) and store the IP addresses in log files. We use these log files for purposes such as system administration and maintenance, record keeping, tracking referring web sites, inferring your location, and security purposes (e.g. controlling abuse, spam and DDOS attacks). We also store IP addresses along with certain actions you take on our system. IP addresses are only linked to survey responses if a survey creator has configured a survey to collect IP addresses. By agreeing to this privacy policy, you expressly consent to SurveyMonkey using your IP address for the foregoing purposes. If you wish to opt-out from the foregoing consent to use your IP address, you must cancel your account (if you have one) or not respond to a survey if requested to do so. Data controller. SurveyMonkey Europe, whose contact information is listed above, is the data controller for registration, billing and other account information that we collect from users in the EU. However, the data controller for survey data is the survey creator. The survey creator determines how their survey questions and responses are used and disclosed. SurveyMonkey only processes such survey data in accordance with the instructions and permissions (including those given under this privacy policy) selected by the survey creator when they create and administer their survey. Accessing and correcting your personal data. You have the right to access and correct the personal information that SurveyMonkey holds about you. This right may be exercised by visiting your account’s My Account page or by contacting customer support. Your responsibilities. By using our services, you agree to comply with applicable data protection requirements when collecting and using your survey data, such as requirements to inform respondents about the specific uses and disclosures of their data. Consents By clicking “I Agree” or any other button indicating your acceptance of this privacy policy, you expressly consent to the following: You consent to the collection, use, disclosure and processing of your personal data in the manner described in this privacy policy, including our procedures relating to cookies, IP addresses and log files. 171
Our servers are based in the United States, so your personal data will be primarily processed by us in the United States. You consent to the transfer and processing of your personal data in the United States by SurveyMonkey Inc. and in the data locations identified in Section 5 by our various affiliates and service providers.. You consent and agree that we may transfer your data to data processors located in countries, including the United States, which do not have data protection laws that provide the same level of protection that exists in countries in the European Economic Area. Your consent is voluntary, and you may revoke your consent by opting out at any time. Please note that if you opt-out, we may no longer be able to provide you our services. You consent to us sharing your personal data with relevant persons working for service providers who assist us to provide our services. If you have enabled cookies on your web browser, you consent to our use of cookies as described in this privacy policy. 7. Additional information for Canadian users Please read this article for information about the U.S. Patriot Act and how it affects the personal information of Canadian users. 8. Additional information for Japanese users You agree that you are responsible for notifying the respondents of surveys that you create using our services about how SurveyMonkey may use the respondents’ survey responses and personal data as described in this privacy policy and obtaining prior consent from respondents to disclose their personal data to SurveyMonkey. 9. Additional information for Brazilian users The personal information collected, stored, used and/or processed by SurveyMonkey, as described in this privacy policy, are collected, stored, used and/or processed in accordance with Brazilian Law No. 12,965/2014. By clicking “I Agree” or any other button indicating your acceptance of this privacy policy, you expressly consent to the collection, use, storage and processing of your personal information by SurveyMonkey as described. 10. Additional information for Australian users If you are dissatisfied with our handling of your complaint or do not agree with the resolution proposed by us, you may make a complaint to the Office of the Australian Information Commissioner (OAIC) by contacting the OAIC using the methods listed on their website at http://www.oaic.gov.au. Alternatively, you may request that we pass on the details of your complaint to the OAIC directly.
Appendix F Histogram for Student Grades
Appendix G Histogram for Technological Knowledge
Appendix H Histogram for Content Knowledge
Appendix I Histogram for Pedagogical Knowledge
Appendix J Histogram for Pedagogical Content Knowledge
Appendix K Histogram for Technological Content Knowledge
Appendix L Histogram of Technological Pedagogical Knowledge
Appendix M Histogram of Technological Pedagogical Content Knowledge