The Amer. Jrnl. of Distance Education, 22: 24–45, 2008 Copyright © Taylor & Francis Group, LLC ISSN 0892-3647 print / 1538-9286 online DOI: 10.1080/08923640701713422

An Overview of Evaluative Instrumentation for Virtual High Schools


Erik W. Black, Richard E. Ferdig, and Meredith DiPietro
University of Florida

Abstract: With an increasing prevalence of virtual high school programs in the United States, a better understanding of evaluative tools available for distance educators and administrators is needed. These evaluative tools would provide opportunities for assessment and a determination of success within virtual schools. This article seeks to provide an analysis and classification of instrumentation currently available. It addresses issues regarding the limited arsenal of assessments and evaluation instrumentation for virtual schools.

Although there has been extraordinary growth in K–12 virtual school programs, assessment tools for the measurement and evaluation of key factors that equate to virtual schooling success have not kept pace. At the present time, a limited range of assessments are available for use at different stages within an online education program; few of these tools have been subjected to rigorous review and proven themselves as valid and reliable instruments. Virtual schools within the United States need a comprehensive picture of assessments in order to address several widespread concerns, including

• local, state, and federal government's failure to recognize the opportunity virtual learning can provide to students;
• unreasonable expectations about learning in online environments;
• a clear road map for virtual school improvement;
• the lack of national communication, with a shared understanding of concepts and terminology, about virtual schools;
• preparation for future increased interaction with local, state, and federal governments.

Correspondence should be sent to Erik W. Black, School of Teaching and Learning, University of Florida, 2403 Norman Hall, Gainesville, FL 32611. E-mail: [email protected]


The goal of this article is to provide an analysis and categorization of assessments for use within a distance education program and to make recommendations for future research and practice within the field of K–12 distance education assessment. The article categorizes assessments and provides examples of instruments that fall into each category. Research by Cavanaugh et al. (2004) identifies factors that influence the success of a distance education program. These factors directly correspond to the categories of assessment developed and presented within this article. The concepts represent significant issues of critical concern for the development of successful K–12 virtual schools. They also represent variables with testable outcomes.

Success Factors of Cavanaugh et al. (2004) → Corresponding Assessment Category
Abilities and disabilities of the student → Student assessment
Quality of the teacher → Teacher assessment
Demands of the content → Content/Curriculum assessment
Design of the distance learning system → Technology assessment
(not specifically addressed) → Course instance assessments
(not specifically addressed) → Other assessments

For the purposes of our conversation, and to provide a basis for the categorization of instrumentation and assessment, the factors introduced by Cavanaugh et al. (2004) have been assimilated into four specific content areas: student assessment, teacher assessment, content/curriculum assessment, and technological assessment.

Two additional factors not considered by Cavanaugh et al. (2004) have been included within our discussion. The first factor, categorized as "other," seeks to detail the involvement of other factors in a student's academic progress. Research by Litke (1998) within this category focuses on the role of parents and/or guardians in student academics. This category could also include the role of mentors, site coordinators, counselors, and virtual school administrators. The second factor, categorized as "course instance," details the unique entity of the course as a singularity. Course instance assessments focus on the role of classroom environment, community, and grades. In other words, a course assessment might evaluate the content; a course instance assessment would evaluate a situation involving a specific teacher, a group of students, a course, and a particular learning management system.

It is important to remember that in some instances, assessments converge across multiple content categories. For example, the Distance Education Learning Environment Survey (DELES) is a comprehensive assessment that incorporates measures of teacher support, student interaction, and student satisfaction. In addition, the subject of assessment is not always determined by whom the assessment is administered to; for example, several instruments are administered to students but describe the nature of communal interactions throughout the course instance.


Finally, many assessments provide the unique opportunity to look at a course instance from varying perspectives based on when they are administered during a course. For example, a content assessment can be used as a pre-course measure, mid-term evaluation, and post-hoc course evaluation, informing the test administrator of a student's preexisting knowledge, progress toward a goal, and whether the student achieved the goals of the course. Because of these capabilities, the assessment field lends itself to considerable complexity. In consideration of this complexity, a secondary purpose of this article is to clarify when and where an assessment can be used for optimal results.

STUDENT ASSESSMENT

Description

Student assessment focuses on factors related to student performance in online educational environments. Assessments within this category provide a picture of a student's readiness for learning before engaging in instruction, a method for gauging just-in-time adjustments during the course of an instructional period, and post-hoc measurements for assessment of student progress. Post-hoc assessment is useful when making comparisons between differing educational environments. Whether focused on student computer skills and technological know-how, academic capability, or psychological factors essential for success as an online student, pre-coursework assessments can provide distance educators with an understanding of who the student is and how the virtual school can best scaffold the student's success.

Distance education's extraordinary growth, precipitated by the Internet, has resulted in several alarming statistics, including exceedingly high rates of attrition and failure (Carr 2000; Carter 1996; Roblyer and Elbaum 2000). Armed with a better understanding of the student, educators can tailor curriculum to specific needs, maximizing the potential for the learner to succeed in a virtual schooling environment and eliminating issues associated with attrition.

Example #1: Content-based subject-matter assessments. Content-based subject-matter assessments provide an understanding of student capabilities within specific academic focus areas (e.g., mathematics, reading, history). Utilized to provide an accurate conceptualization of student academic abilities (including language skills for speakers of other languages), subject-matter exams provide a measure by which teachers and administrators can guide students toward appropriate coursework and academic interventions to enable online schooling success. Subject-matter assessments do not need to be exclusive to online environments, but few exams focus on competencies that are applicable in multiple states or across the nation, making the implementation and delivery of a subject-matter assessment online an arduous task.


Table 1 summarizes four examples of content-based subject-matter assessments. It contains a brief description of each assessment as well as the metric, when it is offered, a link to the assessment, and any research available. Future innovation focused on the development of content assessments incorporating curricular competencies for multiple states or regions would provide an opportunity for virtual schools to integrate a consistent instrument into pre-course assessment of students.

Example #2: Technological aptitude assessments. Distance education students need solid technical skills (Osika and Sharp 2003). To date, the vast majority of assessments built for students in online environments are technological aptitude instruments. Assessments that evaluate student technological know-how are necessary and helpful to teachers and administrators, but student populations continue to become increasingly technologically advanced (Prensky 2006); today's virtual high school students are self-selecting into online classrooms because they are comfortable with technology, thus negating the value of a technology index. Assessments within this category have the potential to be both lengthy and cost-prohibitive; pricing for the TechLiteracy Assessment begins at $5 per student (T.H.E. Journal 2005). As technological innovation occurs at a rapid pace, future research should focus on factors that are correlated with technological efficacy rather than on a student's ability to manipulate technologies that, in a few short years, will be obsolete. Table 2 contains three examples of student aptitude instruments.

Example #3: Psychometric assessments. Assessments that focus on psychological traits predictive of success in distance learning environments seem to provide specific promise for the future of virtual schooling. Several researchers have isolated factors that contribute to success in virtual classrooms. Such factors include organization and self-regulation skills, beliefs about achievement, responsibility, and risk taking (McLester 2002; Osborn 2001; Roblyer and Marshall 2004; Wang and Newlin 2000). The challenge is to create a valid and reliable instrument that addresses success factors and provides for an educational intervention for those students determined to be at risk for dropout or failing grades. Research by Roblyer and Marshall (2004) has focused on predicting which students may need extra assistance to succeed in virtual schools. Roblyer and Marshall's Educational Success Prediction Instrument (ESPRI) is currently undergoing rigorous validation. Ongoing research by Roblyer and Blomeyer (2006) has resulted in the construction of a counseling intervention for use with students who have been identified by the ESPRI. One function a pre-coursework assessment can provide for distance education teachers and administrators is a better understanding of the student. Table 3 notes the psychometric measures available.

Table 1. Examples of Content-Based Subject-Matter Assessments

Diagnostic Algebra Assessment
  Description: Designed to identify whether student achievement in algebra is being hindered by algebraic misconceptions
  Metric: Algebraic achievement
  When: Mid-course, Post-hoc
  Research: None available
  Link: http://www.bc.edu/research/intasc/studies/DiagnosticAlgebra/description.shtml

Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT)
  Description: Measures critical reading skills, math problem-solving skills, and writing skills
  Metric: Reading, mathematics, writing
  When: Pre-coursework, Mid-course, Post-hoc
  Research: Wilson (2004)
  Link: http://www.collegeboard.com/student/testing/psat/about.html

ACT's Explore
  Description: Measurement of middle school students' preparation for high school curriculum
  Metric: English, mathematics, reading, science
  When: Pre-coursework
  Research: Woodruff (2003)
  Link: http://www.act.org/explore/

Test of English for Distance Education (ETS' TEDE)
  Description: Results can be used to determine whether applicants possess the English proficiency needed to begin an online course of study
  Metric: English proficiency
  When: Pre-coursework
  Research: None available
  Link: http://www.ets.org

Note: "None available" is noted several times under the research category. This provides a glimpse into future research needed in this important area. However, it should be noted that the research category contains the results of our search of the publicly available literature as of the time of submission for publication. Any omission of research within these categories is unintentional; we hope such tables provide a format for global discussions within the research community. These discussions would promote a foundation from which to build new assessments.

Table 2. Examples of Technological Aptitude Assessments

TechLiteracy Assessment (TLA)
  Description: Measures technology literacy and proficiency of elementary and middle school students
  Metric: Computer literacy
  When: Pre-coursework
  Research: Learning.com (n.d.)
  Link: http://www.learning.com/TLA/index.htm

North Carolina Online Test of Computer Skills
  Description: Measures computer proficiency in middle school students
  Metric: Computer literacy
  When: Pre-coursework
  Research: Public Schools of North Carolina (2006)
  Link: http://cskills.ncsu.edu/nccs/

International Computer Drivers License Computer Skills Placement
  Description: Mixed-format test of technical skills that assesses an understanding of basic IT concepts and competence in using common computer applications
  Metric: Computer literacy
  When: Pre-coursework
  Research: Dixie and Wesson (2001)
  Link: http://www.icdlus.com


Pre-course assessments that are used to assess prior knowledge or to understand psychological factors are critical for understanding student success in an online environment and for evaluating the success of online schools. A pre-coursework assessment should not be utilized as a gatekeeper, sorting those eligible for participation in online coursework from those who are "undesirable." Rather, such assessments should be used as tools to deliver counseling interventions and assistance to students who are identified as at risk for dropout or course incompletion.
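To make the distinction between screening and gatekeeping concrete, the following sketch shows one way subscale scores from a pre-coursework survey might be combined into a composite used to flag students for counseling follow-up rather than exclusion. It is a minimal illustration under assumed conditions: the subscale names, weights, and cutoff are hypothetical and do not reproduce the ESPRI or any validated instrument.

```python
# Illustrative sketch only: a simplified composite-score screener for flagging
# students who may benefit from counseling follow-up before an online course.
# Subscale names, weights, and the cutoff are hypothetical, not the ESPRI.

from dataclasses import dataclass
from typing import List

@dataclass
class SurveyResponse:
    student_id: str
    self_regulation: float      # mean of self-regulation items (1-5 scale)
    achievement_beliefs: float  # mean of achievement-belief items (1-5 scale)
    technology_access: float    # mean of technology-access items (1-5 scale)

def composite_score(r: SurveyResponse) -> float:
    """Weighted average of subscale means (weights are illustrative only)."""
    return (0.4 * r.self_regulation
            + 0.4 * r.achievement_beliefs
            + 0.2 * r.technology_access)

def flag_for_follow_up(responses: List[SurveyResponse], cutoff: float = 3.0) -> List[str]:
    """Return the IDs of students whose composite score falls below the cutoff."""
    return [r.student_id for r in responses if composite_score(r) < cutoff]

if __name__ == "__main__":
    sample = [
        SurveyResponse("s001", 4.2, 3.9, 4.5),
        SurveyResponse("s002", 2.1, 2.8, 3.0),
    ]
    # Flagged students are offered counseling support, not excluded from enrollment.
    print(flag_for_follow_up(sample))  # -> ['s002']
```

In practice, any such weights and cutoffs would need to come from a validated instrument and be revisited as student populations change.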

TEACHER ASSESSMENT

Teacher assessment instruments measure traits and qualities proven to support student success in online educational environments. These instruments, coupled with standard student feedback in the form of grades and teacher evaluations, provide a picture of where online classrooms are succeeding and where they can improve. Most states require that online instructors meet state certification standards, though few (Kansas and Alabama being the exceptions) require specific training for online instruction. Many states also have requirements regarding teacher contact with students and limits on the number of students a teacher may teach (Watson, Winograd, and Kalmon 2005).

Although researchers possess considerable knowledge regarding desirable qualities for successful online learners, a better understanding of successful instructors is needed. Palloff and Pratt (2000) note that "technology does not teach students; effective teachers do" (4). A general lack of research regarding teacher effectiveness in online environments is a hindrance to continued progress. Many, including Raths (1999), Sherry and Wilson (1997), and Pea (1994), advocate for a new model of teaching specific to online instruction. Buchanan (2000) describes this model as a dualistic role for the instructor in online education; the teacher adopts the role of both guide and facilitator of knowledge. Emphasis is placed on the necessity for students to understand that a human, with subject knowledge and the ability to effectively relay information, is supporting them.

Many instruments currently used to assess teachers in virtual schools focus on technological skills, use, and self-efficacy. Limited research has been undertaken to develop tools that provide accurate feedback regarding the online pedagogical strengths and weaknesses of instructors. Table 4 summarizes examples of teacher assessments available. In light of the limited nature of teacher assessments for distance education currently in use, it is appropriate to present a framework for future assessment, identifying key characteristics and traits of successful online instructors.

Table 3. An Example of a Psychometric Assessment

Educational Success Prediction Instrument (ESPRI)
  Description: Survey designed to measure the indicators described earlier in order to predict which students will and will not succeed in online courses
  Metric: Ability to succeed in an online classroom
  When: Pre-coursework
  Research: Roblyer and Marshall (2004)
  Link: http://web.ebscohost.com/ehost/detail?vid=1&hid=22&sid=1b7e977a-7740-4637-9338-590020d6c323%40SRCSM2

Table 4. Examples of Teacher Assessments

Teacher Technology Survey
  Description: Establishes a profile of teacher technical abilities
  Metric: Technical abilities
  When: Pre, Mid, Post-hoc
  Research: None available
  Link: http://insight.southcentralrtec.org/ilib/tts/

Teacher and Technology: A Snapshot Survey
  Description: Measure of technological attitudes, integration, and infrastructure
  Metric: Technological attitudes, integration, and infrastructure
  When: Pre, Mid, Post-hoc
  Research: Insight (2006)
  Link: http://insight.southcentralrtec.org/ilib/ttss/

School Observation Measure (SOM)
  Description: Measures classroom practice at a whole-school level
  Metric: 24 different classroom practices
  When: Pre, Mid, Post-hoc
  Research: Ross, Smith, and Alberg (1998)
  Link: http://crep.memphis.edu/web/instruments/som.php


Bonk (2001) provides some guidance in his survey of distance instructors in higher education, recommending that online educators could benefit from increased training, recognition and support, sharing of expertise, online learning policy, research, partnerships for learning tool development, and pedagogy. Of primary importance is a teacher's preparedness to teach in an online environment, which differs greatly from a traditional classroom. The Southern Regional Education Board (SREB) notes that teacher "training to fully understand the specific challenges and opportunities of teaching in the online environment is key" (2006, 14). Many new online instructors underestimate the amount of time required to plan and implement online courses. Evidence indicates that planning and teaching online requires more time than teaching face-to-face courses due to the level of interactivity with each individual student (SREB 2006).

Goodyear et al. (2001) identified and described the roles of an online teacher in an educational environment. These roles include

• the role of content facilitator, concerned directly with facilitating the learners' growing understanding of course content;
• the role of technologist, concerned with making or helping make technological choices that improve the environment available to learners;
• the role of designer, concerned with designing worthwhile online learning tasks;
• the role of manager/administrator, concerned with issues of learner registration, security, record keeping, and so on;
• the role of process facilitator, concerned with facilitating the range of online activities that are supportive of student learning;
• the role of adviser/counselor, concerned with offering advice or counseling to learners on an individual or private basis to help them get the most out of their engagement with the course;
• the role of assessor, concerned with providing grades, feedback, and validation of learners' work; and
• the role of researcher, concerned with engagement in production of new knowledge of relevance to the content areas being taught.

Goodyear et al. (2001) note that educational environments are dynamic; some or many of these roles may be insignificant in certain situations, but all should be understood by the instructor. For instance, virtual school instructors contend that it takes more time for them to teach an online course than it does to teach a comparable face-to-face course (SREB 2006). This increase in time commitment is due in part to the fact that most successful online courses are more interactive, involving all students. In addition to interactivity, online courses do not have a set schedule and may be accessed at any time; students and teachers expand the time they will devote to participating in a course (SREB 2006). This singular item, time commitment, promotes questions that as yet remain unanswered.


For example, if teachers are indeed spending more time with online classes on a per-student basis, what is the optimal class size for successful learning outcomes? According to Sener (2004), this is a contentious question with no answer. Research by Sener (2000) and Turoff and Hiltz (2002) describes specific principles and models for effective instruction in virtual schools but fails to provide a specific metric for measuring an instructor's ability to apply these principles. The assertion of the authors is that class size is just another characteristic of virtual schooling that can be addressed, given appropriate instrumentation and access to enough data to provide a comprehensive picture of virtual schooling in the United States.

Distance educators may be able to build on existing exemplary assessments currently utilized within traditional face-to-face learning environments. For instance, the School Observation Measure (SOM), developed by the Center for Research in Educational Policy at The University of Memphis, measures classroom practices at a whole-school level and represents what could be possible for Virtual High School (VHS) instrumentation. Modifying existing assessments in this way should greatly expedite the validation process for use in virtual high schools. In addition, research has detailed approaches to practice in online instruction. Graham et al. (2001) present a seven-principle approach for practice in online instruction:

1. Good practice encourages student-faculty contact.
2. Good practice encourages cooperation among students.
3. Good practice encourages active learning.
4. Good practice gives prompt feedback.
5. Good practice emphasizes time on task.
6. Good practice communicates high expectations.
7. Good practice respects diverse talents and ways of learning.

By utilizing existing research, such as that provided by Goodyear et al. (2001) and Graham et al. (2001), and by looking to existing instruments currently used in traditional classroom environments, the opportunity exists to substantively increase the repository of valid and reliable measures of teacher quality within online environments in the near future.

CONTENT/CURRICULUM ASSESSMENT

Content and curricular assessment address the need for demonstrating quality in online instruction and for setting guidelines for developers of online content. The establishment of curricular benchmarks is of paramount importance for the virtual school industry. According to Galusha (1997), due to the general public's distrust of distance education, curricula for online learners must equal if not exceed the quality of those in use in traditional environments.


Previous curricular research conducted by Kozma et al. (2000) focused on the development of quality standards. Kozma et al. commissioned an expert panel to investigate course quality at Virtual High School (VHS), an online education consortium. The panel developed a set of nineteen individual course standards organized in four content areas: curriculum/content, pedagogy, course design, and assessment (Yamashiro and Zucker 1999). These content areas have been further expounded upon by Chico State University, whose guidelines for developers of online teaching constitute a six-category guide incorporating aspects of instructional design. This Rubric for Online Instruction focuses on learner support and resources, online organization and design, instructional design and delivery, assessment and evaluation of student learning, innovative teaching with technology, and faculty use of student feedback (CSU, Chico 2003). Roblyer and Wiencke (2003) promote a five-element evaluation of interactive qualities in distance courses. Their strategy utilizes student evaluations that give comments as well as a rating on each of five structural elements, providing substantial, useful feedback on how to make a course more interactive. Table 5 highlights two such course and curriculum assessments.

TECHNOLOGICAL ASSESSMENTS

Increasing levels of scrutiny are being applied to issues relating to the human factors, usability, security, performance, and maintainability of applications in online environments (Bengtsson and Bosch 2000). Whether the concern lies in making applications usable to those with disabilities or making an application accessible to individuals with differing technological proficiencies, technological assessment will play an increasingly important role in the promotion of the design characteristics of the applications that constitute a virtual high school. Table 6 highlights technology assessments available to virtual schools.

Forthcoming technological assessments need to incorporate real-time usability statistics garnered seamlessly from users. This information includes click-stream analysis and Web site interaction data. Such assessments would build upon instant feedback affordances to deliver adaptive instruction for students and teachers.
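As a rough sketch of what such seamless collection might involve, the following example aggregates hypothetical click-stream events into simple per-student interaction indicators. The event fields and indicator names are assumptions made for illustration; they are not drawn from any instrument discussed in this article.

```python
# Illustrative sketch: summarizing hypothetical click-stream events into
# simple per-student usability indicators. Field and metric names are
# assumed examples, not part of any published assessment.

from collections import defaultdict
from typing import Dict, List, Tuple

# Each event: (student_id, page, seconds_on_page, was_error_click)
Event = Tuple[str, str, float, bool]

def summarize_clickstream(events: List[Event]) -> Dict[str, Dict[str, float]]:
    """Aggregate raw events into per-student counts and totals."""
    summary: Dict[str, Dict[str, float]] = defaultdict(
        lambda: {"pages_visited": 0, "total_seconds": 0.0, "error_clicks": 0}
    )
    for student_id, _page, seconds, was_error in events:
        record = summary[student_id]
        record["pages_visited"] += 1
        record["total_seconds"] += seconds
        record["error_clicks"] += 1 if was_error else 0
    return dict(summary)

if __name__ == "__main__":
    sample_events = [
        ("s001", "/unit1/reading", 300.0, False),
        ("s001", "/unit1/quiz", 120.0, False),
        ("s002", "/unit1/quiz", 45.0, True),
    ]
    for student, indicators in summarize_clickstream(sample_events).items():
        print(student, indicators)
```

Indicators of this kind could feed the adaptive, instant-feedback use described above, provided the collection is transparent to students and teachers.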

Table 5. Examples of Course/Content Assessments

Chico State Rubric for Online Instruction (ROI)
  Description: Evaluation tool for existing online courses
  Metric: Curriculum and instruction
  When: Post-hoc
  Research: Distance Education Report (2003)
  Link: http://www.csuchico.edu/celt/roi/how2use.html

Roblyer and Wiencke's Five-Element Evaluation
  Description: Evaluation of interaction in online courses
  Metric: Interaction, community
  When: Mid-term, Post-hoc
  Research: Roblyer and Wiencke (2003)
  Link: http://www.sloan-c.org/publications/jaln/v8n4/v8n4_roblyer.asp

Table 6. Examples of Technological Assessments

Post-Study System Usability Questionnaire (PSSUQ)
  Description: Measures end-user usability for Web-based applications
  Metric: Usability
  When: Post-hoc
  Research: Lewis (1991); Lewis (2002)
  Link: http://www.hcirn.com/atoz/atozp/pssuq.php

Usefulness, Satisfaction, and Ease of Use (USE)
  Description: Multi-domain measure of usability
  Metric: Usability, satisfaction
  When: Mid-term, Post-hoc
  Research: Lund (2001)
  Link: http://www.hcirn.com/atoz/atozp/pssuq.php

Measuring the Usability of Multi-Media Software (MUMMS)
  Description: Measures end-user usability for Web-based applications, specifically tailored for multimedia applications
  Metric: Usability
  When: Mid-term, Post-hoc
  Research: HFRG (2002)
  Link: http://www.ucc.ie/hfrg/questionnaires/mumms/info.html

COURSE INSTANCE ASSESSMENT

Course instance assessments attend to features unique to individual course experiences, including aspects of community and environmental concerns, through the incorporation of metrics regarding responsiveness and longitudinal course feedback (pre-, mid-, and post-hoc course analysis).


According to Pearson and Trinidad (2005b), instructors have become more comfortable with producing e-learning materials, encouraging students to absorb information from them, and then testing student outcomes based on these materials. There is now a growing movement toward designing e-learning environments that recognize how the communicative powers of the Internet support an active and constructive role for learners. Personal relationships and interactions between participants offer a specific advantage to learning. The psychosocial environment of the distance education classroom is quite different from that of a face-to-face or place-based class and therefore must be cultivated and developed in order for it to be an effective medium for education (Walker 2003). According to Pearson and Trinidad (2005a), congruence between the "actual" and "preferred" educational environments can be used to measure and evaluate changes, which are anticipated to lead to improved learning outcomes for students. Past studies have found links between students' perceptions of the psychosocial characteristics of their learning environments and their learning outcomes (Fraser 1998). In support of Walker's statements regarding the psychosocial environment, instance evaluations provide an excellent opportunity to assess and address the development of an environment conducive to learning.

There is no uniform benchmark for quality in the virtual school industry. Due to the relative youth of online education, many states leave the determination of quality assurance to the person in charge of online learning in the state. Many states rely on surveys of students, and sometimes stakeholders, in order to ensure quality. Other programs track course completion and pass rates, and some track advanced placement exam results (Watson, Winograd, and Kalmon 2005).

It remains to be determined whether instance evaluations available for use with adult populations, including undergraduate and graduate students, will be valid and reliable for use with K–12 populations. Aldridge and colleagues' (2003) TROFLEI incorporates metrics successful with adult populations. Many of the environmental dimensions incorporated into adult instance assessments are also used in K–12 instruments. These dimensions include measures of student cohesiveness, teacher support, student involvement, and technology usage (Aldridge, Dorman, and Fraser 2004; Pearson and Trinidad 2005b). Table 7 highlights examples of learning environment assessments.

Course instance assessments have the ability to provide users with data that depict the actual and preferred learning environments of students and teachers, in some cases immediately, giving instant and potentially valuable feedback to instructors working in these environments. Such data can then be used to support open dialogue between the teacher and students to determine ways in which they might work together to guide educational decision making and improve their e-learning environment (Pearson and Trinidad 2005b).

Table 7. Examples of Course Instance Assessments

Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI)
  Description: Measurement of the psychosocial learning environment in distance education
  Metric: Learning environment
  When: Post-hoc
  Research: Aldridge, Dorman, and Fraser (2004)
  Link: http://ferdig.coe.ufl.edu/vhs/surveys/troflei.htm

Classroom Community Index (CCI)
  Description: Measure of online community in a virtual classroom
  Metric: Community
  When: Pre-coursework and post-hoc
  Research: Rovai and Wighting (2005)
  Link: http://www.sciencedirect.com/science/journal/10967516

Distance Education Learning Environment Survey (DELES)
  Description: Identifies associations between environment and student attitudes; describes student and instructor perceptions of the learning environment
  Metric: Learning environment
  When: Mid-course, post-hoc
  Research: Walker (2003)
  Link: http://insight.southcentralrtec.org/ilib/deles/actual/


To ensure success, online teachers and students need to have instructional and technical support. Pedagogy, content, and technical support for online teachers should be provided by the organization offering the courses. Experienced online teachers are also reaching out to form communities of learners to support each other. Students of all ages should have access to course and technical support to avoid technical or academic problems becoming a barrier to online success. This support may be provided face-to-face or electronically through help desk resources (SREB 2006).

OTHER ASSESSMENTS

Parental/adult oversight and involvement is an important component of any educational experience. In the asynchronous, self-structured virtual school environment, parental/adult involvement can offer significant benefits to student outcomes. Therefore, virtual schools should actively encourage parental involvement. Homeschooled children, according to Maeroff (2003), comprise an increasing subset of the virtual schooling population. Virtual schooling may turn out to be a substantial boon to homeschoolers. Though not without controversy at the state and local level, virtual schooling has the potential to ease the way for parents who want to educate their children at home by providing structured curricula.

Few virtual schools currently track and account for students' parental involvement (see Table 8); exceptions include the Kiel eSchool in Wisconsin and the Florida Virtual School, which provides an end-of-course parental assessment to evaluate parental perception of the online learning process (Ferdig 2006). According to Litke (1998), many virtual schools suffer from less-than-optimal parental involvement. Litke's research indicates that many parents adopt an absentee role, leaving students responsible for their own daily supervision, although parental involvement increased when students were experiencing difficulties and decreased when there were fewer problems.

In addition to parental involvement, virtual schools consist of administrators, site coordinators, and mentors. Although various entities provide training for such positions, there is little to no research on developing and implementing surveys and evaluation instruments for these important roles.

Table 8. Other Assessments of Note

Kiel Parent/Adult Satisfaction Survey
  Description: Evaluation of parent/guardian perceptions regarding online learning
  Metric: Parental involvement
  When: Post-hoc
  Research: Ferdig, Papanastasiou, and DiPietro (2005)
  Link: http://ferdig.coe.ufl.edu/vhs/surveys/parent.htm

IMPLICATIONS

There is an increasing need for assessment and evaluation tools to enable data-driven decision making in a variety of important areas within K–12 virtual schooling. Perhaps the most obvious implication from this overview is that there is a relative dearth of valid and reliable surveys and instruments that could be used to improve K–12 virtual schooling.


An instrument should be (a) statistically validated with large and diverse populations and (b) accepted within the research and practice community as being not only accurate but also useful in promoting positive change.

We need more research that produces surveys and instruments; however, that research needs to recognize the diverse categories within virtual schooling. In time, an overview such as this one should no longer be possible because of the wealth of instruments available within the aforementioned categories (e.g., teacher, student, parent, course quality).

Researchers need to more closely explore and examine existing surveys within other domains, including face-to-face and non-K–12 education. Teaching and learning in K–12 virtual schooling is a significantly different experience from teaching and learning online at other levels (e.g., adult or higher education). It is also significantly different from teaching K–12 face-to-face. Therefore, caution should be exercised in appropriating survey instruments from these domains. However, validated instruments do exist in both domains and can serve as building blocks in designing exemplary instruments.

Researchers, educators, and developers need to move beyond simply evaluating instruments to building interventions for change. One of the first steps in improving K–12 virtual schooling is obviously to be able to measure the various components within the K–12 framework (e.g., teachers, students). These measurements provide metrics for cross-institutional conversation as well as baselines for growth and change. However, evaluation instruments by themselves do not necessarily promote change. The education community needs to take validated instruments and build tools that take the resulting data and turn them into opportunities for professional development and program redevelopment.

K–12 virtual schools need to work with their learning management system or content management system provider to get access to important evaluation data. K–12 virtual schools often use a learning management system (LMS) or content management system (CMS) provider such as Blackboard, Desire2Learn, or Angel. There is a tremendous amount of data available within these systems. Some of these data are easily accessible; other data are more difficult to retrieve but are still available. These data are crucial for virtual schools for two reasons. First, they provide insight into the actual course experience for the teacher and student. Second, many evaluation instruments would benefit from the triangulation of data that can be retrieved from LMS and CMS systems. Unfortunately, some providers treat the data as proprietary and require schools to buy their own data back from the company. Schools should consider this prior to engaging a provider.
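As a minimal sketch of the kind of triangulation described above, the following example joins survey responses with an activity export from an LMS, keyed on a shared student identifier. The file names and column names are hypothetical; actual exports from systems such as Blackboard, Desire2Learn, or Angel differ in structure and access method.

```python
# Illustrative sketch: pairing survey responses with an LMS activity export.
# File names and column names are hypothetical; real exports vary by vendor.

import csv
from typing import Dict, List

def load_csv_by_student(path: str, id_field: str = "student_id") -> Dict[str, dict]:
    """Read a CSV export into a dictionary keyed by the student identifier."""
    with open(path, newline="") as handle:
        return {row[id_field]: row for row in csv.DictReader(handle)}

def triangulate(survey_path: str, lms_path: str) -> List[dict]:
    """Attach each student's LMS activity record to that student's survey record."""
    surveys = load_csv_by_student(survey_path)   # e.g., satisfaction, interaction scores
    activity = load_csv_by_student(lms_path)     # e.g., logins, discussion posts, time online
    return [{**survey_row, **activity.get(student_id, {})}
            for student_id, survey_row in surveys.items()]

if __name__ == "__main__":
    # Hypothetical export files produced by a survey tool and an LMS report.
    for record in triangulate("survey_results.csv", "lms_activity.csv"):
        print(record)
```

Combined records of this sort allow a survey finding (for example, low reported interaction) to be read alongside observed course activity, rather than in isolation.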

Virtual schools need to find a way to protect participants from over-testing. Evaluation instruments can provide insights into virtual schooling that other data-collection measures miss (e.g., passively collecting how many students are in each class). However, a certain level of complexity exists within virtual schooling related to measurement and testing. One instrument may be focused on the course instance but may require input from teachers and students. If virtual schools find more than one instrument useful, there is a danger of over-testing various virtual school audiences. Much like prescription-drug interaction, simply adding more instruments is not necessarily an answer to improving virtual schooling. Schools need to be thoughtful about when and how to test and with what instruments. Researchers need to support virtual school administrators, teachers, and site coordinators in understanding the pros and cons of each instrument.

Researchers need to understand the value of multiple methods of collecting evaluation data. Recent state and national policies have raised awareness of the importance of quantitative data to measure student and teacher growth and competencies. Although such data are obviously important, certain questions cannot be answered simply by numbered survey instruments. For instance, a survey may determine that students did not get the types of interaction out of a class that they desired. That is an important finding that can lead to positive change in online instruction. However, in the attempt to promote change, it is less clear why they did not get the interaction they desired. Researchers need to understand the value of mixed methodologies for answering what, how, and why questions.

Researchers and educators need to have continuous, open discussions about evaluation instruments. At the time of this publication, every effort was made to include evaluation instruments that have been published in the research literature. However, it is probable that some instruments currently in press or under development have been missed. The purpose of this article is to set the groundwork for a discussion about current evaluation instruments; it is not meant to be (nor could it be) a comprehensive description of every instrument ever created. A Virtual School Clearinghouse has been created at http://vs.education.ufl.edu/virtualschool/ to continue instrument collection efforts. At that site, a discussion forum is also provided to promote discussions of evaluation instruments and their value to K–12 virtual schooling. In addition to joining the conversation with researchers, it is crucial that the research community find ways to promote awareness of said instruments to virtual school leaders.

CONCLUSION

After an analysis of the relevant instruments and assessments available for use in virtual high schools, it is evident that scant experimental research exists in the field. Of the research that has resulted in usable instruments, few of those instruments are reliable or have been subjected to a rigorous validation process. In addition, no practical clearinghouse of information on assessment exists to provide useful, timely access for practitioners.


Although online education is still in its infancy, educators and administrators cannot fail to recognize and focus on the importance of accurate, valid instrumentation to describe and attribute success and failure in the classroom. At the present time it is difficult, if not impossible, to empirically assess whether outcomes within a distance learning environment are the direct result of student, courseware, or teacher intervention variables. Until it is possible to isolate and identify these variables, considerable time, effort, and funds can be only casually allocated in order to strengthen distance learning programs. In order to systematically isolate variables of success in online classrooms, researchers need the ability to measure success and/or failure variables.

ACKNOWLEDGMENT

This research was funded in part by the AT&T Foundation.

REFERENCES

Aldridge, J. M., J. P. Dorman, and B. J. Fraser. 2004. Use of multitrait-multimethod modeling to validate actual and preferred forms of the Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI). Australian Journal of Educational and Developmental Psychology 4:110–125.

Aldridge, J., B. Fraser, D. Fisher, S. Trinidad, and D. Wood. 2003. Monitoring the success of an outcomes-based, technology-rich learning environment. Paper presented at the annual meeting of the American Educational Research Association, April, Chicago.

Bengtsson, P. O., and J. Bosch. 2000. Assessing optimal software architecture maintainability. In Proceedings of Fifth European Conference on Software Maintainability and Reengineering, September, Lisbon, Portugal.

Bonk, C. J. 2001. Online teaching in an online world. Bloomington, IN: CourseShare.com.

Buchanan, E. 2000. Assessment measures: Pre-tests for successful distance teaching and learning? Online Journal of Distance Learning Administration 2 (4). Available online at http://www.westga.edu/~distance/buchanan24.html

Carr, S. 2000. As distance education comes of age, the challenge is keeping the students. Chronicle of Higher Education, February 11. Available online at http://chronicle.com/free/v46/i23/23a00101.htm

Carter, V. 1996. Do media influence learning? Revisiting the debate in the context of distance education. Open Learning 11 (1): 31–40.

Cavanaugh, C., K. J. Gillan, J. Kromrey, M. Hess, and R. Blomeyer. 2004. The effects of distance education on K–12 student outcomes: A meta-analysis. Naperville, IL: Learning Point Associates.

CSU, Chico. 2003. Rubric for online instruction. Available online at http://www.csuchico.edu/tlp/onlineLearning/rubric/rubric.pdf

Distance Education Report. 2003. Rubric clearly defines exemplary online instruction. Distance Education Report 7 (23): 5.

Dixie, C. H., and J. L. Wesson. 2001. Introductory IT at a tertiary level—Is ICDL the answer? In Proceedings of SAICSIT 2001, Annual Conference of the South African Institute of Computer Scientists and Information Technologists, Pretoria, South Africa.

Ferdig, R. E. 2006. End of course survey—Parents. Available online at http://ferdig.coe.ufl.edu/vhs/surveys/parent.htm

Ferdig, R. E., E. Papanastasiou, and M. DiPietro. 2005. Teaching and learning in collaborative virtual high schools. Report submitted to the North Central Regional Educational Laboratory as part of the K–12 Online Learning Initiative.

Fraser, B. J. 1998. Classroom environment instruments: Development, validity and applications. Learning Environments Research: An International Journal 1:7–33.

Galusha, J. M. 1997. Barriers to learning in distance education. Interpersonal Computing and Technology: An Electronic Journal for the 21st Century 5 (3/4): 6–14.

Goodyear, P., G. Salmon, J. Spector, C. Steeples, and S. Tickner. 2001. Competencies for online teaching: A special report. Educational Technology Research and Development 49 (1): 65–72.

Graham, C., K. Cagiltay, B.-R. Lim, J. Craner, and T. M. Duffy. 2001. Seven principles of effective teaching: A practical lens for evaluating online courses. The Technology Source, March/April. Available online at http://technologysource.org/article/seven_principles_of_effective_teaching

Human Factors Research Group (HFRG). 2002. MUMMS. Human Factors Research Group. Available online at http://www.ucc.ie/hfrg/questionnaires/mumms/index.html

Insight. 2006. Insight: Snap shot survey. South Central RTEC Instrument Library/Data Repository. Available online at http://insight.southcentralrtec.org/ilib/sssinfo.html

Kozma, R., A. Zucker, C. Espinoza, R. McGhee, L. Yarnall, D. Zaller, and A. Lewis. 2000. The online course experience: Evaluation of the Virtual High School's third year of implementation, 1999–2000. Menlo Park, CA: SRI International.

Learning.com. n.d. TechLiteracy assessment whitepaper. Available online at http://www.learning.com/tla/tla_whitepaper.htm

Lewis, J. R. 1991. User satisfaction questionnaires for usability studies: 1991 manual of directions for the ASQ and PSSUQ (Tech. Rep. No. 54.609). Boca Raton, FL: International Business Machines Corporation.

———. 2002. Psychometric evaluation of the PSSUQ using data from five years of usability studies. International Journal of Human–Computer Interaction 14 (3/4): 463–488.

Litke, D. 1998. Virtual schooling at the middle grades: A case study. Journal of Distance Education 13 (2): 33–50.

Lund, A. M. 2001. Measuring usability with the USE questionnaire. Usability Interface 8 (2). Available online at http://www.stcsig.org/usability/newsletter/0110_measuring_with_use.html

Maeroff, G. 2003. A classroom of one: How online learning is changing our schools and colleges. New York: Palgrave/Macmillan.

McLester, S. 2002. Virtual learning takes a front row seat. Technology and Learning 22 (8): 24–36.

Osborn, V. 2001. Identifying at-risk students in videoconferencing and Web-based distance education. The American Journal of Distance Education 15 (1): 41–54.

Osika, R., and D. Sharp. 2003. Minimum technical competencies for distance learning students. Journal of Research on Technology in Education 34 (3): 318–325.

Palloff, R. M., and K. Pratt. 2000. Building learning communities in cyberspace. San Francisco: Jossey-Bass.

Pea, R. 1994. Practices of distributed intelligence and designs for education. In Distributed cognitions, ed. G. Salomon, 47–87. Cambridge: Cambridge University Press.

Pearson, J., and S. Trinidad. 2005a. Development, validation and use of the online learning environment survey. Australasian Journal of Educational Technology 21 (1): 60–81.

———. 2005b. OLES: An instrument for refining the design of e-learning environments. Journal of Computer Assisted Learning 21 (6): 396–404.

Prensky, M. 2006. Don't bother me mom—I'm learning. St. Paul, MN: Paragon House Publishers.

Public Schools of North Carolina. 2006. Test development process. Available online at http://www.ncpublicschools.org/accountability/testing/shared/testdevprocess

Raths, D. 1999. Is anyone out there? Inside Technology Training, June: 32–34.

Roblyer, M. D., and R. Blomeyer. 2006. Systematically designed online support for virtual school students: A theory into practice report. In Proceedings of the Society for Information Technology in Teacher Education (SITE) Annual Conference, Orlando, FL, 521–527.

Roblyer, M. D., and B. Elbaum. 2000. Virtual learning? Research on virtual high schools. Learning and Leading with Technology 27 (4): 58–61.

Roblyer, M. D., and J. C. Marshall. 2004. Predicting success of virtual high school students: Preliminary results from an educational success prediction instrument. Journal of Research on Technology in Education 35 (2): 241–256.

Roblyer, M. D., and W. R. Wiencke. 2003. Design and use of a rubric to assess and encourage interactive qualities in distance courses. The American Journal of Distance Education 17 (2): 77–98.

Ross, S. M., L. J. Smith, and M. J. Alberg. 1998. School observation measure. Memphis, TN: Center for Research in Educational Policy, The University of Memphis.

Rovai, A., and M. Wighting. 2005. Feelings of alienation and community among higher education students in a virtual classroom. The Internet and Higher Education 8 (2): 97.

Sener, J. 2000. Bringing ALN into the mainstream: NVCC case studies. In Online education: Proceedings of the 2000 Sloan Summer Workshop on Asynchronous Learning Networks, Vol. 2 in the Sloan-C series, ed. J. Bourne and J. Moore. Needham, MA: Sloan-C Press.

———. 2004. Online class size. Available online at http://senerlearning.com/weblogs/archives/2004_11.html

Sherry, L., and B. Wilson. 1997. Transformative communication as a stimulus to Web innovations. In Web-based instruction, ed. B. Khan, 67–73. Englewood Cliffs, NJ: Educational Technology Publications.

Southern Regional Education Board (SREB). 2006. Multi-state online professional development toolkit. Available online at http://www.sreb.org/programs/EdTech/MOPD/FAQ.asp

T.H.E. Journal. 2005. TechLiteracy assessment. September. Available online at http://www.thejournal.com/articles/17411

Turoff, M., and S. R. Hiltz. 2002. Effectively managing large enrollment courses: A case study. In Online education: Proceedings of the 2000 Sloan Summer Workshop on Asynchronous Learning Networks, Vol. 2 in the Sloan-C series, ed. J. Bourne and J. Moore. Needham, MA: Sloan-C Press.

Walker, S. L. 2003. A new learning environment instrument for postsecondary distance education: The DELES. In Proceedings of the 2003 Distance Education Conference, Austin, Texas, ed. R. Ham, J. Woosley, and A. Knox. College Station, TX: Center for Distance Learning Research, Texas A&M University.

Wang, A., and M. Newlin. 2000. Characteristics of students who enroll and succeed in Web-based classes. Journal of Educational Psychology 92 (1): 137–143.

Watson, J. F., K. Winograd, and S. Kalmon. 2005. Keeping pace with K–12 online learning: A snapshot of state-level policy and practice. Naperville, IL: North Central Regional Educational Laboratory.

Wilson, E. D. 2004. Assessing the relationships among PSAT and TAKS scores in selected Texas high schools. Ph.D. diss., Texas A&M University.

Woodruff, D. J. 2003. Relationships between EPAS scores and college preparatory course work in high school. ACT Research Reports. Available online at http://www.act.org/research/reports/

Yamashiro, K., and A. Zucker. 1999. An expert panel review of the quality of virtual high school courses: Final report. Menlo Park, CA: SRI International.