
Journal of Instruction Delivery Systems, FALL 1995

Learning and Technology: True Stories, Real Lessons
•••
You Can't Teach Soft Skills On A Computer ... Can You?
•••
The Use of Technology to Increase Achievement in Elementary School Children
•••
The Impacts of Interactive Electronic Technical Manuals: Issues, Lessons Learned, and Trends in Training and Education
•••
Interactive Multimedia Instructional Systems: A Conceptual Framework
•••
The Mediated Classroom: A Systems Approach to Better University Instruction
•••
A Top Level Analysis of Training Management Functions
•••
How To Get What You Need (And Want) From Your Multimedia Vendor

The Use of Technology to Increase Achievement in Elementary School Children

BY STEVE TERRELL AND PAUL RENDULIC

This article begins with a look at problems surrounding traditional assessment and evaluation in American education and then continues with suggestions for broadening the methodology as well as the validity of the assessment process. A study is then described wherein students received a computer-generated alternative form of assessment. The aim of the study was to get students involved in the assessment effort as well as to help develop the metacognitive structures necessary to react to the cause-and-effect relationships depicted in the alternative feedback. Results of the study are significant from three perspectives. First, they provide support for the Cognitive Evaluation Theory of student motivation in that extrinsic, informational feedback can increase student achievement. Second, they show that students can be willing and active participants in the evaluative process. Lastly, they show that relatively simple technology can be used to effectively address both cognitive and affective issues in education.

Introduction

The 1983 publication of A Nation at Risk: The Imperative for Educational Reform painted a bleak picture of education in America. It, along with hundreds of subsequent studies, showed consistently declining standardized test scores, higher and higher levels of illiteracy, and an ever-increasing number of high-school dropouts. These factors, it is widely believed, ultimately contribute to higher levels of unemployment and welfare dependency (Perkins 1992). Critics, in an effort to explain the demise of the American educational system, are quick to point out the archaic nature of our system and the fact that it hasn't grown or adapted to a changing environment. As Papert (1993) points out, education is one of the few professions where a worker from 100 years ago would still be able to function quite well. Compared to physicians, architects, and accountants, whose professions' technology has radically changed, the educator would, in most instances, follow many of the same practices and use many of the same tools and techniques common at the end of the 19th century. Didactic instruction, despite advances in pedagogical theory and the dramatic growth of educational technology, is still the primary means of instruction, just as it always has been.

One of the most maligned aspects of our educational system is the way we assess our students. Critics point out that our normative-based system of assessment and placement is, for many reasons, inherently flawed, and they call for movement toward a realistic, criterion-based appraisal (Adler 1982). To fail to do so, they argue, is to maintain the status quo. As Adler further points out, systemic failure, such as high dropout rates and the placement of many students into less desirable courses of study, is, in part, due to the nondemocratic nature of assessment and placement in our school systems.

Assessment in Transition

In response to these criticisms of the assessment of our students, many educators are broadening their efforts to include other, more performance-based assessment techniques. Methods to measure both procedural effort and product quality include techniques such as portfolio assessment, observation, checklists, rating scales, and product scales. These alternative techniques are touted as a means of more equitable, broad-based assessment (Gronlund 1993). In addition to these alternative methods of evaluation, others are calling for greater student involvement in the evaluative effort, whether it is the development and maintenance of a portfolio or any act that focuses student responsibility and contributes to a higher level of metacognition. Metacognition, an idea without a universally accepted definition, is often described as enabling students to become reflective about their thinking processes or helping students establish understanding and causality in the assessment process. Researchers have noted (DeFina 1992) that very little research has addressed the effectiveness of student involvement in the assessment effort. It is this idea of the effectiveness of student involvement in the assessment process that is the focus of the remainder of this paper.



Graphs for Performance Assessment

Since 1992 the authors of this paper have worked with several schools in the Dade County Public School system in an effort to understand the effect of alternative forms of assessment feedback. Specifically, a computer-managed instructional system was developed that allowed teachers to collect student demographic and grade information and, on a periodic basis, supply formative feedback to students via a grade graph (Figure 1). As can be seen, the graphs provide weekly grade information to a child as well as a running average. This extrinsic, informational feedback, based on Deci and Ryan's Cognitive Evaluation Theory (1985), theoretically serves as a stimulus for the metacognitive process, thereby causing positive gains in levels of student achievement and motivation. In light of the transitional state of assessment, it became clear that research was called for that included this type of technologically based assessment tool.

Initial experiments involving the integration of this new technology into the assessment process at a bilingual magnet school failed to produce significant results (Terrell 1992; Terrell, Greenberg & Rendulic 1995). This, it was felt, was because the information conveyed by the graphic feedback was not strong enough to cause a shift in motivational inclination for this population. In an effort to understand why the treatment was not as effective as hypothesized, interviews were conducted with children who had received the alternative feedback. These interviews uncovered many interesting facts. Over 75% of the students interviewed indicated that the only time they thought about their graphic feedback was when the teacher distributed it. More revealing was that only 25% of the students could make the connection between their actual weekly achievement and what was being presented on the graph itself. Furthermore, one-third of the students couldn't make the connection between the average grade line on the graph and the grade that could be expected on the subsequent nine-week report card. Over one-half of the students admitted that they did nothing different in terms of class preparation upon viewing their graph, with a few admitting they would study harder if the graph showed a bad weekly grade or average.

It became clear from the interviews that the graphs were not having the intended impact on the students' metacognitive process. In an effort to strengthen the treatment effect of the graph, it was decided to print teacher comments and observations directly on the graphs. This, it was hypothesized, would both clarify and amplify the information presented, thus helping students make the connection between their effort and grades.

Figure 1. Sample grade graph (TEST STUDENT - MATH): weekly grade feedback and a running average plotted against letter-grade bands (A, B, C, D, F) for the weeks of 1/01 through 1/29, with the printed comment "Why are your grades down? You CAN do better."
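As a rough illustration, the short Python sketch below (not the authors' CMI software, and using invented weekly scores) produces the kind of display shown in Figure 1: weekly grades plus the running average the student would see.

import matplotlib.pyplot as plt

weeks = ["1/01", "1/08", "1/15", "1/22", "1/29"]   # grading weeks, as in Figure 1
weekly_scores = [88, 92, 79, 74, 81]               # hypothetical weekly grades

# Running average after each week -- the "Average" line on the graph.
running_avg = []
total = 0
for i, score in enumerate(weekly_scores, start=1):
    total += score
    running_avg.append(total / i)

plt.plot(weeks, weekly_scores, marker="o", label="Weekly Grade Feedback")
plt.plot(weeks, running_avg, marker="s", label="Average")
plt.ylim(60, 100)
plt.ylabel("Grade")
plt.title("TEST STUDENT - MATH")
plt.legend()
plt.savefig("grade_graph.png")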

The Development of an Expert-System-Based Monitoring System

In an effort to make the teachers' textual commentary as meaningful as possible, it was decided to allow up to two lines of text to be printed at the bottom of each graph. The first line of text would be controlled by an expert system developed to analyze student progress, with the second line left for the input of teacher free-form commentary. The expert-system component of the CMI software created messages based on three criteria: long-term achievement, short-term achievement, and grade history. In measuring long-term achievement, the expert system mathematically evaluated student achievement over the current grading period. This computerized analysis of student performance produced a numeric slope value, with positive values indicating an upward trend, negative values a downward trend, and values near zero representing no change over the grading period. The same type of evaluation was performed for short-term achievement, defined for this study as achievement during the most recent three weeks.
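The article reports only that this analysis produced a numeric slope value; one plausible reading, sketched below, is a least-squares fit of grades against time, applied to the whole grading period for the long-term trend and to the last three weeks for the short-term trend. The function name and sample grades are illustrative assumptions, not the authors' exact computation.

import numpy as np

def achievement_slope(scores):
    # Least-squares slope of grades over time:
    # positive = upward trend, negative = downward trend, near zero = no change.
    weeks = np.arange(len(scores))
    slope, _intercept = np.polyfit(weeks, scores, 1)
    return slope

period_grades = [72, 75, 74, 80, 83, 85]            # hypothetical weekly grades
long_term = achievement_slope(period_grades)        # trend over the whole grading period
short_term = achievement_slope(period_grades[-3:])  # trend over the most recent three weeks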


Grade history was considered the locus from which these slopes were calculated and was represented by the first grade of a grading period or the average from the most recent previous grading period. By using these values, the expert system was able to generate messages specific to the particular circumstances. For example, students showing improvement or having a high overall average received praise and congratulatory messages, while students maintaining a low average, failing to improve in a given time period, or showing a decline in their average grade were given messages that provided encouragement and positive reinforcement.

The expert-system-based software was used in an experimental study measuring the spelling achievement of two groups of fifth-grade students. One of the groups of students received the graphic feedback in the following manner: during the first nine-week grading period the graphic feedback contained no additional textual feedback, during the second grading period textual feedback was added, the students received no graphic feedback during the third grading period, and the feedback of the second grading period was repeated in the fourth nine-week grading period. During the entire school year, the control group received only traditional forms of feedback such as report cards, progress reports, and graded quizzes and exams.
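A minimal sketch of the kind of rule the expert system might apply when choosing the first line of text is shown below. The thresholds, the function name, and every message except the one taken from Figure 1 are assumptions; the article does not publish the actual rules.

def feedback_message(long_term_slope, short_term_slope, current_average):
    # Illustrative cutoffs -- the article does not report the real values.
    UP, DOWN, HIGH_AVERAGE = 0.5, -0.5, 90

    if current_average >= HIGH_AVERAGE or long_term_slope > UP:
        return "Great work! Your grades keep improving."
    if long_term_slope < DOWN or short_term_slope < DOWN or current_average < 70:
        return "Why are your grades down? You CAN do better"
    return "Your grades are steady. A little more effort could raise your average."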

Results

Results of a multivariate repeated-measures analysis of covariance of the four nine-week grading periods show a significant difference (α = .05) between the control and experimental groups (Figure 2). It should be noted that, due to the use of intact groups, this was a quasi-experimental design. With this in mind, based on a pretest, it was found that the groups were not equivalent at the start of the study. This pretest, along with scores from the previous year's Stanford Achievement Test, was used to equate the groups using the aforementioned covariance technique. In looking at Figure 2, attention should be paid to three things. First, there is a highly significant increase in scores for the experimental group between the first grading period (graphic feedback with no text) and the second grading period (graphic feedback with text). Second, the cessation of feedback during the third grading period caused a drop, albeit not statistically significant, in achievement. Third, the resumption of feedback in the last grading period brought achievement levels for the experimental group back up to the same levels experienced during the second grading period.

Figure 2. Average grades by grading period for the control and experimental groups.
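For readers who want a concrete, if simplified, picture of the equating step, the sketch below runs a single-period analysis of covariance in Python with statsmodels, testing the group effect while holding a pretest and prior-year Stanford Achievement Test scores constant. It is not the full multivariate repeated-measures analysis reported here, and the data are invented.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical scores for eight students, one grading period.
df = pd.DataFrame({
    "group":    ["experimental"] * 4 + ["control"] * 4,
    "pretest":  [71, 78, 83, 75, 72, 79, 84, 74],
    "stanford": [640, 655, 700, 660, 645, 650, 705, 662],
    "grade":    [84, 88, 93, 86, 78, 80, 85, 79],
})

# ANCOVA: group effect on the period grade with the two covariates held constant.
model = smf.ols("grade ~ group + pretest + stanford", data=df).fit()
print(anova_lm(model, typ=2))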

Discussion

The results of this study are notable from three perspectives: theoretical, applied, and technological. First, the results support Deci and Ryan's Cognitive Evaluation Theory. It appears that grade information presented to students in an extrinsic, informational fashion does contribute to higher levels of motivation and achievement. The authors of this paper will replicate this study in the upcoming school year with two major changes. First, a separate measure of student motivation and locus of control will be included. It is expected that these constructs, as measured, will also be significantly enhanced by the use of this technology. Second, a tabular form of feedback will be given to a second experimental group in an effort to control for the novelty effect of this type of feedback.

Second, it appears that students can be, under the right circumstances, willing participants in the evaluative process. Interviews with students participating in the study indicated that, not only do they enjoy receiving feedback of this type, they appreciate and understand the opportunity it presents for active participation in the assessment process. In far too many instances, students are not aware of what is entailed in the evaluation process; they simply see the end result: a grade on the report card.




Participation of this type enables the students to develop the type of metacognitive structures necessary to understand the cause-and-effect nature of the evaluative process.

Lastly, the study shows that relatively simple technology can be used to make significant affective, as well as cognitive, contributions to the learning process. Unfortunately, far too much emphasis has been placed on computer-assisted instruction and its effect on achievement, with only ancillary consideration given to observable affective measurements. It is clear that the introduction of technology in this case contributed to the non-observable constructs of motivation and locus of control, with a corresponding positive shift in achievement. Critics may argue that the technology in this case was a surrogate for the teacher's comments and that any feedback of this type would be expected to contribute positively. They are absolutely right. Technology, in this case, served two purposes. First, the software conveniently produced output from readily available data; the graphic feedback could have been generated by other means, but that effort would have been more time and labor intensive. Second, the software addressed an area that is often missed with traditional computer-assisted and computer-managed instructional software. In both of those cases, feedback is often limited to traditional summative feedback or event-oriented formative assessment. This software afforded students the opportunity to view their daily and weekly progress in relation to their overall effort and, based on these observations, incorporate the changes necessary to positively affect their achievement. Whether from a normative or criterion-based perspective, it is this type of metacognitive shift we believe is the future of assessment in education.

Bibliography

Adler, M. 1982. The Paideia Proposal: An Educational Manifesto. New York: Macmillan.
DeFina, A. 1992. Portfolio Assessment: Getting Started. New York: Scholastic.
Deci, E. & R. Ryan. 1985. Intrinsic Motivation and Self-Determination in Human Behavior. New York: Plenum.
Gronlund, N. 1993. How to Make Achievement Tests and Assessments (5th ed.). Needham Heights, MA: Allyn and Bacon.
Papert, S. 1993. The Children's Machine: Rethinking School in the Age of the Computer. New York: Basic Books.
Perkins, D. 1992. Smart Schools: Better Thinking and Learning for Every Child. New York: Simon & Schuster.
Terrell, S. 1992. An investigation of cognitive evaluation theory: The effect of graphic feedback on student motivation and achievement (Doctoral dissertation, Florida International University, 1992). Dissertation Abstracts International, 53-08A, 2749.
Terrell, S., B. Greenberg, & P. Rendulic. 1995. Us-
