Testing the Knowledge Gained in Multimedia-enhanced Learning

Claudia Schremmer1, Holger Horz2, and Stefan Fries2
Universität Mannheim, Germany

ABSTRACT

New forms and techniques of teaching based on the Internet and on multimedia have appeared in recent years. In the teleteaching project Virtual University of the Upper Rhine Valley (VIROR), many Java-based teaching modules were developed for the topic Compression Techniques and the DCT and given to the students as a supplement to their theory classes. In the present paper, we evaluate the efficiency of these modules. In an experiment on traditional versus multimedia-enhanced learning, we measured not only the students’ motivation, but also their objective knowledge gain. The students were assigned to different learning settings: one group attended a lecture, while the others participated in computer-based training. Different quality attributions within this second group entailed significant differences in knowledge gain, despite identical learning content. This finding encourages some recommendations on the attribution of university teaching software, which we present in this paper.

1. INTRODUCTION

The VIROR project aims to establish a prototype of a semi-virtual university and to gain technical, pedagogical, and organizational experience in distance education, where multimedia simulations and animations complement traditional teaching material. To date, the overwhelming majority of related teleteaching projects concentrate on technical fields (e.g., electrical engineering or computer science), sounding out and continually extending the technical possibilities (see [ISLG00][SHE00][BFNS00]). The didactic-pedagogical evaluation of such projects, in contrast, examines the impact of the new learning environment on students.
In a cooperation between the departments Praktische Informatik IV and Erziehungswissenschaft II & Pädagogische Psychologie (i.e., Practical Computer Science IV and Educational Science II & Educational Psychology) at the University of Mannheim, we evaluated the effectiveness of a ‘traditional’ lecture held by a professor in a lecture hall against a multimedia-based lecture as used in an asynchronous distance education scenario. The topic taught was the mathematical notion of the discrete cosine transform (DCT) as part of the JPEG image coding standard. To facilitate the teaching of this topic, two Java applets were developed that demonstrate the transition from the ‘time domain’ into the ‘frequency domain’ and vice versa; they allow the students to vary parameters and directly observe the impact of their modifications. The effectiveness of these applets was evaluated in a study carried out in June 2001 with 115 students in their first and second year of studies in computer science. Two research questions were investigated:

• Is computer-based learning an appropriate way to teach students?

• Does a different quality attribution to the computer programs influence the learning efficiency?
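The mathematical notion at the heart of the teaching modules, the transition between the ‘time domain’ and the ‘frequency domain’, can be sketched in a few lines. The following is a minimal Python illustration of the one-dimensional DCT (our own sketch for this text, not the code of the Java applets):

```python
import math

def dct_1d(signal):
    """Forward DCT (type II): map N samples from the 'time domain'
    to N coefficients in the 'frequency domain'."""
    N = len(signal)
    coeffs = []
    for k in range(N):
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        acc = sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                  for n, x in enumerate(signal))
        coeffs.append(scale * acc)
    return coeffs

def idct_1d(coeffs):
    """Inverse DCT (type III): reconstruct the original samples."""
    N = len(coeffs)
    samples = []
    for n in range(N):
        acc = 0.0
        for k, c in enumerate(coeffs):
            scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            acc += scale * c * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
        samples.append(acc)
    return samples

# Eight sample values, e.g. one row of pixel intensities:
signal = [52, 55, 61, 66, 70, 61, 64, 73]
coeffs = dct_1d(signal)       # frequency-domain representation
restored = idct_1d(coeffs)    # the round trip recovers the input
```

Varying the input and observing how the coefficients change, and vice versa, is precisely the kind of interaction the applets offer to the students.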

1 Praktische Informatik IV, [email protected]
2 Erziehungswissenschaft II & Pädagogische Psychologie, {holger.horz | stefan.fries}@phil.uni-mannheim.de
Universität Mannheim, 68131 Mannheim, Germany

2. TEST SET-UP

Only students enrolled in computer science were selected to participate in this evaluation, since this assured that they would have the background to understand the purpose and use of the DCT. Nonetheless, the students were just beginning their studies. Since coding standards enter the curriculum of Mannheim computer science students only during the third or fourth year of study, all students had the same level of prior knowledge: none. This resulted in a homogeneous test group.

2.1 Learning Setting

A total time of 90 minutes was allotted for each learning setting. In each instance, a central 60-minute block of learning time was preceded by a 15-minute preliminary test to record sociodemographic variables and information on covariates such as preliminary knowledge, and followed by a follow-up test to gather the dependent variables.

2.1.1 Traditional Learning

The traditional lecture was held by Prof. W. Effelsberg with color-printed visuals on an overhead projector. Students generally like his lectures very much, as they are clearly structured and he has an engaging manner of presenting, always combined with a few jokes, but never losing sight of the general idea. Indeed, on a scale from 0 to 3, the lecture of Prof. Effelsberg was rated with an average of 2.317 points, which is very good. Nevertheless, having just begun their studies, our test candidates in the lecture hall were unacquainted with him, so that they encountered the lecture on Compression Techniques and the DCT unbiased.

2.1.2 Computer-based Learning

For the test candidates in the computer-based learning scenario, the central 60-minute learning block was divided into three 20-minute periods, each allotted to one of the three modules each candidate received:

• Introductory video (encoded in the RealMedia format),

• Applet on the one-dimensional DCT,

• Applet on the two-dimensional DCT.
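The second applet extends the first to two dimensions, as applied to the 8×8 pixel blocks of JPEG. The 2-D DCT is separable, so it can be computed by transforming every row and then every column. The following Python sketch illustrates this (again our own illustration, not the applet code):

```python
import math

def dct_1d(v):
    """Orthonormal DCT-II of a one-dimensional sequence."""
    N = len(v)
    return [(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)) *
            sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n, x in enumerate(v))
            for k in range(N)]

def dct_2d(block):
    """2-D DCT of a square block, computed separably:
    transform every row, then every column of the result."""
    rows = [dct_1d(row) for row in block]
    cols = [dct_1d(col) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]  # transpose back

# A uniform 8x8 block: all of its energy collapses into the single
# DC coefficient at position (0, 0).
flat = [[128] * 8 for _ in range(8)]
spectrum = dct_2d(flat)
```

The collapse of a flat block into a single DC coefficient is exactly the kind of effect the students can observe interactively by varying the block contents in the applet.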

Fig. 1: Photos of the evaluation of the computer-based learning.

The digital video on the laptops showed Prof. Effelsberg giving an introduction to the topic, this time as a purely oral presentation without any visuals. In the video, Prof. Effelsberg welcomes the student, introduces the evaluation, and proceeds to address the compression topic. Half of the video is dedicated to instructions on how to use the two DCT applets. Figure 1 shows photos taken during the evaluation. Mayes et al. have proposed the model of a learning cycle for distance education [MCTM94]. This model can also be applied to forms of computer-based learning as used in our study. The learning cycle in our setting was implemented as follows:

• Conceptualization: introductory video.

• Construction: use of the programs with very strong guidance, i.e., extensive use of a help menu.

• Dialog: the guidance within the program becomes insignificant; the student works exclusively with examples. The externalization of the knowledge is especially supported by the scripted learning setting.

2.2 Hypotheses

With 115 students participating, the evaluation allowed us to test two important hypotheses: the first on the effect of different learning instructions, the second on the effect of different quality attributions.

• Learning guidance by means of a script. When one deals with a new problem, part of the attention is directed towards the topic itself, but another share of attention is spent coping with the learning context. Cognitive load denotes the capacity that the working memory must spend in order to use a learning environment. The higher the cognitive load of a learning environment, the less capacity is left for the treatment of its topic [Swe88][Swe94]. Due to the additional cognitive load of the digital video and the Java applets, as opposed to that of a lecture, we expected that a computer-based learning environment devoid of any other aids would yield worse results than a traditional lecture. From educational psychology, however, it is known that scripts facilitate the learning process in complex learning environments since they deepen the engagement with the content [BS89][Ren97], thereby lowering the cognitive load of the non-content elements of the program. In order to test this first hypothesis on the cognitive load of computer-based training, we divided the students into three groups:

− Lecture: One group of students attended a traditional 60-minute lecture.

− Exploration: The students in this computer-based learning scenario were told to explore the learning environment freely in order to learn about the topic. Apart from this, they were given no additional information about the provenance or purpose of the programs. Since this setting is the usual way of working with computer-based training, this group serves as the reference group in our evaluation.

− Script: The students in this computer-based scenario were told to prepare the content of what they learned as if they would later have to present the topic to a fellow student. The students were provided with a script of sample questions as a guideline for their preparation.



• Pygmalion effect. Due to the Pygmalion effect [RBH74][Jus86][Hof97], it is expected that both the learning effect and the subjective rating of the program will be higher if a student assumes the program to be of better quality. The term Pygmalion effect denotes a difference in learning outcome resulting from different expectations of teachers towards their students (and vice versa!), depending on the anticipated learning (respectively, teaching) quality on the teacher (respectively, student) side. In analogy to the role of the teacher in a learning process, a positive (respectively, negative) anticipation of teaching quality should lead to different learning results and different subjective ratings. In order to test this second hypothesis on the effect of positive and negative attribution, the computer-based settings were further subdivided:

− Exploration: The reference group in our evaluation, see above.

− β-version: The students were told that the programs on the laptops had been developed as part of a Studienarbeit. A Studienarbeit is a programming project that every student has to complete during his or her studies, but which is not graded. Thus, the quality of such implementations is often moderate, hence the label β-version.

− c’t-article*: With the kind permission of the c’t editorial office, we distributed a (false) ‘preprint’ of the next issue of the c’t magazine, in which the software installed on the laptops was lauded as one of the best examples of computer-based training worldwide.

The students were blind to their assignment to one of the five different test settings. In particular, the split between the lecture and the computer-based settings was not transparent to the students: they knew only the time at which they should arrive at a particular location. This method precluded the selection of a scenario according to one’s own preferences.

The dependent variables of the evaluation, i.e., the variables influenced by the test settings, were the follow-up knowledge test and the subjective rating of the learning environment. The covariates, i.e., the variables not influenced by the test setting but possibly influencing the dependent variables, were the preliminary knowledge test and the average grade on prior exams. The preliminary knowledge test contained seven multiple-choice questions on the general context of the students’ current lecture as well as on image compression basics; thus, the number of possible correct answers ranged from 0 to 7. The follow-up knowledge test contained nine properly balanced questions to measure what the students had learned during the 60 minutes of learning time. The average grade on prior exams asked for the students’ grades on exams taken in their first semester.

3. RESULTS

3.1 Features of the Sample

The students of computer science are predominantly male, which is reflected in the figure of 87% male test subjects. They are all at the beginning of their studies, which is reflected by the age range of 18 to 25 with an average age of 21.22 years. The semester of study varies,

* The c’t is a German magazine on computers and technology of a very high standard of quality.

but an average of 2.51 semesters shows that the overwhelming majority was at the beginning of their studies. The average amount of computer usage, private and non-private combined, is 20 hours per week.

3.2 Learning Success and Subjective Rating

Table 1 details the results for the two variables subjective rating of the students in their respective learning setting and learning success (measured by means of the follow-up knowledge test).

Variable                     Setting        mean    std. dev.
Subjective Rating            Lecture        2.32    0.44
                             Exploration    2.44    0.67
                             Script         2.47    0.52
                             c’t-article    2.53    0.48
                             β-version      2.06    0.43
Follow-up knowledge test     Lecture        6.55    1.42
                             Exploration    5.76    1.87
                             Script         6.91    1.49
                             c’t-article    6.45    1.60
                             β-version      4.95    1.84

Table 1: Descriptive statistics of the results, detailed per setting.

• Subjective Rating: The questionnaire on the subjective rating of the lecture or learning environment, respectively, allowed ratings from 0 points (i.e., poor) to 3 points (i.e., very good). The average rating for each setting, between 2.06 and 2.53, is very good, which means that the students in every setting, including the lecture, enjoyed it. However, the worst and the best ratings were obtained with the negative and the positive attribution of the programs in the computer-based settings, respectively.



• Follow-up knowledge test: The follow-up knowledge test consisted of nine questions, resulting in a possible score of 0 to 9 points. The average result for the students who attended the lecture setting is, at 6.55 points on this scale, very good, reinforcing our statement of Section 2.1.1 that the lectures by Prof. Effelsberg are of high quality. The expected outcome on objective knowledge gain appears in the exploration setting: at 5.76 points, it is below that of the lecture, due to the cognitive load of computer-based programs. However, the script setting, which guides the learner through the program and provides hints on where to focus one’s attention, reached the maximum mean score of 6.91 points. This result supports our hypothesis that scripted learning reduces the cognitive load so that the students can concentrate on the content. The second hypothesis stated that the assumed quality of the programs influences the learning success. Indeed, the positive attribution of the c’t-article led the students to a high mean score of 6.45 points, while the assumption that the program had been developed by a fellow student (i.e., β-version) lowered the result to the notably poor score of 4.95 points. Note that this result is especially striking since all participating students encountered identical learning information.

3.3 Summary

The analysis of the two hypotheses on learning behavior in a computer-supported environment yields the following statements:

• Lecture versus exploration versus learning guidance by means of a script. The dependencies and expected results in the different learning settings are generally comparable. However, we have shown that the results depend on the precise setting. This is why the literature supports both statements: that a lecture is superior to good computer-based training, and vice versa. Of the three settings lecture, exploration, and script, the last one yielded the highest scores, since the attention of the students was repeatedly directed to the relevant parts of the program. This result is especially noteworthy since the lecture by Prof. Effelsberg was rated extremely positively by the students. In contrast to all other learning settings, we observed the students in the script setting fetching paper and pencil to take notes as they studied. And in contrast to a guided tour at the beginning of a program, the script has the additional advantage of capturing attention in between, not just at the beginning.



• Assumed background information on the learning programs. A single sentence indicating that the presented programs were developed by a student lowers the results dramatically. Conversely, a positive attribution of the programs produces better results, though these fall just short of significance. What is more, not only the subjective rating of the programs is influenced by this attribution, but the objective knowledge gain as well, which decreases with negative attribution and increases with positive attribution. The total difference in knowledge gain is enormous at 36.36%. A common practice of universities is to distribute software labeled as ‘own development’. Our evaluation clearly indicates a need for change: Never say β!

4. CONCLUSION AND OUTLOOK

In the evaluation of the learning behavior and progress of students learning by means of computer-based training versus students in a lecture, we varied the attribution passed on as information to the students. The survey not only revealed that a good computer-based training program can outperform the knowledge gain of students in a lecture scenario, it also shows which circumstances, i.e., which attributions, have which effect. These didactic-psychological evaluations will be pursued at the University of Mannheim in the winter term 2001/02. An open issue in this regard is to depart from both the introductory video and the global help system of the computer-based setting, and to experiment with smaller units of instruction. Furthermore, the script used in the presented setting was not optimal. Due to the very positive overall rating and the high expected learning progress of our students in the presented learning setting, a differentiated attribution was not possible. In further evaluations, we will try to explore which arguments in the information notes induce which precise reaction. Where is the limit of the plausibility of both positive attribution (here: c’t-article) and negative attribution (here: β-version)? A logical extension of our test setting is to combine positive attribution and script. When the students are told that they are working with a ‘groovy’ product and are furthermore aided by the sustaining element of the script, questions arise such as: Is there an upper limit to what can be reached with the program? Can this upper limit be met? And might a positive attribution already suffice, so that the script could be omitted without negative effect?

BIBLIOGRAPHY

[BFNS00] Katrin Borcea, Hannes Federrath, Olaf Neumann, and Alexander Schill. Entwicklung und Einsatz multimedialer Werkzeuge für die Internet-unterstützte Lehre. Praxis der Informationsverarbeitung und Kommunikation, 23(3):164-168, 2000.

[BS89] C. Bereiter and M. Scardamalia. Intentional learning as a goal of instruction. In L.B. Resnick, editor, Knowing, learning, and instruction: Essays in honor of Robert Glaser, pages 361-392. Erlbaum, Hillsdale, NJ, 1989.

[Hof97] Manfred Hofer. Lehrer-Schüler-Interaktion. In F.E. Weinert, editor, Psychologie des Unterrichts und der Schule (Enzyklopädie der Psychologie, Themenbereich D, Serie I, Pädagogische Psychologie), pages 213-252. Hogrefe, Göttingen, Germany, 1997.

[ISLG00] Frank Imhoff, Otto Spaniol, Claudia Linnhoff-Popien, and Markus Gerschhammer. Aachen-Münchener Teleteaching unter Best-Effort-Bedingungen. Praxis der Informationsverarbeitung und Kommunikation, 23(3):156-163, 2000.

[Jus86] L. Jussim. Self-fulfilling prophecies: A theoretical and integrative review. Psychological Review, 1986.

[MCTM94] T. Mayes, L. Coventry, A. Thomson, and R. Mason. Learning through Telematics: A Learning Framework for Telecommunication Applications in Higher Education. Technical report, British Telecom, Martlesham Heath, 1994.

[RBH74] R. Rosenthal, S.S. Baratz, and C.M. Hall. Teacher behavior, teacher expectations, and gains in pupils’ rated creativity. Journal of Genetic Psychology, 124(1):115-121, 1974.

[Ren97] A. Renkl. Lernen durch Lehren: Zentrale Wirkmechanismen beim kooperativen Lernen. Deutscher Universitäts-Verlag, 1997.

[SHE00] Claudia Schremmer, Volker Hilt, and Wolfgang Effelsberg. Erfahrungen mit synchronen und asynchronen Lehrszenarien an der Universität Mannheim. Praxis der Informationsverarbeitung und Kommunikation, 23(3):121-128, 2000.

[Swe88] J. Sweller. Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2):257-285, 1988.

[Swe94] J. Sweller. Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4):295-312, 1994.
