MOODLE TESTS FOR DIGITAL SIGNAL PROCESSING

Juan Carlos G. de Sande, Víctor Osma-Ruíz, César Benavente-Peces
Circuits and Systems Engineering Department, EUIT de Telecomunicación, Universidad Politécnica de Madrid (SPAIN)
[email protected],
[email protected],
[email protected]
Abstract

Computer and web based tests are extensively used at the higher education level for assessment purposes. However, learning, and not only assessment, should be the main objective of these tests. An important drawback of using such tests for learning purposes is the need to generate a large item bank so that students can practice and obtain feedback about their learning process. This drawback is partially solved by some learning management systems, like Moodle, that offer the possibility of automatically generating numerical variations of the parameters of a given question. Here, a case study is presented that compares the results obtained by students when they solve tests delivered by a learning management system under different conditions (before and after practicing with similar tests) and in a supervised or unsupervised environment. The course in which this study was developed (Digital Signal Processing for Electrical and Electronics Engineering undergraduate students) is organized in 4 subjects, and the students solve two tests at the end of each subject. One of the two tests consisted of 5 single choice questions and the other of 5 numerical answer questions (calculated questions). Students attending the course were arbitrarily divided into two groups: one group solved the single choice questions test before practicing and solving the calculated questions test; the other group solved the single choice questions test after practicing and solving the calculated questions test. Additionally, the first group solved the test on the web, during an extended period of time and without instructor supervision, while the second group solved the tests during a laboratory session under instructor supervision. Supervised students could consult reference books and notes in the same way non-supervised students could. For the assessment of the following course subject, the conditions for the two groups were interchanged in order to avoid any potential advantage for either group. At the end of the process, a survey was conducted to determine students' preferences regarding test types and procedures. The results of this study will be useful to improve the learning process through web based tests in the future.

Keywords: Web based test, test based learning, feedback, assessment.
1 INTRODUCTION
The use of learning management systems (LMS) has become an extended practice, especially at the university level [1-3]. One of the most powerful tools offered by these systems is the delivery of automatically assessed questionnaires. Computer or web based tests can be intended for students to check their progress (in real time and with anonymous feedback), for assessment purposes, or to fulfil both objectives, as is the case in the present study. A relevant aspect of the development of the European Higher Education Area is that the assessment of students must be based on competence acquisition and on the student's workload [4-6]. Hence, continuous supervision of students' work is often convenient. Continuous evaluation methods increase the instructors' workload when they deal with numerous groups of students, as is usually the case for basic courses in many engineering curricula [7-8]. In this scenario, the use of an LMS is a very convenient choice for instructors.
LMSs usually offer several ways to create tests with some degree of randomness. Common ways to obtain different quizzes are picking questions from a large item bank, changing the order in which the questions are presented, changing the order in which the possible answers are presented (at least for multiple choice questions), and changing the numerical data when the solution is a simple function of these data. However, for courses with a large number of students there is a high probability that the same question will appear in the tests of several students. On the other hand, access to LMS resources is usually controlled for each student by means of a personal username and password. However, there is no effective way to verify who is actually answering an online questionnaire, so students could cheat.
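To make this randomization concrete, the following minimal Python sketch (the data structures and function names are ours, not actual Moodle code) assembles a quiz by picking questions from an item bank and shuffling both the question order and the answer options:

    import random

    # Hypothetical item bank: each multiple choice question stores its
    # text, a list of answer options, and the index of the correct option.
    ITEM_BANK = [
        {"text": "A signal x[n] is downsampled by 2. Its spectrum is...",
         "options": ["compressed", "expanded", "unchanged", "inverted"],
         "correct": 1},
        # ... more items would be added here ...
    ]

    def build_quiz(item_bank, n_questions, rng=random):
        """Assemble a randomized quiz: pick questions from the bank,
        shuffle their order, and shuffle the options of each one."""
        quiz = []
        for q in rng.sample(item_bank, n_questions):
            options = q["options"][:]
            rng.shuffle(options)
            quiz.append({
                "text": q["text"],
                "options": options,
                # Track where the correct option ended up after shuffling.
                "correct": options.index(q["options"][q["correct"]]),
            })
        return quiz

With a bank much larger than the number of questions per quiz, the probability that two students receive the same quiz drops quickly, which is the motivation for the large item banks mentioned above.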
The present study has been developed in the Escuela Universitaria de Ingeniería Técnica de Telecomunicación (EUITT) at the Universidad Politécnica de Madrid (UPM). Digital Signal Processing is a basic course in the Electrical and Electronics Engineering curriculum. Students can solve quizzes intended for self assessment and formative purposes (their results are not taken into account in the final mark) as well as assessment questionnaires with a given weight in the final mark. Different types of tests were designed and the conditions for solving them were varied. The results obtained by the students for each class of test and each condition are analyzed. The main objective of this work is to compare the results of the evaluation of students' skills through web based tools using various types of questions and under different conditions (supervised and unsupervised), as well as to assess the influence of having a practice period before solving calculated question tests.
2 METHODOLOGY

2.1 Context
Digital Signal Processing is a course programmed in the second semester of the second academic year and is common to all students enrolled in the Electrical and Electronics Engineering studies of the EUITT at the UPM. The course usually lasts 15 weeks; students attend 2 h of lessons per week and a 2 h laboratory session every two weeks. At the EUITT this course is organized in 5 blocks:
T.1. Real-time digital processing of analogue signals
T.2. Discrete Fourier transform
T.3. IIR filters design
T.4. FIR filters design
T.5. Multirate processing
A continuous assessment method is used in this Digital Signal Processing course. Several works and assessment items are programmed along the semester, with the following weights (a worked computation of the final mark is sketched at the end of this subsection):
a) Two online tests after each subject (14.5% of the final mark).
b) Several exercises for each subject done in the classroom (15% of the final mark).
c) Homework (10.5% of the final mark).
d) Laboratory work (20% of the final mark).
e) Final exam at the end of the semester (40% of the final mark).
The final exam is mandatory for the marks obtained through the weekly work of the students to be taken into account. This exam is the same for all students enrolled in the course and is made under the supervision of all the teachers involved in Digital Signal Processing. The present work was developed during the spring semester of the 2010-11 academic year. Although 87 students (19 women and 68 men) were enrolled in the course, only those who participated in solving the proposed tests were taken into account in this study (a total of 72; 17 women and 55 men). These students were arbitrarily split into two groups, A and B (38 and 34 students from groups A and B, respectively, solved the proposed tests).
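As an illustration only, a minimal Python sketch of the weighted final mark computation follows; the weights are those listed above, while the function and key names are ours:

    # Assessment weights from the list above (fractions of the final mark).
    WEIGHTS = {"online_tests": 0.145, "classroom": 0.15,
               "homework": 0.105, "laboratory": 0.20, "final_exam": 0.40}

    def final_mark(marks):
        """Weighted final mark on a 0-10 scale; `marks` maps each
        assessment item to the student's mark for it."""
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 1
        return sum(WEIGHTS[k] * marks[k] for k in WEIGHTS)

    # Example: 0.145*7 + 0.15*6 + 0.105*8 + 0.20*7.5 + 0.40*5 = 6.255
    print(final_mark({"online_tests": 7, "classroom": 6, "homework": 8,
                      "laboratory": 7.5, "final_exam": 5}))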
2.2 Experiment design
According to question type, a particular test contained either only multiple choice questions (MCQ test) or only calculated questions (CQ test). The answer to a calculated question is an open numerical answer and involves some calculation with one or more parameters that are randomly created by the LMS. Moodle permits the automatic generation of 100 numerical variations of the same question [1] (a sketch of this mechanism is given at the end of this subsection). In this way, a large item bank of calculated questions can be generated from a small set of questions (10 to 15 for each test). On the other hand, a reduced item bank was used for generating each MCQ test. According to assessment type, the delivered tests were divided into two classes: practicing tests (students could solve these tests to check their progress, but their results did not influence the final mark) and summative assessment (SA) tests (those that count towards the final mark). For each subject of the course the students could solve one practicing test (containing only CQ) and two SA tests, one of them an MCQ test and the other a CQ test. Practicing tests were open during the 2 days before the date programmed for the CQ test corresponding to each block. The time to solve any of the tests was limited to 30 minutes.

After finishing the content of block T.3 (IIR filters design), a practicing CQ test was open for two days. Then, group A took the SA CQ test in a laboratory session under instructor supervision. They could consult their notes and textbooks during the evaluation session but could not get any dishonest help. On the same date, in a second laboratory session, group B took the SA MCQ test under the same conditions as group A. The very next day both groups solved a second SA test, but in this case group A took an MCQ test and group B a CQ test. Students solved this second test using their own PC or the EUITT facilities. After finishing the content of block T.4 (FIR filters design), both groups of students solved an SA MCQ test. Group A solved this test during a laboratory session under instructor supervision, and group B solved it on the same day, but without instructor supervision. Afterwards, a practicing CQ test was open for two days. Then, both groups took the SA CQ test, but group A solved it without instructor supervision while group B took it during a laboratory session under instructor supervision. In this way, students from both groups solved two tests under instructor supervision and two tests without it, one MCQ test after having the possibility of taking a practicing CQ test and one MCQ test without this possibility, so no student could claim that some of their mates had an advantage due to this experiment. Finally, students were asked to fill in a survey on a five-level Likert scale (see Table 1) to obtain their opinion about several aspects of CQ and MCQ tests and about the conditions for solving the tests.

TABLE 1: Survey items. Each item was answered on a five-level scale: TA (totally agree), A (agree), N (neutral), D (disagree), TD (totally disagree).
I.1. MCQ tests helped me to understand the subject contents.
I.2. CQ tests helped me to understand the subject contents.
I.3. Practicing CQ tests helped me to solve MCQ tests.
I.4. CQ tests helped me to solve MCQ tests.
I.5. MCQ tests helped me to solve CQ tests.
I.6. I prefer that all SA tests were MCQ tests.
I.7. I prefer that all SA tests were CQ tests.
I.8. The weight of the tests in the final mark (14.5%) is adequate. In case of disagreement, indicate if they should have a higher (+) or lower (-) weight.
I.9. I prefer that test results were not taken into account for obtaining my final mark in this course.
I.10. The time stated (30 min) for solving the MCQ tests is adequate (indicate + or - in case of disagreement).
I.11. The time stated (30 min) for solving the CQ tests is adequate (indicate + or - in case of disagreement).
I.12. The time that practicing tests are available (2 days) is adequate (indicate + or - in case of disagreement).
I.13. The time that SA tests are available (1 day for solving them without instructor supervision) is adequate (indicate + or - in case of disagreement).

Prior to this experiment, both groups of students had solved tests corresponding to blocks T.1 and T.2. The mean marks in these tests for both groups of students were compared to check whether both groups showed similar performance. Then the mean marks and standard deviations, as well as an analysis of variance, for both groups of students in the four tests were analyzed. Histograms of the students' survey answers were also studied.
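As an illustration of how an item bank can grow from a single question, the following Python sketch mimics, in spirit, Moodle's calculated question mechanism; the question text, parameter range, and answer tolerance are invented for the example:

    import random

    def make_variants(n_variants=100, seed=0):
        """Generate numerical variants of one calculated question:
        random parameters are substituted into a question template
        and the expected numerical answer is computed for each."""
        rng = random.Random(seed)
        template = ("A signal is sampled at fs = {fs} kHz. "
                    "What is the Nyquist frequency in kHz?")
        variants = []
        for _ in range(n_variants):
            fs = rng.randrange(8, 49)       # sampling rate in kHz
            variants.append({
                "question": template.format(fs=fs),
                "answer": fs / 2.0,         # correct numerical answer
                "tolerance": 0.01,          # accepted relative error
            })
        return variants

In this way a set of 10 to 15 question templates per test yields, for practical purposes, a different CQ test for every student.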
3 RESULTS AND DISCUSSION

3.1 Tests marks analysis
Fig. 1 shows the histograms of the marks of group A and group B students in the SA tests corresponding to the first and second subjects of the course (T.1 and T.2). The marks obtained by both groups in the T.1 subject test have practically the same histogram (blue and green columns in Fig. 1). The mean value and standard deviation are 4.69 and 1.85 for group A marks and 4.38 and 1.65 for group B marks. For the tests corresponding to subject T.2, it can be observed that the histogram of the marks for group B is slightly displaced towards higher marks. In this case, the mean value and standard deviation are 7.00 and 2.95 for group A and 7.53 and 2.26 for group B. However, an analysis of variance yields p=0.46 for the first test marks and p=0.52 for the second, so the small difference in performance between the two groups is not statistically significant. The increase in the mean marks obtained in the test for block T.2 compared with the test for subject T.1 could be due to the feedback received by the students about their poor progress and/or to the different difficulty of the two subjects.
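The p-values reported in this section come from one-way analyses of variance comparing the marks of the two groups. A minimal sketch of this computation with SciPy, using invented marks in place of the real data, could look as follows:

    from scipy import stats

    # Illustrative marks (0-10 scale) for the two groups in one test;
    # the real study used the marks of 38 and 34 students.
    marks_a = [4.5, 6.0, 3.5, 5.0, 7.0, 4.0, 2.5, 6.5, 5.5, 3.0]
    marks_b = [4.0, 5.0, 3.0, 6.0, 4.5, 5.5, 2.0, 6.0, 4.0, 3.5]

    # One-way ANOVA; with two groups this reduces to a two-sample F test.
    # p > 0.05 means the difference in means is not significant at the
    # 95% confidence level.
    f_stat, p_value = stats.f_oneway(marks_a, marks_b)
    print(f"F = {f_stat:.2f}, p = {p_value:.2f}")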
Figure 1: Histograms of the marks obtained by students in groups A and B for tests corresponding to subjects T.1 and T.2 of the course.

Fig. 2 shows the histograms of the marks obtained by the students of group A (CQ_S and MCQ_U, where S means that the test was solved in a laboratory session under instructor supervision and U means that it was solved under unsupervised conditions) and group B (CQ_U and MCQ_S) for the unit T.3 tests. Both groups of students had the opportunity to solve practicing CQ tests before taking these two SA tests. From Fig. 2 it is clear that the CQ_S test histogram is displaced towards lower marks compared with the CQ_U test histogram. The mean mark and standard deviation for the CQ_S test were 5.89 and 2.90, respectively, while for CQ_U they were 7.15 and 2.60. An analysis of variance comparing these two sets of marks yields p=0.06; hence, the difference between them is statistically significant at the 94% confidence level.
On the other hand, it can be observed that the MCQ_S test histogram is also displaced towards lower marks compared with the MCQ_U histogram. In this case, the mean mark and standard deviation for the MCQ_S test were 6.82 and 2.39, respectively, while for MCQ_U they were 7.77 and 1.65. An analysis of variance comparing these two sets of marks yields p=0.07, so the difference between these two groups of marks can be considered statistically significant at the 93% confidence level.
Figure 2: Histograms of the marks obtained by students of groups A (CQ_S and MCQ_U) and B (CQ_U and MCQ_S) for tests corresponding to subject T.3 of the course.

Fig. 3 shows the histograms of the marks obtained by the students of group A (CQ_U and MCQ_S) and group B (CQ_S and MCQ_U) for the block T.4 tests. For the T.4 unit, the groups did not have the opportunity to solve a practicing CQ test before taking the MCQ SA test. However, both groups could solve a practicing CQ test before they solved the CQ SA test corresponding to the T.4 unit. The mean mark and standard deviation for the CQ_S test marks were 6.30 and 3.01, respectively. The mean mark and standard deviation for the CQ_U test marks were 7.38 and 2.71, respectively. An analysis of variance comparing these sets of marks yields p=0.14. Although the histogram corresponding to the CQ_U test marks seems to be slightly displaced towards higher marks compared with the CQ_S test marks (see Fig. 3), the high p value does not permit inferring any significant difference between the two sets of marks. This could be due to the large dispersion of both sets of marks. However, when comparing the histograms of MCQ_S and MCQ_U, a displacement towards higher marks for MCQ_U is quite clear. The mean mark and standard deviation for the MCQ_S test marks were 7.71 and 1.40, respectively. The mean mark and standard deviation for the MCQ_U test marks were 8.88 and 1.49, respectively. An analysis of variance comparing these sets of marks yields p=0.001, which indicates a statistically significant difference between the two sets of marks.
Figure 3: Histograms of the marks obtained by students of groups A (CQ_U and MCQ_S) and B (CQ_S and MCQ_U) for tests corresponding to subject T.4 of the course.

On the other hand, the mean mark obtained by all students in the two MCQ tests (regardless of whether they were solved under instructor supervision or not) was 7.80 with a standard deviation of 1.90, while the mean mark obtained in the two CQ tests was 6.65 with a standard deviation of 2.84. A similar difference was found when comparing the mean marks of the tests solved under instructor supervision (6.65) and under unsupervised conditions (7.81). These results can be interpreted as indicating that the MCQ tests were easier than the CQ tests and that the students obtained better results when they solved the tests without the presence of the instructor.
3.2 Survey results
Fig. 4 shows the histograms of the students' responses to items I.1 to I.7 in Table 1, concerning the students' test type preferences. It can be observed that the histograms corresponding to items I.1, I.2 and I.3 are displaced towards the "agree" or "totally agree" responses. In fact, more than 2/3 of the responses (78%, 67%, and 71% for items I.1, I.2, and I.3, respectively) fall into these two categories. It can therefore be inferred that students consider the tests a useful learning tool. However, the histograms for items I.4 and I.5 show that 41% and 46% of students, respectively, consider that taking a CQ test helps them to solve an MCQ test (item I.4) or that taking an MCQ test helps them to solve a CQ test (item I.5). Finally, the histograms for items I.6 and I.7 show that there is no clear preference for one test type over the other.
Figure 4: Histograms of the students' answers to items I.1 to I.7 in Table 1.
Fig. 5 shows the histograms of the students' responses to items I.8 to I.13 in Table 1, concerning the students' opinion about the influence of the tests on the final mark and about the time allotted to solve them. Most students agree or totally agree (51%) with the weight given to the tests in the final mark (item I.8) and with the inclusion of this kind of assessment (62% disagree or totally disagree with item I.9). The histograms of items I.10 and I.11 reveal that most students consider that the time limit (30 minutes) given to solve both the MCQ (67% of the students) and the CQ (48% of the students) tests is enough. The mean time required to solve the MCQ tests was 10.4 minutes (with a standard deviation of 4.6 minutes); for the CQ tests, it was 16.6 minutes (with a standard deviation of 5.6 minutes). For both types of test, the mean time required was well below the 30-minute limit. Finally, the students' opinion about the period of time they have to practice with CQ tests is divided (45% agree or totally agree and 45% disagree or totally disagree), while most students (60%) considered that the period of time during which they can solve the SA tests without instructor supervision is not long enough.
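The agreement percentages quoted in this subsection are simple tallies over the five Likert levels; a minimal Python sketch of this aggregation, with invented responses for one survey item:

    from collections import Counter

    # Illustrative responses to one survey item on the five-level scale.
    responses = ["TA", "A", "A", "N", "D", "A", "TA", "N", "A", "TD"]

    counts = Counter(responses)
    total = len(responses)
    for level in ["TA", "A", "N", "D", "TD"]:
        print(f"{level}: {100 * counts[level] / total:.0f}%")

    # Fraction of agreeing students, as reported in the text above.
    agree = 100 * (counts["TA"] + counts["A"]) / total
    print(f"agree or totally agree: {agree:.0f}%")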
Figure 5: Histograms of the students' answers to items I.8 to I.13 in Table 1.

Analyzing the time required to solve the different tests under unsupervised conditions, a fact stands out: some students finished the tests in a very short time (around 2 minutes or less, especially for unsupervised MCQ tests) and obtained a high score (in the 7 to 9 range or higher than 9). This made the instructors suspect that these students could have used some help to solve the MCQ tests [6]. Additionally, this could be a determining factor in the high mean marks obtained for MCQ tests solved under unsupervised conditions.
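The screening that raised this suspicion can be expressed as a simple filter over the attempt records; the following sketch uses invented records, with thresholds chosen to match the pattern described above (around 2 minutes or less, mark of 7 or more):

    # Illustrative attempt records: (student id, minutes taken, mark 0-10).
    attempts = [("s01", 1.8, 9.5), ("s02", 12.0, 7.0), ("s03", 1.5, 8.0),
                ("s04", 18.5, 6.5), ("s05", 9.0, 4.0)]

    # Flag attempts finished in around 2 minutes or less with a mark
    # of 7 or higher, the pattern the instructors found suspicious.
    TIME_LIMIT, MARK_FLOOR = 2.0, 7.0
    suspicious = [(sid, t, m) for sid, t, m in attempts
                  if t <= TIME_LIMIT and m >= MARK_FLOOR]
    print(suspicious)   # -> [('s01', 1.8, 9.5), ('s03', 1.5, 8.0)]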
4 CONCLUSIONS
In this work, the performance of two groups of students (that had shown similar grades in previous work) in solving different types of tests under different conditions is studied. The analysis of the results shows that: i) students obtained better results in MCQ tests than in CQ tests; ii) students obtained better results when they solved the tests without instructor supervision than under instructor supervision. It was also observed that some students finished their unsupervised MCQ test in a very short time compared to the average completion time and obtained higher scores, so it can be inferred that they probably had some advantage over the supervised students. On the other hand, the analysis of the survey items shows that: i) students consider that both CQ and MCQ tests helped them to understand and assimilate the course content; ii) practicing with CQ tests helped students to solve MCQ tests; iii) students did not show any preference regarding test type; iv) they consider the tests a useful tool for assessment purposes; and v) they considered that the schedule followed for the tests was adequate.
These results and conclusions can help to redefine the evaluation method in order to obtain better learning results and to set appropriate weights for the evaluations that are not supervised by the instructor in comparison with those that are.
REFERENCES
[1] Moodle.org: open-source community-based tools for learning, http://www.moodle.org, [Accessed 10 May 2011].
[2] Questionmark, http://www.questionmark.com/uk/index.aspx, [Accessed 10 May 2011].
[3] Openmark, https://openmark.dev.java.net/, [Accessed 10 May 2011].
[4] European Ministers of Educ., "The European higher education area," European Union, Bologna (Italy), Joint Declaration, 1999. [Online]. Available: http://www.ond.vlaanderen.be/hogeronderwijs/bologna/documents/MDC/BOLOGNA_DECLARATION1.pdf
[5] R. Fraile, I. Argüelles, J. C. González, J. M. Gutiérrez-Arriola, J. I. Godino-Llorente, C. Benavente, L. Arriero, D. Osés. A Systematic Approach to the Pedagogic Design of Final Year Projects: Learning Outcomes, Supervision and Assessment. Int. J. Engng Ed., Vol. 26, No. 4, pp. 997-1007, 2010.
[6] J. C. G. de Sande, R. Fraile, L. Arriero, V. Osma, D. Osés, L. Narvarte, and J. I. Godino-Llorente. Cheating and Learning through web based tests. Proceedings of ICERI2010 Conference, 15-17 November 2010, Madrid, Spain.
[7] C. R. Smaill. The Implementation and Evaluation of OASIS: A Web-Based Learning and Assessment Tool for Large Classes. IEEE Transactions on Education, Vol. 48, No. 4, pp. 658-663, 2005.
[8] J. C. G. de Sande, L. Arriero, C. Benavente, R. Fraile, J. I. Godino-Llorente, J. M. Gutiérrez-Arriola, D. Osés, V. Osma. Evolution of efficiency and success rate for Electrical and Electronic Engineering students at EUITT from Universidad Politécnica de Madrid. Proceedings of INTED2009 Conference, 9-11 March 2009, Valencia, Spain.