The Value of Video Quizzes in a Computer Science Flipped Classroom: An Empirical Study

Lisa L. Lacher
Department of Computer Science, University of Houston – Clear Lake, Houston, Texas, USA
[email protected]

Mark C. Lewis
Department of Computer Science, Trinity University, San Antonio, Texas, USA
[email protected]
Abstract—Flipping the classroom has become much more common in the last several years, but in the field of Computer Science it is still a much rarer means of delivery than the traditional classroom setting. One big concern is whether or not students will spend the necessary preparation time outside of the classroom so that they come to class ready to engage in the active learning activities prepared by the instructor, and thus maximize their learning potential. This study concerns four sections of a Principles of Computer Science I course. Each student was required to watch video lectures prior to class. Some students were also required to take pre-class video quizzes, which provided those students with immediate feedback on their level of preparedness. Statistical analyses did not support the hypothesis that the pre-class video quizzes were effective in helping the students earn better grades. Student perceptions of the usefulness of video quizzes and anecdotal evidence from instructor experience are also presented.

Keywords—education; computer science; flipped classroom; assessment
I. INTRODUCTION

The ability for faculty to easily record and post video content, combined with the capability for students to view that content on many platforms and in many locations, has led to a significant increase in the use of flipped classrooms. In flipped or inverted classrooms, students learn new content from sources such as video lectures, which prepare them to work on assigned problems in the classroom. The basic idea behind flipped classrooms is that more classroom time can then be dedicated to active learning where the instructor can provide immediate feedback. However, the success of the flipped classroom is highly dependent on students actually taking time outside of class to do the required preparation. In this paper we look at the impact of having regular quizzes over the video lectures compared to just having verbal admonitions from the instructor.

The flipped classroom approach has been used in a variety of disciplines, including computer science, although it appears that more instructors have experimented with this approach in upper division courses. Gehringer and Peddycord [1] used the flipped model in an upper level Computer Architecture course. Sureka, Gupta, Sarkar, and Chaudhary [2] as well as Gannod, Burge, and Helmick [3] experimented with a junior level Software Engineering course. Amresh, Carberry, and Femiani [4] evaluated the effectiveness of the flipped classroom approach for teaching CS1; their early observations on the success of this model were mixed. These studies essentially report on the initial success of the flipped approach in these classes in terms of engagement and grades. However, a major factor in this success, or lack of it, is how well the students prepare outside of the classroom so that they can take maximum advantage of the activities that take place during class time.

Our work also takes place in CS1, taught using a flipped methodology. In particular, we are evaluating whether or not gate-check quizzes help students better prepare outside of the classroom, ensuring that they have the background information and knowledge necessary to effectively engage and participate in the flipped class session's activities. This paper reports the preliminary findings of our study to understand if gate-check quizzes help beginning programming students be better prepared.

The remainder of this paper is organized as follows: Section II provides a general background of flipped classrooms and the course framework. Section III details the design of the experiment that was conducted to investigate the usefulness of video quizzes as a tool for motivating students to watch the videos. Section IV presents an analysis of the results from the experiment. Section V gives a summary and the conclusions.

II. BACKGROUND

The use of lecture as the primary method of imparting information to students during class time has a long history. Teaching Computer Science courses has typically involved the instructor lecturing and conveying information while the students were responsible for listening and note-taking. Current research into how students learn challenges the traditional approach and suggests that students should be more actively engaged with the material in the course to help maximize their learning.
The National Research Council has stated that "…the new science of learning is beginning to provide knowledge to improve significantly people's abilities to become active learners who seek to understand complex subject matter and are better prepared to transfer what they have learned to new problems and settings" [5]. There are many active learning strategies in the literature; however, incorporating these active learning strategies in class means that the instructor has less time to convey the course content required for the learning activities. One pedagogical model that has emerged to address this issue is the flipped or inverted classroom.
A. Flipped Classrooms

The fundamental idea behind flipping the classroom is that more classroom time should be dedicated to active learning where the teacher can provide immediate feedback and assistance. In this model, students are first exposed to the material they are studying outside of class, using video lectures or by reading a textbook. During class time, they usually do problem solving exercises or discuss the topics they were just exposed to. When we look at Bloom's revised taxonomy [6] (shown in Figure 1), we can see that the goal of this method is for the students to do the lower levels of cognitive work outside of class and then focus on the higher levels of cognitive work in class, where they can get assistance from the instructor or their peers. This is quite different from the traditional classroom model, where the students are first exposed to the material during class time via lectures and the higher levels of cognitive work are performed outside of class through homework.
Figure 1 Bloom's Taxonomy (Revised)
Carnegie Mellon University's Just-in-Time Lecture project suggests that video-based education supports the same level of learning effectiveness as face-to-face instruction [7]. Zhang et al. assessed the impact of interactive video on learning effectiveness and found that simply incorporating video is not always sufficient to improve learning [8]. They suggest that interactivity can be used to improve learning effectiveness. The student must take the initiative outside of class to prepare for class, and not all students, especially students in beginning level courses, have the self-discipline to take on this responsibility without some help from the instructor. Not all instructors have the technology available to them to create interactive videos. Will having a pre-class quiz increase the likelihood that students will use out-of-class time to watch the videos in order to learn the material necessary to be successful? Educators have suggested that such quizzes will help assure that students are prepared and will earn better grades, but provide no evidence of the success of this practice [9, 10].
B. Course Design

The CS1 course, CSCI1320 – Principles of Computer Science, includes an introduction to the Scala programming language and the Linux command line, and covers sequence, selection, branching, iteration, higher order functions, and collections in the first half of the course. This course is required of students majoring in computer science, engineering, physics, and mathematics. It is an option in the Biology and Geoscience majors and counts as a possible course in Trinity's Common Curriculum, so many students consider taking it for reasons outside their major.

The four sections of this course are designed to be an initial introduction to a programming environment. Three of the sections met two times per week for seventy-five minutes; our initial study only concerns half of a fifteen week spring semester, for a total of fifteen meetings. The last section met three times per week for fifty minutes and was held in the previous fall semester; this study is likewise only concerned with the progress of that class during the first half of the semester.

Author Lewis began flipping his sections of CS1 and CS2 in the fall of 2012 using video lectures posted on YouTube. Every chapter of the textbook had several videos associated with it, ranging from four to twelve videos per chapter. Students were required to watch a total of 53 videos (13 hours and 52 minutes total time) before the midterm. Videos ranged from seven to twenty-six minutes in length. All in-class activities were problem-based exercises that required the students to write code, based on certain pre-defined sections of the textbook and videos for each day. Students were free to work alone or collaborate on all in-class activities. Solutions to all exercises were discussed by the instructor during the class, and all code written in class was posted on the course website so that students could study it further after class.
At the end of the class period, students were required to fill out a minute essay, part of which included giving the instructor feedback on subject matter that they wished to have explained further in the next class period.

Our study is motivated by our desire to improve the course, and by our hunch, supported by student comments, about the amount of time students actually spend watching the videos and preparing outside of class. We hypothesize that this shortcoming could be addressed with gate-check quizzes.

III. STUDY DESIGN

The study is designed to answer the question of whether the use of "gate-check" pre-class video quizzes is helpful in improving students' learning as measured by in-class quiz and midterm exam scores. The study compares the performance of students who were required to complete pre-class video quizzes, which provided immediate feedback about their level of preparedness, with students who did not use any sort of pre-class quiz to self-evaluate their preparedness for the next active learning activities. The experiment uses a non-equivalent pre-test post-test control group design consisting of a control group (no pre-class video quizzes) and an experiment group (required pre-class video quizzes). The details of the study are given below.

A. Hypotheses

The following hypotheses were posed in the study:

Hypothesis 1: The students using the gate-check video quizzes produce higher in-class quiz and midterm exam grades than their peers who did not have to take the gate-check video quizzes.

Hypothesis 2: The students with lower aptitude test scores benefit more (earn higher grades) from the gate-check video quizzes than their lower-aptitude peers who did not have to take them.

1) Independent and Dependent Variables: The experiment manipulated the following independent variables:

The pre-class video quiz technique. Subjects either took a pre-class quiz over the required video content for the course or did not take any pre-class quiz.

The computer aptitude test. Subjects took a computer aptitude test during the first week of the courses.

The following dependent variables were measured:

Student in-class quiz and midterm exam performance. These measures include the student grades on the three in-class quizzes and the midterm exam.

B. Participating Subjects

Sixty-three undergraduate students enrolled in the Principles of Computer Science I course at Trinity University participated in this study. These students were in four different class sections taught by two different instructors (each taught two sections). Each section had between fifteen and eighteen students enrolled. One section was in the fall of 2013, and the other three were in the spring of 2014. The course was taught in a flipped classroom style and required that the students watch video lectures and read required textbook sections before attending class.

C. Artifacts

Each of the students had taken three quizzes and a midterm exam after seven and one half weeks of class. All sections used identical written tests and quizzes, which were graded by a student grader using a rubric developed by the instructors for consistency, except for the Fall 2013 course, which used different in-class quizzes that are not included in this study (only midterm grades were compared). Although one class was during the previous semester, the same schedule (as seen in Table I) was followed. During this time the students were required to watch the videos and read the text associated with each day's in-class active learning activities. Every day in class the students worked on writing code solutions for a variety of problems posed by the instructor. On some days the class

TABLE I.
COURSE SCHEDULE

Date | Topic                                                          | Reading                   | Due
1-16 | Introduction to Class and the Future of Computing              | Ch. 1                     |
1-21 | Future of Computing and the Linux Command Line, Linux, vi and Scala | Ch. 2.1-2.3          |
1-23 | More Command Line, Linux, vi and Scala                         | Ch. 3.1-3.3               |
1-28 | Show Your Code with Scala Expressions and Types; More Command Line, Types and Methods; Binary Arithmetic | Ch. 3.1-3.5 | IcP #1 (Chapter #2 Exercises)
1-30 | Strings and Variables, Sequential Execution and Scripts        | Ch. 3.6-3.9               | Quiz #1
2-4  | Boolean Expressions and if                                     | Ch. 4.1-4.3               |
2-6  | Boolean Expressions, Functions, and Function Literals          | Ch. 4.4-4.5, 5.1-5.2      |
2-11 | Show Your Code with Higher Order Functions                     | Ch. 5.3-5.5               | IcP #2 (Chapter 4 Projects)
2-13 | Higher Order Functions and Recursion for Repetition            | Ch. 5.6-5.7, Ch. 6.1-6.4  | Quiz #2
2-18 | Show Your Code and Recursion, Match and Patterns               | Ch. 6.5-6.7               | IcP #3 (Chapter 5 Projects)
2-20 | Collection Types (Arrays and Lists)                            | Ch. 7.1-7.3               |
2-25 | Collection Methods                                             | Ch. 7.4-7.6               | Quiz #3
2-27 | Argument Passing and While Loops                               | Ch. 7.7-7.11, Ch. 8.1-8.2 |
3-4  | Show Your Code with Loops                                      | Ch. 8.3-8.6               | Assignment #1 (Chapter 6 Projects); IcP #4 (Chapter #7 Projects)
3-6  | Midterm                                                        |                           |
started off with an Interclass problem (IcP). Here students were required to code solutions to textbook project selections of their choice prior to coming to class and then show their solutions to the class. On other days, the class started with an in-class quiz, which was graded and returned to the students the following class period. There was one assignment during the first half of the semester, and this was worked on by the students outside of class.

D. Experimental Procedure

To evaluate the hypotheses posed in Section III.A, the study contained a control group and an experiment group. Figure 2 provides an overview of the procedure followed. The details of the experiment steps are given below.

1) Step 1 (Course Assessments with and without Video Quizzes): The grades for the video quizzes needed to be included in the overall grading for the course. Before the video quizzes were introduced, course grading was calculated as follows: four assignments were worth forty percent of the grade, two tests (midterm and final) were worth thirty percent, five quizzes were worth ten percent, Interclass problems were worth ten percent, and class participation was worth ten percent. With the introduction of the video quizzes, it was decided to reduce the assignment and test percentages; thus in the courses with the video quizzes the grading was calculated as follows: four assignments were worth thirty-five percent of the grade, two tests (midterm and final) were worth twenty-five percent, five quizzes were worth ten percent, Interclass problems were worth ten percent, twenty-six video quizzes were worth ten percent, and class participation was worth ten percent.
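The revised weighting scheme above can be sketched as a simple weighted average. This is an illustrative sketch only; the weights come from the paper, but the component scores and the function name are hypothetical:

```python
# Weights from the revised grading scheme (video-quiz sections).
weights = {
    "assignments": 0.35,
    "tests": 0.25,
    "in_class_quizzes": 0.10,
    "interclass_problems": 0.10,
    "video_quizzes": 0.10,
    "participation": 0.10,
}

def course_grade(scores):
    """Weighted average of component scores, each on a 0-100 scale."""
    # Sanity check: the category weights must total 100%.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in weights)

# Hypothetical student scores, for illustration only:
example = {"assignments": 85, "tests": 78, "in_class_quizzes": 90,
           "interclass_problems": 88, "video_quizzes": 95, "participation": 100}
print(round(course_grade(example), 2))  # prints 86.55
```

Note that both schemes total one hundred percent; the video-quiz sections simply shift ten points of weight away from assignments and tests.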
2) Step 2 (Pre-Test): A twenty-six item pre-study computer aptitude test, an eighteen question learning survey, and a nineteen question general information survey were administered at the beginning of the semester. The Fall 2013 course did not take these tests, so the students in that course were not considered for that part of the study.

a) Computer Programming Aptitude Test: This is a hybrid test created by the University of Kent [11] that is comprised of elements involving numerical problem solving, logical reasoning, attention to detail, pattern recognition, and the ability to follow complex procedures. This test is very appropriate for Computer Science I students because it does not require any knowledge of programming. Numerical problem solving is similar to the logical thinking and troubleshooting required in programming. Pattern recognition is necessary in understanding the graphical representations of symbols and procedures, and leads to the attention to detail required to do things such as find misspelled variable names or missing semicolons. The ability to follow complex procedures is necessary to follow coded instructions, sequences, branching, and repetitions of events in order to decide how one set of instructions affects another and the flow of events. The students first took an untimed, easy three question practice test before they took the timed (forty minutes) twenty-six question test.

b) Learning Approach Survey: This eighteen item survey was developed by Dolmans, Wolfhagen, and Ginns [12] and provides a valid and reliable way to measure students' learning approaches (deep vs. surface approaches). They state that "based on intrinsic interest in the topic, students taking a deep approach try to understand ideas and seek meaning and understanding. Often driven by a fear of failure, students adopting a surface approach are focused on meeting external requirements, such as assessments, particularly through rote learning. A deep approach is assumed to positively correlate with achievement and a surface approach is assumed to negatively correlate with achievement." Because the objective of this study was mainly to investigate the effect of video quizzes on learning effectiveness, learner characteristics were not considered as independent variables for this study; we collected this information for future work.

c) General Information Survey: In this survey students were asked questions such as their gender, previous programming experience, previous computer experience, confidence in their technical skills, comfort with this computer science course, grade expectations, the time they spend on extracurricular activities, work, and caring for others, how they rate their time management skills, whether they like to work collaboratively or alone, whether or not they were ever encouraged to take computer science courses, and their nationality. This information was collected for future research.
Figure 2 Experiment Design
3) Step 3 (Pre-Class Video Quizzes): Two sections of students were required to complete a quiz based on videos that covered certain sections of the text (see Table I) prior to the start of class. The number of questions on each quiz ranged from four questions on the shortest quiz to eleven questions on the longest quiz. There were generally one to two questions asked per video. The quiz questions were one of the following types: true/false, multiple choice, or matching. The video quizzes were given via our learning management system, which allowed them to be auto-graded and closed at class time each day. The quizzes typically focused on details, such as syntax, that would have been difficult for students who did not watch the videos or read the book, but whose answers were straightforward after watching the videos. The questions were also presented in the order that the material appeared in the videos. The students were allowed three attempts on the first six video quizzes; then, based on class feedback, this was modified to two attempts per quiz for the last six quizzes.

4) Step 4 (Post-Study Questionnaire): A fifteen question post-study questionnaire was administered to the students after the midterm. The post-study questionnaire collected feedback from the students regarding their usage of the videos and the text, and their perception of the effectiveness of the video quizzes, or of the lack of video quizzes.
IV. ANALYSIS AND RESULTS

Statistical analyses were carried out on the data collected during the study. The hypotheses described in Section III.A were tested to determine the usefulness of the video quizzes in improving student understanding as reflected in grades. An alpha value of 0.05 was selected for judging the significance of the results. The post-study questionnaire was also used to gain an understanding of the students' perception of the usefulness of the video quizzes to their learning.

A. Hypothesis 1: The students using the gate-check video quizzes produced higher in-class quiz and midterm exam grades than their peers who did not have to take the gate-check video quizzes

This section provides an analysis of the effect that the video quizzes had on the student in-class quiz and midterm exam grades. The marks scored by the students that used video quizzes were compared with the marks scored by the students that did not use video quizzes. Each of the scores was tested using an independent samples t-test.
Figure 4. Comparing Total Scores of the students who took video quizzes (VID) and the students who did not take video quizzes (NoVID)
Figure 3 shows comparisons of the average scores of students who took the video quizzes (VID) and students who did not use video quizzes (NoVID) on Quiz 1, Quiz 2, Quiz 3, and the Midterm. On Quiz 1 the NoVID students performed better than the VID students, while the VID students performed better than the NoVID students on the others. On the Midterm, the VID students did slightly better than the NoVID students. However, the p-values from the t-tests, illustrated in Figure 3, do not show a significant difference in the average scores of VID and NoVID students for three of the four grade items: Quiz 1 (p = 0.23), Quiz 3 (p = 0.72), and the Midterm (p = 0.83) are all greater than the alpha value of 0.05. Comparing the means over the quiz and midterm deliverables using a simple t-test, the NoVID mean of 73 was slightly better than the VID average score of 72.2. However, given a NoVID standard deviation of 15.82 and a VID standard deviation of 14.36, and using an alpha of 0.05, the difference between the average grades for the two groups (VID vs. NoVID) was not significant. An independent samples t-test reveals that the NoVID students performed as well as the VID students on grades. These results indicate that using the video quizzes was not beneficial to grade performance.
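The overall comparison above can be reproduced directly from the reported summary statistics. The sketch below computes a Welch-style t statistic; the means and standard deviations are from the paper, but the per-group sizes are hypothetical (the 63 participants split roughly evenly between VID and NoVID):

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples, from summary statistics."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)  # standard error of the difference
    return (mean1 - mean2) / se

# NoVID: mean 73, sd 15.82; VID: mean 72.2, sd 14.36 (reported above).
# Group sizes of 31 and 32 are assumed for illustration.
t = welch_t(73.0, 15.82, 31, 72.2, 14.36, 32)
print(round(t, 3))  # prints 0.21
```

A t statistic this close to zero falls far short of the critical value (roughly 2 for samples of this size at alpha = 0.05), consistent with the non-significant difference reported above.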
Figure 3. Comparing performances of students who took the video quizzes (VID) and students who did not use the video quizzes (NoVID) on in-class quizzes and the Midterm
B. Hypothesis 2: The students with lower aptitude test scores would benefit more (earn higher grades) using the gate-check video quizzes than their lower-aptitude peers who did not have to take the gate-check video quizzes

This section provides an analysis of the effect that video quizzes had on the grade performance of students with different computer aptitudes. Figure 4 shows each student's aptitude and compares the scores of the students who took the video quizzes with those who did not. It can be seen that, regardless of whether or not the students took the video quizzes, increased aptitude seems to correlate with increased course performance. Our next question, then, is whether the video quizzes had any effect.
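The analysis that follows buckets students by their University of Kent aptitude score (nineteen or above is above average, thirteen to eighteen is average, twelve or less is below average). A minimal sketch of that categorization, with an illustrative function name of our choosing:

```python
def aptitude_category(score):
    """Bucket a Computer Programming Aptitude Test score using the
    cutoffs described in the text: >= 19 high, 13-18 average, <= 12 low."""
    if score >= 19:
        return "high"
    elif score >= 13:
        return "average"
    return "low"

# The group mean scores reported in the study: high = 21, average = 16, low = 9.
print([aptitude_category(s) for s in (21, 16, 9)])  # prints ['high', 'average', 'low']
```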
The University of Kent's Computer Programming Aptitude Test scores are divided into three categories. Nineteen or above is considered an above-average test score; people with these scores are most likely ready for the challenge of computer careers. Scores between thirteen and eighteen are considered average; even though the test is only a rough guide, people who score within this range may find it more difficult to do well as computer programmers. Scores less than thirteen are considered below average. To help analyze this hypothesis, we divided the students into three categories: high = students with scores of nineteen or above, average = students with scores between thirteen and eighteen, and low = students who scored twelve or less. This formed three groups for each of the VID and NoVID categories. It is interesting to note that the average computer programming aptitude score in the above-average (High) category was 21 for both the students who took the video quizzes (VID) and the students who did not (NoVID), and that the average aptitude score in the average category was likewise the same for both groups, at 16. There were no NoVID students in the Low category; the average aptitude score for the below-average (Low) students who took the video quizzes was 9.

Figure 5 Student Computer Aptitudes vs. Grade Performance

Figure 5 shows that the VID students performed better than the NoVID students, and a t-test showed that the grade difference was statistically significant (p = 0.025); thus the video quizzes seemed to have an effect on grade performance. Although both the VID students and NoVID students in the average aptitude category had exactly the same average aptitude, the NoVID students performed better than the VID students, although the grade difference was not statistically significant. It therefore appears that the video quizzes did not help students with average computer programming aptitude. Although there were no students in the NoVID group with below-average computer programming aptitudes, it is interesting to note that the lower-aptitude VID students performed better on grades than the average-aptitude VID students, although they did not perform better than the average-aptitude NoVID students. It is possible that the video quizzes may have helped these below-average-aptitude students, because their computer aptitude scores suggest these students should find it much more difficult to perform well in this class, yet they performed close to the average NoVID students and surpassed the average VID students. We attempted to tease out possible answers to this type of question through our post-study student survey.

C. Student Perceptions
The usefulness of the pre-class video quizzes was also evaluated using feedback from the subjects in both the experiment group (the members of VID) and the control group (the members of NoVID). The questionnaire questions fell into three categories: how the students used the quizzes, how the students used the textbook, and their perception of whether or not the quizzes were useful. Because this study is primarily concerned with the video quizzes, we will look primarily at the eight questions that dealt with those. It is apparent from the question results shown in Table II that the video quizzes did cause the students to watch the videos more often and watch them more thoroughly, which was one of the fundamental goals of having the students take the video quizzes.

TABLE II.
VIDEO USAGE

Did you watch the video lectures?
              | VID | NoVID
Rarely        | 15% |  8%
Sometimes     | 20% | 58%
Often         | 25% | 17%
Almost Always | 40% | 17%

Did you review portions of the lecture that seemed unclear?
Rarely        | 15% | 33%
Sometimes     | 35% | 50%
Often         | 30% | 17%
Almost Always | 20% |  0%

Percentage of videos generally watched for each class meeting:
None          |  5% |  0%
25%           | 10% | 58%
50%           | 20% | 25%
75%           | 40% | 17%
100%          | 25% |  0%
It can also be seen in Table III that both the students who were required to take the video quizzes (VID) and the students who were not (NoVID) felt that the quizzes were (or would have been) beneficial to their learning and did (or would have) caused them to spend more time preparing by watching the videos or reading the text. However, the last question on the post-study survey asked the students for their general thoughts on the video quizzes, and a majority of the students did not like them. Some stated that they did not feel that the quizzes really prepared them for class, that the quizzes took too long to take, that the time could have been better spent than on watching the videos, and that the video quizzes did not actually help them remember any information. One of the post-survey questions asked the students whether they thought it would be more helpful to have a small coding assignment to practice the video content rather than the video quizzes: sixty percent of the VID students and sixty-seven percent of the NoVID students would prefer a small assignment. The implications and the discussion of these results are presented in Section V.

TABLE III.
VIDEO QUIZ MOTIVATION

The pre-class quizzes were beneficial to your learning in the course.
      | VID | NoVID
True  | 70% | NA
False | 30% | NA

I would have watched the videos if we were not (would not have been) required to take quizzes on the material.
True  | 55% | 58%
False | 45% | 42%

Having quizzes caused me (would cause me) to watch the videos or read the book more than I would have otherwise.
True  | 75% | 75%
False | 25% | 25%

V. SUMMARY AND CONCLUSIONS

In flipped classrooms, students learn course content from sources such as video lectures, which are intended to prepare them to work on assigned problems in the classroom. A major success factor when using the flipped approach hinges on how well prepared the students come to class. Our primary hypothesis was that pre-class video quizzes would help students earn better grades; we assumed that earning better grades would correlate with being better prepared. However, our experimental study does not support this hypothesis. We speculate that the quizzes failed to elevate grades because they did not facilitate deep learning: students focused on answering the multiple-choice-style questions and did not delve deeper into the material. While one of the instructors felt that the quizzes did improve students' preparedness for class, this was not seen in both sections that were required to take the quizzes and could be due to many other factors unrelated to the quizzes.

Our second hypothesis concerns assertions that having a mechanism to help ensure that students watch the videos would help students with lower computer programming aptitudes perform better in the course. Results indicate that, to the contrary, it was the higher-aptitude students who benefited the most from having the video quizzes. We speculate that higher-aptitude students are more prone to slacking off on tasks like watching the videos, so a simple tool that enforces spending the time helps them more. Unfortunately, raising the performance of the high-aptitude students, while commendable, is something that we view as less significant than bringing up the lower-end students. It is possible that the tendency for this style of quiz to lead to more surface learning was detrimental to the lower-aptitude students, and that another style of enforcing preparation, one that aims to facilitate deep learning, would be superior. We plan to test the use of small programming tasks, instead of multiple choice quizzes, in the future to see if that is a superior approach. We believe that style of preparation enforcement would lead to deeper learning and benefit the middle- and lower-aptitude students more.

ACKNOWLEDGMENT
We thank the students in the Principles of Computer Science I course at Trinity University for their participation in this experiment.

REFERENCES

[1] E.F. Gehringer and B.W. Peddycord III, "The inverted-lecture model: a case study in computer architecture," Proceedings of the 44th ACM Technical Symposium on Computer Science Education, ACM, 2013.
[2] A. Sureka, M. Gupta, D. Sarkar, and V. Chaudhary, "A Case-Study on Teaching Undergraduate-Level Software Engineering Course Using Inverted-Classroom, Large-Group, Real-Client and Studio-Based Instruction Model," CoRR abs/1309.0714, Sep 2013.
[3] G.C. Gannod, J.E. Burge, and M.T. Helmick, "Using the inverted classroom to teach software engineering," Proceedings of the 30th International Conference on Software Engineering, ACM, 2008.
[4] A. Amresh, A.R. Carberry, and J. Femiani, "Evaluating the Effectiveness of Flipped Classrooms for Teaching CS1," Proceedings of the Frontiers in Education Conference, IEEE, 2013.
[5] National Research Council, How People Learn: Brain, Mind, Experience, and School. Washington, D.C.: National Academy Press, 2000, p. 13.
[6] H. Coffey, "Bloom's Taxonomy," 2008. [Online]. Available: http://www.learnnc.org/lp/pages/4719
[7] Carnegie Mellon University, Just-In-Time Lecture project. [Online]. Available: http://www.jitl.cs.cmu.edu
[8] D. Zhang, L. Zhou, R.O. Briggs, and J.F. Nunamaker Jr., "Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness," Information & Management, 43 (2006), 15-27.
[9] S. Zappe, R. Leicht, J. Messner, T. Litzinger, and H.W. Lee, "'Flipping' the Classroom to Explore Active Learning in a Large Undergraduate Course," American Society for Engineering Education, 2009.
[10] R. Toto and H. Nguyen, "Flipping the Work Design in an Industrial Engineering Course," ASEE/IEEE Frontiers in Education Conference, Oct 18-21, 2009, San Antonio, TX.
[11] University of Kent, Computer Programming Aptitude Test (accessed 04/18/2014). [Online]. Available: https://www.kent.ac.uk/careers/tests/computer-test.htm
[12] D. Dolmans, I. Wolfhagen, and P. Ginns, "Measuring approaches to learning in a problem-based learning context," International Journal of Medical Education, 1 (2010), 55-60.