Int. J. Information and Operations Management Education, Vol. X, No. X, xxxx

Measuring student learning over time with clickers in an introductory operations management course

Harm-Jan Steenhuis* and Brian Grinder
College of Business and Public Administration, Eastern Washington University, 668 N. Riverpoint Blvd., Suite A, Spokane, WA 99202, USA
Fax: +1 509 358 2267
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Erik-Joost de Bruijn
School of Management and Governance, University of Twente, PO Box 715, Enschede, 7500 AE, The Netherlands
Fax: +31 53 489 2159
E-mail: [email protected]

Abstract: This article describes the use of classroom communication systems, otherwise known as clickers, for measuring student learning over time. In an introductory operations management course, they were used to track student learning and progress. Student learning was assessed based upon the three lower levels in Bloom's taxonomy: knowledge, understanding and application. This assessment occurred during lectures, small-stakes quizzes and high-stakes tests. This article shows that assessment opportunities are made possible through the use of clickers. This provides a mechanism to measure learning and allows an instructor to adjust instruction and to diagnose individual students in a timely manner so that additional help can be provided to students in need. Since many of the factors involved in the learning process lie outside the realm of clicker technology, readers are advised to be cautious about reaching conclusions too quickly with regard to student learning and clickers.

Keywords: clickers; education; learning; operations management.

Reference to this paper should be made as follows: Steenhuis, H-J., Grinder, B. and de Bruijn, E-J. (xxxx) 'Measuring student learning over time with clickers in an introductory operations management course', Int. J. Information and Operations Management Education, Vol. x, No. x, pp.xx–xx.

Biographical notes: Harm-Jan Steenhuis is an Associate Professor of Operations Management at Eastern Washington University and Chair of the Department of Management. He received his MSc in Industrial Engineering and Management and his PhD in International Technology Transfer from the University of Twente, the Netherlands. He is currently involved in research on international technology transfer and manufacturing, industry–university technology transfer and instructor–student knowledge transfer.

Copyright © 200x Inderscience Enterprises Ltd.


Brian Grinder is an Associate Professor of Finance at Eastern Washington University. He received his MBA from Fort Hays State University and his PhD from Washington State University. His current research interests include technology in the classroom, finance pedagogy and financial history.

Erik-Joost de Bruijn is a Professor of International Management. He received his MSc from the University of Massachusetts and a PhD from the University of Twente. Since 1971, he has worked as Project Coordinator and Consultant for the Netherlands Government in various industrialisation projects in developing countries. Currently, he teaches International Management at the School of Management and Governance, University of Twente.

1 Introduction

During the 2005–2006 academic year, a traditional introductory operations management class, which was designed to accommodate a maximum of 60 students, was redesigned so that it could handle more than 150 students. As part of the change, the number of lecture sessions was reduced from twice a week to once a week, and the second session was converted to a seminar format where students were split into small groups of 12–20 individuals and led by peer-mentors. This mix of large lectures and small seminars is similar to the hybrid approach discussed by Flamm et al. (2008). In anticipation of the changing lecture size, clickers were introduced in the lectures during the trial run of the new course in the spring quarter of 2006. Clickers will be discussed in Section 3. First, a short introduction will be provided about the academic environment in which these changes took place.

2 Academic environment

Entwistle and Tait (1990) demonstrated that the academic environment influences how students approach the classroom. Therefore, some general information about the academic environment at the university in question is provided. The introductory operations management course is a mandatory junior level class for business students. Roughly 300 students a year take this course during the regular academic year with an additional 60–80 students taking the course during the summer quarter. The course is taught at Eastern Washington University (EWU). EWU is a regional comprehensive public university and has almost 10,000 students. Around 1,200 students are enrolled in the business programme. During the spring quarter of 2007, which can be considered representative, roughly 30% of the students enrolled in operations management could be classified as non-traditional students, approximately 75% of the students had taken courses at other institutions (many were transfer students from community colleges) and the average grade point average at EWU upon entering the operations course was 3.1 (on a scale from 0.0 to 4.0). Instruction in the revised course follows a learning centred approach that is similar to what Samuelowicz and Bain (2001) describe as ‘preventing misunderstandings’ or ‘negotiating misunderstandings’. This approach requires two-way communication


between students and the instructor. In addition, the course is designed to achieve deep learning because this is more meaningful than surface learning (Biggs, 1999). Clickers were introduced to accomplish this by making it easier to create classroom discussion and to serve as a feedback mechanism. The student learning objectives in the course are based on Bloom's (1987) taxonomy, which classifies educational objectives in the cognitive domain into six levels with different orientations: knowledge, comprehension, application, analysis, synthesis and evaluation. The operations course was oriented toward the three lowest learning levels with the following student learning objectives.

1 Knowledge level
• Know the vocabulary of the operations management discipline.
• Describe the functional and supporting roles of operations management in a variety of production and service organisations.

2 Comprehension level
• Explain key operations management concepts.
• Interpret solutions to quantitative data problems relevant to the operations management discipline.

3 Application level
• Apply mathematical formulas to quantitative data problems relevant to the operations management discipline.

3 Clickers

Clickers are a fairly recent innovation for university classrooms. They are part of classroom communication systems that can be used to enhance the traditional lecture format by introducing multiple-choice questions, numeric problems or opinion surveys into the lecture. Students input their responses using a clicker device, the responses for the entire class are immediately recorded, and feedback in a wide variety of formats is instantly available to the instructor, who can share it with the class, record it as a grade or use it later for analysis of learning. There are several types of clicker systems available, including TurningPoint (www.turningtechnologies.com), eInstruction (www.einstruction.com), iclicker (www.iclicker.com) and Qwizdom (www.qwizdom.com).

All clicker systems are essentially composed of transmitters (clickers), a receiver, a projection system and software that must be installed on a computer that can be used in the classroom. Portable laptops are preferable to desktop systems that remain in the classroom since quizzing and testing information must be installed on the computer that is used with the system. Once a system is operational, questions can be posed in the classroom via PowerPoint or some other presentation mechanism, and student responses are transmitted from the clicker through the receiver to the computer where they are recorded. Most systems allow for immediate question response analysis. For example, once all answers have been received for a particular question, the instructor can pull up a distribution of responses that can quickly be used to


determine any number of things such as the percentage of correct responses or the percentage of incorrect responses that resulted from a common error or misconception.

The costs of a clicker system must be given careful consideration. Oftentimes, the entire cost of the system falls on students who, at the very least, must purchase a clicker. Some systems also require students to pay a registration fee each term. If a registration fee is required, a student enrolled in multiple clicker courses during the term effectively reduces the registration cost per course. During 2006–2007, students who were required to use the TurningPoint clickers paid roughly the same as students who were required to purchase eInstruction clickers. The TurningPoint clicker cost about $50 while the eInstruction clicker only cost about $35. However, students using eInstruction clickers were also required to pay a registration fee. Clicker systems may also impose costs on the educational institution. For instance, the TurningPoint system required a licensing fee of $200 for the classroom receiver and software. The eInstruction receiver and software, on the other hand, were free of charge. Of course, the prices quoted above are subject to change and may very well be reduced as more instructors adopt clicker systems and as advancements in clicker technology are made. There will also be changes in the pricing mechanisms used by the clicker producing companies as they continue to experiment with various pricing packages.

In addition to the financial cost, students experience a bit of a learning curve as they become familiar with the clickers. Instructors also need to set aside time to familiarise themselves with the various features of their particular system before using it in the classroom.

It has been argued that clicker technology promotes active learning (Martyn, 2007), increases student engagement in the classroom (Bruff, 2007) or, at the very least, helps keep students from nodding off in class (Beatty, 2004). Although Duncan (2005) and Caldwell (2007) mention several ways for faculty to use clickers, they were primarily introduced into the operations management course redesign in order to counter-balance the larger class size and to increase classroom discussion. Increased classroom discussion was expected to ultimately lead to increased student understanding of the material. However, it soon became apparent that the clickers did not lead to more classroom discussion. Even though students could now see how the class responded overall, it seemed that participation in discussion was no better than when clickers were not used. This was a bit disappointing because part of the rationale for using clickers was that they would make it easier for students to express their thoughts or engage in discussions about why a certain answer (even if it was an answer they did not choose) might be correct. This confirms observations by Carnaghan and Webb (2007), who found that students asked fewer questions with the use of clickers than when clickers were not being used in the classroom. The observations in the course also confirmed Ribbens (2007). His description of clicker introduction led him to conclude that "I realized that it was my bright students who answered my discussion questions in previous years" (Ribbens, 2007, p.62). Since the clickers did not lead to increased discussion, the focus shifted to using them as a means to track student learning and provide feedback to the instructor.
The clickers allowed early insight into what students did and did not understand and allowed the instructor to adjust the course accordingly. This is similar to what was experienced by Ribbens (2007), d’Inverno, Davis and White (2003) and Wood (2004). The latter commented, “I was … elated by the realization that for the first time in 20 years of lecturing I knew, right on the spot (rather than after the next mid-term examination) that over half the class didn’t ‘get it’”.
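As a concrete illustration of this kind of on-the-spot feedback, the following is a minimal sketch, not the vendors' software, of how a per-question response distribution and percentage correct can be tallied from raw clicker responses; the function and variable names are illustrative only.

from collections import Counter

def response_summary(responses, correct_option):
    """Summarise clicker responses for one question.

    responses      -- list of submitted options, e.g. ['A', 'C', 'A', 'B', ...]
    correct_option -- the keyed answer, e.g. 'A'
    """
    counts = Counter(responses)
    total = len(responses)
    # Share of the class choosing each option, plus the percentage correct.
    distribution = {option: counts[option] / total for option in sorted(counts)}
    percent_correct = 100.0 * counts[correct_option] / total
    return distribution, percent_correct

# Example: the instructor can inspect the distribution as soon as polling closes.
dist, pct = response_summary(['A', 'C', 'A', 'B', 'A', 'D', 'C', 'A'], 'A')
print(dist)
print(f"{pct:.1f}% correct")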


The ability to get better and earlier data on student understanding of the material in a quarter aligns with the increasing attention being given to assessment of learning. Universities are facing increasing scrutiny with regard to their educational processes. One obvious force behind this scrutiny is accrediting agencies. For example, AACSB-International, the accreditation agency for business schools, has a set of standards, several of which are oriented around the assurance of learning (see www.aacsb.edu). That is, business schools have to demonstrate the learning that is taking place.

There are also other forces. For example, state universities face questions about the effective use of funding from state or local governments. That is, there is an increasing emphasis on public accountability in higher education (Lucas and Associates, 2000, p.1). As Berberet explains, "For more than a decade, as part of this rethinking, the roles, workload, and productivity of faculty have come under intense public scrutiny" (Berberet, 2002, p.3). This includes greater educational accountability (Berberet, 2002). Major efforts are already under way throughout the country to make student learning the central focus in higher education. There is a monumental shift from an emphasis on teaching to a concentration on learning (Lucas and Associates, 2000, p.161).

This shift has also been recognised in works on pedagogy. For example, Biggs identifies three levels of teaching. Level one is where learning is considered a function of individual differences between students. That is, the teacher 'transmits' the information and individual student differences determine whether they 'learn' or not. The second level is where learning is considered to be a function of teaching. That is, teaching is still based on transmission, but of concepts and understanding, not just information. The responsibility for 'getting it across' now rests to a significant extent on what the teacher does. At the third level, learning is viewed as the "result of students' learning-focused activities which are engaged by students as a result both of their own perceptions and inputs, and of the total teaching context" (Biggs, 1999, p.61; Biggs, 2003). These levels represent a shift from teaching (transmission) to what is actually being learned.

There is also additional pressure from parents who are increasingly getting involved in tertiary education. These so-called 'helicopter parents' often question the quality of instruction, the grades received by their children, and the overall quality of educational service. Lastly, at least in the US, with the increasing enrolment of non-traditional students, there is now a force in the classroom that is concerned about the cost of their education and the perceived value of classes.

Whether students, their parents or (local) government agencies, who are typically non-experts in course design, can make these judgments on the quality of education is not at issue. The issue is that these types of influences are nowadays part of the educational process and cannot be ignored. The result is that educational institutions are forced to pay more attention. The key issue is: is learning taking place? A response is required from universities that will satisfy the entire range of stakeholders.
Overall, as summarised by Gardiner (2000, p.166), "Society is asking higher education to educate all of its students to a much higher level than ever before" and "Evidence of student learning produced by assessment research can be thought of as the department's academic bottom-line" (Gardiner, 2000, p.165). The evidence for clickers as an effective method of increasing student learning is mixed (see, for example, Beuckman, Rebello and Zollman, 2007; Caldwell, 2007; Carnaghan and Webb, 2007; Stowell and Nelson, 2007; Yourstone, Kraye and Albaum, 2008). However, Duncan (2005) has identified a potentially important use of clickers in tracking student 'performance'. The goal of this article is to describe how


clickers were used in an introductory level operations management course to track student learning. This type of tracking can serve the public need for more accountability with regard to achieving learning as well as provide the reflective teacher with insights into course design modification requirements.

4 Phase 1: group tracking

During the first phase of student tracking (April 2006–March 2007), the TurningPoint Technologies clicker system was used. When this system was introduced, it was new to the students as well as the instructor. The TurningPoint clicker system was a PowerPoint-oriented system in which classroom PowerPoint presentations could be enhanced with clicker questions.

The introduction of the clicker system led to an important change in instructional approach. One of the first questions asked with the TurningPoint system was a seemingly simple math-oriented question, and it quickly became clear that most students were unable to correctly answer the question. This was quite unexpected because in previous quarters when the same question was posed, one student in the class would typically provide the correct answer, and it was then assumed that, due to the simple nature of the problem, all students were able to understand it.

Instantaneous feedback from the clicker system allowed the instructor to become more in tune with student understanding or lack thereof. This was accomplished by designing non-graded clicker quizzes that were conducted at the beginning of each class period. These quizzes dealt mostly with assessing student learning based on the students' ability to correctly answer questions after reading the text on their own and doing the related homework. The feedback from these quizzes allowed the instructor to assess the overall performance of the class in real time and to determine how instruction time was going to be divided amongst the various topics of discussion.

Over time, a system was developed in the operations management course to use this initial assessment and to track student learning over time by looking at the overall performance of the class on three successive assessments. First, the in-class assessment of student learning was made as just discussed. Second, an assessment was made of student learning through small-stakes quizzes after the material was explained in class and students had the opportunity to both discuss theory with an instructor and/or mentor and practice mathematical problems. Finally, an assessment was made of student learning through a high-stakes test.

To provide a context for the assessment that took place, it is necessary to discuss the type of assessment used in the course. Quizzes and tests were geared towards the lowest three levels of Bloom's (1987) taxonomy, which aligns with the student learning objectives. Each test counted four to five times as much as a quiz, e.g. 200 points vs. 50 points. Questions were designed to correspond to the three Bloom's levels as follows: 50% of the questions were knowledge-level questions, 25% were understanding questions and 25% were application questions. The quizzes generally consisted of 20 multiple-choice questions. Although this format is not the best for assessing higher levels of learning (Biggs, 2003, p.180), it is possible to assess higher levels of learning through multiple-choice formats by carefully wording the questions (Carneson, Delpierre and Masters, 1996).
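The three-step tracking just described amounts to comparing, for each tracked question, the percentage of students answering correctly at the lecture, quiz and test stages. A minimal sketch of that comparison is given below; the data structure and field names are illustrative rather than the authors' actual spreadsheet, and the two sample rows use values that appear later in Table 1.

# Each record links one test question to the matching lecture clicker question
# and quiz question, with the percentage of students answering correctly at
# each stage (sample values taken from Table 1).
records = [
    {"question": 1, "lecture": 38.5, "quiz": 53.9, "test": 66.4},
    {"question": 27, "lecture": 51.8, "quiz": 68.1, "test": 78.1},
]

for r in records:
    lecture_to_test = r["test"] - r["lecture"]   # gain since first exposure
    quiz_to_test = r["test"] - r["quiz"]         # gain after quiz feedback
    print(f'Question {r["question"]}: lecture->test {lecture_to_test:+.1f} '
          f'points, quiz->test {quiz_to_test:+.1f} points')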


For the development of the high-stakes test (40 multiple-choice questions representing 85% of the test), a spreadsheet was developed which identified each question on the test as a topic
1 that was covered on a clicker question slide in any of the previous lectures
2 that was covered on any of the previous small-stakes quizzes.

The spreadsheet, therefore, connected the classroom clicker questions with the appropriate quizzes and with a test. The results were analysed by examining the percentage of students who answered a particular question correctly in each of the three assessments and determining whether progress had been made over the time from when the concept was first introduced in the classroom to when it was assessed on the test. An example of this sequence is the following set of three questions on cost–volume analysis.

Lecture clicker question: There are two alternatives for producing product X. Machine A has lower fixed costs than machine B. Machine A has higher variable costs than machine B. At a volume of 100,000 units, machines A and B have the same total cost and are profitable. If the required volume is 200,000 units, which machine is the better choice? (Several multiple-choice options were provided.) 51.8% answered correctly.

Quiz question: A company has to make a decision to purchase a machine to produce product X. It has two options: machine A and machine B. Machine A has higher fixed costs than machine B. Machine A has lower variable costs than machine B. The higher the quantity of product X that the company has to produce, the more attractive it is to use… 68.1% answered correctly.

Test question: A company has to make a decision to purchase a machine to produce product X. It has two options: machine A and machine B. Machine A has lower fixed costs than machine B. Machine A has higher variable costs than machine B. The higher the quantity of product X that the company has to produce, the more attractive it is to use… 78.1% answered correctly.

This example shows a similar concept that was assessed three times and how, over time, the performance of the classroom on this particular concept improved. This can be viewed as confirming the 'rolling reinforcement strategy' findings that were discussed by Mukherjee (2002). Examples of these types of assessment comparisons during a quarter are provided in Table 1. This table is based upon the spring 2007 quarter, but similar approaches were also used in other quarters.
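For reference, the reasoning this question sequence assesses can be written out as the standard cost–volume (break-even) comparison; the notation is ours, not the article's. With fixed cost F and variable cost v per unit, the total cost of producing Q units on each machine is

\[ TC_A = F_A + v_A Q, \qquad TC_B = F_B + v_B Q, \qquad Q^{*} = \frac{F_B - F_A}{v_A - v_B}, \]

where Q* is the crossover volume at which the two machines cost the same, placed at 100,000 units in these questions. For any required volume above Q* (such as the 200,000 units in the lecture question), the machine with the lower variable cost is the cheaper choice; below Q*, the machine with the lower fixed cost is.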


Table 1  Tracking of classroom learning, spring 2007

Test question   Lecture result (in %)   Quiz result (in %)   Test result (in %)   Lecture–test improvement (%)   Quiz–test improvement (%)
1               38.5                    53.9                 66.4                 27.9                           12.5
2               13.8                    52.5                 73.0                 59.2                           20.5
4               58.2                    92.2                 85.4                 27.2                           –6.8
9               30.3                    89.4                 67.2                 36.9                           –22.2
13              86.4                    43.6                 76.6                 –9.7                           33.1
15              25.8                    43.1                 55.5                 29.7                           12.4
18              25.8                    75.9                 82.5                 56.7                           6.6
21              45.5                    36.9                 69.3                 23.9                           32.5
22              91.4                    71.5                 85.4                 –6.0                           13.9
23              91.4                    65.7                 87.6                 –3.8                           21.9
27              51.8                    68.1                 78.1                 26.3                           10.0
28              25.8                    54.7                 67.2                 41.3                           12.4
29              36.6                    52.6                 59.9                 23.3                           7.3
31              24.8                    63.8                 78.1                 53.3                           14.3
32              10.1                    45.4                 65.7                 55.6                           20.3
34              75.5                    79.4                 90.5                 15.1                           11.1
35              78.2                    34.8                 47.4                 –30.7                          12.7
39              23.7                    67.2                 83.9                 60.3                           16.8
Average         46.3                    60.6                 73.3                 27.0                           25.5

Table 1 displays results from a test containing 40 multiple-choice questions in the spring 2007 quarter. The table does not include all questions from the test because not all test questions could be tracked through the sequence (lecture question → quiz → test). The percentages in italics represent questions that were identical or nearly identical on a quiz and the test, for example low-level knowledge questions, i.e. definitions. Table 1 illustrates that on average 46.3% of the students were able to correctly answer questions in the lecture; this improved to 60.6% on the quizzes and further to 73.3% of the students able to answer questions correctly on the test. This demonstrates that learning was taking place.

As can be seen from Table 1, progress in learning was different for different questions and in some instances it was negative, i.e. it seems that students originally did better on the lecture question but were unable to achieve similar results on the test. This can be explained by looking more carefully at the questions. For example, the following set of questions was used for decision-making theory, i.e. decision-making under uncertainty using the approaches of minimax, maximax, minimax regret and Laplace.


Lecture clicker question: For problem 2 in the book (this problem was similar to the problems that are presented below, meaning that it provided a pay-off table), what alternative should be chosen under minimax regret? 78.2% answered correctly.

Quiz question: The operations manager for a well-drilling company must recommend whether to build a new facility, expand one or do nothing. He estimates that long-run profits (in $1,000) will vary with the amount of precipitation (rainfall) as follows:

                        Precipitation
Alternative     Low     Normal     High
Do nothing      –100    100        300
Expand          350     500        200
Build new       750     300        0

If he uses the minimax regret criterion, which alternative will he decide to select? 34.8% answered correctly.

Test question: The operations manager for a well-drilling company must recommend whether to build a new facility, expand one or do nothing. He estimates that long-run profits (in $1,000) will vary with the amount of precipitation (rainfall) as follows:

                        Precipitation
Alternative     Low     High
Do nothing      10      3
Expand          –2      8
Build new       4       5

If he uses the minimax regret criterion, which alternative will he decide to select? 47.4% answered correctly.

There are several possible explanations for why more people were able to answer the initial lecture clicker question correctly. First, the students had to solve this problem outside of class and could have consulted with others. Second, the answers were provided in the back of the book, so even if students did not understand the problem, they might have answered the clicker question correctly by knowing the answer. Based upon a comparison of only the quiz and the test, it can be concluded that learning improved.
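To make the minimax regret computation behind these questions concrete, the following minimal sketch applies the standard procedure to the quiz payoff table above; the payoffs come from the question itself, while the code is only illustrative.

# Payoffs (in $1,000) from the quiz question: rows are alternatives,
# columns are precipitation states (low, normal, high).
payoffs = {
    "do nothing": [-100, 100, 300],
    "expand":     [350, 500, 200],
    "build new":  [750, 300, 0],
}

# Regret = best payoff attainable in that state minus the payoff obtained.
best_per_state = [max(col) for col in zip(*payoffs.values())]
max_regret = {
    alt: max(best - pay for best, pay in zip(best_per_state, row))
    for alt, row in payoffs.items()
}

# Minimax regret: choose the alternative with the smallest maximum regret.
choice = min(max_regret, key=max_regret.get)
print(max_regret)   # {'do nothing': 850, 'expand': 400, 'build new': 300}
print(choice)       # 'build new'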

5 Phase 2: the move to diagnosing individuals

In the spring quarter of 2007, the TurningPoint clickers were replaced by eInstruction clickers. The eInstruction clickers had an advantage over the TurningPoint clickers


because they could be used by students to take self-paced, non-sequential multiple-choice tests. This offered a major improvement for student tracking and improved the ability to analyse results. For example, the answer data from all students could be saved in an Excel spreadsheet. This made it easy to analyse and to identify the questions that posed the most problems for students. Furthermore, the system made it easy to identify the incorrect answers that students tended to choose. This feature was implemented for the first time in the summer quarter of 2007.

As with the TurningPoint clickers, the introduction of the eInstruction clicker system changed how the course was taught. Due to the ease of tracking students and the ability to analyse results, the focus became even more oriented toward tracking actual student learning. For example, by tracking how students answered incorrectly on low-stakes multiple-choice quizzes it was fairly easy (but time-consuming) to determine the types of misconceptions that had occurred (a minimal sketch of this kind of analysis is given after the list below). These misconceptions were then discussed in the classroom before major high-stakes tests. Course design also included the explicit repetition of quiz questions on exams in order to measure improvement and to determine the effectiveness of the quiz feedback loop.

Because of the previous experiences with assessment, it was clear that what and how things were being asked played an important role in the overall class results. Consequently, one of the changes made was that the knowledge category was split into two separate categories. Knowledge/theory that was explicitly treated in the classroom was classified as knowledge I, whereas knowledge/theory that was not explicitly treated in the classroom was classified as knowledge II. Knowledge II questions were typically important concepts which students should easily pick up from the book. On quizzes and tests, each category (knowledge I, knowledge II, understanding and application) represented 25% of the questions.

The same results that were shown in Table 1 still held true under the expanded learning classifications, although comparable clicker questions were used less frequently in the classroom. Improvement in learning was still tracked and included more explicit tracking by learning level. Figure 1 provides an overview of fall quarter 2007 and how the class did overall for each learning level during the second half of the quarter, i.e. quiz 3, quiz 4 and the final test. Figure 1 illustrates that
1 as in Table 1, improvements in performance occurred from the quizzes to the test (with one exception, i.e. knowledge II for quiz 3 was higher than for quiz 4 and the final)
2 students performed better at lower levels of learning than at higher levels of learning.
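The wrong-answer tally mentioned above can be sketched as follows; the data layout is illustrative only and does not reproduce the eInstruction export format.

from collections import Counter

# answers[question_id] -> list of options submitted by all students
answers = {
    "Q7": ["B", "B", "A", "B", "D", "B", "A", "C"],
}
key = {"Q7": "A"}

for qid, submitted in answers.items():
    wrong = Counter(a for a in submitted if a != key[qid])
    pct_correct = 100.0 * submitted.count(key[qid]) / len(submitted)
    # The most common wrong answer points at the likely misconception
    # to discuss in class before the high-stakes test.
    print(qid, f"{pct_correct:.0f}% correct; most common wrong answer:",
          wrong.most_common(1))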

The new clickers also allowed for better tracking of students. Student tracking can be used to diagnose common problems and help an instructor identify ways for students to improve their study habits. Figures 2 and 3 provide an overview of nine students from one of the seminar sections and how the students in that seminar performed, respectively, on a quiz (20 questions, 4 per category) and on the mid-term test (60 questions, 15 per category). This type of analysis shows that overall these students struggle with application (math) and that some students seemed to struggle with particular learning levels.

Figure 1  Tracking by Bloom's learning level (see online version for colours). [Chart: OPSM 330, fall 2007, percentage of the class with the correct answer by category (knowledge-explicit, knowledge-not explicit, understanding, application) for quiz 3, quiz 4 and the final.]

Figure 2  Tracking by Bloom's learning level by student, quiz 1 (20 questions) (see online version for colours). [Chart: Section X, quiz 1 scores; number of questions correct for students 1–9, broken down by knowledge I, knowledge II, understanding and application.]

Figure 3  Tracking by Bloom's learning level by student, mid-term test (60 questions) (see online version for colours). [Chart: Section X, mid-term scores; number of questions correct for students 1–9, broken down by knowledge I, knowledge II, understanding and application.]

Figure 4  Tracking an individual student (see online version for colours). [Chart: tracking student X, fall 2007, by learning level; per cent correct of total for knowledge I, knowledge II, understanding and application across quiz 1, quiz 2, mid-term, quiz 3, quiz 4 and the final.]

The same information can also be used to diagnose individual students. Figure 4 provides an overview for one student’s performance in the course. The actual scores in Figure 4 are expressed as percentages of the total possible score to allow for an easy comparison. This provides insight with regard to where a particular student may be struggling. The analysis from Figure 4 can be helpful in discussions with individual students and provide insight into why a student is struggling given that student’s approach to studying


for the course. For example, the student from Figure 4 had a particularly low score on the knowledge II category on quiz 3. This indicates that the student may not have spent sufficient time studying the theory on his/her own. As a side note, the analyses show that for the course overall students typically perform the worst on quiz 3 (as the student in Figure 4 illustrates). This may be due to a psychological effect after the ‘big’ mid-term test when students drop their level of effort. In looking at quiz 4 and the final test, the student from Figure 4 seemed to perform reasonably well on the knowledge categories (scoring roughly 70% of the available points in those categories), but struggled with the higher levels of understanding and application (scoring roughly 50% of the available points in those categories).
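The per-student, per-category profiles shown in Figures 2–4 can be produced with a few lines of code; the following minimal sketch uses illustrative numbers (not student X's actual scores) to show how the percentage correct per Bloom category on each assessment is calculated.

# Illustrative scores for one student: questions answered correctly per Bloom
# category, out of the number of questions available in that category.
assessments = {
    "quiz 4": {"knowledge I": (4, 5), "knowledge II": (3, 5),
               "understanding": (2, 5), "application": (3, 5)},
    "final":  {"knowledge I": (11, 15), "knowledge II": (10, 15),
               "understanding": (8, 15), "application": (7, 15)},
}

for name, categories in assessments.items():
    profile = {cat: 100.0 * correct / total
               for cat, (correct, total) in categories.items()}
    print(name, {cat: f"{p:.0f}%" for cat, p in profile.items()})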

6 Assessment

The discussion above shows how data was gathered and analysed in an introductory operations management course. This data illustrates how learning occurs in the classroom and might help address the need for public accountability. The data also allows the instructor to diagnose the class overall, i.e. where more time needs to be spent, and which students need individual attention. This approach assumes that the assessment of quizzes and tests is an accurate reflection of student learning. The use of Bloom's taxonomy meant that it was possible to gain more insight into the reliability of assessment and student learning. Two examples will be used to illustrate this point and its impact on teaching. Note that for each of these examples, similar assessments with similar results also occurred for other topics and in other quarters, i.e. these are not unique experiences.

Example 1

On a quiz (fall 2007) the following question was asked: ABC analysis generally classifies inventory in order of:
A) activity-based costing categories
B) price of the items
C) number of products sold
D) none of the above.
48.4% of the students answered correctly (D; the classification is based on a combination of B multiplied by C). This question was originally identified as knowledge I because the theory was treated before the quiz in which it was assessed. Following this, the exact same question was asked again on the test, and it was again classified as a knowledge I question. 54.2% of the students answered correctly. This example indicates learning improvement, i.e. an additional 5.8% of the students now correctly answered this question. Nevertheless, the overall performance is quite disappointing, in particular because


1 it is a matter of knowing the definition or knowing the formula that needs to be used for ABC-inventory analysis
2 it was treated in the class originally, and it was treated in the class again when the quiz was discussed
3 it is an important concept in the chapter on inventory management.
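For reference, the intended answer rests on the standard ABC classification criterion (a textbook definition; the notation is ours): items are ranked by annual dollar volume,

\[ \text{annual dollar volume}_i = p_i \times d_i, \]

where \(p_i\) is the unit price of item i and \(d_i\) its annual quantity sold or used, which is why neither price alone (B) nor quantity alone (C) is the intended basis.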

Example 2

On a quiz (fall 2007), the following question was asked: If the cost per order doubles, all else remaining the same, what happens to the economic order quantity (EOQ)? (The formula was also provided on the test.)
A) it goes up by 41%
B) it goes up by 100%
C) it goes down by 50%
D) it goes down
E) nothing.
45.7% of the students answered correctly. This question was identified as an understanding question because it relates to whether students understood the EOQ model. Following this, the exact same question was asked again on the final test. 72.9% of the students answered correctly. Based upon this information, it would seem that there is an improvement in learning; that is, an additional 27.2% of the students now answered correctly. However, this conclusion is misleading because when the exact same question was repeated on the test, it was no longer assessing understanding. Instead, because the quiz was discussed in the classroom, the question on the test was now a knowledge (memorisation) oriented question. To test for understanding, the following question also appeared on the final test: If the holding cost per item triples, all else remaining the same, what happens to the economic order quantity?
A) it goes down
B) it goes down by 42%
C) it goes up
D) it goes up by 300%
E) nothing.
31.3% of the students answered correctly. This question on manipulating the holding cost is in essence the same as the earlier quiz question on manipulating the ordering cost. The results illustrate that despite the 72.9% score on the other question on the final, the students overall did not understand the EOQ formula. Therefore, conclusions from this assessment on how these figures can be used to


demonstrate learning must be treated with caution. From an outsider perspective, i.e. someone who does not know what goes on in the classroom, the two questions from the test may appear similar. In fact, they are really assessing different types of student learning. This then also means that public accountability is faced with challenges and that demonstration of learning (as expressed by what students can answer correctly) is not straightforward.
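For reference, both EOQ questions can be checked against the standard economic order quantity formula (a textbook result; the arithmetic below is ours):

\[ EOQ = \sqrt{\frac{2DS}{H}}, \]

where D is annual demand, S the cost per order and H the holding cost per item. Doubling S multiplies the EOQ by \(\sqrt{2} \approx 1.41\), an increase of about 41%, while tripling H multiplies it by \(1/\sqrt{3} \approx 0.58\), a decrease of about 42%, which is the reasoning the two understanding questions were designed to assess.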

7 Conclusions

Clickers were introduced in an introductory operations management course. The initial intent was to increase classroom engagement through increased discussion, with the expectation that this would lead to higher levels of understanding. It was found that increased discussion did not occur. This confirms observations that have been made elsewhere (Carnaghan and Webb, 2007).

The purpose for using clickers was subsequently changed to tracking student learning. This type of tracking might be useful in responding to the increasing pressure for public accountability with regard to learning in the classroom. The eInstruction clicker system in particular allows an instructor a fairly easy way to analyse (multiple-choice) test results. In the course, questions were organised around the lowest three levels of Bloom's taxonomy: knowledge, understanding and application. In the first phase, tracking occurred for the class overall. In the second phase, tracking also occurred for individual students. Overall, the analysis indicates that students were fairly competent in memorising theory, especially if it was explicitly treated in the classroom, but that they had more difficulty understanding the theories and applying them (especially in mathematical situations). Tracking classes in this manner allows an instructor to demonstrate that student learning is taking place, i.e. students improve during the quarter. Tracking at the overall class level can also provide an instructor with valuable insight with regard to the needs of a particular class, e.g. where students struggle and more time needs to be spent on a topic. Lastly, tracking at the individual student level might be used to help individual students by identifying the type of learning level they are struggling with and comparing this with their studying approaches.

The introduction of the clicker system has changed the way this course is taught. It has become increasingly focused on determining what students did or did not (yet) learn. This is done by quizzing more often, providing feedback on the quizzes and bringing in an element of repetition. Also, the focus of instructor time has changed. That is, more time is now spent on analysing assessment results and using them to make weekly adjustments in the content of the course. In a way, this could also be considered a disadvantage of introducing clicker systems because of the time-consuming nature of the analysis.

With regard to assessment, and in particular the demonstration of learning, one should be cautious about limiting conclusions to quantitative data per question. The formulation of a question does not necessarily reveal the level of learning being assessed, since this depends on what was treated in the classroom. Therefore, questions that appear similar to an outside observer may in fact be assessing different levels of learning.


Acknowledgement

The authors would like to thank Ms Carlee Marshall for her continued help with the course data analysis.

References

Beatty, I. (2004) 'Transforming student learning with classroom communication systems', Educause Research Bulletin, Vol. 3, No. 3, pp.2–13. Available at: http://www.educause.edu/ir/library/pdf/ERB0403.pdf.
Berberet, J. (2002) 'A new academic compact', in L.A. McMillan and W.G. Berberet (Eds), New Academic Compact, Revisioning the Relationship Between Faculty and their Institutions. Bolton, UK: Anker Publishing Company, pp.3–27.
Beuckman, J., Rebello, N.S. and Zollman, D. (2007) 'Impact of classroom interaction system on student learning', in L. McCollough, L. Hsu and P. Heron (Eds), 2006 Physics Education Research Conference. American Institute of Physics, pp.129–132.
Biggs, J. (1999) 'What the student does: teaching for enhanced learning', Higher Education Research and Development, Vol. 18, No. 1, pp.57–75.
Biggs, J. (2003) Teaching for Quality Learning at University (2nd ed.). Berkshire, UK: The Society for Research into Higher Education and Open University Press.
Bloom, B.S. (Ed.) (1987) Taxonomy of Educational Objectives, the Classification of Educational Goals (thirtieth printing). New York, NY: Longman.
Bruff, D. (2007) 'Clickers: a classroom innovation', NEA Higher Education Advocate, Vol. 25, No. 1, pp.5–8.
Caldwell, J.E. (2007) 'Clickers in the large classroom: current research and best-practice tips', CBE – Life Sciences Education, Vol. 6, pp.9–20.
Carnaghan, C. and Webb, A. (2007) 'Investigating the effects of group response systems on student satisfaction, learning, and engagement in accounting education', Issues in Accounting Education, Vol. 22, No. 3, pp.391–409.
Carneson, J., Delpierre, G. and Masters, K. (1996) Designing and Managing Multiple Choice Questions. Available at: http://web.uct.ac.za/projects/cbe/mcqman/mcqman01.html (accessed 28 March 2008).
d'Inverno, R., Davis, H. and White, S. (2003) 'Using a personal response system for promoting student interaction', Teaching Mathematics and its Applications, Vol. 22, No. 4, pp.163–169.
Duncan, D. (2005) Clickers in the Classroom: How to Enhance Science Teaching Using Classroom Response Systems. San Francisco, CA: Addison Wesley and Benjamin Cummings.
Entwistle, N. and Tait, H. (1990) 'Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments', Higher Education, Vol. 19, pp.169–194.
Flamm, P., Hoffman, J.J., Delgadillo, F. and Ewing, B.T. (2008) 'A hybrid approach for teaching introduction to operations management', Int. J. Information and Operations Management Education, Vol. 2, No. 3, pp.255–274.
Gardiner, L.F. (2000) 'Monitoring and improving educational quality in the academic department', in Lucas and Associates (Eds), Leading Academic Change: Essential Roles for Department Chairs. San Francisco, CA: Jossey-Bass Publishers, pp.165–194.
Lucas, A.F. and Associates (2000) Leading Academic Change: Essential Roles for Department Chairs. San Francisco, CA: Jossey-Bass Publishers.
Martyn, M. (2007) 'Clickers in the classroom: an active learning approach', Educause Quarterly, Vol. 30, No. 1, pp.71–74.
Mukherjee, A. (2002) 'Improving student understanding of operations management techniques through a rolling reinforcement strategy', Journal of Education for Business, July/August, pp.308–312.
Ribbens, E. (2007) 'Why I like clicker personal response systems', Journal of College Science Teaching, Vol. 37, No. 2, pp.60–62.
Samuelowicz, K. and Bain, J.D. (2001) 'Revisiting academics' beliefs about teaching and learning', Higher Education, Vol. 41, No. 3, pp.299–325.
Stowell, J.R. and Nelson, J.M. (2007) 'Benefits of electronic audience response systems on student participation, learning and emotion', Teaching of Psychology, Vol. 34, No. 4, pp.253–258.
Wood, W.B. (2004) 'Clickers: a teaching gimmick that works', Developmental Cell, Vol. 7, pp.769–798.
Yourstone, S.A., Kraye, H.S. and Albaum, G. (2008) 'Classroom questioning with immediate electronic response: do clickers improve learning?', Decision Sciences Journal of Innovative Education, Vol. 6, No. 1, pp.75–88.
