Session W2C
Work in Progress - Game Mechanics and Social Networking for Co-Production of Course Materials

Edward F. Gehringer, Maria Droujkova, Abhishek Gummadi, and Maveeth Nallapeta
[email protected], [email protected], [email protected], [email protected]

Abstract — Peer review has long been used in classroom settings, especially for teaching writing skills. In the last fifteen years, peer review has increasingly migrated online. Our Expertiza system has used online peer review of team projects to get students involved in the creation of reusable learning objects, such as homework problems, exam questions, and active-learning exercises. These activities support higher-order thinking skills and mimic workplace tasks. This paper reports on our efforts to integrate game mechanics into Expertiza. We hypothesize that by being able to share their achievements with the class, as well as attempting useful "microtasks" for extra credit, students will become more engaged with the class material and their fellow students. In addition, the course will be improved by the contributions that students will be motivated to create.

Index Terms – Active and cooperative learning, peer review, game mechanics, social networking, achievements system, reputation system.

INTRODUCTION
Online gaming plays an important role in the lives of many of our students. Since gaming can promote engagement with course material, games are now being developed for all areas of the curriculum [1, 2]. The advantages of games for education are many [3]: they constitute active learning, they reward achievement, and they promote metacognition (students learn not only about the domain, but also about their own competencies). The benefits are great, but so are the costs: each game must be developed independently. This is labor intensive, and reaches only a limited market—a particular course, for example. Some games, such as Jeopardy, are easily adaptable to multiple contexts, but these are rightly perceived as artificial beyond a limited range of rote learning tasks.

Suppose we could develop a game framework that could be intrinsically applied to any course. Students would engage in activities to earn rewards, but these activities would be related to the course content. Only the framework would remain the same.

This paper reports on designing such a framework for the Expertiza online peer-review system. As in other online peer-review systems, students review each other's submissions. Expertiza differs from other systems because (i) it allows the reviewed documents to be files, wiki pages, or Web pages (URLs), (ii) it accepts team submissions, where every member of the team has the ability to create or update any part of the submission, and (iii) it supports several kinds of interactions beyond simple reviews of submitted work. Interactions include authors' double-blind feedback to reviewers; teammates evaluating each other's contributions to the team; and third parties rating the quality of reviews (metareviewing). Any of these interactions can be the basis for recognizing achievement. For example, the system might recognize the top students, as measured by reviews of their work; or the top students as measured by feedback from the authors they were reviewing; or the students whose contributions are rated most highly by their teammates.

AN ACHIEVEMENT SYSTEM

Our design is very similar to online games, which often feature an achievement system to recognize players for accomplishments. The World of Warcraft achievement system, for example, recognizes about 750 individual accomplishments, "covering every aspect of gameplay." Examples include defeating a member of each race in your opposing faction, or winning ten consecutive ranked arena matches. Players are motivated to work harder to earn recognition, because achievements viewable in the WoW Armory can be shown off over the Web. Similarly, Expertiza will allow students to publicize their accomplishments to the rest of the class if they wish.

Just as game players can be recognized for simply performing an action a certain number of times, students might like to be recognized for, say, completing a certain number of reviews. But this is tricky, because a student could submit vacuous or cursory reviews just to get credit for the raw number. Hence, we will measure the number of contributions by either of two metrics: number of vetted contributions (quantity), or weighted contributions (quality).

• A vetted contribution is a contribution that has been evaluated and ranked higher than a certain threshold. For example, a review of submitted work is evaluated by feedback from an author. A student might be awarded credit for a review that the author rated at least 4 on a scale of 5 for helpfulness.

• A weighted contribution score is the number of contributions × the average quality of the contributions. For example, someone who submitted 10 reviews with an average rating of 2.5 will have a lower weighted score than someone who submitted 6 reviews rated 5.

We envision a leaderboard, where students can at any time view the top performers in each category in their class, or in the system as a whole. To respect privacy, students will have the right to authorize or not authorize the system to display their name if they are among the leaders in the class.

978-1-4244-4714-5/09/$25.00 ©2009 IEEE October 18 - 21, 2009, San Antonio, TX 39th ASEE/IEEE Frontiers in Education Conference W2C-1
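As an illustrative sketch (not the deployed Expertiza implementation), the two contribution metrics described above might be computed as follows; the function names and the 4-out-of-5 vetting threshold are assumptions drawn from the helpfulness example in the text:

```python
# Hypothetical sketch of the two contribution metrics; Expertiza's
# actual code may differ. Each contribution is represented by its
# average peer/author rating on a 5-point scale.

def vetted_contributions(ratings, threshold=4.0):
    """Count contributions rated at or above the vetting threshold
    (e.g., the author rated the review at least 4 out of 5)."""
    return sum(1 for r in ratings if r >= threshold)

def weighted_contributions(ratings):
    """Number of contributions x average quality of contribution."""
    if not ratings:
        return 0.0
    return len(ratings) * (sum(ratings) / len(ratings))

# The example from the text: 10 reviews averaging 2.5 score lower
# than 6 reviews each rated 5.
student_a = [2.5] * 10   # weighted score: 10 * 2.5 = 25.0
student_b = [5.0] * 6    # weighted score:  6 * 5.0 = 30.0
assert weighted_contributions(student_a) < weighted_contributions(student_b)
```

Note that the weighted metric simplifies to the sum of ratings, which is why a few high-quality reviews can outscore many cursory ones.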
CO-PRODUCTION, MICROTASKS, AND MICROPAYMENTS

In most classes, the instructor produces the course syllabus and lecture material, and the students in effect consume it (the producer/consumer model). Expertiza promotes an alternate view, the co-production model, in which the students cooperate to help produce the course. Students or teams sign up for various tasks, such as writing homework or test questions on a specific topic [4] or producing active-learning exercises illustrating a course concept [5].

Much of the work of co-production can be done in regular homework assignments, where each student/team does analogous (though not identical) work, and receives the early peer feedback typical of any collaborative assignment. Other tasks, however, are specific to the co-production model. For example, the instructor might want one or more students to categorize the active-learning exercises according to which lecture they relate to, or classify the homework/test questions according to which section of the textbook they cover. Such tasks promote higher-order learning, because they require applying, systematizing, and evaluating course knowledge.

But how would students be induced to do this extra work? Research in motivating online community participation [6] shows that users respond most positively when they can contribute in unique ways, can track their contributions, and can be recognized for them. One such contribution-tracking system is Amazon's Mechanical Turk, launched in 2005. It is a site where people can post jobs online and pay other people to perform them. The jobs are usually simple "human intelligence tasks," such as drawing a sketch, or determining which two of a set of photographs are most similar. The jobs are small (microtasks) and so are the payments (micropayments).

We base the Expertiza mechanism on best practices of existing systems. Here is an overview of its general design:

• The instructor posts a task, and states how much credit it is worth.

• A student (or team) voluntarily performs the task. Their contribution can be evaluated by other students.

• The students performing the task receive the stated credit for the task, multiplied by their peer-reviewed score for the task. Students who do the reviewing also share in the credit, according to a formula.

• Students may also have limited rights to post tasks for others to do.

A REPUTATION SYSTEM

Any classroom peer-review system needs some mechanism for ensuring review quality. Our system uses reviewer feedback to authors, and metareviews. But neither of these suffices when credit is granted for student answers to student questions: a questioner and an answerer could conspire to rate each other's contributions highly. Our strategy is to use reputation to determine how heavily a particular reviewer's evaluation counts in evaluating a piece of work. Several algorithms have been developed for automatically calculating reputation in a peer-review system; here we outline the strategy of Cho and Schunn [7]. The reputation of a reviewer is determined by three factors: systematic difference, the degree to which a reviewer tends to
assign scores that are above or below the mean scores assigned by other reviewers; consistency, the extent to which a reviewer's scores agree with scores assigned by other reviewers to the same work; and spread, the degree to which a reviewer assigns different scores to different work. We intend to extend this strategy to weight reputation for assigned reviews more heavily than reputation for voluntary answers to student questions. We plan to research this strategy further.

RELATED WORK

Facebook has a mechanism for ranking the "sweetest" or most-liked person. Rankings are based on the number of other Facebook users who vote for that person, and a user's ranking may be revealed or concealed at the user's option.

Extramarks.com is an educational website that has been developed for Indian students in grades 6 to 12. A student can post questions, and receives points for answering questions posted by others [8]. The quality of content is monitored by qualified teachers hired by Extramarks.com. Answers are rated by peers and by teachers. If the answer posted by a student is selected as the best answer by the post initiator, then that student receives extra points. If the question posted by the initiator is flagged as inappropriate, then the initiator of the question loses points. The Web site helps students, teachers, and parents to improve and contribute to the educational system. Students may redeem points for discounts on purchases of premium content, such as access to previous question papers and online classes.

ACKNOWLEDGMENT

This work has been funded by NSF under a Course, Curriculum, and Laboratory Improvement grant, as well as by several internal NCSU programs.

REFERENCES

[1] Alexander, B., "Games for Education: 2008," Educause Review, vol. 43, no. 4, July 2008, pp. 64–65.

[2] Etuk, N., "Educational Gaming—From Edutainment to Bona Fide 21st-Century Teaching Tool," MultiMedia & Internet@Schools, vol. 15, no. 6, November 2008, pp. 10–13.

[3] Center for Technology in Education, Johns Hopkins University, "A review of recent games and simulation research and potential educational applications," November 2006. http://labyrinth.thinkport.org/www/library/papers/cte_november2006.pdf

[4] Gehringer, E. F., Ehresman, L. M., and Skrien, D. J., "Expertiza: Students helping to write an OOD text," OOPSLA 2006 Educators' Symposium, Portland, OR, October 23, 2006.

[5] Gehringer, E. F. and Miller, C. S., "Student-generated active-learning exercises," SIGCSE 2009, Fortieth Technical Symposium on Computer Science Education, Chattanooga, TN, March 4–7, 2009.

[6] Fisher, D., Turner, T. C., and Smith, M. A., "Space planning for online community," American Association for Artificial Intelligence, 2008. http://www.aaai.org/Papers/ICWSM/2008/ICWSM08-014.pdf

[7] Cho, K. and Schunn, C. D., "Scaffolded writing and rewriting in the discipline: A Web-based reciprocal peer-review system," Computers & Education, vol. 48, 2007, pp. 409–426.

[8] Extramarks.com, "How to use Extramarks," http://www.extramarks.com/how-to-use/, accessed March 2009.