Competitive Evaluation in a Video Game Development Course

Manuel Palomo-Duarte, Juan Manuel Dodero, José Tomás Tocino, Antonio García-Domínguez, Antonio Balderas
Computer Languages and Systems Department, University of Cádiz, Cádiz, Spain
ABSTRACT
The importance of the video game and interactive entertainment industries has continuously increased in recent years. Academia has introduced computer game development into its curricula. At the University of Cádiz, a non-compulsory course on "Video Game Design" is offered to Computer Science students. In the artificial intelligence lesson, students play a competition to earn part of their grade. Each student individually implements a strategy to play a predefined board game. The different strategies compete in a software environment that implements the game. Grading this part of the course depends on the results obtained in the competition. Additionally, before the competition is run, students have to read the source code written by their classmates and bet part of their grade on other students' results. This way, they elaborate a critical analysis of source code written by other programmers: a very valuable skill in the professional world. Students who are not so proficient in coding are still rewarded if they demonstrate they can do a good analysis of the source code written by others.

Keywords
Competition, video game development, artificial intelligence, expert system programming
1. INTRODUCTION
The importance of the video game and interactive entertainment industries has continuously increased in recent years [16]. Academia has introduced computer game development into its curricula. At the University of Cádiz, a non-compulsory course on "Video Game Design" is offered to students of its Computer Science degrees in their final (third) year. The recommended prerequisites are passing the Object-Oriented Programming, Data Structures, and Analysis and Design of Algorithms courses of the second year. There is also an elective Introduction to Artificial Intelligence course offered in the second year, but it is not a prerequisite: our experience provides all the necessary background on the topic, and that course does not include lessons on expert systems.

The course mainly follows a project-based learning approach [13], but in the artificial intelligence lesson students play a competition (implementing a serious game [3]) to earn part of their grade. Each student individually programs a strategy to play a predefined board game. The different strategies compete in a software environment that implements the game. Grading this part of the course depends on the results obtained in the competition, which serves as a motivating factor [2].

To run the competition, a specially designed free software tool provides a common environment where different strategies coded as rule-based expert systems compete against each other. The system models a square board in which players have partial knowledge of the world (i.e., there is no "perfect strategy" to play the board game). Using this application, the students can program and test their expert system rule sets against other rule sets or against human players. After competing in a league, students can improve their systems before participating in a play-off tournament. This way, students develop the skill of critically analyzing the strategy they have implemented.
Categories and Subject Descriptors
I.2.1 [Artificial Intelligence]: Applications and Expert Systems; K.3.2 [Computers and Education]: Computer and Information Science Education

General Terms
Experimentation
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ITiCSE’12, July 3–5, 2012, Haifa, Israel. Copyright 2012 ACM 978-1-4503-1246-2/12/07 ...$10.00.
Table 1: Pieces in each army

Value   Number of pieces
1       1
2       8
3       2
4       2
5       2
6       1
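The army composition above, together with the capture rule described in Section 2.1 (when a piece moves onto an enemy piece, both values are revealed and the lower-valued piece dies; a tie removes both), can be sketched in Python. This is our own illustration: the names and the helper function are not part of Gades Siege.

```python
# Army composition from Table 1: piece value -> number of pieces.
ARMY = {1: 1, 2: 8, 3: 2, 4: 2, 5: 2, 6: 1}

def resolve_attack(attacker, defender):
    """Resolve a capture attempt between two revealed piece values.

    Returns the list of surviving values: the lower-valued piece is
    removed from the board, and a tie removes both pieces.
    """
    if attacker == defender:
        return []                       # both pieces die
    return [max(attacker, defender)]    # only the higher value survives

# Each army fields sixteen pieces in total.
assert sum(ARMY.values()) == 16
```

Note that the winning condition is inverted with respect to the piece strength: the game ends when a player captures the opponent's lowest-valued piece.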
Additionally, besides programming a strategy, students have to read the source code written by their classmates and bet part of their grade on other students' strategies. This way, they review source code written by other programmers, which is a very valuable skill in the professional world. At the same time, students who are not so proficient in coding are still rewarded if they can do a good analysis of the source code written by others by identifying the future winner of the competition.

The rest of the paper is organized as follows: first, we describe the methodology followed in the classroom. Then, we discuss the results obtained last year. The fourth section comments on similar experiences. Finally, we compile some conclusions and outline future work.
2. METHODOLOGY

The main aim of the experience is that students of the video game design course learn artificial intelligence programming using expert systems. All work for this lesson must be done within the allotted 8 hours in the classroom, so the game has to be very easy to play and code for.

2.1 The game

The game implemented for the competition is a simplified version of the Stratego board game [19]. It pits two armies of sixteen pieces each against each other on an 8x8 grid. Both armies have the same pieces, listed in Table 1. Every piece has an associated value that is initially only visible to its owner. Players can initially place their pieces on the board in any order: one army is placed on the two lowest rows of the board and the other on the two highest rows. Figure 1 shows a possible initial setup: brackets indicate that the value of the piece is hidden from the opponent.

Figure 1: A possible initial setup.

Players take turns moving one piece to an adjacent square (upwards, downwards, rightwards or leftwards). When a piece moves into a cell occupied by an enemy piece, their values are revealed and the lowest-valued piece is removed from the board (dies). If they have the same value, both pieces die. The objective of the game is to capture the opponent's lowest-valued piece. After a certain number of turns have passed, the match is tied.

A free software tool named Gades Siege [8] has been developed to support the competition. It improves upon a previous system called "Resistencia en Cádiz", and provides a common environment where the expert systems developed by the students and those saved from previous years can face off against each other. Gades Siege allows for playing leagues, play-offs and single matches step by step, or collecting statistics from batches of 100 unattended matches against an opponent (using different random seeds). A human can also take control of a team to test the behavior of the strategy under certain situations. This helps students test and improve their strategies.

Figure 2 shows a screenshot of a match between two teams. Most of the window is dedicated to the game board. The right edge of the window shows the names of both teams and their dead pieces, along with the number of turns played and the remaining turns before the match is tied. Below the game board, four buttons with arrows allow navigating the saved game states for the match. From left to right, they allow replaying the match from the beginning, going back one step, going forward one step, and fast-forwarding to the last turn, respectively. Finally, three more icons on the bottom-right corner activate automatic simulation (advancing one turn per second), continue to the next match (or exit if not inside a tournament), and exit the tournament, respectively.

Figure 2: Screenshot of a match in Gades Siege. The value of the 6-point orange piece is revealed.

On the one hand, the game implements a partial-knowledge world for players. Thus, there is no "perfect strategy" to play the game, avoiding a loss of interest due to unbeatable teams. On the other hand, players have perfect knowledge of their own armies. Since there is only one agent playing per side, the game is kept simple, as there is no need to implement the fuzzy intelligence and communication protocols usually required by a multi-agent system. As a result, the problem is suitable for solving with expert systems. In particular, as the board is discrete, the artificial intelligence can be easily programmed using a rule-based expert system [5]. This way, the strategy is implemented as a set of prioritized if-then rules that are executed according to information from the current game state. Programming is as simple as adding, modifying, removing or re-prioritizing rules. This offers a wide variety of behaviors that can be implemented by making slight changes to a rule set, including non-deterministic behaviors if several rules of the same priority can be activated at a certain moment.

Moreover, while general rules seem good for the beginning of a match, more specific ones play better in the last steps or in certain situations. So a good balance between both kinds of rules is required to create a winning team.

The system is programmed in Python using PyGame (a wrapper for the SDL multimedia library), and Glade for the GUI. The core is built on CLIPS [15]. All systems and our own code are multi-platform and free. Gades Siege is freely available for download at [8].
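The actual strategies are CLIPS rule sets; purely as an illustration of the prioritized if-then scheme (our own sketch, with invented rule names and a simplified state representation), the same idea can be mimicked in Python:

```python
import random

# Hypothetical sketch of a prioritized rule set; the real strategies
# in Gades Siege are written as CLIPS rules, not in Python.

def rule_attack_weaker(state):
    # Fires when a known weaker enemy piece is adjacent: capture it.
    return state.get("weaker_enemy_adjacent")

def rule_flee_stronger(state):
    # Fires when a known stronger enemy piece is adjacent: retreat.
    return state.get("stronger_enemy_adjacent")

def rule_advance(state):
    # Default rule: move some piece towards the enemy rows.
    return "advance"

# Rules paired with priorities; higher priority is tried first.
RULES = [
    (10, rule_attack_weaker),
    (5, rule_flee_stronger),
    (0, rule_advance),
]

def choose_move(state):
    """Fire the highest-priority applicable rule. Ties between rules of
    equal priority are broken randomly, which is what makes equal-priority
    rule sets non-deterministic."""
    for priority in sorted({p for p, _ in RULES}, reverse=True):
        candidates = [rule(state) for p, rule in RULES if p == priority]
        candidates = [move for move in candidates if move]
        if candidates:
            return random.choice(candidates)
    return None

print(choose_move({"weaker_enemy_adjacent": "capture at (3, 4)"}))
# prints "capture at (3, 4)"
```

Re-prioritizing a rule is just a matter of changing one number in `RULES`, which mirrors how slight changes to a CLIPS rule set can produce very different behaviors.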
2.2 Schedule

The experience was performed over two weeks, using the weekly 4-hour blocks available for the course (2 hours of lectures followed by 2 hours of laboratory):

• The 2-hour lecture of the first week dedicates its first hour to providing the required conceptual foundations behind rule-based expert systems, emphasizing their practical applications in science and engineering. The second hour is used to present the game. Some sample rules are shown, from simple cases up to a reasonably complex system, so students can get familiar with the environment.

• After the lecture, students move to the computer classroom. Due to the ease of programming the system, they can code basic rule sets in 2 hours, learning expert system programming with the support of a teacher.

• Students then have 5 days to code their strategies at home. Two of the features commented on previously proved very useful for learning: playing a very large batch of matches automatically against sample strategies to collect statistics on general performance, and playing against a human to test how the strategy behaves under certain circumstances and do fine tuning.

• One day before the competition starts, students have to upload their strategies to the Moodle virtual learning platform used in the course. Once all the students have uploaded their strategies, the teachers make them publicly available to the enrolled students, so they can read their colleagues' code. Note that only the strategies, and not the initial formations, are available for download. If both could be downloaded, the students could actually play the competition themselves and (except for slight differences caused by random seeds) know the winner in advance.

• Lecture time for the second week starts with students declaring which strategy or strategies they will bet part of their marks on. They can bet on one, two or three different strategies, and the sum of the grades bet must be between 10% and 50%. Then, a two-legged round-robin league in which all students are confronted against each other is played. Some matches are watched by the entire class on a large wall screen so students can explain the behavior they implemented.

• Usually, after playing the league, students identify minor issues and weak points in their systems that can be easily fixed. Note that changing the priority of a rule or adding/removing a single rule is very easy to implement and can lead to very different behavior. This improvement can include adopting rules considered suitable from the strategies of classmates. Students have 1 hour in the laboratory to modify their rule sets; then the final strategies take part in a play-off tournament.

2.3 Grading

In order to pass the course, a strategy only needs to defeat a naive (sparring) team that is included in the system. This way, students can be confident that they will pass this part of the course even if their rule set ends up last in the competition. We do not consider it fair that students able to program a good expert system in 2 weeks should fail because their classmates are better programmers. Regardless of the quality of the strategies, one student must necessarily win the tournament, another must be second, and so on until the last place. An additional point is given if the student explains her strategy clearly. On the one hand, this prevents cheating (or at least, it ensures that students know what their strategies do); on the other hand, classmates can easily use some of the rules to improve their own strategies for the play-off. In order to further improve their marks, students need to win matches against their classmates.
Table 2: Grades distribution

Mark (0-10)     Performance of the team                                                                   Range of teams
Fail (below 5)  No match won                                                                              0 or 1
5               Only defeats the sparring (DS)                                                            0 or 1
6               DS and explains the strategy clearly                                                      0 to all
7               DS and wins another match                                                                 0 to 2
8               DS and stays in the upper half of the league or passes one play-off round                 Half to all
9               DS, stays in the upper half of the league and passes one play-off round; or DS and wins a tournament   Half to half+2
10              DS, stays in the upper half of the league, passes one play-off round and wins a tournament             0 to 2

Strategies get extra points (up to a total score of 10) for each of:
• Winning a match (not against the sparring opponent).
• Being in the top half of the league.
• Passing a play-off round.
• Winning a tournament.

The experience up to this point has been carried out since the first year of the course, in 2007–08. It has a clear problem: it implies certain restrictions on the number of teams that can get certain grades. For example, there can be only one winner in each competition, and only half of the teams will pass the first play-off round. This limits the distribution of grades according to team performance, as shown in Table 2. This part of the course represents 15% of the course grade. As we have seen, getting 7 points in this part is quite affordable: students only need to program a strategy, explain it to their classmates and win a match against a real opponent. A student getting 7 points here could still get up to 9.55 points in the final course grade (out of a maximum of 10, allowing an A++ mark). Considering the relative weight of the experience in the overall course grade and the motivation provided by the competition, we think that limiting the range of teams obtaining certain grades is a good trade-off.

Nevertheless, a more flexible distribution of marks and grades according to students' skills was desired. For this reason, in the 2010–11 academic year, besides programming an expert system, students had to read source code written by their classmates and bet part of their final marks on other teams. This way, they elaborated a critical analysis of the source code written by other programmers, a very valuable professional skill. At the same time, students who were not so proficient in coding were still rewarded if they showed they could do a good analysis of the source code written by others, identifying the future winner of the competition.

It is important to note that the bet only concerned the performance of the strategies in the league. The play-offs were played using the modified strategies that students improved in the lab, so they were different from those read by their colleagues before the competition. It would have been interesting to extend the betting system to the play-off, but it was not possible due to the strict time constraints (two blocks of 4 hours): the first block is dedicated to learning, and the second to playing and making quick changes to strategies, leaving no time for analyzing and improving upon them.

This way, the relation between the performance of a strategy and the grade obtained by its author is further relaxed. If the final mark of a team in the league is X (out of a maximum of 9), then the student who coded it could get a minimum grade of X/2 (by betting 50% on a team unable to beat the sparring opponent) and a maximum of X/2 + 4.5 (by betting 50% on the best team, which obtains 9 points). But if we suppose that every team gets at least 7 points, the minimum would be X/2 + 3.5, for betting half of the grade on a team that only got 7 points. In any case, if a student preferred keeping most of the mark of his own team, he could get a minimum of 0.9X (supposing that he bet on a team that won no matches), or 0.9X + 0.7 if we suppose 7 to be the minimum actual mark. The maximum would then be 0.9X + 0.9 (betting on a team that wins the competition). As remarked before, these formulas concern only the league, which covers up to 9 points. Two additional points could be obtained in the play-off of the improved strategies, up to 10 total points.
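Under our reading of these rules (the function name and the list-of-pairs interface are our own, hypothetical choices, not part of the course materials), the bet-adjusted league grade amounts to a weighted mean of the student's own mark and the marks of the bet-on teams:

```python
def league_grade(own_mark, bets):
    """Bet-adjusted grade for the league part (marks range over 0-9).

    own_mark: league mark X obtained by the student's own strategy.
    bets: list of (team_mark, fraction) pairs, one per bet-on strategy;
          the total fraction bet must lie between 10% and 50%, and at
          most three strategies may be chosen.
    This is our interpretation of the formulas in the text, not an
    official specification of the course rules.
    """
    total = sum(fraction for _, fraction in bets)
    if not 0.10 <= total <= 0.50:
        raise ValueError("total bet must be between 10% and 50%")
    if not 1 <= len(bets) <= 3:
        raise ValueError("bets must cover one, two or three strategies")
    kept = own_mark * (1 - total)   # part staked on the student's own strategy
    won = sum(mark * fraction for mark, fraction in bets)
    return kept + won

# A student whose strategy got 7 points and who bet 50% on a team that
# finished with 9 points ends up with 7 * 0.5 + 9 * 0.5 = 8.0 points.
```

With b as the total fraction bet, this reproduces the bounds stated above: X/2 for b = 0.5 on a 0-point team, X/2 + 4.5 for b = 0.5 on a 9-point team, and between 0.9X and 0.9X + 0.9 for the minimum bet b = 0.1.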
3. RESULTS OF THE EXPERIENCE
Due to the nature of the game (providing only partial knowledge of the world) and the limited knowledge of the undergraduate students of artificial intelligence techniques, there has been no "absolute best" team in any year. Curiously, teams at the top of the league quite often had difficulties beating, or even could not beat, some teams in the lowest part. This demonstrates that even the best teams had some weak points.

In informal conversations, students reported that the competition between peers is a motivating factor for learning expert system programming, reinforcing the conclusions of similar experiences [18]. They also commented that watching their army evolve against others without their intervention was a nice way to understand the application of artificial intelligence to game development.

This year 15 students joined the course, but only 14 attended the experience: one student uploaded a strategy but did not attend the sessions and got 0 points in the bet (as he did not provide a satisfactory reason for missing them). The 14 students bet an average of 24.29% of their marks: most bet below 30%, except for one who bet 40% and another who bet 50%. As for the lowest bets, three students bet the minimum 10% of their grade. Only three students obtained lower grades due to their bets, and seven improved theirs. The maximum gain was obtained by a student whose strategy got 7 points and who bet half of his mark on the winning one (which obtained 8, the maximum available in the league). The highest losses were −0.85 and −0.65 points. The remaining students varied their strategy grades by 0.2 points or less. Finally, four students kept the same grade because they bet different rates on several teams whose weighted mean was the same grade their own strategy got.

Interestingly, some students acknowledged that the main reason to bet on a certain team was not only their impressions when reviewing the code, but also the overall marks of its developer in previous courses. Although we do not have access to that information, in general it seems that students who perform well in programming subjects also do well in this experience.

Students were, on average, much more interested in this course than in others, judging from the anonymous surveys conducted near the end of the term. The survey covered different aspects of the course, all rated from 1 (minimum) to 5 (maximum). Results showed very high scores:

• Evaluation method in the course: 4.69
• Competition of expert systems: 4.92
• Betting system included: 4.15
• General course value: 4.61

4. RELATED WORKS

There are similar experiences of competition between students in the literature. Concerning simulated robotic soccer, there are different experiences using the RoboCup Soccer Server, an evolution of JavaSoccer [10]. This system uses a client-server architecture, so the artificial intelligence modules can be programmed not only using rule-based expert systems [12, 4], but also other techniques [11, 17]. In [7], groups of students had to implement the artificial intelligence for a RoboCup team during a course. At the end of it, a competition was run among the teams coded by the students and some others from RoboCup World Cup participants. The basis for the students' examination and grade was a written report on the team design and its performance in the competition.

Another example is Robocode [6], a game that implements a virtual battlefield where two robots fight each other. Students use Java to implement different artificial intelligence techniques. Then, they observe how their agent performs compared to others by watching the agents compete.

There is an interesting experience using software based on the Ataxx game [14]. Groups of two or three students had to use Prolog to implement an agent which had to complete every game using a limited amount of CPU time. The assignment grade was determined by the ranking obtained in a free-for-all competition among these student programs.

Finally, even hardware competitions using robots are documented [9, 1]. They are developed in the laboratory sessions of an Artificial Intelligence course. This way, students can put the concepts learned into practice and are highly motivated.

Nevertheless, all of the experiences discussed use much more complex games than our proposal, so they would have needed more time than was available; as we commented previously, time restrictions are very tight in this experience. As for the betting system, we have found no experiences using it, which shows its originality.

5. DISCUSSION AND CONCLUSION

The importance of the video game and interactive entertainment industries has led academia to introduce computer game development into its curricula. In an elective "Video Game Design" course offered in the Computer Science degrees of the University of Cádiz, the grade for the artificial intelligence lesson is obtained competitively. Students code strategies to play a serious game: they compete in a software environment that implements a board game, and the results obtained in the competition form part of their grade. Firstly, each student has to code a strategy as a rule-based expert system. These strategies compete in a league. Then, students can improve them before participating in a play-off tournament. Another part of the grade depends on a bet on the strategies of their classmates. This way, students practice critical analysis of source code written by other programmers, a very valuable skill in the professional world. At the same time, students who are not so proficient in coding are still rewarded if they demonstrate they can do a good analysis of the source code written by others, identifying the future winner of the competition.

The experience has been rewarding both for the students and for the teacher. Developing the environment was a time-consuming task in the first year, but results showed it was a very interesting way to motivate students. As the system is open source, it has evolved year after year, mainly thanks to students who detected deficiencies or interesting features that could be added and coded them. Thanks to its automation, the experience is very easy to handle. Enrollment figures were 20 students in 2011, 16 in 2010, 27 in 2009 and 13 in 2008. The skills developed include not only those specific to the lesson (programming rule-based expert systems, using them for video game artificial intelligence, etc.) but also general ones: improving a strategy by analyzing its performance, and critically analyzing source code written by other programmers.

In general, due to the generous grading system used for the strategies, all the students get good grades in this lesson. This way, students who are able to program a good expert system in 2 weeks cannot fail simply because their classmates are better. This is necessary because, regardless of the quality of the strategies, the range of teams in each place of the tournament is limited: one student must win the tournament, another must be second, and so on until the last place.

The use of a betting system has an interesting impact on the experience, as it relaxes the relation between the performance of a strategy and the grade obtained by its author. In fact, seven of the 15 students in the course increased their grades thanks to it, and only three decreased theirs. Students could bet from 10% to 50% of their marks, with the average being almost 25%. Interestingly, some students acknowledged that the main reason to bet on a certain team was not only their impressions when they reviewed the code, but also the overall marks of its developer in previous courses. Although we do not have access to that information, our general impression is that students who perform well in programming courses also do well in this experience.

Two lines of future work lie ahead. The first is making it easier to change the rules of the board game for each competition or match without having to code them, so that the teams could play in different environments and avoid over-specialization. For example, there might be some unreachable squares on the board that need extra intelligence to play around, or some pieces could be uncovered from the beginning of the match. The second line of work is applying log analysis techniques to detect weak rules in a strategy.
6. ACKNOWLEDGMENTS

This work has been funded by the Proyecto de Innovación Educativa Universitaria del Personal Docente e Investigador "La competitividad y el análisis crítico en la evaluación" (CIE44) of the Convocatoria 2010 de Proyectos de Innovación Educativa Universitaria para el PDI de la Universidad de Cádiz, funded by the Consejería de Innovación, Ciencia y Empresa of the Junta de Andalucía and the University of Cádiz (Spain), and by the Computer Languages and Systems Department of the same University.

7. REFERENCES

[1] M. Arevalillo-Herráez, S. Moreno-Picot, and V. Cavero-Millán. Arquitectura para utilizar robots AIBO en la enseñanza de la inteligencia artificial. IEEE RITA (Revista Iberoamericana de Tecnologías del Aprendizaje), 6(1):57–64, 2011.
[2] J. C. Burguillo. Using game theory and competition-based learning to stimulate student motivation and performance. Computers & Education, 55:566–575, September 2010.
[3] M. J. Dondlinger. Educational video game design: A review of the literature. Journal of Applied Educational Technology, 4(1):21–31, 2007.
[4] R. Fathzadeh, V. Mokhtari, M. Mousakhani, and A. Shahri. Coaching with expert system towards RoboCup soccer coach simulation. In A. Bredenfeld, A. Jacoff, I. Noda, and Y. Takahashi, editors, RoboCup 2005: Robot Soccer World Cup IX, volume 4020 of Lecture Notes in Computer Science, pages 488–495. Springer Berlin / Heidelberg, 2006.
[5] J. C. Giarratano and G. Riley. Expert Systems. PWS Publishing Co., Boston, MA, USA, 3rd edition, 1998.
[6] K. Hartness. Robocode: Using games to teach artificial intelligence. Journal of Computing Sciences in Colleges, 2004.
[7] F. H. Johan, J. Kummeneje, and P. Scerri. Using simulated RoboCup to teach AI in undergraduate education. In Proceedings of the 7th Scandinavian Conference on AI, 2001.
[8] José Tomás Tocino et al. Gades Siege. http://code.google.com/p/gsiege, 2011.
[9] F. Klassner. A case study of LEGO Mindstorms' suitability for artificial intelligence and robotics courses at the college level. SIGCSE Bulletin, 34:8–12, February 2002.
[10] B. López, M. Montaner, and J. L. de la Rosa. Utilización de un simulador de fútbol para enseñar inteligencia artificial a ingenieros. In Actas de las Jornadas de Enseñanza Universitaria de la Informática, 2001.
[11] I. Noda, H. Matsubara, K. Hiraki, and I. Frank. Soccer server: A tool for research on multiagent systems. Applied Artificial Intelligence, 12(2-3):233–250, 1998.
[12] M. Palomo, F.-J. Martín-Mateos, and J.-A. Alonso. Rete algorithm applied to robotic soccer. Lecture Notes in Computer Science, 3643:571–576, 2005.
[13] P. Recio-Quijano, N. Sales-Montes, A. García-Domínguez, and M. Palomo-Duarte. Collaboration and competitiveness in project-based learning. In Proceedings of the Methods and Cases in Computing Education Workshop (MCCE), pages 8–14, 2010.
[14] P. M. P. Ribeiro, H. Simões, and M. Ferreira. Teaching artificial intelligence and logic programming in a competitive environment. Informatics in Education, 8(1):85–100, 2009.
[15] G. Riley. CLIPS home site. http://clipsrules.sourceforge.net, 2011.
[16] S. E. Siwek. Video Games in the 21st Century: The 2010 Report, 2010.
[17] P. Stone and G. K. December. Publications related to the RoboCup soccer simulation league. World, 29(3):866–871, 2007.
[18] S. A. Wallace and J. Margolis. Exploring the use of competitive programming: observations from the classroom. Journal of Computing Sciences in Colleges, 23:33–39, December 2007.
[19] Wikipedia. Stratego. http://en.wikipedia.org/wiki/Stratego, 2012.