Using Game Development to Reveal Programming Competency Steven Simmons, Betsy DiSalvo, Mark Guzdial College of Computing Georgia Institute of Technology 801 Atlantic Drive Atlanta, GA 30332-0280
[email protected], {bdisalvo, guzdial}@cc.gatech.edu ABSTRACT
In the summer of 2011, we revamped the curriculum for the GLITCH Game Testers research project to better serve the interests of the 15 student participants. Our new curriculum, based on Greenfoot and game development, replaced an earlier curriculum that students felt was inauthentic. Through Greenfoot, the new curriculum had the benefits of authenticity, easy access to concepts that were relevant to the students, and immediate visual feedback, and these qualities helped align students' perceptions with their actual abilities to program based on what they had learned. We analyze students' reported abilities and their demonstrated abilities and show how the two align, and we suggest long-term implications for maintaining that alignment.

Categories and Subject Descriptors

K.3.2 [Computers and Education]: Computer and Information Science Education – computer science education, curriculum

General Terms

Human Factors

Keywords

Games, Game Development, Games Education, Programming, Programming Competency

1. INTRODUCTION

The use of digital games has garnered much attention for computer science (CS) education. Games offer an opportunity for complex learning and interaction with computation that many young people already enjoy. However, leveraging this opportunity with high school students has proven to be a challenge because of the steep learning curve of making quality games. We found in the GLITCH Game Testers, a work and learning program for African American male high school students, that the use of more traditional programming languages was too challenging for teaching broader concepts, while the use of drag-and-drop programming environments felt inauthentic. To address these challenges we implemented a curriculum using the Greenfoot programming environment to make games and teach introductory CS, and we observed that through Greenfoot, these students were better able to assess and report their programming abilities.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. FDG '12, May 29–June 1, 2012, Raleigh, NC, USA. Copyright © 2012 ACM 978-1-4503-1333-9/12/05…$10.00.

2. BACKGROUND

GLITCH Game Testers has proven to be a successful program for increasing young African American males' interest in computing [1]. However, the curriculum used in teaching introductory CS posed some challenges. To address these concerns, we implemented an introductory CS curriculum based upon Greenfoot in 2011. Greenfoot focuses on game and simulation development as a means to introduce CS. A number of other programs have used similar approaches in leveraging game development for CS education.

2.1 GLITCH Game Testers

GLITCH Game Testers is a unique environment for CS learning funded by the National Science Foundation Broadening Participation in Computing initiative. The program is a partnership between Morehouse College and the Georgia Institute of Technology and intends to encourage interest among African-American teenage males in fields of study in computing. As the name implies, GLITCH students spend most of their time testing video games; however, some time is allocated for formal CS workshops. Our choice of curriculum for GLITCH's CS workshops involves considerations of the appeal of the programming language and environment and the propensity of the curriculum to aid in motivating interest, experience, and competence in computing among the participating students.

The program began in 2009, and for the past three summers participants worked from 10:00 AM to 5:00 PM, Monday through Friday, for 8 weeks. Participants were paid $8 per hour for testing games. While GLITCH seemed like a dream job for many of these young men when they began, they soon found the quality assurance work to be repetitive and tedious. This had an unanticipated result: participants looked forward to CS workshops as a break from the monotony of game testing – they were eager to work on more interesting problems. In the first two years, curricula based on programming with Alice and Jython were used to introduce fundamental concepts of software design and computer programming. Alice and Jython are popular choices for CS1 and earlier courses [2, 3]. Earlier findings indicated that participants had varied responses to Alice and Jython [4]. While many students preferred Alice for its visual appeal and easy access to larger concepts, a greater number preferred Jython for its more authentic programming experience. For these students, articulating a program textually with Jython (as opposed to dragging and dropping pre-fabricated constructs in Alice) gave them greater ownership as authors of their programs.

To address these conflicting concerns, in the summer of 2011 we developed and implemented a new introductory CS curriculum based on the Greenfoot software program. The Greenfoot project was developed by the Computing Education Research Group in
the School of Computing at the University of Kent, United Kingdom. With Greenfoot, students learn object-oriented programming in Java and interact with 'actors' in a 'world', building 'scenarios': games, simulations, and other visually oriented programs. The Greenfoot software comes complete with a fully interactive visual "world" interface, a text-based Java programming IDE, and a detailed programming library that enables students to easily develop and manipulate their scenario creations.
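Real Greenfoot scenarios extend the `greenfoot.Actor` and `greenfoot.World` base classes and only run inside the Greenfoot environment. As a self-contained illustration (a toy stand-in of our own, not the Greenfoot API), the following sketch imitates the shape of a typical scenario to show the actor/world structure students work with:

```java
// Toy stand-in for Greenfoot's actor/world model (illustrative only;
// real scenarios extend greenfoot.Actor and greenfoot.World instead).
import java.util.ArrayList;
import java.util.List;

abstract class Actor {
    int x, y;
    abstract void act();                  // called once per simulation frame
}

class World {
    final List<Actor> actors = new ArrayList<>();
    void addObject(Actor a, int x, int y) { a.x = x; a.y = y; actors.add(a); }
    void step() { for (Actor a : actors) a.act(); }  // advance one frame
}

// The kind of class a student writes: an actor that drifts rightward.
class Ball extends Actor {
    @Override void act() { x += 2; }
}

public class Scenario {
    public static void main(String[] args) {
        World w = new World();
        Ball b = new Ball();
        w.addObject(b, 10, 20);
        w.step();                          // Greenfoot's Run button loops this
        System.out.println(b.x);           // 12
    }
}
```

In Greenfoot itself, the environment supplies the world rendering and the frame loop; the student's attention stays on the `act()` method of each actor.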
There were multiple reasons for the switch from Alice and Jython to Greenfoot. First, Greenfoot had the potential to balance easy access to larger concepts with an authentic programming experience. Second, Greenfoot uses Java, which is the language used in the Advanced Placement Computer Science test that the participants prepare for in their second year in the program. Third, Greenfoot matched the gaming interest and day-to-day experience our testers had.

2.2 Using Games for CS Learning

Within the research field of computer science (CS) education there is evidence that video games are an important cultural artifact in the development of computer scientists. Research based on computer scientists' biographical stories and ethnographic research with computer science majors and young computer enthusiasts demonstrates that many computer scientists attribute much of their interest in computing to playing video games [5-7]. However, simply playing digital games does not seem to be the key to increased interest in computing. There are indicators that the way people play and the practices around play have a greater impact on interest in CS than the hours people invest in playing games [8]. While the use of digital games to teach computer science has succeeded in attracting and retaining computer science majors for some programs [9], others have found that game design curricula in CS courses can have a positive effect on some students while negatively impacting other students' interest in CS. Rankin, Gooch et al. found that pedagogical strategies were important and that the use of a short gaming project did not necessarily result in greater interest in game development or computer science [10]. We sought to better balance the affordances that Jython and Alice provided in our pedagogical strategies while using a game-based approach.

3. METHODS

During the 8-week workshop we collected data through surveys, code analysis, and observations. These data were used to help us better define and understand programming competency and the impact our Greenfoot-focused curriculum had in reconciling students' perceptions of their programming abilities with their abilities to demonstrate their programming knowledge.

3.1 The Game Development Workshop

To best suit the GLITCH students' interest in gaming and to give a more detailed portrayal of how games are created, we repurposed the CS education component of the program as a game development workshop. Our primary goal for the workshop was to convey how the software behind games is written. We also wanted to provide some fundamental instruction on the broad activities associated with developing games and software, such as concept brainstorming, implementation, and testing. Thus, our curriculum centered on two thrusts: a brief introduction to the software development process using a 5-stage waterfall model (brainstorming, design, implementation, testing, and deployment) and an introduction to game programming using Greenfoot. Our game programming material derived heavily from [11], while our software development process lessons borrowed general concepts from [12]. The game development workshop lasted all 8 weeks of the program. The workshop was conducted 3 days a week (Monday, Wednesday, and Friday) for 1 to 2 hours each day. The first 6 weeks of the game development workshop included 16 sessions of game development instruction, and all 16 students participated in this portion of the workshop. Lessons were conducted as class discussions where students were encouraged to offer their ideas and solutions. Of the 16 sessions, 3 were dedicated to introducing the software development process, and 2 were reserved for actively using the process to brainstorm and design a game. Because our focus was more on game programming, we gave brief overviews of each of the 5 stages in the waterfall model to show the various activities involved in game and software development, and we coached students through brainstorming and design activities for their game projects. For programming lessons, students were to modify prewritten Greenfoot scenarios loaded onto their individual workstations, adding the appropriate Java programming code to complete the scenario to some expected end. Successive scenarios and lessons allowed students to practice previously introduced topics but focused primarily on increasingly complex programming concepts. Due to time constraints, students were given brief in-class assignments that were reviewed as a class within the session.

The remaining 2 weeks of the workshop were dedicated to a game development project in which students, organized into teams, were to use what they had learned about the software development process and game programming to develop a game of their own. 12 of the 16 students participated in the game development project and organized into 6 teams of 2 to 4 members. A total of 6 sessions were dedicated to the game development project, and for each of these sessions, students were given a set of tasks to fulfill toward completing their game. We provided each team with a prototype of the game they intended to build, and we adjusted the level of scaffolding afforded each team, through the functionality provided in the prototype and in the amount of assistance we gave, in accordance with the team's needs. Teams were to complete their games by adding the necessary code to the prototype they were supplied, mirroring the format of the class workshops.

The game development workshop concluded with a games exhibit in which each team presented their game to fellow GLITCH students and staff. We decided to make the games exhibit competitive by allowing the students and staff to vote for the best-in-show; members of the winning team won a game of their choosing. The potential to win a game added incentive for students to work all the more diligently toward developing exemplary games and ultimately sharpening their programming abilities.
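To give a concrete sense of the scenario-completion format (a hypothetical example of our own devising, not an actual workshop scenario): a student might be handed an actor whose movement method is stubbed out and asked to make it bounce off the edges of the world. The core logic the student would add can be written as plain, testable Java:

```java
// Hypothetical completion exercise: advance an actor by its horizontal
// velocity and reflect it off the world edges -- the kind of code a
// student would be asked to fill in inside an actor's act() method.
public class BounceExercise {
    /** Returns {newX, newVx} after one step in a world of the given width. */
    static int[] step(int x, int vx, int width) {
        x += vx;
        if (x < 0) {                 // crossed the left edge: reflect back
            x = -x;
            vx = -vx;
        } else if (x >= width) {     // crossed the right edge: reflect back
            x = 2 * (width - 1) - x;
            vx = -vx;
        }
        return new int[] { x, vx };
    }

    public static void main(String[] args) {
        int[] s = step(3, -5, 10);   // crosses the left edge of a 10-wide world
        System.out.println(s[0] + " " + s[1]);  // 2 5
    }
}
```

Writing the update as a pure function like this is our framing for testability; in a real scenario the same arithmetic would sit inside `act()` and call the actor's position accessors.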
3.2 Programming Competency

During the workshop, we made two significant observations: (1) in the beginning, students were sometimes unaware that their perceptions of their abilities did not exactly match their actual abilities, and (2) as students began to understand programming concepts and as more complicated topics were introduced, most of the students – even those that had generally positive perceptions of their programming abilities – began to acknowledge that there were topics they had yet to completely master, and their reported perceptions and demonstrated abilities reflected this realization.

To understand why these observations were apparent, we first define programming competency as the ability to demonstrate understanding of programming concepts and tasks. With this definition, we suggest three classifications of programming competency and the behaviors that characterize each category in Table 1. We surmise that while programming competency is most often demonstrated through the act of programming, a student's perception of their programming abilities also has significant impact on how the student engages in learning about programming and on how the student demonstrates their programming abilities. We also infer that the programming tools involved in developing students' competency have great impact on how quickly students reconcile their perceptions with their actual abilities.

Table 1. Programming Competency Classifications

Competency Classification | Characteristic Behaviors
High | Answers questions pertaining to programming accurately. Can express relationships among various programming concepts. Can readily devise complete solutions for problems using programming concepts.
Normal | Recognizes programming constructs and concepts and is able to develop some solutions. Perceives a lack of knowledge in some concepts. Recognizes the relationships among various programming concepts.
Low | Does not recognize programming constructs and/or concepts. Has noted difficulty recognizing how programming code solves a problem.

3.3 Surveys

We issued a total of 14 surveys over the course of the 6 weeks of workshop lessons. With these surveys, students reported their perception of their ability to perform specific programming-related tasks. Each question requested the student to rate their agreement with a statement pertaining to their programming ability on a 5-point scale, ranging from strongly disagree to strongly agree with a neutral midpoint. Table 2 gives sample survey questions and the 5 rating options. We used results from these surveys during the workshop to determine which concepts needed more detailed review, and we revised subsequent workshop lessons to accommodate the students' needs. While surveys were voluntary, students were strongly encouraged to participate and submit their most accurate responses.

Table 2. Sample workshop survey

Please tell us how much you agree or disagree with the following statements. Based upon this week's workshop I can: (SD | D | N | A | SA)
1. Tell someone what a constructor is.
2. Tell someone when a constructor is called.
3. Tell someone how a constructor is different from normal methods.
4. Write the diagram for a constructor.
5. Tell someone how to create a new object in Java code.

The results of these surveys supply our evaluation of the students' perception of their programming competency. Students who have higher perceptions of their programming abilities would likely answer Agree (A) or Strongly Agree (SA) to statements from the surveys. Likewise, students who are certain they are unfamiliar with some concepts would likely express low perceived ability by yielding Disagree (D) or Strongly Disagree (SD) responses. Finally, students who recognize a topic but are unsure or have difficulty expressing their understanding would likely mark Neutral (N).

To better analyze and distinguish trends in the students' self-reported perceived abilities, we coded survey responses, assigning point values for each answer given. For each survey question, 1 point was given for answering Strongly Disagree, 2 were given for Disagree, 3 for Neutral, 4 for Agree, and 5 for Strongly Agree. By averaging the number of points received per response, more general statements can be made concerning students' perceived abilities. Specifically, students who generally have high perceived programming ability would most likely yield Agree and Strongly Agree responses in surveys; consequently, their point-coded responses will average between 4 and 5 points. Similarly, students with low perceived programming ability will have point averages between 1 and 2, and students who express neutrality about their ability will average around 3 points.

Using this coding system, we averaged each student's survey responses by day. As some students did not answer all surveys, we further averaged survey responses in bi-weekly intervals for more general, uninterrupted trends in perceived ability. Next, we partitioned the space of 1 to 5 points per response into 3 categories, corresponding with our classifications for competency. The idea is that lower averages in survey responses – which correspond with disagreement with statements of programming ability – indicate lower perceived competency. Table 3 delineates the ranges for point-coded average survey responses alongside the corresponding perceived levels of competency.

Table 3. Average Survey Response Ranges and Perceived Competency

Lower Bound | Upper Bound | Perceived Competency
1.0 | 2.33 | Low
2.33 | 3.67 | Normal
3.67 | 5.0 | High

3.4 Code Analysis

We had two opportunities in which to observe students' actual programming competency: first, in the code students completed for the workshop lessons, and second, in the code they produced for their game development projects.

The goal for workshop lessons was to ensure that students were equipped with the programming concepts necessary for building a game. However, all of our workshop lessons were conducted as a class due to time restrictions, and as a result, there is very little variance among students' individual modifications to the workshop scenarios. Because of this lack of variance, the programming code the students wrote to complete the workshop
scenarios does not reliably suggest any impact on students' individual competencies, and we omit analysis of that code set. In contrast to the class workshops, the game development projects were conducted in groups where students were expected to manage themselves and produce their own work. Naturally, we observed a greater diversity in the quality of the teams' project code, and the quality of the code seems to have implications for the levels of competency the students on the team embody. The notion is that teams with higher programming competency work more independently and produce code that utilizes more programming concepts correctly and effectively. On the other hand, teams with lesser programming competency tend to employ fewer programming concepts, and those concepts tend to be the essentials: if-statements, methods, and classes. These students more often needed expert assistance when the programming concepts they were most comfortable with were not enough to produce the result they desired. Table 4 gives classifications and descriptions for code complexity and corresponding levels of programming competency.

Table 4. Code Complexity Classifications

Code Complexity | Description | Actual Competency
Low | Team's code highly resembles code from workshop scenarios. Use of essential programming constructs (required methods, conditional statements). | Low
Moderate | Team's code is similar to workshop scenarios with some differences. Use of more effective fundamental programming constructs (classes, method abstraction, instance variables). | Normal
High | Team's code differs greatly from workshop scenarios. Use of more complex programming constructs (class inheritance, local variables, loops, object method calls). | High

4. FINDINGS

Of the six original project teams, 5 successfully completed a game. For different reasons and outside variables, teams 1 and 5 required a great deal of assistance from the instructor (the first author is referred to as "the instructor" for brevity). As they are not representative, their results are omitted. We present our findings as case studies of three teams in which we profile the members of each team and analyze the teams' survey responses and project code. At the end of each case study, we suggest a classification for the level of competency each team appears to have achieved based on our analyses.

4.1 Team 2 – Alien Invasion

4.1.1 Team Profile

Team 2 consisted of two members, aliased Ron and Luther (all participant names are pseudonyms in the interest of privacy). Both were newcomers to the GLITCH program and neither had prior experience with computer programming. As students, Ron and Luther actively participated in the game development workshop lessons and frequently volunteered answers to questions posed to the class.

4.1.2 Survey Analysis

Perceived Competency: High

Luther and Ron responded to an average of 9 of the 14 surveys we issued during the workshop. Figure 1 plots the averages of their point-coded survey responses in bi-weekly intervals. Together, the team had an average response of 3.9 to survey questions over the course of the workshop, indicating above-average perceived ability to perform various programming tasks.

Figure 1. Team 2 Average Survey Responses

4.1.3 Code Analysis

Code Complexity: Moderate

Team 2 developed a game in which two players were to protect a set of precious crystals by warding off an invasion of alien spacecraft. Aliens fell from the sky (the top of the screen) toward the crystals on the ground (the bottom of the screen). Each player had his or her own set of keyboard controls by which they could rotate and fire a laser cannon. When an alien was hit by an energy ball fired from a player's laser cannon, the alien disappeared and that player received points. However, if an alien managed to evade the barrage of cannon fire, it would collect a crystal and disappear. As increasingly more alien spacecraft were destroyed, the game generated new spacecraft more often, making the game more challenging. The game ended when the aliens had collected all the crystals, and the player with the most points was declared the winner.

The team briefly redesigned their game after they had determined their original concept would be too complex to develop based on their experience with Greenfoot and Java. The team was given a prototype of their concept in which the user could move a crosshair using the keyboard directional keys to aim and shoot moving blocks of different sizes and colors. The prototype lacked scoring, multiple players, and an end-game scenario.

The game Ron and Luther designed turned out to be very similar to the scenarios they had encountered during the workshop, and the team had very little difficulty adding functionalities that were covered in class. However, the team did require additional scaffolding for more involved functionalities that were not specifically covered. For example, the team was puzzled as to how to modify the prototype to generate blocks only from the top of the screen and not from the sides of the screen, and they were unsure about making their cannons fire projectiles. The instructor provided the team more scaffolding through coaching and by directing them to online resources that demonstrated the functionalities they desired, and the team had few problems using the additional help to solve their problems.

Team 2 had ideas about the kinds of classes and methods they would need to implement based on what they had learned from the workshop. As a result, the team created several classes that were critical to their game and made several significant changes to their original prototype without the instructor's assistance. However, the kinds of classes and methods they added were very similar to the workshop's scenario code, and it is highly likely that the team used the workshop code as a template for the changes they wanted to make.

4.1.4 Competency

Table 5. Team 2 Perceived and Actual Competency

Avg. Survey Response (Perceived Competency) | Code Complexity (Actual Competency)
3.9 (High) | Moderate (Normal)

Luther and Ron acted mostly autonomously and produced a nicely decorated, functional game. They required some additional scaffolding to add features that had not been taught during the workshop, but they were able to understand where those features were needed, the requirements for implementation, and the impact the new code would have on their game. Their survey responses show that they each had an above-normal perception of their individual abilities to program but began to recognize their lack of understanding as more complex material was introduced toward the end of the workshop. This lack of understanding is reflected in their choice of programming constructs in the code and in the amount of additional scaffolding they required.

4.2 Team 3 – Super Mario Worms

4.2.1 Team Profile

The third project team included two returning members of the GLITCH program, aliased Martin and Elliot. Both had prior programming experience with Alice and Jython, and they were able to apply their general understanding of programming concepts to programming in Java. Martin and Elliot were very active participants in workshop lessons and were the most frequent to offer answers to class questions.

4.2.2 Survey Analysis

Perceived Competency: High

Elliot and Martin each responded to 7 of the 14 total surveys. Average survey results for the team are depicted in Figure 2. Based on these results, the team appears to have had very high perceived ability to program, with a team average response of 4.1 (slightly above Agree). We expected this team to have both higher perceived and actual competency because of their prior experience with programming and their participation in an AP-CS supplemental course that focused on Java. As topics grew in complexity, the team appears to have maintained a high level of perceived ability.

Figure 2. Team 3 Average Survey Responses

4.2.3 Code Analysis

Code Complexity: Moderate

Team 3 devised a game for two players in which each player would fire projectiles at the other. Similar to the classic game Worms, the player who successfully hit his opponent was declared the winner. The team gave the game a Super Mario Bros. theme, and players had the ability to fire "Bullet Bills" at a given speed and angle. Once fired, "Bullet Bills" moved in a trajectory according to the speed and angle at which they were fired.

Martin and Elliot were provided a simple prototype in which a cannon would fire simple rectangles that moved according to the laws of projectile motion. The prototype allowed the user to press the spacebar to start a meter for how fast the projectile would be fired. Team 3 required very little additional scaffolding in modifying their supplied prototype into a functional game. Because the prototype for their game included trajectory motion, the most complicated functionality for the game, Martin and Elliot were ultimately left with the task of making their game work for 2 players. The team had expressed and demonstrated high programming competency but made modifications that were only moderate by our classifications. For example, in implementing 2 players, the team chose to copy all of the classes that were necessary for the first player and rework them for the second player. While their solution was functional, it did not reflect the higher competency the team had generally reported in surveys and demonstrated in workshop lessons.

4.2.4 Competency

Table 6. Team 3 Perceived and Actual Competency

Avg. Survey Response (Perceived Competency) | Code Complexity (Actual Competency)
4.1 (High) | Moderate (Normal)

Team 3 consisted of two students who had prior experience with programming. The modifications they made did reflect some understanding of object-oriented programming, but they failed to employ concepts that reflected high competency, specifically class inheritance. Their work reflects a more concrete understanding of the code; they acknowledged the need for similar-but-not-same functionality for the second player and fulfilled that need by duplicating classes and making minor modifications instead of using the higher concept of abstraction. This behavior is more characteristic of normal programming competency as opposed to the high competency the team perceived of themselves.

4.3 Team 4 – GoalBuster

4.3.1 Team Profile

Team 4 comprised two newcomers to the GLITCH program, aliased Drew and Victor. Both Drew and Victor were quiet participants in the workshop, but the questions they asked reflected an avid interest in game development, programming, and computing in general. Neither had prior experience programming, but each worked hard to glean as much as they could from workshop lessons.

4.3.2 Survey Analysis

Perceived Competency: Normal

Drew and Victor responded to an average of 9 out of the 14 workshop surveys. The team's average survey responses are presented in Figure 3. The team appears to have been unsure about their programming abilities, as their collective average survey response was 3.3 (slightly above Neutral). However, one member (Victor) showed steady growth in average perceived ability as the workshop went on.

Figure 3. Team 4 Average Survey Responses

4.3.3 Code Analysis

Code Complexity: Moderate

Team 4 envisioned a soccer-themed game that had single- and two-player modes. In the single-player mode, the player would kick a free-floating ball at a series of goals, similar to the game Breakout. In two-player mode, each player had a ball that they would kick toward a series of goals arranged in the middle of the field. Much like the classic game Pong, the first player unable to save their ball by kicking it back toward the middle of the field would lose the game.

Drew and Victor were given simple prototypes of the games Breakout and Pong to aid them in developing their game. The instructor gave very general instructions as to what needed to be done but left the bulk of the decision-making to the team based on earlier interactions with the team. Each member took one of the two prototypes they were given and modified it to completion, adding graphics, sound, and new game-play features such as end-game scenarios.

The team made several game-play modifications to the original prototypes they were provided to complete the game they had devised. Unfortunately, the team was unable to integrate both games into a single game with a menu option for one- or two-player mode due to limitations with Greenfoot.

Drew and Victor each required additional scaffolding to complete the prototypes they had split among themselves. On the whole, the team required a moderate amount of assistance with mostly complex issues that were not specifically covered during the workshop.

The final rendition of the single-player game had some new functionality, but the most complex changes were covered with the instructor. Changes the team made to this game without the instructor's assistance – adding images and playing sounds where necessary – were of low complexity. The multiplayer game saw changes that were of a higher degree of complexity, but again, most of those changes were made with the instructor's assistance. Unassisted changes the students made were of low to moderate complexity, but they reflect a deep understanding of how different elements of the code interact. For example, the team added a class that acted as an invisible barrier between the two sides in the two-player mode so that an opposing player's ball could not cross into a player's side. The team had to determine how to add the barrier to the environment and have the ball bounce off the barrier on contact. Both topics were covered during workshop sessions, and the team appears to have successfully recognized and added the necessary code for this functionality.

4.3.4 Competency

Table 7. Team 4 Perceived and Actual Competency

Avg. Survey Response (Perceived Competency) | Code Complexity (Actual Competency)
3.3 (Normal) | Moderate (Normal)

Drew and Victor, who expressed the lowest perception of their programming abilities of all the teams, were the only team to produce 2 fully functional games! While they did require some assistance with very intricate problems, the code they produced independently reflects a deep understanding of how the different elements of the game were to interact. Survey results, which show the team as mildly neutral about their abilities, translated into the amount of programming assistance they needed and the complexity of the code they produced on their own.

5. DISCUSSION

Earlier, we posited that students began the workshop with misaligned perceived and actual competency, and through the workshop, students were able to reconcile their perceptions with their actual programming abilities, achieving normal competency (by our definition). Further, the programming tools involved play a heavy role in the reconciliation. We now discuss how our data show the alignment and how it relates to improved metacognition, the characteristics of Greenfoot that aid in the reconciliation, and the implications for working with normal competency in the long term.

5.1 Reconciling Perceived and Actual Competency

Our findings suggest that regardless of prior knowledge, by the end of the workshop the teams in our study perceived themselves as having close to normal competency, and they also demonstrated normal competency in their programming projects. We argue that the change from the students' original perceptions of their abilities, however high or low, toward a more neutral perception of their abilities is based not only on the increasing complexity of the material to which they were exposed, but also upon improved metacognition in regard to their programming abilities. Students were not only facing harder concepts, which accounts for some of the drop in their perceptions, but were also making the more general realization that while they have the capacity to program, they certainly have not mastered all the concepts needed to program effectively.

The students' final projects and ending survey averages confirm this position; Figures 1 – 3 each show an ending average of around 3.5 to 4 points indicating normal and above normal
94
perceptions. The teams we analyzed believed in their ability to program, but in the quality of the code the teams produced and in the content of the scaffolding each team needed, it was clear that teams knew the limits of their programming abilities.
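The kind of code the teams produced independently can be illustrated with a short sketch. The following is a hypothetical plain-Java reconstruction of the ball-movement logic a Breakout- or Pong-style game like Team 4's requires; the class and method names are invented for illustration, and the structure stands in for Greenfoot's Actor subclassing, which the students' actual code would have used.

```java
// Hypothetical sketch: ball movement with edge bouncing, the core
// mechanic of the Breakout/Pong prototypes. In Greenfoot, this class
// would extend Actor and act() would be called automatically each frame.
public class Ball {
    private int x, y;                 // position in pixels
    private int dx, dy;               // velocity applied each step
    private final int width, height;  // playing-field bounds

    public Ball(int x, int y, int dx, int dy, int width, int height) {
        this.x = x; this.y = y;
        this.dx = dx; this.dy = dy;
        this.width = width; this.height = height;
    }

    // One animation step: move the ball, then reverse direction and
    // clamp the position if it has crossed an edge of the field.
    public void act() {
        x += dx;
        y += dy;
        if (x < 0 || x >= width) {
            dx = -dx;
            x = Math.max(0, Math.min(x, width - 1));
        }
        if (y < 0 || y >= height) {
            dy = -dy;
            y = Math.max(0, Math.min(y, height - 1));
        }
    }

    public int getX() { return x; }
    public int getY() { return y; }
}
```

Even a sketch this small involves coordinating state, per-frame updates, and boundary conditions, which is consistent with our reading that the teams' independent code reflected real understanding of how game elements interact.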
Metacognition, the ability to reflect on one's current state of knowledge [13], is closely related to perceived and actual ability: an individual who is strongly metacognitive is able to accurately perceive the limits of their competency. At the beginning of our workshop, students' prior knowledge of programming affected their general perception of their competency, and they accordingly began with weak metacognitive skill, in that they might have overstated or understated their ability to program in general. As the workshop continued, students were able to see what they were capable of through workshop scenarios and in the final project. With this information, they could more accurately account for the knowledge they did and did not have, and thus their metacognitive ability improved.

5.2 Contributions from Greenfoot

Aligning perceived and actual programming competency can be achieved with virtually any programming tool or environment, but certain qualities in these tools best catalyze the reconciliation. Three qualities found in Greenfoot – immediate feedback, authenticity, and ease-of-access to concepts – make it particularly well suited for helping students build programming competency and dispel misconceptions about their actual ability.

5.2.1 Immediate Feedback

Much like Alice, Greenfoot provides immediate visual feedback in response to changes made in code. Students can directly map the code they have written to some sequence of actions performed on-screen. The contribution to reconciling perceived and actual competency is apparent: when students can immediately see the results of their demonstrations of what they know, they can immediately account for a misaligned perception of their ability. More importantly, immediate feedback allows for "tinkering," the ability to experiment with the code or the resulting program to arrive at some desired functionality. Through tinkering, students can experiment and attempt to fill in missing knowledge on their own. Greenfoot allows (and encourages) students to experiment with both the code and the resulting visual "world" and its "actors," and this helps students recover when they discover they lack competency in a concept.

5.2.2 Authenticity

For these students, authenticity had a large impact on their willingness to learn and on how seriously they took the material. Based upon the literature on thick authenticity [14], we define authenticity as having personal relevance, having relevance in the real world, and providing an opportunity to think as a member of a discipline; inauthenticity is the lack of any of these qualities. Authenticity has a subtle but certain impact on perceived competency; students who relate to the learning material and who find use for it beyond the learning environment are more motivated to learn and to assess and improve their knowledge. In particular, the students in our study were concerned that their learning had relevance outside the educational setting and with how applicable their learning was professionally. On one occasion, a student asked whether Greenfoot was used by professionals to build games. On another occasion, another student asked whether it was possible to use Greenfoot to build games for his mobile device. Fortunately, Greenfoot is authentic in that Java, its base language, is used in the real world and in that the Greenfoot environment is geared toward developing simulations and games, which were of personal relevance to our students. Typing Java (as opposed to clicking and dragging concepts with Alice) gives students a greater sense of authorship of and control over their programs, and because most professional programmers use text-based languages, students experience the benefit of doing what they feel is "real" work.

5.2.3 Ease-of-Access

Alice, the basis of the previous curriculum, with its drag-and-drop programming interface, had a strong focus on easy access to larger programming concepts – a focus that neither Jython nor Greenfoot quite parallels. However, Greenfoot does allow easy access to more complicated game-related concepts through its API, and in doing so it allows easier access to a very powerful and authentic language. Easy access to both the language and to programming concepts contributes to reconciling perceived and actual competency. Students often have negative views of programming based on cultural stereotypes and on their perceptions of programming as tedious, and these views create a barrier to entry for many of them [7]. By trivializing certain aspects of programming, many of these barriers to entry are dismantled. In turn, students are empowered to believe they can program without having to surrender to programming culture or struggle with tedious details.

The Greenfoot API hides enough detail about the game environment that students can focus on much larger concepts. For example, the Greenfoot API exposes a method to play a sound that requires only that the student supply the name of the sound file as a string argument. Once a student has placed the file into the appropriate folder on their workstation, they can focus on the logic of playing that sound at the appropriate time rather than on the details of how Java loads and interprets the sound file.

While Greenfoot does hide some detailed functionality, it neither supplants nor simplifies the Java language; students are still learning valid Java syntax and have full access to all of Java's core libraries.

5.3 Long-term Competency

We earlier gave three classifications for competency levels. With these three classifications come specific expectations of students' behaviors and attitudes toward programming over time. We argue that achieving and maintaining normal competency in the long term is the ideal for beginning students.

First, our goal is for each student to aspire to high programming competency. However, achieving high programming competency too soon can be detrimental to students' development of their programming abilities. With both high perceived and high actual ability, students might over time become overconfident in their abilities; they might take on the illusion that they have mastered programming when in fact they have much more to learn. For some students, this overconfidence might lead to disinterest or "boredom" with developing their programming ability, simply because they are not motivated to learn more.

On the other hand, settling at low competency in the long term is also detrimental to the development of programming ability. Students with low competency acknowledge large deficits in their knowledge and ability. Over time, students who continually conclude that they severely lack ability might be discouraged, deciding that they do not have the "talent" for programming or that programming ability is reserved for a special group of people, and they might dismiss computing as beyond or outside their personal capability.
Maintaining normal programming competency, in which students acknowledge their ability to program and realize the limits of their knowledge, is ideal for learning in the long term. Students who operate with normal competency over time will eagerly seek to increase their programming ability because they know programming is not beyond their scope, while also acknowledging the vast amount of knowledge they have yet to obtain. Greenfoot enables normal competency through authenticity, ease-of-access, and immediate feedback. Because Java is its base language, Greenfoot evades the limitations and inauthenticity found in Alice that might lead to a false sense of mastery. Because Greenfoot hides the details of some higher-order functionality, it evades the barriers to entry that might discourage programming. Finally, immediate feedback confirms the capacity to program, in that students must formulate ideas through code to get the feedback; it also reveals to students their strengths and weaknesses in ability.

6. CONCLUSION

Several factors serve as barriers to entry into computing for students. In particular, students might have incorrect assumptions about their ability to program based on stereotypes and their own experiences. Computing education programs like GLITCH Game Testers must take into account the ability of the computing curriculum to dispel incorrect assumptions about programming abilities and to encourage students to be active participants in computing. The Greenfoot learning environment was particularly well suited for the game-loving African American male students in our study because of its authenticity, easy access to high-level constructs, and immediate visual feedback. Authenticity and an introduction to the breadth of computer science were of particular importance to this audience, and Greenfoot showed promise for keeping students engaged in learning about computing over time. Similar to lessons learned from the Media Computation approach to teaching CS [3], we found that matching students' interests and values with the learning environment is critical to recruiting the computing professionals of tomorrow.

7. FUTURE WORK

Because this was the first trial of the new game development curriculum in the GLITCH program, our results, while promising, are preliminary. Several iterations of this curriculum will need to be conducted, both within the GLITCH program and in other settings, to discover whether our results generalize to a larger set of students. Our methods of analysis are also largely qualitative. More stringent guidelines and tools for analyzing program code both qualitatively and quantitatively would better determine students' actual programming competencies.

8. ACKNOWLEDGMENTS

This work was supported by NSF #0837733. Thank you to The Arthur Blank Foundation, Yahoo! Games, Microsoft Xbox, and EA Tiburon for their financial and in-kind support of the GLITCH Game Testers program.

9. REFERENCES

[1] DiSalvo, B. and Bruckman, A. 2011. From interests to values. Commun. ACM, 54, 8 (Aug. 2011), 27-29. DOI= http://dx.doi.org/10.1145/1978542.1978552

[2] Cooper, S., Dann, W. and Pausch, R. 2003. Teaching objects-first in introductory computer science. In Proceedings of the 34th SIGCSE Technical Symposium on Computer Science Education (Reno, NV, 2003). ACM, New York, NY, 191-195. DOI= http://dx.doi.org/10.1145/611892.611966

[3] Guzdial, M. 2003. A media computation course for non-majors. SIGCSE Bull., 35, 3 (Sept. 2003), 104-108. DOI= http://dx.doi.org/10.1145/961290.961542

[4] DiSalvo, B., Bruckman, A. and McKlin, T. 2011. Drag and Drop or Text-based?: Exploring Introductory Languages in Informal CS Learning. In GVU Center Technical Papers. Georgia Institute of Technology, Atlanta, GA. GIT-GVU-1111-1.

[5] Barron, B. 2004. Learning Ecologies for Technological Fluency: Gender and Experience Differences. Journal of Educational Computing Research, 31, 1 (Jan. 2004), 1-36.

[6] Margolis, J. and Fisher, A. 2003. Unlocking the Clubhouse: Women in Computing. MIT Press.

[7] Schulte, C. and Knobelsdorf, M. 2007. Attitudes towards computer science-computing experiences as a starting point and barrier to computer science. In Proceedings of the 3rd International Workshop on Computing Education Research (Atlanta, GA, 2007). ACM, New York, NY, 27-38. DOI= http://dx.doi.org/10.1145/1288580.1288585

[8] DiSalvo, B. J. and Bruckman, A. 2009. Questioning video games' influence on CS interest. In Proceedings of the 4th International Conference on Foundations of Digital Games (Orlando, FL, 2009). ACM, New York, NY, 272-278. DOI= http://dx.doi.org/10.1145/1536513.1536561

[9] Bayliss, J. D. 2009. Using games in introductory courses: tips from the trenches. SIGCSE Bull., 41, 1 (Mar. 2009), 337-341. DOI= http://dx.doi.org/10.1145/1539024.1508989

[10] Rankin, Y., Gooch, A. and Gooch, B. 2008. The impact of game design on students' interest in CS. In Proceedings of the 3rd International Conference on Game Development in Computer Science Education (Miami, FL, 2008). ACM, New York, NY, 31-35. DOI= http://dx.doi.org/10.1145/1463673.1463680

[11] Kölling, M. 2009. Introduction to Programming with Greenfoot: Object-Oriented Programming in Java with Games and Simulations. Prentice Hall, Boston, MA.

[12] Sommerville, I. 2007. Software Engineering. Addison-Wesley, New York, NY.

[13] Bransford, J. D., Brown, A. L., and Cocking, R. R. (Eds.). 2000. How Experts Differ From Novices. In How People Learn: Brain, Mind, Experience, and School - Expanded Edition. National Academy Press, Washington, DC, 29-50.

[14] Shaffer, D. W. and Resnick, M. 1999. "Thick" authenticity: new media and authentic learning. J. Interact. Learn. Res., 10, 2 (Dec. 1999), 195-215.