Coding and Computational Thinking

IDC 2017, June 27–30, 2017, Stanford, CA, USA

Using Eye-Tracking to Unveil Differences Between Kids and Teens in Coding Activities

Sofia Papavlasopoulou, Norwegian University of Science and Technology, Trondheim, Norway, [email protected]

Kshitij Sharma, Faculty of Business and Economics, University of Lausanne, and École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, [email protected]

Michail N. Giannakos, Norwegian University of Science and Technology, Trondheim, Norway, [email protected]

Letizia Jaccheri, Norwegian University of Science and Technology, Trondheim, Norway, [email protected]

ABSTRACT

Computational thinking and coding are gradually becoming an important part of K-12 education. Most parents, policy makers, teachers, and industrial stakeholders want their children to attain computational thinking and coding competences, since learning how to code is emerging as an important skill for the 21st century. Currently, educators are leveraging a variety of technological tools and programming environments, which can provide challenging and dynamic coding experiences. Despite the growing research on the design of coding experiences for children, it is still difficult to say how children of different ages learn to code, and to identify differences in their task-based behaviour. This study uses eye-tracking data from 44 children (here divided into “kids” [age 8–12] and “teens” [age 13–17]) to understand the learning process of coding in a deeper way, and the role of gaze in learning gain across the different age groups. The results show that kids are more interested in the appearance of the characters, while teens exhibit more hypothesis-testing behaviour in relation to the code. In terms of collaboration, teens spent more of the task time attending to the same areas as their teammates than did kids (higher gaze similarity). Our results suggest that eye-tracking data can successfully reveal how children of different ages learn to code.

Author Keywords

Eye-tracking; maker movement; coding; teens; kids.

ACM Classification Keywords

K.3.2 Computer Science Education; H.5.m Information interfaces and presentation (e.g., HCI): Miscellaneous

INTRODUCTION

Nowadays, students need to acquire skills and digital competences in accordance with 21st-century needs. Computational thinking, problem-solving, and coding have become an integral part of our world. Most parents want their child’s school to offer coding and problem-solving competences [12], and most people believe that learning how to code is as important as reading, writing, and math [28]. In light of this growing recognition, ACM, Code.org, the Computer Science Teachers Association, the Cyber Innovation Center, and the National Math and Science Initiative have developed conceptual guidelines for coding education. The K–12 Computer Science Framework was developed to inform the development of standards and curriculum, build capacity for teaching computer science, and implement computer science pathways (K–12 CS Framework, www.k12cs.org). Many computing platforms and tools exist to support computational thinking and coding learning experiences [5]. However, despite the apparent growing body of research in the area, there is limited evidence to support the design of appropriate learning experiences to allow children to learn how to code.

Currently, educators are leveraging a variety of technological tools and programming environments – such as Alice, Scratch, Greenfoot, and Kodu – which can set challenging and dynamic learning experiences in educational contexts. Since Papert’s [31] constructionist framework was created, different practices, models, and strategies have represented new ways in which computers can be used in student-centred design learning experiences. Construction-based learning has been widely studied over the years in various pedagogical contexts, in both formal

and informal education [30]. Coding is a key benefit, enabling students not only to enhance their problem-solving skills but also to change their attitudes toward computing by creating computational artefacts with a practical, personal, or societal intent [7].

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. IDC '17, June 27-30, 2017, Stanford, CA, USA © 2017 Copyright is held by the owner/author(s). Publication rights licensed to ACM. ACM 978-1-4503-4921-5/17/06…$15.00 http://dx.doi.org/10.1145/3078072.3079740

Several efforts have been made to broaden participation in coding and introduce computational literacy to children. Giannakos and Jaccheri [13] focused on how physical construction with recycled materials can help in designing creative coding activities, while Buechley et al. [6] developed the LilyPad Arduino to make coding an approachable and natural activity for girls. Teaching coding to everyone from a young age also involves the use of programmable objects and interactive robots in learning experiences, starting as early as kindergarten [10]. Combining physical fabrication and coding engages children with programming concepts and practices (e.g., testing and debugging) [7][23]. Previous research describes various approaches to motivating and engaging students with coding through making and constructing in a fun and enjoyable way [40]. Children can learn coding through various construction-based activities and can use current powerful learning environments [37] such as Alice, Kodu, Scratch, and Scratch Junior, as well as other visual block-based coding environments that can improve children’s understanding of concepts like loops and variables [27].

Despite the growing research on the design of construction-based coding experiences for children [30], it is still difficult to say what the main needs of different age groups are. We have seen systems utilizing various affordances to support different age groups (e.g., Scratch and Scratch Junior), but research on how different age groups use those tools is still in its infancy. Current research has focused on traditional qualitative and/or quantitative measurements, such as observations, interviews, tests, and surveys, to investigate children’s engagement, experience, and learning [30]. Nevertheless, further research should also use other measures to better understand the way in which children of different ages learn how to code, along with their task-based behaviour, in order to inform the design of coding experiences that improve the learning process. To this end, the current study uses physiological (eye-tracking) data to understand the learning process of coding in a deeper way, along with the role of gaze in the learning gain of the different age groups.

In this paper, we provide an overview of our construction-based coding activity and present an evaluation of the activity carried out by 44 children (here divided into “kids” [age 8–12] and “teens” [age 13–17]). During the coding activity, we recorded the gaze of the children and measured their learning gain in order to answer the following research questions:

• How is children’s gaze associated with their learning gain during coding?

• What are the differences between kids’ and teens’ gaze during coding?

The rest of the paper is structured as follows: the next section provides an overview of the related work; the third section presents the methodology employed in this study; and the fourth section presents the empirical results. The fifth section discusses the results, the limitations of the study, and recommendations for future research.

RELATED WORK

Construction-based coding activities to support children’s learning

The constructionist learning approach provides children with the freedom to learn by actively engaging in creating meaningful projects. More precisely, constructionist learning is a process of exploring and reflecting, with the ultimate goal of learners constructing their own knowledge and identifying with it, instead of being passive recipients [31]. Nowadays, educational strategies incorporate elements from Papert’s constructionism [31], which holds that students learn deeply during activities that require them to apply the knowledge obtained by executing tasks. This form of instruction has been described with various labels, such as learning by doing, learning by designing, and project-based learning, to name a few; these concepts are applied both inside and outside the classroom [21]. In the recent special issue of Child–Computer Interaction on digital fabrication in education, Iversen et al. [18] pointed out the importance of design-based activities for teaching digital literacy. Computer programming, mainly in relation to game programming, is considered another form of constructionist learning environment, one that promotes coding, problem-solving, critical thinking, computational thinking, and collaborative skills [30; 40].

Coding is not only a fundamental skill within computer science, but also a demonstration of computational competences [16] – a way to support computational thinking and develop students’ higher-order thinking skills. Kids as young as 4–6 can build and code simple robot-based projects, learn ideas from engineering, technology, and coding, and thereby enhance their computational thinking skills [3]. Visual programming languages open the potential for a broader and younger group of students to learn programming concepts [40]. Moreover, construction-based coding activities (e.g., making, modding, hacking) can help students aged 9–10 to engage with more complex programming languages, such as Java. Various studies have shown the importance of combining coding and physical fabrication to engage students with complex programming concepts (e.g., loops, conditionals, events) and practices (e.g., remixing, testing, debugging) [7][23]. A study [22] found that, through digital storytelling in a school setting, children demonstrated competence in several key programming concepts, such as event-driven programming and synchronization. In their project, an early-childhood


robotics curriculum, TangibleK, fostered multiple skills, including problem representation, systematic generation and implementation of solutions, debugging, and strategies to approach difficult problems. Denner et al. [7] reported results from an analysis of 108 games created by middleschool girls to show that it is feasible to learn programming concepts when designing and coding activities are seamlessly combined.

There is some variation in the ways in which students handle coding tasks and how they manage concepts and practices. For example, novices tend to approach programs line by line, rather than in blocks [38], and are not persistent in debugging their programs [11]. In their study of middle-school girls, Denner et al. [7] reported that the students rarely used “variables” to handle coding processes, and had difficulty joining pieces of code to successfully complete an operation. Students aged 11 to 12 who made their own computer games using software called Adventure [37] spent most of their time adding new content to their code, rather than changing what they had already done, and girls spent more time than boys writing the dialogue for their games. The most popular practices among students of almost the same age (11–14) were reusing and remixing existing code, and addressing problems in an incremental and iterative way [23]. Kids aged 5–6 either thought carefully and tried to predict results before trying commands, or tried out different commands to receive immediate feedback [10].

Capturing children’s coding knowledge and learning

Many studies have collected the actual code created in children’s projects and then analysed it using Brennan and Resnick’s framework for computational thinking [24], Bloom’s modified taxonomy or the SOLO taxonomy [4], or other deductive coding schemes to evaluate the projects [7] and understand how children learn coding [25]. The “fairy assessment”, which is based on the Alice programming environment, requires students to modify and add to existing code in order to assess their understanding of algorithm abstraction and code. Other ways of capturing children’s progress and understanding include multiple-choice instruments or quizzes that measure the learning of computer science concepts, or even traditional assessments such as tests and grades [8].

Capturing computational thinking skills and the way in which children learn coding is challenging, and more objective mechanisms are needed to illuminate children’s understanding and knowledge gain regarding computational concepts and other computational thinking skills, such as debugging and problem decomposition [15]. Assessments utilizing coding blocks (akin to Parsons puzzles), in which students have to snap blocks into the correct order, are widely used in eBooks [32], as are assessments in which snippets of basic code are used to test whether children can identify the core constructs [9]. Thus, the most common method of capturing learning gain in computational thinking and coding is knowledge acquisition tests with combined types of questions [15].

Eye-tracking in coding

Collaborative eye-tracking has been used to explain the different cognitive mechanisms of coding among adult programmers; to the best of our knowledge, however, it has not been used to explain children’s coding mechanisms. Existing eye-tracking studies have examined expertise [20], collaboration quality [43], learning outcome [19], and task-based performance [29].

In the domain of coding activities, Pietinen et al. [34] provided a new metric for measuring joint visual attention in a co-located pair-programming setup, using the number of overlapping fixations and their durations to assess the quality of collaboration. In another study, Pietinen et al. [33] presented a possible eye-tracking setup for co-located pair-programming and outlined some of the problems regarding setup, calibration, data collection, validity, and analysis. Bednarik and Tukiainen [2] examined the coordination of different program representations in a program-understanding task, treating the different representations as different areas of interest (AOIs); experts concentrated more on the source code than on the other representations. Bednarik et al. [1] related the information types posited by Good and Brna [14] to the gaze among four AOIs (code, output, control panel, and animation of the program), concluding that the presence of an information type (e.g., high-level or low-level) in a comprehension summary does not necessarily confirm that the target program has been correctly comprehended.

Romero et al. [39] compared the use of different program representation modalities (propositional and diagrammatic) in an expert-novice debugging study, in which the experts showed a more balanced shift of focus among the modalities than the novices. Sharif et al. [42] emphasized the importance of code scan time in a debugging task, concluding that experts perform better and have shorter code scan times than non-experts. Hejmady and Narayanan [17] compared gaze shifts between different AOIs in a debugging task, concluding that good debuggers switched between the code and the expression evaluation and variable windows, rather than between the code and the control-structure and data-structure windows.

In our study, we collected eye-tracking data to capture the gaze of children while they were accomplishing instructional tasks during our coding activity. In addition, to examine the association between learning gain and gaze, we administered a knowledge acquisition test. The study presented in this paper attempts to fill in the gap regarding


use of eye-tracking data to investigate how children learn to code.

METHODOLOGY

Coding activity

We designed and implemented a coding activity in conjunction with an initiative organized at the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. The workshop activities are based on the constructionist approach, one of whose main principles is learning by making. The workshop was conducted in a largely informal setting, as an out-of-school activity, and lasted four hours in total. Various student groups, ranging from 8 to 17 years old, were invited to NTNU’s specially designed rooms for creative work to interact with digital robots and create games using Scratch and the Arduino hardware platform. Specifically, an Arduino was attached to each digital robot to connect it to the computer, and an extension of Scratch called Scratch for Arduino (S4A) provided the extra blocks needed to control the robots. The Scratch programming language uses colourful blocks grouped into categories (motion, looks, sound, pen, control, sensing, operators, and variables), with which children can develop stories, games, and any type of animation. In general, the children who attended the workshop worked collaboratively in triads or dyads (depending on the number of children). The workshop was designed for children with no (or minimal) previous experience in coding. During the workshop, student assistants were responsible for supporting each team as needed; approximately one assistant observed and helped one or two teams. Three researchers were also present throughout the intervention, observing, taking notes, and overseeing the overall execution of the workshop. The workshop had two main sections.

Interacting with the robots: In the first section, the children interacted with digital robots made by an artist (using recycled materials). The different robots were placed next to the computers (one for each team). When the children entered the room, an assistant welcomed them, asked them to be seated, and briefly presented an overview of the workshop. The assistants then advised the children to pay attention to the paper tutorial and the worksheet placed on the desks (one for each student). First, the children filled in the worksheet to answer questions regarding the exact placement and the number of sensors and lights on the robots. The tutorial contained instructions with examples and pictures (Figure 1, top) similar to the robots they were using (Figure 1, bottom). The examples had little text and many images, and described exactly how the children could interact with the robots. The children completed a series of simple loops that controlled the robots and made them react to the environment with visual effects (such as turning on a light when sensors detected that the ambient light was below a certain threshold). The children could touch and play with the robots but not change any of their parts. Although the duration of this session differed for each team, it lasted between 45 minutes and one and a half hours, and ended with a break before the next session.

Figure 1. Examples of the tutorial materials (top) and the robots the children interacted with (bottom)

Creating games using Scratch: This session focused on the creative implementation of simple game development concepts using Scratch. All children received another paper-based tutorial containing examples and visualizations to help them ideate their own game. The tutorial comprised simple text explanations and included basic computational thinking concepts and possible loops that the children were expected to use in their own games (Figure 2, left). First, the assistants advised the children to concentrate on understanding the idea of the game, discuss it with their team members, and then create a draft storyboard. The children then developed their own game by collaboratively designing and coding in Scratch. To accelerate the children’s progress, they were given existing game characters and easy loops. While the children worked on their projects, help was provided whenever they asked for it, and complex programming concepts were introduced at an individual level according to their relevance to each project. The children created their games step by step, iteratively coding and testing them. After completing the games, all teams reflected on and played each other’s games (Figure 2, right). This section lasted approximately three hours.


Figure 2. An example of the children’s paper tutorial (left) and an example of a developed game (right)

Sampling

The study was conducted in a dedicated lab space at the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. Over a two-week period during Autumn 2016, 44 children aged 8–17 participated in our coding activity. The sample comprised 12 girls (mean age: 12.64, S.D.: 2.838) and 32 boys (mean age: 12.35, S.D.: 2.773). Five workshops took place over the two weeks, following the coding activity described in the previous section. Our activities were organized for kids (8–12 years old) and teens (13–17 years old); the teens were children from local schools whose teachers/schools applied to attend our coding activity, while the kids were recruited from local coding clubs, which are after-school groups in which youngsters can interact and learn how to code. All of the participants were coding novices. When the participants were selected, a researcher contacted their teachers and parents in order to obtain the necessary consent from both the child and the legal guardian for the data collection.

Measures

As noted above, we divided our sample into kids (8–12 years) and teens (13–17 years). The main reason for this categorization is that after the age of 12, children’s capacity for complex thought improves and they develop a stronger sense of right and wrong (www.cdc.gov/ncbddd/childdevelopment/facts.html). The children completed pre- and post-knowledge acquisition tests consisting of nine coding questions of increasing difficulty. The questions were adapted from a previous study [15] and followed instructors’ suggestions. The children took approximately 10 minutes to finish the tests, which were paper-based and manually graded by the researcher. Figure 3 shows two sample questions from the test.

Figure 3. Sample questions used in our post-test

In our study, we calculated the relative learning gain (RLG) as defined by Sangin [41] (see the formula below). RLG is more accurate than raw learning gain, since it takes into account how much harder it is to gain knowledge when the learner is already very knowledgeable in a subject. In this work, RLG is the dependent variable.

$$
RLG =
\begin{cases}
\dfrac{\text{Posttest} - \text{Pretest}}{\text{Max. in pretest} - \text{Pretest}}, & \text{if } \text{Posttest} \ge \text{Pretest} \\[6pt]
\dfrac{\text{Posttest} - \text{Pretest}}{\text{Pretest}}, & \text{if } \text{Posttest} < \text{Pretest}
\end{cases}
$$

During the coding activity, the children worked in triads (or, in some exceptions, in dyads) and wore eye-tracking glasses. Four SMI RED 250 eye-trackers and one Tobii mobile eye-tracker, all working at 60 Hz, were used; a sampling rate of 60 Hz is considered sufficient for usability studies [35]. For this research, we selected the following gaze measures:

Time spent on each AOI: We divided the whole visual field into six AOIs – five on the screen, with the robot as the sixth. We used specially made QR codes to identify the robots and the area around them (Figure 4).

Figure 4. Examples of the robots with the QR codes
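The piecewise RLG definition above can be sketched in code as follows (a minimal illustration under our own naming; the function and argument names are not from the paper):

```python
def relative_learning_gain(pretest: float, posttest: float, max_score: float) -> float:
    """Relative learning gain (RLG), following the piecewise definition above.

    Gains are normalized by the room left for improvement (max_score - pretest);
    losses are normalized by the pretest score itself. Assumes pretest < max_score.
    """
    if posttest >= pretest:
        return (posttest - pretest) / (max_score - pretest)
    return (posttest - pretest) / pretest

# A learner scoring 4/9 on the pre-test and 7/9 on the post-test:
print(relative_learning_gain(pretest=4, posttest=7, max_score=9))  # 0.6
```

Note that normalizing gains by the remaining headroom is exactly what makes RLG fairer to learners who start with high pretest scores.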

The five AOIs on the screen were as follows (see Figure 5):

1. Tools: This area of the screen contains a general categorization of the available commands, for example, those controlling motion, looks, sound, and variables.
2. Command: This area contains all the available commands within the currently selected tool.
3. Scripts: This is the area of the screen in which the coding task is performed.
4. Output: This area shows participants the output of their scripts.
5. Sprites: This area controls the aesthetics of the program. The participants can change the appearance of the animated character using the characters available in this part of the interface.
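Given fixations labelled with these AOIs, the per-AOI time proportions and the AOI-to-AOI transition counts used later in the analysis can be derived roughly as follows (a sketch; the fixation-log format and function names are our own illustration, not the authors' pipeline):

```python
from collections import Counter

AOIS = {"tools", "command", "scripts", "output", "sprites", "robot"}

def aoi_proportions(fixations):
    """Proportion of total fixation time spent in each AOI.

    `fixations` is a list of (aoi_label, duration_ms) tuples, in temporal order.
    """
    time_per_aoi = Counter()
    for aoi, duration in fixations:
        time_per_aoi[aoi] += duration
    total = sum(time_per_aoi.values())
    return {aoi: time_per_aoi[aoi] / total for aoi in AOIS}

def transition_counts(fixations):
    """Count transitions between consecutive fixations that land in different AOIs."""
    transitions = Counter()
    labels = [aoi for aoi, _ in fixations]
    for a, b in zip(labels, labels[1:]):
        if a != b:
            # Direction is ignored: scripts->output and output->scripts count together.
            transitions[frozenset((a, b))] += 1
    return transitions

log = [("command", 300), ("scripts", 500), ("output", 200), ("scripts", 400)]
print(transition_counts(log)[frozenset(("scripts", "output"))])  # 2
```

Counting undirected pairs matches how the paper reports transitions (e.g., "scripts.output" covers both directions); a directed variant would simply key on the ordered tuple instead.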

Figure 5. The five AOIs of the Scratch interface

Transitions among AOIs: We also computed the transitions from one AOI to another. This helped us to understand the temporal relationship between the children’s gaze patterns and to depict the coding process used by the participants. For example, frequent transitions between scripts and output, or scripts and robot, depict the typical behaviour of hypothesis verification: the participants made a small change in the program based on a certain hypothesis about the output or the robot’s movement; once they had observed the output or the robot’s behaviour, either their hypothesis was confirmed and they moved on to the next step in coding, or they modified the program to re-verify their hypothesis. This behaviour results in a high number of transitions between the scripts and the output/robot. Based on the literature, we consider only three types of transitions for this analysis. Hejmady and Narayanan [17] showed that experts shift their attention between the code and the output more than novices do; this is why we chose to compare the gaze transitions between the scripts and the robot/output. The third type of transition included in our analysis is that between the commands and the scripts areas; these transitions reflect the children’s thinking process of “what comes next in the code?”

Gaze similarity: We computed gaze similarity as the proportion of time all participants were looking at the same AOI within a time window of 4 seconds. For dyads, this measure is equivalent to the cross-recurrence proposed by Dale et al. [36]. For triads, we extend the concept of cross-recurrence (originally defined for two temporal scales) to three-dimensional cross-recurrence.

Data Analysis

To answer our research questions regarding differences between kids and teens, we split the sample using the age of 12 as a threshold. First, we conducted a one-way analysis of variance (ANOVA) to test any potential differences between the children’s age group and their RLG. Second, a one-way ANOVA was conducted to examine any potential difference in the time spent in the various AOIs between kids and teens. Third, another one-way ANOVA was conducted to examine any potential difference in the three AOI transitions (i.e., scripts and output; scripts and commands; scripts and robot) between kids and teens. Finally, we conducted one more ANOVA to investigate any potential difference between teens’ and kids’ gaze similarity. We did not assume equality of variance across groups for any of the ANOVAs. In the second group of tests, we wanted to identify any potential correlations between the children’s gaze and their learning gain. That is, we investigated, first, any potential correlation between gaze similarity and RLG; second, between the six AOIs and RLG; and last, between the three AOI transitions and RLG. Pearson correlation was used for all three of these tests.

RESEARCH FINDINGS

A one-way ANOVA was conducted to compare the kids’ RLG (M1=0.00, SD1=0.43) and the teens’ RLG (M2=0.35, SD2=0.56). The results showed a statistically significant difference (F[1, 36.13]=4.07, p=.05); the teens outperformed the kids. To examine the difference in the time spent in the six AOIs between kids and teens, we conducted a one-way ANOVA with age group as the independent variable (kids or teens) and the six AOIs as dependent variables (commands, output, robot, scripts, sprites, tools). As can be seen in Table 1, the teens spent more time on scripts, output, and commands than the kids, while the kids spent more time on sprites than did the teens. There was no significant difference in the time spent on tools and the robot.

AOIs     | Kids Mean (SD) | Teens Mean (SD) | Sig.
Commands | 0.13 (0.04)    | 0.17 (0.06)     | 4.20*
Output   | 0.14 (0.06)    | 0.18 (0.06)     | 4.10*
Robot    | 0.18 (0.06)    | 0.17 (0.08)     | 0.27
Scripts  | 0.16 (0.06)    | 0.20 (0.06)     | 5.00*
Sprites  | 0.22 (0.08)    | 0.13 (0.05)     | 14.22***
Tools    | 0.16 (0.06)    | 0.15 (0.08)     | 0.02

Significance level: ***p < .001; **p < .01; *p < .05
Table 1. Differences in the proportions of time spent in AOIs between kids and teens (ANOVA).

From Figure 6, it can be observed that the kids spent most of their time in the sprites area of the screen, while the teens focused on the scripts area. It is also interesting that the teens spent the least amount of time in the sprites area, while the kids spent the least in the commands area. In general, Figure 6 suggests that as kids become teens, their attention moves from the sprites area to other areas more relevant to text-based coding, such as scripts, output, and commands.
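The gaze-similarity measure used in these comparisons (the proportion of 4-second windows in which every group member attends to the same AOI) can be sketched as follows; the windowing scheme and data layout are our own assumptions, and this simplification does not reproduce the full cross-recurrence analysis of the paper:

```python
from collections import Counter

def gaze_similarity(gaze_by_member, window=4):
    """Proportion of `window`-second windows in which all members look at the same AOI.

    `gaze_by_member` is a list (one entry per dyad/triad member) of equal-length
    lists giving each member's dominant AOI per second.
    """
    n_seconds = len(gaze_by_member[0])
    windows = same = 0
    for start in range(0, n_seconds - window + 1, window):
        windows += 1
        # The AOI each member looked at most within this window.
        dominant = [
            Counter(member[start:start + window]).most_common(1)[0][0]
            for member in gaze_by_member
        ]
        if len(set(dominant)) == 1:  # everyone attended to the same AOI
            same += 1
    return same / windows

pair = [
    ["scripts"] * 4 + ["output"] * 4,
    ["scripts"] * 4 + ["sprites"] * 4,
]
print(gaze_similarity(pair))  # 0.5
```

Because the function only asks whether all members agree within a window, the same code covers dyads and triads, mirroring how the paper extends cross-recurrence from pairs to groups of three.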

Figure 7. Proportions of time kids and teens spent in the three transitions from one AOI to another

A one-way ANOVA was conducted to compared the kids’ gaze similarity (M1=0.006, SD1=0.003) and the teens’ gaze similarity (M2=0.026, SD2=0.02). The results showed a statistically significant difference (F[1,11.23]=9.90, p=.009). This means that teens have more gaze similarity compared to kids. Regarding the association between gaze similarity and RLG, by applying Pearson’s correlation we observed a positive and significant correlation between the gaze similarity and the average RLG for the groups (r(14)=0.72, p=.001). Thus, the groups with high gaze similarity also had a high average RLG (see also Figure 8).

Figure 6. Proportion of time children and teens spent in each of the six AOIs

Transitions from one AOI to another help us to understand the temporal relationship between the children's gaze patterns. For example, a transition from commands to scripts, or from scripts to commands, may reflect the student's thinking process while figuring out how to extend the program's instruction list or which command will make the scripts executable. Likewise, back-and-forth transitions between scripts and output, or scripts and robot, may reflect the children's debugging behaviour. To examine the difference between kids and teens in the time spent on the three transitions, we conducted a one-way ANOVA with age group as the independent variable (kids or teens) and the three transitions as the dependent variables (scripts.command, scripts.output, and scripts.robot). As Table 2 and Figure 7 show, the teens had higher proportions of transitions between scripts and commands, scripts and output, and scripts and robot than the kids.

Transitions        Kids Mean (SD)   Teens Mean (SD)   Sig.
scripts.command    0.09 (0.06)      0.14 (0.07)       5.85**
scripts.output     0.07 (0.05)      0.12 (0.06)       6.51**
scripts.robot      0.09 (0.06)      0.15 (0.08)       6.15**

Significance level: ***p< .001; **p< .01; *p< .05
Table 2. Differences in the proportions of transitions between AOIs between kids and teens (ANOVA)

Figure 8. Gaze similarity's association with RLG (left) and gaze similarity between kids and teens (right)

To identify the correlations between RLG and the six AOIs, we used Pearson's correlation coefficient, which quantifies the strength of the relationship between two variables. The test verified relatively strong relations for five of the six AOIs, as indicated in Table 3: children who spent more time on scripts, output, and commands had high RLG, whereas children who spent more time on sprites and tools attained low RLG.
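The transition measures can be derived directly from an AOI-labelled fixation sequence. The sketch below is illustrative: the AOI names and the normalisation (each pair's count over all transitions) are assumptions, not necessarily the paper's exact computation.

```python
# Sketch: counting back-and-forth transitions between AOI pairs from an
# AOI-labelled fixation sequence. AOI names and the normalisation are
# illustrative assumptions, not the paper's exact computation.
from collections import Counter

def transition_proportions(fixations):
    """Map each unordered AOI pair to its share of all AOI transitions."""
    # Collapse runs of consecutive fixations on the same AOI
    seq = [aoi for i, aoi in enumerate(fixations)
           if i == 0 or aoi != fixations[i - 1]]
    # Count unordered pairs, so scripts->output and output->scripts merge
    pairs = Counter(frozenset(p) for p in zip(seq, seq[1:]))
    total = sum(pairs.values())
    return {tuple(sorted(k)): n / total for k, n in pairs.items()}

fixations = ["scripts", "commands", "scripts", "scripts",
             "output", "scripts", "output", "sprites"]
props = transition_proportions(fixations)
```

Counting unordered pairs captures the back-and-forth character of debugging behaviour described above, since a check-then-fix cycle produces transitions in both directions between the same two AOIs.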

AOI    Scripts   Output   Robot   Sprite   Tools    Commands
RLG    0.43**    0.30*    -0.18   -0.35*   -0.31*   0.35*

Significance level: ***p< .001; **p< .01; *p< .05
Table 3. Pearson's correlation coefficient between RLG and the different AOIs (N=44)
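A Table-3-style analysis correlates each AOI's dwell-time proportion with RLG across participants. The sketch below uses synthetic data; only the procedure, not the numbers, mirrors the analysis in the text.

```python
# Sketch: correlating per-AOI dwell-time proportions with relative
# learning gain (RLG) across participants. The data are synthetic;
# only the procedure mirrors the paper's Table 3 analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 44  # number of participants, as in the study
rlg = rng.uniform(0, 1, n)

# Synthetic dwell proportions: 'scripts' tracks RLG, 'sprites' opposes it
dwell = {
    "scripts": 0.3 * rlg + rng.normal(0, 0.05, n),
    "sprites": -0.3 * rlg + 0.4 + rng.normal(0, 0.05, n),
}

# One Pearson test per AOI, as in Table 3
table = {aoi: stats.pearsonr(prop, rlg) for aoi, prop in dwell.items()}
for aoi, (r, p) in table.items():
    print(f"{aoi:8s} r={r:+.2f} p={p:.3f}")
```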

To identify the correlations between RLG and the three transitions (i.e. scripts and output, scripts and commands, scripts and robot), we again used Pearson's correlation coefficient. The test verified relatively strong relations for all three transition variables, as indicated in Table 4: children with more scripts.command, scripts.output, and scripts.robot transitions had high RLG.

Transition   scripts.command   scripts.output   scripts.robot
RLG          0.38**            0.37**           0.39**

Significance level: ***p< .001; **p< .01; *p< .05
Table 4. Pearson's correlation coefficient between RLG and the three transitions (N=44)

DISCUSSION AND CONCLUSIONS

In this section, we discuss the results of our study, in which children (8–17 years old) performed coding tasks in groups (dyads and triads) while their gaze was recorded and their RLG measured. For the purposes of this study, we divided the sample into kids (8–12 years) and teens (13–17 years) to analyse differences in gaze patterns and RLG across age groups. In general, the teens outperformed the kids in terms of RLG. The key motivation behind our contribution was to establish the relations between gaze, RLG, and age group, and we identified several key differences in the gaze patterns of kids and teens that help explain why the teens attained higher RLG from the coding tasks.

First, the teens spent more time looking at the scripts, output, and commands AOIs, while the kids spent more time on the sprites AOI. The sprites control the aesthetic part of the problem at hand, for example what the main animated character or the different costumes look like. Spending more time on the appearance of the output proved detrimental to the kids' RLG. The scripts, output, and commands, in contrast, control the actual functionality of the coding environment and are the main areas of attention in the coding process: the coder must choose the appropriate command, add it to the scripts area, and observe the outcome of the executed code. Our results showed that the teens, who spent more time on these areas, attained higher RLG. In addition, we found positive and significant correlations between RLG and the proportion of time spent on the scripts, output, and commands AOIs, and a negative and significant correlation between RLG and the proportion of time spent on the sprites. In a study by Lee et al. [26], all participants aged 10–12 spent significant time on aesthetics; however, the authors identified gender differences in the time spent: girls spent more time on aesthetics and also tried harder to balance aesthetics with technical functionality.

Second, the teens had a higher number of transitions between scripts and output, commands, and robot compared to the kids. A higher number of transitions between scripts and output indicates behaviour driven either by debugging or by a desire to verify a hypothesis, and a higher number of transitions between scripts and robot shows similar behaviour. For example, moving back and forth between scripts and output might result from frequent changes in the code and a need to check the output. If the output matches the student's hypothesis after executing the code, the student moves on to the next step and continues with a new task; if it does not, he or she refines the code and rechecks the output. This is a typical hypothesis-verification cycle, often associated with the novice coding style [44], and novices who use this style often perform better. Since the teens coded in this style, it might explain why they outperformed the kids. Young novices usually do not try to debug their code, and thus face the difficult task of fixing a poorly executed block of code or successfully joining pieces of code [7]. The positive correlation between RLG and the number of transitions between scripts and output supports this explanation as well. Moreover, the higher number of transitions between the scripts and commands areas reflects the process of choosing the appropriate command to follow the current script. The teens spent more time than the kids finding the correct command and trying different ones, and thus learned more; again, the significant positive correlation between RLG and the number of transitions between the scripts and commands AOIs supports our explanation. A study involving kids as young as 5–6 showed that kids can plan their actions and think two or three commands ahead; those who do so concentrate hard on the screen, do not pay much attention to others' comments, and have more confidence in their actions and knowledge [10].

Finally, we found a relation between gaze and age group, since the teens had higher gaze similarity than the kids. One plausible explanation is that the groups with high gaze similarity were able to reflect together on their progress and deal with the coding tasks by making decisions together, which might have helped them create a shared understanding of the problem at hand; a higher level of shared understanding helped them attain a higher average RLG [43]. This is also supported by observations and assistants' comments during the activity: the teen teams helped each other more, while the kids quarrelled more about who would take the lead role in coding. The groups with low gaze similarity, by contrast, mostly focused on different parts of the program within the given time frame, which might have had a detrimental impact on their level of shared understanding and, in turn, their average RLG. The significantly positive correlation between gaze similarity and the groups' average RLG further strengthens this explanation. This result is in line with several other studies showing high levels of cross-recurrence or gaze similarity being correlated with task-based performance [19] and/or learning gains.

Conclusions

In this paper, we used eye-tracking data to analyse children's learning process in coding and to discover differences in their task-based behaviour according to age. We implemented a coding activity with 44 children aged 8–17, collected their gaze using mobile eye trackers, and measured their learning gain through pre- and post-tests. The teens (13–17 years old) attained higher RLG than the kids and tackled coding in a different way: the kids focused more on the appearance of the characters, while the teens followed more structured behaviour comprising basic programming practices such as debugging and testing. In addition, the gaze data showed that the teen teams collaborated better during the activity, which might have led to their higher learning gain. The paper introduces new means and measures by which to understand how children learn coding.
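Learning gain from pre- and post-tests is commonly normalised against the headroom the pre-test leaves for improvement. The paper does not spell out its RLG formula here, so the definition below is a standard normalised-gain sketch, not necessarily the authors' exact measure.

```python
def relative_learning_gain(pre, post, max_score):
    """Normalised gain: improvement over the room left for improvement."""
    # A common (Hake-style) normalised-gain definition; an assumption
    # here, not necessarily the paper's exact RLG formula.
    return (post - pre) / (max_score - pre)

# A child scoring 4/10 before and 8/10 after gains 4 of a possible 6 points
gain = relative_learning_gain(pre=4, post=8, max_score=10)  # ≈ 0.67
```

Normalising in this way makes gains comparable across children with different starting scores, which matters when comparing age groups with different prior knowledge.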

Implications

Scholars, educators, and practitioners should take into consideration the differences between kids' and teens' gaze during coding activities and recognize the impact these have on students' learning. Our findings highlight the importance of developing age-appropriate learning instructions. For example, the kids focused more on aesthetics and had lower gaze similarity and learning gain, so instructors should ensure that children in the young age bracket of 8–12 receive guidelines on where to pay attention when they code (such as the commands and output). The learning environment should also support collaboration and good communication within teams, so that all members benefit from each other's help.

Limitations

The present study is one of the first to use eye-tracking data to examine the relation between gaze, children's means of accomplishing coding tasks, and their learning gain. However, our study entails several limitations. One of the main challenges was that it was difficult to capture the kids' gaze because of their constant head movements during the activity, and because the eye-tracking glasses were not really appropriate for kids (8–12): they were not the right size for their faces and caused irritation, meaning that the kids had to take them off frequently. In addition, the collaborative concept of the workshop allowed the children to speak to each other to share their experience and express their enthusiasm, which made it difficult to collect high-quality data; we were only able to use 75% of the data in total. Furthermore, the workshops varied in duration because of time constraints related to whether the children had been recruited from schools or from local coding clubs, although in all workshops the children successfully completed the activities. Moreover, the participants were randomly selected from our region, so other sampling methods could have been applied to obtain a sample more consistent in terms of the children's prior knowledge of coding. Finally, this study lacked structured qualitative data (e.g., observations and interviews), which would be a fruitful opportunity for further research.

Future Research

This study opens up interesting perspectives for future research by introducing the idea of collecting eye-tracking data from coding activities involving children. One opportunity would be to use this type of data to explore gender differences and attitudes within coding activities. Furthermore, since our study showed differences between teens' and kids' coding methods, it could be useful to focus on a specific age group and examine their particular gaze behaviour. Future studies could also compare the role of gaze in alternative learning environments, and obtain deeper insights from longitudinal collection of eye-tracking data.

ACKNOWLEDGMENTS

The authors would like to express their gratitude to all of the children, teachers, and parents for volunteering their time. Our very special thanks go to Kristin Susanne Karlsen, Ioannis Leftheriotis, Amanda Jørgine Haug, Lidia Luque Fernandez, Marjeris Sofia Romero, Eline Stenwig, and Kristoffer Venæs Monsen. The project has been recommended by the Data Protection Official for Research, Norwegian Social Science Data Services (NSD), following all the regulations and recommendations for research with children. This work was funded by the Norwegian Research Council under the projects FUTURE LEARNING (number: 255129/H20) and the Centre for Excellent IT Education (ExcITEd, http://www.ntnu.edu/excited).

REFERENCES

1. Roman Bednarik, Niko Myller, Erkki Sutinen, and Markku Tukiainen. 2006. Program visualization: Comparing eye-tracking patterns with comprehension summaries and performance. In Proceedings of the 18th Annual Psychology of Programming Workshop, 66-82.

2. Roman Bednarik and Markku Tukiainen. 2006. An eye-tracking methodology for characterizing program comprehension processes. In Proceedings of the 2006 symposium on Eye tracking research & applications (ETRA'06), 125-132. http://dx.doi.org/10.1145/1117309.1117356

3. Marina Umaschi Bers. 2008. Blocks, robots and computers: Learning about technology in early childhood. Teachers College Press, New York, NY.

4. John B Biggs and Kevin F Collis. 2014. Evaluating the quality of learning: The SOLO taxonomy (Structure of the Observed Learning Outcome). Academic Press.

5. Paulo Blikstein. 2013. Gears of our childhood: constructionist toolkits, robotics, and physical computing, past and future. In Proceedings of the 12th international conference on interaction design and children (IDC'13), 173-182. http://dx.doi.org/10.1145/2485760.2485786

6. Leah Buechley, Mike Eisenberg, Jaime Catchen, and Ali Crockett. 2008. The LilyPad Arduino: using computational textiles to investigate engagement, aesthetics, and diversity in computer science education. In Proceedings of the SIGCHI conference on Human factors in computing systems (CHI'08), 423-432. http://dx.doi.org/10.1145/1357054.1357123

7. Jill Denner, Linda Werner, and Eloy Ortiz. 2012. Computer games created by middle school girls: Can they be used to measure understanding of computer science concepts? Computers & Education 58, 1: 240-249. http://dx.doi.org/10.1016/j.compedu.2011.08.006

8. Katelyn Doran, Acey Boyce, Samantha Finkelstein, and Tiffany Barnes. 2012. Outreach for improved student performance: a game design and development curriculum. In Proceedings of the 17th ACM annual conference on Innovation and technology in computer science education (ITiCSE'12), 209-214. http://dx.doi.org/10.1145/2325296.2325348

9. Barbara Ericson and Tom McKlin. 2012. Effective and sustainable computing summer camps. In Proceedings of the 43rd ACM technical symposium on Computer Science Education (SIGCSE'12), 289-294. http://dx.doi.org/10.1145/2157136.2157223

10. Georgios Fessakis, Evangelia Gouli, and E Mavroudi. 2013. Problem solving by 5–6 years old kindergarten children in a computer programming environment: A case study. Computers & Education 63: 87-97. http://dx.doi.org/10.1016/j.compedu.2012.11.016

11. Ann E Fleury. 1993. Student beliefs about Pascal programming. Journal of Educational Computing Research 9, 3: 355-371. http://dx.doi.org/10.2190/VECR-P8T6-GB10-MXJ5

12. Google and Gallup. 2015. Searching for computer science: Access and barriers in U.S. K–12 education.

13. Michail N Giannakos and Letizia Jaccheri. 2013. What motivates children to become creators of digital enriched artifacts? In Proceedings of the 9th ACM Conference on Creativity & Cognition (C&C'13), 104-113. http://dx.doi.org/10.1145/2466627.2466634

14. Judith Good and Paul Brna. 2004. Program comprehension and authentic measurement: a scheme for analysing descriptions of programs. International Journal of Human-Computer Studies 61, 2: 169-185. http://dx.doi.org/10.1016/j.ijhcs.2003.12.010

15. Shuchi Grover, Stephen Cooper, and Roy Pea. 2014. Assessing computational learning in K-12. In Proceedings of the 2014 conference on Innovation & technology in computer science education (ITiCSE'14), 57-62. http://dx.doi.org/10.1145/2591708.2591713

16. Shuchi Grover and Roy Pea. 2013. Computational Thinking in K–12: A Review of the State of the Field. Educational Researcher 42, 1: 38-43. http://dx.doi.org/10.3102/0013189X12463051

17. Prateek Hejmady and N Hari Narayanan. 2012. Visual attention patterns during program debugging with an IDE. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA'12), 197-200. http://dx.doi.org/10.1145/2168556.2168592

18. Ole Sejer Iversen, Rachel Charlotte Smith, Paulo Blikstein, Eva-Sophie Katterfeldt, and Janet C Read. 2016. Digital fabrication in education: Expanding the research towards design and reflective practices. International Journal of Child-Computer Interaction 5: 1-2. http://dx.doi.org/10.1016/j.ijcci.2016.01.001

19. Patrick Jermann and Marc-Antoine Nüssli. 2012. Effects of sharing text selections on gaze cross-recurrence and interaction quality in a pair programming task. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (CSCW'12), 1125-1134. http://dx.doi.org/10.1145/2145204.2145371

20. Patrick Jermann, Marc-Antoine Nüssli, and Weifeng Li. 2010. Using dual eye-tracking to unveil coordination and expertise in collaborative Tetris. In Proceedings of the 24th BCS Interaction Specialist Group Conference (BCS'10), British Computer Society, 36-44.

21. Larry Johnson, Samantha Adams Becker, Victoria Estrada, and Alex Freeman. 2015. The NMC Horizon Report: 2015 Museum Edition. ERIC.

22. Yasmin B Kafai and Quinn Burke. 2015. Constructionist gaming: Understanding the benefits of making games for learning. Educational Psychologist 50, 4: 313-334. http://dx.doi.org/10.1080/00461520.2015.1124022

23. Yasmin B Kafai and Veena Vasudevan. 2015. Constructionist gaming beyond the screen: Middle school students' crafting and computing of touchpads, board games, and controllers. In Proceedings of the Workshop in Primary and Secondary Computing Education (WiPSCE'15), 49-54. http://dx.doi.org/10.1145/2818314.2818334

24. Karen Brennan and Mitchel Resnick. 2012. New frameworks for studying and assessing the development of computational thinking. Paper presented at AERA.

25. Lieve Laporte and Bieke Zaman. 2016. Informing content-driven design of computer programming games: a problems analysis and a game review. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI'16), 61. http://dx.doi.org/10.1145/2971485.2971499

26. Eunkyoung Lee, Yasmin B Kafai, Veena Vasudevan, and Richard Lee Davis. 2014. Playing in the arcade: Designing tangible interfaces with MaKey MaKey for Scratch games. In Playful User Interfaces, Springer, 277-292. http://dx.doi.org/10.1007/978-981-4560-96-2_13

27. John H Maloney, Kylie Peppler, Yasmin Kafai, Mitchel Resnick, and Natalie Rusk. 2008. Programming by choice: urban youth learning programming with Scratch. In Proceedings of the 39th SIGCSE technical symposium on Computer science education (SIGCSE'08), 367-371. http://dx.doi.org/10.1145/1352135.1352260

28. Horizon Media. 2015. Horizon Media study reveals Americans prioritize STEM subjects over the arts; science is "cool," coding is new literacy. PR Newswire.

29. Marc-Antoine Nüssli, Patrick Jermann, Mirweis Sangin, and Pierre Dillenbourg. 2009. Collaboration and abstract representations: towards predictive models based on raw speech and eye-tracking data. In Proceedings of the 9th international conference on Computer supported collaborative learning (CSCL'09), International Society of the Learning Sciences, 78-82.

30. Sofia Papavlasopoulou, Michail N Giannakos, and Letizia Jaccheri. 2017. Empirical studies on the Maker Movement, a promising approach to learning: A literature review. Entertainment Computing 18: 57-78. http://dx.doi.org/10.1016/j.entcom.2016.09.002

31. Seymour Papert. 1980. Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc.

32. Dale Parsons and Patricia Haden. 2006. Parson's programming puzzles: a fun and effective learning tool for first programming courses. In Proceedings of the 8th Australasian Conference on Computing Education - Volume 52 (ACE'06), 157-163.

33. Sami Pietinen, Roman Bednarik, Tatiana Glotova, Vesa Tenhunen, and Markku Tukiainen. 2008. A method to study visual attention aspects of collaboration: eye-tracking pair programmers simultaneously. In Proceedings of the 2008 symposium on Eye tracking research & applications (ETRA'08), 39-42. http://dx.doi.org/10.1145/1344471.1344480

34. Sami Pietinen, Roman Bednarik, and Markku Tukiainen. 2010. Shared visual attention in collaborative programming: a descriptive analysis. In Proceedings of the 2010 ICSE workshop on cooperative and human aspects of software engineering (CHASE'10), 21-24. http://dx.doi.org/10.1145/1833310.1833314

35. Alex Poole and Linden J Ball. 2006. Eye tracking in HCI and usability research. Encyclopedia of Human-Computer Interaction 1, 211-219.

36. Daniel C Richardson, Rick Dale, and Natasha Z Kirkham. 2007. The art of conversation is coordination: common ground and the coupling of eye movements during dialogue. Psychological Science 18, 5: 407-413. http://dx.doi.org/10.1111/j.1467-9280.2007.01914.x

37. Judy Robertson. 2012. Making games in the classroom: Benefits and gender concerns. Computers & Education 59, 2: 385-398. http://dx.doi.org/10.1016/j.compedu.2011.12.020

38. Anthony Robins, Janet Rountree, and Nathan Rountree. 2003. Learning and teaching programming: A review and discussion. Computer Science Education 13, 2: 137-172. http://dx.doi.org/10.1076/csed.13.2.137.14200

39. Pablo Romero, Rudi Lutz, Richard Cox, and Benedict du Boulay. 2002. Co-ordination of multiple external representations during Java program debugging. In Proceedings of the IEEE 2002 Symposia on Human Centric Computing Languages and Environments, 207-214. http://dx.doi.org/10.1109/HCC.2002.1046373

40. José-Manuel Sáez-López, Marcos Román-González, and Esteban Vázquez-Cano. 2016. Visual programming languages integrated across the curriculum in elementary school: A two year case study using "Scratch" in five schools. Computers & Education 97: 129-141. http://dx.doi.org/10.1016/j.compedu.2016.03.003

41. Mirweis Sangin. 2009. Peer knowledge modeling in computer supported collaborative learning. École Polytechnique Fédérale de Lausanne.

42. Bonita Sharif, Michael Falcone, and Jonathan I Maletic. 2012. An eye-tracking study on the role of scan time in finding source code defects. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA'12), 381-384. http://dx.doi.org/10.1145/2168556.2168642

43. Kshitij Sharma, Daniela Caballero, Himanshu Verma, Patrick Jermann, and Pierre Dillenbourg. 2015. Looking AT versus looking THROUGH: A dual eye-tracking study in MOOC context. In Proceedings of the Computer Supported Collaborative Learning Conference (CSCL'15), 260-267.

44. Elliot Soloway and Kate Ehrlich. 1984. Empirical studies of programming knowledge. IEEE Transactions on Software Engineering 10, 5: 595-609. http://dx.doi.org/10.1109/TSE.1984.5010283