Embodied programming with visual and tangible representations

Jakob Tholander & Ylva Fernaeus
Department of Computer and Systems Sciences, Stockholm University
Forum 100, 164 40 Kista, Sweden
[email protected], [email protected]

In this paper, we discuss how meaning-making and knowledge in a particular area depend upon the representations used for expression and interaction. Based on a number of empirical examples, we discuss how new forms of representing computer programs turn the activity of programming from a mental into a largely embodied activity. It follows that what it means to program, and to learn programming, becomes essentially different as the representational form changes. Thus, different forms of representation imply not only different activities, but in effect also differences in the content and meaning-making of the activity at large. We argue that concrete representations should not be seen as ways of simplifying the learning of particular concepts, but must instead be understood as redefinitions, or alternative definitions, of the concepts themselves.

Computer technology has provided increased possibilities for developing and exploring different ways of representing knowledge and information. Investigating the use of such representational forms – textual, graphical, dynamic and even physical media – has played a considerable role in the development of computer-supported learning environments, as well as in the field of human-computer interaction in general. In terms of interaction, the general understanding is that the “visual” and the “concrete” are easier for people to learn and to use than “textual” and “abstract” forms of representation. This is exemplified, for instance, in the shift from textual towards graphical interfaces, and also in current research in the field of tangible systems and applications. We present a number of micro-analytical investigations of students’ interactions and programming with tangible and visual representations. Based on these, we discuss some consequences that we find to be disregarded in discussions of the role of representations for learning and interaction with programming materials. We argue that concrete representations should not be seen as ways of simplifying the learning of particular concepts, but must instead be understood as redefinitions, or alternative definitions, of the concepts themselves. We start by discussing our analytical starting point, which is based on studies of people’s use and meaning-making with technological artefacts in an ethnomethodological tradition. Thereafter, we present a number of excerpts from children’s programming of simulations and games in the animated programming system ToonTalk and with tangible programming prototypes. Finally, we discuss possible consequences of these forms of programming with respect to the learning of formal concepts and abstractions often aimed for in school learning situations.

Starting points

Work in ethnomethodology and conversation analysis has specifically focused on the sense-making practices that people use in social interactions. An important element of such work is to understand the indexical aspects of language and interaction, i.e. how people make sense of indexical terms like “this” and “there”, and how they are able to understand what someone is referring to through pointing and gesture. Central to studies in these areas is that making sense of such actions and utterances is an interactive achievement between the involved participants through the use of external resources. Hence, the meaning of the term “that” in the phrase “pick up that one” is not to be found solely in the intentions of the person uttering the phrase, nor in the addressee’s capability of decoding its meaning, but rather in the interactions between the participants and in their relation to the context. Moreover, a number of studies have shown the importance of the body in these practices, for instance how posture and gaze in concert with talk contribute to making particular features of the environment salient to another person (Goodwin 2000). An example from a technology-rich setting could be how, by moving their gaze away from a screen that serves as a shared space for indexing actions, a participant may indicate to others that an alternative course of action is being suggested (Heath and Hindmarsh 2000). Gesture has also been studied in learning settings as a way of articulating what one knows (Koschmann and LeBaron 2002). Common to all this work is the role of external artefacts in people’s understanding of talk and gesture. A vivid example is provided in Goodwin’s (2000) study of girls’ social interaction when playing hopscotch. The hopscotch grid drawn on the ground provides a structure to the activity, which allows the girls to use phrases like “this” and “that”, and also gesture, to refer to different squares in the grid. He argues that “the deictic term used to talk about particular squares presupposes bounded entities” (p. 1505). Similarly, in the excerpts presented here, we will argue that the character of the representations works as a structuring resource (Lave 1988), making bodily action central to the children’s activity and allowing them to refer to program elements in a similar style.
In this paper, we use this perspective to study the character of children’s interaction in three different situations when engaged in hands-on activities around visual and tangible programming representations. The focus is on detailed description of children’s actions and interactions while working with the tools. Studying the children’s actions at a fine-grained level of analysis allows us to describe the actual character of the activity that they engage in.

Representations and learning

We start out from the idea that different forms of representation afford different forms of interaction; thus the whole activity that people engage in will also be different (Wertsch 1998). For instance, in HCI it is well known that looking at, and interacting with, a graphical interface is essentially different from interacting with a text-based or command-driven interface, not only in how to “read” the system, but also in how to interact with and through it (Norman 1993). In addition, each representational form puts emphasis on different aspects of what is being represented. Representing a number sequence in algebraic form is quite different from representing it as a visual diagram or as a dynamic computer program. Hence, different forms of representing knowledge and information imply not only different kinds of interaction; they also influence people’s meaning-making with those representations. A new representational form redefines what it means to know about what is represented; e.g., knowing mathematics through algebra is not the same thing as knowing mathematics through programming (Lesgold 1998). Consequently, as new ways of representing the content of a particular domain are developed, new ways of “evaluating” what it means to know about that domain must follow. In this paper, we explore this idea for the case of computer programming and new program representations.

Programming for children

A long-term ambition in research on interaction design and educational technology for children has been to develop representations that make it easier for children to interact with, and learn from, complex technology (Druin 1999; Papert 1993). It is often argued that alternative forms of representation, such as visual metaphors and concrete artefacts that can be physically manipulated, provide bridges for young students to grasp abstract or theoretical concepts. Computer programming has traditionally been considered an activity dominated by reasoning of the kind that is characteristic of formal domains such as mathematics and physics. In particular, it has been argued that programming supports children in developing abstract and theoretical thinking skills. When using a symbolic programming language such as Logo, users type text serving as formal specifications of how the objects in the program should behave. This is known to be a mentally complex activity (Pea and Kurland 1984). Much recent research on programming languages for children concerns alternative ways of representing computational systems. Currently, several efforts in this area concern developing visual and tangible representations as forms of expressing program structures and functionality. A main argument put forth for such approaches to programming is that they make otherwise abstract concepts more concrete and easier to manipulate. Examples of research attempting to find program representations that map more directly to the runtime representation include graphical rewrite rules (Smith and Cypher 1999) and comic strips (Kindborg 2003). Another approach is to use concrete virtual objects (Tholander 2003) and real-world physical artefacts (Eisenberg, Eisenberg et al. 2002; Zuckerman and Resnick 2003) as elements in the computational representation.
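To give a concrete flavour of the first of these approaches, the following is a minimal sketch of a graphical rewrite rule operating on a one-dimensional grid. It is our own illustration in Python, with a hypothetical rule and grid, not Smith and Cypher’s actual system: a rule pairs a “before” picture with an “after” picture, and programming amounts to drawing the change one wants to see.

```python
# A graphical rewrite rule pairs a "before" picture with an "after"
# picture; applying the rule replaces the first match of "before" in
# the grid with "after". Hypothetical illustration, not an actual
# KidSim/Cocoa implementation.

def apply_rule(grid, before, after):
    """Replace the first occurrence of `before` in `grid` with `after`.

    Returns True if the rule fired, False otherwise."""
    assert len(before) == len(after)
    for row in grid:
        s = "".join(row)
        i = s.find(before)
        if i != -1:
            row[i:i + len(before)] = list(after)
            return True
    return False

# "A car (C) with an empty cell (.) to its right moves one step right."
move_right = ("C.", ".C")

road = [list("C....")]
while apply_rule(road, *move_right):  # fire the rule until it no longer matches
    print("".join(road[0]))
```

Run repeatedly, the rule walks the car across the road one cell per application and stops by itself when the “before” picture no longer matches, which is exactly the behaviour the drawn rule depicts.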
Embodied programming

In visual and tangible programming environments, programs are made by physically manipulating objects that represent functions, objects and relations in the program, rather than by producing algorithms in the form of written text. These representations provide the possibility for participants to engage in what we call “embodied programming”. We use this term – in the same vein as Dourish’s (2001) embodied interaction – to emphasise that programming with visual and tangible forms of representation allows people to involve bodily actions such as pointing and gesture in a more direct sense than is possible with traditional text-based or symbolic programming representations. Embodied, in the sense used here, does not refer to how abstract concepts can be given concrete and physical form, as when, in mathematics, a concept is sometimes said to be embodied in a particular programming tool. Instead, “embodied” should be understood as an interactive phenomenon that only occurs in the social and physical relation between people, their bodies, the artefacts in their environment, and the actions that they take with those artefacts (Goodwin 2000; Dourish 2001). This includes the bodily actions performed in order to interact with and through the programming environment, and also the physical aspects of the discussions, negotiations and perceiving (Nishizaka 2000) that go on when people engage in construction activities with such tools. The physical aspects include the extent of body language involved in the interaction as well as in elaborations of the content of the program being discussed. Hence, text-based approaches may also be viewed as embodied, allowing users’ bodily actions to become part of their programming. However, tangible and physical representations have properties that afford users to refer to them through pointing and deictic language in a way that is more directly coupled to their representational form.
For instance, saying “set the value of variable s to thirty” is fundamentally different from pointing to an object and saying “make that one move this way”. Hence, these representations afford a set of actions and activities that are quite different from those afforded by more text- or symbol-based representations.

Case studies of embodied programming

The excerpts presented in this paper are taken from workshops held with children learning to program their own animated games and simulations. The children work in pairs or in groups, programming on the computer or conducting role play activities with paper-based representations of dynamic systems. The fragments are taken from three different situations and with different groups of children. The first fragment shows how gestures and body language, but also external resources such as a worksheet, play a central role when children come to agree on what to build on the computer. In the process of “translating” an idea that is expressed statically on paper into the dynamics of the computational medium, gesture and body language play a central role. The second fragment shows how two children, in the process of changing the speed of one of the objects in their game, also to a large extent use gestures, outside as well as inside the programming environment, in order to agree on the actions to take in the interface. The third fragment shows how a group of children collaboratively debug a program represented in a tangible prototype. The three situations were selected because they provide examples of what we find typical of what we call “embodied programming”: a domination of social and embodied elements, structured by the physical manifestations of the programming environment. The children’s talk was originally in Swedish and has been translated into English by the authors. In the transcripts, we have excluded the children’s original utterances in Swedish, partly because of the very crude character of these utterances, but also to emphasise the importance of the nonverbal actions that the children take. In the transcripts, talk is in italics and other actions are within brackets.
The role of gesture when moving between different media

In this excerpt, two boys have been given the task of building a game in the animated programming language ToonTalk (Kahn 2002). The game is described on a piece of paper that they have to the left of the computer, and they are now in the process of discussing how it should run on the computer. The programming environment is designed particularly for children and is based upon concrete metaphors represented in a visual interface. The excerpt starts when Sam and Jacob have just finished programming one of the characters in the game and are moving on to discuss how the next character should be programmed. The conversation takes the form of a negotiation and discussion of how they should interpret the static representation of the game that they have available on a piece of paper next to their computer (Figure 1). In order to express their ideas of how to program the dynamics of the game, they largely use gesture as a means of articulation.

Figure 1. Picture of the game

57 S: this should only go like this huh? (waves left hand)
58 J: yeah and then behind (1.5) (makes circular gesture) [the car
59 S: [behind?=
60 J: =yeah (1.0)
61 S: what (.) (looks at Jacob)
62 S: yeah right go and then like [do- (points to the screen and makes gesture to the left)
63 J: (picks up worksheet) [go like this (2.0) (makes circular clockwise gesture) through (.)
64 J: that those two should do (.) (points at paper with two fingers)
65 J: it should do (.) that one (points at worksheet)
66 J: should go in the other way (makes circular counter clockwise gesture)

The excerpt starts with Sam suggesting how one of the characters in the game, a water-skier, should move (line 57). The suggestion is achieved by combining talk (“should go like this”) and gesture (waving the left hand), where the waving of the hand provides a means of proposing and describing a motion for the water-skier that would supposedly require more effort to describe in words. The suggestion is refined by Jacob (line 58) in the following turn, where he says that the water-skier should move “behind”. Immediately after saying “behind”, he also makes a circular gesture to further elaborate what “behind” should mean in this particular situation. The meaning of “behind” is ambiguous in this context; it seems that “behind” works as a description of how an object is invisibly moved across the scene from one side to the other. Thereby, the two gestures work as resources for the children to describe and elaborate how to dynamically represent what is statically described on the piece of paper. In line 59, Sam repeats Jacob’s “behind” in a low-pitched tone of voice, which suggests to Jacob that he is still not really sure what “behind” should mean in this particular situation. In the following turn (line 61), Sam moves his gaze towards Jacob immediately after saying “what”. By moving his gaze towards Jacob and by leaving the end of his description open (line 62: “then like [do-”), he indicates that he does not fully understand Jacob’s suggestion. Jacob responds to this by attempting to refine his explanation. This is achieved by picking up the piece of paper, and thereby moving the attention away from the screen. In lines 63–66, Jacob uses the paper in combination with talk and gesture to elaborate on his description. Previously he tried to do this by using the screen as the primary reference point; now the piece of paper works as an alternative and less complicated representation of what they are building. The screen is filled with a large number of objects, pictures and programming behaviours, which requires substantial work to talk about. By using gesture they are able to “leave” the screen and instead create an alternative space in which it is easier for them to focus on one particular object and its behaviour. In summary, what the children repeatedly have to do throughout this passage is to establish a way of referencing “things” that are not available on the screen or on the piece of paper. The paper is a static description of something dynamic, while the programming system provides the resources for constructing what is described on the paper. However, what they should produce – a dynamic representation on the screen of the static description on the paper – is not physically there to refer to. They have to make substantial efforts to establish a way of talking about it before they can go on to actually build it. In doing that, bodily actions such as gesture and programming interface actions play a significant role. By combining talk and gesture in relation to the paper representation and to the programming system, the children are able to mutually construct a semantic space for talking about the properties of what they are building, even though those properties are neither physically nor virtually there to refer to.

Embodied interaction through visual representations

In the previous excerpt, making references to a static description on a piece of paper played an important role in how the children’s activity progressed. In the next excerpt, the activity unfolds in a different manner. One important difference is that the children have no static representation to refer to; instead, what they build evolves more in accordance with what they discover in the programming environment. The children are in the process of reducing the speed of one of the characters in the game they are building together. The passage shows how gesture and indexical language are not merely a complement to the children’s verbal actions, but are actually essential parts of the conversation when the children are making their programs.

1. T: should we have that one= (points with TT tool Dusty at the hunter character in their game)
2. B: =yeah
3. T: (removes the hunter from the game, puts it on the TT floor)
4. T: (points with the TT hand on the hunter on the floor)
5. B: pick it up
6. T: (picks up the hunter, flips it over, points with the TT-hand on the behaviour square to the right) that one huh=
7. B: =yeah
8. T: (picks up the right square, tries to put it down) put down (puts the square down on the floor)

9. T: (points to the left square) this one
10. B: no
11. T: (keeps pointing at the left square)
12. B: yeah
13. T: (picks up the square and puts it down on the floor, points to the robot’s box with the TT-hand)
14. B: pick it up
15. T: (gets the box and moves it to some free space on the floor)
16. T: (points to the number “600” in the box and takes it out)
17. B: minus 600, ok not minus 600, but minus 550… no minus 450

In this excerpt, the specific representations of the computational material play a central role in how the children’s interactions unfold. A significant aspect regards how the conceptual aspects of programming here largely involve bodily actions. Instead of producing textual code, the children operate on visual representations and objects that are already present in the programming interface. The children successfully interact with the programming interface; note, however, that this is not trivial but requires significant effort by the two children. Rather than having to memorise commands, however, this effort has more of an embodied character. To change the speed of the hunter character, the children need to grab hold of “Dusty the vacuum cleaner”, aim it at the hunter character (line 1), press a command to vacuum it off the game, and then press another command to let Dusty spit out the hunter again on the floor (line 3). Then the hunter needs to be “flipped” so that the behaviours located on the back of the image become exposed. The particular behaviour that controls the speed then needs to be identified (lines 6–12) and taken apart in order to change it (lines 13–16). An important aspect that we would like to bring up here concerns how the children “dereference” the deictic actions – e.g. short utterances like “that” accompanied by pointing gestures – that so dominate their interactions. How do the children actually establish what such actions refer to?
Clearly, this is far from trivial, given the complex and often cluttered programming environments that they use. Since their actions are to such a large extent non-descriptive, they have to use other means of establishing what their interlocutors are referring to. These actions get their meaning through their relation to the ongoing activity, and especially to the other actions in the interactive sequence of utterances and actions. In these turns, B seems to be actively monitoring T’s actions in the programming interface, by making comments that directly refer to these actions (line 2: yeah) or that suggest an appropriate next action (line 5: pick it up). T also seems to orient her actions so that B becomes part of the details of finding the correct programming element. Throughout the excerpt, T appears highly sensitive to B’s utterances, as she repeatedly delays her interface actions to get his response. For instance, in line 4, T points to the hunter with the ToonTalk hand, thereby providing the opportunity for B to suggest an appropriate next action, as he does in line 5: “pick it up”. This works as a way for T to seek confirmation for the action she is proposing. A similar example is also seen in lines 9 through 13, where T does not pick up the object she is pointing at with the ToonTalk hand until B has confirmed it verbally.

9. T: (points to the left square) this one
10. B: no
11. T: (2.0) (keeps pointing at the left square)
12. B: yeah
13. T: (picks up the left square)

At first, B disagrees with T by rejecting her suggestion (lines 9 and 10). But by holding her action of pointing at the behaviour (line 11), T is able to “argue” that they should pick that one, without having to articulate the idea verbally. Even though B does not immediately agree with this suggestion, T’s pointing seems to lead him to rethink it. This short fragment shows how bodily actions such as gesture and pointing also work as ways of displaying and communicating knowledge and ideas to one another. Moreover, such actions can only be meaningfully understood as interactively constructed in the current situation and in relation to the surrounding talk, the game as it evolves, and the programming interface. The skills that the children display in the passage above have more the character of fluency in working with each other and with the tools than of mentally calculating solutions or constructing abstractions of the problems they face. Even if one could argue that such reasoning took place, it is not visible in the activity that the children engage in. For instance, when selecting the speed at which the hunter should move in the game (of which the excerpt is a part), the children start with the preprogrammed initial value and iteratively change it towards a value that works sufficiently well for the purpose of their game. Even though there is an ongoing discussion of how much the speed needs to be changed in each round (see line 17), these discussions are generally based on their direct experience with the material, rather than on calculations performed in one’s head. This results in a rather lengthy procedure of repeated changes, which could perhaps have been shortened by more structured reflection on the speed as a function of how quickly the object moves in relation to the size of the game. However, the fact that the children eventually succeed in finding a suitable speed for their hunter shows that the practice they have developed is sufficient for achieving what they set out to do.

Social representation of programming elements

Our next excerpt shows an episode where a group of children is testing a prototype system for a tangible programming environment. The prototype is based on behaviour cards, representing program code, that can be attached to physical representations of objects in a game. The cards can be physically moved between the different objects, thereby reprogramming the game. The purpose of the system is to support children in collaborative construction and role plays of dynamic applications that can be run inside as well as outside the computer. In the next fragment, tangible objects, but also the children themselves, are involved in controlling, running and representing parts of the system. Large paper elements representing the visual elements of the computational system are arranged on an area of the floor serving as the background. The two researchers were assigned the role of users, and the children were assigned responsibility for the execution of one rule, or a small set of rules, in the system; these rules are called robots.
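The execution model that the role play enacts – in each round, every rule (“robot”) is checked in turn and its action is performed only if its condition currently holds – can be sketched as follows. This is our own minimal Python illustration, with hypothetical rules modelled on the leaf behaviours in the excerpt below, not the prototype’s actual implementation.

```python
# Round-based evaluation of rules ("robots"): each robot holds a
# condition and an action; in every round the condition is checked and,
# if it holds, the action is performed. Hypothetical illustration only.

class Robot:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition
        self.action = action

def run_round(state, robots):
    """Evaluate every robot once; return the names of those that fired."""
    fired = []
    for robot in robots:
        if robot.condition(state):
            robot.action(state)
            fired.append(robot.name)
    return fired

# A leaf falls one step per round; when it reaches the bottom (y == 0)
# another robot moves it back to the top (y == 3).
state = {"leaf_y": 2}
robots = [
    Robot("move-leaf-down", lambda s: s["leaf_y"] > 0,
          lambda s: s.update(leaf_y=s["leaf_y"] - 1)),
    Robot("reset-leaf-to-top", lambda s: s["leaf_y"] == 0,
          lambda s: s.update(leaf_y=3)),
]

for round_no in range(3):
    print(round_no, run_round(state, robots), state["leaf_y"])
```

Note that, as in the role play, the order in which robots are evaluated within a round matters: here the leaf can reach the bottom and be reset to the top within one and the same round, because the reset robot is checked after the falling robot.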

Figure 2. A group engaged in a role play activity

The game is then played by iteratively evaluating all the rules/behaviours in the system. In each iteration, the children perform their actions if the conditions for their behaviour are fulfilled.

Excerpt 1: Students’ conversation in a role play activity

1 Researcher 1: So, let’s try something else. What happens if we remove a robot?
2 Children (many at a time): Not me!
3 Sandra: I haven’t done anything
4 Researcher 1: You haven’t done anything?
5 Per: Because we (points to himself and Lisa) are number one and no one has collided with us yet
6 Klas: If we remove me, then we have to remove her as well (points to Sandra)
7 Researcher 1: Why?
8 Klas: Because I stand to the right. I move to the right and she takes leaf one
9 Sandra: But he (Tom) goes in that direction and you (Klas) go this way… oh I see
10 Tom: If you remove her (Sandra), then you have to remove Ida as well
11 Tom: Because look…
12 Klas: Because I cannot go there if I am removed (points to Sandra)
There are two issues that we would like to highlight in this excerpt. The first is that the children get a first-person experience of the computational process and that they engage in the activity on a social level. Clearly, the children identify themselves with the behaviours they are responsible for, referring to them as “me”, “we” and “her”. This personal identification with the behaviours seems to play an important part when they discuss the researcher’s suggestion in turn 1 that they should try something else. Sandra’s saying that she had not performed her action yet became a trigger for the rest of the group to discuss why this was the case, and whether removing any other behaviour would have the effect that Sandra still would not get a chance to perform her actions. For instance, in turn 5, where Per suggests why Sandra did not get a chance to execute her action, as she had complained in turn 3, he pairs his program action with Lisa’s, since both their actions concerned the same object: Lisa kept moving the leaf down, Per was responsible for moving it to the top again when it reached the bottom, and Sandra was responsible for removing the bug if it collided with the leaf. The second issue is that relations between one’s own and others’ physical positions around the game are essential parts of the activity. In turn 6, Klas brought up a consequence of removing the rule that he was responsible for: if that rule were removed, then Sandra’s would also have to be removed, since her rule would then never trigger. When explaining this relationship, he begins by referring to his own physical position: “because I stand to the right” (turn 8). The children have arranged themselves around the game so that their program actions in the game correspond to the way they are physically located. By referring to his own physical location, it is hence possible for the others to see that he is the one who would move the bug in the direction of Sandra, and that if he were removed, Sandra would still not get a chance to trigger. Several of the turns that followed were of a similar character, taking dependencies and relations between objects into consideration, with reference to one’s own and others’ physical positions and actions in the game. Even in this quite short passage, five of the eight children were actively engaged in analysing the consequences of removing a robot. The suggestion initiated by the researcher in turn 1 worked as a catalyst for the whole group to get involved in a negotiation of the roles played by the different objects and behaviours that make up the system. The actions of each child were grounded in the previous turns and in almost every case explicitly considered different aspects of the relations between themselves and the objects in the system. Tom’s proposal in turn 10 is an excellent example of how the suggestion by Klas in turn 6 is picked up by another participant in the group. Tom tries to make Klas’s suggestion understandable to the others by drawing an analogy to the objects, the rules, and the children handling these on the left part of the game. Since these have behaviours similar to those on the right side, the same argument should be valid for Anna and Ida on their side of the game.

Discussion

Summary

Throughout this paper we have shown how representations are resources for action, and how the actions that the children take are intrinsically intertwined with these representations.

• The first excerpt showed how gesture became central when two children discussed how to interpret a static representation while programming a dynamic system.

• The second excerpt showed two children who collaboratively used visual forms of representation to reprogram a game. Central here was the role of bodily forms of interaction in their negotiations and discussions.

• The third excerpt showed an example of tangible and socially distributed representations used in a group, where the participants themselves became active elements of the representations.

We argue that the actions in these three situations can only be meaningfully described through the children’s relations to the representations, and also that the representations only become meaningful for the children through the actions they perform with them. In the following, we discuss these embodied forms of programming and interaction, and how to understand new representational forms for programming and learning.

Programming – embodied versus mental activity

The primary observation from these excerpts is that programming in these situations largely becomes an embodied and physical activity. While new tools for programming have often had the purpose of making it easier, especially for children, to build computational systems, we believe that this view is somewhat simplified. Visual and tangible programming representations reduce the complexity of programming in the sense that one does not have to memorise a complex programming syntax, nor compose and interpret algorithmic structures. However, one effect that we have shown is that the sense-making practice instead shifts to emphasise embodied and physical thinking when programming. This stands in strong contrast to the traditional conception of programming as a cognitive activity, where the principal action goes on within the minds of the participating individuals. Visual tools used for building dynamic systems often contain large amounts of information in the form of text, icons, toolboxes and animated “virtual agents”, which means that particular programming concepts often become difficult for children to talk about in their ongoing interactions. This seems to lead them to use other resources, such as gesture and representations on paper, in their interactions about programming and model building. Bodily actions such as the ones we have shown examples of here have had little room in discussions regarding the learning and use of programming tools. Viewing programming as a purely mental activity is clearly not in accordance with the analysis presented here. Children’s ideas, and their expression of ideas, often evolve alongside the actions taken and the feedback given in the programming interface. Participation in the activity must be recognised as a collaborative effort, where the individual children, as well as the character of the representational artefacts, play a central role in shaping the final product. Hence, separating mastery of the tools from domain-specific learning becomes problematic. We argue that embodied and enacted forms of participation in tool-intense activities need to be recognised as just as important to the learning process as verbalisations of knowledge. Producing programs through embodied forms of interaction requires different sets of considerations than producing them through more mentally oriented interaction. This difference in form and structure also implies differences in meaning and content.

Interpreting learning from embodied activities

Finally, as new forms of representing and expressing knowledge evolve, the actual content of what is expressed and communicated will change.
We find that this point has largely been overlooked in discussions regarding new representational forms for learning. For instance, in Ivarsson's (2004) conclusion from a study with findings similar to those discussed here, he argues that the results "suggest that there is a possible conflict between this highly indexical language and more theoretical knowledge" (p. 10). He thereby proposes that the practical activity in a domain is something different from learning its theoretical content. Hence, gaining theoretical knowledge would require other – non-practical – kinds of activities, whatever those might be. An important assumption in this argument is that knowing is closely connected to verbal articulation, while non-verbal action is secondary to the process of knowing. We would instead like to emphasize that with visual and tangible representations, indexical language becomes an important part of the actual expression of ideas, and that a significant portion of knowing lies in the non-verbal actions with the particular physical resources available in the situation at hand. Attempting to have students develop knowledge that is detached from the representational tools they rely upon, whether these are textual, visual, or mathematical, is then not in line with a perspective on learning that emphasizes the relationship between people, activities, and artefacts. Moreover, what the "theoretical knowledge" would be in activities that to a large extent rely on external tools and representations is difficult to define. The representations that the children are working with are not primarily old concepts in new form, but actually alternative definitions of the concepts themselves. For instance, Wilensky and Resnick (1998) argue that programming computational systems with tools like StarLogo redefines what it means to learn about complex systems.
The primary question is then not whether this is a better or easier way of learning important concepts in the domain of complex systems theory; most importantly, it is a different way of knowing, and of expressing knowledge, in that domain.

The visual and physical elements that the children are interacting with are not merely representations of abstract pieces of information that could be represented in other forms with their information content preserved. Instead, the representations provide resources for particular kinds of actions, and it is through these actions that the meaning of the representations, in other words their information content, gets constructed. Hence, if the representations are changed, the possible actions and meaning-making that they afford are also changed. For instance, the phrase "that one" accompanied by a pointing gesture would be difficult to make sense of in an environment where the target of the utterance and the gesture does not have properties suitable for these actions. Similar actions in a text-based representation would require additional interactive work – probably language-based – on behalf of the participants to make their actions meaningful to one another. From this it follows that embodied interactions have to be valued in learning settings equally with interactions that involve articulations and verbalisations in more abstract form.

References

Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT Press.

Druin, A., Ed. (1999). The Design of Children's Technology. San Francisco, CA: Morgan Kaufmann Publishers.

Eisenberg, M., Eisenberg, A., et al. (2002). Computationally-Enhanced Construction Kits for Children: Prototype and Principle. International Conference of the Learning Sciences, Seattle.

Goodwin, C. (2000). Action and embodiment within situated human interaction. Journal of Pragmatics 32(10): 1489-1522.

Heath, C. and Hindmarsh, J. (2000). Configuring Action in Objects: From Mutual Space to Media Space. Mind, Culture, and Activity 7(1&2): 81-104.

Ivarsson, J. (2004). Renderings & Reasoning: Studying Artifacts in Human Knowing. Göteborg Studies in Educational Sciences 210. Göteborg University.

Kahn, K. (2002). ToonTalk.

Kindborg, M. (2003). Concurrent Comics – Programming of Social Agents by Children. Department of Computer and Information Science, Linköping University.

Koschmann, T. and LeBaron, C. (2002). Learner Articulation as Interactional Achievement: Studying the Conversation of Gesture. Cognition and Instruction 20(2): 249-282.

Lave, J. (1988). Cognition in Practice: Mind, Mathematics and Culture in Everyday Life. Cambridge University Press.

Lesgold, A. (1998). Multiple representations and their implications for learning. In van Someren, M. W., Reimann, P., Boshuizen, H. P. A., and de Jong, T. (Eds.), Learning with Multiple Representations. Elsevier Science: 307-319.

Nishizaka, A. (2000). Seeing What One Sees: Perception, Emotion, and Activity. Mind, Culture, and Activity 7(1&2): 105-123.

Norman, D. A. (1993). Cognition in the Head and in the World: An Introduction to the Special Issue on Situated Action. Cognitive Science 17(1): 1-6.

Papert, S. (1993). The Children's Machine: Rethinking School in the Age of the Computer. Basic Books.

Pea, R. and Kurland, M. (1984). On the Cognitive Effects of Learning Computer Programming. New Ideas in Psychology 2(2): 137-168.

Smith, D. C. and Cypher, A. (1999). Making Programming Easier for Children. In Druin, A. (Ed.), The Design of Children's Technology. Morgan Kaufmann Publishers: 202-221.

Tholander, J. (2003). Constructing to Learn, Learning to Construct: Studies of Computational Tools for Learning. Department of Computer and Systems Sciences, Stockholm University/Royal Institute of Technology.

Wertsch, J. V. (1998). Mind as Action. New York: Oxford University Press.

Wilensky, U. and Resnick, M. (1998). Thinking in Levels: A Dynamic Systems Approach to Making Sense of the World. Journal of Science Education and Technology 8(1).

Zuckerman, O. and Resnick, M. (2003). A physical interface for system dynamics simulation. CHI '03, Ft. Lauderdale, Florida, USA. ACM Press.