Virtual Reality Enhanced Robotic Systems for Disability Rehabilitation

Integrating Virtual Worlds and Mobile Robots in Game-based Treatment for Children with Intellectual Disability

Authors: Franca Garzotto and Mirko Gelsomini
Politecnico di Milano, Department of Electronics, Information and Bioengineering
Via Ponzio 34/5, 20133 Milano (Italy)
[[email protected], [email protected]]

ABSTRACT

In recent years we have witnessed a rapid growth of learning applications for children with different kinds of disabilities. These tools exploit different learning paradigms and employ a gamut of “beyond the desktop” interaction modes and devices, including haptic controllers, (multi)touch small and large displays, digitally augmented physical objects, robots and motion-sensing cameras. Our research explores novel interactive solutions for children with intellectual disability, who have significant limitations both in intellectual functioning, i.e., general mental capacity such as memory, attention, reasoning and problem solving, and in adaptive behavior, i.e., social and practical skills related to daily living (interpersonal relationships, social responsibility, ability to follow rules/obey laws, personal care). Our goal is to provide intellectually disabled children with game-based learning tools that integrate motion-based touchless interaction and interaction with mobile robots. In this chapter, we discuss the above issues and exemplify them by describing a set of games, based on the above-mentioned interaction paradigm, that we have designed for IDD children in order to promote social and cognitive skills.


INTRODUCTION

In recent years we have witnessed a rapid growth of learning applications for children with different kinds of disabilities. These tools exploit different learning paradigms and employ a gamut of “beyond the desktop” interaction modes and devices, including (multi)touch small and large displays in Hourcade (2012), digitally augmented physical objects in Zalapa (2013), robots in Cabibihan (2013) and motion-sensing cameras in Bartoli et al. (2013, 2014). Our research explores novel interactive solutions for children with intellectual disability, or Intellectual Developmental Disorder (IDD). IDD children have significant limitations both in intellectual functioning, i.e., general mental capacity such as memory, attention, reasoning and problem solving, and in adaptive behavior, i.e., social and practical skills related to daily living (interpersonal relationships, social responsibility, ability to follow rules/obey laws, personal care). Our goal is to provide IDD children with innovative game-based tools that promote new forms of behavioral therapy for this target group. From a learning and therapeutic perspective, our work is grounded in theoretical and empirical research in psychology, pedagogy, and neuroscience that highlights the relationship between physical activity and cognitive processes, and the formative role of “embodiment” (the way an organism’s sensorimotor capacities enable it to successfully interact with the physical environment) in the development of cognitive skills, as in Dourish (2004), Lee (2012), Morgan (1986), Bianchi (2007). Sylva (1976) also pinpoints that play is the most natural way for any young child to express him/herself, to experience and make sense of the world, to connect with other human beings, and to exercise and develop the core cognitive functionalities that are a

prerequisite for any higher-level skill, e.g., imagination, language development and abstract reasoning. In addition, as shown by Bouvier (2014), Csikszentmihalyi (1997), Schoenau (2011), play is fun, and fun accelerates learning processes by inducing a state of flow that promotes attention, increases the capability of selecting relevant information, and augments the willingness to complete the required tasks. Integrating digital play into educational and therapeutic routines offers opportunities for encouraging social interaction, developing communication and imaginative thinking, and increasing children’s ability to perform a variety of activities needed for daily life. Our design and technical approach (Figure 1) integrates motion-based touchless interaction with large displays and interaction with mobile robots, as shown in Bonarini, Garzotto, Gelsomini et al. (2014). In motion-based touchless interaction, the ingredients are a motion-sensing device (e.g., a Kinect camera) and a virtual world that is displayed on a medium-large screen and is controlled by means of body movements and gestures, without wearing any additional device. This form of interaction promotes kinesthetic and visual learning, and has been proved effective in supporting the development of behavioral skills and cognitive functions related to attention, body awareness, awareness of the physical space, meaning construction, and imagination in Alcorn, Pain (2011) and Bianchi, Berthouze (2007). Human-robot interaction is mainly used with intellectually disabled children to develop competencies in the social sphere. Socially interactive robots create interesting, appealing and meaningful interplay situations that compel children to interact with them, to communicate, to express and perceive emotions, to interpret natural cues and to maintain social relationships, as in Cooney (2014), Diehl (2012), Lehmann (2014), Robins et al. (2012), Scassellati (2012), Turkle

(2006) and Wainer et al. (2014). In principle, the integration of these two interaction paradigms should offer opportunities to achieve all the above-mentioned benefits. Still, the coordinated use of motion-based touchless interaction and human-robot interaction in a single learning experience is largely unexplored in the current literature and pedagogical practice, and raises a number of challenges, both in terms of technology (because the integration of the different hardware and software components related to motion-sensing devices, visual interfaces, and robots requires sophisticated technical solutions) and from a UX (User eXperience) perspective.

Figure 1 – Moments of interaction with the integrated robot + touchless large display system. Top left: a child personalizing the robot; Top right: Social play with on-screen

contents; Bottom left: social play with the robot; Bottom right: the child and robot’s representations on screen.

In this chapter, we will discuss the above issues. We exemplify them by describing a set of games that are based on the above-mentioned interaction paradigms and have been designed for and evaluated with IDD children in the context of the project “KROG – Kinect-Robot Interaction for Gaming”, funded by the Polisocial Program at Politecnico di Milano.

THE UX DESIGN SPACE

To integrate full-body and robotic interaction, the designer must orchestrate the behavior of different types of interactive actors: children, robots, and on-screen digital elements, which behave and interact both in the real world and in the virtual world, using different interaction modalities. The mix of the two paradigms leads to an articulated system of interaction relationships among the triad of actors (robot, child, virtual world): child ↔ robot, child → virtual world, robot → virtual world, and [robot + child] → virtual world.


Figure 2: Interaction Relationships in the UX design space

These interaction relationships must be further instantiated into fine-grained interactions among the robot, the child, and the contents in the virtual world, e.g., the characters of a story or the elements of a virtual game, which take place in the physical space. On-screen digital elements are digital representations of the robot or the children, characters of narratives, or virtual companions. Children interact with the robot using movements, touch, and physical manipulation, and interact with on-screen elements using mid-air gestures and movements in the physical space. The effects of children’s interactions are actuated on the robot or in the virtual world. The robot’s behavior, affected by children’s actions and by virtual world events, is expressed by movements, vibrations, sounds, or light effects, and can change the state of the virtual world. On-screen digital elements can react to the robot’s and the children’s behaviors, e.g., providing suggestions, instructions, and rewards. The overall picture gets more complex if more than one human actor is involved, e.g., if we want

to support social play. In this case, we need to also include social interaction among children (or between a child and her adult caregiver), between them and the virtual world, and between them and the robot. Finally, the design scenario needs to consider the constraints and affordances of the physical space (Figure 3). The physical space does not exhibit an interactive behavior that needs to be orchestrated with the different actors; still, it introduces an additional design dimension and a further element of complexity in the definition of the gaming experiences.

Figure 3: Example of physical space constraints

As mentioned in the introduction, motion-based touchless interaction (child/child + adult → virtual world) and child ↔ robot interaction have been explored by prior studies, also in the context of children’s disability. Still, there is limited knowledge about the other types of interactions – robot → virtual world, [child + robot] → virtual world, [robot + children/child + adult] → virtual world – and about how to orchestrate all these paradigms together. Mastering this complexity is a design challenge per se, which increases when the UX must be conceived for IDD children, who have difficulty processing many simultaneous stimuli and may reject them.


UNDERSTANDING USERS AND UX REQUIREMENTS

To address the design challenges discussed in the previous section, our UX design process has involved 22 specialists (psychologists, motor/psycho-therapists, special educators and neurologists) from 6 therapeutic centers in Italy. Within a period of 7 months, we organized 5 half-day focus groups (Figure 4) for small teams of specialists and 5 meetings with therapeutic center directors.

Figure 4: Focus groups with IDD specialists

The working materials used for these events comprised progressive design alternatives for the robot and the on-screen elements, rendered using paper-based sketches, digital images and animations, and physical prototypes. Wizard of Oz techniques were used to simulate the behavior of the robot and the on-screen virtual worlds. Several questions were addressed in the sessions with specialists:

“What is the profile of the IDD children who may benefit from our technology?”

“What are the perceptual and cognitive affordances of the robot and the virtual worlds?”

“Which stimuli should they provide to the children?”

“What roles can the robot/virtual world elements play in the different moments of children’s play?”

“What are the conventional educational and therapeutic practices, and how can they be translated into, or integrated with, activities that make use of our technology?”

According to our specialists, the target group for which our technology can be particularly appropriate comprises IDD children with low or medium cognitive level (IQ ranging between 35 and 60) who exhibit deficits in two or more specific areas of adaptive behavior (communication skills, interpersonal skills, or daily living skills). To actively participate in any skill-oriented learning task, and ultimately gain some benefit from this experience, these children must first achieve two emotional states: relaxation and affection. Relaxation is a state of low tension and absence of anger, anxiety, or fear. It is particularly important for IDD children because these subjects are typically scared of everything that breaks their routine or is unknown. Only after the child is relaxed may she reach a willingness to participate in any skill-oriented learning activity. Still, this participatory state may not persist unless a state of affection is also reached. Affection denotes a positive feeling of fondness and affective attachment towards an entity. In IDD children, affection toward an object can be reached by exploring, manipulating, and discovering it using all senses (touching, moving, or smelling it), and is often expressed by manifesting pleasure from these actions. From this general consideration, we elicited a fundamental requirement: the design of the robot and the virtual worlds, and the activities to be performed with children, should be functional not only to the improvement of specific skills in the cognitive, social, or motor sphere, but also to the achievement of relaxation and affection. Distilling this general requirement and the information that emerged from the discussions with specialists, we identified a set of finer-grained functional and non-functional

requirements on the behavior and sensory affordances of the robot and the virtual worlds.

Familiarity and Trust: Children should relax with the robot and the visual elements on screen, and believe that they are reliable, good, harmless, and inoffensive. This can be facilitated, for example, if the visual and physical characteristics of the digital elements and the robot evoke characters and objects that children have already experienced and enjoyed in other contexts, e.g., popular toys, or characters of well-known TV programs. In addition, the robot should be designed as a soft, comfortable and predictable object that can be touched, manipulated, hugged, and personalized.

Virtual ↔ Physical Transition: The robot should act as a bridge between the physical world and the virtual on-screen world, helping the child to connect what happens in the two worlds. It must be clear to the children that the robot is the physical counterpart of a digital character on screen (its avatar).



Real → Imaginative World Transition: The robot, the child’s avatar and the digital characters on screen should bridge the real world and the child’s internal imaginative world. They should be designed to act as props to spark children’s imagination and fantasy, helping the transition from sensory-motor (functional) play to symbolic play.



Imaginative → Real World Transition: The robot and its avatar should help children to “go back” to the real world when they remain in their own imaginary world for too long, e.g., by drawing a child’s attention to a physical object in the real space.



Social Mediation: The robot should encourage and facilitate social behavior by acting as a communication channel between children or among children and adults (caregivers). It should support the transition from non-social play (when the child plays alone with the robot) to social/non-interactive play (when two or more children play with the robot

simultaneously, or in turn, but without communicating or interacting with each other), to social interactive play (when children play with the robot simultaneously, or in turn, exchanging verbal or non-verbal messages, or performing collaborative tasks during play).

Feedback: The robot and the screen should provide a gamut of multisensory stimuli in response to the child’s actions that reinforce affection and engagement, and promote various forms of learning (e.g., meaning making and cause-effect understanding). Still, stimuli must be clear, delivered one at a time, well distinguishable, and strictly functional to a specific learning goal. Children are distracted by feedback that is not strictly relevant to the current task, and may lose attention. Too many visual stimuli may induce anxiety, as children may not be able to discriminate and interpret single elements within a group.



Prompt: Both the robot and the digital characters on screen should act as behavior-eliciting agents that attract attention, stimulate action and promote engagement.



Emulation: On screen avatars of the robot and the child should stimulate children’s imitative capability. They should include both behaviors that replicate the movements and actions of the child and the robot in the physical space, and behaviors that must be emulated.



Facilitation: The robot and the on-screen multimedia content (e.g., video tutorials) should suggest how and when to do something, facilitating the execution of learning tasks.



Instruction: The virtual world should offer visual, verbal or textual instructions for the actions to be performed on the robot or in the real world. The robot should give verbal instructions for the children’s interaction with the screen and for their movements in the physical world.


Reward: The robot and the on-screen visual/audio contents should offer positive reinforcement to a child’s successful action, and have no reaction in case of failure.



Restriction: Some movements of the robot should be used to “mark” the physical space; in this way, they help children to identify the spatial constraints for their movements during play.



In our current prototypes, described in the next section, the whole set of perceptual and behavioral characteristics of the robot and the virtual world, and the set of activities that the children can perform with them, have been designed to meet the above requirements.

ON-SCREEN VIRTUAL WORLDS

In the on-screen virtual world, multimedia contents range from simple colored shapes and elements (Fig. 5b) integrated with sound or video, to 2D and 3D virtual environments and characters that create fantasy tales or communicate specific tasks for the children and the robot (Fig. 5b) to be performed in the physical world (Fig. 5d). The virtual representations of the child and the robot (avatars) are body silhouettes, mirrored images, or fictitious characters, depending on the current game task (Figs. 5a and 5c). The children interact with multimedia contents on screen using a very simple body language: raising or swiping their arms, moving forward/backward with respect to the screen or the robot, and moving to a specific area in the physical space.


Figure 5: On screen multimedia contents
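As a concrete illustration, the simple body language described above can be classified from the skeletal joints that a Kinect-style sensor exposes. The sketch below is a minimal, hypothetical classifier: the joint names, the coordinate convention (y up, z the distance from the sensor, in metres) and the distance threshold are assumptions for illustration, not the project’s actual code.

```python
def detect_gesture(joints):
    """Classify one of the simple gestures used in the games.

    joints: dict mapping a joint name to an (x, y, z) position in
    metres, with y pointing up and z measuring the distance from
    the sensor (an assumed convention).
    """
    hand_y = joints["hand_right"][1]
    head_y = joints["head"][1]
    spine_z = joints["spine"][2]
    if hand_y > head_y:
        return "arm_raised"        # hand above the head: raised arm
    if spine_z < 1.2:              # illustrative threshold in metres
        return "moved_forward"     # child stepped towards the screen
    return "none"                  # no recognized gesture
```

A real implementation would also smooth the joint stream over several frames before classifying, to tolerate the jitter of skeletal tracking.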

THE ROBOT

Our robot is called Teo, a name that can be easily pronounced by children. We have calibrated its size and weight so that children can easily manipulate and hug it, and move it around; at the same time, Teo attracts attention and cannot be ignored when on stage. Teo’s body has a neutral, egg-like shape, made of a soft fabric, to evoke the enjoyable tactile experience that the children may have had with their own toys. Children can personalize Teo’s neutral shape by sticking on eyes, mouth, eyelids, and other components (Figure 6).

Figure 6: Personalizing Teo

Teo wears a hat equipped with a set of transparent buttons that can be customized for a specific task using colored tags, PCSs (Picture Communication Symbols), or iconic images. The buttons enable children to express choices: in several games, children are requested to press the proper button in response to requests from the screen or the robot (Figure 7).

Figure 7: Teo’s instrumentation: sensors and actuators

Teo offers a variety of visual, sensory, and spatial stimuli, providing feedback and instructions in many ways: using hidden speakers and strips of colored LEDs placed at its bottom (Figure 7), it can speak or generate light effects; using its three omni-wheels, it can vibrate, rotate, and move around. Teo’s body is equipped with distance sensors that can sense children’s distance and movements, while a force-sensor strip enables the robot to detect manipulation (e.g., caresses, hugs, or punches). All these sensors enabled us to implement a variety of states and behaviors for the robot, which

are automatically triggered according to children’s interactions, movements, and the game logic of the ongoing task:

Waiting (when Teo waits for someone to interact, it “looks around”, i.e., it remains in the same position rotating itself);



Invitation to interact (as soon as someone goes closer to Teo, it rotates towards her and verbally invites her to play);



Happy (when its body is softly caressed, touched, or hugged, Teo “is pleased”, i.e., it replies by vibrating, rotating itself cheerfully, and moving around, while a green LED strip blinks slowly);



Angry (if the child slaps Teo with moderate force, the robot “becomes angry”, i.e., it moves sharply towards the child);



Scared (as soon as someone brutally hits it, Teo “becomes timorous”, i.e., it slowly retreats).

Figure 8: Teo’s behaviors

At any time, the caregiver can take control of the robot and trigger specific behaviors or feedback using a remote controller.
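The mapping from sensor readings to these behaviors can be pictured as a small state selector. The following sketch is illustrative only: the thresholds, the normalized force scale, and the function signature are assumptions, not the values or API used on the real robot.

```python
from enum import Enum, auto

class Behavior(Enum):
    WAITING = auto()     # nobody nearby: "look around" in place
    INVITING = auto()    # someone close: rotate towards her, invite to play
    HAPPY = auto()       # caress/hug: vibrate, rotate cheerfully
    ANGRY = auto()       # moderate slap: move sharply towards the child
    SCARED = auto()      # brutal hit: slowly retreat

# Illustrative force thresholds on a normalized 0..1 scale; the real
# robot would use values calibrated on its force-sensor strip.
CARESS_MAX = 0.3
SLAP_MAX = 0.7

def select_behavior(distance_m=None, touch_force=None):
    """Pick Teo's behavior from the latest sensor readings.

    distance_m: distance-sensor reading in metres (None if nobody is sensed).
    touch_force: normalized force on the strip (None if not touched).
    """
    if touch_force is not None:
        if touch_force <= CARESS_MAX:
            return Behavior.HAPPY
        if touch_force <= SLAP_MAX:
            return Behavior.ANGRY
        return Behavior.SCARED
    if distance_m is not None and distance_m < 1.0:
        return Behavior.INVITING
    return Behavior.WAITING
```

In the real system, the caregiver’s remote controller would be able to override whatever this selector chooses at any moment.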

CHILDREN’S ACTIVITIES

The tasks proposed to the children are of two types. A preliminary set of “familiarization” activities is devoted to helping children understand the affordances of Teo and the on-screen

elements, and to supporting the achievement of relaxation and affection. A set of structured learning tasks is devoted to promoting cognitive and social skills; in these activities, multimedia contents, rewards, play time, body movements, and the level of complexity of tasks can be customized to the characteristics and preferences of each child. As most IDD children have attention deficits and it is difficult to keep them concentrated on an activity, all designed tasks are very focused, and their duration is short (but can be extended, using the customization features provided by our system, as a child’s attention capability improves). To maintain concentration, the role of the caregiver is fundamental: (s)he can take control of the screen and robot behavior at any time, suspend the activity when the child’s attention is lost, and use conventional behavioral strategies to recover it.

Familiarization activities

During familiarization activities, Teo’s behavior is largely controlled by the caregiver using the remote controller. After entering the room, Teo keeps steady while the child moves towards it. Then the child is invited by the caregiver to stick facial features onto Teo, and Teo reacts by activating light, sound or movement feedback to express appreciation. The child is then invited to move in the space while Teo follows her. If the child speaks to Teo, an operator can control the spoken answer, either by editing the answer text and generating the corresponding synthesized voice on the fly, or by selecting the answer from a built-in set. Eventually, Teo’s avatar appears on the screen and invites the child to move towards it. When the child is close enough to be sensed by the motion-sensing device, her avatar appears near Teo’s avatar, and the child imitates it.
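The operator’s two ways of producing Teo’s spoken answers can be sketched as below. This is a hypothetical fragment: `synthesize` is a stand-in for whatever text-to-speech engine the robot actually uses, and the built-in answers are invented examples.

```python
def synthesize(text):
    """Stand-in for a real text-to-speech call on the robot."""
    return f"<speech:{text}>"

# Invented examples of a built-in answer set.
BUILT_IN_ANSWERS = ["Hello!", "Well done!", "Shall we play?"]

def teo_answer(operator_text=None, built_in_index=None):
    """Produce Teo's reply: either text edited on the fly by the
    operator, or an answer selected from the built-in set."""
    if operator_text is not None:
        return synthesize(operator_text)   # edited and synthesized on the fly
    if built_in_index is not None:
        return synthesize(BUILT_IN_ANSWERS[built_in_index])
    return None                            # no reply: Teo stays silent
```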


Figure 9: Familiarization phase

Structured learning activities

Structured learning activities are inspired by those frequently proposed to IDD children in therapeutic centers: simple choice-making and recognition tasks, well-known physical games, and storytelling. To promote the ability to make choices, often repressed in IDD children, children must express their willingness to play before each learning activity by performing a specific gesture towards the screen. In addition, before leaving the room at the end of the entire play session, a final task is proposed: Teo and its avatar thank the child and greet him, and the child must respond with a similar gesture. This task has two goals. Firstly, it helps the child to learn a social convention (greeting when leaving); secondly, it softens the negative feeling that the departure may generate. So far, four games have been designed and prototyped to support structured learning activities, two of which are briefly described in the rest of this section.

Colors Game

In Colors (Fig. 10), the robot asks what the color of the object on screen is, and the child has to push the right colored button placed on Teo’s cap. Information and feedback, according to the predefined configurations, are given through Teo’s audio stimuli and lights, as well as with text and videos on the screen. The possibility to change the images on the robot’s buttons, e.g., by using PCS images, is seen as a potential factor to build fully personalized activities.


Figure 10: “Colors” game

“Witch says colors”

The “Witch says colors” game (Fig. 11) aims at developing concept understanding and spatial awareness. Six large images (colored circles or more complex monochrome figures) are physically placed on the floor of the play room. Teo’s and the child’s avatars appear on screen and move to one or two of the images (depending on the game level), calling out the corresponding name(s). The child and the robot must perform the same action in the physical space. Three situations can take place: 1) both Teo and the child reach the correct position, and the activity ends with a reward; 2) Teo reaches the correct position while the child does not; Teo asks the child if that is the right place, giving feedback and suggestions until the child succeeds, and a reward appears on screen; 3) the child is in the right place but Teo is not; the robot asks the child for help, and the activity ends with a reward when the child, pushing the robot, places Teo on the right image on the floor.
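The three outcomes above amount to a simple decision rule over who reached the called image. A minimal sketch follows; the action names are invented labels, and the fourth branch (neither actor in place) is our assumption, since the text does not cover that case.

```python
def resolve_round(teo_on_target, child_on_target):
    """Decide the next step of a 'Witch says colors' round from
    whether each actor reached the image called out on screen."""
    if teo_on_target and child_on_target:
        return "reward"              # case 1: activity ends with a reward
    if teo_on_target:
        return "teo_prompts_child"   # case 2: feedback until the child succeeds
    if child_on_target:
        return "child_helps_teo"     # case 3: child pushes Teo onto the image
    return "repeat_call"             # assumed: call the image name again
```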


Figure 11: “Witch says colors” game

Customization features

There is no such thing as an “average” child with cognitive and intellectual disability. Each child manifests unique strengths and skill deficits. Things that are reinforcing or rewarding to one individual may be unpleasant for another. Any play activity must therefore be oriented to addressing the unique capabilities and needs of each individual child, which implies that a game must support a high degree of customizability. It must enable caregivers to adapt a gaming experience to the individual skills and preferences of each child, customizing multimedia contents, rewards, play time, and body movements. Furthermore, to support the evolving needs of a child over time, games should support increasing levels of motor and cognitive complexity of game tasks. They should enable progression along a continuum of game sessions involving activities that are similar but, once the child has acquired and consolidated the proper skills, become progressively more demanding in terms of the motor, cognitive and social skills required. To address this need, we provide a control panel (Figure 12) that enables the caregiver to customize each game and adapt it to each child’s profile, e.g., changing the multimedia contents of each game (images, texts, sounds) and increasing/decreasing the complexity of the game tasks.
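Concretely, the per-child settings that such a control panel manages could be captured in a small profile record like the one below. The field names and default values are hypothetical, chosen only to mirror the customization dimensions listed above (contents, rewards, play time, movements, complexity level).

```python
from dataclasses import dataclass, field

@dataclass
class ChildGameProfile:
    """Per-child customization record (hypothetical schema)."""
    child_id: str
    level: int = 1                     # motor/cognitive complexity of tasks
    play_time_s: int = 120             # short sessions suit attention deficits
    contents: list = field(default_factory=list)   # images, texts, sounds
    reward: str = "lights_and_sound"   # invented reward label
    gestures: list = field(default_factory=lambda: ["raise_arm"])

    def advance(self):
        """Step up task complexity once skills are consolidated."""
        self.level += 1
        self.play_time_s += 30         # slightly longer tasks at higher levels
```

In use, a caregiver would load a child’s profile before a session and call `advance()` only when the therapist judges the targeted skill consolidated.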


Figure 12: Control panel for game customization

IMPLEMENTATION

Each game has a modular structure, to facilitate the integration of the different parts of the system and to improve scalability. The main modules, shown in Figure 13, are the following:

Robot Controller: manages the communication with, and enables the control of, the robot, declaring public methods that the Game Controller calls to manage the robot.



Robot Recognizer: recognizes and localizes the robot in the environment. Different sensors can be used to implement this feature, but we chose Microsoft Kinect for its low cost and its SDK, which spans high- to low-level access. The Kinect API enables programmers to acquire different kinds of information from the sensor, such as color camera images, IR data, depth data, and sound.



Kinect Manager: in charge of managing the interaction between the system and the Kinect. It makes available to the Game Controller a library of methods that use the Kinect API and enable the recognition of users’ joints.



Game Controller: the main module of this architecture, implementing the game logic of each specific game. It manages the modality, flow, and logic, and acts as a bridge between the on-screen user interface and the user interaction in the physical space.


Gamepad API: movements of the robot can be controlled directly by the therapist using a gamepad. The Gamepad API module receives commands coming from the controller and notifies them to the Game Controller.



Robot Action: provides the Robot Controller with a complete set of methods that activate different predefined actions and behaviors.

Figure 13: System Architecture
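To make the module boundaries above concrete, the sketch below mocks the main interfaces in a few lines. All class and method names are illustrative stand-ins rather than the project’s actual API; in particular, the Kinect and gamepad calls are stubbed out.

```python
class RobotController:
    """Public methods the Game Controller calls to drive the robot
    (illustrative names, not the real API)."""
    def move_to(self, x, y):
        return f"robot moving to ({x}, {y})"
    def play_feedback(self, kind):
        return f"robot feedback: {kind}"

class KinectManager:
    """Wraps the Kinect API and exposes users' joints to the game."""
    def get_joint(self, user_id, joint_name):
        # a real implementation would query the sensor here
        return (0.0, 0.0, 0.0)

class GameController:
    """Main module: implements the game logic and bridges the
    on-screen interface with interaction in the physical space."""
    def __init__(self, robot, kinect):
        self.robot = robot
        self.kinect = kinect
    def on_child_success(self):
        # positive reinforcement; failures produce no reaction
        return self.robot.play_feedback("reward")
    def on_gamepad_command(self, command):
        # therapist override relayed by the Gamepad API module
        if command == "home":
            return self.robot.move_to(0, 0)
        return None

game = GameController(RobotController(), KinectManager())
```

Keeping the Robot Controller behind a plain method interface like this is what lets the same game logic run whether the robot is driven autonomously or overridden by the therapist’s gamepad.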



CONCLUSIONS

The general goal of our research is to create new interactive spaces that promote relaxation, affection, and the development of social and cognitive skills in IDD children. Our approach blends mobile robots and virtual worlds on large screens, and implements multiple interaction paradigms to enable communication among children, robots, and the on-screen virtual world:

motion-based interaction at a distance, voice interaction, and manipulation-based interaction. In this chapter, we have discussed the design challenges of this approach and presented our current results, which we have achieved in collaboration with 22 therapists from 6 therapeutic centers, and which consist of: i) the identification of a set of general and detailed design requirements that take into account the affordances of our blend of technologies and the specific needs of IDD children; ii) the development of a set of prototypes (one robot and four games) that enable IDD children to perform a variety of familiarization activities and more structured learning tasks by interacting with the robot and on-screen multimedia contents. An exploratory study has recently been performed at a local therapeutic center to evaluate the effectiveness of our design solutions and prototypes. The study involved 22 low and medium functioning IDD children and their 11 therapists. We are currently analyzing the large amount of collected data, but we can anticipate that all therapists agree that blending robotic interaction and full-body interaction with on-screen multimedia contents opens extraordinary new opportunities in the arena of IDD children’s learning. This mix of interaction opportunities with physical and digital material elicits operational behaviors, social interaction and emotional responses that normally do not occur using other methods, or that require much longer to be achieved. For example, an autistic child explicitly called a mate to play with Teo and the screen together: it was the first time he expressed the willingness to play in social mode. A girl with severe hyperactivity disorder relaxed in a few minutes after meeting Teo, and she was able to concentrate on, and perform, a learning task (the “Colors” game) during the same session.

Our future work will revise our current prototypes in light of an accurate analysis of the results of this preliminary study. The evaluation of the new version of the system will be performed in 2 therapeutic centers involved in the KROG project using a controlled study protocol, to provide more grounded empirical evidence of the learning potential of our approach in the treatment of IDD children.

ACKNOWLEDGMENTS

We are grateful to the children and caregivers at the centers involved in our study, and in particular to Dr. Lucio Moderato, Dr. Rita Montoli, Dr. Carlo Riva, and Dr. Aurelia Rivarola. A special thanks to the KROG team at Politecnico di Milano: Andrea Bonarini and Francesco Clasadonte (Department of Electronics, Information, and Bioengineering); Maximiliano Romero and Fiammetta Costa (Department of Design). This research is partially supported by the POLISOCIAL Program of Politecnico di Milano 2013-15.

REFERENCES

1. Alcorn, A., Pain, H., Gnanathusharan, R., Smith, T., Lemon, O., … Bernardini, S. (2011). Social communication between virtual characters and children with autism. Volume 6738, pp. 7-14, Springer.

2. Bartoli, L., Corradi, C., Garzotto, F., Valoriani, M. (2013). Motion-based touchless interaction for autistic children's learning. In Proceedings of the 2013 Conference on Interaction Design and Children (IDC 2013), pp. 53-44, ACM.

3. Bartoli, L., Garzotto, F., Gelsomini, M., Oliveto, L., Valoriani, M. (2014, June). Designing and evaluating touchless playful interaction for ASD children. In Proceedings of the 2014 Conference on Interaction Design and Children (pp. 17-26). ACM.

4. Bianchi-Berthouze, N., Kim, W., Patel, D. (2007). Does body movement engage you more in digital game play? And why? Affective Computing and Intelligent Interaction, 102-113. Springer.

5. Bonarini, A., Garzotto, F., Gelsomini, M., Valoriani, M. (2014, May). Integrating human-robot and motion-based touchless interaction for children with intellectual disability. In Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces (pp. 341-342). ACM.

6. Bouvier, P., Lavoué, E., Sehaba, K. (2014). Defining engagement and characterizing engaged-behaviors in digital gaming. Simulation & Gaming, 1046878114553571.

7. Cabibihan, J. J., Javed, H., Ang Jr., M., Aljunied, S. M. (2013). Why robots? A survey on the roles and benefits of social robots in the therapy of children with autism. International Journal of Social Robotics, 5(4), 593-618.

8. Cooney, M. D., Nishio, S., Ishiguro, H. (2014). Designing robots for well-being: Theoretical background and visual scenes of affectionate play with a small humanoid robot. Lovotics, 1(101), 2.

9. Csikszentmihalyi, M. (1997). Finding flow: The psychology of engagement with everyday life. Basic Books.

10. Diehl, J. J., Schmitt, L., Crowell, C. R., Villano, M. (2012). The clinical use of robots for children with autism spectrum disorders: A critical review. Research in Autism Spectrum Disorders, 6(1), 249-262. doi: 10.1016/j.rasd.2011.05.006. PMCID: PMC3223958.

11. Dourish, P. (2004). Where the action is: The foundations of embodied interaction. MIT Press.

12. Hookham, G., Meany, M. (2015). The spectrum of states: Comedy, humour and engagement in games. In Proceedings of the 11th Australasian Conference on Interactive Entertainment (IE 2015) (Vol. 27, p. 30).

13. Hourcade, J. P., Bullock-Rest, N., Hansen, T. E. (2012). Multitouch tablet applications and activities to enhance the social skills of children with autism spectrum disorders. Personal and Ubiquitous Computing, 16(2), 157-16, Springer.

14. Lee, W. J., Huang, C. W., Wu, C. J., Huang, S. T., Chen, G. D. (2012). The effects of using embodied interactions to improve learning performance. Proc. ICALT 2012, 557-559. IEEE.

15. Lehmann, H., Iacono, I., Dautenhahn, K., Marti, P., Robins, B. (2014). Robot companions for children with Down syndrome: A case study. Interaction Studies, 15(1), 99-112.

16. Morgan, S. B. (1986). Autism and Piaget's theory: Are the two compatible? Journal of Autism and Developmental Disorders, 16(4), 441-457.

17. Robins, B., Dautenhahn, K., Ferrari, E., Kronreif, G., Prazak-Aram, B., Marti, P., & Laudanna, E. (2012). Scenarios of robot-assisted play for children with cognitive and physical disabilities. Interaction Studies, 13(2), 189-234.

26 18. SCASSELLATI, B., HENNY, A., MAJA, M.. (2012) "ROBOTS FOR USE IN AUTISM RESEARCH." ANNUAL REVIEW OF BIOMEDICAL ENGINEERING 14: 275-294. 19. SCHOENAU-FOG, H. (2011). THE PLAYER ENGAGEMENT PROCESS–AN EXPLORATION OF CONTINUATION DESIRE IN DIGITAL GAMES. IN THINK DESIGN PLAY: DIGITAL GAMES RESEARCH CONFERENCE. 20. SYLVA K., JOLLY A., BRUNER, J. S. (1976). PLAY: ITS ROLE IN DEVELOPMENT AND EVOLUTION. PENGUIN. 21. TURKLE, S., BREAZEAL, C., DASTÉ, O., SCASSELLATI, B. (2006). ENCOUNTERS WITH KISMET AND COG: CHILDREN RESPOND TO RELATIONAL ARTIFACTS. DIGITAL MEDIA: TRANSFORMATIONS IN HUMAN COMMUNICATION, 1-20. 22. WAINER, J., DAUTENHAHN, K., ROBINS, B., & AMIRABDOLLAHIAN, F. (2014). A PILOT STUDY WITH A NOVEL SETUP FOR COLLABORATIVE PLAY OF THE HUMANOID ROBOT KASPAR WITH CHILDREN WITH AUTISM. INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 6(1), 45-65. 23. ZALAPA R., TENTORI M. (2014) MOVEMENT-BASED AND TANGIBLE INTERACTIONS TO OFFER BODY AWARENESS TO CHILDREN WITH AUTISM. UBIQUITOUS COMPUTING AND AMBIENT INTELLIGENCE. CONTEXT-AWARENESS AND CONTEXT-DRIVEN INTERACTION LNCS 8276, SPRINGER, 127-134.