
Jump and Shoot! Prioritizing Primary and Alternative Body Gestures for Intense Gameplay

Chaklam Silpasuwanchai, Kochi University of Technology, Kochi, Japan, [email protected]

Xiangshi Ren, Kochi University of Technology, Kochi, Japan, ren.xiangshi@kochi-tech.ac.jp

ABSTRACT

Motion gestures enable natural and intuitive input in video games. However, game gestures designed by developers may not always be the optimal gestures for players. A key challenge in designing appropriate game gestures lies in the interaction-intensive nature of video games, i.e., several actions/commands may need to be executed concurrently using different body parts. This study analyzes user preferences in game gestures, with the aim of accommodating high interactivity during gameplay. Two user-elicitation studies were conducted: first, to determine user preferences, participants were asked to define gestures for common game actions/commands; second, to develop effective combined gestures, participants were asked to define possible game gestures using each body part (one and two hands, one and two legs, head, eyes, and torso). Our study presents a set of suitable and alternative body parts for common game actions/commands. We also present some simultaneously applied game gestures that assist interaction in highly interactive game situations (e.g., selecting a weapon with the feet while shooting with the hand). Interesting design implications are further discussed, e.g., transferability between hand and leg gestures.

Author Keywords

Games; interactivity; motion gestures; concurrent gestures; user-defined approach.

ACM Classification Keywords

H.5.m. Information Interfaces and Presentation (e.g., HCI): Miscellaneous.

General Terms

Human Factors; Design.

INTRODUCTION

The introduction of Kinect-based interaction has enabled more natural and intuitive input for video games. However, game gestures designed by developers may not always be the most suitable for players. Indeed, players have reported difficulties in playing some motion-based games, particularly in interaction-intensive games (e.g., first-person shooters, action, and adventure games) where several actions/commands have to be executed at or nearly at the same time (e.g., [4]). Thus one key challenge for game gestural interfaces lies in defining suitable gestures that enable players to perform multiple game actions/commands simultaneously, effectively, and with ease. Several full-body gesture studies have been conducted [1, 5, 10, 11], but little work has considered the dynamic nature of game environments in general. When a player's hand is occupied with "Shooting Zombies", it is not known which other body parts and gestures players might prefer for simultaneous actions such as "Use First Aid Kit" or "Viewpoint Change". Since a literal "Jump" or "Climb" action can be tiring, we need to consider whether users might prefer less tiring gestures. We would also do well to ask what gestures veteran gamers and non-gamers devise or envisage to enhance their interaction experiences. To investigate these questions, this study analyzes how users define gestures for common game actions. In addition, we identify a set of alternative body parts and game gestures that facilitate the high amount of interaction experienced in games. Last, we highlight important design implications.

RELATED WORK

Several works on full-body game interaction have been conducted, e.g., how emotions, role-play, and social expressions interplay with body movement [1, 6]; how motivation affects the player's use of gestures [10]; factors affecting player engagement in body interaction [11]; naturalistic full-body gestures in public spaces [3, 8]; and accommodating variability in the execution of the same gesture [2]. Yet the simultaneous use of gestures in video games has remained largely unexplored. One recent approach to designing suitable gestures is the user-elicitation methodology, originally based on the guessability technique [9, 12], in which participants are asked to define gestures for commands/actions. Indeed, Morris et al. [7] suggested that a user-defined approach can produce more preferable gestures than interactions designed by HCI experts, who are likely to develop more "physically and conceptually complex gestures than end-users". Many works have applied a user-defined approach, e.g., gestures for surface computing [13] and full-body interactions for older adults [5]. Leveraging this approach, we were motivated to design gestural interfaces that consider the dynamic, interaction-intensive nature of video games, or in a broader sense, to explore gestural interfaces in a highly interactive, fast-paced environment.

GAME GESTURE STUDY

We conducted two user-elicitation studies. In the first, participants were asked to define a single gesture for each common game action/command. Agreement scores, frequency of body-part use, gesture types, and subjective assessments were analyzed. In the second study, participants were asked to define several gestures for each action/command using each body part (one and two hands, one and two legs, head, eyes, torso). The second study is intended to help designers consider a set of suitable and alternative body parts for each game action/command. In addition, we aimed to uncover simultaneous game gestures that can promote higher interactivity in game situations.

EXPERIMENT I: ANALYZE USER PREFERENCES

Participants

Twelve university students (all males, M=22.1 years) were recruited. Five of the participants regularly played games every day, for more than 15 hours per week (veteran gamers). Five reported no experience with Kinect or motion game gestures. All participants were right-handed, and each was paid $10.

Events

Thirty-two actions and commands were derived from typical interactions in video games. The complete set of events includes adventure/role-playing actions (Climb, Walk, Stop Walking, Run, Jump, Stealth Walking, Steal, Open Chest, Open Door, Pick Item, Use Item, Push Box, Shake Item), sport/fighting actions (Hitting Baseball, Catch Ball, Throw Ball, Row Boat, Roll Bowling, Racket Hitting, Kick, Guard), first-person shooter actions (Shoot, Reload Gun, Slash, Viewpoint Change), racing actions (Drive, Accelerate, Shift Gear), and system commands (Zoom-In, Zoom-Out, Open Menu, Select Menu Item).
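
For later reference, the event set can be written down as a simple grouped structure. The following Python sketch merely transcribes the list above; the container name and category labels are our own, not the paper's:

    # The 32 game events used in Experiment I, grouped by category.
    GAME_EVENTS = {
        "adventure/role-playing": [
            "Climb", "Walk", "Stop Walking", "Run", "Jump", "Stealth Walking",
            "Steal", "Open Chest", "Open Door", "Pick Item", "Use Item",
            "Push Box", "Shake Item",
        ],
        "sport/fighting": [
            "Hitting Baseball", "Catch Ball", "Throw Ball", "Row Boat",
            "Roll Bowling", "Racket Hitting", "Kick", "Guard",
        ],
        "first-person shooter": ["Shoot", "Reload Gun", "Slash", "Viewpoint Change"],
        "racing": ["Drive", "Accelerate", "Shift Gear"],
        "system commands": ["Zoom-In", "Zoom-Out", "Open Menu", "Select Menu Item"],
    }

    # Sanity check: the categories sum to the 32 events reported.
    assert sum(len(v) for v in GAME_EVENTS.values()) == 32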

Procedure

At the start of the experiment, participants were asked to define game gestures for the 32 game events. Each event was displayed on a large display along with its name and a target scene. Target scenes were created using the Visual Studio 3D Toolkit to represent an effect (e.g., a treasure chest being opened, an opponent being slashed), and participants were asked to perform gestures to cause (trigger) the effect. Target scenes were not taken from animated screenshots of any existing video games; we tried to keep the target scenes independent of any particular game, which might otherwise have influenced the resulting gestures. Some target scenes were not animated (e.g., Stealth Walking) because showing the effect clearly would require displaying an actor in third-person view, which might have influenced participants' defined gestures; for these, we simply communicated the effect through instructions along with the event name. A think-aloud protocol was used in which participants indicated the start and end of each performed gesture and described the corresponding rationale. Participants stood approximately 1.8 meters from the display while performing gestures. A camera was set up in front of the participant to record the performed gestures for later analysis. As in common elicitation studies, we did not want participants to take recognizer issues into consideration, i.e., we wanted to remove the gulf of execution between the user and the device and to observe users' best possible gestures without being affected by the recognition ability of the system, similar to the rationale described in [13]. Participants evaluated each gesture immediately after performing it, to assist recall. An evaluation method similar to that of [13] was used: "The gesture I performed is a good match for its purpose"; "The gesture I performed is easy to perform"; "The gesture I performed is tiring". The questionnaire used a 7-point Likert scale, with 7 as "strongly agree". The experimental session took around one hour.

Results

Agreement scores, frequency of body-part use, gesture types, and subjective assessments were analyzed. A total of 384 gestures (12 participants × 32 events) was collected.

Agreement score

Wobbrock et al.'s method [13] was used to investigate the degree of consensus for each game event. The agreement score is calculated as follows (Equation 1):

    A_r = \sum_{P_i \subseteq P_r} \left( \frac{|P_i|}{|P_r|} \right)^2        (1)

where P_r is the set of all gestures performed for event r, P_i is a subset of identical gestures from P_r, and A_r ranges from 0 to 1.
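
As an illustration, A_r can be computed directly from the elicited gestures once identical gestures have been grouped under a shared label; a minimal Python sketch (the gesture labels in the example are hypothetical):

    from collections import Counter

    def agreement_score(gestures):
        """Agreement score A_r for one event (Wobbrock et al. [13]).

        `gestures` holds one label per participant; two gestures judged
        identical (similar trajectory and pose) share the same label.
        """
        counts = Counter(gestures)
        total = len(gestures)
        return sum((n / total) ** 2 for n in counts.values())

    # Example: of 12 participants, 8 propose a swipe, 3 a punch, 1 a nod.
    print(agreement_score(["swipe"] * 8 + ["punch"] * 3 + ["nod"]))  # ~0.514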

Figure 1 shows the agreement score for each game event. Two gestures were considered identical if they had similar trajectories and poses. The overall agreement score was 0.37, slightly higher than the scores reported by Wobbrock et al. (0.32 and 0.28) [13]. The high agreement was due to participants mimicking their daily-life gestures when performing some game actions. Despite the high overall agreement, system commands such as "Open Menu", "Zoom-In", and "Zoom-Out" achieved relatively low agreement scores (0.126).

Use of body parts and gesture types

Figure 2 shows the use of body parts. Unsurprisingly, one-handed gestures were the most preferred (40%), followed by two-handed gestures (35%), one leg (16%), two legs (4%), torso movement (3%), and head (2%). However, this pattern was not the same across all actions. For navigational actions such as "Run", "Walk", and "Stop Walking", most participants preferred leg gestures. For the "Viewpoint Change" action, participants preferred head input, with a few using the torso. For the commands "Select Menu Item" and "Open Menu", many participants preferred the hands, although head input was used by some. Regarding gesture types, we feel it is important to highlight the difference in gesture preferences between veteran gamers and normal/non-gamers. We classified the defined gestures into four broad types [13]: physical (direct manipulation), symbolic (e.g., making an "OK" gesture), metaphorical (e.g., using a foot to "double click" items), and abstract (arbitrary mapping). As shown in Figure 2, most defined gestures resembled daily-life gestures (physical gestures). On the other hand, non-physical gestures were mostly performed by veteran gamers; e.g., for the action "Jump", non-gamers tended to prefer an actual jump while veteran gamers preferred a less tiring gesture (e.g., raising a leg slightly above the ground).
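
The body-part percentages above amount to a frequency tally over the coded gestures. A small sketch, assuming each collected gesture has been annotated with the body part used; the annotation labels here are illustrative, not the study's coding scheme:

    from collections import Counter

    def body_part_shares(annotations):
        """Percentage share of each body part across all defined gestures."""
        counts = Counter(annotations)
        total = len(annotations)
        return {part: round(100 * n / total, 1) for part, n in counts.items()}

    # Example with hypothetical annotations, one per collected gesture.
    shares = body_part_shares(
        ["one hand"] * 5 + ["two hands"] * 4 + ["one leg"] * 2 + ["head"]
    )
    print(shares)  # {'one hand': 41.7, 'two hands': 33.3, 'one leg': 16.7, 'head': 8.3}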

Subjective assessment

Participants evaluated their defined gestures on 7-point Likert scales (7 as "strongly agree"). The mean ratings were: "The gesture I performed is a good match for its purpose" (5.84); "The gesture I performed is easy to perform" (5.90); and "The gesture I performed is tiring" (5.51). The correlation between participants' good-match ratings and agreement scores was found to be significant (r=0.746, p
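
The reported correlation is a standard Pearson r over per-event pairs of mean good-match rating and agreement score. A sketch of the check, with placeholder values rather than the study's data:

    from scipy.stats import pearsonr

    # One pair per game event (32 in the study); the values below are placeholders.
    good_match = [5.9, 6.2, 4.8, 5.5, 6.0]       # mean "good match" rating per event
    agreement  = [0.45, 0.52, 0.18, 0.33, 0.48]  # agreement score A_r per event

    r, p = pearsonr(good_match, agreement)
    print(f"r={r:.3f}, p={p:.3f}")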